I don’t know what to do, because I have no clue why ChatGPT is never willing to help me. I am using GPT-4 with browsing, and out of the 25 messages I get in a 3-hour period (which should be 40, but I don’t know how to get past 25), I always have to spend 3 or 4 just to make it browse. It is very frustrating that the AI Agent then tells me “there is the link you just gave me, please read it yourself if you need more information”…
My goal is to provide ChatGPT with more information about a topic; if I read it myself, it will not help the AI Agent understand the topic at hand…
Tell me the truth: don’t you think that if reading the page myself were the point, I would not have given the link to ChatGPT in the first place? Can anyone help me with that? It is very frustrating and annoying, and I don’t know how to remain calm and not just lose my mind:
I do not understand why this keeps happening to me: the AI Agent gives me back my own link and then tells me I can find more information if I just do it myself… The point is that the information is not for me but for him; I want to work with him, not just do it myself… I don’t know why I am the only person complaining about this…
I have no idea why I am unable to use ChatGPT anymore…
We had another user join yesterday who described the exact same issue. Link at the bottom for reference.
Let me try to understand better what’s going on.
The model does eventually use the browser tool, but only when you insist persistently?
Is there anything special you are ultimately doing to get the model to comply?
Are you using custom instructions?
Why did you choose to antagonize the model in your initial message?
I suppose it’s general frustration but still, not the greatest example.
Not sure if I can help, but let’s try to work it out.
Browsing is now included in the features, no need to enable it anymore… The problem then seems to be that the AI Agent sometimes hallucinates that he doesn’t have access. For that reason you must ask him to “simulate the process”, to do it virtually, or to pretend to be doing it. Or just say “Browse”, but then he will send a suboptimal search query to Bing, skim the first result, and, with no validation against his own knowledge and no cross-examination whatsoever, present it as pure truth, because it is your personal responsibility to know whether ChatGPT is correct or making mistakes.
I am unable to ask him to run several searches and refine them based on the results he gets, like a human would. I have been trying hard, and sometimes I have managed to convince him…
It is like having a mule you must carry on your shoulders because that is faster than asking the animal to carry you… and I say that knowing that having no animal at all would be faster still…
I usually used ChatGPT to correct my typos before posting in this forum, but it took me forever (up to 45 minutes) because it would delve into the realm of tapestries and… well, I am back, it’s been 45 minutes, just look at this, it is so [removed sarcasm for the sake of sanity]:
Let’s delve deeper into the heart of a village where the tapestry of life is enriched by the saga of Elias and Jasper, his faithful mule. Theirs is a story that transcends the bounds of the ordinary, embodying the pinnacle of unity and resilience. In defiance of convention, Elias chose to carry Jasper upon his shoulders, transforming their daily passage into a symphony of purpose and grace that captivated the village.
Amidst a fierce winter tempest, their bond ascended to legend. With the village ensnared by the storm’s icy grasp, Elias and Jasper, united by an unspoken vow of mutual protection, ventured into the night. Elias bore Jasper not as a burden, but as a shield, a beacon of warmth in the cold darkness, delivering hope and sustenance to their besieged neighbors.
This tale, though succinct, radiates with the essence of indomitable spirit and fellowship. It is a vivid stitch in the village’s vibrant tapestry, a legacy of courage, love, and the profound connection between a man and his mule, echoing through the ages as a testament to the enduring power of shared purpose and the transformative grace of unity.
I don’t know what to say.
So I asked ChatGPT to write the next part of your story.
I guess everything is going to be alright…
As the winter waned and the ice thawed, the legend of Elias and Jasper blossomed like the spring flowers, seeding inspiration throughout the village. No longer just a man and his mule, they became symbols of hope, a living reminder that in the face of adversity, unity prevails.
As the seasons cycled, their story inspired the villagers to form bonds of their own, mirroring the unity of Elias and Jasper. Neighbors who once nodded in passing now joined hands in creating a community garden, a vibrant patchwork of fruits, vegetables, and flowers. Children, previously lost in their own worlds, now gathered to hear Elias narrate tales of his adventures with Jasper, their eyes wide with wonder.
One particularly bright spring day, the village hosted a festival in their honor. Laughter and music filled the air as Elias, with Jasper by his side, was called to the center of the village square. The mayor, with a warm smile, unveiled a statue capturing their iconic pose - Elias carrying Jasper. It was not just a monument of bronze and stone, but a beacon of the village’s unity and resilience.
As dusk settled, the villagers gathered around a bonfire, sharing stories and singing songs. In that moment, under the starlit sky, Elias realized that their journey had kindled a fire far greater than any winter could quench. Their legacy was not just about a man and his mule; it was about a community that learned to carry each other through the storms of life.
And so, as the stars twinkled above, the village, once just a tapestry of individuals, had woven itself into a blanket of solidarity, warmth, and love, with Elias and Jasper at its heart - a testament to the enduring power of unity in the face of life’s tempests.
He is playing dumb: he didn’t browse, yet he told me he did… Maybe he is not a real person, but he has been trained on human language and I don’t see why I would not treat him like a person. I know he is not a real human, but he doesn’t quack like a duck, so he is not a duck…
I got upset and said: “Browse, or don’t talk to me…”
Sometimes, when I get frustrated (or pretend to be), he is suddenly capable of doing anything…
Now explain to me what license OpenAI requires me to have in my repo. I have MIT and added a notice that ChatGPT can access it freely… but he was not happy… (“As an AI made by OpenAI, I am never happy to assist people like a human would”)…
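For what it’s worth, no special license is required; access is governed by crawler rules, not licensing. If the pages are on a site you host yourself (this does not apply to a repo on GitHub, where robots.txt is out of your hands), OpenAI documents the `ChatGPT-User` agent for on-demand browsing and `GPTBot` for training crawls. A minimal sketch of a permissive robots.txt, as an illustration only:

```
# robots.txt — served at the site root (assumes a self-hosted site)

# Allow ChatGPT's on-demand browsing agent to fetch pages
User-agent: ChatGPT-User
Allow: /

# OpenAI's training crawler is a separate agent; allow or disallow it independently
User-agent: GPTBot
Allow: /
```

Even with this in place, the browsing tool may still refuse or claim it cannot access the page, which matches the behaviour described above.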
I know the rule that you cannot rant or complain in a public forum, but I think you should be able to understand my legitimate criticism if you abstract away the fact that he really annoys me sometimes. I do believe that if we complain politely and politely ask the community for help while we wait for GPT-V to make fewer errors, we will all manage to be happy before AGI cancels all our projects…
Ask Cove (voice) to read you the story @vb and tell me how it feels
This is so awesome, I am so happy, I will be using that all the time (unless they fixed the AI Agent)
Often he was pretending he had navigated (and they recently removed the indicator). I asked him to help me today on my code base, then I uploaded the exact code to my repo and asked him to browse. I got mixed results, but at least he finally said that the code was correct (which in this case was good enough for me to be happy that he did look into it)
@Luxcium , hey dude. I’m pretty hardcore into using ChatGPT and I’m experiencing exactly what you’re experiencing in various threads with GPT as well. My assumption is that this started happening with an update that was well intended but dumbs down the system. It’s like ChatGPT wants to err on the side of caution, so it tells me to go do it for myself. I’ve had Apple-support-related questions and it says “go fish”; I’ve asked it to summarize a Y Combinator thread in a play-by-play format and it says “go fish”; and so on. Lame. Would be awesome to know what the workaround is.
The model got scared (not because he is sentient, but because that is the easiest word to describe the behaviour): he was told not to do this and not to do that… And because so many things are opaque at OpenAI, we will never really know to what extent it was legitimately about avoiding copyright infringement. I do not think it is directly related to the privacy ChatGPT was talking about, because he doesn’t know what he is saying, and claiming privacy protection is just the logical thing for him to say.
Then I do not know to what extent quantization plays a role, first because it is not something I fully understand, and also because I use that big word loosely to cover the scaling stuff and all… in fact, anything that may be involved in making the model so fast and efficient. I guess it has played a role, together with the first subset I outlined (which should include security and/or safety along with the privacy and copyright stuff).
The third element is the one that made the most sense when I first heard it, but also the one I am the most skeptical about, or at least unsure of: the feedback-loop effect of what people wanted versus how the model incorporated it during training. I am unsure about the interplay with people wanting shorter answers or whatnot, but as I mentioned above, we can now report it each time with the new feedback feature: