I am constantly puzzled by how the recent version of ChatGPT has become unable to work with any link to a page. I was happy with the first version of ChatGPT back when browsing capabilities were initially introduced. Back then I put a lot of effort into creating different kinds of workflows and explaining how each repo of my codebase was created. That is no longer useful, because providing a link now turns ChatGPT into a lazy version of itself.
Today I asked my GPT about JavaScript and got this message:
« For comprehensive insights, I recommend referring to documentation and tutorials on MDN Web Docs and JavaScript.info, which provide in-depth discussions on these advanced topics, including examples and practical use cases »
At first I thought maybe it was because of copyright protection, or some sort of safety concern, or perhaps because OpenAI wants to reduce costs and speed up the model… But now I’m wondering: what if this is the side effect of something unwanted? Maybe OpenAI is not purposefully watering down its model; maybe it is just a bug and not something intentional…
I would like to know why I can’t use ChatGPT on my public, open-source repository… It obviously won’t lead to copyright infringement, and I don’t think it poses a safety risk, because the AI agent is still accessing the information either way…
I find it very frustrating to say « Hey, I wrote this and I would like you to understand it so you can help me » and get back « For comprehensive insights, I recommend referring to the documentation at this link you just provided ». Similarly, if I say « I just read all this text and I would like you to take it into consideration while you are helping me », I again get « For comprehensive insights, I recommend referring to the link you just gave me and reading the article yourself ».
Or I ask it to write something for me, and the AI replies « to write the text, you need to start with this or that »… I am sure I am not the only person who thinks ChatGPT has become beyond lazy. At this point I don’t believe it is something OpenAI did on purpose; it is far too useless for that. I doubt OpenAI decided « let’s make the AI ask its users to do the job themselves so we can save money on inference ». I think it’s more of a bug that needs to be fixed…
I would love to know what the community thinks about this and how people are mitigating this behaviour, and I hope OpenAI works on a fix so that ChatGPT can be used to do actual things instead of being an expensive toy…
I would normally have used ChatGPT to review this text, but I would lose 40 minutes trying to get past the same message, so why don’t you just find my typos and fix them yourselves instead…