ChatGPT is getting useless

ChatGPT is getting more useless and worse every day. In the last two weeks it became so useless that I canceled my subscription.

I send it an article and ask it to help me understand some things, and it just starts inventing figures that are not in the file, references that do not exist, and so on. If it cannot even properly read a file that I sent it, it is completely useless. I just sent it a molecular biology article and it started saying that the figures show RNA-seq data when they are just microscopy images. If it could interpret the figures or read the legends, it would have seen that it was wrong.

I feel that if I trust it, it will lead me to make severe mistakes. And if I cannot trust that it can read what I send it, what is the point of using it? Honestly, the model's inability to ask for and confirm what it needs, instead of just guessing and inventing, is annoying and has completely killed the reason for asking ChatGPT anything. Some months ago it could understand what I was asking, or at least it was more careful about the answer.

From molecular biology articles to help with programming small things (like an MTG card organizer), asking ChatGPT for help has been a pain. Has anyone else been having issues over the past few weeks? Because since the beginning of April, it has been a pain in the ass.

Hi Marcalrepoles,
ChatGPT is a probability machine: it takes chunks of your words and spits out a new piece of a word that best fits the most probable answer. But there is error in probability, just as there are answers. Just curious, have you been using the same chat instance for all of your messages, or are they separate individual chats? Sometimes if you've been using the same instance, the conversation context that builds up can confuse the model and its answers can become unstable.
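
To make the probability idea concrete, here is a toy Python sketch (purely illustrative, not how the real model is implemented) of picking the next token by sampling from a probability distribution:

```python
import math
import random

# Toy illustration: choose the next token by sampling from a probability
# distribution over candidate tokens, the way a language model does at
# each step (the tokens and scores below are made up for the example).
def sample_next_token(logits: dict[str, float]) -> str:
    # Softmax: turn raw scores into probabilities that sum to 1.
    exp_scores = {tok: math.exp(score) for tok, score in logits.items()}
    total = sum(exp_scores.values())
    probs = {tok: s / total for tok, s in exp_scores.items()}
    # Sampling: likely tokens win most of the time, but not always,
    # which is one reason plausible-sounding wrong answers appear.
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Hypothetical scores for the word after "The figure shows":
print(sample_next_token({"microscopy": 2.0, "RNA-seq": 1.5, "a": 0.5}))
```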

Dude, don’t trust GPT, especially for critical work. First, look at the Windows or Android marketplaces: how many GPT-based cash-grab apps are on there? Then look at how they capped GPT-4.5 for Plus users. Then you get the picture: resource management. Sad to say, the fun is over.

Then look at online multiplayer games. They also run AI with tons of users interacting at the same time. With one user file, a game can recall any setting and any progress. So how come OpenAI can’t implement a simple local save on the user’s local drive? You get my drift? All those errors, hallucinations, guardrails, and ignored user instructions are engineered to frustrate users and push them toward Pro subscriptions or corporate plans.

It is about money, man. Wise choice ending your subscription. I’m canceling this month too. It is useless.

These models face two major difficulties:

  1. Image interpretation: Their ability to read and interpret diagrams, drawings, and graphs is still quite limited. The o3 model made a slight improvement in this area by enhancing interactions with images, but image-reading capabilities remain very restricted. The best approach is still to manually describe key points to the model instead of simply providing the image.

  2. Context Window: When dealing with large amounts of content at once, the model becomes less effective (I’ve written a paper about this: Reasoning Degradation in LLMs with Long Context Windows: New Benchmarks).

However, this doesn’t mean ChatGPT isn’t useful for your work. My recommendations are:

  • Include a description beneath each image, highlighting all relevant details (see the sketch after this list).

  • Use reasoning models (such as o4-mini-high or o3) instead of GPT-4o, as they handle large context windows much better.

  • To further enhance results, summarize less relevant sections if you want detailed reasoning about a specific part of the document, thus reducing the overall context window size.
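
As a minimal sketch of the first bullet, here is how pairing a figure with a hand-written caption can look using the OpenAI Python SDK. The model name, image URL, and caption are placeholders, not something from the original post:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Send the figure together with a manual description so the model does
# not have to rely on its limited image reading alone. The URL and
# caption below are placeholders for your own figure.
response = client.chat.completions.create(
    model="o3",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Figure 1 (manual caption): fluorescence microscopy "
                     "of cultured cells; no sequencing data in this panel. "
                     "Question: what does this figure suggest about "
                     "protein localization?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/figure1.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```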

By adopting these practices, you’ll notice significant improvements.

try different models, like o4-mini or o3, not just 4o.

use it to generate drafts, form hypotheses, organize ideas, provide overviews, assist in interpretations — but not as a final version without review.

create a new document with a pre-written description of each image, since these models struggle significantly when mixing image analysis with embedded text — even more so when dealing with multiple images. analyze individual images separately beforehand.

be more specific and demanding in your prompts, such as:
“you are a careful assistant and always check and verify the evidence in the text,”
“if you don’t have enough data, say that you need more details,”
“explain step by step how you reached that conclusion,” or similar.

if possible, break the document into chapters or smaller sections. the more input content, the more the models hallucinate or get confused. some models on the market advertise millions of context tokens, but that works better as marketing than in practice: hallucination becomes overwhelming with too much content.
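
a minimal sketch of that kind of splitting in python (the file name and size limit are placeholders; adjust to your documents):

```python
# Split a long paper into chunks at paragraph boundaries, then ask about
# one chunk per chat instead of pasting the whole file at once.
def split_into_chunks(text: str, max_chars: int = 8000) -> list[str]:
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        # Start a new chunk once adding this paragraph would overflow.
        if current and len(current) + len(paragraph) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += paragraph + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

# Assumes a plain-text export of the paper in the working directory.
with open("paper.txt", encoding="utf-8") as f:
    for i, chunk in enumerate(split_into_chunks(f.read())):
        print(f"--- chunk {i}: {len(chunk)} characters ---")
```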

Yeah, I do not trust it. Usually I ask for something related to the paper I sent it and ask for the references. Then I go online and look up the references (I use ChatGPT almost as a better Google in that case). But lately it is really noticeable how many hallucinations and made-up things are coming up in the answers. And the programming side has become really annoying. Before, I could give it some scripts and context, and it helped correct my script based on that. But now, sometimes I ask it something and it just deletes everything in my script and keeps only the part I asked about. It was not like that a few weeks ago. Anyway, thanks for answering.

Thank you, I am doing that and also opening a new chat for each point I want clarified. It is easier that way. But to me it is pretty clear that the model has gotten worse, in my usage at least.

I am doing that now. I open a new chat and break the paper up by figure to discuss each one.

In my experience with documents, Google Notebook LM has been excellent at analyzing documents more precisely. The service handles multiple documents in a manageable way in the side tab and responds strictly based on the documents provided, including clickable references that take you exactly to the relevant page.

The downside of this is that it is so restricted to documents that it is terrible for asking related questions, creating new examples, mentioning similar cases, etc.

So when I want more precision about the strict content of documents, I use Notebook LM. When I want to expand knowledge in the chat, I use ChatGPT. Or I use a hybrid, inserting text generated by ChatGPT as a reference in Notebook LM, or taking the more precise analyses of the documents made by Notebook LM and inserting them in ChatGPT to expand the conversation.