How to deal with "lazy" GPT-4

What an incredible explanation! Thank you very much, that is super helpful.

1 Like

This thread is a perfect example of how people love to blame everything around them when something doesn’t go their way and then look for self-validation online, rather than actually getting to the bottom of it and A/B testing the new model against the old one. :man_facepalming:

If they did do that, they would realize that it is just as capable as before. Instead, they have simply gotten used to it: no longer so easily impressed, they naturally notice more flaws now. But where’s the fun in that! So we get these conspiracy theories instead…

This was my first thought when I kept seeing people complain about it. Although I don’t code, I’m a heavy user most days with my own white label site.

I feel like the wonder is missing for some people. Or they’re leaning on it without as much intent in their instructions. I’m not saying that’s the whole story, but I am concerned about these assertions because new users are worried about this. Newcomers aren’t likely to be coding; they just need to get their custom instructions and requests voiced properly.

I’m just offering most writing tasks in one-shot format now. Much easier for new users to digest.

1 Like

A possible solution might be the ‘fine-tuning’ technique, which adjusts the model’s parameters based on task-specific data to improve its performance. Although this option is currently in experimental access, it could be a way to enhance the accuracy of GPT-4 for specific coding tasks.
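To make the fine-tuning suggestion concrete, here is a minimal sketch of what task-specific training data looks like in OpenAI's chat-format JSONL: one JSON object per line, each with a `messages` list. The prompts and completions below are made-up placeholders, not real training data.

```python
import json

# A few task-specific examples in OpenAI's chat fine-tuning JSONL format.
# The example contents are placeholders for illustration only.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a concise Python coding assistant."},
        {"role": "user", "content": "Write a function that reverses a string."},
        {"role": "assistant", "content": "def reverse(s):\n    return s[::-1]"},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a concise Python coding assistant."},
        {"role": "user", "content": "Write a function that squares a number."},
        {"role": "assistant", "content": "def square(x):\n    return x * x"},
    ]},
]

# One JSON object per line, as the fine-tuning endpoint expects.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

A file like this would then be uploaded and referenced when creating a fine-tuning job; in practice you would want many more examples than two.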

I hope this information is helpful to you and that the issues you’ve been facing are resolved soon. :sunflower:

1 Like

“conciseness” or “brevity”. These terms reflect the quality of being direct and to the point, without unnecessary elaboration or chatter.

1 Like

Not only has GPT-4 gotten lazy, the support staff has too. They don’t feel like responding. They cancelled my ChatGPT Plus in the middle of my subscription period and are no longer responding.

1 Like

Yes, same here! That’s funny. It was fun having GPT write all that copy, but “good UX writing” these days is more like one sentence, whether for desktop or mobile interfaces. :man_shrugging:

If I’m customizing the writing topic, I’ll make sure the GPT is trained on the keyword data for the client, too. I specifically mention top-keyword choices in the Instructions for added emphasis.

Nah, don’t be. Have fun debunking the things. I’m literally just going to tell them what @razvan.i.savin said a few posts back. It is an excellent and succinct explanation.

And you’re also right. I think the key for simple CustomGPTs are well-organized Instructions and clear references to any documents. You can quiz the GPT itself and it will help you optimize everything. It’s super neat.

I’ve seen the way a GPT might struggle for an answer from a long document, versus the immediacy of how well it knows one that’s well referenced in the Instructions. It’s all coming down to technical writing.

1 Like

Hi @nicoloceneda,
Could you please share your experience with Gemini in comparison to ChatGPT-4 or other similar tools? Specifically, I am interested to know whether it is good at analyzing and summarizing legal documents, and whether it is capable of keeping track of lengthy legal arguments. Any feedback would be greatly appreciated.

1 Like

Based on my experience, I would say that ChatGPT (I am not talking about the API) is definitely not the right tool for you. The context window is very limited (in the low thousands), which makes it unable to process long documents. Moreover, ChatGPT does not output the content of the documents 1:1, no matter how hard you try. For these kinds of tasks, I would recommend using Gemini 1.5 Pro, which has a context window of 1 million tokens and outputs the content of documents with high fidelity.
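Since the context window is the bottleneck here, one practical workaround is to split a long document into pieces that each fit the window and process them separately. Below is a minimal sketch; `chunk_document` is a hypothetical helper, and the words-to-tokens ratio is a rough heuristic, not a real tokenizer.

```python
def chunk_document(text, max_tokens=2000, tokens_per_word=1.3):
    """Split text into chunks that should fit a small context window.

    Uses a rough words-to-tokens heuristic instead of a real tokenizer,
    so treat max_tokens as an approximate budget, not a hard limit.
    """
    words = text.split()
    max_words = int(max_tokens / tokens_per_word)
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]
```

For exact budgets you would count tokens with the model's actual tokenizer rather than a word-count heuristic.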

1 Like

Just a clarification, since my posts have already been deleted by the admins for “promoting Gemini”: I am not trying to promote Gemini; I just wanted to give you my honest opinion on the topic. Also I don’t work for Google and I couldn’t care less who wins the race.

1 Like

Yes, this is true.

I can see what the ChatGPT-4 fanboys are insisting on. For basic writing, I think ChatGPT-4 covers that well, along with other free AI providers.

The problem is when you want complex, complete, error-free code with the least prompting and regenerating. In addition, any automation-related tasks using the API should be prioritized too, to save costs.


For “remembering” long documents and conversations there isn’t an AI solution yet. It’s the hot topic. For now, it’s up to you to summarize each section and keep it stored, because of the context window limit @nicoloceneda was talking about.

For long documents and discussions, look into the topic of recursive summarization.

Generally speaking, and for now, it’s easier to break long documents into subsections and then summarize or access as necessary.


Yeah, it’s pretty broken at the moment. If you make an empty custom GPT and enable DALL·E etc., it should still work, though.

1 Like

Good idea. But even though I created a custom GPT, when I ask whether it is based on GPT-4 or GPT-3.5, it still answers 3.5. And for an identical math question, the answer is now significantly shorter and has some errors.


Thanks for the info. Until they fix their errors, laziness, and restrictions on certain keywords, I won’t stop calling out their “silent” regressions at the expense of regular users who pay (donate) $20/month to support the free users and OpenAI’s bloated staff.

The only real worth of OpenAI/GPT right now is their enterprise solution, as they give the highest priority and best service to enterprise customers (including MS). I can feel the stark difference, since I use an enterprise account at work.


Hello, I also use ChatGPT for coding projects. I’m not specifically talking about ChatGPT Plus or GPT-4, but I can say that for me it isn’t following what I ask exactly. I’ll ask it to fix a problem with something, and it will provide me with new code, but the code will either make the problem worse, or other times do nothing to fix the problem at all. I’m not saying it is all bad for coding; it has helped me a lot, since I only code in Python and not the other languages it uses. But sometimes it can be lazy or just get annoying when trying to fix some problems.

1 Like

There was definitely a period two or three weeks ago where GPT-4 was returning terrible, useless output, as if OpenAI was cutting corners on requests or context instead of denying requests. The same prompts in my app that were working great months ago were suddenly returning garbage, but I was paying the same amount for the tokens :frowning:. Thankfully, it seems to have returned to normal since I tried it again late last week.

I am using this for work, so I’m not paying for it personally. I cancelled my personal subscription months ago because of degrading performance and censorship. I can use free alternatives and local models for things I need.

1 Like

I’ll +1

I find that if I start a new (web) chat window it responds better. I am convinced it’s a throttle for heavy web users. I don’t see it in the API responses.

If you create a GPT and then an error happens in the browser and it refreshes, you lose everything. I’d rather keep that history than the history being shown here.