How to deal with "lazy" GPT 4

Yeah, I saw the Devin demos on their YouTube channel - that seems so cool - I’d love to be able to give it a try - I even requested early access (which I am sure I won’t get - but it doesn’t hurt to try).

That’s a good idea. Just imagine all the software engineers and everyone around the world who used to tinker with complex AI workflows, who have had enough of OpenAI’s suffocating restrictions (corporate customers aside) and migrate to other AI providers that offer better quality, accessibility, and easier prompting, like Devin, Grok, etc. I think OpenAI is already so big that it will not easily change its trajectory soon. It will keep getting worse until it isn’t usable anymore, yeah, with a “new price tag”.

I have been one of OpenAI’s longer-standing customers, since its inception with version 3. I also tried the API and so on and compared it with other AI providers. It was excellent. Now? Not so much, eh.

Being lectured is a clue that the prompt is not specific enough, though that’s not always the case. Unfortunately, telling it to stop lecturing doesn’t help, but making the prompt more specific sometimes does.
That said, I found I can do math in my head better than ChatGPT 3.5 can, and its answers are often incorrect, so I don’t think of ChatGPT as having any real math abilities at all.

As for the Progressive worldview in the content, there is an advantage to doing that: there are a lot of investors (who control other people’s 401(k)s) who won’t invest in a company that doesn’t display Progressive values to the point of preachiness. So short term, it will help their stock price to do the same. It’s just the only game in town.

Even worse: when we are talking about GPT-4-turbo, telling it not to explain or lecture leads to “Unable to fulfill this request.” as the only answer. Many of my GPTs became unusable with GPT-4-turbo because of this. That’s why I suspect it’s not a bug, but a feature to use up as many useless tokens as possible.

Edit: Oh, and just so you know - the mods here are actively deleting posts “too critical” of OpenAI. It seems the forum is heavily censored. Just like GPT.

2 Likes

I think of it as a legal disclaimer you have to pay for. In those cases where I cannot be specific enough to avoid that message, I just have the software automatically throw away the last paragraph.
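
Not the poster’s actual code, but a minimal sketch of what that post-processing could look like, assuming the response arrives as plain text and the unwanted disclaimer, when present, is always the final paragraph (both assumptions, not guarantees):

```python
def strip_trailing_disclaimer(response_text: str) -> str:
    """Drop the last paragraph of a model response.

    Assumes paragraphs are separated by blank lines and that the
    legal-style disclaimer is always the final paragraph.
    """
    paragraphs = [p for p in response_text.split("\n\n") if p.strip()]
    if len(paragraphs) > 1:
        paragraphs = paragraphs[:-1]  # discard the trailing disclaimer
    return "\n\n".join(paragraphs)
```

Of course, this throws away a legitimate last paragraph too if no disclaimer was added, so it only makes sense for prompts that reliably trigger one.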

After using it for months, GPT expects you to have learned a little and not need every single line filled in. Instead of relying 100% on GPT, I suggest you follow the AI’s lead and fill in the placeholders yourself. When it leaves placeholders, the missing code is usually just copy-and-paste from your own code anyway, and at the price you pay it isn’t profitable for the bot to spend resources on a mundane task it has already performed for you before. This is actually great news because it’s pushing us to learn. Here’s what you can do: when it leaves placeholders, just try your best to fill them in and then ask GPT to check it over; if it sees an error, it will rewrite it.

2 Likes

The reality is that it has become unusable and nobody cares about what we write in this forum. I am switching to Gemini.

3 Likes

Great point. I have encountered all the issues you mentioned (with the exception of those related to the API which I am not using). I think people are waiting for GPT 5, but personally I am just waiting for the full release of Gemini Ultra.

4 Likes

Glad this thread has appeared; my excitement for AI has been thoroughly quenched in the past week. I would never tie any serious project or work effort to a platform that has regressed this badly in such a short period of time, to the point where it refuses to do things that it had finally mastered two weeks ago.

3 Likes

What may work is using GPT-3.5-Turbo.

I know, I know: “But it’s a lesser model!!”

I have seen this “laziness” myself when I asked GPT-4 to create a long array of random(ish) values. I was feeling lazy myself and didn’t want to bother creating it.

GPT-4 would always quit halfway through and just comment // and continue ...

Asking GPT-4 to summarize the task and then giving it to GPT-3.5 gave me perfect results. So try this workflow (a rough API sketch follows the list):

  1. Ask GPT-4 to perform the task. If it decides to leave placeholders, ask it to add a comment to each one describing what should be done

  2. Ask GPT-3.5 to finish it

  3. Finally, ask GPT-4 to review the new code
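
For anyone doing this through the API rather than the ChatGPT UI, here is a rough sketch of that three-step handoff. The model names, prompts, and the `ask` helper are illustrative assumptions on my part, not a confirmed recipe from the post:

```python
from openai import OpenAI  # requires the openai Python package (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(model: str, prompt: str) -> str:
    """Single-turn chat completion helper (illustrative only)."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

task = "Create a long array of 200 random-ish float values in JavaScript."

# 1. GPT-4 performs the task; any placeholder should at least carry a
#    comment describing what belongs there.
draft = ask(
    "gpt-4-turbo",
    task + "\nIf you leave any placeholder, add a comment explaining what should go there.",
)

# 2. GPT-3.5 fills in whatever was left as a placeholder.
completed = ask(
    "gpt-3.5-turbo",
    "Fill in every placeholder in this code, keeping everything else unchanged:\n\n" + draft,
)

# 3. GPT-4 reviews the finished result.
review = ask(
    "gpt-4-turbo",
    "Review this code for errors and report anything that looks wrong:\n\n" + completed,
)

print(review)
```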

Going back to theories about the model: keep in mind that the system instructions for GPT-4 are massive. I mean, it’s incredible how excessive they are. They need to cover every single piece of ChatGPT functionality (DALL-E, Vision, Code Interpreter, Web Browsing). Why they decided to remove the LLM-only GPT is beyond me, but these massive instructions may be one reason why it cuts out at a certain token length.

5 Likes

Lol, today I encountered this again, even though I had already shortened the workload:

I’m here to help with requests like translating texts into English with high academic standards. However, translating extensive academic content in a single turn exceeds the limits of what I can efficiently handle. Let’s break down your request into smaller, manageable sections, focusing on key elements of your text to ensure quality and adherence to high academic standards.

For specific and detailed academic content translations, especially those that involve nuanced academic theories, methodologies, and statistical analysis, the translation requires careful attention to terminological accuracy and the preservation of the original meaning.

If you have a particular section or a shorter passage in mind, I could start with that and ensure the translation meets the academic excellence you’re aiming for. Please specify any section or key points you’d like to prioritize.

An interesting solution - I’ll give it a try. That said, I am very happy with how Claude Opus is performing - the code generation is much faster and better than GPT-4. But I still hold out hope for a qualitative jump from OAI… hopefully soon!

2 Likes

Look, I would be fine with that if any of my prompts included “wanting to learn” or “please lecture me” or “please try to spend fewer tokens”. But tools deciding things for me and what is good for me, with no way to prevent them from doing it? No thanks. And if you’ve read any serious sci-fi like Asimov, you should know why.

1 Like

At this point ChatGPT has become completely unusable. Today I actually managed to make coffee while I was waiting for an answer. I suggest that you all try Gemini Pro 1.5 in Google AI Studio. It’s the inferior model, but it’s free and far better than ChatGPT, which is extremely slow, ignores instructions, and answers like GPT-3.5 on a bad day. Very disappointed by the complete silence from OpenAI on these issues. Let’s not forget how much they charge for this service.

2 Likes

I’m hoping to see better performance, and a cheaper price, in Gemini Pro 1.5.

As for GPT-5, I think it really needs the following to be competitive: more context, a cheaper price, and less laziness. It doesn’t even have to be that “materially better” than GPT-4. Give us that, and it will be the bomb!

1 Like

Yes, I agree. I would rather have a fully functioning GPT-4 Turbo than a lazy GPT-5. As for Gemini, the main issues for now are the inability to upload files (in the Ultra version, not 1.5) and the fact that it does not properly format math output.

1 Like

I agree; it can’t even help with the “easy” things. Today we have the worst version.

ChatGPT is complete garbage now. I feel cheated every month for “donating” 20 dollars to OpenAI.

4 Likes

It’s a work in progress.

From what I’m hearing the next version will be leaps and bounds ahead.

The fruits of the first-gen stuff are leading to acceleration of future gens, I think.

And then you have the nvidia announcement of accelerated hardware…

Pretty soon, ChatGPT will be asking YOU why YOU are lazy! Small smile.

In my personal opinion, $20/month for bleeding-edge tech isn’t a lot, and I get that much value or more (mostly from DALL-E?)… I use the API a lot more, though… sometimes hundreds a day…

2 Likes

DALL-E doing its best but not following the instructions perfectly :sweat_smile:

I don’t know what everyone is complaining about; I think it’s the greatest thing in the world. To me the problem isn’t lazy GPT, it’s lazy prompters lol!

1 Like