Teknium says what we're all thinking

Teknium, a well-known open-source AI developer, made this comment about gpto

Here is his comment:

@petrroyce It ****** cannot debug anything it does wrong and instead just repeats its same past attempt at a solution ad infinitum

I absolutely agree and have been running into the same behavior. I’ve wanted to post something similar many times in this forum.

I’ve edited out the profanity as @vb reasonably asked me to in private mail.


I don’t know what it is, but it’s become useless for me when trying to do anything with an API it’s not super trained on.

I think there needs to be a great deal of clever RAG going on in order to get this to work.

E.g., something that can pull up the right APIs and sample code would be quite effective, I think.
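To make the idea concrete, here’s a rough sketch of what that retrieval step could look like: before sending a coding question to the model, rank some API doc snippets by naive keyword overlap and prepend the best matches to the prompt. All snippet text and function names below are made up for illustration; a real setup would use embeddings and actual docs.

```python
def score(query: str, snippet: str) -> int:
    """Count how many query words appear in the snippet (naive relevance)."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in snippet.lower())

def build_prompt(query: str, doc_snippets: list[str], top_k: int = 2) -> str:
    """Prepend the top-k most relevant snippets as context for the model."""
    ranked = sorted(doc_snippets, key=lambda s: score(query, s), reverse=True)
    context = "\n---\n".join(ranked[:top_k])
    return f"Relevant API docs:\n{context}\n\nQuestion: {query}"

# Hypothetical doc snippets for a made-up client library:
docs = [
    "client.chat.create(model, messages) sends a chat request.",
    "client.files.upload(path) uploads a file for fine-tuning.",
    "client.images.generate(prompt) creates an image.",
]
prompt = build_prompt("How do I send a chat request with messages?", docs)
```

Even this crude version would hand the model current sample code instead of relying on whatever stale API surface it memorized during training.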


It’s interesting to see how it did so well on the charts, but from what I’ve seen, not many people really like it.

It just reminds me so much of working in construction. Theoretical measurements do not conform well to reality. That’s where the applied learners excel.

I agree. It feels overfitted. It fails in conversations. I moved back to GPT-4, but it’s throwing spelling errors and is also unusable.

For API I’ve been very happy with gpt-4o though. The pricing and capabilities are amazing.


Sure, I’ll repeat exactly the same thing again!


Their cost savings and our speed increase are nice, but it’s worth nothing if it’s not useful. It can kinda do small one-off tasks okay, but past 10 or so messages, forget about it!


All raw models do that though. It’s just a question of how many of its attempts it can keep in attention… :thinking:

I guess the answer for omni is 1 :laughing:


I’m hoping that OpenAI’s recent RAG company acquisition helps. :wink:



LOL, perfect meme for gpto. I don’t know how many times I’ve had to resist typing obscenities to ChatGPT when it spits out exactly the same thing.

“Chat, what freaking changed!?”

Hmm, I don’t recall seeing that with earlier versions. They would usually try something else. Which, admittedly, often didn’t work, but at least it tried something.

hey, just a nitpick, maybe it’s just me, but could you say gpt4o or omni rather than gpto? thanks a bunch :grimacing:

Blame OpenAI for introducing it as gpt2 :slight_smile:

I do

One thing that I’ve learned while working with RAG chatbots is that you should avoid asking the model to do multi-step tasks (complex tasks).

Let’s say you are trying to continue a conversation about a coding issue (as above): if you ask it to solve the issue again and again, you will end up in a loop. One possible solution would be to create another prompt that analyzes the current result and lists possible paths to solve the task.

Now, regarding your specific use case: the model doesn’t have access to updated docs or information. A lot of times when I ask for Python or TypeScript code, I run into outdated library issues, which would be solved with RAG or by breaking the task into smaller units. IMO it shines when trying to find paths to the desired solution, doing a single simple thing, or finding “the next step”, not the full solution.
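Here’s a minimal sketch of that two-prompt pattern: instead of re-asking the model to “fix it” in one ever-growing thread, a separate analysis prompt reviews the last attempt and proposes genuinely different approaches, and each retry gets a fresh, short prompt. `call_model` is just a placeholder stub, not a real API; swap in your actual client.

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real LLM API call (stubbed out for illustration)."""
    return f"[model response to: {prompt[:40]}...]"

def solve_with_analysis(task: str, max_rounds: int = 3) -> list[str]:
    """Alternate between attempting the task and analyzing the failed attempt."""
    attempts = []
    attempt = call_model(f"Solve this task:\n{task}")
    attempts.append(attempt)
    for _ in range(max_rounds - 1):
        # Second prompt: critique the failed attempt instead of retrying blindly.
        analysis = call_model(
            "The following attempt did not work:\n"
            f"{attempt}\n"
            "List 2-3 genuinely different approaches, then pick the best one."
        )
        # Fresh, small prompt for the chosen approach keeps the context short.
        attempt = call_model(f"Task: {task}\nApproach to try:\n{analysis}")
        attempts.append(attempt)
    return attempts

history = solve_with_analysis("Fix the import error in my script", max_rounds=2)
```

The point of the design is that the analysis step breaks the repetition loop: the model is forced to enumerate alternatives before retrying, rather than pattern-matching back to its own previous answer.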

It has limitations, and I’ve read somewhere on this forum that you shouldn’t expect the model to do everything; instead, you should ask for help with the things it really does well.

Let’s hope that this changes in the future versions :pray:


I asked AI Studio the same question that I asked 4o. Studio forgot to give me the .htaccess modifications. Claude was about the same as 4o. If I hit a roadblock, I’ll just ask all three.