GPT 3.5 Turbo is Lazy when Generating Code

I am calling GPT 3.5 Turbo multiple times to have it fill in my C# class one method at a time, but the model returns only the method rather than the whole class, even though I ask it to.

What can I do to force GPT 3.5 Turbo to be less lazy and write everything?
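One workaround that sidesteps the problem entirely: keep the class skeleton on your side and splice each generated method in locally, so the model only ever has to emit one method. A minimal sketch in Python; the class name, method names, and placeholder convention here are all made up for illustration:

```python
# Sketch: keep the C# class skeleton locally and splice in each method the
# model returns, instead of asking it to re-emit the whole class.
# The "// METHOD: <name>" placeholder convention is invented for this example.
SKELETON = """\
public class UserRepository
{
    // METHOD: Save
    // METHOD: Load
}
"""

def splice_method(skeleton: str, name: str, body: str) -> str:
    """Replace the '// METHOD: <name>' placeholder with the generated body."""
    placeholder = f"// METHOD: {name}"
    if placeholder not in skeleton:
        raise ValueError(f"no placeholder for {name}")
    return skeleton.replace(placeholder, body)

# Pretend this string came back from a chat completion for the Save method.
generated = "public void Save() { /* model output goes here */ }"
result = splice_method(SKELETON, "Save", generated)
print(result)
```

This way a lazy response costs you one method, not the whole file.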

Sorry to disappoint you, but:

I would not use GPT 3.5 for day-to-day coding unless you have a debugger for the code it produces. GPT 3.5 is not the best at coding by any means. It can handle simple code, but you have to watch it closely or you will find things that break or go missing, as it is not as capable as the GPT-4 models for coding. Instructions are also key, so adding your preferences for how it should handle the code it produces is important to make your results more consistent.
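One way to add those preferences is a standing system message sent with every request. A minimal sketch, building the standard Chat Completions payload; the instruction wording and prompt text are illustrative, not a recipe guaranteed to stop laziness:

```python
import json

# Hypothetical code-handling preferences, sent as a system message so they
# apply to every request in the conversation.
SYSTEM_INSTRUCTIONS = (
    "You are a C# coding assistant. Always return the COMPLETE file, "
    "never a fragment. Do not elide code with comments like "
    "'// rest of class unchanged'. If the output would be cut off, say so."
)

def build_request(user_prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Build a Chat Completions payload with the standing instructions."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_INSTRUCTIONS},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_request("Fill in the Save() method of my UserRepository class.")
print(json.dumps(payload, indent=2))
```

The same instruction text can go into ChatGPT's custom instructions so both channels behave alike.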


Do you have any suggestions for models other than GPT-4 that can code well? I was using GPT-4 Turbo before and it was good, but it is very slow.

I don’t know if there is something in the middle where the code is decent and it doesn’t take ages to write a small file.

Someone told me ChatGPT getting lazy is an issue only when using the ChatGPT website, and that via the API it still works fine…

Does anyone have any evidence to support that? :thinking:

I am using the API and it is still lazy for me, maybe there is a way of making it less lazy with the API, but I haven’t found it.


There is a turbo preview that has a specific fix for API laziness:

| Model | Description | Context window | Training data |
|---|---|---|---|
| `gpt-4-0125-preview` (New) | GPT-4 Turbo. The latest GPT-4 model, intended to reduce cases of "laziness" where the model doesn't complete a task. Returns a maximum of 4,096 output tokens. | 128,000 tokens | Up to Dec 2023 |

As to speed, better understanding always comes at a cost of speed. There was a note, though, with GPT-5 around the corner, that something faster was mentioned, so let's hope :slight_smile:

This is where you can run into issues: 4,096 output tokens. If you are feeding it large code, you will have to do it in smaller sections or at the function level. It can look at all of it, but it can only respond with 4k.
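A rough way to stay under that cap is to send the source in sections, budgeting tokens with the common ~4-characters-per-token rule of thumb. The heuristic and the line-based splitting below are approximations (a real tokenizer like `tiktoken` would be more accurate):

```python
def estimate_tokens(text: str) -> int:
    """Very rough heuristic: roughly 4 characters per token for code/English."""
    return max(1, len(text) // 4)

def chunk_source(source: str, max_tokens: int = 3000) -> list[str]:
    """Greedily group lines into chunks under the token budget,
    leaving headroom below the 4,096-token output cap."""
    chunks, current, current_tokens = [], [], 0
    for line in source.splitlines(keepends=True):
        line_tokens = estimate_tokens(line)
        if current and current_tokens + line_tokens > max_tokens:
            chunks.append("".join(current))
            current, current_tokens = [], 0
        current.append(line)
        current_tokens += line_tokens
    if current:
        chunks.append("".join(current))
    return chunks

# Simulate a large C# file and send it one chunk at a time.
big_file = "int Add(int a, int b) { return a + b; }\n" * 2000
chunks = chunk_source(big_file)
print(len(chunks), "chunks")
```

Splitting on method or class boundaries instead of raw lines would keep each chunk self-contained for the model.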


From my impression - I use the API and chat for exactly the same tasks, simultaneously, most of the time - the problems are the same for both. But this is purely anecdotal.

For both, instructions are key to everything. ChatGPT allows you to customize the prompt instructions; I put my same ones in there to steer the AI to complete code more efficiently.

If you search the GPT marketplace, there are various prompt-engineering bots built for this reason, to help shape the AI to follow logic. Super prompts are what I use: you pass in large instruction sets, such as the logic the AI should apply when looking at the data, with steps to follow, to make sure outcomes are shaped, formatted, and returned correctly. It takes a lot of patience, and even if you get it just right, you have to remember that responses are seeded each time, so they change; every little quirk that is possible could become a pattern, so it is never 100% bulletproof.
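On the "responses are seeded" point: the Chat Completions API does expose `temperature` and a (beta) `seed` parameter that make outputs more repeatable, though OpenAI describes this as best-effort rather than guaranteed determinism. A minimal sketch; the prompt text and seed value are illustrative:

```python
import json

# Request parameters aimed at repeatable output: temperature 0 removes most
# sampling randomness, and the (beta) seed parameter pins down the rest as
# far as the backend allows. Still best-effort, not guaranteed.
payload = {
    "model": "gpt-4-0125-preview",
    "temperature": 0,
    "seed": 12345,  # any fixed integer; reuse it to reproduce a run
    "messages": [
        {"role": "user", "content": "Refactor this method; return the full class."}
    ],
}

print(json.dumps(payload, indent=2))
```

Comparing the `system_fingerprint` field across responses tells you whether the backend changed between two "identical" runs.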