I am trying to call GPT-3.5 Turbo multiple times in order to get it to fill in one method of my C# class at a time, but the model just returns the method as opposed to the whole class, even though I ask for the whole class.
What can I do to force GPT-3.5 Turbo to be less lazy and write everything?
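For reference, this is roughly the shape of the call I'm making. A minimal Python sketch; the helper name `build_fill_method_messages` and the prompt wording are just illustrative, and the payload would be passed to `client.chat.completions.create(model="gpt-3.5-turbo", messages=...)` with the official `openai` package:

```python
def build_fill_method_messages(class_source: str, method_name: str) -> list[dict]:
    """Build a chat payload that explicitly asks for the whole class back."""
    system = (
        "You are a C# code assistant. Always respond with the COMPLETE, "
        "compilable class in a single code block. Never return only the "
        "changed method, and never elide code with comments like '// ...'."
    )
    user = (
        f"Here is my current class:\n```csharp\n{class_source}\n```\n"
        f"Implement the body of the method `{method_name}` and return the "
        "entire class, unchanged except for that method."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Each call resends the full current class, so the model always has
# everything it would need to return the complete class.
messages = build_fill_method_messages("class Foo { void Bar() { } }", "Bar")
```

Even with instructions like these, the model often returns just the method.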
I would not use GPT-3.5 for day-to-day coding unless you have a debugger for the code it produces. GPT-3.5 is not the best at coding by any means. It can handle simple code, but you have to watch it closely or you will find things that break or go missing; it is just not as capable as the GPT-4 models for coding. Instructions are also key, so adding your preferences for how it should handle the code it produces is important to make your results more consistent.
There is also the Turbo preview model that specifically targets API laziness:
| Model | Description | Context window | Training data |
|---|---|---|---|
| gpt-4-0125-preview (New, GPT-4 Turbo) | The latest GPT-4 model, intended to reduce cases of "laziness" where the model doesn't complete a task. Returns a maximum of 4,096 output tokens. | 128,000 tokens | Up to Dec 2023 |
As for speed: better understanding always comes at a cost of speed. There was a note, though, with GPT-5 around the corner, that something faster was mentioned, so let's hope.
This is where you can run into issues: the 4,096 output-token limit. If you are feeding it large code, you will have to do it in smaller sections or at the function level. It can look at all of it, but it can only respond with about 4K tokens.
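One way to work within that cap is to split a large source file into smaller pieces before sending it. A minimal sketch of my own splitting heuristic (not an official tool); it approximates tokens as roughly 4 characters each, which is a common rule of thumb, not an exact count:

```python
def split_into_chunks(source: str, max_tokens: int = 3000) -> list[str]:
    """Greedily group lines into chunks under an approximate token budget,
    staying safely below the 4,096-token output cap."""
    max_chars = max_tokens * 4  # rough ~4 chars/token approximation
    chunks, current, size = [], [], 0
    for line in source.splitlines(keepends=True):
        if size + len(line) > max_chars and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks
```

Each chunk then goes out in its own request, and the replies are reassembled on your side. For real token counts you would use a tokenizer such as tiktoken instead of the character heuristic.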
From my impression (I am using the API and the chat for exactly the same tasks simultaneously most of the time), the problems are the same for both. But this is purely anecdotal.
For both, instructions are key to everything. ChatGPT allows you to customize the prompt instructions; I put the same ones in there that steer the AI to complete code more efficiently.
If you search the GPT marketplace, there are various prompt-engineering bots built for this reason, to help shape the AI to follow your logic. "Super prompts" are what I use: you pass in a lot of instruction sets, such as the logic the AI should apply when looking at the data, with steps to follow, so that outcomes are shaped, formatted, and responded to correctly. It takes a lot of patience, and even if you get it just right, remember that every response is seeded differently, so results change; every little quirk that is possible could become a pattern, so it's never 100% bulletproof.
Maybe it is because of token restrictions.
I am using GPT-4, and I asked it to rewrite a 30-page chapter of a book I am writing more professionally, and it cannot produce more than 2 pages. The funny thing is that GPT-4 apologizes and tries to do it again, but the result is always cut off after about 2 pages.
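That cut-off is exactly what the output cap looks like: the API reports `finish_reason == "length"` when the reply was truncated. A hedged sketch of a "keep going" loop, where `complete` stands in for a real `client.chat.completions.create` call (the fake stand-in below is only for illustration):

```python
def rewrite_long_text(prompt: str, complete, max_rounds: int = 10) -> str:
    """Accumulate output across several calls until the model finishes
    on its own (finish_reason == "stop") or we give up."""
    parts = []
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_rounds):
        text, finish_reason = complete(messages)
        parts.append(text)
        if finish_reason != "length":  # "stop" means the model was done
            break
        # Feed the partial answer back and ask the model to resume.
        messages.append({"role": "assistant", "content": text})
        messages.append(
            {"role": "user", "content": "Continue exactly where you left off."}
        )
    return "".join(parts)

# Stand-in that gets "truncated" twice, then finishes on the third call.
calls = {"n": 0}
def fake_complete(messages):
    calls["n"] += 1
    return (f"[part {calls['n']}]", "length" if calls["n"] < 3 else "stop")

result = rewrite_long_text("Rewrite my chapter...", fake_complete)
```

Even so, for a 30-page chapter it is usually more reliable to send a few pages at a time than to rely on continuations.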