As you can see, text-davinci-003 returns a consistent structure in its response, so I can easily write a script to split the response into parts.
But with gpt-3.5-turbo-instruct and the same prompt, the response sometimes contains \n and sometimes \n\n. The line breaks are unpredictable, so my current script cannot split the response.
This instruct model is trained and tuned differently. You can't change the model's behavior, so you will have to adapt on your end: change the style of prompting and the sampling parameters that you use.
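For example, you could ask for an explicit delimiter in the prompt and lower the temperature so the formatting is more consistent. A minimal sketch, assuming the OpenAI Python library v1.x; the prompt text and the '###' delimiter are placeholders, not your actual prompt:

```python
from openai import OpenAI

client = OpenAI()

# Placeholder prompt: asking for an explicit delimiter means the split
# no longer depends on how the model happens to place line breaks.
prompt = (
    "List three colors. Separate the items with the delimiter '###' "
    "and do not add any blank lines."
)

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    temperature=0,   # reduce sampling randomness for more consistent formatting
    max_tokens=100,
)

parts = [p.strip() for p in response.choices[0].text.split("###") if p.strip()]
print(parts)
```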
Isn’t the vastly easier, simpler, and more reliable solution to fix your script to handle a variable number of line breaks?
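For example, a minimal sketch of a splitter that tolerates one or more consecutive line breaks, assuming the parts are plain text separated only by newlines:

```python
import re

def split_response(text: str) -> list[str]:
    """Split a model response into parts, tolerating \n as well as \n\n."""
    # Collapse any run of newlines into a single delimiter,
    # then drop empty or whitespace-only fragments.
    parts = re.split(r"\n+", text)
    return [p.strip() for p in parts if p.strip()]

# Both single and double line breaks yield the same parts.
print(split_response("First part\nSecond part"))
print(split_response("First part\n\nSecond part"))
```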
Having zero coding experience, I'm trying to work through this with ChatGPT. I may fix my current splitting function to handle the new response structure.