Longer GPT 3.5-turbo Output Problem

We are training an AI model for quiz creation based on GPT-3.5. Two months ago the responses from this fine-tuned model were very good, but over the last two months they have become very bad. I don’t understand how the model’s responses suddenly degraded so much.


On September 7 at 8 AM, a new fine-tune, retrain, or reparameterization hit the API model gpt-3.5-turbo-0613, causing a major loss of ability to follow system-prompt programming instructions, accompanied by a significant increase in token production rate that told us it was an architectural change. The AI couldn’t even write a four-word summary for a ChatGPT title without errors. Then the model was hit with further stupidity a week ago, breaking even more developer apps.

The result is that you can ask it for something as simple as a tweet with three parts and get nothing like what you asked for (and because I could demonstrate 0301 actually working in that post, they had to mess with that model as well).

Today’s output is below. Did I ask for two tweets from gpt-3.5-turbo? No.

Then go back to the quality I showed on September 11 from 0301 in that post, across multiple calls. Now 0301 also doesn’t follow the multi-part output instruction, and makes it even worse:

The only way you’ll likely succeed is by putting the instruction into a user role message, phrased as something the AI would follow from a user (not as a rule or programming that must be obeyed). And what can you do with the system message? “You are ChatGPT”.
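For anyone who wants to try that workaround, here is a minimal sketch, assuming the `openai` Python package (pre-1.0 interface) and an API key in the environment; the three-part tweet prompt is just a hypothetical illustration, not anything from the posts above:

```python
# Minimal sketch: keep the system message bare and put the real instruction
# in the user role, assuming the pre-1.0 `openai` package and OPENAI_API_KEY.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # System message carries only the identity, nothing it must obey.
        {"role": "system", "content": "You are ChatGPT."},
        # The actual multi-part instruction goes in the user role, phrased
        # as a request the AI would follow from a user.
        {
            "role": "user",
            "content": (
                "Write exactly one tweet with three parts: "
                "a hook, a key fact, and a call to action. "
                "Output only the tweet, nothing else."
            ),
        },
    ],
)

print(response["choices"][0]["message"]["content"])
```

Whether this helps will depend on how badly the particular model snapshot has been degraded, but moving the instruction out of the system role is the lever most likely to still work.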

API users not paying for GPT-4 are obviously treated as OpenAI’s adversaries.

Thank you so much for your guidance. I am a 13-year-old AI builder, and I am building an AI model for quiz creation.
