OpenAI API responses becoming random


I have integrated my application with the OpenAI APIs and it was producing OK results. Since I upgraded in April 2024, the outputs have worsened significantly, with the agent ignoring key aspects of my prompt. Is anyone else experiencing a change recently? Are there any ways I can improve performance?

Welcome to the Forum!

First of all, every model behaves slightly differently, so it is not unusual to see variance between versions. It is advisable to always perform in-depth testing before switching to a new model version.

As for how to address the specific issues you are facing: can you share details of what you are using the model for, including example system and user prompts and the deficiencies in the responses you are getting back? Thanks!


In addition to @jr.2509's excellent response, you can try to resolve the issue by pinning the model version that previously produced better results.
Instead of sending the request to gpt-4, which is an alias that points to the latest snapshot, you can choose a snapshot from an earlier date. You will find the list here:



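As a rough sketch of what pinning a snapshot looks like: the helper name `build_request` is purely illustrative, and `gpt-4-0613` is just one example of a published dated snapshot — check the model list for what is currently available.

```python
# Sketch: pin a dated snapshot instead of the moving "gpt-4" alias,
# and lower the temperature to reduce run-to-run randomness.
# build_request is a hypothetical helper, not part of the SDK.

def build_request(messages, model="gpt-4-0613", temperature=0):
    """Assemble keyword arguments for a chat completion request."""
    return {
        "model": model,              # dated snapshot, not the floating alias
        "messages": messages,
        "temperature": temperature,  # 0 = least random sampling
    }

params = build_request(
    [{"role": "user", "content": "Break this process into 20 steps."}]
)

# With the OpenAI Python SDK (v1.x) this would then be sent as:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(**params)
```

Pinning a snapshot means the model underneath your integration only changes when you explicitly update the name, so you can re-test before adopting a newer version.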
Thank you kindly for responding. I am enabling business processes to be documented via an automation tool, Zapier. I have two different flows using the OpenAI APIs.
The first lets the user upload long, unorganised text describing their processes, and the prompt organises this into a table with a step-by-step breakdown. Even though the prompt explicitly says to break it down into 20 steps, the API often returns far fewer, such as 10, even though the equivalent prompt in ChatGPT directly returns good output.

The second conducts analysis based on a knowledge base and identifies opportunities for improvement. It returns a rating shown as filled circles, from 1 to 5, to indicate the size of each opportunity. Despite the prompt specifying this format, the responses are often very inconsistent.

For both flows I am using gpt-4.

Let me know if you have any other thoughts on how I can improve this.