Same code behaves differently when run locally in Python and on a web server

I am testing the API quickstart code provided by OpenAI, and ran into the following situation. (Billing is proceeding normally, of course.)

  1. When running the Python code locally, the GPT-4 model gives noticeably better results than the GPT-3.5 model.
  2. When I deployed the same Python code to a web server (a Microsoft Azure virtual machine, running the same quickstart code from GitHub), the results were at GPT-3.5 quality even though I specified the GPT-4 model. (The logs, however, show a response indicating that the GPT-4 model was used.) It was not like this from the beginning; the problem appeared at some point during more than two weeks of development testing.
  3. Another oddity is that the usage-limit error messages arrive with GPT-4's limits, yet the results still match GPT-3.5 quality when compared against local tests.

How can this phenomenon be explained? (I only took the quickstart sample code and changed the front end slightly for testing…)

You can look at the day-by-day usage in your account and see whether gpt-4 is actually being billed for your requests.

At some point, did you go into the example code locally and edit in "gpt-4" plus rewrite it for ChatCompletion, but not make the same change on your deployment (or forget to re-upload)? Check the deployed app code again.

The API quickstart (Python Flask) that I found, while showing a simulated API-key screen with recent dates, contains nine-month-old code that uses davinci and the completion endpoint.
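To make the distinction concrete, here is a minimal, stdlib-only sketch of the two endpoints involved. The payload shapes and URLs follow the OpenAI REST API; the prompt string is just a placeholder, and the code only hits the network if an `OPENAI_API_KEY` environment variable is set. The key point: `gpt-4` is only served by the chat completions endpoint, so the old quickstart's completion-endpoint code cannot reach it.

```python
import json
import os
import urllib.request

API_KEY = os.environ.get("OPENAI_API_KEY", "")
PROMPT = "Say hello."  # placeholder prompt

# Old quickstart style: the completions endpoint with a davinci model.
# gpt-4 is NOT available here.
legacy_payload = {"model": "text-davinci-003", "prompt": PROMPT}

# Current style: the chat completions endpoint, which is where gpt-4 lives.
chat_payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": PROMPT}],
}

def call(endpoint: str, payload: dict) -> dict:
    """POST a JSON payload to an OpenAI API endpoint and return the parsed reply."""
    req = urllib.request.Request(
        f"https://api.openai.com/v1/{endpoint}",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if API_KEY:  # only make a real request when a key is configured
    result = call("chat/completions", chat_payload)
    # The response echoes which model actually served the request.
    print(result["model"])
    print(result["choices"][0]["message"]["content"])
```

Checking `result["model"]` in the response (not just your request) is a quick way to confirm what the deployed server is really calling.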

If you doubt the output quality, you can ask questions that only GPT-4 tends to answer correctly, like my banana test.

“Today I have four bananas. Yesterday, I ate two. How many bananas do I have now?”
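For what it's worth, the trick in this question (as I read the test) is temporal: the two bananas were eaten *before* today's count of four, so the correct answer is simply four. Weaker models tend to subtract anyway. A trivial worked check:

```python
# The banana test, worked out by hand.
bananas_counted_today = 4
eaten_yesterday = 2  # already reflected in today's count of four

correct_answer = bananas_counted_today  # NOT bananas_counted_today - eaten_yesterday
print(correct_answer)  # → 4
```

If the deployed app answers "two" while your local GPT-4 run answers "four", the deployment is almost certainly not calling GPT-4.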


Just thought I’d let you know that your banana test broke my grandson. 🙂 Looks like he’s running 3.5 and I’ve got to have a talk with his parents. 🙂

Paul


Happy cake day, and 😂 that made me giggle!


Thanks. 🙂 I try to have a little fun. LOL