I'm working on an application that makes GPT-4 API calls. It was working last night, but as of this morning all of my API calls are failing. Here's some example Python code for testing:
from openai import OpenAI

LLM = OpenAI()  # reads the API key from OPENAI_API_KEY

response = LLM.chat.completions.create(
    model='gpt-4',
    messages=[{'role': 'user', 'content': 'What is 1+1?'}],
)
print(response)
(My API key is stored in the environment variable OPENAI_API_KEY.)
The response I get is:
NotFoundError: Error code: 404 - {'error': {'message': 'The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
Interestingly, if I use gpt-3.5-turbo instead of gpt-4, then I get the error:
RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
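For reference, both failures can be reproduced in one go with something like this; it's just a sketch using the two exception classes from the tracebacks above (I'm on the 1.x Python SDK):

import openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Try both models and report which exception each one raises.
for model in ['gpt-4', 'gpt-3.5-turbo']:
    try:
        client.chat.completions.create(
            model=model,
            messages=[{'role': 'user', 'content': 'What is 1+1?'}],
        )
        print(model, '-> OK')
    except openai.NotFoundError as e:
        print(model, '-> model_not_found:', e)
    except openai.RateLimitError as e:
        print(model, '-> insufficient_quota:', e)

Running that, gpt-4 hits the 404 and gpt-3.5-turbo hits the 429, exactly as shown above.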
I have confirmed in the admin UI that I am nowhere near any caps for this model.
It does work in the Playground, which as far as I can tell is using the same key as the one on my local machine. Does that mean it's some sort of config issue on my end?
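To try to rule out a key mismatch on my side, something like this (standard library only, nothing OpenAI-specific) prints a masked fingerprint of whatever key the script actually picks up, which can then be compared against the key shown in the dashboard:

import os

key = os.environ.get('OPENAI_API_KEY')
if key is None:
    print('OPENAI_API_KEY is not set in this environment')
else:
    # Show just enough of the key to compare against the dashboard.
    print(f'key loaded: {key[:7]}...{key[-4:]} (length {len(key)})')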
Update: after I got back from my meetings yesterday, I generated a new API key and everything worked fine. I definitely also tried that yesterday morning, so I'm not sure what the original problem was, but at least it's working now.
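If anyone else runs into this, one quick way to sanity-check a regenerated key is to list the models it can see; something like this should work on the 1.x SDK:

from openai import OpenAI

client = OpenAI()  # picks up the regenerated key from OPENAI_API_KEY

# List the models this key can access and confirm gpt-4 is among them.
model_ids = [m.id for m in client.models.list()]
print('gpt-4 available:', 'gpt-4' in model_ids)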