Need help with ChatGPT API

Hello…

I need some help. I am a total newbie and have been struggling for hours to get this issue
resolved.

I have the code below (taken from a deeplearning.ai course)… I don’t know what I am missing.

import os

import openai
from dotenv import load_dotenv, find_dotenv

# Read the API key from a .env file instead of hard-coding it
_ = load_dotenv(find_dotenv())
openai.api_key = os.getenv('OPENAI_API_KEY')

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # deterministic output
    )
    return response.choices[0].message["content"]

text = f"""
You should express what you want a model to do by
providing instructions that are as clear and
specific as you can possibly make them.
This will guide the model towards the desired output,
and reduce the chances of receiving irrelevant or incorrect responses.
"""

prompt = f"""
Summarize the text delimited by triple backticks
into a single sentence.
```{text}```
"""
response = get_completion(prompt)
print(response)

I am getting the error below, even after waiting for hours and days between attempts to run this code. What am I missing? It’s really frustrating, as I am totally new to hands-on coding.


RateLimitError                            Traceback (most recent call last)
Cell In[4], line 11
      1 text = f"""
      2 You should express what you want a model to do by
      3 and reduce the chances of receiving irrelevant or incorrect responses.
      4 """
      6 prompt = f"""
      7 Summarize the text delimited by triple backticks
      8 into single sentence.
      9 {text}
     10 """
---> 11 response = get_completion(prompt)
     12 print(response)

Cell In[3], line 3, in get_completion(prompt, model)
      1 def get_completion(prompt, model="gpt-3.5-turbo"):
      2     messages = [{"role": "user", "content": prompt}]
----> 3     response = openai.ChatCompletion.create(
      4         model=model,
      5         messages=messages,
      6         temperature=0,
      7     )
      8     return response.choices[0].message["content"]

File C:\ProgramData\anaconda3\Lib\site-packages\openai\api_resources\chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
     23 while True:
     24     try:
---> 25         return super().create(*args, **kwargs)
     26     except TryAgain as e:
     27         if timeout is not None and time.time() > start + timeout:

File C:\ProgramData\anaconda3\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py:153, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
    127 @classmethod
    128 def create(
    129     cls,
   (...)
    136     **params,
    137 ):
    138     (
    139         deployment_id,
    140         engine,
   (...)
    150         api_key, api_base, api_type, api_version, organization, **params
    151     )
--> 153 response, _, api_key = requestor.request(
    154     "post",
    155     url,
    156     params=params,
    157     headers=headers,
    158     stream=stream,
    159     request_id=request_id,
    160     request_timeout=request_timeout,
    161 )
    163 if stream:
    164     # must be an iterator
    165     assert not isinstance(response, OpenAIResponse)

File C:\ProgramData\anaconda3\Lib\site-packages\openai\api_requestor.py:298, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
    277 def request(
    278     self,
    279     method,
   (...)
    286     request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
    287 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
    288     result = self.request_raw(
    289         method.lower(),
    290         url,
   (...)
    296         request_timeout=request_timeout,
    297     )
--> 298     resp, got_stream = self._interpret_response(result, stream)
    299     return resp, got_stream, self.api_key

File C:\ProgramData\anaconda3\Lib\site-packages\openai\api_requestor.py:700, in APIRequestor._interpret_response(self, result, stream)
    692     return (
    693         self._interpret_response_line(
    694             line, result.status_code, result.headers, stream=True
    695         )
    696         for line in parse_stream(result.iter_lines())
    697     ), True
    698 else:
    699     return (
--> 700         self._interpret_response_line(
    701             result.content.decode("utf-8"),
    702             result.status_code,
    703             result.headers,
    704             stream=False,
    705         ),
    706         False,
    707     )

File C:\ProgramData\anaconda3\Lib\site-packages\openai\api_requestor.py:765, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
    763 stream_error = stream and "error" in resp.data
    764 if stream_error or not 200 <= rcode < 300:
--> 765     raise self.handle_error_response(
    766         rbody, rcode, resp.data, rheaders, stream_error=stream_error
    767     )
    768 return resp

RateLimitError: You exceeded your current quota, please check your plan and billing details.


I didn’t need all that code to see the error, which is:

RateLimitError: You exceeded your current quota, please check your plan and billing details.

You must set up an API payment plan if you have no free-trial credit remaining. API usage costs money, billed for the data in each AI input and response.

  1. Go to the billing overview of your account.

  2. Click “start payment plan” if you’ve never done that.

  3. Select an account type (personal/business), enter a credit card, and choose the amount of prepaid credit to purchase.

(If you see a “please slow down” error when buying credits, try again later when the service is less busy.)

You then must generate an API key in the account, and put it into your code directly or set it as an environment variable.
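A minimal sketch of the environment-variable approach, matching the `OPENAI_API_KEY` name used in your code. The key value here is a placeholder for illustration only, not a real key — in practice set the variable in your shell (or a .env file) rather than in source code:

```python
import os

# Placeholder for illustration only -- never commit a real key to code.
# In a real setup, export OPENAI_API_KEY in your shell or put it in a
# .env file, and this setdefault call will be a no-op.
os.environ.setdefault("OPENAI_API_KEY", "sk-demo-placeholder")

key = os.getenv("OPENAI_API_KEY")
print(key is not None and key.startswith("sk-"))
```

Once this prints True, the same `os.getenv('OPENAI_API_KEY')` line in your script will pick the key up, and requests will succeed as long as your account has credit.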

Good luck!
