I’ve paid $5, and I’m tier 1.
I also went to Platform > Settings > Project > Limits, and gpt-4o-mini is listed as allowed.
I’ve copied the model name from there.
Here is my code:
import os

import openai
from openai import OpenAI

client = OpenAI(api_key=os.environ['OPEN_KEY'])

# main_bot is the discord.py commands.Bot instance defined earlier in the file.
@main_bot.command()
async def ask(ctx, *, question: str):
    try:
        response = client.completions.create(
            model="gpt-4o-mini",  # Ensure you use 'model' and specify the correct model name
            prompt=question,
            max_tokens=150
        )
        answer = response.choices[0].text.strip()
        await ctx.send(answer)
    except openai.APIConnectionError as e:
        await ctx.send("The server could not be reached")
        await ctx.send(e.__cause__)  # an underlying Exception, likely raised within httpx.
    except openai.RateLimitError as e:
        await ctx.send("A 429 status code was received; we should back off a bit.")
    except openai.AuthenticationError as e:
        await ctx.send(f"Authentication failed: {e}")
    except openai.APIStatusError as e:
        await ctx.send("Another non-200-range status code was received")
        await ctx.send(e.status_code)
        await ctx.send(e.response)
    except Exception as e:
        await ctx.send(f"An error occurred: {e}")
Here is my output:
I’ve tried the model names "gpt-4o-mini-2024-07-18", "gpt-4o-mini", "gpt-4-turbo", "gpt-3.5-turbo", and "gpt-3.5-turbo-instruct", but only gpt-3.5-turbo-instruct works; the other models return the same 404 error shown in the output above.
I also renewed my API key and made sure the environment variable holds the new key, but it still doesn't work.
How can I use gpt-4o-mini?
Welcome to the Forum!
The code you’ve posted calls the Completions endpoint, which will only work for instruct models such as gpt-3.5-turbo-instruct.
gpt-4o-mini and the other models you’ve listed are chat completion models. The basic API request for chat completion models looks like this:
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(completion.choices[0].message)
You can refer to the API specs for further details: https://platform.openai.com/docs/api-reference/chat
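If you want to double-check that your key can actually see gpt-4o-mini, you can also list the models it has access to. A quick sketch, reusing the client and environment variable from your snippet:

import os

from openai import OpenAI

client = OpenAI(api_key=os.environ['OPEN_KEY'])

# Print every model id this key can use; gpt-4o-mini should show up in the list.
for model in client.models.list():
    print(model.id)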
Thanks for the reply!
I tried to change the code to this:
@main_bot.command()
async def ask(ctx, *, question: str):
    try:
        client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": f"{question}"}
            ]
        )
        await ctx.send(completions.choices[0].message)
But now it says 403 Forbidden.
A 403 error typically means that you are accessing the API from an unsupported country. You can check the list of supported countries here: https://platform.openai.com/docs/supported-countries
I got past the 403 error, but then it came back with: An error occurred: 'Completions' object has no attribute 'choices'
My country is in the list, so I’m not sure why I got the 403 error, but after I renewed my API key it didn’t happen again.
I changed completions.choices[0].message into response.choices[0].message.content.strip(), and it works!
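In short, the fix is to assign the return value of create() and read the text off .content of the message object. A tiny illustration of what changed (the model name is just the one from my command):

response = client.chat.completions.create(
    model="gpt-4o-mini-2024-07-18",
    messages=[{"role": "user", "content": "Hello!"}]
)
message = response.choices[0].message   # a ChatCompletionMessage object, not plain text
answer = message.content.strip()        # the actual text to send back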
Here is the function code:
@main_bot.command()
async def ask(ctx, *, question: str):
    try:
        response = client.chat.completions.create(
            model="gpt-4o-mini-2024-07-18",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": question}
            ]
        )
        answer = response.choices[0].message.content.strip()  # was: completions.choices[0].message
        await ctx.send(answer)
    except openai.APIConnectionError as e:
        await ctx.send("The server could not be reached")
        await ctx.send(e.__cause__)  # an underlying Exception, likely raised within httpx.
    except openai.RateLimitError as e:
        await ctx.send("A 429 status code was received; we should back off a bit.")
    except openai.AuthenticationError as e:
        await ctx.send(f"Authentication failed: {e}")
    except openai.APIStatusError as e:
        await ctx.send("Another non-200-range status code was received")
        await ctx.send(e.status_code)
        await ctx.send(e.response)
    except Exception as e:
        await ctx.send(f"An error occurred: {e}")
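One thing I might still add is actually backing off when a RateLimitError comes up, instead of only sending a message. A rough sketch (the retry count and sleep times are placeholders, and client is the same OpenAI client as above):

import asyncio

import openai

async def ask_with_backoff(question: str, retries: int = 3) -> str:
    # Retry the chat completion with exponential backoff on 429 responses.
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini-2024-07-18",
                messages=[
                    {"role": "system", "content": "You are a helpful assistant."},
                    {"role": "user", "content": question}
                ]
            )
            return response.choices[0].message.content.strip()
        except openai.RateLimitError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            await asyncio.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... before retrying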