I’m attempting to call the Chat Completions API to get a streamed response. I’m calling the API as shown in the documentation, with the stream option enabled:
curl https://api.openai.com/v1/chat/completions \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "temperature": 0.95,
    "messages": [
      {
        "role": "system",
        "content": "<system message>"
      },
      {
        "role": "user",
        "content": "<user message>"
      }
    ],
    "stream": true
  }'
I’m able to run this example perfectly fine when using gpt-4o, but when I switch to gpt-4o-mini, I get a 403 response.
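For what it’s worth, here is how I’ve been inspecting the failure. This is just a minimal sketch of re-running the request without streaming and with -i so curl prints the status line, headers, and body; my assumption is that the 403 comes back with a JSON error payload whose message explains the rejection:

# Re-run the same request with -i to capture the status line,
# response headers, and the JSON error body behind the 403.
curl -i https://api.openai.com/v1/chat/completions \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "ping"}]
  }'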
Do I need to enable this model somewhere? I am at API Tier 1, and on the limits page it looks like I should have tokens available for gpt-4o-mini.
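In case it helps with diagnosis, the other thing I’m planning to check is which models my key can actually see via the models endpoint; my assumption is that if gpt-4o-mini doesn’t appear for this key/project, that would explain the 403:

# List the models this API key has access to and check for gpt-4o-mini.
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | grep -i "gpt-4o-mini"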