Launching o3-mini in the API

I am having the same issue too

1 Like

Turns out this was because I had a `temperature` key, which is a no-no with reasoning models. I guess I kinda knew that, but forgot in the excitement over getting a reasoning model in the Assistants API. Hope that helps someone! (Would be cool to add that to the docs!)
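Since this trips people up, here is a minimal sketch of stripping unsupported sampling parameters before sending a request. Assumptions: the standard Chat Completions payload shape; `build_request` is a made-up helper, not part of any SDK, and the model-name check is a crude heuristic.

```python
def build_request(model: str, messages: list, **params) -> dict:
    # Parameters reasoning models reject (based on the error described
    # above; check the current API reference for the full list).
    unsupported = {"temperature", "top_p", "presence_penalty", "frequency_penalty"}
    if model.startswith("o"):  # crude heuristic for o-series reasoning models
        params = {k: v for k, v in params.items() if k not in unsupported}
    return {"model": model, "messages": messages, **params}

payload = build_request(
    "o3-mini",
    [{"role": "user", "content": "Hello"}],
    temperature=0.7,          # silently dropped for reasoning models
    reasoning_effort="high",  # reasoning-specific knob, kept
)
```

The same payload builder passes `temperature` through unchanged for non-reasoning models, so one code path can serve both.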

1 Like

I am very happy with this model; this is probably the first time I can “3-shot” a functioning 9x9 go board.

This is not something I could do with previous models. The only shortcut in the past was to hint the model to use a canvas (which works, but does not give the same quality of artifact). I was able to do this in 10 shots or so on Sonnet, but o3 is just so far ahead of that with reasoning effort set to high.

Quality of the artifact is impressive.

Prompts were:

make me a fully functioning 9x9 go board
looks good but the stones are not rendering on the intersection, missing hoshi points, missing capturing logic
getting there, stones are not rendering on intersections like they should

Evolution was:

Shot 1

Shot 2


Shot 3 was working… reasoning effort was high.
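For anyone curious what the “capturing logic” in the prompts amounts to, here is an illustrative sketch (not the model-generated artifact): a group is captured when a flood fill finds it has no liberties.

```python
EMPTY = "."

def group_and_liberties(board, r, c):
    """Flood-fill the connected group at (r, c); return (group, liberty count)."""
    color = board[r][c]
    n = len(board)
    group, liberties, stack = set(), set(), [(r, c)]
    while stack:
        y, x = stack.pop()
        if (y, x) in group:
            continue
        group.add((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < n and 0 <= nx < n:
                if board[ny][nx] == EMPTY:
                    liberties.add((ny, nx))
                elif board[ny][nx] == color:
                    stack.append((ny, nx))
    return group, len(liberties)

def remove_captured(board, color):
    """Clear every `color` group with zero liberties; return stones removed."""
    captured = 0
    for r in range(len(board)):
        for c in range(len(board)):
            if board[r][c] == color:
                group, libs = group_and_liberties(board, r, c)
                if libs == 0:
                    for y, x in group:
                        board[y][x] = EMPTY
                    captured += len(group)
    return captured
```

After each move you would call `remove_captured` for the opponent's color first, then check the mover's own color for suicide.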

8 Likes

It is too bad that it does not allow streaming the “thinking” metadata. I tested the model in the Azure playground; it just takes a while to respond - you wait, and all of a sudden the answer shows up.

I’m sure I speak for a large number of developers when I say that all I want is a barebones model that specializes in function calling. Right now, tool calling is the only advantage that OpenAI has over DeepSeek (it’s broken in DeepSeek’s current API). You should be focusing on that advantage, rather than offering a similar product for 10x the cost.

Edit: INB4 4o-mini. I use 4o-mini for most functions right now, and it requires a considerable amount of cajoling. I’m sure that by now, OpenAI could improve on it.
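For context, the tool-calling round trip being discussed looks roughly like this. `get_weather` and its schema are made-up examples, and the dispatch helper is a sketch, not an SDK feature; only the tool-definition shape follows the standard Chat Completions format.

```python
import json

def get_weather(city: str) -> str:
    # Stand-in for a real backend call.
    return f"Sunny in {city}"

# Tool definition in the Chat Completions format: a JSON Schema the model
# uses to decide when and how to call the function.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch(tool_call: dict) -> str:
    """Route a model-issued tool call to the matching local function."""
    fns = {"get_weather": get_weather}
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return fns[name](**args)
```

The model returns `name` plus an `arguments` JSON string; your code parses the string and invokes the local function, then sends the result back as a `tool` message.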

1 Like

Can we expect more models besides 4 and 4o to support compressed attachments in the chat interface? Please!

1 Like

Is o3-mini-high not available for free users?

I am only able to use o3-mini and not able to select low/medium/high. Is that feature available only to subscribers?

getting rid of boomer prompts… finally :smile:

You can’t do that in ChatGPT. This topic is about the API (it enables external developers to use the model in their applications).

Brand new to AI and been using GPT-4o for a few months. Tried o3-mini-high this morning, and it’s amazing… what took hours back and forth with GPT-4o was done in just 5 minutes! I’ve already maxed out my o3 tokens on Public Plus. Can’t wait until I can get to the API; where do we sign up for the advanced beta test? Can’t wait. lol

Would love to see! I think you should be able to put the URL here in the comments?

2 Likes

It told me I couldn’t post URLs, but I’ll try again. Could be because this is really the first time I’ve engaged? Still won’t allow me to post a link, but this will get ya there.

I was helping my son on a server build yesterday and really have not had a chance to get back to it. I am very excited for the future!

2 Likes

No worries, it’s a restriction on new users that disappears after a few posts. We do it to avoid spam, sorry for the inconvenience. :laughing:

1 Like

Oh no, absolutely understand that. Oddly, my video is there in the thread, lol. Y’all are awesome, keep up the good work. o3 is amazing!

I’ll just have to interact more in the community for next time :slight_smile:

2 Likes

Man, does it flag violations left and right. I can’t even paste its own results back to analyze why without it throwing a violation.

I asked it to portray a character in a sci-fi situation and to omit any internal thoughts, just return the speech. It flagged the prompt as a content violation. I eventually narrowed it down to that line of instruction, so I asked it if that could be the problem; it then thought about it while printing out its internal thoughts, which consisted of it insisting its internal thoughts were private. I pasted that back to it, saying, no, they clearly are printed in chat, and that’s a content violation flag. Reacted to that, and it’s another flag. I’m almost scared to talk to the thing now… :wink:

1 Like

Can you share it?

Btw, I found that when I share a chat, which previously created a new thread, it now creates a general chat thread ID… so when you continue chatting in that chat, it will be visible in the shared chat.

I don’t think that is a good idea.

This shared link has been disabled by moderation.

I guess not :open_mouth:

Edit: with a screenshot…

1 Like

There is a thumbs-down button - I think you should use it.

Hi, I asked o1-mini-high to compare itself to DeepSeek R1 and got this response:

* Deepseek R1 LLM is likely optimized for depth and complex language understanding—making it a strong choice for retrieval and semantic search tasks that require a robust handling of longer and more nuanced contexts. Its architecture is probably enhanced with custom features to manage large-scale inputs, but this comes with higher computational costs.

* O1-mini-high, by contrast, is designed to strike a balance between performance and efficiency. It is probably a smaller transformer variant intended for quick inference and deployment in resource-constrained settings, delivering “high” performance relative to its size while accepting some limitations in capturing the deepest contextual details.

WOW!!

“George is the CEO of the company. Kelly is the VP of Sales. Asif is the global head of product development. Mohammed manages the shopping cart experience. Tim manages sales in South. Stefan is responsible for sales in the f100 company. Carol is in charge of user experience”

class Employee {
  name string
  subordinates Employee[] @description(#"
    People reporting to employee
  "#)
}

name='George' subordinates=[Employee(name='Kelly', subordinates=[Employee(name='Tim', subordinates=[]), Employee(name='Stefan', subordinates=[])]), Employee(name='Asif', subordinates=[Employee(name='Mohammed', subordinates=[]), Employee(name='Carol', subordinates=[])])]
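The extracted hierarchy can be reproduced as plain Python dataclasses if you want to validate it locally. A sketch, assuming the same field names as the schema in the post:

```python
from dataclasses import dataclass, field

@dataclass
class Employee:
    name: str
    # People reporting to this employee; empty for leaf nodes.
    subordinates: list["Employee"] = field(default_factory=list)

# The org chart the model extracted from the prompt.
org = Employee("George", [
    Employee("Kelly", [Employee("Tim"), Employee("Stefan")]),
    Employee("Asif", [Employee("Mohammed"), Employee("Carol")]),
])
```

Note the prompt never states who Tim, Stefan, Mohammed, or Carol report to; the model inferred the sales roles under Kelly and the product/UX roles under Asif.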

1 Like