Seeking clarity on limited availability of "o1-mini" and "o1-preview" models: Technical constraints or strategic decision?

Well, right now their own models are mostly competing with each other, plus Anthropic Claude, Google Gemini, and Meta Llama; outside of those, there isn't much competition. They are even, in a sense, competing with Azure Copilot/GPT if you think about it. So it will be interesting to see whether the price comes down. Historically it has, to drum up demand and likely draw in more revenue. Who knows how much more compute this model requires than GPT-4o, but I surmise it is quite a bit of additional processing power. Once it gains more of the functionality in ChatGPT that the traditional models already have, I think we could see the cost start to come down, but that is all just conjecture.
