Cost of the ChatGPT API in your deployments

Guys, especially startups that raised money in early rounds: do you still use the OpenAI API? Is it too expensive for you? Especially once you've scaled enough to think about alternatives to the OpenAI API, for example fine-tuning an open model like Llama or Qwen in your startups or projects, or switching to another provider like Claude. I'd be glad if you shared your experience here.

The API is not ChatGPT; they are two different products.

Your question is very hard to answer.

You say nothing about volume: user numbers, calls per user per day, etc.

I would suggest you build a prototype and estimate how it would scale in cost from real experience.
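Once a prototype gives you real token counts, the scaling estimate is simple arithmetic. A minimal sketch of that calculation (the per-token prices and traffic figures below are placeholder assumptions, not current OpenAI rates; plug in your provider's actual pricing and your own measured usage):

```python
# Back-of-envelope monthly API cost estimate.
# ASSUMPTIONS: the prices below are illustrative placeholders,
# not real OpenAI rates. Replace them with your provider's current pricing.

PRICE_PER_1M_INPUT = 2.50    # USD per 1M input tokens (hypothetical)
PRICE_PER_1M_OUTPUT = 10.00  # USD per 1M output tokens (hypothetical)

def monthly_cost(users, calls_per_user_per_day,
                 input_tokens_per_call, output_tokens_per_call,
                 days=30):
    """Estimate monthly API spend in USD from traffic and token counts."""
    calls = users * calls_per_user_per_day * days
    input_cost = calls * input_tokens_per_call / 1e6 * PRICE_PER_1M_INPUT
    output_cost = calls * output_tokens_per_call / 1e6 * PRICE_PER_1M_OUTPUT
    return input_cost + output_cost

# Example: 1,000 users, 5 calls/user/day,
# ~1,500 input + 500 output tokens per call
print(f"${monthly_cost(1000, 5, 1500, 500):,.2f}/month")  # → $1,312.50/month
```

Measuring `input_tokens_per_call` and `output_tokens_per_call` from your prototype's real usage logs (the API returns them in every response) matters far more than the exact price constants, since output tokens typically cost several times more than input tokens.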

I personally spend about the same on the API as I do renting my VPSs, but I might be using AI in a completely different way, and at a completely different volume, than you plan.

How long is a piece of string?


Well, I think I need to clarify in this next message.

To better understand this, I’m particularly interested in hearing from startups that have raised early-stage funding and are using OpenAI’s API.
How many API calls do you typically make? How many users do you support, and what's the average number of calls per user each day?

How do you handle and estimate costs as you grow? Have you found it necessary to switch to other solutions, like fine-tuning open models (e.g., Llama, Qwen) or using other providers (e.g., Claude), to keep costs down?

If you’ve switched to or considered other solutions, what has your experience been like in terms of cost, performance, and implementation?

Any specific numbers or detailed experiences you can share would be incredibly helpful. This will give me a better understanding of the financial side and practical considerations for startups in a similar situation. Thanks!