A question about API pricing.
Here are my test prices for the Realtime API:
Real-time-gpt-4o, with system prompt: $0.18/min
Real-time-gpt-4o, without system prompt: $1.63/min
I was wondering why the price gap between running with and without the system prompt is so significant.
Has anybody encountered the same issue?
How long is the system prompt?
The system prompt is 1,000 tokens.
I think OpenAI prepends a default system-prompt if you don’t define one.
Try defining a system prompt that is one character or something like “Help the user.”
This should be shorter than the default one and therefore cheaper.
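As a quick sketch of what "defining a short system prompt" looks like: in the Realtime API you set instructions on the session, e.g. via a `session.update` event over the WebSocket. The event and field names below follow my reading of the Realtime API docs; double-check them against the current reference before relying on this.

```python
import json

def make_session_update(instructions: str) -> str:
    """Build a (hypothetical) session.update event that sets a short
    system prompt, overriding any longer default."""
    event = {
        "type": "session.update",
        "session": {"instructions": instructions},
    }
    return json.dumps(event)

# A one-line prompt instead of whatever default the server would use:
payload = make_session_update("Help the user.")
```

You would send `payload` as a text frame on the open Realtime WebSocket connection.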
Cheers! 
Hi, thanks for the suggestion. In my case, I want to build a AI-order system, so I need to put the menu to the system prompt. That’s why the system problem has to be so long
This seems like a perfect use case for RAG!
It basically means that you would have a knowledge base that the AI can access and search for relevant information.
You can have a ton of data, even more than the context window can hold, and the AI will look up whatever it needs.
(Utilizing RAG lowers costs because far less text goes into the input tokens.)
If you need some resources, do let me know! 
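To make the idea concrete for the ordering use case: instead of putting the whole menu in the system prompt, you retrieve only the items relevant to the user's request and inject just those. This is a minimal illustrative sketch; the menu data and the word-overlap scoring are my own stand-ins, and a real system would typically use embedding similarity instead.

```python
# Hypothetical menu; in practice this lives in a database or vector store.
MENU = {
    "margherita pizza": "Tomato, mozzarella, basil. $9",
    "pepperoni pizza": "Tomato, mozzarella, pepperoni. $11",
    "caesar salad": "Romaine, parmesan, croutons. $7",
    "tiramisu": "Espresso-soaked ladyfingers, mascarpone. $6",
}

def retrieve(query: str, k: int = 2) -> list:
    """Rank menu items by word overlap with the query (embeddings
    would replace this in a real RAG pipeline)."""
    q = set(query.lower().split())
    scored = sorted(
        MENU,
        key=lambda name: len(q & set(name.split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """System prompt containing only the retrieved items, not the full menu."""
    items = retrieve(query)
    lines = ["%s: %s" % (name, MENU[name]) for name in items]
    return "You take food orders. Relevant menu items:\n" + "\n".join(lines)
```

The system prompt then stays a few dozen tokens per turn regardless of how large the full menu grows.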
Thanks, I'm familiar with RAG, but I think the extra retrieval step may hurt the latency of the system.