Realtime Token Usage Does Not Jibe

After some light testing of the Realtime API yesterday, I see that my token usage is much higher than my recollection of the sessions would suggest.

The Realtime usage data in the OpenAI dashboard shows 20 requests totaling 30,424 tokens: 27,774 input tokens and 2,650 output tokens.

I lightly tested Realtime. I don’t recall any conversation being more than a minute or so.

When I asked ChatGPT-4o how long it would take to speak 27,000 words, I learned that "it would take approximately 3 to 3.6 hours to speak 27,000 words, depending on the speaking pace."

Perhaps my rig left a session running and the microphone picked up ambient conversation, but I doubt it as I was not in the office for very long on a Saturday.

Also, not all of the log files from yesterday’s testing are available in the dashboard. Why?

Something is awry.

Any insights here?


148 seconds of transcription costs $7.54.

Not sustainable.
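For what it’s worth, here is a rough back-of-the-envelope check in Python against the dashboard numbers above, assuming everything was billed as audio tokens at the gpt-4o-realtime-preview rates (the $100/$200 per 1M prices and the all-audio split are assumptions on my part):

```python
# Rough cost estimate from the dashboard token counts.
# ASSUMED pricing: gpt-4o-realtime-preview audio rates of
# $100 per 1M input tokens and $200 per 1M output tokens.
# Real sessions mix text and audio tokens, which are billed differently.

INPUT_TOKENS = 27_774
OUTPUT_TOKENS = 2_650

AUDIO_IN_PER_M = 100.00   # USD per 1M audio input tokens (assumed)
AUDIO_OUT_PER_M = 200.00  # USD per 1M audio output tokens (assumed)

cost = (
    INPUT_TOKENS / 1_000_000 * AUDIO_IN_PER_M
    + OUTPUT_TOKENS / 1_000_000 * AUDIO_OUT_PER_M
)
print(f"Estimated cost: ${cost:.2f}")  # ~$3.31 under these assumptions
```

The gap between that estimate and the $7.54 figure suggests the real token mix or rates differ, which is exactly why the missing logs matter.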


Any chance you saved the logs? Were you using the playground?

It’d be worth doing it again, but this time saving the logs afterwards to understand what’s going on.
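If the next run happens outside the playground, the server’s response.done events include a usage object you can log yourself. Here’s a minimal sketch of a handler for the raw WebSocket events (field names are per my reading of the Realtime API docs; verify them against the current reference):

```python
import json

def log_usage(raw_event: str) -> None:
    """Print token usage from a Realtime API 'response.done' event."""
    event = json.loads(raw_event)
    if event.get("type") != "response.done":
        return
    usage = event.get("response", {}).get("usage") or {}
    print(
        f"input={usage.get('input_tokens')} "
        f"output={usage.get('output_tokens')} "
        f"total={usage.get('total_tokens')}"
    )
    # input_token_details should split input into text vs. audio tokens,
    # which matters because the two are billed at very different rates.
    print("details:", usage.get("input_token_details"))
```

Saving these per-response counts would make it possible to see which turns are driving the input token total.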

I wouldn’t consider the audio transcription time a reliable metric.

Lastly, I would imagine that the input count is a combination of your new input plus the accumulated conversation context.
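As a rough illustration of why that matters (all token counts below are made up), if each request’s input includes the entire prior conversation, input tokens grow roughly quadratically with the number of turns:

```python
# Sketch: input tokens balloon when every turn resends the whole
# prior conversation as context. Per-turn token counts are invented.

TOKENS_PER_USER_TURN = 200       # assumed tokens per user utterance
TOKENS_PER_ASSISTANT_TURN = 150  # assumed tokens per assistant reply

total_input = 0
context = 0
for turn in range(1, 11):
    # This request's input = the new utterance + all accumulated context.
    request_input = TOKENS_PER_USER_TURN + context
    total_input += request_input
    context += TOKENS_PER_USER_TURN + TOKENS_PER_ASSISTANT_TURN
    print(f"turn {turn:2d}: input={request_input:5d}  cumulative={total_input:6d}")
```

Under those made-up numbers, ten short turns already add up to about 17,750 input tokens, so a brief conversation reaching ~27k input tokens isn’t implausible.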


Interesting… I did not save the logs. There are a few logs from the most recent sessions.

Not sure why more logs are not saved automatically.

Will update after next tinker.

Yes, I was working in the Playground.