After some light testing of the Realtime API yesterday, I see that my token usage is much higher than I would expect from that session.
The Realtime usage data in the OpenAI Dashboard shows 20 requests totaling 30,424 tokens: 27,774 input tokens and 2,650 output tokens.
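For anyone wanting to cross-check the dashboard figures, here is a rough sketch that pulls raw usage data directly. The `/v1/usage` endpoint and its `date` parameter appear to be what the dashboard itself calls, but they are undocumented, so treat the URL, parameter, and response shape as assumptions that may change:

```python
# Rough sketch: cross-check dashboard token counts against the
# (undocumented) usage endpoint. Endpoint and params are assumptions.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]

resp = requests.get(
    "https://api.openai.com/v1/usage",           # assumed, not officially documented
    params={"date": "2025-01-18"},               # hypothetical date: the Saturday in question
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()

# Dump the raw payload rather than assuming exact field names;
# look for per-request records with input/output token counts.
print(resp.json())
```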
My testing was light; I don’t recall any single conversation lasting more than a minute or so.
When I asked ChatGPT 4o how long it would take to speak 27,000 words (treating the ~27,000 input tokens as roughly one word each), it told me: "It would take approximately 3 to 3.6 hours to speak 27,000 words, depending on the speaking pace."
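For what it's worth, that range checks out arithmetically if you assume a typical speaking pace of 125–150 words per minute (the pace is my assumption, not something stated in the answer):

```python
# Back-of-the-envelope check of the "3 to 3.6 hours" figure,
# assuming a speaking pace of 125-150 words per minute.
words = 27_000
for wpm in (150, 125):
    minutes = words / wpm
    print(f"{wpm} wpm -> {minutes:.0f} min ({minutes / 60:.1f} h)")
# 150 wpm -> 180 min (3.0 h)
# 125 wpm -> 216 min (3.6 h)
```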
Perhaps my rig left a session running and the microphone picked up ambient conversation, but I doubt it, as I was not in the office very long on a Saturday.
Also, none of the log files from yesterday’s testing are available in the dashboard. Why is that?
Something is awry.
Any insights here?