I want a real-time count of tokens spent shown in Usage. I realize it may add latency to the API.
I don't want to explain why I want this.
I understand why you want it. I also use the API, but it's easier to explain with GPT5.2 why I want visible token usage across all of OpenAI's platforms. 5.2 hallucinates, and its drifting often produces obnoxiously long responses about the topic; then it rehashes the topic within the same response, and then starts hallucinating again, all in that one response. I want to know how many tokens are wasted on that nonsense.