As an active user of the OpenAI Playground and API in client-facing work, I’ve found tremendous value in the platform. However, there is one feature whose absence limits efficiency: a live token counter in chat mode, similar to the one currently present in completions mode.
In chat mode, users cannot see the combined token count of the system prompt and user input before hitting submit. As a result, they can exceed the token limit unknowingly, leading to trial-and-error submissions that are inefficient and disruptive, especially during time-sensitive client sessions.
In my experience, this has meant submitting large volumes of text only to receive an error message that the token count was too high. A live token counter would allow proactive management of text inputs and prevent such surprises.
The absence of this feature is felt most during rapid prototyping. With a live token counter, users could make informed decisions about which text sections to prioritize for summarization or compression, making work with extensive unstructured data smoother and more efficient.
I would like to formally request the addition of a live token counter to chat mode on the OpenAI Playground, matching the one currently available in completions mode. This would greatly improve users’ ability to manage token usage, increasing productivity in each session and improving the overall experience.