ChatGPT Feature: Context window highlights/info

I have simply assumed this would be added “any day now” for the last year or two.
Please provide some level of visual feedback on token counts versus the context limit of the active model, both for the input field and for the chat history.

Most people know about https://tiktokenizer.vercel.app/. It makes no network requests, it is fast, it uses very little memory on my side, and it easily tells you how many tokens you have left. So why is this not part of the input field while you’re typing a request?
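
For reference, here is a minimal sketch of what client-side counting could look like, assuming the js-tiktoken package (a pure-JS port of tiktoken that needs no network calls). The context limit and the history count are placeholder values for illustration:

```ts
// Minimal sketch: live token counting for the input field, client-side.
// Assumes the js-tiktoken package; the limit below is a placeholder.
import { encodingForModel } from "js-tiktoken";

const CONTEXT_LIMIT = 128_000; // hypothetical limit for the active model

// Loads the BPE ranks for the model once, up front.
const enc = encodingForModel("gpt-4");

// Cheap enough to run on every input event while the user types.
function tokensRemaining(draft: string, historyTokens: number): number {
  const used = enc.encode(draft).length + historyTokens;
  return CONTEXT_LIMIT - used;
}

// e.g. render next to the input field:
//   `${tokensRemaining(textarea.value, 2_431)} tokens left`
```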

After making a request, the API already tells me the exact counts I need in order to create a visual highlight showing exactly which part of my chat history was included when generating any given response. And unlike tiktoken, which carries the overhead of loading its dictionary, this information could be delivered and highlighted at near-zero cost.
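
The Chat Completions API does return a `usage` object with `prompt_tokens`, `completion_tokens`, and `total_tokens`. A sketch of how a client could turn that into a history highlight follows; the per-message bookkeeping and the backwards-walk heuristic are my own assumptions for illustration (the real prompt also includes system text and per-message overhead, so this is approximate):

```ts
// Shape of the usage object returned by the Chat Completions API.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

// Hypothetical client-side record: each message's token count is
// computed once, when the message is created.
interface MessageWithCount {
  id: string;
  tokenCount: number;
}

// Walks the history newest-first, spending the reported prompt_tokens
// as a budget, and returns the ids of messages that fit. The UI can
// then highlight exactly that span of the conversation.
function includedMessages(history: MessageWithCount[], usage: Usage): string[] {
  const included: string[] = [];
  let budget = usage.prompt_tokens;
  for (let i = history.length - 1; i >= 0; i--) {
    if (history[i].tokenCount > budget) break;
    budget -= history[i].tokenCount;
    included.push(history[i].id);
  }
  return included;
}
```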

Also, some general feedback that is incidentally related:
Considering the talent and resources available to OpenAI, I find things like this, and for example the implementation of the plugin store and many parts of the ChatGPT interface, to be severely disconnected from the actual value proposition (i.e. the models). Custom GPTs are a great direction, but you speak of monetization and yet you literally send the full instruction set to the client rather than having a middle man embed it into the request server-side (a sketch of that pattern follows below). We still don’t have folders/grouping for threads. I would love to hear anything from OpenAI about the plans for ChatGPT beyond just the models (or are they relying on a third party to implement a “real” interface?).
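
To be concrete about the “middle man” point, here is a sketch of what server-side embedding could look like: the instructions never leave the server, and only the GPT’s id crosses the wire. Everything here (the `/api/chat` route, the instruction store) is hypothetical and purely illustrative:

```ts
// Sketch: keep a custom GPT's instructions server-side and prepend them
// to the upstream request there, so the client never receives them.
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical server-side store; any database would do.
const instructionStore = new Map<string, string>([
  ["example-gpt", "You are a helpful assistant specialized in ..."],
]);

app.post("/api/chat", async (req, res) => {
  const { gptId, messages } = req.body;

  // Instructions are looked up server-side; the client only sent an id.
  const instructions = instructionStore.get(gptId) ?? "";

  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [{ role: "system", content: instructions }, ...messages],
    }),
  });

  res.json(await upstream.json());
});

app.listen(3000);
```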