I’m a heavy ChatGPT user and I use it extensively for long-running technical work, especially on a large trading system project. Most of that work lives inside Project chats, which naturally become very long over time.
One recurring problem is that at some point the chat starts getting noticeably slower. Response time degrades, sometimes quite a bit. The issue is that there’s no way to tell when you’re approaching that point or how far past it you are. Right now the only signal is intuition: if the chat “feels” slow, I start a new one. That’s not great.
It’s also unclear what “long” actually means in practice. I could theoretically count characters or messages, but that’s not realistic. OpenAI clearly knows what the effective limits are and when performance starts to drop, but none of that information is visible to the user.
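To illustrate why manual counting is both tedious and unreliable: the best a user can do today is a back-of-the-envelope estimate like the sketch below. It uses the common "roughly 4 characters per token" rule of thumb for English text, and the `soft_limit` value is entirely made up for illustration; the real tokenizer and ChatGPT's actual effective context limits may differ significantly, which is exactly the problem.

```python
# Hypothetical back-of-the-envelope chat-size estimate.
# Assumes the common ~4 characters per token rule of thumb for
# English text; the real tokenizer and actual limits are unknown
# to the user, which is the point of this feature request.

def estimate_tokens(chat_text: str) -> int:
    """Rough token estimate from raw character count."""
    return len(chat_text) // 4

def size_warning(chat_text: str, soft_limit: int = 100_000) -> str:
    """Classify a chat against a made-up soft token limit."""
    tokens = estimate_tokens(chat_text)
    if tokens < soft_limit // 2:
        return "healthy"
    if tokens < soft_limit:
        return "getting large"
    return "consider starting a new chat"
```

Even this crude heuristic requires exporting and concatenating the whole chat by hand, and its thresholds are guesses; the service itself already has the exact numbers.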
What I’d like is some kind of simple, high-level indicator that shows whether a chat is still in a healthy range or whether it’s getting too large and likely to lag. This doesn’t need to expose token counts or internal details; even a basic signal would help. For example:
- A rough context/load indicator
- A warning when a chat is past a recommended size
- A suggestion that it might be time to start a new chat
Right now this is all guesswork, even though the system clearly has the data. Some visibility here would make long-form, project-based use much smoother.
Thanks
Larry