Currently, when a conversation gets longer, the page starts hanging. I am not talking about model speed, just UI responsiveness: even an operation like copying a response takes time. Maybe we need to paginate how much chat history we load, and load more on scroll-up. The same issue exists in the VS Code Codex plugin: the LLM context gets optimized, but long conversations still hurt usability, and many times both the browser and VS Code crash.