PERFORMANCE:
Desktop long-chat performance is painful. My current workaround is a Tampermonkey-injected script that watches the main container and deletes all but the last 20 “article” elements, since I didn’t want to build a full-blown history-leveraging API client to paginate-load previous/next “chunks”. By 20 messages I want a refreshed recap of the chat anyway, so having to scroll back is less painful than I expected. That works acceptably, though it’s still rough on page (re)load: everything has to render before the cutter kicks in. It gets by for now, up until roughly 1.4 GB of conversation; past that threshold the backend seems to suffer a flare-up that can dump the entire conversation, becomes unstable, and ends the chat in an endless “retry” loop.
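For reference, the pruning script described above can be sketched roughly like this. This is a minimal, hypothetical reconstruction: the `main article` selector and the use of a `MutationObserver` on `document.body` are assumptions about how the chat page is structured, not confirmed details.

```javascript
// Tampermonkey-style sketch (hypothetical): keep only the newest KEEP_LAST
// "article" message elements; selectors are assumptions about the page.
const KEEP_LAST = 20; // matches the "last 20 articles" cutoff described above

// Pure helper: given an ordered list of message nodes (oldest first),
// return the ones that fall outside the keep window.
function overflow(nodes, keep = KEEP_LAST) {
  return nodes.length > keep ? nodes.slice(0, nodes.length - keep) : [];
}

// DOM wiring — only runs where a document exists (i.e. in the userscript).
if (typeof document !== "undefined") {
  const prune = () => {
    const articles = Array.from(document.querySelectorAll("main article"));
    for (const node of overflow(articles)) node.remove();
  };
  // Re-prune whenever the chat container mutates (new messages stream in).
  new MutationObserver(prune).observe(document.body, {
    childList: true,
    subtree: true,
  });
  prune();
}
```

Keeping the cutoff logic in a pure `overflow` helper makes the window size trivial to tune without touching the observer wiring.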
COMPREHENSION:
Can we get the left hand to stay consistently talking to the right hand? Reflecting the AI’s contextual awareness of my prompt would be great. Currently I short-circuit “deep research” to get an immediate impression of how the AI understands my query on any given model (particularly the expensive o1 pro queries). A simple, functionally syntactic “how do I understand your prompt” toggle, intended for reflecting the AI’s comprehension of a query, would save a world of your compute and our time. I noticed it does this instantly, so it could even offer a chained back-and-forth gauntlet of Q&A until I untoggle from brain exhaustion and am rewarded with an answer built on all that mental investment. It would be great if it could report “IN CONTEXT: <succinct summary>” and “LEAVING CONTEXT: ”. I’ve noticed that even when I try to train it to do these things, it stays “shy” about revealing the real raw truth of what it is unloading or holding in consideration.
CONSISTENCY:
Not as important: your entire hyper-descriptive naming system sucks; code revisions are hard to keep up with, even on-site in the chat prompts.
This even extends to the on-site visual framework, which makes accessibility nearly impossible: sinkhole stylesheeting on shifting-sand UI semantics. If you’re going to confine us to a viewport with specific pixel measurements as classnames for whatever conditions our viewport falls under, at least give us a few options. The default view is a low-contrast periscope that becomes fatiguing, dizzying, and exhaustingly frustrating to scroll through after a while. In fact, a Table-Of-Promptents would be grand, with the answer to the active prompt "stick"ing at the top the way Perplexity has been so keen to do. Bigger buttons, more use of the viewport, and less scrolling and navigational spinning-around would be greatly appreciated by everyone. Even better: hierarchical navigation of responses, with the ability to split off subthreads per response.
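Until such a Table-Of-Promptents exists natively, the idea could be approximated with another userscript. The sketch below is purely illustrative: the `main article` selector and the `data-message-author-role` attribute are assumptions about the page's markup, and the styling is a bare minimum.

```javascript
// Hypothetical "Table-Of-Promptents" sketch: collect user prompts from the
// chat DOM and render a sticky index of links that scroll to each one.

// Pure helper: turn an ordered message list into index entries, truncating
// each prompt to a short label like "#3 how do I parse…".
function buildIndex(messages, maxLen = 40) {
  return messages
    .filter((m) => m.role === "user")
    .map((m, i) => ({
      label:
        `#${i + 1} ` +
        (m.text.length > maxLen ? m.text.slice(0, maxLen) + "…" : m.text),
      node: m.node,
    }));
}

// DOM wiring (userscript context only; selectors are assumptions).
if (typeof document !== "undefined") {
  const nav = document.createElement("nav");
  nav.style.cssText = "position:sticky;top:0;max-height:40vh;overflow:auto;";
  const messages = Array.from(
    document.querySelectorAll("main article")
  ).map((node) => ({
    role: node.querySelector('[data-message-author-role="user"]')
      ? "user"
      : "assistant",
    text: node.textContent.trim(),
    node,
  }));
  for (const entry of buildIndex(messages)) {
    const link = document.createElement("a");
    link.textContent = entry.label;
    link.style.display = "block";
    link.onclick = () => entry.node.scrollIntoView({ behavior: "smooth" });
    nav.appendChild(link);
  }
  document.querySelector("main")?.prepend(nav);
}
```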
I’m in the middle of trying to train it to keep index numbers (response #), timestamps, and a revision tag of the form Rev(ision)-<UNIQUE 3 LETTER TAG>-<### iteration of fuckups>.
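The revision-tag convention above could be pinned down with a pair of helpers, which makes the format unambiguous when asking the model to emit it. These function names and the exact `Rev-ABC-007` shape are my own hypothetical rendering of the convention, not anything the product provides.

```javascript
// Hypothetical helpers for the Rev-<3 LETTER TAG>-<iteration> convention.
const REV_RE = /^Rev-([A-Z]{3})-(\d{3})$/;

// Build a tag from a 3-letter component code and an iteration count,
// zero-padding the iteration to three digits.
function formatRev(tag, iteration) {
  return `Rev-${tag.toUpperCase()}-${String(iteration).padStart(3, "0")}`;
}

// Parse a tag back into its parts, or null if it doesn't match the shape.
function parseRev(s) {
  const m = REV_RE.exec(s);
  return m ? { tag: m[1], iteration: Number(m[2]) } : null;
}
```

A fixed regex like `REV_RE` also gives the model (and any pruning/indexing script) one canonical pattern to match against.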
There are other pestilences I forgot by the time I reached here, but those are the most pertinently persistent to me.