In the official frontend for ChatGPT (chat (.) openai (.) com), I’m experiencing rather insane amounts of lag, which I’ve never experienced previously.
Checking what might be causing this, I noticed that loading a conversation uses about 200 MB of RAM, and ChatGPT answering me pushes it to a whopping 450 MB, which is way too much for something that is supposedly processed in the backend.
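For anyone who wants to sanity-check this themselves: the browser's built-in task manager shows the full per-tab footprint, and from the DevTools console you can also peek at the JS heap via Chrome's non-standard `performance.memory` property. A minimal sketch (Chrome-only, and it only covers the JS heap, not the whole tab):

```ts
// Rough way to check a tab's JS heap from the Chrome DevTools console.
// performance.memory is non-standard and Chrome-only, hence the cast.
const mem = (performance as any).memory;
if (mem) {
  console.log(`JS heap used:  ${(mem.usedJSHeapSize / 1024 / 1024).toFixed(1)} MB`);
  console.log(`JS heap total: ${(mem.totalJSHeapSize / 1024 / 1024).toFixed(1)} MB`);
}
```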
Then I noticed some bloat (possibly malware?) coming from featuregates (.) org, sentry (.) io, statsigapi (.) net, intercomcdn (.) com, and intercom (.) io, which as far as I can tell are third-party feature-flag (Statsig), error-tracking (Sentry), and support-chat (Intercom) services.
And the only reason I’m using the official frontend is that GPT-4 doesn’t seem to work in any of the terminal-based clients for whatever reason, despite them all claiming to support both GPT-3.5-turbo and GPT-4.
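If it helps to rule things out: those terminal clients just call the public API, and as far as I know API access to the gpt-4 model is granted separately from a ChatGPT Plus subscription, so a client can claim GPT-4 support and still get a model-not-found error. A minimal sketch to check whether a given API key can even see the model (Node 18+ as an ES module; the environment variable name is just an example):

```ts
// Check whether this API key has access to the gpt-4 model.
const apiKey = process.env.OPENAI_API_KEY;

const res = await fetch("https://api.openai.com/v1/models/gpt-4", {
  headers: { Authorization: `Bearer ${apiKey}` },
});

if (res.ok) {
  console.log("gpt-4 is available to this key");
} else {
  // A 404 here typically means the key has no gpt-4 access yet.
  console.log(`gpt-4 not available: ${res.status} ${await res.text()}`);
}
```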
Some more context, because I’m noticing more requests that are rather questionable, to say the least.
There’s an sri.json pointing to a file called “game_core_bootstrap.js”, a very interesting name indeed.
The code inside that file seems to suggest it’s used for CAPTCHA purposes, which is a little weird considering that only logged-in users can even use ChatGPT.
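As for sri.json itself: the name presumably refers to Subresource Integrity, i.e. a list of hashes the page uses to verify that scripts like game_core_bootstrap.js haven’t been tampered with. A minimal sketch of how such a hash is produced, assuming the usual sha384 scheme and a locally saved copy of the file:

```ts
// Compute a Subresource Integrity hash for a downloaded script (Node).
// "game_core_bootstrap.js" is just the local copy of the file in question.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const file = readFileSync("game_core_bootstrap.js");
const digest = createHash("sha384").update(file).digest("base64");

// This is the kind of value an sri.json (or an integrity="" attribute) would carry.
console.log(`sha384-${digest}`);
```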
Then there’s an HTML file loading a JavaScript file with the same name (enforcement).
Then there are API calls to “conversation” (perfectly understandable) and to “moderations”, which seems to log my input and check whether it’s flagged or blocked (we really can’t have nice things, can we?), and finally calls to “settings” and “funcaptcha”.
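For comparison, OpenAI’s public moderation endpoint does exactly this kind of flagged/blocked classification, and presumably the frontend’s internal “moderations” call is something similar. A minimal sketch against the public API (Node 18+ as an ES module; the environment variable name is again just an example):

```ts
// Send a piece of text to OpenAI's public moderation endpoint and print the verdict.
const apiKey = process.env.OPENAI_API_KEY;

const res = await fetch("https://api.openai.com/v1/moderations", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ input: "some user input to classify" }),
});

const data = await res.json();
// One result per input: a boolean `flagged` plus per-category scores.
console.log(data.results[0].flagged, data.results[0].categories);
```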
All fine and everything, but even after GPT is done responding, it keeps loading Sentry all the time, which I guess contributes to the lag.
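For context, Sentry is an error- and performance-tracking SDK, and a typical browser setup looks roughly like the sketch below (not ChatGPT’s actual config, and the DSN is a placeholder). It would explain the constant traffic: every captured error and, depending on the sample rate, every traced page interaction turns into another request to sentry.io.

```ts
// Rough sketch of a typical Sentry browser setup (not ChatGPT's actual config).
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://publicKey@o000000.ingest.sentry.io/0000000", // placeholder DSN
  tracesSampleRate: 1.0, // at 1.0, every traced interaction sends a transaction event
});

// Every captured error or message becomes another network request to sentry.io.
Sentry.captureException(new Error("something went wrong"));
```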