Hi,
I reported the following bug via the ‘help’ tool:
[Preconditions: you need two GPTs, GPT_ONE and GPT_TWO.]
- Open GPT_ONE in a new chat conversation.
- In the very first message, call GPT_TWO right away using the “@” feature, and ask it anything.
- Then exit GPT_TWO using the little ‘x’ button above the text input field.
- Continue talking with GPT_ONE, which should at this point still have access to all its GPT_ONE features and knowledge.
- Reload the page.
Expected behavior: You are in GPT_ONE, and it has GPT_ONE’s functionality and knowledge.
Actual behavior: You are in GPT_TWO, and it has only GPT_TWO’s functionality and knowledge.
When I reported this, the agent, via email, didn’t understand it as a bug report and told me not to use the “@” feature in order to avoid unintentionally triggering GPTs.
It looks like this is not a real person, but a GPT-3.5 bot. Am I right?
This support experience is horrible. You are either employing GPT-3.5 bots for support or outsourcing it somewhere. I want this to get fixed; I pay about $200 a month to OpenAI, and this is ridiculous!
OpenAI’s support at support@openai.com is still blocking my legitimate bug report from getting triaged. They have repeatedly dismissed it, insisting that what I reported is a feature, not a bug, and mocking me for supposedly not understanding how to use the feature.
I have no way to properly report a bug; everything goes through first-level support, which has zero software-engineering knowledge and blocks my bug reports from getting triaged.
I want to know where and how I can talk to a professional who can properly address this bug report.
I have been a team leader at the largest software testing company in the world for 15 years, and I have never experienced a bug-reporting process this bad.
Please, OpenAI, if you’re reading this: make sure that I never have to deal with “Kiseh from OpenAI” at support@openai.com again when I need support from OpenAI.
To illustrate just how disruptive this bug can be: I was working with my own custom GPT (“GPT_ONE”) and enriching it with live web-crawl data from another GPT (“GPT_TWO”) using GPT mentions. My goal was to boost accuracy by providing GPT-4 with RAG-enhanced data. This process took both time and money – I ran multiple web crawls via the API and set up six parallel conversations with GPT_ONE, each collecting valuable information for my ongoing coursework.
When I finished preparing the data, I started working with GPT_ONE in those six sessions. However, at some point my browser reloaded one of the pages. Suddenly, GPT_ONE lost all its enriched knowledge and started behaving as if it were GPT_TWO. Instead of building on the carefully gathered RAG data, it produced hallucinations and nonsense. All the time and resources I’d invested across those six conversations vanished the moment this bug kicked in. The experience was incredibly frustrating and deeply undermined my trust in the platform’s reliability.