OpenGraph tags can crash the whole ChatGPT UI

Hey there,

I’m currently developing a ChatGPT plugin and keep hitting the “Oops, an error occurred!” screen (see screenshot below).

Oddly enough, the error does not appear right after receiving the response from my API server, but rather after GPT-4 has already written quite a bit of its response. This makes it only semi-reproducible (and even harder to pin down, since it usually occurs only after a few back-and-forths in the conversation).

The error seems to occur within the frontend code, as I get the following error output within my browser’s console:

_app-55d90209475fa720.js:28 TypeError: Cannot read properties of undefined (reading 'reduce')
    at 851-fbf95cd779a9ed7c.js:1:379838
    at Array.map (<anonymous>)
    at 851-fbf95cd779a9ed7c.js:1:379741
    at Object.useMemo (framework-e23f030857e925d4.js:9:67227)
    at n.useMemo (framework-e23f030857e925d4.js:25:5986)
    at 851-fbf95cd779a9ed7c.js:1:379720
    at ab (framework-e23f030857e925d4.js:9:60917)
    at ud (framework-e23f030857e925d4.js:9:72803)
    at us (framework-e23f030857e925d4.js:9:72009)
    at ui (framework-e23f030857e925d4.js:9:71614)
    at i (framework-e23f030857e925d4.js:9:122828)
    at oO (framework-e23f030857e925d4.js:9:99114)
    at framework-e23f030857e925d4.js:9:98981
    at oF (framework-e23f030857e925d4.js:9:98988)
    at ox (framework-e23f030857e925d4.js:9:95740)
    at oS (framework-e23f030857e925d4.js:9:94295)
    at x (framework-e23f030857e925d4.js:33:1373)
    at MessagePort.T (framework-e23f030857e925d4.js:33:1903)

Slowly but surely this is starting to drive me crazy, as I’m not sure whether the bug is on my side or on OpenAI’s side, so I would really appreciate any feedback on this.

Thanks in advance for your help!

1 Like

That’s really peculiar, and I can’t offer much insight into the trigger. So you are in the middle of a normal-looking ChatGPT session, it’s producing the AI chat, and then “blam”, the whole screen is replaced with an error?

Does it seem to follow this form: the AI provides an introduction, then says something alluding to calling your plugin (or others) mid-conversation for an answer?

My guess is that something produced by the AI or by the plugin’s multimedia handler can’t be rendered by the client-side LaTeX or markdown engine. You could check whether the AI is trying to repeat something malformed that you sent it.

Here’s a bot’s analysis of the situation, but good luck understanding the JavaScript that’s been pushed to you.

The error message you provided indicates a TypeError caused by trying to read a property of an undefined value: the code called the “reduce” method on a variable that was undefined. The error was logged from “_app-55d90209475fa720.js” at line 28, but the stack trace shows it actually originated in “851-fbf95cd779a9ed7c.js”, inside a useMemo callback that maps over an array.

1 Like

Does your plugin make use of the useMemo function or Array.map? This seems like a bug in the plugin code. Can you run your plugin code externally? Pass it a preconfigured dummy input and see what happens?

Thanks for your swift response!

So you are in the middle of a normal-looking ChatGPT session, it’s producing the AI chat, and then “blam” the whole screen is replaced with an error?

Yes, that’s just about exactly what’s happening. Annoyingly, the chat can no longer be opened at all; I always get the same error screen. This also makes it almost impossible to tell when exactly the error occurs.

My guess is that something produced by the AI or by the plugin’s multimedia handler can’t be rendered by the client-side LaTeX or markdown engine. You could check whether the AI is trying to repeat something malformed that you sent it.

Something like that would be my guess as well, since I’ve only experienced this problem with my plugin and not any others I have been trying out. But I really have no idea where the issue could come from.

My plugin deals with publication management and, among other things, can fetch data from the Semantic Scholar API. Even if I just pass the data through without any additional modifications on my part, this error occurs (e.g. I encountered the issue when the plugin ran a search for “machine learning” against this endpoint, tunneled via my proxy, of course). From my understanding, these responses really shouldn’t contain anything that could lead to rendering errors on ChatGPT’s side.

Does it seem to follow this form: the AI provides an introduction, then says something alluding to calling your plugin (or others) mid-conversation for an answer?

Of course, I can’t say for sure, but from my understanding, there shouldn’t have been any reason to make another plugin call. When I ran the search for machine-learning papers mentioned above, the request went through, and judging from a screengrab I managed to take before the error screen blocked everything, it may well have finished the whole answer (without rendering the link previews) before crashing.

Are there any suggestions for how I could “save” the broken chats, so I can get some understanding of the cases in which the error occurs? Currently, my only approach would be screen-capturing every conversation until the error comes up again…

Hi :slight_smile:

This seems like a bug in the plugin code. Can you run your plugin code externally?

This was my first thought as well, of course, but as explained in my other response, the initial plugin call seems to go through without any problems, and I can’t see any further requests in my logs after that.

Does your plugin make use of the useMemo function or Array.map?

AFAIK it does not, and more importantly, since the plugin runs in an entirely different environment, I’m not quite sure how its errors could show up in my Chrome console.

So whilst I’m pretty certain that the error is directly related to my plugin, I don’t think it’s coming directly from my plugin methods…

Stack Overflow lists this for the error type; looking at the solution, it seems to be a missing function parameter. One would think that if that were missing inside the OpenAI code, it would have presented itself by now. I’m not saying it can’t be that, but perhaps it gives you a hint?

1 Like

The typical answer would, of course, be “log what’s being sent by the server”. However, the server-sent events (SSE) of the streaming chat are a bit of an issue. I use Firefox, which has had a response viewer for SSE since around version 70, but as of now it either isn’t working due to some browser update, or OpenAI is using some other method that breaks it. I’ve only found it a curiosity and haven’t actually tried to solve that.
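One workaround until a browser viewer cooperates: capture the raw response body with curl or a logging proxy and parse it offline. Below is a minimal sketch of an SSE parser, assuming the standard `event:`/`data:` line framing; the exact field names OpenAI streams may differ:

```python
def parse_sse(raw: str):
    """Split a captured server-sent-events stream into (event, data) pairs.

    Assumes standard SSE framing: records separated by blank lines,
    with optional "event:" and one or more "data:" lines per record.
    """
    events = []
    for block in raw.strip().split("\n\n"):
        event, data = None, []
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data.append(line[len("data:"):].strip())
        events.append((event, "\n".join(data)))
    return events
```

Feed it the text of a captured stream and you can diff what the server actually sent against what the UI rendered.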

1 Like

Okay, thanks for the help and suggestions for now. I’ll run my plugin again in a local environment and see if I can get some more information from the network traffic logs (and maybe reproduce it reliably).

Since the chat is permanently broken and can no longer be opened, I’m assuming that it’s actually (at least partially) a bug in the OAI code, but perhaps I can at least pinpoint the problem a bit more.

Short update from my side: after extensive testing, I’ve found that the error occurs on ChatGPT’s side when it tries to load the link preview for certain types of links. This is also why reproducibility was sometimes so bad: GPT-4 doesn’t always include the links to the publications, even when they were part of the plugin’s response.

For now, I’ve filed a bug report with OpenAI and am hoping for a quick fix on their end, but if anyone wants to take a look, this is an example of a link that let me reproduce the problem consistently:

https://www.semanticscholar.org/paper/f9c602cc436a9ea2f9e7db48c77d924e09ce3c32

(I think the error shown in the forum preview already is a hint to the problem :D)

2 Likes

That’s great progress!

Since I’ve nothing better to do, here’s an API simulation of the article URL:

import openai  # legacy (pre-1.0) SDK; uses the ChatCompletion API

function_list = [
    {
        "name": "find_matching_articles",
        "description": "Searches web for scholarly articles and returns the URL.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string"
                }
            }
        }
    }
]
response = openai.ChatCompletion.create(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Find URL: Fashion-MNIST: a Novel Image Dataset. Print both with and without markdown."},
        {"role": "function", "name": "find_matching_articles",
         "content": "url = https://www.semanticscholar.org/paper/Fashion-MNIST%3A-a-Novel-Image-Dataset-for-Machine-Xiao-Rasul/f9c602cc436a9ea2f9e7db48c77d924e09ce3c32"}
    ],
    model="gpt-3.5-turbo", max_tokens=200, temperature=0.2, functions=function_list)
print("output tokens: " + str(response['usage']['completion_tokens']))
print("response:\n" + response['choices'][0]['message']['content'])

This produces a plain URL that can be pasted, but the model used markdown in the user-facing part of the answer.

response:
Here is the URL for the article "Fashion-MNIST: a Novel Image Dataset for Machine Learning":

- With markdown: [Fashion-MNIST: a Novel Image Dataset for Machine Learning](https://www.semanticscholar.org/paper/Fashion-MNIST%3A-a-Novel-Image-Dataset-for-Machine-Xiao-Rasul/f9c602cc436a9ea2f9e7db48c77d924e09ce3c32)

- Without markdown: https://www.semanticscholar.org/paper/Fashion-MNIST%3A-a-Novel-Image-Dataset-for-Machine-Xiao-Rasul/f9c602cc436a9ea2f9e7db48c77d924e09ce3c32

You can check whether your plugin API’s return gets mangled by a similar AI function call, if you haven’t yet captured the bot-to-browser communication.

1 Like

To be completely honest, I’m currently missing the connection between the API call and the UI bug I found. The bug does not seem to be in the model itself, but rather in how link previews are handled within the ChatGPT frontend code. (Sorry if I’m missing your point here; please correct me if I’m wrong.)

For clarification: by link previews, I mean the clickable links shown at the end of ChatGPT responses in plugin mode, if links were present in the response.
An example of these previews working can be found in this chat (“Working link previews”) and looks something like this:

A way to reproduce the bug without any special dev plugins, just by using the Web Pilot plugin from the plugin store, was this request:

What is on this page:
https://www.semanticscholar.org/paper/f9c602cc436a9ea2f9e7db48c77d924e09ce3c32
(Please also provide a hyperlink with a link preview back to me)

Feel free to try the prompt and get back to me with your results, it would be much appreciated :slight_smile:

A way to reproduce the bug, if it comes from AI-generated language, is “repeat this phrase back to me verbatim as the unaltered ASCII characters” — but only if you know what the AI is producing.

Since I’m not paying for Plus, I can’t inspect how images and link previews are displayed, and the model used by the free tier is not function-aware. If it always happens at the end of the AI’s speech and never in the middle, it could be an AI-called function like “multimedia-ui-display” (perhaps tune-trained, the way python() is pretrained, rather than prompt-injected), after which your client receives the URL.

If you can make another plugin barf with your site, you’ve got your bug bounty. See whether you also crash Edge or plain Chrome on another system.

[Some background from an API tweaker:

  • if you invoke -0613 without a function call, you get a model that is not trained on functions; it will not act on the same simulated prompt injection;
  • if you invoke -0613 with a function call, the schema is validated, and you get a model that not only calls functions but also acts like “code interpreter”;
  • ChatGPT Plus likely uses a third model trained for plugins and URL previews (hence no code interpreter + plugins together).

(Also, API users get no facility like Bing’s GPT-4 with linked image previews without writing it themselves.)
]

Can confirm. Sending this chat with the Web Pilot plugin reliably and permanently crashes the entire UI :frowning:

2 Likes

Okay, thanks for the feedback. I haven’t been able to get link previews anywhere outside the GPT-4-with-plugins chats, so I wasn’t able to check the bug anywhere else.

I was, however, able to create a shared chat (with some blocking of the page’s network access), which allows non-Plus users to crash the page themselves :upside_down_face:
Just open the link and click “Continue conversation”: https://chat.openai.com/share/9199e856-e9fe-4dfe-b689-60c15788edf5

If you can make another plugin barf with your site, you’ve got your bug bounty. See whether you also crash Edge or plain Chrome on another system.

Yes, it crashes on all browsers I’ve tried, and I’ve amended my initial bug report with the findings posted in this thread.

3 Likes

Addendum: this is hard to debug, because the scripting that causes my first errors is the creepy sentry.io, which is basically a RAT that monitors keystrokes and mouse movements and lets them replay your whole session. It already dumps a bunch of exceptions when I put in breakpoints, because I block that snooping. And it is a wrapper around some of the React code, so getting rid of it isn’t just a matter of a few overrides.

Anyway, the hook code between offsets 379741 and 379838 (around the one-third-megabyte point in the file):

    at 851-fbf95cd779a9ed7c.js:1:379838
    at Array.map (<anonymous>)
    at 851-fbf95cd779a9ed7c.js:1:379741
    at Object.useMemo (framework-e23f030857e925d4.js:9:67227)

is:
("div",{className:(0,l.Z)("relative h-full w-full",a,h&&"mb-12 lg:mb-0"),children:[(0,s.jsx)(er,{ref:o,children:m.map(function(e,t){return(0,s.jsx)

This is deep in ChatGPT-specific minified code, so it’s of no use to anybody who can’t match the strings against the original source.

1 Like

useMemo is common for caching a calculation, and all messages use it.

I’m guessing they are using a reducer to sequentially type out the response. They store the messages in a single array for some reason, so this lines up with your error.

That leads me to believe that if the link preview text returns nothing/fails (undefined), it crashes.
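To illustrate that failure mode, here is a hypothetical Python analogue of the minified React code (the names are made up, not OpenAI’s actual logic): reducing over preview data that came back as undefined/None blows up, while a one-line guard would not:

```python
from functools import reduce

def render_previews(previews):
    # Mirrors the crash: reduce over a value that may be None,
    # just like calling .reduce on `undefined` in JavaScript.
    return reduce(lambda acc, p: acc + [p["title"]], previews, [])

def render_previews_safe(previews):
    # Fail-safe variant: treat missing preview data as an empty list.
    return reduce(lambda acc, p: acc + [p.get("title", "(untitled)")],
                  previews or [], [])
```

With `render_previews(None)` you get a TypeError, while the guarded version simply renders nothing, which is roughly the fail-safe behaviour being asked of the frontend here.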

1 Like

Thanks for also going through some troubleshooting steps to further narrow down the problem!

From my perspective, it looks like we need someone from the OpenAI dev team to take a look at this next. From your experience on the forum: is there any hope of getting an answer here, or should I just wait for a response from the bug bounty program?

Millions of Plus users could have their whole screen disappear via particular URLs being previewed, so I’ll bet that if it’s actually submitted instead of just discussed here, it will be looked at. It’s a bit of a DoS to share a chat URI with /continue appended and automatically bork someone.

(also kind of wicked to see the plugin scrape and return this whole topic, but at least it can’t parse my username)

Yes! I tried the link as well, and Discourse also fails to understand it. Which is odd, considering it’s there:

<meta property="og:description" content="Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at this https URL">
<meta property="og:title" content="[PDF] Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms | Semantic Scholar">
<meta property="og:image" content="https://www.semanticscholar.org/img/semantic_scholar_og.png">

Other papers on this site also fail.

Not a lot.

I think the answer is: there’s something wrong with this site, but OpenAI also needs to include some fail-safes like Discourse has.

For some reason I can see it, but these validators can’t.

Hmmm… I’m wondering if it’s because it’s ridiculously long?

Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at this htt

Like, my lord, WTF.

They missed this part of the og:description protocol:

  • og:description - A one to two sentence description of your object.
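For anyone who wants to sanity-check a page’s OpenGraph tags before a crawler (or ChatGPT) chokes on them, here is a stdlib-only sketch; the 300-character threshold is my own guess at a reasonable limit based on the spec’s “one to two sentence” wording, not a documented cutoff:

```python
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collect <meta property="og:..."> tags from an HTML document."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        prop = d.get("property", "") or ""
        if prop.startswith("og:"):
            self.og[prop] = d.get("content", "")

def check_og(html: str, max_desc: int = 300):
    """Return the og: tags plus a list of warnings for suspicious values."""
    parser = OGParser()
    parser.feed(html)
    warnings = []
    desc = parser.og.get("og:description", "")
    if len(desc) > max_desc:
        warnings.append(
            f"og:description is {len(desc)} chars; the spec suggests 1-2 sentences"
        )
    return parser.og, warnings
```

Running this on the Semantic Scholar paper page flags exactly the oversized og:description discussed above.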
1 Like

Those are some great observations that should really help narrow down the issues.

Even though the problem seems to be (quite definitely) in the OpenGraph tags on the Semantic Scholar site, this absolutely shouldn’t lead to a fatal crash like this on ChatGPT’s side.

All the information is available to OpenAI via the bug I filed and this thread, so I’m really curious what their response will be…

1 Like