Hi, there is currently an issue where, if you copy a GPT chat link via the sharing tool, open the link, and then try to use the shared chat with o1 or o1-mini, you automatically get the error “error in message stream”. The error occurs regardless of chat size. Is anyone else currently experiencing this?
To recreate this issue:
1. Write a prompt with any GPT model.
2. Save the chat link.
3. Open the chat link on the same account or a different one (it doesn’t matter).
4. Switch the shared chat to o1 or o1-mini and send a message — the “error in message stream” error appears.
The blocked, send, and receive times are minimal, so the delay isn’t from network issues but rather from server-side processing IMO.
The postData payload includes metadata with an error field: "error": "{"error":{"reason":"request_failed","message":"Error in message stream","status_code":500}}".
To me, this shows the error was already logged as part of the telemetry data sent to the server, even though the registration request itself didn’t fail.
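For anyone digging into their own devtools capture: the "error" field above is itself a JSON-encoded string inside the outer JSON, so it has to be decoded twice. A minimal sketch in Python (the post_data dict below is a hypothetical stand-in for the captured payload, not an actual API response):

```python
import json

# Hypothetical payload shaped like the telemetry metadata shown above.
# Note the value of "error" is a stringified JSON object, not a nested dict.
post_data = {
    "error": '{"error":{"reason":"request_failed","message":"Error in message stream","status_code":500}}'
}

# Second decode: unwrap the stringified JSON to reach the real error object.
inner = json.loads(post_data["error"])["error"]

print(inner["status_code"])  # 500
print(inner["reason"])       # request_failed
print(inner["message"])      # Error in message stream
```

The double encoding is why the error string looks escaped in the network tab; the inner object is where the actual status_code and reason live.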
Yep, I have the same issue. It’s annoying because I have some quite well-trained chats, e.g. for SQL queries, that I wanted to share with colleagues so they can continue where I stopped; however, they can only use the 4o model. As soon as they switch to o1-preview, it throws this error message. The problem is that o1-preview is MUCH better at writing longer queries, and starting with 4o seems almost useless. I’d rather start a new chat and teach it from scratch.