ChatGPT 4 Misspellings? Anyone else?

Hey guys. Since roughly last night I have noticed some strange misspelling issues with ChatGPT 4. It almost seems like:

A) The frequency_penalty has shot up
B) The temperature has shot up
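
To be clear, I mean the sampling parameters from the API side; the ChatGPT app doesn't let users set them, so if either one really drifted, it happened on OpenAI's end. Here's a rough sketch of where those knobs normally live when you call the Chat Completions endpoint yourself with the official Python SDK. The values are purely illustrative, not anything I know OpenAI actually changed:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# temperature controls how random token sampling is (0 = near-deterministic);
# frequency_penalty pushes the model away from tokens it has already emitted.
# If either were cranked up server-side, garbled tokens like "tojring"
# instead of "to_string" would be a plausible symptom.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Print 42 as a string in Rust."}],
    temperature=0.7,        # illustrative value only
    frequency_penalty=0.0,  # illustrative value only
)

print(response.choices[0].message.content)
```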

Here are some instances.


1. Typing out “tojring” instead of “to_string”

[screenshot]


2. Spelling “Gunicorn” as “Gauricron” and “Gundred”

[screenshot]


3. Failing to repeat something verbatim

Ignore the :cat template problem; that's a separate, unrelated issue. Here it says “you” (which isn't even grammatical) instead of “room”.
[screenshot]


All of these conversations have fewer than 5 message pairs. It’s almost like it’s having mini-strokes.

I was wondering if anyone else is experiencing the same.

Lastly. Dear OpenAI:

Please give us a BETA option instead of changing things on a whim.


In a conversation tonight about 1890s London Underground electric railways, ChatGPT 4 (not 4o) produced several typos.

One is a straight misspelling:

This innovation helped pave the wsay for the modern, extensive network of electric subways

In a later response, after having written four numbered points, it started its concluding paragraph with “5 his” instead of “This”:

5 his integrated approach where the location of

In the same response, it referred to a power plant’s location as “Stockhereford” (which doesn’t exist), even though we had already established it as being in Stockwell and had referred to Stockwell many times earlier in the conversation:

generated (likely at a central power station like you mentioned, such as the one at Stockhereford), transmitted

These aren’t as significant as the issues we saw a few months ago, but is anybody else seeing similar issues?

It happened again. Seriously, is nobody else experiencing this?

Specifically: it broke while trying to bold a word, and it even failed to write the correct word (BillPayment, not BillVendors).

English isn’t my strongest language, but the words after that are really hard to read.

Yeah, I’m seeing it as well; see below:

Pretty easy example; I’ve never seen it get easy stuff like this wrong. It’s clearly a typo, since it corrects itself straight away after I point it out.

chatgpt dot com/share/517842a4-981a-42aa-a258-b248d6d3297b

Yes, same issue: horrendous misspellings and hallucinations. I have been using ChatGPT since December 1, 2022, and I have NEVER seen issues this bad. It is becoming unusable, especially for coding work; it is to the point where it will be faster for me to write everything from scratch rather than babysit the output to this degree.

Something is seriously wrong and I am not sure what. I am using a Teams subscription so I can have higher limits, and I am questioning whether it is worth it with this level of careless output. The frustrating part is not necessarily the issues themselves; it is the lack of any widespread recognition of them, which leaves a helpless feeling about ever getting any of it resolved.

I began noticing this just yesterday with GPT-4.


"Serve: Transfer the stir-fry to a serving dish and garnish with green onions and sesame news if desired. Serve hot.

This recipe uses simple ingredients and combines them into a hearty meal that’s full of flavor and texture. Enjoy your cooking!"


And again today twice in the same chat:

Gpt: “Thank, I’m thrilled you found an outdoor pillow that caught your eye! …”

User: should it say "Thank, I’m thrilled… " ? or “Thank you, I’m thrilled…”

Gpt: You’re right; it should say “Thank you, I’m thrilled…” to start off the message. It’s more polite and complete. Here’s the corrected opening:


Thank you, I’m thrilled you found an outdoor more from…

User: can you check the corrected opening ?

Gpt: Certainly! Here’s the correct opening with a smoother transition:


Thank you, I’m thrilled you found an outdoor pillow that caught your eye!


This opening appreciates the customer’s choice while keeping the tone warm and welcoming.


The same issue here. Both GPT-4 and GPT-4o make typos or obvious mistakes very often.

In the following example, the model makes an obvious mistake, writing “status_code_local” instead of “status_code”. GPT-4 never made such stupid errors before.
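
In case the screenshot doesn't load for everyone, here is a hypothetical reconstruction of that kind of slip. The function, URL handling, and everything else is made up for illustration; only the status_code / status_code_local mix-up reflects what the model actually did:

```python
import requests

def fetch_report(url: str) -> dict:
    """Hypothetical helper illustrating the kind of slip described above."""
    response = requests.get(url, timeout=10)

    # What the model wrote: `status_code_local` was never defined anywhere,
    # so this line would raise a NameError at runtime.
    # if status_code_local != 200:

    # What it obviously meant:
    if response.status_code != 200:
        raise RuntimeError(f"Request failed with status {response.status_code}")

    return response.json()
```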

Thanks. It’s blowing my mind that this isn’t a more widespread issue. I’m guessing most people have moved to GPT-4o.

I unfortunately moved to it as well, considering that GPT-4 is still making constant spelling mistakes and, historically, OpenAI never publicly addresses issues like this.

I’m not just going to keep using it and hope that one day it magically works again.