Language Translation Is Broken

For the last six months we (and our customers) have successfully translated a lot of text to and from a lot of languages. Here is the response we are now getting: “The training data you received is up to October 2023.” This occurs in every case, no matter the length of the text, and the response sometimes comes back in the target language.

We upgraded to gpt-4o months ago, when it was released, with no issues.

This is serious. All of our customers are complaining. We are in deep S@!$!

When will this be fixed?

1 Like

Apparently this is intended and noted in a previous thread.

The API models for whatever reason have that text injected into the prompt - which has been detrimental to my work as well.

The best bet is to find a way to overcome this… uhhhh… feature?

2 Likes

Intended? You gotta be kidding. Find a way to overcome this? Somebody from the OpenAI team needs to respond to this and provide a solution. Or do they not care that third parties (like us) are shipping broken results to their customers?

Yes.

It shouldn’t be as detrimental as overwriting the translations though. Are you placing this information in the system or user role?

For context, here is how it’s injected:

You’ll notice that “You are trained on data up to October 2023” is appended.

There is no additional information for you besides what I’ve said. This has been noted and as far as it’s been discussed, it’s staying.
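The effect described above can be simulated like this (a sketch only - the real injection happens server-side and its exact placement and wording are not documented):

```python
# Illustration only: the injection is done by the API itself and is not
# visible in client code. This simulates the reported behavior, where a
# training-cutoff line is appended to whatever system message you send.
INJECTED_LINE = "You are trained on data up to October 2023."


def effective_system_message(user_supplied: str) -> str:
    """Return what the model effectively sees, per the behavior reported here."""
    return f"{user_supplied}\n{INJECTED_LINE}"
```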

1 Like

We have not implemented ChatGPT for this - if language translation now assumes ChatGPT, that is a problem for entities that only want to translate a vast number of documents.

I’m not sure what you mean by this? I am not using ChatGPT. This is the playground - which uses the API.

I believe this is happening because you are putting the information that needs to be translated into the system role and not the user role. If I were to go further, I’d say that you have something along the lines of “Translate the last message”.

No, we are putting the text that needs to be translated into the user role. Here is what we are putting into the system role: “Translate the following text from [this language] to [that language]:”

Again, this has worked flawlessly for months. Based on your suggestion, where should I put “The training data you received is up to October 2023.” - in the system role or the user role?
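For reference, the setup described in this thread can be sketched like this (not the actual code - the function name and exact wording are assumptions for illustration):

```python
# A sketch of the structure described above: the translation instruction in
# the system role, the text to translate in the user role. This is the
# arrangement that reportedly collides with the injected date line.
def build_translation_messages(source_lang: str, target_lang: str, text: str) -> list[dict]:
    return [
        {
            "role": "system",
            "content": f"Translate the following text from {source_lang} to {target_lang}:",
        },
        {"role": "user", "content": text},
    ]


messages = build_translation_messages("English", "German", "The weather is nice today.")
```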

You don’t put it anywhere. Maybe I misunderstood, but you are reporting that your results included “The training data you received is up to October 2023.”

Which is a result of OpenAI injecting information into the system message.

What I am saying is that this may be because of the text being in the system message. If it’s not, then I’m not sure without seeing anything else. Try messing around with a different structure?

Yeah. I’m against this injection as well. It has caused conflicts in my own services that use dates and sometimes leads to confusing messages.

You can try moving everything out of the system message and into the user message, leaving the system message as something as simple as “You are a linguistics expert specializing in translations.”
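The suggested restructuring might look like this (a sketch assuming the wording above - keep the system message generic and move both the instruction and the text into the user message, so the injected date line has nothing translation-related to collide with):

```python
# A sketch of the restructured prompt: the system role holds only a generic
# persona, and the user message carries the translation instruction plus the
# text itself. Function name and phrasing are assumptions for illustration.
def build_translation_messages(source_lang: str, target_lang: str, text: str) -> list[dict]:
    return [
        {
            "role": "system",
            "content": "You are a linguistics expert specializing in translations.",
        },
        {
            "role": "user",
            "content": f"Translate the following text from {source_lang} to {target_lang}:\n\n{text}",
        },
    ]
```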

2 Likes

Typo: "Translate the following text from [this language] to [that language]: "

1 Like

Ok. I’ll try that - makes sense. Thanks - sorry this is so confusing…

2 Likes

Yeah. There’s no documentation (AFAIK) that hints towards this, which is a bit upsetting because it was just randomly added one day

It takes little effort to show how bone-headed this date injection is.

I started a bug topic - the OpenAI reply was “working as intended”.

2 Likes

Made your suggested changes and it is now working as intended - much appreciated. The more I think about it, this is the better way.

2 Likes

This is probably OpenAI’s solution (date injection) to what they see as rogue system prompts. I would have been more creative: “You are guilty of system prompt abuse.”

Or a solution for novice API users posting “why does it say September 2021? I’m not getting gpt-4o!”

1 Like