For the last six months we (and our customers) have successfully translated large volumes of text to and from many languages. Here is the response we are now getting: “The training data you received is up to October 2023.” This occurs in every case, no matter the length of the text. The response can also come back in the target language.
We upgraded to gpt-4o months ago, when it was released - with no issues.
This is serious. All of our customers are complaining. We are in deep S@!$!
Intended? You gotta be kidding. Find a way to overcome this? Somebody from the OpenAI team needs to respond to this and provide a solution. Or don’t they care that third parties (like us) are now shipping broken code to their customers?
We have not implemented ChatGPT for this - if language translation now assumes ChatGPT, that is a problem for entities that only want to translate vast amounts of documents.
I’m not sure what you mean by this. I am not using ChatGPT. This is the Playground - which uses the API.
I believe this is happening because you are putting the text that needs to be translated into the system role and not the user role. If I were to go further, I’d say you have something along the lines of “Translate the last message.”
No, we are putting the text that needs to be translated into the user role. Here is what we are putting into the system role: “Translate the following text from <this language> to <that language>:”
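For reference, the structure being described would look roughly like this (a sketch only - the helper name is illustrative, not the poster’s actual code):

```python
# Rough sketch of the setup described above: the translation instruction
# lives in the system role, and only the raw text goes in the user role.
def messages_as_described(text: str, source_language: str, target_language: str) -> list[dict]:
    return [
        {"role": "system",
         "content": f"Translate the following text from {source_language} to {target_language}:"},
        {"role": "user", "content": text},
    ]
```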
Again, this has worked flawlessly for months. Based on your suggestion, where should I put “The training data you received is up to October 2023.” - in the system role or the user role?
You don’t put it anywhere. Maybe I misunderstood, but you are reporting that your results included that “training data” sentence.
Which is a result of OpenAI injecting information into the system message.
What I am saying is that this may be because the text is in the system message. If it’s not, then I’m not sure without seeing anything. Try messing around with a different structure?
Yeah. I’m against this injection as well. It has caused conflicts in my own services that use dates and sometimes leads to confusing messages.
You can try moving everything out of the system message and into the user message, leaving the system message as something as simple as “You are a linguistics expert specializing in translations.”
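Concretely, that restructuring might look like this (a minimal sketch - the helper name and prompt wording are illustrative, and the commented-out call assumes the `openai` Python SDK with `gpt-4o`):

```python
# Sketch of the suggested restructuring: keep the system message minimal,
# and put both the instruction and the source text in the user role.
def build_messages(text: str, target_language: str) -> list[dict]:
    return [
        {"role": "system",
         "content": "You are a linguistics expert specializing in translations."},
        {"role": "user",
         "content": f"Translate the following text into {target_language}:\n\n{text}"},
    ]

# With the openai Python SDK this would be sent as, e.g.:
# client.chat.completions.create(model="gpt-4o", messages=build_messages(doc, "French"))
```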
This is probably OpenAI’s solution (date injection) to what they see as rogue system prompts. I would have been more creative: “You are guilty of system prompt abuse.”