When calling the gpt-3.5-turbo model via the API in a non-English language, I am unable to receive responses in that language

Why am I unable to receive responses in the corresponding language when I invoke the chat API with a non-English language? Do I need to configure anything?


I’m having the same problem: if I provide a non-English text and explicitly ask to keep the text’s language, I still get the answer written in English.

Hi and welcome to the Developer Forum!

Could you please provide a code snippet of your API call, along with an example request and response that does not meet your expectations?

const contextMessages = [
    `Your task is to:`,
    `Understand the language of the text.`,
    `Create a summary in the same language that acts as a trailer.`,
    `Follow these two rules:`,
    `The digest must be short, less than 200 words`,
    `The digest must be in the same language as the text`,
  ];

  // Total character length of all context messages
  const contextPromptsLength = contextMessages.reduce(
    (sum, message) => sum + message.length,
    0
  );

  const finalPrompt: ChatCompletionMessageParam[] = contextMessages.map(
    (prompt) => ({
      role: "system",
      content: prompt,
    })
  );

  // prompt is the text I need to summarize
  const promptMessage: ChatCompletionMessageParam = {
    role: "user",
    content: prompt,
  };

  const isLongPrompt = tokens + 423 + contextPromptsLength > 4019;

  finalPrompt.push(promptMessage);

  return await openai.chat.completions.create({
    model: `gpt-3.5-turbo${isLongPrompt ? "-16k" : ""}`,
    messages: finalPrompt,
    temperature: 0.3,
    max_tokens: isLongPrompt ? 450 : 300,
    top_p: 1,
    frequency_penalty: 1.8,
    presence_penalty: 0.4,
  });
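As an aside, `tokens` and `prompt` come from elsewhere in this snippet, and `contextPromptsLenght` counts characters rather than tokens. If you need a quick client-side estimate without a real tokenizer, a common rule of thumb (an assumption here, not something from the original post) is roughly four characters per token for English text; `estimateTokens` below is a hypothetical helper:

```typescript
// Hypothetical helper: rough token estimate at ~4 characters per token.
// This is only a heuristic; exact counts require a real tokenizer
// (e.g. the tiktoken library).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens("Testo di esempio da riassumere."));
```

Non-English text often tokenizes less efficiently, so for Italian input it is safer to treat this as a lower bound.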

Hello @Foxabilo! Thank you!
With this prompt and these settings I’m finally getting some results, but it looks like it’s super fragile. I’ve spent a whole day trying to get a short summary in the correct language; after all, I only need a summary in the same language as the text I provide.
I’ve noticed that with -16k it fails more often at writing in the correct language.
:information_source: I’m mainly trying with Italian texts

If you want the AI to respond in a particular language, it’s best to have the system prompt and all other prompts in that language. It can be difficult to force the model to reply in a different language without detailed instructions on exactly what its tasks and expected outputs are. So if you are working in Italian, put everything in Italian and you should not have any more issues.

I do not know about gpt-3.5-turbo-16k, but I have used the gpt-4 API with multiple languages, and I can anecdotally confirm that it will respond in the requested language even if the system and/or user prompt is not in that language. Simply adding the text “respond in <language>” does the trick. Again, this is in my cases, where I have had it respond in French, Spanish, Chinese and Korean. My verification of those responses using Google Translate also confirms that they are reasonably good responses.
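A minimal sketch of that trick; `withLanguage` is a hypothetical helper name and the message text is illustrative, not taken from the posts above:

```typescript
// Hypothetical helper: append an explicit language instruction to the
// user message, as described above.
function withLanguage(userText: string, language: string): string {
  return `${userText}\n\nRespond in ${language}.`;
}

console.log(withLanguage("Summarize this article: ...", "French"));
// The user message now ends with "Respond in French."
```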

Ciao @imesse, I’m italian too.

I also need to do some multi-language tasks, so my system prompt is always in English, while the user prompt can be English, Italian, or another language.
I tried modifying your system prompt slightly, with a couple of rules that I use in my prompts, and used it with Italian newspaper articles and Italian Wikipedia content, with good results.
I configured the same temperature, presence penalty and frequency penalty values used in your example, with gpt-3.5-turbo-16k as the model.

This is the modified system prompt:

Your task is to:
Create a summary in the same language that acts as a trailer.
The text can be in any language, so you need to be very helpful and answer in the same language as the text.
Follow these two rules:
The digest must be short, less than 200 words
Strictly answer in the same language as the text

If this system prompt is not effective for your case, try adding the following rule:

As a first step, print the language in which the text is written, then use that language for {YOUR_TASK_HERE}.

However I agree with @Foxalabs. If you don’t have multi-language needs, then it’s best to use all prompts in Italian.

Ciao

You can absolutely use simple English system prompts and then use Italian for your user prompts.

If you use Italian for both, there is less likelihood that it will revert to English, especially with complex system prompts. This is just anecdotal information from other users posting over the past several years.

Thank you all for responding to me! :heart:
I don’t know what language the texts will be in, and I need the model to adapt dynamically each time based on the text I provide.
For some reason I get better results, even with Italian texts, when the system instructions are in English. @gianluca.emaldi I will try your prompt and let you know the results!

:wave: everyone, @gianluca.emaldi I have some news: I tried your prompts, but they increased the errors in the responses. I haven’t figured out why, but I have a feeling the model gets confused as the prompt grows longer, even if only a little.
This is a problem because overall I’m quite satisfied with the results I’m getting, but there is still a 5% of cases where the model gets confused and either gets the language wrong, or writes a nice summary but then adds “translation:” or “summary:” and writes a second summary :person_facepalming:
Also, in the last few days I’ve started to notice typos :thinking: like double i’s at the end of words, or missing spaces in answers… Has this ever happened to you?

Try changing the model to gpt-3.5-turbo-0301. See if you come to the same conclusion as me, that they destroyed the newest AI for API system programming.

What do you mean to say? That they made things worse on purpose?

…that OpenAI could have

  • made it “more efficient” on purpose;
  • made it more trained on following ChatGPT’s one prompt;
  • made it more resistant to API system instructions by denying ‘custom instruction’ jailbreaks;
  • made it hate programmers;
  • made it so bad that it couldn’t even write titles any more for ChatGPT…

I just know that overnight tokens came faster and instructions started getting ignored.

Okay, I see what you mean, but it would seem like a bold move at a time when more and more competitors are coming out every day, right?
In any case, I agree that a suspected general “deterioration” is undeniable.
It also seems to me that, for the same model, the API responds much worse than the classic ChatGPT.

Hello everyone,
@imesse I apologize: in my earlier answer I took it for granted that the problem was in the prompt, so I made some changes based on my experience. This time I only took the problem itself for granted, not the assumption that the problem is your prompt.
I did a series of tests using your prompt as the system prompt and your settings as-is (temperature 0.3, frequency penalty 1.8, and so on). With ChatGPT I generated 20 different prompts of around 700–1000 tokens each, in 5 different European languages, including Italian, so 4 different prompts per language.
Believe it or not, everything worked perfectly. I’m not talking about the quality of the summaries, but each summary matched the language of the text to be summarized.
If you’re interested, I created a spreadsheet with the results: OpenAI API languages tests - Google Sheets

The problem must lie elsewhere. Between my tests and your code I see only one difference, but it could be important.

Let me start by saying that I don’t know JavaScript, but I notice that the constant ‘contextMessages’ is an array of strings. I can’t interpret the rest of the code, but if my guess is correct, the AI receives a system message broken into many strings. In my opinion the JSON payload that reaches the AI looks like the following example:

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      "content": "Your task is to:"
    },
    {
      "role": "system",
      "content": "Understand the language of the text."
    },
    {
      "role": "system",
      "content": "Create a summary in the same language that acts as a trailer."
    },
    .
    .
    {
      "role": "user",
      "content": prompt
    }
  ],
  "temperature": 0.3,
  .
  .
}

As you can see, it receives several system messages at the same time. Even though the API allows this, I suspect it may confuse the AI model.
I did my tests with the system message in a single string:

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      "content": "Your task is to:\nUnderstand the language of the text.\nCreate a summary in the same language that acts as a trailer.\nFollow these two rules:\nThe digest must be short, less than 200 words\nThe digest must be in the same language as the text"
    },
    {
      "role": "user",
      "content": prompt
    }
  ],
  "temperature": 0.3,
  .
  .
}

You could do a console.log() of your JSON payload to check whether your system message is actually being broken into many different strings. Or, more simply, do some testing by passing the modified system message directly and see what happens. Basically, try declaring your ‘contextMessages’ constant as an array containing a single string:

const contextMessages = [
    `Your task is to:
    Understand the language of the text.
    Create a summary in the same language that acts as a trailer.
    Follow these two rules:
    The digest must be short, less than 200 words
    The digest must be in the same language as the text`,
  ];

Again, with a console.log() try to check if the payload is what you expect.
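For example, a minimal sketch of that check, assuming `contextMessages` and `prompt` as in the thread (the sample text here is illustrative):

```typescript
// Collapse the context into a single system message and log the payload
// before sending it, to confirm it is not split into many strings.
const contextMessages = [
  "Your task is to:",
  "Understand the language of the text.",
  "Create a summary in the same language that acts as a trailer.",
];
const prompt = "Testo italiano da riassumere...";

const payload = {
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: contextMessages.join("\n") },
    { role: "user", content: prompt },
  ],
  temperature: 0.3,
};

console.log(JSON.stringify(payload, null, 2));
```

If the logged payload shows exactly one system message and one user message, the request matches the single-string structure described above.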

Ciao


Hi @gianluca.emaldi , thanks for the analysis!
This advice was so helpful! It drastically reduced the errors! :fire:
I had done several trials at first and switched to more system messages because I seemed to notice an improvement, but with the current prompt and settings this change made all the difference!
Thank you very much! :man_bowing:


Hi everyone, hi @imesse,
I’m glad to hear that my advice was helpful.
However, I have to thank you too, because you gave me the opportunity to learn something new.

Another tip would be to enclose the user prompt between a pair of delimiters. To the AI model, the word ‘text’ can be ambiguous: which text are you referring to, the system message or the user prompt?

I always delimit a user prompt with three hashes, such as:

###USER_PROMPT###

So, you can expand your system prompt with something like this:

const contextMessages = [
    `Your task is to:
    Understand the language of the text delimited by three hashes.
    Create a summary in the same language that acts as a trailer.
    Follow these two rules:
    The digest must be short, less than 200 words
    The digest must be in the same language as the text delimited by three hashes`,
  ];

The resulting JSON payload is:

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      "content": "YOUR_SYSTEM_PROMPT_HERE"
    },
    {
      "role": "user",
      "content": "###USER_PROMPT_HERE###"
    }
  ],
  "temperature": 0.3,
  .
  .
}
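The delimiter technique can be sketched like this (the `wrapUserPrompt` helper name is my own, not from the posts above):

```typescript
// Wrap the user text in three hashes so the system prompt can refer
// unambiguously to "the text delimited by three hashes".
function wrapUserPrompt(text: string): string {
  return `###${text}###`;
}

console.log(wrapUserPrompt("Articolo da riassumere"));
// → ###Articolo da riassumere###
```

One caveat worth checking in testing: if the user text itself could contain `###`, pick a delimiter unlikely to appear in your inputs.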

You can also try lowering the temperature to values such as 0.20 or 0.15, for more deterministic results.

@anforious, I see your post is a bit old, so maybe you have already solved the problem. If you are still experiencing it, try reading through this thread for advice that may be useful to you. There are a couple of examples of good system prompts to start with. Always check that the JSON payload is correct.

Ciao

Same experience: when you increase the length of the user message with gpt-3.5-turbo, it completely loses the system prompt context no matter what you try to improve in your prompt, while gpt-3.5-turbo-0301 keeps it in context.