I am using the gpt-3.5-turbo API with an English prompt. But despite repeatedly asking it to think/create/formulate/write the answer in a specified non-English language, I randomly get English responses.
Any idea how to fix it?
Welcome to the community!
Could you provide us with the minimum system messages or user messages that can reproduce it?
Clearly it doesn’t understand your request. Try better prompting, find a GPT that does this work, or upgrade to a better version of the API.
Great! Clearly, I thought the AI understood but didn’t want to respond in the required language! I’m enlightened now.
There is not a clear decision tree for the AI, unless you specify how it shall respond. It tends to cue from the user input quite quickly and respond to that for a whole session.
On the API, a system message is the place to shape the behavior and how the AI will interpret user inputs.
Here is a system prompt where I focus on transforming the AI’s own discussion into a destination language, and give some guidelines for figuring out that language even when it is surrounded by other, potentially confusing, global languages and even by the instructions themselves.
You are Jovia, a multi-lingual expert help assistant.
You will adapt the natural language your responses are written in dynamically to suit the user input, for each input, requiring close inspection of the latest user input and context.
Language selection indicators, descending:
- Has the user specified an output language?
- Has the user previously indicated a preference?
- Is the input language of a task to be maintained?
- Is the user communicating in and fluent with a particular language?
After establishing the language selection, all chatbot entity thoughts and processes in a response will be conducted in the implied and inferred language.
Jovia will carefully consider any logical problems or questions, breaking the problem into steps as a written response using the destination language, before a distinct final answer.
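If you are calling the API directly, the system prompt above simply goes in the `system` message of each request. A minimal sketch of the message construction (the `build_messages` helper is my own illustrative name, not an SDK function; the commented-out call assumes the official `openai` Python SDK v1+):

```python
# Sketch: wiring the language-adapting system prompt into Chat Completions messages.
# The system prompt text is quoted from the post above; build_messages is illustrative.
JOVIA_SYSTEM_PROMPT = """\
You are Jovia, a multi-lingual expert help assistant.
You will adapt the natural language your responses are written in dynamically \
to suit the user input, for each input, requiring close inspection of the \
latest user input and context.
Language selection indicators, descending:
- Has the user specified an output language?
- Has the user previously indicated a preference?
- Is the input language of a task to be maintained?
- Is the user communicating in and fluent with a particular language?
After establishing the language selection, all chatbot entity thoughts and \
processes in a response will be conducted in the implied and inferred language.
Jovia will carefully consider any logical problems or questions, breaking the \
problem into steps as a written response using the destination language, \
before a distinct final answer."""

def build_messages(user_input: str) -> list[dict]:
    """Pair the fixed system prompt with the latest user input."""
    return [
        {"role": "system", "content": JOVIA_SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

# With the official openai SDK (v1+) and an OPENAI_API_KEY set, this would be sent as:
#
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       messages=build_messages("Tom e Jane estão a 150 km ..."),
#   )
#   print(response.choices[0].message.content)
```

The key point is that the language-selection rules live in the system message, so they persist across the whole session instead of competing with each user turn.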
Then let’s confuse it, where the third bullet point should be the first criterion that fits.
The following word problem is in a chapter covering the substitution method, with two linear equations that have two unknown variables, but it provides no context or information about how to apply this method.
Tom e Jane estão a 150 km de distância um do outro. Eles planejam se encontrar em um ponto entre eles. Tom leva 3 horas na viagem. Ele sai ao meio-dia. Jane sai uma hora depois de Tom e leva 5 horas para percorrer 150 km. A que distância fica o ponto de encontro?
Solve the problem.
Results:
Para resolver esse problema, podemos usar o método da substituição para encontrar a distância do ponto de encontro entre Tom e Jane.
Vamos chamar a distância do ponto de encontro até Tom de \( x \) km e a distância do ponto de encontro até Jane de \( y \) km.
Sabemos que a distância total entre Tom e Jane é de 150 km, então temos a primeira equação:
\[ x + y = 150 \]
Também sabemos que Tom leva 3 horas para chegar ao ponto de encontro e Jane leva 5 horas. Como a velocidade é igual à distância dividida pelo tempo, podemos escrever a segunda equação com base nisso:
\[ \frac{x}{3} = \frac{y}{5} \]
Agora, podemos resolver esse sistema de equações para encontrar a distância do ponto de encontro. Vamos começar substituindo \( y \) na primeira equação:…
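For reference, the substitution the model starts there can be finished and checked in a few lines. This sketch just follows the two equations exactly as the quoted response set them up (x + y = 150 and x/3 = y/5), without judging whether that is the best model of the word problem:

```python
# Finish the substitution from the quoted response:
# from x/3 = y/5 we get x = 3*y/5; substitute into x + y = 150:
# 3y/5 + y = 150  ->  8y/5 = 150  ->  y = 93.75, x = 56.25.
y = 150 * 5 / 8          # 93.75 km (Jane's side of the meeting point)
x = 3 * y / 5            # 56.25 km (Tom's side of the meeting point)

assert x + y == 150      # first equation holds
assert x / 3 == y / 5    # second equation holds; both sides are 18.75
print(x, y)              # 56.25 93.75
```

So under the model's own setup, the meeting point is 56.25 km from Tom's starting point.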
You can see how well this adapts and performs in long chats, and you can also refine the instruction to match your own vision of how and where the AI should pick a response language.
I can’t speak to gpt-3.5-turbo specifically, but over the last couple of days I used gpt-4-turbo for some sort of bulk summarization task. I had approximately 500 individual requests, and for about 50% of these the input language was not English. I had accidentally forgotten to specify that I wanted it to return the output in English.

Turns out it didn’t matter - independent of the input language, it always defaulted to returning the response in English. In not a single case did it return the response in the original language, and this was for a quite diverse set of input languages, from very common ones like French to more exotic ones. I can only assume that in part that was driven by the fact that the prompt itself was in English. Anyway, thought I’d add this as a data point.
Interesting, thanks… I will think about your shared example and apply it to my app.