gpt-3.5-turbo not listening to my language requests

Hey all! I'm summarizing a phone call in JavaScript, and my prompt looks like this:

        let prompt = "Please summarize the following conversation. The summary and your response should be written in the language of the conversation. " + transcript; // `transcript` holds the phone-call text

ChatGPT keeps returning the summary in English, even though the whole transcription/phone call is in Dutch. These are my settings:

        model: "gpt-3.5-turbo",
        messages: [
            {
                role: "assistant",
                content: prompt,
            },
        ],
        temperature: 1.5,

Anybody got any tips? I tried messing with prompts, roles, and temperatures, but it's always the same.

When I ask it to tell me a joke in Dutch it works fine. Should I somehow find out the language of the call with Whisper and then include that in the prompt?

Use the system role for instructions and the user role for any prompts.

Don't request the assistant's response by sending it under the assistant role.

In system you can mention to always translate to X or something similar, and then put the text in user.

Use the assistant role only to store GPT answers in memory if you need that:
the assistant gives me a response, and I store it as assistant in the past context.
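Put together, the role layout described above might look something like this (a sketch; `buildMessages`, `transcript`, and `previousAnswer` are hypothetical names, not from the original posts):

```javascript
// Sketch of the role layout described above: system for standing
// instructions, user for the text to process, and assistant only for
// storing the model's earlier answers as context.
// `transcript` and `previousAnswer` are hypothetical placeholders.
function buildMessages(transcript, previousAnswer) {
    const messages = [
        {
            role: "system",
            content: "Always summarize the phone call in the language most spoken in it.",
        },
    ];
    if (previousAnswer) {
        // An earlier GPT answer kept in memory goes under the assistant role
        messages.push({ role: "assistant", content: previousAnswer });
    }
    // The text to process goes in a user message, never an assistant one
    messages.push({ role: "user", content: transcript });
    return messages;
}
```

The key point is that the assistant role only ever carries text the model itself produced earlier, never your instructions.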

Also, temperature 1.5 might cause some issues.

You probably don't want to go over 1.0.

Yeah, it was at 0.7 for a while; I was just testing out different things. I'll try your solution, thanks!

        let prompt_summary = "Please summarize the following phone call. " + transcript; // `transcript` holds the call text
        let prompt_translate = "Please translate/write the summary in the language that is most spoken in the phone call. ";

        const response = await fetch("https://api.openai.com/v1/chat/completions", {
            method: "POST",
            headers: {
                "Content-Type": "application/json",
                Authorization: `Bearer ${TOKEN}`,
            },
            body: JSON.stringify({
                model: "gpt-3.5-turbo",
                messages: [
                    { role: "system", content: prompt_summary },
                    { role: "user", content: prompt_translate },
                ],
                temperature: 1.0,
            }),
        });

Any tips? Now it just translates the whole phone call.

I would probably try:

system: Please always summarize and translate the following phone call. Write the summary in the language that is the most spoken in the phone call. Do not describe or explain.

user: Phone call: +
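As a request body, that suggested system/user split would look something like this (a sketch; `buildRequestBody` and `transcript` are hypothetical names for illustration):

```javascript
// Sketch of the suggested split: the standing instruction lives in the
// system message, the call text in the user message.
// `transcript` is a hypothetical placeholder for the phone-call text.
function buildRequestBody(transcript) {
    return {
        model: "gpt-3.5-turbo",
        messages: [
            {
                role: "system",
                content:
                    "Please always summarize and translate the following phone call. " +
                    "Write the summary in the language that is the most spoken in the phone call. " +
                    "Do not describe or explain.",
            },
            { role: "user", content: "Phone call: " + transcript },
        ],
        temperature: 1.0,
    };
}
```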

But maybe you want to consider first doing the summary and then translating it in another prompt.

Can't say I've done that before… summary + translate.
But that's how I would approach it.
If it has trouble doing both the summary and the translation, then I would split it into 2 calls.

3.5 isn't great at doing multiple steps in 1.

For example, instead of asking (just a time example):

Current time: 16:00
How long until 22:00 in the New York timezone?

what works better is:

Call 1:
Current time: 16:00
What time is it in New York?
GPT responds with the time in New York.

Call 2:
How long until 22:00 in New York?

(while keeping the context of call 1)

Because then it already has the time converted in the previous text and is less likely to get it wrong.

But I'm sure it should be able to summarize and translate in 1 call; you just have to play around with it.
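The two-call version for the summarize-then-translate case could be sketched like this (assuming a generic `chat()` helper, not from the original posts, that sends a messages array to the API and returns the reply text):

```javascript
// Two sequential calls, keeping call 1's answer in the context of call 2,
// as described above. `chat` is a hypothetical helper that posts the
// messages to the chat-completions endpoint and returns the reply text.
async function summarizeThenTranslate(transcript, chat) {
    const history = [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Summarize this phone call: " + transcript },
    ];
    // Call 1: produce the summary
    const summary = await chat(history);

    // Call 2: keep call 1's context so the model reuses its own summary
    history.push({ role: "assistant", content: summary });
    history.push({
        role: "user",
        content: "Now write that summary in the language most spoken in the phone call.",
    });
    return await chat(history);
}
```

Because the summary is already in the context, the second call only has to do one thing: translate.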

Personally, I'd try a function specifically for identifying the language of the phone call. Then you can explicitly say, "summarize the phone call and translate it to [language]". I feel like it would follow that instruction much better.
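That identification step could be one small extra call before the real one (a sketch; `chat` is a hypothetical helper that returns the model's reply text):

```javascript
// Sketch: one cheap call to identify the language, then an explicit
// instruction that names it, as suggested above.
// `chat` is a hypothetical API helper that returns the reply text.
async function summarizeInDetectedLanguage(transcript, chat) {
    // Call 1: identify the language
    const language = await chat([
        { role: "system", content: "Answer with the English name of the language only." },
        { role: "user", content: "What language is this text in?\n\n" + transcript },
    ]);

    // Call 2: make the instruction explicit using the detected language
    return await chat([
        {
            role: "user",
            content: `Summarize the phone call and translate it to ${language.trim()}:\n\n` + transcript,
        },
    ]);
}
```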

Here's an idea: try using this prompt instead,

Vat het volgende gesprek samen, (Dutch for "Summarize the following conversation,")

and see if it correctly summarizes the conversation.

If you have metadata on the language of the conversation, you could choose the language in which to make the request.

If not, you might first ask the model to identify the language, then summarize the conversation in the language identified.

My guess would be that since the initial request is in English, and English dominates the training data, the most probable initial tokens for the response will correspond to the English language, and once it starts down that path it continues.

So, asking in Dutch should make it simpler for the model to respond in Dutch. If you don't know the language beforehand, asking the model to identify the language with which it will be working before it starts the summarization should steer the model in the right direction.
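In a single call, that steering can be done by making the model name the language before it writes the summary (a hypothetical prompt sketch; `buildStepwisePrompt` and `transcript` are illustrative names):

```javascript
// Sketch: force the model to state the language first, so the summary
// tokens that follow are more likely to continue in that language
// instead of defaulting to English. `transcript` is a placeholder.
function buildStepwisePrompt(transcript) {
    return (
        "Step 1: State the language most spoken in the phone call below.\n" +
        "Step 2: Write a summary of the phone call in that language.\n\n" +
        "Phone call:\n" + transcript
    );
}
```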

I hope this helps.

First, based on your posts, I don't actually understand what you're trying to make the model do, so I don't know how to recommend a specific solution.

If you want it to dynamically pick a language to translate to, then you should be instructing the model to work in steps. Very explicitly prompt the model to analyze the conversation to determine what language is used most predominantly within the text. You might even prompt the model to store that value, or have the model say that value. Then based on that value, translate/summarize the conversation.

If you don't work in steps, the model probably sees the English prompt and starts generation based on that cue. Once it is in English, subsequent tokens are more likely English, and it becomes hard to switch tracks. By explicitly working in steps, you are much more likely to get this to work.