Personally, I’d try a function specifically for identifying the language in the phone call. Then you can explicitly say, “summarize the phone call and translate it to [language],” and see if it correctly summarizes the conversation. I feel like it would follow that instruction much better.
If you have metadata on the language of the conversation, you could make the request in that language.
If not, you might first ask the model to identify the language, then summarize the conversation in the language it identified.
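To make that branching concrete, here's a rough sketch of how I'd build the prompt. The wording and the function name are just placeholders for whatever your pipeline actually uses, not any particular API:

```python
def build_summary_prompt(transcript, language=None):
    """Build the request text sent to the model.

    If metadata already tells us the conversation's language, ask for
    the summary explicitly in that language; otherwise fall back to a
    two-step wording (identify the language first, then summarize).
    """
    if language is not None:
        return (
            f"Summarize the following phone call in {language}. "
            f"Respond only in {language}.\n\n{transcript}"
        )
    # No metadata: tell the model to identify the language first,
    # then summarize in that same language.
    return (
        "First, state the language this phone call is conducted in. "
        "Then summarize the call in that language.\n\n" + transcript
    )
```

The point is that the language ends up stated explicitly in the request either way, instead of being left for the model to infer from an English instruction.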
My guess would be that since the initial request is in English, and English dominates the training data, the most probable initial tokens for the response will correspond to English, and once it starts down that path it continues.
So, by asking in Dutch you should make it simpler for the model to respond in Dutch. If you don’t know the language beforehand, asking the model to identify the language it will be working in before it starts the summarization should steer the model in the right direction.
First, based on your posts, I don’t actually understand what you’re trying to make the model do. So, I don’t know how to recommend a specific solution.
If you want it to dynamically pick a language to translate to, then you should be instructing the model to work in steps. Very explicitly prompt the model to analyze the conversation to determine what language is used most predominantly within the text. You might even prompt the model to store that value, or have the model say that value. Then based on that value, translate/summarize the conversation.
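One way to sketch that step-wise flow, with the model call stubbed out as a callable you'd replace with your actual API client (`ask_model` here is just a placeholder name):

```python
def summarize_stepwise(transcript, ask_model):
    """Two explicit model calls: detect the language, then summarize.

    `ask_model` is whatever function sends a prompt to your model and
    returns its text response (e.g. a thin wrapper around your client).
    """
    # Step 1: have the model say the language out loud, so the value
    # is pinned down before any summarization tokens are generated.
    language = ask_model(
        "What language is the following conversation predominantly in? "
        "Answer with the language name only.\n\n" + transcript
    ).strip()

    # Step 2: feed that value back in and request the summary in it.
    summary = ask_model(
        f"Summarize the following conversation in {language}. "
        f"Respond only in {language}.\n\n" + transcript
    )
    return language, summary
```

Splitting it into two calls like this is what forces the model to commit to the language value before any summary tokens exist, rather than hoping it switches tracks mid-generation.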
If you don’t work in steps, the model probably sees the English prompt and starts generation based on that cue. Once it is in English, subsequent tokens are more likely English, and it becomes hard to switch tracks. By explicitly working in steps, you are much more likely to get this to work.