For example:
- For French (fr) translation
JSON Request:
{
  "model": "text-davinci-003",
  "prompt": "Translate this into fr : Hi \n How are you? \n what are you doing? ",
  "temperature": 0.1,
  "max_tokens": 1000
}
Response:
{
  "id": "cmpl-6rNQiJNAl1U2CUfglfFo8bCt0EwZ2",
  "object": "text_completion",
  "created": 1678179468,
  "model": "text-davinci-003",
  "choices": [
    {
      "text": "\n\nSalut \nComment vas-tu ?\nQue fais-tu ?",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 21,
    "completion_tokens": 18,
    "total_tokens": 39
  }
}
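For context, this is roughly how I build the request payload before sending it to the completions endpoint (`build_translation_request` is my own hypothetical helper, not part of the API):

```python
import json

def build_translation_request(target_lang, lines):
    """Build the completions payload shown above for any target
    language code (fr, zh, ...), joining the input lines with \n."""
    prompt = "Translate this into " + target_lang + " : " + " \n ".join(lines)
    return {
        "model": "text-davinci-003",
        "prompt": prompt,
        "temperature": 0.1,
        "max_tokens": 1000,
    }

payload = build_translation_request("fr", ["Hi", "How are you?", "what are you doing?"])
print(json.dumps(payload, indent=2))
```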
- For Chinese (zh) translation
JSON Request:
{
  "model": "text-davinci-003",
  "prompt": "Translate this into zh : Hi \n How are you? \n what are you doing? ",
  "temperature": 0.1,
  "max_tokens": 1000
}
Response:
{
  "id": "cmpl-6rNST4EeqAddQncJFST8cGDEuEaZg",
  "object": "text_completion",
  "created": 1678179577,
  "model": "text-davinci-003",
  "choices": [
    {
      "text": "\n\n嗨,你好嗎?你在做什麼?",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 22,
    "completion_tokens": 33,
    "total_tokens": 55
  }
}
If you compare the format of the text field under choices, it differs between the two responses: the fr one uses \n characters to separate the lines, whereas the zh one does not, so it is difficult to parse the response back into individual lines.
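A minimal reproduction of what I mean, using the text values from the two responses above (`split_translation` is just my own parsing attempt):

```python
fr_text = "\n\nSalut \nComment vas-tu ?\nQue fais-tu ?"
zh_text = "\n\n嗨,你好嗎?你在做什麼?"

def split_translation(text):
    # Drop the leading blank lines the API prepends, then split
    # the remainder into one entry per translated line.
    return [line.strip() for line in text.split("\n") if line.strip()]

print(split_translation(fr_text))  # three entries, matching the source lines
print(split_translation(zh_text))  # collapses to a single entry
```

For fr the split recovers the three original lines, but for zh everything comes back as one string, so a line-by-line mapping to the source is lost.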
Could you please suggest a way to translate multiple lines and get a response with a consistent, line-by-line structure?