Hi, I’ve just started switching over from Davinci to the newly released gpt-3.5-turbo model. I’m running into an issue where the API only accepts message prompts that don’t include line breaks. Perhaps it’s a formatting issue with my prompt, but I’m not sure. Any tips on how to fix this?
Here’s an example:
[
  {"role": "system", "content": "You are a helpful assistant that translates English to French."},
  {"role": "user", "content": 'Translate the following English email to French: "{
Hi John,
Want to grab lunch Tuesday?
Jeremy
}"'}
]
Thanks for the help. This works. Any idea why the Chat API doesn’t accept natural line breaks? For context, I’ve been calling the text-davinci-003 API with natural line breaks and having no issues.
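In case it helps others hitting the same thing: the API itself is fine with newlines, as long as they are properly escaped in the JSON request body. A minimal sketch (assuming Python and the standard `json` module) showing that literal line breaks in your strings serialize to `\n` escapes automatically:

```python
import json

# Email body with literal line breaks in the Python string
email = "Hi John,\nWant to grab lunch Tuesday?\nJeremy"

messages = [
    {"role": "system", "content": "You are a helpful assistant that translates English to French."},
    {"role": "user", "content": f'Translate the following English email to French: "{email}"'},
]

# json.dumps escapes the line breaks as \n, producing a valid request body
body = json.dumps({"model": "gpt-3.5-turbo", "messages": messages})
print(body)
```

If you build the JSON by hand (string concatenation) instead of serializing with a JSON library, raw line breaks inside string values make the payload invalid, which would explain the rejections.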
user_message = "To make multi-line input:\n1. Identify the positions where a newline should be emitted;\n2. Use the escape sequence '\n' to represent the end of a line."
user_message += "\n\nAnything else you want to know?"
Personally I think yes. Here is what I’ve seen so far:
All 3 of the fine-tuned models (0.0000/0.0000/~750 samples) that I trained to deal with raw text formatting started, on their own, using tabs (\t) to separate data between cells in a row and newlines (\n) between table rows, despite never being trained on table formatting and never seeing a table in the validation samples (the domain is law and legal documents). I just gave them tables as edge cases to see how they would behave.
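For anyone curious what that emergent layout looks like, here is a quick illustration (my own sketch with made-up rows, not output from the fine-tuned models): tabs between cells, newlines between rows.

```python
# Hypothetical legal-domain rows for illustration only
rows = [
    ["Case", "Court", "Year"],
    ["Smith v. Jones", "Appellate", "2019"],
]

# The models converged on this format unprompted:
# tab-separated cells, newline-separated rows
table_text = "\n".join("\t".join(cells) for cells in rows)
print(table_text)
```

It is essentially TSV, which is probably well represented in the pretraining data, so it is a plausible default for the models to fall back on.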