Hey there,
Sometimes the model returns strange characters at the beginning of the string when called via the API.
For example, imagine that the model has to answer ‘apple’. I sometimes get strings in the format \n.\n apple or > '+ apple'.
I also notice that even via the Playground I sometimes get outputs with unnecessary newlines, so it looks like something on the model’s side.
I understand that I can parse the output, but I don’t want to hardcode regular expressions and risk accidentally cutting off part of the answer.
Context: I am using davinci-003 with the openai Python library.
Has anyone experienced this, and is there any workaround?
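For reference, here is roughly how I’m calling it. This is a minimal sketch assuming the legacy (pre-1.0) openai Python library and text-davinci-003; the prompt and parameters are placeholders, not my real ones:

```python
import openai

openai.api_key = "sk-..."  # placeholder key

# Minimal completion call (legacy, pre-1.0 openai library).
# The prompt below is a placeholder; with my real prompts the returned
# text sometimes starts with stray newlines or punctuation.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="What fruit is red and crunchy?",
    max_tokens=20,
    temperature=0.7,
)

# repr() makes the leading characters visible,
# e.g. '\n\napple' instead of just 'apple'.
print(repr(response["choices"][0]["text"]))
```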
Same question here: I get symbols like ",", "?", and "!" at the beginning, with two newlines after them, before the actual answer starts.
Asking ChatGPT itself, you’ll get this answer:
The symbols like "!", "?" and "," that you are seeing at the beginning of some of the responses generated by the OpenAI API are called "response prefixes." These prefixes are added to the beginning of the response to indicate the type of response or to add emphasis to the text.
For example, a response with a "!" prefix may indicate excitement or emphasis, while a response with a "?" prefix may indicate a question. Similarly, a response with a "," prefix may indicate a continuation of a thought.
These prefixes are part of the natural language generation process and are used to make the responses generated by the OpenAI API more human-like and engaging.
So I think those newlines are there because of that, to separate the response prefixes from the answer.
Unfortunately, I don’t know how many response prefixes there are; if we knew all of them, we could replace them with emojis.
You cannot ask ChatGPT and expect a technically accurate reply on technical matters. ChatGPT is a text-generation, autocompletion-style engine, so it really has no idea about these things. Sorry to disappoint you. You might get “lucky”, but for the most part it is simply generating text.
It’s hard to know exactly unless you post the exact prompt you used (as text so we can test along with you, not an image, hopefully) and the completion.
Before asking ChatGPT, I had read that on help.openai.com too, but I didn’t find the link to post it here.
The configuration I used is the same as the default chat config for Python in the Playground.
I have resolved this issue for my use case: just add .\n\n to the end of your prompt.
It seems like when your prompt has no distinct ending, the model tries to “complete” it and adds an ending of its own (sometimes it is just a symbol like “:” or “,”, and sometimes it even adds a whole sentence after a comma).
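For example, something like this worked for me. It is a rough sketch using the pre-1.0 openai Python library with text-davinci-003 and a made-up prompt, not my actual code:

```python
import openai

openai.api_key = "sk-..."  # placeholder key

# Ending the prompt with ".\n\n" gives the model a clear boundary,
# so it is less likely to "finish" the prompt with stray punctuation
# or newlines before the actual answer.
prompt = "Name one fruit that is typically red.\n\n"

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=20,
    temperature=0.7,
)

print(repr(response["choices"][0]["text"]))
```

Printing with repr() makes it easy to check whether the completion still starts with \n or stray punctuation.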
The problem is that the prompt is not clear. You need to mark the end of your prompt: either type something like ".This is the end." or simply use \n\n to indicate that the output should start a new section. Remember that the output is an extension of the input, so if the input is not complete, the output will first complete it (as you mentioned, with some weird symbols or sentences) and then start a new section (\n\n).