Weird symbols at the beginning of the answer

Hey there,
Sometimes the model, when called via the API, returns strange characters at the beginning of the completion string.

For example, imagine the model should answer ‘apple’. I sometimes get strings in the format
\n.\n apple or > '+ apple'
I also notice that even in the Playground I sometimes get outputs with unnecessary newlines, so it looks like something on the model's side.
I understand that I could parse the output, but I don't want to hardcode regular expressions, in case they accidentally cut part of the answer.
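(If you do end up cleaning the output in code, one conservative option that avoids regular expressions is `str.lstrip` with an explicit character set — a sketch; the set of prefix characters below is an assumption based on the symbols reported in this thread, and it would also strip a legitimate answer that starts with one of them:)

```python
# Conservative cleanup without regular expressions: strip only
# leading whitespace and a fixed set of punctuation characters,
# so ordinary answer text is never cut.
PREFIX_CHARS = " \t\n.,!?:;>+'"  # assumed set of junk prefixes

def clean_completion(text: str) -> str:
    return text.lstrip(PREFIX_CHARS)
```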

Context: I am using text-davinci-003 with the openai Python library.

Has anyone experienced this and is there any workaround for this?

Thanks in advance!

Is all your input just “apple”?

I am asking to provide summary.

What settings are you using? Sounds like you might have frequency_penalty set a bit too high.

If you can share your prompt, that helps us figure it out.

Thanks!

Same question here; I get symbols like ",", "?", "!" at the beginning (with two newlines after them, before the actual answer).
If you ask ChatGPT itself, you'll get this answer:

The symbols like "!", "?" and "," that you are seeing at the beginning of some of the responses generated by the OpenAI API are called "response prefixes." These prefixes are added to the beginning of the response to indicate the type of response or to add emphasis to the text.

For example, a response with a "!" prefix may indicate excitement or emphasis, while a response with a "?" prefix may indicate a question. Similarly, a response with a "," prefix may indicate a continuation of a thought.

These prefixes are part of the natural language generation process and are used to make the responses generated by the OpenAI API more human-like and engaging.

So, I think those newlines are there because of that, to separate the response prefixes from the answer.
Unfortunately, I don't know how many response prefixes there are; if we knew all of them, we could replace them with emojis.

Hi @Chronos,

You cannot ask ChatGPT and expect to get a technically accurate reply on technical matters. ChatGPT is a text generation type of autocompletion engine and so it really has no idea about these things. Sorry to disappoint you. You might get “lucky” but it is simply generating text for the most part.

It's hard to know exactly unless you post the exact prompt you used (as text so we can test along with you, not as an image, hopefully) and the completion.

Everything else is simply guesswork.

Before asking ChatGPT, I read that on help.openai.com too, but I couldn't find the link to post it here.
The configuration I used is the same as the default chat config for Python in the Playground.

You need to post your prompt and your completion.

I did not request your code 🙂

That is what we need to test and help you.

You should also post your completion params.

Oh right, sorry, I'm a newbie here and to all of this 🙂
Should I post it in another thread?
My code:

import openai
# The aiohttp session lets the openai library make async requests
from aiohttp import ClientSession

openai.api_key = "####"

async def text_generator(prompt_str):

    # Prompt format: f"\n{HumanOrAI}: {TextBody}"
    # (hard-coded here for the example; normally the argument is used)
    prompt_str = "Human: Hi, my name is Chronos"
    # Result I get back:
    '''
    . 

    Nice to meet you, Chronos! What can I do for you?
    '''
    openai.aiosession.set(ClientSession())

    response = await openai.Completion.acreate(
        model="text-davinci-003",
        prompt=prompt_str,
        temperature=0.9,
        max_tokens=150,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0.6,
        stop=[" Human:", " AI:"]
    )

    # Close the shared aiohttp session when done
    await openai.aiosession.get().close()

    return response.choices

You can see an additional .\n\n before the answer.

Yes I got the same thing as you in my lab setup:

Let me test a bit more and I'll get back to you.

Thanks

🙂

I have resolved this issue for my use case: just add .\n\n to the end of your prompt.

It seems that when your prompt has no distinct ending, the model tries to "complete" it first and adds an ending itself (sometimes just a symbol like ":" or ",", sometimes even a whole sentence after a comma).
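(That workaround can be wrapped in a small helper — a sketch; the function name and the punctuation check are my own, not anything from the API:)

```python
def terminate_prompt(prompt: str) -> str:
    # If the prompt does not already end with sentence-final
    # punctuation, add a period so the model has nothing left
    # to "complete", then a blank line so the answer starts
    # in a new section.
    prompt = prompt.rstrip()
    if not prompt.endswith((".", "!", "?")):
        prompt += "."
    return prompt + "\n\n"
```

The returned string would then be passed as the prompt argument of the completion call.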

OK. I turned the temperature down to 0.1 and reset the presence penalty to 0, and it worked better:

In your example, it seems the model noticed that your message should end with "." and just added that dot at the beginning of its answer =)

Yes, that works as well. I just added a period at the end of the prompt and it worked fine (no newlines needed).

Example 1

Example 2

The problem is that the prompt is not clearly terminated. You need to mark the end of your prompt, either by typing something like "This is the end." or simply by using \n\n to indicate that the output should start a new section. Remember that the output is a continuation of the input, so if the input is incomplete, the output will first complete it (that's where the weird symbols or extra sentences come from) and then start a new section (\n\n).

Bad advice, four months too late.