Playground vs API difference in completion

Hello everyone!

I am using the API (model gpt-3.5-turbo-0301) for a bot game that presents the user with a story and options to choose from. In the Playground everything works like a charm, but when I use the same setup through the API I keep getting responses like:

{
  "story": "You start running down the hallway as fast as you can. You hear the guard shouting behind you and the sound of his footsteps getting louder. Suddenly, you see a group of killer robots blocking your path. 
You realize you have nowhere to go.",
  "options": [
    "Fight the robots",
    "Try to find another way around"
  ]
}
After the user selects option 1, the AI responds:

I'm sorry, but fighting the robots is not a viable option. They are heavily armed and will easily overpower

While in the Playground the AI response is:

{
  "story": "You prepare to fight the robots. You take a deep breath and get ready to charge. Suddenly, you hear a loud explosion behind you and the robots are destroyed. You turn around and see a group of rebels who have come to rescue you. They tell you they need your help.",
  "options": [
    "Agree to help the rebels",
    "Refuse their offer and try to escape on your own"
  ]
}

The AI starts giving opinions on the choices instead of continuing the story, and it happens frequently. This problem does not occur in the Playground. What could cause this difference? I am using the same settings as in the Playground.
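For reference, a minimal sketch (not the original poster’s actual code) of how such a game turn might be sent with the pre-1.0 openai Python SDK. The system prompt, function name, and parameter values here are illustrative; the point is that the full conversation so far has to be re-sent on every call, which the Playground does for you implicitly.

import json
import openai

openai.api_key = "sk-..."  # placeholder

SYSTEM_PROMPT = (
    "You are the narrator of an interactive story game. "
    "Always reply with JSON of the form "
    '{"story": "...", "options": ["...", "..."]} and never refuse a chosen option.'
)

# Conversation history, rebuilt and re-sent on every turn.
messages = [{"role": "system", "content": SYSTEM_PROMPT}]

def play_turn(user_choice):
    messages.append({"role": "user", "content": user_choice})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",
        messages=messages,      # full history, not just the latest choice
        temperature=0.7,        # placeholder; match your Playground sliders
        max_tokens=256,
    )
    reply = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    return json.loads(reply)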


Welcome @ramizdayi

The Playground has certain hyperparameters preset which the API doesn’t. Otherwise there shouldn’t be any difference, assuming the messages you send from the Playground and through the API are identical.
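In practice that means copying every slider from the Playground into the request explicitly instead of relying on API defaults. A sketch with placeholder values (copy whatever your Playground session actually shows):

import openai  # pre-1.0 SDK, as in the earlier sketch

# `messages` is the conversation history built as in the earlier sketch.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0301",
    messages=messages,
    temperature=1.0,          # placeholder values: copy the actual slider
    top_p=1.0,                # settings from your Playground session
    max_tokens=256,
    frequency_penalty=0.0,
    presence_penalty=0.0,
)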

@sps Thanks for the answer. The only difference is that after certain user choices the API returns an answer that reads like a personal opinion, e.g. "Sorry, this was not a good choice" or something similar. I’m really not sure what could cause this.

Hi - I’m the lead developer of the Earthgrid search engine. We’re in the process of adding GPT summaries to our search results, and we can’t really take the feature out of BETA until we resolve this issue: sometimes the API ‘completion’ tries to finish the sentence rather than providing a summary.

I’m looking for some documentation about the syntax. We’re currently using the GPT-3 API and are producing some amazing test results, mostly when we give a lot more data to the API, i.e. we spend more tokens on the input than on the output.

We applied for the GPT-4 beta already.

Do you have a syntax, such as the one I read about somewhere in a blog,

[OUTPUT] : that will tell the API “hey, please don’t complete my sentence, instead give me what I asked for”?

I could not find any good documentation, as most of the internet is plagued with ‘prompts’ for ChatGPT instead of the JSON syntax we need to send to GPT to get consistent output.

At present we’re referring to this page, which has very little detail about the specifics of the syntax…

Do you have a more detailed explanation? Is the syntax of GPT-3 and GPT-4 similar?

Where’s the owner’s manual for those of us who are paying for API tokens and want to build applications?
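For what it’s worth, the API has no special [OUTPUT] keyword. On the completions endpoint, the usual way to keep the model from finishing your sentence is to phrase the prompt as an instruction with a labelled output cue and a stop sequence. A hedged sketch with the pre-1.0 openai Python SDK (the model choice and prompt wording are just examples, not an official recipe):

# Instruction-style completion prompt with an explicit output cue and stop sequence.
import openai

openai.api_key = "sk-..."  # placeholder

article = "..."  # the page text to summarize

prompt = (
    "Summarize the following text in three sentences.\n\n"
    f"Text:\n{article}\n\n"
    "Summary:"
)

response = openai.Completion.create(
    model="text-davinci-003",   # an instruction-following completion model
    prompt=prompt,
    max_tokens=150,
    temperature=0.3,
    stop=["\nText:"],           # keeps the model from starting another section
)
print(response["choices"][0]["text"].strip())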

You’re following a completion tutorial, but GPT-3/4 is a chat model; the chat guide is more what you’re looking for. What does your prompt look like? You should be able to just ask it “Summarize this text: ” or whatever your use case is. Just give it instructions like you would give an intern on their first day on the job.
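Concretely, on the chat endpoint that advice looks something like this (pre-1.0 openai Python SDK; the prompt wording is just an example):

# "Just ask it": the instruction goes straight into the chat messages.
import openai

openai.api_key = "sk-..."  # placeholder

article = "..."  # the page text to summarize

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You write concise, neutral summaries of web pages."},
        {"role": "user", "content": f"Summarize this text in three sentences:\n\n{article}"},
    ],
    temperature=0.3,
    max_tokens=150,
)
print(response["choices"][0]["message"]["content"])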

Thank you for your answer; however, I’m not interested in the chat aspect, and that’s not what we’re looking for. We’re looking for the syntax guide. If you talk to it like an intern, it will give you responses like an intern. We’ve already been able to improve it greatly, but the data we’re using to program the completion is NOT from GPT. It is from other users in the community.

I’m trying to find some documentation FROM OpenAI that my team can use, i.e. an owner’s manual of words it recognizes, so we can generate CONSISTENT output.

The completion script is what we’re looking for, not the chat.

Anyone else?

Hi - I found some additional documentation and have a quick question regarding that.

A lot of our challenges with GPT lie in it trying to finish the sentence (which we don’t want).

I found this syntax:

<|endoftext|>

Does this mean that if we write a prompt and finish it with <|endoftext|>, the ‘completion’ algorithm will begin to give us what we asked for?

Or would this syntax be better:

<|endoftext|>
[OUTPUT]:

I can’t seem to find any good examples of what we’re trying to do.
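For what it’s worth, <|endoftext|> is not an output directive: it is the special token that marks a document boundary in the vocabulary of the GPT base models, so a base completion model that sees it at the end of a prompt is as likely to start a fresh, unrelated document as to answer. A quick check with OpenAI’s tiktoken tokenizer library shows it is a single reserved token; to bound a completion, use the stop and max_tokens request parameters (as in the earlier sketch) instead of appending <|endoftext|> to the prompt.

# Sketch: inspect <|endoftext|> with tiktoken (pip install tiktoken).
# "gpt2" is the r50k-style vocabulary used by the GPT-3 base completion models.
import tiktoken

enc = tiktoken.get_encoding("gpt2")
print(enc.encode("<|endoftext|>", allowed_special={"<|endoftext|>"}))  # [50256]
print(enc.eot_token)                                                   # 50256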

Well, welcome to the community forums (there’s little official OpenAI presence here). You’ve pretty much found all of the documentation provided by OpenAI; there’s not much, but it’s all on platform.openai.com under Guides or API Reference. I’d recommend starting a new thread detailing what prompts/model you are using and what output you are looking for. I sent the chat model documentation because you mentioned using GPT-3, which is a chat model, not a completion model, and there are differences that many newcomers get stuck on; sorry for that assumption.