I am using the API (model gpt-3.5-turbo-0301) for a bot game that presents the user a story with options. In the playground everything works like a charm, but when I use the same code with the API I keep getting responses like:
{
"story": "You start running down the hallway as fast as you can. You hear the guard shouting behind you and the sound of his footsteps getting louder. Suddenly, you see a group of killer robots blocking your path.
You realize you have nowhere to go.",
"options": [
"Fight the robots",
"Try to find another way around"
]
}
After the user selects option 1, the AI responds:
I'm sorry, but fighting the robots is not a viable option. They are heavily armed and will easily overpower
While in the playground the AI's response is:
{
"story": "You prepare to fight the robots. You take a deep breath and get ready to charge. Suddenly, you hear a loud explosion behind you and the robots are destroyed. You turn around and see a group of rebels who have come to rescue you. They tell you they need your help.",
"options": [
"Agree to help the rebels",
"Refuse their offer and try to escape on your own"
]
}
The AI starts to give opinions on the choices instead of continuing the story. This happens frequently, and never in the playground. What could cause this difference? I am using the same settings as in the playground.
The playground has certain hyperparameters preset, which the API hasn't. Otherwise there shouldn't be any difference, assuming that your messages for both the playground and the API are identical.
@sps Thanks for the answer. The only difference is that after a certain user choice the API returns an answer that reads like a personal opinion, e.g. "Sorry, this was not a good choice" or something similar. I'm really not sure what can cause this.
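One difference worth checking: the playground resends the entire conversation (system message included) on every turn, while custom code sometimes sends only the latest user message, which makes the model lose the game framing and start editorializing. A minimal sketch of keeping the full history per request; the system prompt text and function names here are illustrative assumptions, not the poster's actual code:

```python
# Sketch: always send the system prompt plus the whole conversation so far.
# The system prompt below is an assumed example for a story-with-options game.

SYSTEM_PROMPT = (
    "You are a game narrator. Always reply with JSON of the form "
    '{"story": "...", "options": ["...", "..."]} and never comment on '
    "whether the player's choice was good or bad."
)

def build_messages(history, user_choice):
    """Assemble the message list for the next chat-completions call.

    history: list of prior (role, content) tuples, oldest first.
    user_choice: the option the player just picked.
    """
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for role, content in history:
        messages.append({"role": role, "content": content})
    messages.append({"role": "user", "content": user_choice})
    return messages

# The actual request (legacy openai v0.x style, matching the 0301 model era)
# would then be something like:
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo-0301",
#     messages=build_messages(history, choice),
# )
```

After each reply, append the assistant's JSON and the user's next choice to `history`, so the model always sees the instruction to continue the story rather than judge the choice.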
Hi - I'm the lead developer of the Earthgrid search engine. We're in the process of adding GPT summaries to our search results, and we can't really take it out of BETA until we resolve this issue: sometimes the API "completion" tries to finish the sentence rather than providing a summary.
I'm looking for some documentation about the syntax. We're currently using the GPT-3 API and are producing some amazing test results, mostly when we give a lot more data to the API, i.e. we spend more tokens on the input than on the output.
We applied for the GPT-4 beta already.
Do you have a syntax such as one I read about somewhere in a blog, e.g.
[OUTPUT]:
which will tell the API "hey, please don't complete my sentence; instead give me what I asked for"?
I could not find any good documentation, as most of the internet is plagued with "prompts" for ChatGPT, instead of the syntax we need to give GPT to get consistent JSON output.
At present, we're referring to this page, which has very little data about the specifics of the syntax…
Do you have a more detailed explanation? Is the syntax of GPT-3 and GPT-4 similar?
Where's the owner's manual for those of us who are paying for the API tokens and want to build applications?
You're following a completion tutorial, but GPT-3/4 is a chat model; the chat guide is more what you're looking for. What does your prompt look like? You should be able to just ask it "Summarize this text: …" or whatever your use case is. Just give it instructions like you would an intern on their first day of the job.
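For the summarization use case, a chat-endpoint request along these lines avoids the "finish the sentence" behavior, because the input arrives as an instruction rather than text to be continued. This is a sketch; the model name, instruction wording, and temperature are assumptions, not anything the thread specifies:

```python
def build_summary_request(text, max_sentences=3):
    """Assemble a chat-completions payload that asks for a summary of the
    input rather than a continuation of it."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "system",
                "content": "You summarize documents for search results.",
            },
            {
                "role": "user",
                "content": (
                    f"Summarize the following text in at most "
                    f"{max_sentences} sentences:\n\n{text}"
                ),
            },
        ],
        # A low temperature is assumed here for more consistent output.
        "temperature": 0.2,
    }
```

The payload would then be passed to the chat completions endpoint; because the text sits inside a user message under an explicit instruction, the model treats it as material to summarize, not a sentence to complete.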
Thank you for your answer; however, I'm not interested in the chat aspect, and that's not what we're looking for. We're looking for the syntax guide. If you talk to it like an intern, it will give you responses like an intern. We've already been able to improve it greatly, but the data we're using to program the completion is NOT from GPT; it is from other users in the community.
I'm trying to find some documentation FROM OpenAI that my team can use, i.e. an owner's manual of words it recognizes so we can generate CONSISTENT output.
The completion script is what we're looking for, not the chat.
Hi - I found some additional documentation and have a quick question regarding it.
A lot of our challenges with GPT lie with it trying to finish the sentence (which we don't want).
I found this syntax:
<|endoftext|>
Does this mean that if we write a prompt and finish the prompt with <|endoftext|>,
the "completion" algorithm will begin to give us what we asked for?
Or would this syntax be better:
<|endoftext|>
[OUTPUT]:
I can't seem to find any good examples of what we're trying to do.
Well, welcome to the community forums (there's little official OpenAI presence here). You've pretty much found all of the documentation provided by OpenAI; there's not much, but it's all on platform.openai.com under Guides or API Reference. I'd recommend starting a new thread detailing what prompts/model you are using and what output you are looking for. I sent the chat model documentation because you mentioned using GPT-3, which is a chat model, not a completion model, and there are differences that many newcomers get stuck on; sorry for that assumption.
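On the `<|endoftext|>` question upthread: that string is the tokenizer's end-of-text boundary token, not an instruction the model obeys, so appending it to a prompt will not switch the model from "continue the sentence" to "answer the request". With a completions-style endpoint, the usual approach is to frame the prompt as an explicit instruction with a delimiter and an answer slot. A sketch under those assumptions (the wording and delimiter choice are illustrative):

```python
def build_completion_prompt(text):
    """Frame a completions-style prompt as instruction + delimited input +
    answer slot, so the continuation lands in the summary rather than
    finishing the input text. <|endoftext|> is deliberately not used:
    it is a tokenizer boundary token, not a directive.
    """
    return (
        "Summarize the text between the triple quotes.\n\n"
        f'"""\n{text}\n"""\n\n'
        "Summary:"
    )
```

Ending the prompt at `Summary:` means the only natural continuation is the summary itself, which is the behavior the completion endpoint rewards.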