Hello to all. Firstly, I have to say that I am extremely new to this; I don't even know the terminology yet. But I'll try my best to explain.
I am going to conduct a research study, and I have to develop a very basic web app using Python and Flask. At some point I am going to need the help of the AI, so I want to integrate GPT-3.5 Turbo into the app. I have seen some tutorials on how to do it, but the main question is that I'm not sure if I can get what I want.
I want the API to answer my question. In a single request, for example, I want to have 10 different answers to that one particular question. On the backend I want to separate all those 10 answers and show (is this the correct terminology?) them one by one at random times as a list in the user interface. I asked GPT for example code, and this is the response I got:
import openai

openai.api_key = 'YOUR_API_KEY'

response = openai.Completion.create(
    engine="text-davinci-003",
    prompt="my question",
    max_tokens=50,
    n=5
)

for choice in response['choices']:
    print(choice['text'])
I understand "n" is the number of responses I want in a single request. The problem is, I'm not sure whether GPT is aware or unaware of each response it gives in a single request. Are the answers generated as "5 different answers/aspects for 1 question" in one request, or does GPT generate every answer as if each were a new request? Like, as if you had requested 1 answer 5 separate times for that particular question. In that case the answers might repeat, since GPT won't recall whether it already gave that same answer.
And lastly, if it can generate 5 different answers for 1 request, would it be possible for me to divide them and show them as a list to the user?
I hope I could explain myself. Thank you for your kind answers in advance.
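To make the question concrete, here is a minimal sketch of the same idea with the chat-completions endpoint (the model name, `n`, and `max_tokens` values are placeholders, and the call itself is not exercised here). The key point: the `n` completions in one request are sampled independently, so the model is not "aware" of the other answers and near-duplicates are possible — it is not a coordinated list of distinct aspects.

```python
def ask(question, n=10, max_tokens=50):
    """One request that returns `n` answers to the same question.

    The `n` completions are sampled independently -- they do not "see"
    each other -- so near-duplicate answers can occur. They are NOT a
    coordinated list of `n` distinct aspects of the question.
    """
    import openai  # deferred import so the helper below works without the SDK
    openai.api_key = "YOUR_API_KEY"  # placeholder

    return openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        max_tokens=max_tokens,  # caps the length of EACH answer, in tokens
        n=n,                    # number of answers in this single request
    )

def extract_answers(response):
    """Each choice in the response is its own answer; collect them into a list."""
    return [choice["message"]["content"] for choice in response["choices"]]
```

Once extracted this way, the answers are an ordinary Python list, so dividing them and showing them one by one in the interface is straightforward.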
I think you should use a prompt that is created with user and system roles.
formatted_question = {"role": "user", "content": question}

messages = [{"role": "system", "content": "You are a helper about X etc. Return 10 example answers as JSON, like this: { example JSON }"}]
messages.append(formatted_question)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-1106",
    messages=messages,
    stream=False
)
Or use the new JSON mode: OpenAI Platform
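If the system prompt asks for JSON (and, on `gpt-3.5-turbo-1106`, JSON mode is enabled with `response_format={"type": "json_object"}`), the backend can split the reply into separate answers with the standard library. A sketch, assuming the system prompt told the model to reply with a top-level `"answers"` array — that key name is our own choice, not anything the API enforces:

```python
import json

def split_answers(raw_reply):
    """Parse the model's JSON reply into a list of individual answers.

    Assumes the system prompt instructed the model to reply like
    {"answers": ["...", "...", ...]} -- the "answers" key is our choice.
    """
    data = json.loads(raw_reply)
    return [a.strip() for a in data["answers"]]

raw = '{"answers": ["First answer.", "Second answer.", "Third answer."]}'
answers = split_answers(raw)  # three separate strings, ready to list in the UI
```

From here each answer is its own string, so showing them as a list in the interface is just a matter of templating.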
Hi, thank you for your answer.
So, I would be able to use those 10 unique answers in the user interface as a list just by defining a role, then? I need to get the answers and list them on the interface at given times (at random times, for example, within 6 minutes). This is the main issue. I know how to get a certain number of unique answers, but I should be able to divide those 10 answers and project them onto the interface randomly. As another problem, I have to limit the length of each answer. As far as I understand, the max_tokens part does that. When assigning a role, can I also assign the desired length of each answer, like in the previous code?
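On the length question: `max_tokens` still works with the chat endpoint and caps each answer in tokens (not characters), so it can be passed alongside the role-based messages exactly as before. The random-times part is plain Python, independent of the API. A sketch that assigns each answer a random reveal time inside a 6-minute window (function and parameter names are made up for illustration):

```python
import random

def schedule_answers(answers, total_minutes=6, seed=None):
    """Pair each answer with a random display time inside the window.

    Returns (seconds_from_start, answer) tuples sorted by time, so the
    backend or frontend can reveal them one by one as the clock passes
    each timestamp.
    """
    rng = random.Random(seed)        # seed only for reproducible tests
    window = total_minutes * 60
    times = sorted(rng.uniform(0, window) for _ in answers)
    order = answers[:]               # copy so the caller's list is untouched
    rng.shuffle(order)               # randomize the display order as well
    return list(zip(times, order))

plan = schedule_answers(["a", "b", "c"], total_minutes=6)
```

In a Flask app, the frontend would typically poll (or use JavaScript timers against) this schedule rather than having the server sleep.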
You can save these 10 answers to your database and use them whenever you want. In the next request, you can send the answers you received before and ask the model not to return those answers again, or this can be done on the client side.
Did I understand your question correctly?
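To keep the answers usable individually, save each one as its own row rather than one blob. A sketch with sqlite3 (the table and column names are made up for illustration; any database works the same way):

```python
import sqlite3

def save_answers(db_path, question, answers):
    """Store each answer as its own row, keeping the answers separate.

    One row per answer (not one row with headings) means each answer can
    later be fetched, excluded, or displayed on its own.
    """
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS answers (
                       id INTEGER PRIMARY KEY,
                       question TEXT,
                       answer TEXT)""")
    con.executemany("INSERT INTO answers (question, answer) VALUES (?, ?)",
                    [(question, a) for a in answers])
    con.commit()
    return con

con = save_answers(":memory:", "my question", ["a1", "a2", "a3"])
rows = con.execute("SELECT answer FROM answers ORDER BY id").fetchall()
```

The saved rows can then be fed back into the next request's prompt ("do not repeat these answers: ...") or filtered out client-side.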
Actually you did, but I didn't explain the details I need properly. If I save these answers to my database, will they be saved as 5 different answers, or as 1 answer with 5 headings? Since all the answers need to be separated, I am having a hard time managing this problem. For another request, I don't know if it would be easy to feed the previous answers back to the AI and get new ones in code, since I am a newbie at this. For the experiment I'm going to conduct, I need my subject and the API not to interact, but to answer a single question separately on a shared screen.
Hope this was a more elaborate description. Thank you for your time.