Hello, I wonder whether I can send multiple prompts (i.e., different generation tasks) to the Chat Completions API all at once and get a separate response for each prompt. According to the OpenAI documentation, an array of prompts can be passed as the message.content parameter. Here is my code:
prompt = ["hello", "nice to meet you"]
prompt = [{"type": "text", "text": p} for p in prompt]
r = self.client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},  # prompt is a list of content parts
    ],
    temperature=0.9,
    n=1,
    max_tokens=max_tokens,
)
output = r.choices[0].message.content
The output:
Hello! It's nice to meet you too.
The model seems to concatenate all the prompts into a single input and generates one response for it. However, I want to obtain a list of responses, one per prompt, such as ['hi', 'nice to meet you too.'].
Is it possible to get a response to each individual prompt? How could I correct my code?
The Chat Completions API does not accept a "prompt" parameter or an array of independent prompts. Instead, it accepts messages: a list of turns in a conversation ending in the final query.
You would need to make multiple calls to send a variety of inputs.
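The multiple-calls approach can be sketched as follows. This is a minimal example, assuming `client` is an initialized `openai.OpenAI` instance; the helper names `build_messages` and `get_responses` are mine, not part of the library:

```python
def build_messages(prompt):
    """Build the messages list for one prompt: system turn plus user turn."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ]

def get_responses(client, prompts):
    """Return one response string per prompt, making one API call each."""
    outputs = []
    for p in prompts:
        r = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=build_messages(p),
            temperature=0.9,
            n=1,
        )
        outputs.append(r.choices[0].message.content)
    return outputs

# get_responses(client, ["hello", "nice to meet you"]) would yield a
# list with one reply per prompt, in the same order as the inputs.
```

If latency matters, the individual calls are independent and can be issued concurrently (e.g., with a thread pool or the async client).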
You can also explore how gpt-3.5-turbo-instruct behaves on the completions endpoint. It will need a prompt format and stop sequences, but it performs relatively well if you just pass a question or instruction followed by two carriage returns, as long as the input doesn't look like a document to complete.
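The legacy completions endpoint does accept a list of prompts in a single request, which is closer to what the question asks for. A sketch under the same assumption that `client` is an `openai.OpenAI` instance (the helper names are mine):

```python
def format_prompts(prompts):
    """Append two newlines so each instruction reads as a query to answer."""
    return [p + "\n\n" for p in prompts]

def batch_complete(client, prompts):
    """Send all prompts in one call to the legacy completions endpoint."""
    r = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=format_prompts(prompts),
        max_tokens=100,
        stop=["\n\n"],
    )
    # Each returned choice carries an `index` matching its prompt's position,
    # so sort by index to pair outputs back up with inputs.
    ordered = sorted(r.choices, key=lambda c: c.index)
    return [c.text.strip() for c in ordered]
```

Note that this is the older completions interface, not Chat Completions, so there is no system message or conversation structure.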
If it's the same conversation and you want to handle multiple prompts in a single message, you can basically give the AI multiple things to respond to in your instructions.
Example instructions:
Look at the data, analyze it, and return a response.
Next, look at the data again and return another response.
If you need the output in another format, you can tell it in the instructions to separate the responses with a comma or whatever delimiter you like.
Look up super prompts. My AI system uses super prompts to do a lot of things with one call.
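The packing-and-splitting idea above can be sketched like this. The delimiter, helper names, and prompt wording are all my own choices for illustration, and splitting relies on the model actually honoring the delimiter instruction:

```python
DELIM = "|||"  # arbitrary separator the model is asked to emit between answers

def build_super_prompt(tasks):
    """Pack several tasks into one user message with numbered items."""
    lines = [f"Answer each task separately, separated by '{DELIM}':"]
    lines += [f"{i + 1}. {t}" for i, t in enumerate(tasks)]
    return "\n".join(lines)

def split_response(text):
    """Split the single model reply back into one answer per task."""
    return [part.strip() for part in text.split(DELIM)]
```

This trades robustness for cost: one call instead of many, but the split can fail if the model ignores or garbles the delimiter, so the per-prompt calls above are the safer option.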