We need support for multiple prompts in one request to the chat endpoint.
This would let us programmers use the chat endpoint the same way as the completions endpoint, which currently accepts up to 20 prompts per request.
This would be very useful, as sending a whole request per prompt is very inefficient.
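To illustrate the difference, here is a minimal sketch of the two request bodies (no network calls; the prompt strings and the 20-prompt figure are just examples, and the payload shapes follow the documented completions/chat request formats):

```python
# Build the payloads only -- no API calls are made here.
prompts = [f"Summarise item {i}" for i in range(20)]

# Completions endpoint: all 20 prompts fit in ONE request body,
# because "prompt" may be a list of strings.
completions_payload = {
    "model": "text-davinci-003",
    "prompt": prompts,
    "max_tokens": 64,
}

# Chat endpoint: one request body PER prompt -- 20 round trips instead of one,
# because each request takes a single "messages" array.
chat_payloads = [
    {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": p}],
        "max_tokens": 64,
    }
    for p in prompts
]

print(len(completions_payload["prompt"]))  # 20 prompts in 1 request
print(len(chat_payloads))                  # 20 prompts needing 20 requests
```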
Is the completion required for a conversation, or for a singular prompt, over the chat completion endpoint?
If I'm understanding you correctly, I don't think any context is required. However, that's beside the point. I have a brief setup for an interaction, but I'd like to send the extra user prompts in bulk, because using a whole request for each prompt is very inefficient. This isn't supported with the https://api.openai.com/v1/chat/completions endpoint.
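Until batching is supported, one workaround is to fire the per-prompt chat requests concurrently, so the wall-clock cost is closer to a single round trip. A rough sketch with a thread pool; `call_chat` is a placeholder of my own, to be replaced with a real chat-completions call:

```python
from concurrent.futures import ThreadPoolExecutor

def call_chat(prompt):
    # Placeholder: substitute the real chat-completions request here
    # (e.g. a call to https://api.openai.com/v1/chat/completions with
    # messages=[{"role": "user", "content": prompt}]).
    return f"reply to: {prompt}"

def chat_many(prompts, max_workers=8):
    # Still one HTTP request per prompt, but issued in parallel;
    # pool.map preserves the input order of the prompts.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(call_chat, prompts))

replies = chat_many(["prompt A", "prompt B", "prompt C"])
print(replies)
```

This doesn't reduce the request count the way true batching would, so it's a stopgap, not a substitute for the feature request.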
I have a suspicion this is what I'm supposed to use a 'fine-tuning' model for, but I haven't properly looked into what that's all about.
Edit: Had a look; fine-tuning isn't yet available for the gpt-3.5-turbo model, so I guess that's another feature request.