Hey,
I was using GPT-3 fine-tuning before, and I could submit a list of prompts for inference and get back a list of completions, i.e. predictions. Basically I use it as a classifier.
For gpt-3.5 it seems OpenAI moved to chat mode completely, and if I submit a list of user messages, I always receive just one answer, since the whole list is treated as a single conversation. Roughly what I mean (the code below is just a sketch with the pre-1.0 openai Python package; the model IDs are placeholders):
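```python
import openai

openai.api_key = "sk-..."  # placeholder

# Legacy Completions endpoint: `prompt` accepts a list of strings,
# so one request returns one completion per prompt (batch inference).
batch = openai.Completion.create(
    model="ft:davinci-002:my-org::abc123",  # hypothetical fine-tuned model id
    prompt=["text to classify #1", "text to classify #2"],
    max_tokens=1,
)
# Each choice carries an `index` pointing back to its input prompt.
labels = [c["text"] for c in sorted(batch["choices"], key=lambda c: c["index"])]

# Chat Completions endpoint: `messages` is one conversation,
# so a list of user messages still yields a single assistant reply.
reply = openai.ChatCompletion.create(
    model="ft:gpt-3.5-turbo-0613:my-org::abc123",  # hypothetical fine-tuned model id
    messages=[{"role": "user", "content": "text to classify #1"}],
    max_tokens=1,
)
```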
Do I get it right that there is no way to get batch inference with a fine-tuned gpt-3.5?