Hello Folks,
I’m working on an application that uses the Assistants API. Below is the relevant code snippet.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Create a streaming run on the existing thread, with file_search enabled
const stream = await openai.beta.threads.runs.create(newMessage.threadId, {
  assistant_id: assistantId,
  stream: true,
  tools: [{ type: "file_search" }],
});

// Pipe the event stream straight back to the client
return new Response(stream.toReadableStream());
The response quality is not what I expected: answers are often vague and inconsistent. I have added instructions telling the assistant to name the file it retrieved each answer from, but it does so unreliably: sometimes it gives the correct file name, other times it does not.
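One thing I’m considering, instead of relying on the model to write the file name into its reply, is reading the file_search citations from the message annotations and resolving them to file names through the Files API. Here is a rough sketch of what I mean (getCitedFilenames is just an illustrative helper I made up; it assumes I already have the threadId and runId once the run finishes):

// Rough sketch (my own helper, not part of the SDK): resolve file_search citations
// to real file names from the message annotations instead of trusting the reply text.
async function getCitedFilenames(threadId: string, runId: string): Promise<string[]> {
  // List only the messages produced by this run (newest first).
  const messages = await openai.beta.threads.messages.list(threadId, { run_id: runId });

  const filenames = new Set<string>();
  for (const message of messages.data) {
    for (const part of message.content) {
      if (part.type !== "text") continue;
      for (const annotation of part.text.annotations) {
        // file_citation annotations point at the vector-store file that was retrieved.
        if (annotation.type === "file_citation") {
          const file = await openai.files.retrieve(annotation.file_citation.file_id);
          filenames.add(file.filename);
        }
      }
    }
  }
  return [...filenames];
}

Even with something like that in place, I would still like the generated answers themselves to be more accurate.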
Is there a way to improve this by switching to a different model or by adjusting the token limits for the run?
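For reference, this is the kind of per-run change I have in mind; the model name and numbers below are placeholders I have not tested, just to show where the knobs would go:

// Per-run overrides I could try (values are placeholders, not recommendations):
const stream = await openai.beta.threads.runs.create(newMessage.threadId, {
  assistant_id: assistantId,
  stream: true,
  tools: [{ type: "file_search" }],
  model: "gpt-4o",              // override the assistant's default model for this run
  max_prompt_tokens: 4000,      // cap tokens spent on the prompt and retrieved context
  max_completion_tokens: 1500,  // cap tokens in the generated answer
  temperature: 0.2,             // lower temperature for more consistent wording
});

Would changes like these be the right direction, or is there a better way to make the file attributions reliable?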