OpenAI Assistant usage for generating output and token limits

I have a use case where I need to send the output of a SQL query to an LLM to create a data narrative. Sometimes the SQL output is more than 30,000 rows, and in those cases I get a token limit error. But if I upload the SQL output as a CSV to an OpenAI Assistant (with Code Interpreter) and ask it to narrate based on the content, I don't hit the token limit. Does the OpenAI Assistant have a higher token limit? And for my use case, is the Assistant the recommended approach?

If you don't actually need file uploads and your task is simply what you describe, I would use the completions API — though if you do need to send files, completions has no option for that. With completions, you should split the data into batches so each request stays under the token limit, then combine those "mini summaries" into the final narrative :smile:

In addition, completions are much faster.
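The batch-then-combine idea above can be sketched roughly like this. This is a minimal illustration, not a definitive implementation: the `summarize` callable stands in for whatever LLM call you use (e.g. a wrapper around the OpenAI chat completions endpoint), and the batch size and prompt wording are assumptions, not values from this thread.

```python
def chunk_rows(rows, batch_size):
    """Split the SQL result set into batches small enough to fit the context window."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

def map_reduce_narrative(rows, summarize, batch_size=1000):
    """Summarize each batch separately ("map"), then combine the
    mini-summaries into one narrative with a final call ("reduce").
    `summarize(prompt) -> str` is any LLM call you supply."""
    mini_summaries = [
        summarize("Narrate this data:\n" + "\n".join(map(str, batch)))
        for batch in chunk_rows(rows, batch_size)
    ]
    return summarize("Combine these partial narratives into one:\n"
                     + "\n".join(mini_summaries))
```

In a real setup, `summarize` would wrap a chat-completions request; the important part is that no single request ever carries the full 30,000-row payload, only one batch (or the much smaller mini-summaries) at a time.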

By completions you mean the completions API call, right? I tried both the refine and the map-reduce summarization approaches. The OpenAI Assistant seems to give a better narrative, without the token limit issue.