Questions about the parameter settings for File Search (Chunk size & Chunk overlap) and Temperature, Top p in the assistant

Hello everyone,

I’d like to ask about the parameter settings for File Search (Chunk size & Chunk overlap) and Temperature, Top p in the assistant.

I am currently using the GPT assistant to extract specific events and their timestamps from brief simulated case histories, a task that involves specialized academic knowledge.

What would be the recommended settings for the Temperature and Top p parameters?

BTW, when uploading a file to the assistant, I saw an advanced options section, as shown in the picture.

There are two parameters: Chunk size & Chunk overlap. The default values are 800 and 400.

I have one file with 1500 tokens and another with 4000 tokens. How should I set the Chunk size & Chunk overlap? Or should I just keep the default settings, since the content is not too large?
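For context, this is how I understand the chunk settings would be passed explicitly when creating a vector store (a sketch based on my reading of the API docs; the values and the commented-out call are illustrative, not tested):

```python
# Illustrative static chunking settings for small files.
# With files of only 1500-4000 tokens, the default 800/400 already
# yields just a handful of overlapping chunks; a smaller overlap
# reduces duplicated context in retrieval results.
chunking_strategy = {
    "type": "static",
    "static": {
        "max_chunk_size_tokens": 800,  # same as the default
        "chunk_overlap_tokens": 200,   # must not exceed half the chunk size
    },
}

# The strategy would then be supplied when creating the vector store, e.g.:
# client.beta.vector_stores.create(
#     name="case-histories",
#     chunking_strategy=chunking_strategy,
# )
```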


If you have the token counts, you likely also have the text.

I would just place the full text into chat completions and ask away. That way there are no extra ~600 tokens of tool instructions, no AI-written search queries, no multiple calls reusing the same context, and no AI automatically retrying searches to find more…
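As a sketch of that approach (the model name, file path, and prompt wording here are all placeholders, assuming the OpenAI Python SDK):

```python
def build_extraction_messages(case_text: str) -> list:
    """Messages for a single chat-completions call with the full file inline."""
    return [
        {
            "role": "system",
            "content": (
                "Extract every event and its timestamp from the case "
                "history below. Return a JSON list of {event, timestamp}."
            ),
        },
        {"role": "user", "content": case_text},
    ]

# The call itself (commented out; needs an API key and the openai package):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",  # placeholder model name
#     messages=build_extraction_messages(open("case_history.txt").read()),
# )
# print(resp.choices[0].message.content)
```

At 1500–4000 tokens per file, the whole document fits in context with room to spare, so there is nothing for retrieval to add.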

top_p can constrain the output to only the more likely tokens, so you don’t roll the dice on occasionally getting a wrong word that affects everything after it.
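To make that concrete, here is a toy illustration of how nucleus (top_p) filtering trims the candidate set before sampling — illustrative only, not the API’s internal code:

```python
def nucleus_filter(probs: dict, top_p: float) -> dict:
    """Keep the smallest set of top tokens whose cumulative probability
    reaches top_p, then renormalize. Unlikely tokens are cut off entirely."""
    kept, total = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[tok] = p
        total += p
        if total >= top_p:
            break
    s = sum(kept.values())
    return {tok: p / s for tok, p in kept.items()}

# Toy next-token distribution for a timestamp extraction:
probs = {"2021": 0.7, "2020": 0.2, "2019": 0.1}
print(nucleus_filter(probs, 0.5))  # only "2021" survives
print(nucleus_filter(probs, 0.8))  # "2021" and "2020" survive
```

With a low top_p, the occasional low-probability token (a mistyped date, a wrong name) can never be sampled in the first place.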

that’s the problem man… I control max_num_token at both the Run level and the Assistant level, and I also limit token intake via a low-token chunking strategy… yet every time the assistant runs, it still averages about 15k tokens… whatever query the user writes seems to get split into parallel searches, and the assistant retrieves chunks for each of them… max_num_result feels like a waste then… set it to 1 or set it to 10, it doesn’t matter, because you can’t control the parallel queries… @nikunj do you think what I wrote makes sense, and am I missing something here?
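For reference, the knob in question sits on the file_search tool definition — a sketch of how I’m setting it (the value is illustrative; the behavior described in the comment is what I’m observing, not documented guarantees):

```python
# Illustrative file_search tool configuration.
# max_num_results caps chunks returned *per search call*, but the model
# may still issue several parallel queries for one user message, so the
# total retrieved context can far exceed max_num_results x chunk_size.
file_search_tool = {
    "type": "file_search",
    "file_search": {
        "max_num_results": 4,  # a per-query cap, not a per-run cap
    },
}

# Passed in the assistant or run definition, e.g.:
# client.beta.assistants.update(assistant_id, tools=[file_search_tool])
```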

@_j anything on this? Whatever I do, it keeps bringing the token count up to 16k tokens.