Workaround for knowledge files bigger than the context window


> you query your database using the device type

The use case is different: the query response is a device recommendation.

Structuring the data as you recommend would surely be the best approach - but the data is too heterogeneous (multiple sources, multiple formats), and the effort to structure it would go beyond any conceivable scope.

> you query your database using the device type

I didn’t say “device type”, I said “device model” - like “iphone 15 128gb”, “iphone 15 256gb”, or “seiko SSA343J1”, “seiko SNZG15”.

> If your prompt is still bigger than 32k tokens

My prompts are definitely smaller than 32k tokens - but the combined size of the knowledge files is not. This is the main issue.
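A common workaround when the corpus is bigger than the context window is retrieval: index the knowledge files in chunks and inject only the top-matching chunks into each prompt, so the prompt stays small even though the corpus is not. A minimal sketch - the chunking size, the word-overlap scoring, and all names here are illustrative assumptions, not anything from this thread (a real system would use token-aware chunking and embedding search):

```python
# Sketch: keep prompts under the context limit by retrieving only
# the chunks relevant to the current query, instead of attaching
# every knowledge file at once.

def chunk(text, size=500):
    """Split a document into fixed-size word chunks. (A real system
    would chunk by tokens and respect section boundaries.)"""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query, chunk_text):
    """Crude relevance score: shared lowercase words. Embeddings
    would do better, but the principle is the same."""
    return len(set(query.lower().split()) & set(chunk_text.lower().split()))

def retrieve(query, corpus, top_k=3):
    """Return the top_k most relevant chunks across all documents."""
    chunks = [c for doc in corpus for c in chunk(doc)]
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:top_k]

corpus = [
    "iphone 15 128gb specs: A16 chip, 6.1 inch display, usb-c",
    "seiko SSA343J1: automatic movement, open heart dial",
]
hits = retrieve("iphone 15 128gb", corpus)
prompt = "Recommend a device.\n\nContext:\n" + "\n---\n".join(hits)
```

Only `hits` goes into the prompt, so the 32k limit applies per query rather than to the whole knowledge base.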

What criteria does the model base the recommendation on?

How is the recommendation process done by a human?

The answers to those two questions will open the door to a proper data structure design. Once you have that, the data extraction and pre-processing will be much easier to put in writing, and this will basically draft your application workflows.
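Concretely, answering those two questions could translate into a target schema that every source format gets extracted into. A hypothetical sketch, assuming the criteria turn out to be things like category, price, and model-specific features (every field name here is an assumption for illustration):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DeviceRecord:
    """Hypothetical normalized record that a recommendation query
    could match against, regardless of the original source format."""
    model: str                          # exact model id, e.g. "iphone 15 128gb"
    category: str                       # e.g. "phone", "watch"
    price_eur: Optional[float] = None
    features: dict = field(default_factory=dict)  # the recommendation criteria

records = [
    DeviceRecord("iphone 15 128gb", "phone", 949.0, {"storage_gb": 128}),
    DeviceRecord("seiko SSA343J1", "watch", 420.0, {"movement": "automatic"}),
]

# Recommendation then becomes filtering and ranking over structured
# fields instead of free-text search across heterogeneous files.
phones = [r for r in records if r.category == "phone"]
```

The point is the design order: criteria first, schema second, extraction code last - the schema is what makes the pre-processing writable.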

Okay, those are just details - you got the point anyway.
