Ways to input prompts longer than 2000 tokens

Has anyone found a workaround for the token input limit? I am trying to use 10k tokens as input for different use cases, which can also include chatbots, etc.

What is the best way around this issue to provide longer memory?

Have you tried fine-tuning?


Thanks for asking. Did you get a chance to check out our fine-tuning feature? It allows you to fine-tune a custom model for your use case. Once a model has been fine-tuned, you won't need to provide examples in the prompt anymore.
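For anyone unfamiliar with what fine-tuning data looks like: the legacy OpenAI fine-tuning endpoint expected a JSONL file of prompt/completion pairs, roughly like the sketch below. The field names follow that legacy format; check the current docs before uploading, as the API has changed over time.

```python
# Sketch: preparing fine-tuning examples as a JSONL file
# (one JSON object per line, prompt/completion pairs).
import json

examples = [
    {"prompt": "Classify sentiment: great product ->", "completion": " positive"},
    {"prompt": "Classify sentiment: broke in a day ->", "completion": " negative"},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Read it back to confirm the structure.
with open("train.jsonl") as f:
    loaded = [json.loads(line) for line in f]
print(len(loaded))  # 2
```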


I have had a look at the fine-tuning option and would use it to tune the model toward the desired output. However, to produce that output, the model still needs a large amount of data/information entered as a prompt, since it uses this data to generate the result. The data is over 10k tokens, so I would still need some way around the 2k limit.
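One common workaround is to split the data into chunks that each fit under the limit and send them in separate prompts. Below is a minimal sketch of the splitting step. Note the whitespace split is only a crude approximation of token counting; a real tokenizer (e.g. `tiktoken`) counts differently, so leave headroom in the budget.

```python
# Split a long document into chunks that each fit under a token budget,
# so each chunk can be sent in its own prompt.
# Caveat: len(text.split()) approximates tokens by words only.

def chunk_text(text, max_tokens=1500):
    words = text.split()
    chunks, current = [], []
    for word in words:
        current.append(word)
        if len(current) >= max_tokens:
            chunks.append(" ".join(current))
            current = []
    if current:
        chunks.append(" ".join(current))
    return chunks

long_doc = "lorem " * 5000          # ~5000 "tokens" under this approximation
pieces = chunk_text(long_doc)
print(len(pieces))                  # 4 chunks of at most 1500 words each
```

Each chunk can then be processed in its own request and the per-chunk results combined afterwards (a map-reduce style pattern).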

I need to supply a large amount of data in the prompt for the model to analyze. How do we do this? I'm searching the forum and the web and not finding good answers.

I'm reading about fine-tuning, but I don't think that solves our problem, or at least I don't see how. What if the data is constantly changing? Do we keep fine-tuning every time we want it to analyze new data? That doesn't seem right, since every batch of new data would need a new fine-tuned model.

Please help, thanks.
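For constantly changing data, a retrieval-style approach is often suggested instead of fine-tuning: store the data as chunks, and at question time pull only the most relevant chunks into the prompt. A real system would rank chunks with embeddings and cosine similarity; the word-overlap score below is a hypothetical stand-in so the sketch stays self-contained.

```python
# Retrieval sketch: rank stored chunks by relevance to the question and
# put only the top ones in the prompt, so the prompt stays under the limit
# even as the underlying data changes.

def overlap_score(query, chunk):
    # Crude relevance score: count shared lowercase words.
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def select_chunks(query, chunks, top_k=2):
    return sorted(chunks, key=lambda ch: overlap_score(query, ch), reverse=True)[:top_k]

memory = [
    "invoice 1043 was paid on march 3",
    "the server restarts every sunday night",
    "invoice 1044 is still outstanding",
]

question = "which invoice is outstanding"
relevant = select_chunks(question, memory)
prompt = "Context:\n" + "\n".join(relevant) + "\n\nQuestion: " + question
print(prompt)
```

Because the "memory" lives outside the model, updating it is just updating the chunk store; no retraining is needed.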

Hi, please help. See my post above in this thread.