Ways to input prompts longer than 2000 tokens

Has anyone found a workaround for the token input limit? I am trying to pass 10k tokens as input for different use cases; this can also include chatbots, etc.

What is the best way around this issue to provide longer memory?

Have you tried fine-tuning?


Thanks for asking. Did you get a chance to check out our fine-tuning feature? It allows you to fine-tune a custom model based on your use case. Once a model has been fine-tuned, you won't need to provide examples in the prompt anymore.
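For reference, the legacy fine-tuning workflow mentioned above takes training data as JSONL, one prompt/completion pair per line. A minimal sketch of preparing such a file (the file name and example texts are illustrative, not from the thread):

```python
import json

# Hypothetical training examples. In the legacy fine-tuning format,
# each line of the JSONL file is an object with "prompt" and
# "completion" fields.
examples = [
    {"prompt": "Summarize: The meeting covered Q3 targets. ->",
     "completion": " Q3 targets were discussed."},
    {"prompt": "Summarize: Sales rose 5% in March. ->",
     "completion": " March sales grew 5%."},
]

# Write one JSON object per line (JSONL).
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Each example still has to fit within the model's context window on its own; fine-tuning replaces in-prompt examples but does not raise the per-request token limit.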


I have had a look at the fine-tuning option and would use it to fine-tune the model to produce the desired output. However, for it to produce that output, a large amount of data/information would need to be entered as a prompt, since the model uses this data to generate the output. That data is over 10k tokens, so I would still need some way around the 2k limit.
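One common workaround when the input exceeds the context window is to split it into chunks that each fit under the limit, process each chunk separately (e.g. summarize it), and then combine the results. A minimal sketch, assuming a whitespace word count as a rough token proxy and an arbitrary 1800-token budget (a real tokenizer would give exact counts, and actual token counts run higher than word counts):

```python
def chunk_text(text: str, max_tokens: int = 1800) -> list[str]:
    """Split text into chunks that each stay under max_tokens.

    Uses whitespace-separated words as a rough token proxy; the
    model's own tokenizer would be needed for exact counts.
    """
    words = text.split()
    chunks = []
    for i in range(0, len(words), max_tokens):
        chunks.append(" ".join(words[i:i + max_tokens]))
    return chunks
```

Each chunk can then be sent as its own request, and the per-chunk outputs (e.g. partial summaries) combined in a final request, map-reduce style. This trades some cross-chunk context for the ability to handle inputs of any length.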