Hi folks - I’ll start by apologising: I’m not a programmer or familiar with coding, just fascinated by the possibilities of ChatGPT and the API. My capabilities just about start and end with “Hello World!”
I’ve been following a beebom.com online guide to creating a custom knowledge-base chatbot in Python, with Gradio as the chatbot UI.
As a test, I built myself a (very small) custom model for fantasy recipes (I recommend the Stormy Sky Pie!). It took me all weekend and a lot of swearing, so please ELI5: is there any way to increase the token limit of the ChatBot outputs? I’m not sure if it’s an in-built restriction or just the way I’ve set up this particular chatbot.
Any assistance or points in the right direction very much appreciated! Up to and including rebuilding the approach from the ground up and working in a different method than a chatbot; I’m here to (very slowly and painstakingly) learn!
If the output is cut off mid-sentence: you’ve set the maximum number of tokens that can be generated too low, or the model ran out of context-length space to form a complete answer.
If the reply is much shorter than you wished or instructed: OpenAI’s chat models have increasingly been trained to generate shorter outputs no matter your prompting technique, likely as part of decreasing the computational resources used.
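For the first case (truncated output), the cap usually lives in one parameter of the API call. Here is a minimal sketch, assuming the guide’s chatbot calls the OpenAI Chat Completions API; the function name `build_request`, the model name, and the default value are illustrative assumptions, not the guide’s actual code:

```python
# Sketch: where the output-length cap typically lives in an OpenAI chat call.
# `build_request` is a hypothetical helper, not part of any library.

def build_request(messages, max_tokens=1024):
    """Assemble keyword arguments for a chat completion call.

    max_tokens caps only the length of the *reply*; the prompt plus the
    reply must still fit inside the model's total context window, so
    raising it cannot exceed that overall limit.
    """
    return {
        "model": "gpt-3.5-turbo",   # assumed model name for illustration
        "messages": messages,
        "max_tokens": max_tokens,   # raise this if replies are cut off
    }

# The actual call would look roughly like this (requires the `openai`
# package and an API key configured in your environment):
#
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(**build_request(history))
#   print(response.choices[0].message.content)
```

If your script already has a line with `max_tokens=` in it, increasing that number is the usual first thing to try; if the replies are still short, you are likely in the second case above, where prompting alone has limited effect.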
But - and please forgive my very limited knowledge of code, I’m basically just copying and pasting - I’m wondering if this is a pre-set limit in the code: