Understanding Context Tokens and Generated Tokens Ratio

Hey folks!

I’ve been digging into the numbers on OpenAI’s platform, specifically the Context Tokens and Generated Tokens. Check out the screenshot below for reference:
[screenshot: Context Tokens vs. Generated Tokens usage stats]

Now, I’m curious about your take on what makes a top-notch AI assistant, especially in the context of customer support. What ratio of Context Tokens to Generated Tokens do you find works best for assisting customers effectively? And hey, if you have any other cool metrics or observations to share, I’m all ears!
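For anyone following along, the ratio itself is easy to compute from the `usage` object the API returns with each response (`prompt_tokens` is the context, `completion_tokens` is the generated output). The numbers below are made up for illustration:

```python
# Hypothetical usage numbers; the field names match the OpenAI API's
# `usage` object (prompt_tokens = context, completion_tokens = generated).
usage = {"prompt_tokens": 1200, "completion_tokens": 300}

ratio = usage["prompt_tokens"] / usage["completion_tokens"]
print(f"context:generated ratio = {ratio:.1f}:1")  # prints "context:generated ratio = 4.0:1"
```

In a customer-support setting a high ratio usually just means a lot of retrieved context per short answer, so on its own it says little about quality.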

Let’s geek out together and swap insights! And don’t forget to consider the specific requirements of your customer support tasks or other tasks when evaluating the ideal tokens ratio.

Can’t wait to hear from you!

Razvan I. Savin

I’m not sure what the ratio of input tokens to output tokens would have to do with the quality of an assistant. Can you elaborate on what you’re thinking?


My goal is to reinforce the AI's knowledge base with real data, so that the conversation feels like talking to a real human who relies on facts rather than generated content. That would make the assistant more accurate, less lazy, and less prone to hallucination, without having to stuff tons of data into the model to achieve it.

As a starting analogy, think of a connect-the-dots drawing: when you reinforce the model with your local data, you are telling it which points to connect first, in what order, and so on.

I want to find the best and most efficient way to provide the model with real data so it gives me what I want. I've seen many people fine-tune models expecting better accuracy in their answers, and I don't like that path because I don't think it's good long-term.
I want its accuracy to increase based on the real data I provide, so it can use that data for the current state of a unique environment.
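What you're describing sounds like retrieval-augmented prompting: instead of fine-tuning, you put the real facts directly into the context window and instruct the model to answer only from them. Here's a minimal sketch; the `build_grounded_messages` helper, the facts, and the question are all made-up placeholders, and the message format follows the OpenAI chat API:

```python
def build_grounded_messages(facts, question):
    """Assemble a chat prompt that instructs the model to answer
    only from the supplied facts, not from its own training data."""
    context = "\n".join(f"- {fact}" for fact in facts)
    system = (
        "Answer using ONLY the facts below. "
        "If the facts don't cover the question, say you don't know.\n\n"
        f"Facts:\n{context}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Placeholder local data standing in for your real knowledge base.
messages = build_grounded_messages(
    ["Order #1042 shipped on 2024-03-01.", "Standard delivery takes 3-5 days."],
    "When will order #1042 arrive?",
)
# These messages would then go to the chat completions endpoint, e.g.:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

This way the "real data" lives in the context tokens of each request, which is also why that side of the ratio tends to grow as you ground the model more heavily.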