Working with GPT-3.5 Turbo to query JSON data - ChatGPT and Token Limits

Hey folks,

I’m currently scoping out a project for our team, and I’m looking to add GPT-3.5 as a chatbot to help users better interact with our platform. The approach we’re using is to grab a limited set of data (in the form of JSON) and let the user ask GPT-3.5 questions about that set. Because of the token limits, we’re actively working to keep these objects small so we can pass as many as possible in a single query.
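For reference, this is roughly the shape of what we have today. It's only a minimal sketch, assuming the pre-1.0 `openai` Python package and `tiktoken` for token counting; the record structure, budget, and helper names are made up for illustration:

```python
# Sketch: trim a list of JSON records to fit the context window before
# asking GPT-3.5 Turbo a question about them.
import json
import openai
import tiktoken

MODEL = "gpt-3.5-turbo"
TOKEN_BUDGET = 3000  # leave headroom under the 4,096-token context limit

enc = tiktoken.encoding_for_model(MODEL)

def pack_records(records, budget=TOKEN_BUDGET):
    """Add serialized records until the token budget is spent."""
    packed, used = [], 0
    for rec in records:
        chunk = json.dumps(rec, separators=(",", ":"))  # compact JSON saves tokens
        cost = len(enc.encode(chunk))
        if used + cost > budget:
            break
        packed.append(chunk)
        used += cost
    return "\n".join(packed)

def ask(records, question):
    context = pack_records(records)
    response = openai.ChatCompletion.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Answer questions using only the JSON records provided."},
            {"role": "user", "content": f"Records:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message["content"]
```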

Is there a better approach you all have found for this use case?

How are you implementing ChatGPT into your projects?

Maybe fine-tuning would be a better fit for your scenario? Have you considered it? It would let you train a model on your own dataset, so you don’t have to re-feed the data over and over.

Ah I see, yeah that sounds like it could work. What model would you recommend fine-tuning on? Another constraint is that users should not be able to access data from other users, which is why we’re currently passing data in on demand and having the model analyze it.

That’s really up to you; you can simply not include data from other users.
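Something as simple as filtering before you build the prompt or the training file usually covers it. A sketch, assuming each record carries a hypothetical `user_id` field:

```python
# Sketch: keep only the requesting user's records so nothing from other
# users ends up in the prompt or the fine-tuning dataset.
def records_for_user(all_records, user_id):
    return [rec for rec in all_records if rec.get("user_id") == user_id]
```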

In terms of the model, it depends on cost and usage, but davinci is the most “modern” version that supports fine-tuning (GPT-3).
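If it helps, the classic GPT-3 fine-tuning flow looks roughly like this. A sketch only, assuming the pre-1.0 `openai` Python package; the file name and the prompt/completion pairs are hypothetical:

```python
# Sketch: build a JSONL file of prompt/completion pairs, upload it,
# then start a fine-tune on the davinci base model.
import json
import openai

examples = [
    {"prompt": "How do I reset my password? ->",
     "completion": " Go to Settings > Account > Reset password.\n"},
    {"prompt": "Where can I see my invoices? ->",
     "completion": " Open Billing > Invoices from the dashboard.\n"},
]

with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the training file, then kick off the fine-tune job.
upload = openai.File.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload.id, model="davinci")
print(job.id)
```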


Why this limit?