Large Datasets for the GPT-3 API

How do we feed large datasets into the GPT-3 API for it to analyze?
I've been looking around a lot and can't find any concrete answers.
Thanks

How come nobody is answering? Can anyone help me?

How large is “large”?

There are quite a few third-party solutions for loading documents and asking questions.

But there is no magic API call or way to do it without breaking the documents down into chunks and using embeddings.
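For anyone who lands here later, here is a minimal sketch of that chunk-and-embed approach, assuming the official `openai` Python package (v1 client) and numpy. The chunk size, model names, and `top_k` are illustrative choices, not requirements, and real pipelines usually use a proper vector store instead of in-memory arrays.

```python
# Sketch: chunk a document, embed the chunks, then answer a question
# from only the most relevant chunks.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chunk_text(text, max_chars=2000):
    """Naive fixed-size chunking; splitting on paragraphs or tokens is usually better."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


def embed(texts):
    """Return one embedding vector per input string."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])


def answer(question, document, top_k=3):
    chunks = chunk_text(document)
    chunk_vecs = embed(chunks)
    q_vec = embed([question])[0]

    # Cosine similarity between the question and every chunk.
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    best = [chunks[i] for i in np.argsort(sims)[::-1][:top_k]]

    # Stuff only the most relevant chunks into the prompt.
    context = "\n\n".join(best)
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content
```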

Are you looking for a solution, or asking how to do it yourself? If you want a solution, I'll send you a link via DM. If you want to learn how to do it, you need to read the documentation and the forum posts about embedding (there are lots and lots of examples in the history). There are also several courses available, including one with an entire section of free video on embedding.

Just let us know what you are after, and we can point you in the right direction.


So the term I should be searching for seems to be "embedding"? I didn't know that, sorry.
I'd be open to looking at solutions if you DM me. For sure, I want to learn how to do it myself. Let me know the free video link and any other docs I should look at.

Much appreciated

What if it were 32,000 or 64,000 tokens?

Free videos here: OpenAI Embedding Tutorial – Train Your Own GPT Research Assistant

Also, search this forum for "embedding".

Also, do a Google search for "GPT embedding".

Have a look at the online documentation. It has lots of information that will help you.

I'll send you a DM with a link to a product you can try (to avoid promoting products in the forum).