Strategies for Enhancing Large-Scale Data Analysis and Output

Hello everyone, I’m the creator of AnkiX, a flashcard creation GPT. Our tool currently does a good job of outputting effective flashcards but faces a challenge in scaling up its capabilities. When provided with substantial data, it tends to generate a limited number of flashcards, which falls short of comprehensively covering the provided information. I’m reaching out to seek advice and insights on how to enhance our GPT’s data analysis and card generation process. Has anyone faced the same issue, and if so, how did you resolve it?

Are you referring to GPTs (ChatGPT) or the API? Your tags make it confusing.

Regardless, are you using your own RAG, or Retrieval?

This sounds like you need to send several requests to the model because the maximum number of output tokens is limited.
Try that or split your data into smaller chunks to generate the cards.
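For illustration, here is a minimal sketch of that idea: split the source text into pieces and make one card-generation request per piece. The chunk size and the `request_flashcards` helper are placeholders I made up, not anything specific to your setup.

```python
# Minimal sketch: split a large source text into chunks and request
# flashcards for each chunk separately, instead of in one giant request.
# `request_flashcards` is a placeholder for however you actually call
# the model (a ChatGPT turn, an API call, etc.).

def split_into_chunks(text: str, max_chars: int = 6000) -> list[str]:
    """Naive splitter: cut the text into pieces of at most max_chars characters."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def request_flashcards(chunk: str) -> list[str]:
    """Placeholder: send one chunk to the model and return the cards it produced."""
    raise NotImplementedError

def build_deck(source_text: str) -> list[str]:
    cards: list[str] = []
    for chunk in split_into_chunks(source_text):
        cards.extend(request_flashcards(chunk))  # one request per chunk
    return cards
```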

I apologize for the confusion. I’m working with GPTs (ChatGPT). There is no RAG involved. The problem arises when the user inputs a large amount of information (as attachments or text copy-pasted into the input field). AnkiX does not parse through all of the information and creates only a very limited number of flashcards, rather than a larger set that covers the provided material.

When you say “several requests”, are you suggesting I create a loop that forces the GPT to continuously create cards until all of the information is parsed through? If so, what might the prompt look like?

Also, does “split your data” mean the GPT takes in a large amount of user-inputted information, splits it into sections, creates cards for a given section, and continues this process until all of the information has been parsed and cards have been output for it?

ChatGPT, and in fact all language models, have inherent technical limitations. There is a maximum amount of information they can take in for processing and another maximum amount of output they can produce. This limitation is by design, partly forced by the available compute and partly by decisions about how much of that compute is made available. For example: even if the GPT-4 model variant behind ChatGPT has a maximum context length of 8K tokens, the maximum length of a request one can send via the textbox is shorter. And the output is constrained to a fluctuating number somewhere between 500 and 2,000 tokens.

Let’s assume the best-case scenario, where your input data has 10,000 tokens. The first 2,000 tokens of that data will be dismissed immediately because the model can’t process them anyway. Then the output will be constrained to roughly 2,000 tokens, meaning you won’t get more output in a single conversational turn no matter what you try.

Thus you need to split your input data into several smaller chunks and process them one at a time.
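If you want the chunks to line up with a token budget rather than raw character counts, something along these lines could work. This assumes the `tiktoken` package; the 6,000-token budget is an arbitrary example, not an exact ChatGPT limit.

```python
# Sketch of token-aware chunking: slice the token list into windows of at
# most max_tokens tokens, then decode each window back into text.
import tiktoken

def chunk_by_tokens(text: str, max_tokens: int = 6000,
                    encoding_name: str = "cl100k_base") -> list[str]:
    enc = tiktoken.get_encoding(encoding_name)
    tokens = enc.encode(text)
    return [enc.decode(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]
```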

Since you are working with the consumer service (ChatGPT) and not the developer API, your options for automating the input and output are inherently limited as well. You will need to send the request to continue manually, refreshing the instructions to make sure the model remembers what to do and how, and you will probably also need to divide your input data into smaller chunks for the model to process one at a time.
I’m saying “probably” because, with some good prompting and proper use of actions, this can be partially automated.

But in this case, where you create actions to semi-automate the tasks, it would likely be faster and easier to work with the API instead of a GPT.
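As a rough sketch of what that could look like with the API, assuming the `openai` Python package, an `OPENAI_API_KEY` in the environment, and placeholder model and prompt choices:

```python
# Sketch: one API request per chunk, so each call stays inside the output limit.
from openai import OpenAI

client = OpenAI()

def cards_for_chunk(chunk: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You create concise Anki flashcards. "
                                          "Output one 'Front ;; Back' card per line."},
            {"role": "user", "content": chunk},
        ],
    )
    return response.choices[0].message.content

def build_deck(chunks: list[str]) -> list[str]:
    cards: list[str] = []
    for chunk in chunks:
        cards.extend(cards_for_chunk(chunk).splitlines())
    return cards
```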


Thank you for the detailed reply and a potential solution! Could the following plan work?

  1. User inputs a large set of information.
  2. My GPT makes a call to an API I set up.
  3. My API parses through the information in increments and creates flashcards.
  4. Finally, after parsing and card creation are complete, the cards are sent back to the GPT’s output for the user to see.
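Purely as an illustration of steps 2–4, here is what a minimal backend for such an Action might look like with FastAPI, reusing the chunking and per-chunk generation helpers sketched earlier in this thread; the endpoint path, request shape, and card format are all made-up examples, not a spec.

```python
# Rough sketch of the backend an Action could call (steps 2-4 of the plan).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CardRequest(BaseModel):
    text: str  # the large block of user-provided material

class CardResponse(BaseModel):
    cards: list[str]

@app.post("/flashcards", response_model=CardResponse)
def make_flashcards(req: CardRequest) -> CardResponse:
    cards: list[str] = []
    for chunk in chunk_by_tokens(req.text):                 # chunking helper from above
        cards.extend(cards_for_chunk(chunk).splitlines())   # one model call per chunk
    return CardResponse(cards=cards)
```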

Glad I could help!
Your approach sounds reasonable!

Of course, the drawback is that you have to manage the costs and the processing in the background.
But from what I can tell, this is exactly what a high-value GPT would look like.

On a side note:
You may want to protect yourself from users uploading large amounts of information and leaving you to carry all the costs for creating the flashcards.
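One simple way to do that, continuing the FastAPI sketch above, is to reject requests over a token budget before doing any generation (the 50,000-token cap is an arbitrary example):

```python
# Sketch of a simple cost guard: refuse inputs larger than a token budget
# you are willing to pay for, before making any model calls.
from fastapi import HTTPException
import tiktoken

MAX_INPUT_TOKENS = 50_000

def enforce_input_budget(text: str) -> None:
    enc = tiktoken.get_encoding("cl100k_base")
    if len(enc.encode(text)) > MAX_INPUT_TOKENS:
        raise HTTPException(status_code=413,
                            detail="Input too large for flashcard generation.")
```

You could call this at the top of the `/flashcards` endpoint, before any model calls are made.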


Understood, thank you! Your assistance has been invaluable.
