Hello! I’m building a custom GPT and want to use actions to fetch data listings from an external database. Despite GPT-4’s ability to handle up to 128k tokens, we’re hitting obstacles managing large data payloads during external API calls. Has anyone run into a similar problem?
What is the specific problem you’re seeing? My guess is it’s due to rate limits: you may not have sufficient tokens per minute to execute a large API call.
Thank you for your prompt response! The specific issue is that when an API call fetches a large amount of data, it fails inside the chatbot, apparently because of a limit on how much data it can handle at once or a rate limit on token processing. The data could be a substantial JSON response covering items like vehicle data, clothing, or real estate listings, and I’m looking for fresh, updated data. Are there any recommended approaches for handling such large payloads? Should I break the data calls into smaller chunks, or is there a setting I could adjust to allow larger amounts of data through?
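On the chunking idea: one common pattern is to paginate on the server side and let the GPT action request one page at a time. This is only a sketch under the assumption that you control the backend; the `page_size` value and the idea of exposing `limit`/`offset` query parameters in your OpenAPI schema are hypothetical, not anything the GPT builder mandates.

```python
def paginate(items, page_size):
    """Split a list of records into fixed-size pages.

    The action endpoint would serve pages one at a time (e.g. via
    hypothetical limit/offset query parameters), so no single JSON
    response exceeds the size the GPT can accept.
    """
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]
```

The GPT can then be instructed (in the action description or system prompt) to keep requesting the next page until the server reports there are no more results.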
Hi there. I’m having the same problem as OP: my custom GPT executes an action (makes an API call), but if the JSON response is too large it says something along the lines of the data being too large. Smaller responses all seem to come through just fine, so I know the problem isn’t in my OpenAPI schema.
Do GPTs currently use the new GPT-4 Turbo model, and if so, is the model’s output token limit of 4,096 the reason for this issue? Will this token limit ever be raised? Thank you.
Since GPT-4 Turbo is a “preview model”, there is a chance the output token limit will be increased, but there’s no guarantee as to when that will happen.
I’m not sure about actions in GPTs, as I haven’t tested them yet, but plugins had a cap of 100k characters on the response. Actions may share this limit.
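If that character cap does apply, one defensive approach is to have the backend trim its own response to fit under the budget before returning it. This is a sketch built on the assumption above (a 100k-character cap carried over from plugins, which is unconfirmed for actions); the `truncated` flag is a hypothetical convention for telling the model that more data exists.

```python
import json

MAX_CHARS = 100_000  # plugin-era cap; whether actions share it is an assumption

def cap_response(records, max_chars=MAX_CHARS):
    """Keep as many whole records as fit under the character budget.

    Returns a payload with the surviving records and a flag the model
    can use to decide whether to ask for more.
    """
    kept = []
    for rec in records:
        candidate = kept + [rec]
        # Measure with the longer literal ("false") so the final payload
        # can never exceed the budget regardless of the flag's value.
        body = json.dumps({"items": candidate, "truncated": False})
        if len(body) > max_chars:
            break
        kept = candidate
    return {"items": kept, "truncated": len(kept) < len(records)}
```

Truncating whole records (rather than cutting the JSON string mid-object) keeps the response valid JSON, which matters because the GPT has to parse it.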
The pivotal challenge for agent efficiency is transforming extensive, repetitive API output into concise tables before it is inserted into the prompt. This requires a mechanism for user input in the intermediate steps between the API call and prompt insertion. Ideally, the agent itself would autonomously filter out extraneous data, possibly guided by an additional filter description provided by the user, so that only relevant information ends up in the prompt.
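The condensing step described above could be sketched as a simple field whitelist applied before prompt insertion. Everything here is an assumption about how you might wire it up: the field names are hypothetical, and the user-supplied "filter description" is modeled as nothing more than the list of fields to keep.

```python
# Hypothetical whitelist standing in for the user's "filter description".
KEEP = ("id", "price", "title")

def to_table(records, keep=KEEP):
    """Condense verbose API records into compact pipe-delimited rows,
    dropping every field the user did not ask for."""
    header = " | ".join(keep)
    rows = [" | ".join(str(r.get(k, "")) for k in keep) for r in records]
    return "\n".join([header] + rows)
```

A tabular rendering like this is usually far cheaper in tokens than the raw JSON, since repeated key names and punctuation are paid for once in the header instead of once per record.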