Several issues with third-party APIs in my own GPTs

Dear all,

I was trying to connect different APIs to my own GPT.
I set up an OpenAPI schema that basically works and connects the API to my GPT.
I also entered the API key.

Here are the issues I got:

  1. Sometimes the GPT cannot connect to the API. The error says that allegedly no API key is stored. When I copy the API key into the form field again, it works. It also works when I simply paste the API key into the chat.

Is there a way to store the API key permanently in the GPT?

  2. The API often reports that too much data was queried. This is what I get:

[debug] Calling HTTP endpoint
{
  "domain": "stats.oecd.org",
  "method": "get",
  "path": "/data/{datasetId}",
  "operation": "getDatasetData",
  "operation_hash": "d52f10703f2f4abe66ebf79ac26cc7177bc77fd5",
  "is_consequential": false,
  "params": {
    "datasetId": "AIR_GHG",
    "startTime": "2021",
    "endTime": "2021"
  }
}
[debug] Response received
{
  "response_data": "ResponseTooLargeError"
}

Is there a way to restrict queries via prompting?

Here are two examples of APIs that mostly tell me that too much data was requested and the query was rejected:

URL: API Documentation
URL: dip.bundestag.de/documents/informationsblatt_zur_dip_api.pdf

Any help would be greatly appreciated. I am more of a journalist than an IT guy, so please do not overestimate my coding skills; there are none.

Best regards

Hi @Randolf, did you try playing around with dimensionAtObservation, explicitMeasure, detail or references, and managing those as variables to send with the prompt?

From the SDMX documentation (SDMX 2.1 Web Services Guidelines, 2013), see pages 14-15, 26, and 29.

Hi @maurizio.chiaro ,

thank you very much for your reply!

I think I understood the basic principle of your suggestion: I could restrict queries via the prompt itself.
Would it be possible to give an example prompt? :slight_smile:
Best regards
R

Via the Instructions in the Configure panel, not via the prompt. But it would only work if those parameters are in your schema.
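To illustrate what "having those parameters in your schema" could look like: the SDMX 2.1 guidelines define detail (with values full, dataonly, serieskeysonly, nodata) and dimensionAtObservation as data-query parameters. Below is a minimal sketch of the parameter list for the getDatasetData operation in an OpenAPI schema; it is an assumption based on the guidelines, not a copy of the actual schema, so the exact names would need to be checked against the OECD API docs.

```json
{
  "parameters": [
    { "name": "datasetId", "in": "path", "required": true,
      "schema": { "type": "string" },
      "description": "Dataset identifier, e.g. AIR_GHG" },
    { "name": "startTime", "in": "query",
      "schema": { "type": "string" } },
    { "name": "endTime", "in": "query",
      "schema": { "type": "string" } },
    { "name": "detail", "in": "query",
      "schema": { "type": "string",
                  "enum": ["full", "dataonly", "serieskeysonly", "nodata"] },
      "description": "How much data to return; smaller options reduce response size" },
    { "name": "dimensionAtObservation", "in": "query",
      "schema": { "type": "string" },
      "description": "Which dimension is attached at observation level" }
  ]
}
```

Once a parameter like detail is declared in the schema, an Instruction along the lines of "Always call getDatasetData with detail=serieskeysonly unless the user explicitly asks for observation values" could keep responses small enough to avoid ResponseTooLargeError.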


I have tried a custom GPT called GPT API Schema Builder a couple of times; it converts API documentation into OpenAPI schemas. You could try feeding it relevant sections of your API docs and have a chat with it to see whether it can troubleshoot the issue. If the schema test works in debug, you could ask it to create detailed Instructions on how to tailor the request and response correctly. I did this for one of my GPTs, and after a little back and forth it created the schema and instructions perfectly.

Thank you very much for the advice. I will test it and post the results here.
