Currently a ChatGPT plugin needs to implement a REST API server with an OpenAPI spec. Every GraphQL API automatically comes with formal, machine-readable documentation (via introspection queries). Considering that there is a rich ecosystem of tools which expose a GraphQL API (for instance, Hasura; disclosure: I work for them, but I'm speaking for myself), I think supporting GraphQL APIs in addition to REST/OpenAPI could be really powerful.
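To make the "machine-readable documentation" point concrete: any compliant GraphQL endpoint will answer a standard introspection query with its full schema. A minimal sketch in Python, using an abbreviated version of the introspection query (the full one also covers interfaces, enums, directives, etc.):

```python
import json

# Abbreviated introspection query; any compliant GraphQL endpoint
# answers this with a machine-readable description of its schema.
INTROSPECTION_QUERY = """
query IntrospectionSchema {
  __schema {
    types {
      name
      kind
      description
      fields { name type { name kind } }
    }
  }
}
"""

def introspection_payload() -> str:
    """Build the JSON body you would POST to a /graphql endpoint."""
    return json.dumps({"query": INTROSPECTION_QUERY})
```

POSTing that payload to the endpoint returns the schema as JSON, which is exactly the kind of formal spec that OpenAPI provides for REST.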
(a similar question was asked here, but it’s not exactly clear what they mean: /t/graphql-support-for-chatgpt-plugins/181748)
My workaround so far has been to feed the GraphQL schema as instructions to the GPT API. It works, but official support would be much easier to work with and would probably perform better in more ways than one.
It's by no means perfect at this stage, but I find it works about 90% of the time, and it could probably be improved further.
I basically specify a single OpenAPI endpoint that POSTs to /graphql. Together with this I send the schema as a system instruction (along with a general description of the API's use case and an example or two, just to give it something to work with).
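A rough sketch of what that single-endpoint OpenAPI spec might look like, written here as a Python dict (the title, path, and operation name are my own illustration, not taken from the poster's actual setup):

```python
# Minimal OpenAPI description exposing one generic endpoint that
# forwards an arbitrary GraphQL document. Names are illustrative.
openapi_spec = {
    "openapi": "3.0.1",
    "info": {"title": "GraphQL passthrough", "version": "1.0"},
    "paths": {
        "/graphql": {
            "post": {
                "operationId": "runGraphqlQuery",
                "requestBody": {
                    "required": True,
                    "content": {
                        "application/json": {
                            "schema": {
                                "type": "object",
                                "properties": {
                                    # The model fills in the query text;
                                    # the schema itself lives in the
                                    # system instruction.
                                    "query": {"type": "string"},
                                    "variables": {"type": "object"},
                                },
                                "required": ["query"],
                            }
                        }
                    },
                },
                "responses": {"200": {"description": "GraphQL response"}},
            }
        }
    },
}
```

The point of the single endpoint is that all the real "API surface" is described by the GraphQL schema in the system prompt; OpenAPI only needs to say "POST a query string here."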
The cool thing is that GPT practically corrects itself (if there are unmasked errors) when you feed the errors from the GraphQL API back to it.
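That feedback loop can be sketched as a small retry helper. This is my own illustration, not the poster's code: `ask_model` stands in for the LLM call and `execute_query` for the HTTP request to /graphql, both hypothetical callables supplied by the user.

```python
def run_with_retries(ask_model, execute_query, max_attempts=3):
    """Feed GraphQL errors back to the model until the query succeeds.

    ask_model(feedback)  -> a GraphQL query string (stands in for the
                            LLM call; feedback is None on the first try)
    execute_query(query) -> {"data": ...} or {"errors": [...]}, matching
                            the standard GraphQL response shape
    """
    feedback = None
    for _ in range(max_attempts):
        query = ask_model(feedback)
        result = execute_query(query)
        if not result.get("errors"):
            return result["data"]
        # Surface the server's error messages so the model can self-correct.
        feedback = "; ".join(e["message"] for e in result["errors"])
    raise RuntimeError(f"giving up after {max_attempts} attempts: {feedback}")
```

Because GraphQL servers return structured, descriptive validation errors ("Cannot query field X on type Y"), the model usually has enough signal to fix its own query on the next attempt.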
Obviously it helps a lot to give your queries descriptive names; naming things just became even more relevant.
Sorry, it’s a bit too entangled right now. But just to reiterate:
You need to feed the GraphQL schema to GPT yourself; it won't do the introspection for you, since it doesn't understand the concept of GraphQL at this time.
And then you need to use function calling to make the actual call to your API.
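For the function-calling part, a sketch of what the tool definition might look like (the function name, description, and parameter names are my own illustration; the general `{"type": "function", ...}` shape follows the OpenAI tools format):

```python
# Tool definition the model can invoke via function calling. The
# GraphQL schema itself still has to be supplied in the system prompt,
# since the model does not introspect the endpoint on its own.
graphql_tool = {
    "type": "function",
    "function": {
        "name": "run_graphql",
        "description": (
            "Execute a GraphQL query against the API whose schema "
            "is given in the system prompt."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "A valid GraphQL document",
                },
                "variables": {"type": "object"},
            },
            "required": ["query"],
        },
    },
}
```

When the model returns a `run_graphql` tool call, your code is responsible for actually POSTing the query to the endpoint and passing the response (or errors) back as the tool result.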
Hey Rober, I'm testing a similar use case, but I'm finding that the schema is too big to fit in the context.
It kind of works if I pass a subset of the schema, but that's not ideal.
Is it working for you because your schema is small enough, or have you looked into an alternative (embeddings, fine-tuning)?
For sure. If there's a way around it, that would be great; do you have a suggestion? The best outcome would of course be native support in function calling. Is there a way to get OpenAI to prioritize this?
We previously tried a similar approach: using RAG to retrieve the relevant parts of the schema and feeding errors back for self-correction. It generates better queries than a one-pass approach, but the problem is that it still doesn't work very well on complex schemas and can take multiple iterations (and thus has long latency).
Would love to see an update on the roadmap for official support.