Learnings from integrating a third-party API with GPTs

I created a Langchain GPT that interacts with a third-party API and answers questions about Langchain.

This post covers the learnings and nuances I discovered while building it.

• If you want to do RAG and interact with third-party data, you need to write instructions along the following lines.

“I will give you a query. You will take this query and pass it as the “query” param in the API. The API will return a response, from which you will extract the contexts key from the result and just display it to the user.”
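For reference, the data flow those instructions describe can be sketched in a few lines of Python. The response shape (a `contexts` list under a `result` key) follows the wording above; your API's actual schema may differ, so adjust accordingly.

```python
# Sketch of the extraction step the instructions ask the GPT to perform:
# take the API response, pull the `contexts` key out of `result`, show it.
# The response shape here is an assumption based on the instructions above.

def extract_contexts(response_json: dict) -> list:
    """Return the contexts list from the API result, or [] if missing."""
    return response_json.get("result", {}).get("contexts", [])

# A sample response shaped like the one the instructions describe:
sample = {"result": {"contexts": ["Langchain is a framework for LLM apps."]}}
print(extract_contexts(sample))
```

The GPT does this extraction itself from the Action's JSON response; the snippet just makes the expected shape concrete.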

• Next is configuring an action. When you start, you will have to write an OpenAPI spec for the API. You can use GPT-3.5 or GPT-4 itself to convert a curl call to the OpenAPI spec. I used the following prompt.

"I am giving you a curl call and a sample OpenAPI spec. You need to give me the correct OpenAPI spec for the curl call.

Curl: <curl_call_here>
Sample OpenAPI Spec: <sample_spec>
Output OpenAPI Spec:"

• Make sure that the output OpenAPI spec has the following:
• Proper schema for the request parameters
• Proper schema for the response object. This is necessary, and you will have to define which key from the response you want to use in your prompt. In my case, I have specified the context key inside the results key.
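To make the checklist concrete, here is roughly what such a spec looks like for a hypothetical GET /search endpoint (the server URL, operationId, and field names are made up for illustration; note how the response schema spells out the results → context key referred to above):

```yaml
openapi: 3.0.1
info:
  title: Docs Search API
  version: "1.0"
servers:
  - url: https://api.example.com
paths:
  /search:
    get:
      operationId: searchDocs
      summary: Search the docs and return relevant contexts
      parameters:
        - name: query
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Search results
          content:
            application/json:
              schema:
                type: object
                properties:
                  results:
                    type: object
                    properties:
                      context:
                        type: array
                        items:
                          type: string
```

Defining the response schema down to the key you care about is what lets your instructions reliably tell the GPT which part of the payload to display.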

• If you are using API-token-based auth and you are sending the auth token as “Auth Token <token_here>”, then use the following config


I am not a developer by trade. Would this method work as a copy/paste with code?

Hey, yes it should.
Feel free to tweak the instructions as per your use case.


Hi @taranjeet2529. That is very useful indeed. As an example of using the Actions feature: I have a working Python Flask RAG Q&A implementation that uses ChromaDB as a vector store, with similarity_search or a retriever to pull back the most relevant article from an Atlassian Confluence KB space via the Confluence REST API and a document loader.

Can I assume from what you are saying that, using your method here, I can get the AI Assistant to submit a user question via my Action’s API endpoint, with the question as a param, and that the Assistant’s Action will then successfully use the jsonify response my Flask app returns from that endpoint, with the answer to the question in the response (as it currently does)?

If this is true, then I can use the AI Assistant as a chatbot with text streaming, rather than a single Q&A using RAG in Flask, and allow the Assistant to summarise the response as required with further instructions to the GPT?

If so, you’ll have helped me achieve something massive, since my first assumption was that I would have to return all pages from Confluence. Instead you are showing me that I can call a server-hosted Flask endpoint to get a full streaming bot, powered by a GPT and fed by my document loader. Immense if so!
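The contract being described here is simple: the Action just needs a JSON-returning endpoint. A minimal sketch of that contract, with the retrieval step stubbed out and all names hypothetical (in the real app the stub would be a ChromaDB similarity_search, and `answer` would be wrapped in a Flask route returning `jsonify(...)`):

```python
# Sketch of the request/response contract a GPT Action expects from the
# endpoint. Retrieval is stubbed; names and the response shape are
# illustrative assumptions, not the poster's actual app.

def retrieve(question: str) -> str:
    # Stub standing in for vector-store retrieval over the Confluence KB.
    return "Most relevant KB article text for: " + question

def answer(question: str) -> dict:
    """Shape of the JSON body returned to the GPT Action.
    In Flask, roughly:
        @app.route("/ask")
        def ask():
            return jsonify(answer(request.args["question"]))
    """
    return {"results": {"context": [retrieve(question)]}}

print(answer("How do I reset my password?"))
```

As long as the endpoint returns JSON matching the response schema in the Action's OpenAPI spec, the GPT can consume it and summarise or stream the result per its instructions.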

My second question is: if I get this working, how might I then use the AI Assistant, and therefore my custom GPT, via a website or webhook for chained Q&A? And is this something I can make public or private within ChatGPT as well?

Webdadi solved the first of my questions above. ChatGPT didn’t output a working schema initially, but merging its version with a successfully working one provided the necessary template as context, so I posted my solution here in the community (see below). This was my topic:

How do you pass a user question as a param inside the OpenAI Spec schema

I just need a solution to part 2, and to resolve the domain verification bug in OpenAI’s GPT domain verification process. Not only can it lose a successful domain verification, but in my case it also fails to verify a visible TXT record for a non-pointing TLD with a CNAME pointing at the www subdomain. Not that this should make any difference, when verification works fine for Twilio and Google. Hopefully when OpenAI fix this I’ll get a chance to put something in the GPT marketplace.

Nice one @taranjeet, surely dabbling with this next.

I would really appreciate a Python Flask code version of the GPT as an AI Assistant, so I/we can run this API GPT outside of ChatGPT by creating it as a standalone OpenAI Assistant.

OpenAI’s API has now moved on from the previous chat completions interface in version 0.28 to the new interface from version 1.0.0 onwards. There are some examples of a Python assistant that don’t use a REST API, and I can only find file-oriented examples.

I’d really like to know how to build a Python AI Assistant using the REST API, with and without files, to achieve effectively the same output I get with the GPT using its REST API integration. I assume this is possible using the Functions and Code Interpreter capabilities built into Assistants.

Is there perhaps a prompt that would generate the Python Flask code needed to achieve this, using an already-working GPT that harnesses the REST API successfully? Since the Assistant is already working with the API connected up, it’s really a conversion to Flask code that’s wanted!
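For what it's worth, a GPT Action translates fairly directly into a function tool on the Assistants side. A sketch of what that tool definition could look like (the name, description, and endpoint semantics are hypothetical; your code, not OpenAI, performs the actual HTTP call when a run requests the function):

```python
import json

# Function-tool definition mirroring a GPT Action: the model asks to call
# search_docs(query); your application code then makes the REST call itself
# and submits the JSON result back to the run. All names here are
# illustrative assumptions, not the poster's actual setup.
search_tool = {
    "type": "function",
    "function": {
        "name": "search_docs",
        "description": "Search the knowledge base and return relevant contexts",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The user question to search for",
                },
            },
            "required": ["query"],
        },
    },
}

print(json.dumps(search_tool, indent=2))
```

With the v1.x Python SDK, a dict like this would be passed in the `tools` list when creating the assistant; the key difference from a GPT Action is that the OpenAPI spec and HTTP call move out of the platform and into your own code.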


Keep asking, you never know 🙂

The OpenAI team might just release SDKs for popular frameworks.