How do you pass a user question as a param inside the OpenAPI spec schema

Can anyone tell me how you would implement an OpenAPI spec to effectively operate as RAG?

Thus the AI Assist GPT Action takes any user’s question and, since I am using my own API with an endpoint e.g. /search, the schema passes a query-string param such as ?input_text=[the user’s question in a chat] to the /search endpoint. The API’s function then performs a similarity_search for the user’s question and returns a JSON response, which is passed back to the GPT to use as context (so the RAG part). The instructions would then determine whether the response is any good and use it if it is!

So in effect and summary, I am using the GPT’s AI Assistant API Action to perform RAG. My user’s query is sent to my API endpoint and the response is returned as context for the GPT to answer, based on the instructions in the GPT telling it how to do so.

Is that possible? Can anyone provide me with a working sample of the above using an OpenAPI spec performing a standard REST GET request to an API and passing the response back to the GPT, which will use the ‘trained’ instructions to output the answer in a way that is acceptable through the fine-tuning?

I’m sure there are plenty of people who have a similar use case for this solution that I’m looking for.

Thank you in advance.

If you are willing to pay AI fees on demand for users’ GPT chats, then sure, it can be done.

You’d program with the plugin-like method, where, along with your GPT instructions, you inform the AI that its main purpose is to answer using that API’s knowledge and that the API must always be called before answering, using a fully articulated question (and more) based on the user’s recent conversation and their latest question.

(fine-tuning is a term reserved for customizing a new AI model with machine learning, BTW)

Thank you for your reply. Do you have an example of this in action or can you point me to one by any chance?

BTW, noted about fine-tuning — a poor choice of words, as I’m not providing a list of questions and answers to fine-tune a model with. I am creating a GPT, not a model! Although I am trying to achieve the same sort of thing!

The team at Webdadi have cracked it, and unbelievably easily! Ha. So the trick is to have a schema that works. They combined a strategy of using a prompt to convert a cURL request into a schema with an example of a working GET request schema, and the combination, together with an instruction to the GPT, produced a perfect working result.

Here is a working sample schema, WITHOUT Authentication, to test your use case as either an example or to merge with a prompt to create a schema from a CURL GET request.

You’ll find plenty of help resolving authentication in the Schema using Custom Authorisation in the community here.

This API sends a GET request to my API using the query-string param input_text. The API is simply a hosted Python Flask app (running under gunicorn) with a Flask endpoint /search that is accessible over SSL. The Flask app performs a vectorstore similarity_search against ChromaDB (a locally cached vector DB) and returns the top-scoring answer using jsonify, which is consumed via the GPT schema; the schema, its instructions, and the Action together provide decent inference. Here is my working schema:

{
  "openapi": "3.1.0",
  "info": {
    "title": "WebdadiAPI",
    "version": "0.1.0"
  },
  "servers": [
    {
      "url": "MyFQDN/"
    }
  ],
  "paths": {
    "/search": {
      "get": {
        "summary": "Search GPT-generated content",
        "operationId": "searchGPTContent",
        "parameters": [
          {
            "name": "input_text",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            }
          }
        ],
        "responses": {
          "200": {
            "description": "Search results",
            "content": {
              "application/json": {
                "schema": {
                  "type": "object",
                  "properties": {
                    "results": {
                      "type": "array",
                      "items": {
                        "type": "string"
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "HTTPValidationError": {
        "properties": {
          "detail": {
            "items": {
              "$ref": "#/components/schemas/ValidationError"
            },
            "type": "array",
            "title": "Detail"
          }
        },
        "type": "object",
        "title": "HTTPValidationError"
      },
      "ValidationError": {
        "properties": {
          "loc": {
            "items": {
              "anyOf": [
                {
                  "type": "string"
                },
                {
                  "type": "integer"
                }
              ]
            },
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      }
    }
  }
}
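For anyone wanting to test the server side of this, here is a minimal sketch of what the Flask /search endpoint described above could look like. The retriever is stubbed with a trivial keyword match so the example is self-contained — the real service would call similarity_search on a ChromaDB vector store — and the sample documents and function names are illustrative, not from the original post.

```python
# Minimal sketch of a /search endpoint matching the schema above.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the locally cached vector DB; in the real service this
# would be a ChromaDB collection queried via similarity_search.
DOCS = [
    "Webdadi provides property software for letting agents.",
    "Support hours are 9am to 6pm, Monday to Friday.",
]

def similarity_search(query: str, k: int = 1) -> list[str]:
    # Naive scoring: rank documents by the number of shared words.
    q_words = set(query.lower().split())
    scored = sorted(
        DOCS,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

@app.route("/search")
def search():
    input_text = request.args.get("input_text", "")
    if not input_text:
        # Mirrors the HTTPValidationError shape in the schema's components.
        return jsonify({"detail": [{"loc": ["query", "input_text"],
                                    "msg": "Field required",
                                    "type": "missing"}]}), 422
    # Return the top-scoring answer under the "results" key,
    # matching the 200 response schema above.
    return jsonify({"results": similarity_search(input_text)})

if __name__ == "__main__":
    # In production this runs under gunicorn behind SSL.
    app.run()
```

You can exercise it locally with Flask’s test client (or plain curl against the running server) before pointing the GPT Action at it.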

Once you have this schema working and returning responses on a TEST basis, you’re away — just update your GPT instructions to something along the lines of:

Instructions example:

The GPT is designed as a first-line support service agent, specializing in answering customer questions. It utilizes an API via the GPT’s custom Action. The user will give you, the GPT, a query. You will take this query and pass it as the “input_text” param to the API. The API /search endpoint will return a response, from which you will extract the “results” key and display it to the user. The GPT’s responses should be friendly and efficient, providing clear summaries of steps for problem resolution. It should engage in conversations, seeking further context when necessary, and answer customer questions thoroughly.

How do you pass a default parameter? I have an api_token in my query that I want to pass to get the data. For now, I have added it to my instructions to tell the model to use the token.
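One option (a sketch based on standard OpenAPI, not verified against the Actions builder; the placeholder token value is illustrative) is to declare the token as a query parameter in the schema with a default value, alongside input_text in the parameters array:

```json
{
  "name": "api_token",
  "in": "query",
  "required": true,
  "schema": {
    "type": "string",
    "default": "YOUR_TOKEN_HERE"
  }
}
```

Bear in mind a static token embedded in the schema or instructions is visible to anyone who coaxes the GPT into revealing it; for real credentials, the Action’s built-in API-key authentication (see the Custom Authorisation discussions mentioned earlier in the thread) is the safer route.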
