Dynamic, real-time knowledge base updating via API or URL

Could it actually be useful to have a knowledge base updated in real time? For example, if I wanted stock prices or something like that.

Alternatively, a marketplace for custom GPT developers, listed in a directory within OpenAI, could also work.

I don’t have the skills to implement an API schema, but I suspect any real-time data feed via API, or even a URL refresh, can be done.

In either event, I would recommend that the user interface cater to a wider audience, including those with no or low-level coding skills.

I want to increase productivity and not have to constantly update my knowledge base.

![Could not find a topic already in support - my search resulted in nothing](upload://71ky0VHypExSql06jq9HQkcD3Hb.png)

Every API has a specific path, such as `v1/quote`, and a method (GET, POST, PUT, etc.).

You can refer to the API provider’s documentation, then copy the server URL, method, and path that you wish to use.

Here’s an example:

  • API URL: `https://finnhub.io/api/`
  • Method and path: `GET /quote`
  • JSON body: none required for this endpoint

If the documentation specifies that your request parameters should be sent within the path instead of in a JSON body, the path would appear as follows:

`/quote?symbol=AAPL`

In that case, you don’t need to send a JSON body.

In this case, “AAPL” is a variable. If you want the GPT to search for any symbol, replace it with `{variable}` or `{symbol}`.
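To make the request pattern above concrete, here is a minimal Python sketch of how the symbol variable ends up in the query string. The base URL and the `token` authentication parameter are assumptions based on Finnhub’s public docs; check your own provider’s documentation for the exact names.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed server URL from the provider documentation
BASE_URL = "https://finnhub.io/api/v1"

def build_quote_url(symbol: str, token: str) -> str:
    """Build the GET /quote request URL, filling in the {symbol} variable."""
    return f"{BASE_URL}/quote?{urlencode({'symbol': symbol, 'token': token})}"

def fetch_quote(symbol: str, token: str) -> dict:
    """Send the request and parse the JSON response (needs a valid API key)."""
    with urlopen(build_quote_url(symbol, token)) as resp:
        return json.load(resp)

print(build_quote_url("AAPL", "YOUR_API_KEY"))
```

When a GPT Action calls this endpoint, it does exactly this substitution: the model picks a value for `symbol` and the Action layer builds the same URL.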

So, once you’ve determined what you need (server URL, method, path, and JSON body if required), you can go to ChatGPT, paste it, and say:

Build API schema as JSON.
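For reference, the result should look roughly like this minimal sketch in the OpenAPI format that GPT Actions expect. The field values here are illustrative assumptions built from the Finnhub example above (e.g. the `getQuote` operation ID is made up); take the real details from your provider’s docs.

```json
{
  "openapi": "3.1.0",
  "info": { "title": "Stock Quote API", "version": "1.0.0" },
  "servers": [{ "url": "https://finnhub.io/api/v1" }],
  "paths": {
    "/quote": {
      "get": {
        "operationId": "getQuote",
        "summary": "Get a real-time quote for a stock symbol",
        "parameters": [
          {
            "name": "symbol",
            "in": "query",
            "required": true,
            "schema": { "type": "string" }
          }
        ],
        "responses": {
          "200": { "description": "Quote data as JSON" }
        }
      }
    }
  }
}
```

You paste this JSON into the Actions section of the GPT editor; the `{symbol}` variable from earlier becomes the `symbol` parameter the model fills in at call time.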

If your API provider is well-known, you can simply ask ChatGPT. For example, you could say:

“Build an API schema as JSON for polygon.io. Include the server URL, path, etc.”

With GPT, you don’t need special skills!

Alrighty, I’ll give it a go and let you know.

Your steps are clear enough and yes, I know what Json is :ok_hand:

Morning. How often do you need the knowledge base to update? And is the knowledge from a public or private source?

I can define a path for you if you share a few specifics.

In this theoretical model, it would ideally be in real time, but as a starting point, and to facilitate deployment of a working model, we could even consider daily or hourly updates and then work down to real time. What variables, if any, will determine how often we can pull the data with API calls?

In particular, I’m thinking of practical information such as interest rates, and beyond that, news/updates/developments in a specific area of interest that would allow the custom GPT to perform with relevant information of the day.

There are a ton of practical applications that could be leveraged.

What’s the next step?

Hey Dany, OK, I think I have a solution. You can bring the sources into the suefel.com knowledge base, then copy the FastAPI endpoints and add them to Actions in a GPT. The refresh rate can be tuned. Let’s see how close we can get to “real time”.

I’ll follow up on the DM sent as well.