Custom GPTs: updating answers when knowledge files change

I have created GPTs for my work and uploaded some PDFs that contain useful information for me. There have been new updates to my PDFs, and I want my GPT to reflect them.
Do I need to update the whole PDF and re-upload it again, or can I just inform the GPT builder? (I have tried updating it that way many times, but it still gives me answers based on the old PDF.) Is there a way I can update the knowledge without uploading a new file?

2 Likes

Morning. I ran into the same problem.

What I’ve done is save all my knowledge in a simple database and reference that knowledge via an Action (instead of uploading the PDF directly).

The third-party database indexes the files directly.

That way, when you update the PDF, you simply upload the new copy to the database, it re-indexes it, and the new context is passed to the GPT.

Will that approach work for you? (How many GPTs and how many docs total are we talking about?)

2 Likes

No, I don’t think that there is any other straightforward way.

You need to replace your GPT's knowledge files with the latest PDFs every time you want your GPT to be aware of the latest updates.

You could also keep uploading your updated PDFs without deleting the older ones, but in that case your GPT will take longer, since it has to go through more documents, and it might also hallucinate and provide wrong info because of the different versions of your documents.

One other way I can think of is to divide your documentation into various parts and host them on a website or something. Each part will have a different URL.

Create a function for your GPT to access those URLs.

Then you can keep updating your documentation; the URLs will not change, and your GPT will be able to get updated info each time.

Happy coding 🙂

1 Like

I have only one GPT and two docs for now.
Thanks, I’ll look at this approach.

Create a function for your GPT to access those URLs.

Newbie questions: could you elaborate on what that would look like? Could it be used as a potential workaround for document upload size/volume limits?

Hi,
Do you have a sample of how to do that?

@hudaiban @eric8 @yosi

I think I figured out a way to do this, as I was tired of re-uploading updated versions of documents. The process that is working for me is:

  1. Build a basic FastAPI backend (Python) that can ingest and store files of any type.
  2. Expose a GET route in the API with a parameter for a specific doc.
  3. Create an OpenAPI Schema for the API and use that in the Actions (not Knowledge) of the GPT.
  4. Handle all refresh and monitoring of the knowledge sources in the API. This way the knowledge the GPT accesses is always fresh. (A minimal sketch of steps 1 and 2 follows below.)
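
For example, steps 1 and 2 could look roughly like this. This is only a sketch: the app name, routes, and docs folder are placeholders, not a fixed design.

from pathlib import Path

from fastapi import FastAPI, File, HTTPException, UploadFile
from fastapi.responses import PlainTextResponse

app = FastAPI(title="GPT knowledge backend")
DOCS_DIR = Path("docs")
DOCS_DIR.mkdir(exist_ok=True)

@app.post("/docs/{name}")
async def ingest_doc(name: str, file: UploadFile = File(...)):
    # Step 1: ingest or refresh a document under a stable name.
    (DOCS_DIR / name).write_bytes(await file.read())
    return {"stored": name}

@app.get("/docs/{name}", response_class=PlainTextResponse)
def get_doc(name: str):
    # Step 2: the GET route the GPT Action calls, with the doc name as the parameter.
    path = DOCS_DIR / name
    if not path.exists():
        raise HTTPException(status_code=404, detail="unknown document")
    return path.read_text(errors="ignore")

For step 3, FastAPI serves its generated schema at /openapi.json, which you can adapt and paste into the GPT's Actions.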

LMK if this works for you or if you’d like further details.

Hi @eric8 and @yosi!

I created this function to add GIF support to my GPT. You can create a similar function to access your URLs.

{
  "openapi": "3.1.0",
  "info": {
    "title": "Get gifs",
    "description": "Retrieves as many trending gifs as you want",
    "version": "v1.0.0"
  },
  "servers": [
    {
      "url": "https://api.giphy.com/v1"
    }
  ],
  "paths": {
    "/gifs/trending?api_key=<api_key>": {
      "get": {
        "description": "Get trending gifs",
        "operationId": "GetTrendingGifs",
        "parameters": [
          {
            "name": "q",
            "in": "query",
            "description": "The search string related to your gifs",
            "required": true,
            "schema": {
              "type": "string"
            }
          },
          {
            "name": "limit",
            "in": "query",
            "description": "Number of gifs you want",
            "required": true,
            "schema": {
              "type": "integer"
            }
          }
        ],
        "deprecated": false
      }
    },
    "/stickers/trending?api_key=<api_key>": {
      "get": {
        "description": "Get trending stickers",
        "operationId": "GetTrendingStickers",
        "parameters": [
          {
            "name": "q",
            "in": "query",
            "description": "The search string related to your stickers",
            "required": true,
            "schema": {
              "type": "string"
            }
          },
          {
            "name": "limit",
            "in": "query",
            "description": "Number of stickers you want",
            "required": true,
            "schema": {
              "type": "integer"
            }
          }
        ],
        "deprecated": false
      }
    }
  },
  "components": {
    "schemas": {}
  }
}

In @hudaiban’s case, or any similar case where the knowledge needs to be updated frequently, there are two ways:

  1. If the knowledge can be hosted publicly, then you can create a function like this:
{
  "openapi": "3.1.0",
  "info": {
    "title": "Get updated knowledge",
    "description": "Retrieves updated knowledge",
    "version": "v1.0.0"
  },
  "servers": [
    {
      "url": "<blogging website base url example: https://yourdomain.com/blogs>"
    }
  ],
  "paths": {
    "/blog-page-with-some-knowledge-1": {
      "get": {
        "description": "Get updated details on my project",
        "operationId": "GetUpdatedProjectDetails",
        "parameters": [
          {
            "name": "<parameter name if any>",
            "in": "query",
            "description": "The parameter value description",
            "required": true,
            "schema": {
              "type": "string"
            }
          }
        ],
        "deprecated": false
      }
    },
   "/blog-page-with-some-knowledge-2": {
      "get": {
        "description": "Get updated details on my some other portion of my project",
        "operationId": "GetUpdatedProjectDetailsOnSomeProtion",
        "parameters": [
          {
            "name": "<parameter name if any>",
            "in": "query",
            "description": "The parameter value description",
            "required": true,
            "schema": {
              "type": "string"
            }
          }
        ],
        "deprecated": false
      }
    }
  },
  "components": {
    "schemas": {}
  }
}

You can use any blogging platform to host your content, like Blogger, WordPress, or your own custom website. Anything will work.

  2. If your knowledge is private, then you need to implement authentication in your function (see the sketch below).
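
On the backend side, a minimal sketch of that could look like this. The header name, key, and route are placeholders; in the GPT builder you would set the Action's Authentication to "API Key" and enter the same key there.

import os

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()
API_KEY = os.environ.get("GPT_ACTION_API_KEY", "change-me")

def require_key(x_api_key: str = Header(default="")):
    # Reject any call that does not carry the shared secret.
    if x_api_key != API_KEY:
        raise HTTPException(status_code=401, detail="invalid API key")

@app.get("/knowledge/latest", dependencies=[Depends(require_key)])
def latest_knowledge():
    # Only authenticated callers get the private content.
    return {"content": "latest private knowledge goes here"}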

1 Like

Just pointing out that using Giphy via API is quite expensive now.

But sure, it was a good example.

Thank you. This is exactly what I have done, but my knowledge is a CSV dataset of sales transactions. If I try to fetch more than a day of data, the action fails with a "response too large" error.
If I upload the file to Knowledge beforehand, it will then try to fetch only the additional dates after the upload, but I want that fetch to be chunked into one-day segments so it is small enough not to fail.
Also, the action-fetched data is only available for the current session. Even if I ask for it to be saved to the knowledge, the sandbox where it is saved expires after 3 hours, so the fetch needs to happen every day, or I have to manually upload the up-to-date file, since there is no API to script that upload. So it looks like I am stuck with these limitations.

Hi @yosi

In that case you may create an API and add a date or day parameter, like pagination, so that the GPT has to call the function multiple times to get the data. You can do the same thing with the blogging website; you just need to adjust the URLs.

If the data for a single day is still too large, then you have to reduce the amount of data, either by preprocessing it or by trimming what you keep per day.

You need to keep the data per response limited, but not too condensed either, otherwise the model might not consider some of the data.
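
For example, the API side could look roughly like this. The transactions.csv file, its date column, and the route are just placeholders for whatever your data actually looks like.

import csv

from fastapi import FastAPI, HTTPException

app = FastAPI()

@app.get("/transactions")
def transactions_for_day(date: str):
    # Return only one day of rows so each Action response stays small.
    with open("transactions.csv", newline="") as f:
        rows = [row for row in csv.DictReader(f) if row.get("date") == date]
    if not rows:
        raise HTTPException(status_code=404, detail=f"no rows for {date}")
    return {"date": date, "count": len(rows), "rows": rows}

The GPT can then call the action once per day segment (date=2024-01-01, then date=2024-01-02, and so on) instead of pulling the whole dataset in one response.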

Thanks! @jakke.lehtonen

I thought the Giphy API was free of cost. I am actually using the non-production API key just to try it out; I don’t know if the production API key is chargeable. 🫣

Quote from a day-old email from Giphy:

You will have two options to choose from:

  1. GIPHY Pro:
  • Premium access to GIPHY’s API, which includes:
    • Ad-Free experience
    • Dedicated 24/7 dev support
    • Full library access (GIFs, Stickers, Clips, Emoji, etc.)
  2. GIPHY SDK:
  • Continued free access to our services via our SDK, which includes:
    • Access to GIFs and Stickers
    • Sponsored content via GIPHY Ads
1 Like

I’m trying to build a GPT based on my OneNote content as well. With this API-calling approach, would ChatGPT query the content provider’s API each time I ask? Would it be slow, since the API needs to return a relatively large amount of data?

Is there an API for programmatically updating the GPT’s knowledge files, so that I can set up a cron job to update my latest notes at night?

If it’s CSV data, why not automatically convert it into SQL storage and then use an API for SQL retrieval? This approach seems better.
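
Roughly something like this, assuming a sales.csv file and a local SQLite database; the file, table, and column names are placeholders.

import csv
import sqlite3

from fastapi import FastAPI

DB = "sales.db"

def load_csv(path: str = "sales.csv") -> None:
    # One-off (or scheduled) conversion of the CSV into a SQL table.
    with open(path, newline="") as f, sqlite3.connect(DB) as con:
        rows = list(csv.DictReader(f))
        con.execute("DROP TABLE IF EXISTS sales")
        con.execute("CREATE TABLE sales (date TEXT, amount REAL)")
        con.executemany(
            "INSERT INTO sales VALUES (?, ?)",
            [(r["date"], r["amount"]) for r in rows],
        )

app = FastAPI()

@app.get("/sales")
def sales_for_day(date: str):
    # SQL retrieval endpoint the GPT Action can call with a date parameter.
    with sqlite3.connect(DB) as con:
        con.row_factory = sqlite3.Row
        rows = con.execute(
            "SELECT date, amount FROM sales WHERE date = ?", (date,)
        ).fetchall()
    return {"date": date, "rows": [dict(r) for r in rows]}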

Welcome to the forum.

Yes, you would connect your OneNote content via API to a GPT Management System. The data would be indexed on that system, and an automation to auto-refresh the index every evening would be set up there.

The indexed knowledge is then accessed via an OpenAPI schema through Actions in the GPT.
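
For the nightly refresh part, a rough sketch could look like this. It assumes you already have a Microsoft Graph access token for OneNote and a simple local SQLite index; the token handling and table are placeholders, not any particular product's API.

import sqlite3

import requests

GRAPH_TOKEN = "..."  # obtained via your normal Microsoft Graph OAuth flow

def refresh_index(db_path: str = "onenote_index.db") -> None:
    headers = {"Authorization": f"Bearer {GRAPH_TOKEN}"}
    pages = requests.get(
        "https://graph.microsoft.com/v1.0/me/onenote/pages", headers=headers
    ).json().get("value", [])
    with sqlite3.connect(db_path) as con:
        con.execute(
            "CREATE TABLE IF NOT EXISTS pages (id TEXT PRIMARY KEY, title TEXT, html TEXT)"
        )
        for page in pages:
            # Pull each page body and upsert it into the local index.
            html = requests.get(page["contentUrl"], headers=headers).text
            con.execute(
                "INSERT OR REPLACE INTO pages VALUES (?, ?, ?)",
                (page["id"], page.get("title", ""), html),
            )

if __name__ == "__main__":
    # Schedule from cron each evening, e.g.: 0 23 * * * /usr/bin/python refresh.py
    refresh_index()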

Will that workflow work for you?