POST/GET data from GPTs (plugin) to server-side API

Has anyone actually succeeded in passing POST/GET data from GPTs (plugins) to their own server-side API? I am failing every time. I am trying to pass a user-uploaded image: GPT asks for permission as expected, but on the server side the POST/GET data is empty. Do you have any suggestions? Does this feature actually work?

3 Likes

It was working for me about an hour ago but it’s now failing for every outbound request with the generic error “error talking to FQDN”.

It’s working fine for me. I’m doing GET/POST actions to an API of an ERP system in the cloud.

1 Like

It’s been a bizarre afternoon. After deleting the action and recreating it, the generic error has gone away and I can POST/GET against my server again.

1 Like

Nice. :+1: Does your API also store binary data, for example pictures? I tried sending the image to my server both as a POST URL parameter and as POST form-data binary. So far no luck: the API is called, but no data is passed.

1 Like

I do have a vision endpoint that downloads a picture from a URL, base64-encodes it, and sends it over for processing. Seems to be working OK.
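
Roughly, that kind of endpoint can be sketched in PHP like this (a simplified sketch, assuming the cURL extension is available; the field name "image_url" and the processing hand-off are placeholders, not the actual code):

<?php
// Sketch: receive a JSON body with an image URL, download the image,
// base64-encode it and hand it off for processing.
header('Content-Type: application/json');

$body     = json_decode(file_get_contents('php://input'), true);
$imageUrl = $body['image_url'] ?? null;

if (!$imageUrl) {
    http_response_code(400);
    echo json_encode(['error' => 'image_url missing']);
    exit;
}

// Download the picture from the given URL.
$ch = curl_init($imageUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$imageData = curl_exec($ch);
curl_close($ch);

if ($imageData === false) {
    http_response_code(502);
    echo json_encode(['error' => 'could not download image']);
    exit;
}

// Base64-encode the binary and pass it to whatever does the processing.
$encoded = base64_encode($imageData);
echo json_encode(['status' => 'ok', 'bytes' => strlen($imageData)]);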

It’s very easy, once you have set it up right.

The thing is that GPT also complains when the data it received is correct, but it thinks it’s not.

I was testing with an API about “fish”, and to test it I changed the fetched data from “fish” to “dogs”.

Then GPT said “something is wrong with the data”, but technically it was correct. It just tries to act smart.

This is a nice boilerplate: just change the URL and PATH, and set up your server (at that endpoint) to spit out some JSON data. Execute the operationId in the chat (getFacts) and the GPT will “ping” your server, fetch the data and write it out.

{
  "openapi": "3.0.0",
  "info": {
    "title": "Cat Facts API",
    "description": "API for retrieving cat facts and user data from the Cat Facts website.",
    "version": "1.0.0"
  },
  "servers": [
    {
      "url": "https://catfact.ninja"
    }
  ],
  "paths": {
    "/facts": {
      "get": {
        "summary": "Retrieve and query facts",
        "operationId": "getFacts",
        "responses": {
          "200": {
            "description": "A list of cat facts",
            "content": {
              "application/json": {
                "schema": {
                  "type": "array",
                  "items": {
                    "$ref": "#/components/schemas/Fact"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

It will fetch this data:

catfact.ninja/facts

2 Likes

Thank you, this is literally the only way I got GPT Actions to work.

Yeah, I used the example as… an example. The documentation from OpenAI is not very good.

This is my approach for a simple test set-up:

{
  "openapi": "3.0.0",
  "info": {
    "title": "News Facts API",
    "description": "API for retrieving news facts.",
    "version": "1.0.0"
  },
  "servers": [
    {
      "url": "https://MYDOMAIN.com"
    }
  ],
  "paths": {
    "/gpt": {
      "get": {
        "summary": "Retrieve and query news facts.",
        "operationId": "get_news",
        "responses": {
          "200": {
            "description": "A list of news",
            "content": {
              "application/json": {
                "schema": {
                  "type": "array",
                  "items": {
                    "$ref": "#/components/schemas/Fact"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

Three things are important for this code:

  1. The URL (the base name of your website / server)
  2. The path (/gpt in this example)
  3. The operationId (get_news)

Then you set up (at your server) something like this:

https://MYDOMAIN.com/gpt/

The “index.php” in this folder (I write PHP) can do whatever you want. I let it fetch the user agent (the OpenAI bot) and spit out a date plus some news headlines I fetch with cURL (just to test things out).
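
Something along these lines (a simplified sketch; the headlines URL is a placeholder, not the real source):

<?php
// Sketch of such an index.php: echo the caller's user agent, a timestamp,
// and some headlines fetched with cURL, all as JSON.
header('Content-Type: application/json');

// The user agent of the caller (the OpenAI bot when the action fires).
$userAgent = $_SERVER['HTTP_USER_AGENT'] ?? 'unknown';

// Fetch some headlines with cURL, just to have data to return.
$ch = curl_init('https://example.com/headlines.json');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$headlines = curl_exec($ch);
curl_close($ch);

echo json_encode([
    'user_agent' => $userAgent,
    'timestamp'  => date('c'),
    'headlines'  => $headlines !== false ? json_decode($headlines, true) : [],
]);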

In the chat, I made a conversation starter like GO! (which acts as a shortcut for the custom command).

And in the custom instructions I said:


After the command "GO!" you show the data received from the operationID "get_news" in your "actions" API.

Show the original output in a styled format (with headers, bullets, etc...), strip tags, keep all the items (also "user agent" and "timestamps" at the beginning of the data).

Don't ask questions, just do it and show the result.

So when I enter “GO!” (or press the shortcut underneath the textarea in the chat), GPT will use GO! as a hook to execute get_news.

This action goes to https://MYDOMAIN.com/gpt/

And the file there returns something like the image below. The thing is, it is GPT, so it tries to make things up, turns the received data into full sentences, and even tries to style it (as HTML).

Of course, the JSON-formatted API code used here is just a test; not all of it is needed.

But you can also “send things” with this code; that data is received at your server, and the server can do things with it and send back other data.

That is fine. Simple API calls work for me as well. But has anyone succeeded in sending a user file to the server side, like an image, for example? Actually, when I chatted with the ‘Configure’ chat and asked if forwarding a user image is allowed, it said it’s not :slight_smile: So, maybe it’s not supposed to work that way.

Maybe you can ask GPT to base64-encode the uploaded image, send that data as text to your API and read it there.
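
The receiving side could then look roughly like this (a sketch only; the field name "image_base64" is an assumption):

<?php
// Sketch: read the JSON body, decode the base64 string and write it to a file.
header('Content-Type: application/json');

$body    = json_decode(file_get_contents('php://input'), true);
$encoded = $body['image_base64'] ?? '';

// Strict decoding: returns false if the string is not valid base64.
$binary = base64_decode($encoded, true);
if ($binary === false) {
    http_response_code(400);
    echo json_encode(['error' => 'invalid base64']);
    exit;
}

file_put_contents(__DIR__ . '/upload.bin', $binary);
echo json_encode(['status' => 'stored', 'bytes' => strlen($binary)]);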

But that won’t work for binaries.

1 Like

Hey, can we connect? I’ve been reading your posts, and I’m trying to find someone who can talk me through creating actions and their potential. Would be great to hear back.

Best is to talk here, so everyone with the same troubles can learn.

And I am not “that good”, I just try out an awful lot of things (until it works…).

What about passing bearer tokens to the API? Has anyone gotten that to work? It is not working for me, and I am pretty sure my OpenAPI spec is correct. If anyone has an example of a working API (I am using GitHub’s API directly), that would be greatly appreciated. Auth with bearer tokens doesn’t seem to be working.

Yup, I tried base64, binary and imageURL as well. It understands other fields in the POST request but ignores the file upload.

1 Like

Maybe it is sandboxed and the system doesn’t have (direct) access to it.

Who knows, maybe in the future.

We’ve been trying this as well, but no joy yet. Can’t seem to get multipart POST requests working at all, even without the file :person_shrugging:
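
For reference, this is the kind of multipart requestBody we have in the spec (path, operationId and field names are illustrative, not a confirmed working setup):

"/upload": {
  "post": {
    "summary": "Upload an image with a caption",
    "operationId": "uploadImage",
    "requestBody": {
      "required": true,
      "content": {
        "multipart/form-data": {
          "schema": {
            "type": "object",
            "properties": {
              "caption": { "type": "string" },
              "file": { "type": "string", "format": "binary" }
            }
          }
        }
      }
    },
    "responses": {
      "200": { "description": "Upload result" }
    }
  }
}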

Having the same issue: multipart doesn’t work for sure, and when sending the image as base64 it seems the string is truncated, so I’m not getting a proper image as a result.

The only thing that can be sent is a URL in the POST request, with the ‘application/json’ header, but that doesn’t help, because I would like to use the image the user uploaded to the GPT.
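
The body that does go through is plain JSON, something like this (field name is just illustrative):

{
  "image_url": "https://example.com/some-image.jpg"
}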

EDIT: it seems only the first 500 characters of the base64 string are sent.

I see your pain :slight_smile:
If OpenAI’s solution architects are reading this, please consider developing a solution where user consent is obtained for each external file upload. For instance, a prompt could appear stating, ‘An external API is requesting permission to upload your file.’ Accompany this with options like (Button: Allow) / (Button: Deny). Also, implement a graphical interface that clearly displays which file will be shared and with which domain. For example, it could show ‘MyDinner.jpg’ → being transferred to → ‘salad-eating-championship.com’.

Currently, my thought is to create a workaround using an external upload URL and guide users to an external web page to facilitate image uploads.

1 Like

So, has anyone succeeded in sending the file the user submitted to a server?