Can we get official confirmation that we can only have one action per GPT? This feels like a HUGE problem. I know we can build out a single schema with multiple "APIs"…but that just means they all have to be endpoints in the same OpenAPI spec file. I was under the impression we would be able to attach multiple actions to one GPT.
As it stands, while it's great that GPTs have ADA and Retrieval built in, only allowing one action kind of kills everything that made plugins powerful.
I'm running into the same issue. However, even with multiple APIs in the same schema, I actually got an error:
“Multiple servers found, using URL-1
Found multiple hostnames, dropping URL-2”
You can make multiple APIs in one GPT via actions, just like plugins. Watch this: ChatGPT - GPTsNavigator
That looks like a list of GPTs… not one GPT calling and using multiple different actions.
You can't list more than one base server URL for an action. You have to code a router in your backend and alter your spec accordingly.
In my opinion that's absurd; we should be able to have multiple URLs in one spec file, and ChatGPT should be smart enough to route to the right server.
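For what it's worth, here's roughly what that backend router could look like — a minimal sketch assuming Flask and requests, where the path prefixes and upstream hosts are just placeholders. Your OpenAPI spec would then list only this router's base URL as its single server.

```python
# Minimal sketch of a backend router (assumes Flask + requests are installed).
# The GPT's OpenAPI spec lists only this server as its single base URL;
# each path prefix is forwarded to a different upstream host.
from flask import Flask, Response, request
import requests

app = Flask(__name__)

# Placeholder mapping of path prefixes to upstream base URLs.
UPSTREAMS = {
    "books": "https://www.googleapis.com/books/v1",
    "weather": "https://api.example-weather.com/v1",
}

@app.route("/<prefix>/<path:rest>", methods=["GET", "POST", "PUT", "DELETE"])
def route(prefix: str, rest: str):
    base = UPSTREAMS.get(prefix)
    if base is None:
        return {"error": f"unknown prefix '{prefix}'"}, 404
    # Forward the request to the matching upstream and relay its response.
    upstream = requests.request(
        method=request.method,
        url=f"{base}/{rest}",
        params=request.args,
        json=request.get_json(silent=True),
        timeout=10,
    )
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type", "application/json"),
    )
```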
Any update on this? Can we invoke multiple endpoints now?
You could always do multiple endpoints; you couldn't do multiple domains. So if I wanted an app that combined the Google Books API with a weather API to give book suggestions based on the weather, I would've had to make my own server that pulled in both APIs before the GPT interacted with them.
Isn't this possible now with support for multiple actions in a GPT? I can see an option to create multiple actions, and maybe have a different domain in each of them?
I have a GPT with 7 API endpoints (different logic with GET/POST/PUT/DELETE) in one action definition, and I can see that I can create more actions with their respective API specs.
Yes, that’s why this is all past tense. This was a convo from before they added multiple domains.
Here's a clearer, more security-conscious way to phrase these instructions for native English speakers:
Workaround for API Limitations
To bypass certain limitations, you can consider the following approach, though it comes with significant security risks:
Using a Proxy API Endpoint
- Create a Proxy API Endpoint:
  - Set up an API endpoint on your own server that acts as an intermediary.
  - This endpoint interprets the commands and either responds with the desired data or returns errors.
Example Syntax
Here is an example of how you might use this proxy endpoint:
```text
http://localhost:5000/api?command=GET%20http://api.sunrise-sunset.org/json?lat=36.7201600&lng=-4.4203400&date=today&formatted=0
```
How it Works
- Your proxy endpoint receives the command and interprets it.
- It handles the JSON responses and any necessary interactions, without the original API being directly aware of the proxy.
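For concreteness, here is a minimal sketch of such a command-interpreting proxy, assuming Flask and requests and handling only the `GET <url>` form shown in the example above. It exists purely to illustrate the pattern that the warning below is about.

```python
# Minimal sketch of the command-interpreting proxy (assumes Flask + requests).
# Only the "GET <url>" form from the example above is handled.
# This is exactly the insecure pattern warned about below -- do not deploy as-is.
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

@app.route("/api")
def proxy():
    command = request.args.get("command", "")
    verb, _, url = command.partition(" ")
    if verb.upper() != "GET" or not url.startswith("http"):
        return jsonify({"error": "unsupported command"}), 400
    # Forwards whatever URL the caller supplied -- this is the injection risk.
    upstream = requests.get(url, timeout=10)
    return jsonify({"status": upstream.status_code, "body": upstream.text})

if __name__ == "__main__":
    app.run(port=5000)
```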
Important Security Warning
This method is highly insecure:
- It is vulnerable to code injections and other malicious attacks because it interprets user-defined commands.
- Using this approach can expose your system to significant security risks, including potential data breaches and system compromises.
Therefore, it is crucial to weigh the benefits against the risks and consider more secure alternatives to manage API interactions.
- Consolidate the OpenAPI JSONs, i.e. the APIs and their endpoints.
This means selecting the APIs you want to consolidate and having ChatGPT-4o (or any model that can access the internet) merge them. You could also ask Perplexity or a similar tool to do this. The best approach (IMHO) is to make your own Custom GPT that helps you with this, then use the rather large JSON it produces in the action slot of your custom GPT (see the sketch after this list).
- Same as in 1.: create a kind of "Secure Multi Server".
It can do the same as in 1., BUT it doesn't allow all actions.
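To make option 1 concrete, here is a rough consolidation sketch in Python. The file names, title, server URL, and prefixing scheme are all placeholder assumptions, and merging `components`/`$ref`s is left out for brevity.

```python
# Rough sketch of option 1: merging several OpenAPI JSON files into one spec
# for the single action slot. File names, title, and server URL are placeholders.
import json

SPEC_FILES = {"books": "books_openapi.json", "weather": "weather_openapi.json"}

merged = {
    "openapi": "3.1.0",
    "info": {"title": "Consolidated API", "version": "1.0.0"},
    # A single server URL is still required, e.g. your own router/proxy.
    "servers": [{"url": "https://your-router.example.com"}],
    "paths": {},
}

for prefix, filename in SPEC_FILES.items():
    with open(filename) as f:
        spec = json.load(f)
    # Prefix each path with the source API's name to avoid collisions.
    for path, operations in spec.get("paths", {}).items():
        merged["paths"][f"/{prefix}{path}"] = operations
    # NOTE: components/schemas and $refs would also need merging; omitted here.

with open("consolidated_openapi.json", "w") as f:
    json.dump(merged, f, indent=2)
```

Option 2's "Secure Multi Server" would then just be a variant of the router sketch earlier in the thread, with an explicit allowlist so it forwards only the merged paths you choose to expose.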