None of the provided servers is under the root origin - GPTs Schema - Server with port

Hi,

I’m trying to connect with an external API service that has a port, and I’m getting the following error message:
None of the provided servers is under the root origin https://[api_uri]

The API has the following structure:

https://[api_uri]:[port]/[path]

But there’s no way for me to overcome this error.

I tried the exact same operation with another service that points to port 80, and it worked like a charm.

We need to have the port available in the “servers” portion of the Schema.
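For illustration (the host and port here are hypothetical), this is the kind of servers entry that currently gets rejected:

servers:
  - url: https://api.example.com:8443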

Thanks,

Gustavo Barizon.

5 Likes

I’ve just been pointing DNS subdomain records at anything requiring a port; it seems to work so far.

Could this be the standard “behavior” for the entire ecosystem?

I don’t think that’s feasible.

Imagine everyone configuring a DNS subdomain for every service a GPT / API needs to interact with. Imagine this at scale, with hundreds of customers and hundreds of APIs.

How do you see this working for enterprises?

I’m open to discussions, but I truly believe the port should be natively accepted in this context.

From what I understand, that’s how most ‘at-scale’ web services work. Rather than talking to (and thus relying on) a single server with a dedicated IP, you have a load balancer (with a DNS record) that routes traffic to any one of many identical copies of the server instance. Then you can swap servers out whenever you like, and customers only ever have to store that single domain name.

For example say I have three services, an API server, a database, and a vector store, all on different boxes. Then on a single domain, I set up three records: api.domain.com, db.domain.com, and vec.domain.com, and point each of those at each box’s IP. Now in my application code, I don’t have to keep track of IPs, and I can just call ‘db.domain.com’ to get the location of wherever my db currently lives.
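As a concrete sketch (BIND-style records, with example IPs from the documentation range), that setup is just:

api.domain.com.   IN  A  203.0.113.10
db.domain.com.    IN  A  203.0.113.11
vec.domain.com.   IN  A  203.0.113.12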

Perhaps you could help me out.
Please, I really need to know how to connect these GPTs in the GPT Builder to existing plugins, for example CapCut or Canva.

Please, someone reply.

The problem with your line of thought is the enterprise.

Usually, the load balancers sit behind the firewall to distribute the services internally - I’m talking about an on-prem situation, OK?

To expose this, we added an API Gateway, which works on specific ports, and for security reasons those can’t be changed to ports 80/443.

The DNS records only translate names into IPs; we would still need to provide the specific ports, so we would need to add at least a reverse proxy (see the sketch below).
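That extra hop would look something like this in nginx (the names, certificate paths, and the 8443 upstream port are all made up for illustration):

# terminate TLS on 443 and forward to the API gateway on its real port
server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/ssl/certs/api.pem;
    ssl_certificate_key /etc/ssl/private/api.key;

    location / {
        proxy_pass https://gateway.internal.example:8443;
    }
}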

Adding an extra hop just for the sake of removing the port adds maintenance costs, has security implications, and is not an efficient solution. I still can’t buy this idea; the cleaner fix is letting the port be added in the Schema definition.

1 Like

I am seeing the same error, but when trying to develop a simple GPT locally. I have built plugins before, but decided to play around with creating a GPT from the ground up using the new approach.

My simple Python backend is running on localhost port 80 (I also tried alternative ports, which seem to introduce problems of their own), and my OpenAPI spec has the following lines:

servers:
  - url: http://localhost

However, I always get the error:

None of the provided servers is under the root origin https://localhost
Server URL http://localhost is not under the root origin https://localhost; ignoring it

In other words, it seems determined to infer an https protocol as the root, even though I’ve clearly specified http.

I’m guessing I could create a self-signed certificate, but it seems overkill when I only want to debug locally.
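For reference, if you do go the self-signed route, a throwaway localhost certificate is a single command, assuming openssl is available:

openssl req -x509 -newkey rsa:2048 -nodes -days 30 -subj "/CN=localhost" -keyout key.pem -out cert.pem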

Anybody else seeing this? Temporary glitch perhaps?

1 Like

I see this as a “security” measure: the standard today is not to accept unencrypted data over a plain HTTP call, forcing us to use HTTPS instead.

You could open a door in your firewall, forward port 80 to your localhost, add your IP address, and set up a DNS subdomain, but again, this is too much for a simple test of a GPT.

There must be some flexibility in the GPT Schema to accept ports and non-HTTPS calls. That’s my complaint.

From a production perspective, yes, I agree with forcing HTTPS connectivity, but blocking the port?

4 Likes

I see where you’re coming from. I guess my understanding was you would have your internal network with various services, which you connect to the public using a specified endpoint/load balancer. Then, you create a parallel “GPT” API, which is served on 443, and routes GPT function calls to their respective internal services.

I got around this by using a free-tier AWS VPS to test on (with a free SSL certificate), but I definitely agree this seems like an oversight - it would be great to at least have the option of HTTP traffic, even if it requires some extra settings or has safety limits.

1 Like

I support @barizon; there is no reason to limit users to HTTPS or standard ports. It is simply out of your scope, guys. The most you should do is add a red alert line under the schema description saying that HTTP is not secure, or that the port is non-standard, and so on. Just for testing some ideas, it is overwork to set up a real domain, a real certificate, and a real dedicated server just for this API port. Of course, I could put a router in front of an HTTPS server to route my tests by app name, but that takes time.

4 Likes

For real… This doesn’t make any sense to me. Why would they choose not to allow HTTP and different ports?

2 Likes

I am having the same issue, but with HTTPS and a port different from 443.
I absolutely need to run my server on a different port. So far, no way… Quite upsetting.

1 Like

Same issue here… it gives this error when specifying an HTTPS server with any port other than 443…
At the enterprise level, this is basic.

1 Like

IMHO, obscuring ports is not a good security measure anyway. I don’t doubt that there are other reasons (besides security by obscurity) to have custom ports, but using a port as a security measure for anything GPT-related is not a good idea.

I made a workaround as described below.

I installed Kong on a DigitalOcean machine from a Docker image. Then I installed a Let’s Encrypt certificate to enable HTTPS and set the ports to 80/443.

I had to create a Service/Route in the API gateway to connect to my ERP system’s APIs on a different port. It’s quite a bit of work, but it solved my problem.
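For reference, the Service/Route pair in Kong’s declarative (DB-less) config looks roughly like this; the names, upstream URL, and port below are placeholders:

_format_version: "3.0"
services:
  - name: erp-api
    url: https://erp.internal.example:8123   # upstream keeps its own port
    routes:
      - name: erp-route
        paths:
          - /erp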

Here’s how to install Kong:

Here’s the SSL part with Let’s Encrypt:

Good luck.

Just tunnel your IP with ngrok.
It worked great for me.
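For anyone trying this: assuming the backend listens locally on port 80, it is a single command:

ngrok http 80

ngrok then prints a public HTTPS forwarding URL that can go straight into the servers section of the schema.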

Sorry, but I work on solutions for enterprises. I’m building GPTs to integrate with ERP systems, and unfortunately I can’t rely on ngrok when proposing a solution to my customers. Kong is an API gateway, so I can demo it and bundle it with the solutions.

Here’s what ChatGPT replied when I asked:

Can ngrok be used in production?
ChatGPT
Using ngrok in a production environment is generally not recommended for several reasons, although it can technically be used for such purposes. Ngrok is primarily designed as a tool for development and testing, offering developers a convenient way to expose local servers to the Internet temporarily.

Conclusion (ChatGPT)
While ngrok is an excellent tool for development, for production environments, it’s advisable to look for solutions that are designed with production-grade security, performance, and reliability in mind. This might involve more setup and configuration but will provide a more stable and secure environment for your application.

Sorry, that should have been a reply to @noc1.

No need to be sorry. We’re in the same boat here—learning and sharing. This solution may be helpful for many people who need a quick way to enable their services with GPTs. It’s a valid option. Thanks for adding this!