I’m trying to connect to an external API service that uses a non-standard port, and I’m getting the following error message:
None of the provided servers is under the root origin https://[api_uri]
The API has the following structure:
But there’s no way for me to overcome this error.
I tried the exact same operation with another service that points to port 80, and it worked like a charm.
We need to have the port available in the “servers” portion of the Schema.
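For reference, a minimal sketch of the kind of `servers` entry that triggers the error; the hostname and port are placeholders, not my real setup:

```yaml
openapi: 3.1.0
info:
  title: Internal API
  version: 1.0.0
servers:
  # The non-standard port in the server URL is what the validator rejects.
  - url: https://api.example.com:8443
paths: {}
```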
I’ve just been pointing DNS subdomain records at anything requiring a port; it seems to work so far.
Could this be a standard “behavior” for the entire ecosystem?
I don’t think this is feasible.
Imagine having to configure a DNS subdomain for every service a GPT or API needs to interact with. Now imagine this at scale, with hundreds of customers and hundreds of APIs.
How are enterprises supposed to handle this?
I’m open to discussions, but I truly believe the port should be natively accepted in this context.
From what I understand it’s how most ‘at-scale’ web services work. Rather than talking to (and thus relying on) a single server / dedicated IP, you have a load balancer (with a DNS record) that routes traffic to any one of many identical copies of the server instance. Then, you can change servers out whenever and the customers only ever have to store that single domain name.
For example say I have three services, an API server, a database, and a vector store, all on different boxes. Then on a single domain, I set up three records: api.domain.com, db.domain.com, and vec.domain.com, and point each of those at each box’s IP. Now in my application code, I don’t have to keep track of IPs, and I can just call ‘db.domain.com’ to get the location of wherever my db currently lives.
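A minimal sketch of that idea in application code; the hostnames are hypothetical, swap in your own domain:

```python
# Hypothetical service endpoints addressed by DNS name rather than by IP.
# The DNS record, not the application code, tracks which box each
# service currently lives on.
SERVICES = {
    "api": "https://api.domain.com",
    "db": "https://db.domain.com",
    "vec": "https://vec.domain.com",
}

def endpoint(service: str, path: str = "") -> str:
    """Build a request URL for a named service."""
    return f"{SERVICES[service]}/{path.lstrip('/')}"

print(endpoint("db", "/health"))  # https://db.domain.com/health
```

If the database moves to a new box, only the DNS record changes; every caller keeps using the same URL.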
Perhaps you could help me out.
Please, I really need to know how to connect these GPTs in the GPT builder to existing plugins, for example CapCut or Canva.
Please, someone reply.
The problem with your line of thought is the enterprise.
Usually, the load balancers sit behind the firewall to distribute traffic to the services internally - I’m talking about an on-prem situation, OK?
On top of this, we added an API gateway, which listens on specific ports, and for security reasons those can’t be changed to 80/443.
The DNS records resolve the IPs, but we still need to supply the specific ports, so at minimum we’d have to add a reverse proxy.
Adding an extra hop just for the sake of removing the port adds maintenance costs, has security implications, and is not an efficient solution. I still can’t buy this idea; the alternative is simply letting the port be declared in the schema definition.
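For concreteness, the extra hop I’m objecting to would look roughly like this - a hypothetical nginx reverse proxy; the hostnames, port, and certificate paths are placeholders:

```nginx
# Hypothetical nginx config: the extra hop whose only job is to hide the port.
server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/ssl/certs/api.example.com.pem;
    ssl_certificate_key /etc/ssl/private/api.example.com.key;

    location / {
        # Forward to the internal API gateway on its non-standard port.
        proxy_pass https://gateway.internal:8443;
        proxy_set_header Host $host;
    }
}
```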
I am seeing the same error but when trying to develop a simple GPT locally. I have built plugins before, but decided to have a play around with creating a GPT using the new approach from ground up.
My simple Python backend is running on localhost port 80 (I also tried alternative ports, which seem to introduce a problem of their own), and my OpenAPI spec has the following line:
- url: http://localhost
However, I always get the error:
None of the provided servers is under the root origin https://localhost
Server URL http://localhost is not under the root origin https://localhost; ignoring it
In other words, it seems determined to infer an https protocol as the root, even though I’ve clearly specified http.
I’m guessing I could create a self-signed certificate, but it seems an overkill when I only want to debug locally.
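For anyone who does want to go the self-signed route, a quick sketch with openssl; the CN, key size, and validity period are arbitrary choices for local debugging:

```shell
# Generate a throwaway self-signed certificate for localhost (30-day validity).
# -nodes skips passphrase protection, which is fine for local debugging only.
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 30 -nodes -subj "/CN=localhost"
```

You’d then point your local server at `key.pem`/`cert.pem` and serve over HTTPS, though the browser/client will still warn about the untrusted issuer.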
Anybody else seeing this? Temporary glitch perhaps?
I see this as a “security” measure since the standard today isn’t to accept unencrypted data from an HTTP call, forcing us to use HTTPS instead.
You could open a firewall port, set up port forwarding from port 80 to your localhost, add your IP address, and add a DNS subdomain, but again, that’s too much for a simple test of a GPT.
There must be some flexibility in the GPT Schema to accept ports and non-HTTPS calls. That’s my complaint.
From a production perspective, yes, I agree with forcing the HTTPS connectivity, but blocking the PORT?
I see where you’re coming from. I guess my understanding was you would have your internal network with various services, which you connect to the public using a specified endpoint/load balancer. Then, you create a parallel “GPT” API, which is served on 443, and routes GPT function calls to their respective internal services.
I got around this by using a free-tier AWS VPS to test on (with free SSL), but I definitely agree this seems like an oversight. It would be great to at least have the option of HTTP traffic, even if it requires some extra settings or has safety limits.
I support @barizon; there is no reason to force users onto HTTPS or standard ports. It’s just out of your scope, guys. The most you should do is add a red warning line under the schema description saying that HTTP is insecure, or that the port is non-standard, and so on. Just for testing some ideas, it’s overkill to set up a real domain, a real certificate, and a real dedicated server just for this API port. Of course, I could put a router in front of an HTTPS server to route my tests by app name, but that takes time.