Split Tunneling API away from VPN (OpenVPN)

Hello developer community! I’m so excited to begin sharing and exploring this new frontier with everyone! I’ve been personally developing and tinkering with my own projects for a while now with GPT Plus, so I’m excited to get more integrated into this community and see what we can all develop together!

Anyways, I’m unsure if my question has been asked before, but I’m running into a pretty befuddling problem lol. I also want to be very explicit here: I am NOT asking to access OpenAI’s API through a VPN. I fully understand that OpenAI does not condone VPN usage with its API and actively blocks it (which I’m assuming is covered in the terms of service somewhere). That is fine and is OpenAI’s right.
The problem is that my development environment is a bit unique, and I’ve never really worked with an API quite like this. I’ve been trying to modify Auto-GPT a little bit, but every time it calls the API the connection gets closed. I’ve read that this is likely due to my VPN. Granted, I use OpenVPN merely as my client, with a config file provided by my actual VPN provider. I do a lot of computer networking experiments and whatnot, so I’m able to create my own tunnels when I need them, but I’ve never needed to tunnel an API before.

What am I supposed to do if I’d like to keep my VPN up but allow the API traffic to bypass it? I’m used to tunneling per application or per IP, but since I’m actively developing and running my own code, I don’t actually know how I should do this. Which is a shame, because I’m right on the cusp of something really cool, but I need a way to keep the connection from getting cut. To be clear, I want to follow all the guidelines that are out there; I’m asking precisely so that I AM following proper usage. I’m not trying to make the API run inside my VPN, I’m trying to make it an exception outside my VPN. If anyone thinks they could help me, it would be a huge help! Thank you all so much for your time!

Best,
-Ryan

Welcome to the developer forum!

Ok, so from your deployment environment you could configure your routing with a host route that bypasses the VPN:
sudo route add -host <OpenAI's IP> gw <your gateway> dev eth0

For example:

sudo route add -host 172.24.208.1 gw 192.168.1.1 dev eth0

This would create a split tunnel such that everything except the traffic to the OpenAI API goes through your VPN. Check the IP of the API endpoint from your deployment environment, assuming you have that level of control; if not, seek assistance from your deployment environment’s documentation and support staff.
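As a side note, many current distros ship iproute2 rather than net-tools, so the route command may not be present. Here is a minimal sketch that prints the equivalent ip route commands for review instead of running them; the gateway, device, and the print_host_routes helper are placeholder assumptions of mine, reusing the 172.24.208.1 / 192.168.1.1 values from the example above:

```shell
# A sketch of the iproute2 equivalent of the route command above.
# GW and DEV are assumptions -- substitute your real LAN gateway and
# the physical interface the VPN does NOT own.
GW=192.168.1.1
DEV=eth0

# Print (rather than run) one host-route command per IPv4 address
# given, so the output can be reviewed before running it as root.
print_host_routes() {
    for ip in "$@"; do
        echo "sudo ip route add $ip/32 via $GW dev $DEV"
    done
}

print_host_routes 172.24.208.1
```

Printing first and only piping to sh after a review keeps a typo from blackholing your VPN traffic.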


Ahhhh, okay! I think what I’m actually looking for is the command to check the IP of the API endpoint. I uh, did not realize you could do that lol. I thought that was restricted information somehow. But yeah, this is all in my own personal homelab sandbox/playground, so I have full access, no problems there. Do you use something like netcat to figure out what the IP of the endpoint is? I appreciate the help and the quick response!

To get the IP of the API from your deployment environment you can use:

nslookup api.openai.com
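If nslookup happens to be missing from the deployment environment (it comes from bind-utils/dnsutils rather than the base system), getent is a common fallback; a minimal sketch, where resolve_v4 is a helper name of my own:

```shell
# getent ships with glibc and consults the same resolver chain the OS
# itself uses (/etc/hosts first, then DNS), so it works even where
# nslookup is not installed.
resolve_v4() {
    getent ahostsv4 "$1" | awk '{print $1}' | sort -u
}

# localhost resolves from /etc/hosts, so this works even offline;
# swap in api.openai.com on the machine that will reach the API.
resolve_v4 localhost
```

One caveat: the address a lookup returns can change over time, so if the split tunnel stops working later, re-run the lookup and update the host route.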

Oh man, lol, that’s super simple. Talk about a facepalm moment. Thank you. I swear, sometimes I get so deep inside my own head and projects that super simple stuff just blows right past me. Thank you for the help! I appreciate it!
