How to properly proxy ChatGPT calls / get around CORS errors?

hi! i’m trying to call the OpenAI API to generate some text based on some input. it worked fine until i deployed my website on Render and started getting CORS errors.

right now i have a frontend that sends JSON input to my backend in the body of a POST request. the backend then makes the OpenAI API call using that JSON as input, and sends a response back to the frontend with the OpenAI output.

i think it’s specifically the call to the openai api that breaks things; when i comment it out, the error goes away.

the error:
Access to fetch at (backend endpoint) from origin (frontend website) has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.

i’ve tried proxying the OpenAI API by using:

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY, httpAgent: new HttpsProxyAgent.HttpsProxyAgent([OpenAI completions endpoint])});

i’ve also tried using external proxy middleware:

const apiProxy = createProxyMiddleware({ target: [OpenAI completions endpoint]});
...
app.use(['/roastArtists', '/roastTracks'], apiProxy)
... (all other code the same as below)

backend code:

app.post('/roastTracks', async function(req, res) {
  // TODO:  implement try/catch so server doesn't just crash lmfao
  // console.log(req.body);
  
  let topTracks = req.body
  let topTracksStr = JSON.stringify(topTracks);
  // console.log("generateRoast, topArtists:", topTracks);
  // console.log("generateRoast:",topTracksStr);
  
  console.log("sending to chatGPT...")
  const completion = await openai.chat.completions.create({
      messages: [
          { role: "system", content: process.env.TRACKS_PROMPT },
          { role: "user", content: topTracksStr}
      ],
      model: "gpt-3.5-turbo",
  });
  console.log("finished!")
  console.log(completion.choices[0]);

  res.send({
    gpt_response: completion.choices[0]
    // gpt_response: {"message" : { "content" : topTracksStr}}
  })

  // console.log("GPT response:", completion.choices[0]);
});

frontend code (sorry for the messy code):

async function roastArtists(time_range) {
    console.log("roast artists:", time_range);
    setResponseState(LoadingState.LOADING);

    // not sure if i did this right lol
    spotifyApi.getMyTopArtists({ limit: 5, time_range: time_range })
        .then((response) => {
            let topArtists = []
            for (let i = 0; i < response.items.length; i++) {
                topArtists.push(response.items[i].name);
            }
            return topArtists;
        }, function(err) {
            console.log('Something went wrong!', err);
            setResponseState(LoadingState.INPUT);
        })
        .then(async (topArtists) => {
            // proxy request to backend to ask for chatGPT output
            console.log("[generateRoast()] fetching gptResponse")
            const gptResponse = await fetch(BACKEND_ROUTE + "/roastArtists", {
                method: "POST",
                headers: {'content-type' : 'application/json'},
                body: JSON.stringify({"topArtists" : topArtists})
            })
            return gptResponse;
        }, function(err) {
            console.log('Something went wrong!', err);
            setResponseState(LoadingState.INPUT);
        })
        .then((res) => res.json())
        .then(async (gptJson) => {
            let gptRoast = gptJson.gpt_response;
            setResponseState(LoadingState.OUTPUT);
            setRoast(gptRoast.message.content);
        }, function(err) {
            console.log('Something went wrong!', err);
            setResponseState(LoadingState.INPUT);
        })
}

Thanks!

You need to make sure the browser never calls the OpenAI API directly: keep the API key and the OpenAI call on a back end server, and serve the page from that same server so there is no cross-origin request at all.

Example generated by ChatGPT:

<!DOCTYPE html>
<html>
<head>
    <title>GPT-4 Turbo Demo</title>
</head>
<body>
    <h1>OpenAI GPT-4 Turbo API Demo</h1>
    <form action="/" method="post">
        <label for="user_input">Enter your text:</label><br>
        <textarea id="user_input" name="user_input" rows="4" cols="50"></textarea><br>
        <input type="submit" value="Submit">
    </form>
    <!-- Response will be displayed here -->
    {% if response %}
        <h2>Response from GPT-4:</h2>
        <p>{{ response }}</p>
    {% endif %}
</body>
</html>
from flask import Flask, request, render_template
from openai import OpenAI

app = Flask(__name__)

# The client reads the OPENAI_API_KEY environment variable by default;
# never hardcode the key or ship it to the browser
client = OpenAI()

@app.route('/', methods=['GET', 'POST'])
def index():
    response = ""
    if request.method == 'POST':
        user_input = request.form['user_input']
        gpt_response = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=[{"role": "system", "content": "You are a helpful assistant."},
                      {"role": "user", "content": user_input}]
        )
        response = gpt_response.choices[0].message.content
    # the HTML above goes in templates/index.html
    return render_template('index.html', response=response)

if __name__ == '__main__':
    app.run(debug=True)