ChatGPT API Proxy Server on Heroku - Intermittent H12 Errors

Hello everyone,

I’m encountering intermittent H12 errors when using my API proxy server on Heroku, and would appreciate any guidance to resolve this issue.

I have developed an API proxy server using Express and deployed it on Heroku. The server acts as an intermediary between my Android app and the OpenAI API for chatbot functionality. However, roughly 10 to 15% of requests fail intermittently with an H12 error, which indicates a request timeout.

I have thoroughly reviewed Heroku’s documentation on troubleshooting H12 errors and followed the recommended steps (adjusting timeout settings, checking network connectivity, and optimizing the code), but the errors persist.

I would greatly appreciate any recommendations or suggestions from the community.

I’m open to exploring alternative proxy server setups if that might help mitigate the H12 errors. Below is my current code configuration:

index.js

const express = require('express');
const axios = require('axios');
const bodyParser = require('body-parser');
const cors = require('cors');

const app = express();
const PORT = process.env.PORT || 3000;

const API_KEY = process.env.OPENAI_API_KEY;

app.use(cors());
app.use(bodyParser.json());

// Create an Axios instance with a longer timeout for the OpenAI calls below
// (note that Heroku's router still cuts the request off at 30 seconds regardless)
const instance = axios.create({
  timeout: 60000, // 60 seconds (adjust as needed)
});

app.post('/chat', async (req, res) => {
  const requestBody = req.body;
  
  requestBody['temperature'] = 0.7; 
  requestBody['max_tokens'] = 360;

  try {
    const response = await instance.post(
      'https://api.openai.com/v1/chat/completions',
      requestBody,
      {
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${API_KEY}`,
        },
      }
    );

    res.json(response.data);
  } catch (error) {
    console.error('Error:', error.message);
    // Forward the upstream status when available instead of always answering 500
    res
      .status(error.response?.status || 500)
      .json({ message: 'An error occurred while processing your request.' });
  }
});

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});

And here is my package.json file:

{
  "name": "chatgptapiproxy",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "axios": "^1.4.0",
    "body-parser": "^1.20.2",
    "cors": "^2.8.5",
    "express": "^4.18.2"
  }
}

If anyone has encountered similar issues with API proxy servers on Heroku or has any insights into troubleshooting H12 errors, I would greatly appreciate your assistance and recommendations.

Thank you so much everyone :slight_smile: :sparkles:

Rick

Well, some requests do take longer than 30 seconds to complete. I’m unsure about Heroku specifics, but you may need to configure it with a higher timeout threshold so this isn’t treated as an error, or try streaming responses, or, well, use an alternative to Heroku :wink:

Edit:

The timeout value is not configurable. If your server requires longer than 30 seconds to complete a given request, we recommend moving that work to a background task or worker to periodically ping your server to see if the processing request has been finished. This pattern frees your web processes up to do more work, and decreases overall application response times.

Via Request Timeout | Heroku Dev Center
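A minimal in-memory sketch of that pattern, assuming the OP’s Express app, API_KEY, and axios instance (the /chat/async and /chat/:jobId routes and the jobs object are illustrative; in production you’d hand this off to a worker dyno via Redis or a queue, since an in-process map is lost on restart and isn’t shared across dynos):

const crypto = require('crypto'); // for randomUUID()

const jobs = {}; // jobId -> { status, result }

// Kick off the OpenAI call and respond immediately, well under 30 seconds
app.post('/chat/async', (req, res) => {
  const jobId = crypto.randomUUID();
  jobs[jobId] = { status: 'pending', result: null };

  instance
    .post('https://api.openai.com/v1/chat/completions', req.body, {
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${API_KEY}`,
      },
    })
    .then((r) => { jobs[jobId] = { status: 'done', result: r.data }; })
    .catch((e) => { jobs[jobId] = { status: 'error', result: e.message }; });

  res.status(202).json({ jobId });
});

// The client polls this until status is 'done' or 'error'
app.get('/chat/:jobId', (req, res) => {
  const job = jobs[req.params.jobId];
  if (!job) return res.status(404).json({ message: 'Unknown job' });
  res.json(job);
});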


Tysm :saluting_face: I appreciate the advice.

I’ve recently been hit with the same issue. It seems the gpt-3.5-turbo endpoint’s response times slowed down massively ~2 days ago, which causes timeouts when the API takes 30+ seconds to respond. A solution I’m currently implementing is streaming: data is returned to the Heroku server in small chunks which can be forwarded to the application in real time, thus preventing H12 errors.

In many applications this is also just a better implementation, streaming responses to users rather than forcing them to wait.
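Roughly, on the proxy side it looks like this (a sketch assuming the OP’s Express app and API_KEY; the /chat/stream route name is illustrative). Heroku only needs the first byte within 30 seconds, and after that it applies a rolling 55-second window between bytes, so forwarding chunks as they arrive avoids H12:

app.post('/chat/stream', async (req, res) => {
  try {
    const response = await axios.post(
      'https://api.openai.com/v1/chat/completions',
      { ...req.body, stream: true }, // ask OpenAI for a server-sent-event stream
      {
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${API_KEY}`,
        },
        responseType: 'stream', // axios hands back a Node readable stream
      }
    );

    // Forward the SSE chunks to the client as they arrive
    res.setHeader('Content-Type', 'text/event-stream');
    res.setHeader('Cache-Control', 'no-cache');
    res.setHeader('Connection', 'keep-alive');
    response.data.pipe(res);
  } catch (error) {
    console.error('Error:', error.message);
    res.status(500).json({ message: 'An error occurred while streaming the response.' });
  }
});

The Android client then has to consume the response as a stream of server-sent events instead of a single JSON body.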


Shashank, I’m facing the same issue now. Can you share a good resource that helped you implement the streaming functionality?

Let’s be honest, Heroku just isn’t the right fit for the ChatGPT API. Considering that the API takes around 20 seconds on average to answer, your Heroku dyno becomes a bottleneck and blocks every further request from reaching your API.

It cannot handle parallel HTTP requests, so you end up having to pay for multiple dynos (minimum $25 per dyno, since the Basic one cannot be scaled horizontally) to make sure your app isn’t stuck when multiple requests arrive within the same 20-second window.

The dream would be for OpenAI to provide some webhook functionality where it could call our endpoint once a completion is done, but I’m not sure they’ll ever release it.

So for now I’m going serverless (Firebase Functions), which may seem counterintuitive since you pay per second of usage, but at least this way I can handle multiple HTTP requests in parallel.
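For reference, the rough shape of that (a sketch assuming Firebase Cloud Functions v2; the chat export name and 300-second timeout are illustrative, and OPENAI_API_KEY would be set as a function secret or environment variable):

const { onRequest } = require('firebase-functions/v2/https');
const axios = require('axios');

// v2 functions allow much longer timeouts than Heroku's fixed 30 seconds
exports.chat = onRequest({ timeoutSeconds: 300, cors: true }, async (req, res) => {
  try {
    const response = await axios.post(
      'https://api.openai.com/v1/chat/completions',
      req.body,
      {
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        timeout: 120000, // well under the function timeout
      }
    );
    res.json(response.data);
  } catch (error) {
    console.error('Error:', error.message);
    res.status(error.response?.status || 500).json({ message: 'Upstream request failed' });
  }
});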


Firebase Cloud Functions v2? I’m having a heck of a time getting the OpenAI stuff to run like that. Any HTTP tips?