API 504s in production (Vercel) only

Hello. I’m building out an app using the OpenAI Next.js starter project. It works fine locally, but I’m getting opaque 504 errors when calling /api/generate in prod. I’ve set up my key as an env var. Any ideas? Console log:

index-b31ea1d2cb974cba.js:1          
POST https://astroguide-ai-dldu.vercel.app/api/generate 504
(anonymous) @ index-b31ea1d2cb974cba.js:1
l @ main-f65e66e62fc5ca80.js:1
(anonymous) @ main-f65e66e62fc5ca80.js:1
P.forEach.e.<computed> @ main-f65e66e62fc5ca80.js:1
u @ index-b31ea1d2cb974cba.js:1
o @ index-b31ea1d2cb974cba.js:1
(anonymous) @ index-b31ea1d2cb974cba.js:1
(anonymous) @ index-b31ea1d2cb974cba.js:1
f @ index-b31ea1d2cb974cba.js:1
onSubmit @ index-b31ea1d2cb974cba.js:1
$e @ framework-e70c6273bfe3f237.js:1
Ye @ framework-e70c6273bfe3f237.js:1
(anonymous) @ framework-e70c6273bfe3f237.js:1
Nr @ framework-e70c6273bfe3f237.js:1
Tr @ framework-e70c6273bfe3f237.js:1
(anonymous) @ framework-e70c6273bfe3f237.js:1
De @ framework-e70c6273bfe3f237.js:1
(anonymous) @ framework-e70c6273bfe3f237.js:1
Ir @ framework-e70c6273bfe3f237.js:1
Jt @ framework-e70c6273bfe3f237.js:1
Zt @ framework-e70c6273bfe3f237.js:1
t.unstable_runWithPriority @ framework-e70c6273bfe3f237.js:1
Ql @ framework-e70c6273bfe3f237.js:1
Me @ framework-e70c6273bfe3f237.js:1
Xt @ framework-e70c6273bfe3f237.js:1
VM31:1 Uncaught (in promise) SyntaxError: Unexpected token 'A', "An error o"... is not valid JSON
Promise.then (async)
u @ index-b31ea1d2cb974cba.js:1
o @ index-b31ea1d2cb974cba.js:1
Promise.then (async)
u @ index-b31ea1d2cb974cba.js:1
o @ index-b31ea1d2cb974cba.js:1
(anonymous) @ index-b31ea1d2cb974cba.js:1
(anonymous) @ index-b31ea1d2cb974cba.js:1
f @ index-b31ea1d2cb974cba.js:1
onSubmit @ index-b31ea1d2cb974cba.js:1
$e @ framework-e70c6273bfe3f237.js:1
Ye @ framework-e70c6273bfe3f237.js:1
(anonymous) @ framework-e70c6273bfe3f237.js:1
Nr @ framework-e70c6273bfe3f237.js:1
Tr @ framework-e70c6273bfe3f237.js:1
(anonymous) @ framework-e70c6273bfe3f237.js:1
De @ framework-e70c6273bfe3f237.js:1
(anonymous) @ framework-e70c6273bfe3f237.js:1
Ir @ framework-e70c6273bfe3f237.js:1
Jt @ framework-e70c6273bfe3f237.js:1
Zt @ framework-e70c6273bfe3f237.js:1
t.unstable_runWithPriority @ framework-e70c6273bfe3f237.js:1
Ql @ framework-e70c6273bfe3f237.js:1
Me @ framework-e70c6273bfe3f237.js:1
Xt @ framework-e70c6273bfe3f237.js:1
4 Likes

Did you figure this out? I saw the same issue with Vercel in prod. Doesn’t happen consistently to me tho.

1 Like

Same here, also on Vercel. Intermittent, so it’s hard to track down.

1 Like

I just passed this to the Vercel team to see if they have any suggestions. Stay tuned!

3 Likes

I have the same problem as of today. Only in production. Next.js and Vercel.
Unexpected token ‘A’, “An error o”… is not valid JSON

Is there any update on this problem?

Use your API in cloud functions elsewhere.

This error in function logs is making me sick.
Task timed out after 10.01 seconds

There are a couple of issues happening here:

  1. I’m assuming most people here are using the sample /api/generate script from the openai-quickstart-node project. It has the error handling in the wrong place: the status check should come before you try to parse the response JSON on the line above it.
    The “Unexpected token” error is thrown because you are trying to parse the text “An error occurred…” as JSON.
    Move the response.json() call below the status check, drop the data.error reference, and only use response.status in your Error, like this:
if (response.status !== 200) {
  throw new Error(`Request failed with status ${response.status}`);
}
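For reference, here’s a minimal sketch of the client-side submit handler with the check moved ahead of the parse. The variable names follow the quickstart’s animal-name example, so adjust them to your own app:

async function onSubmit(event) {
  event.preventDefault();
  const response = await fetch("/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ animal: animalInput }),
  });

  // Check the status before parsing: a 504 from Vercel comes back as plain
  // text ("An error occurred..."), so calling response.json() on it would throw.
  if (response.status !== 200) {
    throw new Error(`Request failed with status ${response.status}`);
  }

  const data = await response.json();
  setResult(data.result);
}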
  2. The 504 itself is a timeout error. It happens because Vercel deployments default to a 10-second function timeout, which is a problem when using APIs like OpenAI’s that can take 12-15 seconds to fulfill a request.
    In order to increase this timeout, you must set up a custom vercel.json.
    This is very easy: it’s just a file named vercel.json in your project’s root folder, but there are a few more gotchas (a minimal example follows the list below).
  • You need to have a Pro account in order to set a custom timeout, or you will see an error.

  • Even after upgrading to Pro, your pushes may not automatically trigger a deployment because you’re adding a vercel.json file:

Pushed commits will deploy automatically for public repositories. However, if the vercel.json file changed or the project has Environment Variables, a Team Member on Vercel will have to authorize the Deployment. This is a security measure that ensures changes to Environment Variables and other configuration properties are reviewed before a Deployment is created (it can be disabled if you prefer via the git-fork-protection setting). A link to authorize the Deployment will be posted as a comment on the Pull Request.
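For reference, here’s roughly what a minimal vercel.json for this can look like. The glob is an assumption and has to match your API routes’ source path, and a maxDuration above 10 seconds requires the Pro plan mentioned above:

{
  "functions": {
    "pages/api/**/*.js": {
      "maxDuration": 60
    }
  }
}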

I disabled git-fork-protection in an attempt to get around the deployment authorization, but pushes still wouldn’t deploy automatically. I ended up needing to install the Vercel CLI and deploy from the CLI manually.

After all was said and done, the random 504 error stopped occurring in prod and everything worked as expected.

See the follow-up comment for the supporting links… (there’s a 2-link limit per comment for new users).

4 Likes

Hi all – Hassan from Vercel here. I recommend using Edge Functions with Vercel, which will let you bypass the 10-second timeout on Vercel’s hobby tier, which it seems is what most of you are hitting. With Edge Functions, you have up to 30 seconds to return a response, and if you use Edge Functions with streaming, you can technically stream indefinitely.

If you want to take a look at why Edge Functions help and how to use them, I wrote up a blog post where I migrate a serverless function to an Edge Function that uses GPT-3: Building a GPT-3 app with Next.js and Vercel Edge Functions – Vercel
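For anyone who wants a rough starting point before reading the post, here’s a minimal sketch of a pages-router Edge route that proxies a streamed completion. Treat it as illustrative only: the endpoint, model, and env var name are assumptions rather than code from the blog post, and runtime: 'edge' assumes a recent Next.js version (older releases used 'experimental-edge'):

// pages/api/generate.js
export const config = { runtime: 'edge' };

export default async function handler(req) {
  const { prompt } = await req.json();

  // The Edge runtime provides fetch, so call the OpenAI REST API directly.
  const upstream = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
      stream: true, // stream tokens back instead of waiting for the full completion
    }),
  });

  if (!upstream.ok) {
    return new Response('Upstream request failed', { status: 500 });
  }

  // Pass the streamed body straight through to the client.
  return new Response(upstream.body, {
    headers: { 'Content-Type': 'text/event-stream' },
  });
}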

4 Likes

Thanks. I eventually figured this out. Good excuse to start using the edge functions goodness :slight_smile:

@gannonhall what was your fix? I’m encountering the same issue

@Mouradif I moved to edge functions with streaming. This repo was helpful: GitHub - Nutlope/twitterbio: Generate your Twitter bio with ChatGPT and Vercel Edge Functions.

My recommendation is to bypass Vercel’s API routes altogether and use a different backend, for example Firebase Cloud Functions. You can configure the timeout to be up to 9 minutes, without having to pay a monthly bill.
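As a rough illustration, with the first-generation firebase-functions API you can raise the timeout via runWith (a sketch, not taken from the post; 540 seconds is the 9-minute maximum, and the function name is made up):

const functions = require("firebase-functions");

// Raise the timeout from the 60-second default to the 9-minute maximum.
exports.generate = functions
  .runWith({ timeoutSeconds: 540, memory: "1GB" })
  .https.onRequest(async (req, res) => {
    // ...call the OpenAI API here and send the result back...
    res.json({ result: "ok" });
  });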

If you’re interested in learning more, I wrote a post about it here: Overcoming the Vercel Timeout Problem with Firebase Functions

I just resolved this issue for a Next.js app I’m working on that is deployed on the Vercel hobby tier.
(After…far too long of digging and debugging.)

Check out issue #1174 at netlify/next-runtime/issues on GitHub.
(Can’t post links, sorry.)

tl;dr:

Do not:

import { Thing } from '@place';

Do:

import Thing from '@place/Thing';

wherever possible.

Material UI icon imports were apparently the biggest issue.
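For example, assuming Material UI v5’s @mui/icons-material package (the post doesn’t name a version, so adjust to yours):

// Avoid: a named import from the package root can pull the whole icon barrel into the bundle.
// import { Delete } from '@mui/icons-material';

// Prefer: a path import that only brings in the single icon module.
import DeleteIcon from '@mui/icons-material/Delete';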

1 Like

Thanks for stopping by to let us know. Hope you stick around.

2 Likes

Try this: check your OpenAI API usage.

In my case, there were 429 errors in the Vercel logs.

The reason for the 429 errors was that API usage had exceeded the limits.