Whisper API fails on "large" ogg files (still below 25MB)

For the last few days I’ve been unable to upload “larger” OGG (Opus) files through the Whisper API. I could swear this was working fine just a week ago, and the odd part is that it still works flawlessly for files below 12MB. So I thought I’d ask here whether anybody else has experienced the same lately? Just to clarify: all of the files are below 25MB.

I’m using Node.js and I’ve tried both the openai library from npm and plain node-fetch requests. It’s nearly impossible to troubleshoot because I get no error code or details - it just fails. Here is the error I’m getting when using the openai library:

APIConnectionError: Connection error.
at OpenAI.makeRequest (/usr/src/app/node_modules/openai/core.js:292:19)
at async exports.performTranscription (/usr/src/app/utils/performTranscription.js:13:18)
at async exports.performTranscription (/usr/src/app/transcribe.js:473:26)
status: undefined,
headers: undefined,
error: undefined,
code: undefined,
param: undefined,
type: undefined,
cause: FetchError: request to https://api.openai.com/v1/audio/transcriptions failed, reason: read ECONNRESET
at ClientRequest.&lt;anonymous&gt; (/usr/src/app/node_modules/node-fetch/lib/index.js:1501:11)
at ClientRequest.emit (node:events:518:28)
at TLSSocket.socketErrorListener (node:_http_client:500:9)
at TLSSocket.emit (node:events:530:35)
at emitErrorNT (node:internal/streams/destroy:169:8)
at emitErrorCloseNT (node:internal/streams/destroy:128:3)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
type: 'system',
errno: 'ECONNRESET',
code: 'ECONNRESET'

And this is the output I get in the console when transcribing through a plain node-fetch request:

{
  "error": {
    "message": "The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID xx in your email.)",
    "type": "server_error",
    "param": null,
    "code": null
  }
}

What could possibly be the problem here? Is anybody else experiencing the same, and how can I fix it? I’ve tried contacting customer support, but all I got was what I assume is an AI-generated response telling me to improve my retry logic… I have > $100 in credit balance, so it can’t be because of a token limit.

And most importantly - any workarounds on this?
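For reference, “improve your retry logic” would presumably mean wrapping the request in something like the exponential-backoff helper below. This is a generic sketch of my own (`withRetry` is not a name from the openai library); it retries on any thrown error, which covers both the `ECONNRESET` and the 500 responses shown above:

```javascript
// Generic exponential-backoff retry wrapper (my own sketch, not from any library).
// fn is an async function that performs the request; each failed attempt
// waits baseMs * 2^attempt before trying again.
async function withRetry(fn, { attempts = 4, baseMs = 1000 } = {}) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        // back off: 1s, 2s, 4s, ... before the next attempt
        await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** i));
      }
    }
  }
  throw lastErr; // all attempts failed - surface the last error
}
```

In my case, though, retries don’t seem to change the outcome, which is why I’m asking about other workarounds.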

I suspect that Opus-encoded OGG lets you pack unanticipated lengths of audio under the size cap, and that OpenAI tightened some internal timeout for how long the API endpoint will wait for model completion.

It is thus the generation time that could be causing this, unless they simply placed a 100-minute limit or similar.

The only documented limit is the file size, but hours and hours of audio packed into one ultra-compressed file can obviously have an extreme generation time and hit some internal limitation. It’s a bug in the sense that it worked before, and now it doesn’t.

Since transcriptions of this length approach $1 and a failure early in the audio can ruin the whole job, it makes sense to chunk the audio at detected silences, scanning for them starting around the 10-minute mark. You can then finish in roughly a tenth of the time by sending the chunks in parallel.
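A minimal sketch of the split-point selection, assuming you have already detected silences with something like ffmpeg’s `silencedetect` filter (`ffmpeg -i input.ogg -af silencedetect=noise=-35dB:d=0.5 -f null -`) and parsed the `silence_start` timestamps from its output. The function name is mine, not part of any library:

```javascript
// Sketch: choose cut points at detected silences so each chunk is roughly
// targetChunkSec long and no cut lands mid-word. silenceStarts is an
// ascending array of silence timestamps in seconds.
function chooseSplitPoints(silenceStarts, targetChunkSec = 600) {
  const splits = [];
  let nextTarget = targetChunkSec; // start scanning around the 10-minute mark
  for (const t of silenceStarts) {
    if (t >= nextTarget) {
      splits.push(t); // first silence at or after the target becomes a cut
      nextTarget = t + targetChunkSec;
    }
  }
  return splits;
}
```

Each chunk can then be extracted with `ffmpeg -ss`/`-to` at those timestamps and posted to `/v1/audio/transcriptions` inside a `Promise.all`, so the chunks transcribe in parallel.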


Right - I see. That might well be the case then. However, the audio was only 1 hour and 52 minutes long, so it’s not that extreme. But I’ll try another approach.