Hosting the Whisper model on a Vultr machine

Hi,

I am using the https://api.openai.com/v1/audio/transcriptions endpoint with the whisper-1 model to transcribe audio files in n8n, but the 25 MB file size limit is making things difficult.

My question is: if I host the open-source Whisper model somewhere, such as on a Vultr machine, will that limit go away?

I don’t want to go through the steps of chunking the audio just to stay under the 25 MB limit, which is why I thought of hosting the model on a server instead.

Hi and welcome back to the community!

There is another way to work around the 25 MB limit: re-encoding the audio files so they come in well under the cap.

Here is a post from @_j with a quick overview for you to try.
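Below is a minimal sketch of the re-encoding idea, assuming ffmpeg is installed and on your PATH. The file names, bitrate, and sample rate here are illustrative choices, not values taken from the linked post; speech usually survives a low-bitrate mono Opus/OGG encode well enough for transcription, and the resulting files are tiny.

```python
# Minimal sketch: shrink an audio file with ffmpeg before sending it for
# transcription. Assumes ffmpeg is installed; all parameters are examples.
import subprocess
from pathlib import Path


def shrink_audio(src: str, dst: str | None = None, bitrate: str = "12k") -> str:
    """Re-encode to mono, 16 kHz, low-bitrate Opus in an OGG container."""
    dst = dst or str(Path(src).with_suffix(".ogg"))
    subprocess.run(
        [
            "ffmpeg",
            "-y",               # overwrite output if it already exists
            "-i", src,          # input file
            "-vn",              # drop any video stream
            "-ac", "1",         # downmix to mono
            "-ar", "16000",     # 16 kHz sample rate is plenty for speech
            "-c:a", "libopus",  # Opus codec
            "-b:a", bitrate,    # very low bitrate, e.g. 12 kbit/s
            dst,
        ],
        check=True,
    )
    return dst


if __name__ == "__main__":
    # Hypothetical input file name for illustration.
    out = shrink_audio("meeting_recording.m4a")
    print(f"Re-encoded file written to {out}")
```

In n8n you could run something like this in an Execute Command node (or pre-process the files before the workflow), then pass the re-encoded file to the transcription node as usual.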
