Hello. I have tried many ways to use the Whisper API in React Native and couldn't get it to work. I'm quite confused at this point and don't know what to do. Here's how far I've come: I recorded a sound with the react-native-audio-recorder-player package. The recording is stored at "file:///data/user/0/com.xyz.app/cache/sound.mp4". Now I want to send this mp4 file to OpenAI's Whisper API. At this stage I have tried everything I could think of (RNFS, blob, FormData, etc.) and got no results. Can someone please explain exactly what I should do at this point?
import OpenAI from "openai";

export const getCompletion5 = async (key) => {
  // v4 SDK: the client takes the options object directly (no Configuration wrapper)
  const openai = new OpenAI({ apiKey: key });

  // sound file path
  const filePath = "file:///data/user/0/com.asd.xyz/cache/sound.mp4";

  // how should I prepare this??
  let audioFile;

  try {
    const transcription = await openai.audio.transcriptions.create({
      file: audioFile,
      model: "whisper-1",
    });
    console.log(transcription.text);
    return transcription.text;
  } catch (error) {
    console.error("transcription error:", error);
  }
};
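
For context, here is a sketch of the direct-request approach I'm now considering: skip the SDK's file handling entirely and POST a multipart form straight to the transcription endpoint, relying on React Native's FormData accepting a { uri, name, type } object for local files. The function name transcribeWithFetch and the audio/mp4 MIME type are just my guesses, and I haven't verified this end to end:

// Sketch: POST the recording directly to the transcription endpoint with fetch.
// Assumes React Native's FormData accepts a { uri, name, type } file descriptor
// and that the recording really is audio/mp4.
export const transcribeWithFetch = async (key) => {
  const filePath = "file:///data/user/0/com.asd.xyz/cache/sound.mp4";

  const form = new FormData();
  // In React Native, a local file is appended as a plain object with uri/name/type.
  form.append("file", {
    uri: filePath,
    name: "sound.mp4",
    type: "audio/mp4",
  });
  form.append("model", "whisper-1");

  try {
    const response = await fetch("https://api.openai.com/v1/audio/transcriptions", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${key}`,
        // Don't set Content-Type manually; fetch adds the multipart boundary itself.
      },
      body: form,
    });
    const json = await response.json();
    console.log(json.text);
    return json.text;
  } catch (error) {
    console.error("transcription error:", error);
  }
};

If this is the right direction, I would call it as await transcribeWithFetch(key) and expect the JSON response to carry a text field with the transcription, but I'm not sure whether it's the file descriptor, the MIME type, or something else that's tripping things up.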