Here is a Realtime Voice API plugin for Unreal Engine and all-talking 3D MetaHumans

Thanks, it compiles now.

So you've only shared the part of the Blueprint that comes after the Open AI Call Realtime node. I'm guessing it looks something like this, but I think I'm missing something.

After the Open AI Call Realtime node, mine is the same as yours; I'm asking about the part that comes before it.

What have you tried so far?

Are you getting errors of some sort?

Thanks, @robertb for all the assistance!

No, it's not about an error.

I'm just not sure how to complete the Blueprint. Robert only shared half of it, and I'm trying to guess how the other half works.

If Robert could share the whole Blueprint and explain it, that would be amazing.

No problem, here is the whole Blueprint with everything needed.


Nice, but I really don't get where you send your voice message to OpenAI.

The audio capture and sending of your voice to the OpenAI Realtime API are built into the plugin, in the OpenAIAudioCapture class. So if you have chosen the real mic as the main mic input in the Windows Sound settings, it should start sending your voice to the API when you press Play in the Unreal Editor: OpenAI-Api-Unreal/Source/OpenAIAPI/Private/OpenAIAudioCapture.cpp at main · rbjarnason/OpenAI-Api-Unreal · GitHub

There is also still plenty of debug logging from the plugin in the console; you should be able to see there what is going on.
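
For reference, here is a much-simplified sketch of the idea, not the plugin's actual code (see the linked OpenAIAudioCapture.cpp for the real implementation): UAudioCapture inherits UAudioGenerator, so you can register a generator delegate that hands you raw float samples from the default Windows mic, then forward each block to whatever encodes and sends it over the Realtime API WebSocket.

```cpp
// Much-simplified sketch of the capture path; the real OpenAIAudioCapture
// class does more (device handling, buffering, encoding).
#include "CoreMinimal.h"
#include "AudioCapture.h" // UAudioCapture, from the "AudioCapture" module

DECLARE_MULTICAST_DELEGATE_OneParam(FOnAudioBufferReady, const TArray<float>& /*Samples*/);

class FMicForwarderSketch
{
public:
    // Fired with each block of float PCM samples; the plugin's WebSocket
    // code would encode these and send them to the Realtime API.
    FOnAudioBufferReady OnAudioBufferReady;

    void Start(UAudioCapture* Capture)
    {
        // Called on the audio thread with raw samples from whatever device
        // Windows currently has selected as the default microphone.
        Capture->AddGeneratorDelegate([this](const float* InAudio, int32 NumSamples)
        {
            TArray<float> Buffer(InAudio, NumSamples);
            OnAudioBufferReady.Broadcast(Buffer);
        });

        Capture->OpenDefaultAudioStream(); // default = Windows-selected mic
        Capture->StartCapturingAudio();
    }
};
```

This is also why switching the default input device in Windows (e.g. to an iPhone mic) changes what gets streamed: the capture simply follows the system default.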

It works at the beginning of the program, but later I think my laptop's fans make a lot of noise, so it's not able to recognize my voice in the raw input. Could this be the problem?

Okay, I made it work: I set my iPhone as the microphone, so it works now.

THANK YOU A LOT, ROBERT!


Great news! This was actually my first Unreal Blueprint, and I relied heavily on online communities - especially the developer of the open-source Runtime Audio Importer - for guidance. Really glad I could pay it forward! :blush:



This is the system output, connected to the Cable Input.

and


In Audio2Face, the input device is set to the Cable Output. Lastly, the output device of Audio2Face is the laptop's speaker. Am I doing it as you said?
The mic input of Unreal Engine is my phone.

This way I am not able to hear anything that is coming from Unreal Engine.

You need VB-Audio VoiceMeeter to route the Cable Output back to the speakers; in its output options you can also set a 300 ms delay on the audio to have it match the lip sync from Audio2Face.

And the trick is to set the Cable Output as the mic input to start Audio2Face, but then, before pressing Play in Unreal, to change the mic back to the iPhone mic.


I did exactly what you did,

but I still have a problem: it's not taking the voice from the Cable Output. However, I can now hear the output coming from the chat.

Can I see what you connected to Audio2Face as the input and the output?

As I've mentioned, this is a bit of a hack; the proper way would be to stream the Blueprint SoundWaves as gRPC streams into headless A2F, but I have not done that yet and didn't need it for my demo.

There is a timing issue involved in getting this to work with the Windows Sound cable settings:

  1. Start Audio2Face and choose Streaming Player, but don't press Record (or set it to Live) yet.
  2. Set the Windows Sound input to “Cable Output”.
  3. Press Record and set the A2F input to Live. It will be silent, but it will have locked A2F to the “Cable Output”, which is where, with this patching, the Unreal audio out will be coming from.
  4. Start Unreal Engine.
  5. Set the Windows Sound input back to “Microphone” instead of “Cable Output”; now the mic is active for the Blueprint to stream to the OpenAI Realtime API.
  6. Press Play.
  7. Open VoiceMeeter and make sure the Cable Output is also routed to the speaker hardware on the computer.

One more thing: in the Unreal Editor I had to cap the FPS to 25, otherwise there is no GPU headroom left for Audio2Face to work properly, which makes the lip sync very slow. And this is on a brand-new desktop with a top-of-the-line Core i9 CPU, 64 GB of RAM, and an RTX 4090 graphics card.
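
As a side note, beyond the editor UI, the cap can be set with the standard t.MaxFPS console variable, either by typing t.MaxFPS 25 into the console or from code; a minimal sketch:

```cpp
// Cap the frame rate so Audio2Face keeps enough GPU headroom.
// t.MaxFPS is a standard Unreal console variable; typing "t.MaxFPS 25"
// into the editor console does the same thing as this code.
#include "HAL/IConsoleManager.h"

void CapFrameRate(float MaxFPS)
{
    if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(TEXT("t.MaxFPS")))
    {
        CVar->Set(MaxFPS); // e.g. 25.0f
    }
}
```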


The connections are correct now, but the problem is that Audio2Face doesn't get the whole voice; it only detects it part by part. What could be the reason for this?

Not sure exactly what you mean, but I also have the LIVE button pressed; I've not tried to record a whole session, just run it live. There is an issue where the VAD can detect the AI's voice as the user's voice and interrupt, cutting off its own voice; not sure if that is your issue? Here is my comment on it in another thread: [Realtime API] Audio is randomly cutting off at the end - #35 by robertb
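
If the VAD interruption is the issue, one thing worth experimenting with (my speculation, not something the plugin does today) is tuning the Realtime API's server-side VAD via a session.update event on the same WebSocket, e.g. raising the threshold so speaker bleed is less likely to register as user speech. A sketch using Unreal's WebSockets module; the numbers are illustrative, not recommendations:

```cpp
// Illustrative only: make the Realtime API's server-side VAD less
// trigger-happy so the AI's own audio is less likely to count as speech.
// Assumes an already-connected IWebSocket (UE "WebSockets" module).
#include "IWebSocket.h"

void SendVadTuning(const TSharedRef<IWebSocket>& Socket)
{
    // session.update is a standard Realtime API client event.
    const FString Event = TEXT(R"({
        "type": "session.update",
        "session": {
            "turn_detection": {
                "type": "server_vad",
                "threshold": 0.8,
                "prefix_padding_ms": 300,
                "silence_duration_ms": 700
            }
        }
    })");

    Socket->Send(Event);
}
```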

Alright Robert, it works perfectly. The voice issue was about the FPS cap; I decreased it to 20 fps and now it's fine. So the real question is: how would this work when I package it into an .exe file? Would it still be connected to Audio2Face? When I run the .exe, would it automatically connect to Audio2Face, or what setup would I need? Got any ideas?


Great, happy you got it to work. I've not thought about how to deploy this in an easier way, but if we took this setup to a production stage, we'd for sure add gRPC streaming to the Blueprint, stream the audio into Audio2Face over gRPC, and run it in headless mode on a separate computer or in the cloud.


Hi Robert, well done!!

A bit off topic: have you seen the new UE 5.5 audio-to-lipsync plugin? Any thoughts about integrating it with the ChatGPT plugin?
Thanks,
Amir


Thanks! The UE 5.5 MetaHuman lip sync looks cool, but I'm not sure it works in realtime; I think not.
