Thanks, it compiles now.
So you have only shared the part of the Blueprint that comes after the OpenAI Call Realtime node. I am guessing it looks something like this, but I think I am missing something.
Everything after the OpenAI Call Realtime node is the same as yours; I am asking about the part before it.
What have you tried so far?
Are you getting errors of some sort?
Thanks, @robertb for all the assistance!
No, it's not about getting an error.
I'm just not sure how to complete the Blueprint part. Robert only shared half of the Blueprint, and I am trying to guess how the other half works.
If Robert could share the whole Blueprint and explain it, that would be amazing.
Nice, but I really don't get where you send your voice to OpenAI.
Capturing your voice and sending it to the OpenAI Realtime API is built into the plugin, in the OpenAIAudioCapture class. So if you have chosen the real mic as the default input device in Windows Sound settings, it should start sending your voice to the API when you press Play in the Unreal Editor: OpenAI-Api-Unreal/Source/OpenAIAPI/Private/OpenAIAudioCapture.cpp at main · rbjarnason/OpenAI-Api-Unreal · GitHub
There is also still plenty of debug logging from the plugin in the console; you should be able to see there what is going on.
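For reference, what that capture code ultimately has to do is fairly simple: the Realtime API's default input format is 16-bit PCM at 24 kHz, base64-encoded and sent as `input_audio_buffer.append` events over the websocket (that part is from the API docs). The sketch below only illustrates that framing; it is not the plugin's actual code, so check OpenAIAudioCapture.cpp for the real implementation:

```cpp
// Illustrative sketch only: frame captured float samples as a Realtime API
// "input_audio_buffer.append" event (base64-encoded 16-bit PCM).
// The event name and audio format come from the Realtime API docs;
// the function name and structure here are hypothetical.
#include "CoreMinimal.h"
#include "Misc/Base64.h"

FString MakeAudioAppendEvent(const TArray<float>& CapturedSamples)
{
    // Convert float samples in [-1, 1] to little-endian 16-bit PCM.
    TArray<uint8> Pcm16;
    Pcm16.Reserve(CapturedSamples.Num() * 2);
    for (float Sample : CapturedSamples)
    {
        const int16 Value = (int16)(FMath::Clamp(Sample, -1.0f, 1.0f) * 32767.0f);
        Pcm16.Add((uint8)(Value & 0xFF));
        Pcm16.Add((uint8)((Value >> 8) & 0xFF));
    }

    // Base64-encode and wrap in the Realtime API event envelope,
    // ready to be sent over the websocket connection.
    return FString::Printf(
        TEXT("{\"type\":\"input_audio_buffer.append\",\"audio\":\"%s\"}"),
        *FBase64::Encode(Pcm16));
}
```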
It works at the beginning of the program, but later on I think my laptop's fans make a lot of noise, so it's not able to recognize my voice in the raw audio. Could this be the problem?
Okay, I made it work: I made my iPhone the microphone, so it works now.
THANK YOU A LOT, ROBERT!
Great news! This was actually my first Unreal Blueprint, and I relied heavily on online communities - especially the developer of the open-source Runtime Audio Importer - for guidance. Really glad I could pay it forward!
And this way I am not able to hear anything that is coming from Unreal Engine. What can I do?
You need VB-Audio VoiceMeeter to route the Output cable back to the speakers; in its output options you can also set a 300 ms delay on the audio to have it match the lip sync from Audio2Face.
And the trick is to set the Output cable as the mic input when starting Audio2Face, but then, before pressing Play in Unreal, to change the mic back to the iPhone mic.
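My reading of that routing, as a rough signal chain (the device names depend on how you have things set up in Windows Sound settings and VoiceMeeter):

```
iPhone mic -> Windows default input -> Unreal (OpenAIAudioCapture) -> OpenAI Realtime API
AI voice reply -> Unreal audio output -> VB-Cable
VB-Cable -> VoiceMeeter (+300 ms delay) -> speakers
VB-Cable -> Audio2Face audio input (selected before switching the default mic back to the iPhone)
```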
I did exactly what you did, but I still have a problem: it's not taking the voice from the cable output, though now I can hear the output coming from the chat.
Can I see what you connected to Audio2Face as the input and the output?
As I've mentioned, this is a bit of a hack; the proper way would be to stream the Blueprint SoundWaves as gRPC streams into headless Audio2Face, but I have not done that yet and didn't need it for my demo.
There is a timing issue involved in getting this to work in the Sound settings for the cable.
One more thing: in the Unreal Editor I had to cap the frame rate to 25 fps, otherwise there is no GPU headroom left for Audio2Face to work properly, making the lip sync very slow. And this is on a brand-new desktop with a top-of-the-line Core i9 CPU, 64 GB of RAM, and an RTX 4090 graphics card.
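If you want to set the same cap, `t.MaxFPS` is the standard Unreal console variable for this: you can type `t.MaxFPS 25` into the editor console, or set it from code along these lines (the function wrapper is just for illustration):

```cpp
// Cap the frame rate to leave GPU headroom for Audio2Face.
// t.MaxFPS is a standard Unreal console variable; this just sets it from C++.
#include "HAL/IConsoleManager.h"

void CapFrameRate(float MaxFPS)
{
    if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(TEXT("t.MaxFPS")))
    {
        CVar->Set(MaxFPS, ECVF_SetByConsole);
    }
}
```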
Not sure exactly what you mean, but I also have the LIVE button pressed; I've not tried to record a whole session, just doing this live. There is an issue where the VAD can detect the AI's voice as the user's voice, so it interrupts and cuts off its own voice; not sure if that is your issue? Here is my comment on it in another thread: [Realtime API] Audio is randomly cutting off at the end - #35 by robertb
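If the VAD picking up the AI's own voice is the problem, one thing worth trying is making the server-side VAD less sensitive with a `session.update` event. The `turn_detection` fields are documented Realtime API parameters; the values below are just a starting point, and the helper assumes you already have the plugin's websocket connection at hand:

```cpp
// Make server-side VAD less likely to trigger on bleed-through of the AI's voice.
// "session.update" and the turn_detection fields are documented Realtime API
// events/parameters; the specific values are just a starting point to tune.
#include "IWebSocket.h"

void TuneServerVad(TSharedRef<IWebSocket> Socket)
{
    const FString SessionUpdate = TEXT(
        "{\"type\":\"session.update\",\"session\":{\"turn_detection\":"
        "{\"type\":\"server_vad\",\"threshold\":0.8,"
        "\"prefix_padding_ms\":300,\"silence_duration_ms\":700}}}");
    Socket->Send(SessionUpdate);
}
```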
Alright Robert, it works perfectly. The voice issue was about the FPS cap; I decreased it to 20 fps and now it's fine. So the real question is: how would this work when I package it and get an .exe file? Would it still be connected to Audio2Face? When I run the .exe file, would it automatically connect to Audio2Face, or what setup should I do to make that happen? Got any ideas?
Great, happy you got it to work. I've not thought about how to deploy this in an easier way, but if we take this setup to the production stage, we'd for sure add gRPC streaming to the Blueprint, stream the audio into Audio2Face over gRPC, and run it in headless mode on a separate computer or in the cloud.
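For anyone exploring that route: Audio2Face ships a gRPC streaming sample (an audio2face.proto plus a Python test client) for pushing PCM audio into a headless instance. Below is a very rough C++ sketch of what such a client could look like; the namespace, service, message, and field names are assumptions based on NVIDIA's sample and must be checked against the proto that ships with your Audio2Face install:

```cpp
// HYPOTHETICAL sketch of streaming PCM audio into headless Audio2Face over gRPC.
// Assumes stubs generated from NVIDIA's audio2face.proto sample; the exact
// service/message/field names (PushAudioStream, start_marker, ...) may differ,
// so verify against the proto bundled with your Audio2Face install.
#include <algorithm>
#include <vector>
#include <grpcpp/grpcpp.h>
#include "audio2face.grpc.pb.h"  // generated from NVIDIA's sample proto

void StreamAudioToA2F(const std::vector<float>& Samples, int SampleRate)
{
    auto Channel = grpc::CreateChannel("localhost:50051",
                                       grpc::InsecureChannelCredentials());
    auto Stub = nvidia::audio2face::Audio2Face::NewStub(Channel);

    grpc::ClientContext Context;
    nvidia::audio2face::PushAudioStreamResponse Response;
    auto Writer = Stub->PushAudioStream(&Context, &Response);

    // First message: header naming the A2F streaming player and the sample rate.
    nvidia::audio2face::PushAudioStreamRequest Header;
    Header.mutable_start_marker()->set_instance_name("/World/audio2face/PlayerStreaming");
    Header.mutable_start_marker()->set_samplerate(SampleRate);
    Writer->Write(Header);

    // Subsequent messages: raw float32 PCM in chunks.
    const size_t ChunkSamples = 4096;
    for (size_t i = 0; i < Samples.size(); i += ChunkSamples)
    {
        const size_t Count = std::min(ChunkSamples, Samples.size() - i);
        nvidia::audio2face::PushAudioStreamRequest Chunk;
        Chunk.set_audio_data(Samples.data() + i, Count * sizeof(float));
        Writer->Write(Chunk);
    }
    Writer->WritesDone();
    Writer->Finish();
}
```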
Hi Robert, well done!!
A bit off topic: have you seen the new UE 5.5 audio-to-lip-sync plugin? Any thoughts about integrating it into the ChatGPT plugin?
Thanks,
Amir
Thanks! The UE 5.5 MetaHuman lip sync looks cool, but I'm not sure it works in real time; I think not.