Hello everyone!
I’m excited to share a project that I’ve been working on called AIUI.
AIUI is a platform designed to enable seamless two-way verbal communication with artificial intelligence. It aims to bridge the gap between human users and advanced AI, making it easier than ever to interact with AI in a natural, conversational manner.
To give you a better idea of what AIUI is all about, I’ve put together a short demo video:
AIUI is open-source and hosted on GitHub (lspahija/AIUI). I'm actively seeking feedback, suggestions, and contributions from the community to help improve the platform and shape its future development.
If this interests you, I invite you to check it out, try it for yourself, and give it a star if you find it useful! Also, please feel free to share your feedback, ideas, or any issues you encounter - every bit of input helps us make AIUI better.
Looking forward to hearing your thoughts and seeing what we can build together!
Thank you!
I haven't tried it yet, but the video looks pretty cool.
- Does it handle other languages?
- Is it "easy" to also display the text on screen as it's spoken?
- Can you choose a custom voice for the assistant?
I’m interested in two way (or some way) image based communication which could make things even more seamless if anyone wants to brainstorm
Hi Luka. I have been fighting with Python for over a week doing just that. I had issues with many sound libraries, with input and output, and with knowing when to stop recording (i.e., detecting when speech has stopped). This looks super interesting. I'll download it and let you know of any feedback.
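For anyone stuck on the same "when has speech stopped?" problem, a common approach is energy-based endpointing: treat N consecutive low-energy frames as the end of an utterance. Here is a minimal sketch of that idea; the threshold and frame counts are made-up assumptions for illustration, not AIUI's actual values:

```python
# Energy-based end-of-speech detection sketch (illustrative values only).

def rms(frame):
    """Root-mean-square energy of one frame of PCM samples."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def utterance_finished(frames, silence_threshold=50.0, trailing_silent_frames=20):
    """True once the last `trailing_silent_frames` frames are all quiet."""
    if len(frames) < trailing_silent_frames:
        return False
    return all(rms(f) < silence_threshold for f in frames[-trailing_silent_frames:])
```

In practice you would tune the threshold to your mic's noise floor, or use a proper VAD library instead of raw RMS.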
====
After downloading it and inspecting the code: the speech recording and sound playback are done in the front end, which makes sense and is, I guess, much easier than doing it in Python.
====
Amazing! It works really well! I was able to run it without much trouble.
The only thing I would try to improve is handling speech hiccups: I paused to think for a second and the sentence was already processed. And I'm not sure what happens if I speak while it is speaking. This is a very common issue when talking to AI bots.
I might change the prompt and condition the model on whether the sentence looks complete. If it doesn't, have it return some keyword; then you know to process the next audio chunk and append it to the current prompt.
If you do that, the conversation might feel more natural even when I stop to think for a second.
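The "completion keyword" idea above can be sketched as a small buffering loop. Everything here is hypothetical (the sentinel string, the `ask_model` callback) and not part of AIUI's code; it just shows the control flow: the prompt asks the model to return a sentinel when the transcribed sentence looks unfinished, and the client keeps accumulating transcripts until it gets a real reply.

```python
# Buffer partial utterances until the model judges the sentence complete.

INCOMPLETE = "<INCOMPLETE>"  # keyword the prompt would ask the model to return

def converse(transcripts, ask_model):
    """Feed transcript chunks one by one; accumulate fragments until
    `ask_model` returns something other than the sentinel."""
    buffer = ""
    replies = []
    for chunk in transcripts:
        buffer = (buffer + " " + chunk).strip()
        reply = ask_model(buffer)
        if reply == INCOMPLETE:
            continue          # sentence looks unfinished: keep listening
        replies.append(reply)
        buffer = ""           # sentence handled, start fresh
    return replies
```

In a real integration, `ask_model` would wrap the chat completion call with the conditioning prompt prepended.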
====
Just one issue that you should clarify in the docs: when running the Docker container, the OPENAI_API_KEY parameter must have no quotes (" or ') around it - just OPENAI_API_KEY=sk-… That took me some time to figure out. Maybe if the string has quotes, you could strip them? That would save a lot of people a lot of time…
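The suggested fix is easy to sketch: tolerate quoted keys by stripping one pair of matching surrounding quotes from the environment variable before use. This is just the idea, not AIUI's actual configuration code:

```python
# Strip a single pair of matching surrounding quotes from an API key,
# so both OPENAI_API_KEY=sk-xxx and OPENAI_API_KEY="sk-xxx" work.
import os

def clean_api_key(raw):
    key = raw.strip()
    if len(key) >= 2 and key[0] == key[-1] and key[0] in ("'", '"'):
        key = key[1:-1]
    return key

def get_openai_api_key():
    return clean_api_key(os.environ.get("OPENAI_API_KEY", ""))
```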
Hi @Luka_Spahija - awesome work, and I agree that the future of interfaces is going to change. Your work is very relevant to a use case I have been working on. I would love to collaborate and see how I can contribute. I'm not sure how the forum works, but if you can direct message me, that would be great.
Btw, I am new to this community!
Would also love to help - this is useful for my use cases too.
But I'm not that good with Python, so let me know how I can contribute otherwise.
Hi, I installed it and it reacts to my mic, but I can't get any sound out of it. Any ideas how to fix this?
It was an improper configuration on my end. Once that was fixed, it works really well. I think it would be nice to have a way to stop it from auto-detecting speech, especially since the free quota is so cheap. Can't test it too much cuz I'm broke lol.
Will plan some specific tests and use it - I have an idea of maybe using it to interact with Alexa.