Nice! You’re making some good progress.
That’s extremely promising.
When I first heard of transformers and the attention mechanism, I thought this would be a great way of generating dialogue and having the NPCs reply based on information they have about themselves and the world.
Giving the player the ability to say their own lines really adds to their agency over their actions.
Well done!
Next step for me would be fine-tuning on a world of info!
@RatherGood I should donate some fantasy RPG info to you to test out. Any interest in collaborating?
Perhaps; I am crashing through this blindly.
Hit me up if you’re interested…
Here’s what I was working on late last year… Gave up after a bit, though, and it wasn’t connected to avatars like you’ve done!
I do think with fine-tuning now, you could use that to help create more cohesive output.
Hey! Very nice work! The quests and gameplay are very well integrated!
I tried to implement something more or less similar in Unreal Engine, though I still haven’t figured out how to reduce the delay for animated answers to a few seconds.
Feel free to check it out!
That is looking really cool. A faster, fine-tuned model may be the answer to reducing the delay.
Nice! What model are you using? Davinci can be slow with a big prompt.
I’ve fired up UE4 again recently, and might play around with this soon…
I am using Davinci. I believe that the problem doesn’t come from GPT-3 but rather from the AI voice generation and lip-sync animation.
Yes, that is very true. These apps are unsustainable for large-scale deployment at the moment. The carbon footprint would be too big, even for 100 users. But the produced heat could be distributed to houses… is there any mention of how much CO2 is emitted per GPT-3 token?
Been working on something similar as well:
All conversations are currently being powered by GPT-3, and I’m using Unity + Google text-to-speech to bring them into the game.
Wrote some of my thoughts on where NPC systems could go; I think GPT-3 (and other transformers like it) could power good chunks of that vision.
The biggest problems I’m currently thinking about are cost (I’d love it if there were some flat rate or mega bulk discount), how to cement things GPT-3 says into the game world (maybe Codex can help with this), and game engine code.
Nice job! Your AI seems better behaved than mine. Can’t imagine how many games will come out with the Synty artwork style.
Impressive, how did you get the response latency so low?
This is very impressive!
Look into multi-modal solutions involving Perceiver.
I would love to test it out.
Very impressive and fast. Did you use fine-tuning to reduce latency?
No fine-tuning was used in my examples. I don’t have access to the fine-tuning API for Davinci.
Does fine-tuning reduce latency? I didn’t think it did.
Sorry, my question was incomplete. I meant a fine-tuned Curie. Although not as creative as Davinci, Curie can also do a good job when given preliminary information (I tried it with prompting only; I didn’t use fine-tuning). Maybe it will work with enough dialogue examples.
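To show what I mean by preliminary information, here’s a minimal sketch of prepending world facts to the prompt before the dialogue, so a smaller model like Curie answers in character. The NPC name and facts here are made-up examples, not from any real game:

```python
# Made-up world facts the NPC should stay consistent with
# (the character and town are illustrative assumptions).
WORLD_INFO = (
    "You are Mira, the blacksmith of Oakvale. "
    "You sell swords and shields, and you distrust the baron."
)

def build_prompt(history, player_line):
    """Prepend world info and dialogue history, ending on the NPC's turn."""
    lines = [WORLD_INFO, ""]
    lines.extend(history)
    lines.append(f"Player: {player_line}")
    lines.append("Mira:")
    return "\n".join(lines)

prompt = build_prompt(
    ["Player: Hello!", "Mira: Welcome, traveler."],
    "Do you have any swords?",
)
print(prompt)
```

The resulting string would then be sent as the prompt of a completion request; ending on `Mira:` nudges the model to continue as the NPC.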
Oh that makes sense! I’ll have to try it out!
One thing I’m noticing, though, is that the latency of Davinci is negligible compared to the time it takes for Google’s text-to-speech to synthesize the audio.
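For anyone wanting to confirm where the time goes in their own pipeline, a minimal profiling sketch; the two stage functions here are stand-ins (assumptions) for the real completion and text-to-speech calls:

```python
import time

def time_stage(fn, *args):
    """Run one pipeline stage and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Stub stages standing in for the real API calls.
def generate_reply(prompt):
    return f"NPC reply to: {prompt}"

def synthesize_audio(text):
    return b"\x00" * len(text)  # placeholder audio bytes

reply, t_llm = time_stage(generate_reply, "Where is the blacksmith?")
audio, t_tts = time_stage(synthesize_audio, reply)
print(f"completion: {t_llm * 1000:.1f} ms, tts: {t_tts * 1000:.1f} ms")
```

Swapping the stubs for the actual completion and TTS calls makes it easy to see which stage dominates the delay before an NPC starts speaking.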