Great language model, but it is a shame that you do not have read aloud/text-to-speech fully integrated with the API, especially for the outputs. It is so much nicer to have a voice behind the text, and for visually impaired people it is much more comfortable. All the tech is in the Edge browser, but it does not work “naturally”.
I wonder if you are planning to add speech recognition for the inputs later down the road.
Having a pen pal is one thing; being able to talk with it is much better.
There are countless text-to-speech applications available, so you can use the OpenAI API to send text completions to any text-to-speech application and “abracadabra”, you have what you want. A rough sketch of that pipeline is below.
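For example, here is a minimal sketch of that idea in Python, assuming the `openai` package for the completions API and `pyttsx3` as a stand-in offline text-to-speech library (any TTS tool you prefer would work the same way), plus an `OPENAI_API_KEY` environment variable:

```python
# Sketch: get a text completion from the OpenAI API, then read it aloud.
# Assumes: pip install openai pyttsx3, and OPENAI_API_KEY set in the environment.
import os

import openai
import pyttsx3

openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the model for a completion (text-davinci-003 is just an example model name).
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain what a large language model is in two sentences.",
    max_tokens=100,
)
answer = response["choices"][0]["text"].strip()
print(answer)

# Hand the completion text to the speech engine so it is spoken out loud.
engine = pyttsx3.init()
engine.say(answer)
engine.runAndWait()
```

Swap `pyttsx3` for whatever text-to-speech application or cloud voice service you like; the only real step is passing the completion text along.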
Furthermore, if you need ChatGPT rather than the OpenAI API, you can wait a bit: OpenAI is planning to release a ChatGPT API, and you will then be able to easily integrate that “coming soon” ChatGPT API with any text-to-speech application you desire.