When can we expect the voices from Advanced Voice Mode to be made available in the API? Right now they are two different sets of voices. And when will tools be made available to control the voice's emotion, tone, intonation, etc.?
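For context, here is a minimal sketch of the current API surface, assuming the OpenAI Python SDK: the speech endpoint only accepts a fixed `voice` preset, with no parameters for emotion, tone, or intonation, which is exactly the gap being asked about.

```python
# Minimal sketch (assumes the OpenAI Python SDK and an OPENAI_API_KEY
# environment variable). Illustrates that the current speech endpoint
# exposes only a fixed voice preset, not the Advanced Voice Mode voices
# and no emotion/tone/intonation controls.
from openai import OpenAI

client = OpenAI()

response = client.audio.speech.create(
    model="tts-1",   # standard TTS model, distinct from Advanced Voice Mode
    voice="alloy",   # one of the fixed preset voices
    input="Hello! This is a test of the text-to-speech API.",
)
response.write_to_file("speech.mp3")
```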