My dream setup would be to use my phone for an advanced-voice-mode conversation that remote-controls the development of a project: the AI agent would interface with VS Code, the codex CLI, or something similar to give instructions, monitor results, and report back.
This would effectively free me from my screen and make me much more mobile during development.
For this to work, advanced voice mode needs some way to execute commands (through MCP, actions, or a similar mechanism).
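To make the execution side concrete, here is a minimal sketch of the kind of tool such an agent could call. The function name `run_dev_command` and the plain-subprocess transport are my own assumptions for illustration; a real setup would expose something like this as an MCP tool or a function-calling endpoint rather than raw shell access.

```python
import subprocess

def run_dev_command(args, timeout=60):
    """Run a development command (e.g. a codex CLI invocation) and
    return a compact summary the agent can relay back over voice.
    `args` is a list like ["git", "status"] to avoid shell injection."""
    result = subprocess.run(
        args, capture_output=True, text=True, timeout=timeout
    )
    return {
        "command": " ".join(args),
        "returncode": result.returncode,
        # Truncate output so the spoken summary stays manageable.
        "stdout": result.stdout[-2000:],
        "stderr": result.stderr[-2000:],
    }
```

A voice agent would call this with whatever command it decides on, then summarize the `returncode` and tail of the output aloud. In practice you would also want an allowlist of permitted commands before wiring this to anything that can speak.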
Has anyone attempted to make this work?
So, you want "voice-to-code"? Most devs I know can type faster than they speak; in fact, many devs have a hard time speaking. And what about foreign languages?
Agentic coding means you give instructions to an AI that translates them into code edits. Most developers already do this through Copilot, codex, etc., and writing code line by line is becoming less common. What I want is to take this one step further and remote-control that process through advanced voice mode.
If you are serious about this, I suggest that you develop a prototype. That’s how to get something done.
Of course, but I can save some time if anyone has tried it and can share their insights.