Hi,
I’m building Piwi, a local AI agent that turns natural-language instructions into executable scripts.
Current setup:
- Uses the OpenAI API for script generation only.
- Scripts execute locally inside an isolated environment (Ubuntu WSL).
- Package installs are confined to the sandbox.
- User outputs are written to the host OS (Desktop / Documents / explicit paths).
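For concreteness, the generate → extract → sandbox loop can be sketched roughly like this. This is a minimal sketch under my own assumptions, not Piwi's actual code: the WSL distro name `piwi-sandbox`, the helper names, and the stubbed model response are all hypothetical, and the real agent would obtain `response` from an OpenAI API call rather than a literal string.

```python
import re
import subprocess

def extract_script(response_text: str) -> str:
    """Pull the first fenced code block out of a model response."""
    match = re.search(r"```(?:\w+)?\n(.*?)```", response_text, re.DOTALL)
    if not match:
        raise ValueError("no code block found in model response")
    return match.group(1)

def run_sandboxed(script: str, timeout: int = 60) -> subprocess.CompletedProcess:
    """Execute a script inside the WSL sandbox.

    Hypothetical setup: assumes an Ubuntu WSL distro named 'piwi-sandbox'
    exists; the script is fed to bash over stdin so nothing touches the
    host shell.
    """
    return subprocess.run(
        ["wsl", "-d", "piwi-sandbox", "bash", "-s"],
        input=script, capture_output=True, text=True, timeout=timeout,
    )

# Stubbed model response; the real agent would get this from the
# OpenAI chat completions API instead of a hardcoded string.
response = "Here is the script:\n```bash\necho hello\n```\n"
script = extract_script(response)
print(script)
```

A timeout and captured output are worth having even in a sketch: they keep a runaway generated script from hanging the agent and let you log stderr back to the model for a retry.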
Next steps include:
- porting the execution layer to macOS,
- exploring a mobile-compatible model (restricted execution, no shell access).
I’m looking for feedback on:
- this agent pattern (LLM → script → sandbox),
- portability concerns across OSes,
- best practices when using the OpenAI API in local-execution agents.
Thanks.
