I would like to propose the development of a fully AI-powered operating system for PC. My vision is an OS that can instantly detect what kind of activity I’m doing—like gaming, productivity, or communication—and automatically adjust system resources in real time. The AI would also act as a smart assistant by adding meetings from real time chat or email directly to my calendar, organizing files based on natural language requests (such as “find my vacation photos from 2015” or “gather all my payslips as PDFs on the desktop”), and handling many other time-saving tasks. The goal is to have a truly adaptive and intelligent operating system that streamlines everyday PC use.
Welcome to the forum!
Have you seen
https://openai.com/index/introducing-chatgpt-atlas/
Does it sort of remind you of Chrome and ChromeOS?
See the missing part? (Rhetorical question)
Sounds like what you need is Windows 11. MS is creating an agentic OS.
If one pushes the idea further than the OS they would possibly create a user computer with hardware specific to AI.
Years ago they created Lisp machines
which was also one of the AI winters.
Lisp is one of the original AI programming languages.
For an AI OS you simply need an Interface… To an AI, a computer OS is just another computer OS…
You’d need to find some bright SPARK with a great design I guess. One that was totally modular.
AI is programmatic, not functional… By this I mean existing OSs are created with FIXED functionality… AI is LIVE: it evolves, it writes itself.
An AI OS would need an interface to capture that evolution.
I don’t know if that interface exists yet but what do you see an AI OS Interface looking like? Remember it would need to work on all monitors/displays to be generic, even in a command line mode like DOS with text.
How is your AI OS actually structured?
If helpful, I’ve discussed how my method evolved from CLI → browser → tree-based UI here:
I’ve been working on a project just like this, with the whole idea of defining the AI to be the local system’s operating intelligence, but offline-capable. The intent is to keep it completely modular for any future LLM: basically a shell for any LLM. But for that, you have to tweak the system to overcome the nuances that come with models. But it’s doable!
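For what it’s worth, a minimal sketch of what such a model-agnostic shell could look like in Python, assuming a hypothetical `LLMBackend` adapter interface (every name here is illustrative, not from any real library or the poster’s actual project):

```python
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Adapter interface so the shell never depends on one model/vendor."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoBackend(LLMBackend):
    """Stand-in backend for offline testing; a real one would wrap a local model."""
    def complete(self, prompt: str) -> str:
        return f"[model reply to: {prompt}]"

class Shell:
    """The 'shell for any LLM': swap backends without touching shell logic."""
    def __init__(self, backend: LLMBackend):
        self.backend = backend

    def ask(self, user_input: str) -> str:
        # Per-model quirks ("nuances") would be normalized here.
        prompt = user_input.strip()
        return self.backend.complete(prompt)

shell = Shell(EchoBackend())
print(shell.ask("find my vacation photos from 2015"))
```

Swapping `EchoBackend` for a wrapper around any real local model would be the only change needed, which is the whole point of keeping the shell modular.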
OpenAI is making their own. So your wish should come true soon.
I’ll say this short and sweet: Any notion of an AI Operating System for a PC is totally insane - that includes Microsoft
I think maybe we have to separate an AI Operating System and an AI Operating System Operator.
An Operating System helps you interact with hardware, it is not helping you interact with what’s on the screen, you are doing that.
An AI Operating System I consider similarly… It still has a human Operator, only that Operator accesses the model through code.
People be whizzing ahead with the models and ignoring the peripherals…
People assume the models to be the Operators but the Operators are us, the Meta, the abstract reason why the task needs to be done.
CODE is the peripheral of AI… How we present and interact with that is our Interface.
With all due respect, I’m not interested in semantics.
> An Operating System helps you interact with hardware, it is not helping you interact with what’s on the screen…
Have you ever used Windows? Just look at the UX - it goes way beyond hardware interaction.
> … you are doing that.
Of course I’m doing that - I have been doing that for decades.
> An AI Operating System I consider similarly… It still has a human Operator, only that Operator accesses the model through code.
I have been a software engineer for 37 years. I have been implementing specialized desktop AI solutions since gpt-3.5. But that is a far cry from, what you call, an AI Operating System.
> CODE is the peripheral of AI…
CODE is CODE, regardless if it’s AI code or not.
Windows 11 is becoming agentic. That’s what I call an AI Operating System.
I think this actually highlights that “AI Operating System” is being used to mean different things in the same discussion.
I think I was clarifying that however ‘Agentic’ they are, Agents are an interface away and anyone can build them. They aren’t a kernel software layer; they are a conceptual layer that uses code.
Computers are a physical interaction, through a mouse or keyboard or screen. AI is an interaction through code on an abstract layer.
Agreed, but code generated or executed by AI carries delegated agency. The goals, constraints, and stopping conditions are defined elsewhere, even if execution is automatic.
When intent isn’t explicit, users end up operating within the AI’s defaults rather than their own goals. At that point, they aren’t really “operating” the system, they’re adapting to it.
The OP seemed to want their Agency applied rather than have a Software Developer apply it for them. I hope we get more people with that vision on this forum.
1. Create functionality for a before-unknown task.
2. Have it execute appropriately.
3. Have the AI constantly consider your functionality and update it over time, making use of other modules.
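A toy sketch of those three steps as a loop over a module registry. All names here are invented for illustration; in a real system the AI would generate and revise the handler code, rather than the hard-coded stand-ins below:

```python
# Registry of generated task modules: task name -> callable handler.
registry = {}

def create_module(task: str):
    """Step 1: create functionality for a previously unknown task."""
    def handler(payload):
        return f"{task} handled: {payload}"
    registry[task] = handler
    return handler

def execute(task: str, payload):
    """Step 2: execute appropriately, creating the module on first use."""
    handler = registry.get(task) or create_module(task)
    return handler(payload)

def review_and_update(task: str):
    """Step 3: reconsider a module over time and swap in an improved version.
    The 'improvement' is hard-coded here; an AI would propose it."""
    old = registry[task]
    def improved(payload):
        return old(payload) + " (v2, reuses other modules)"
    registry[task] = improved

print(execute("adjust-resources", "gaming detected"))
review_and_update("adjust-resources")
print(execute("adjust-resources", "gaming detected"))
```

The key design point is that step 3 replaces the registered callable in place, so callers never need to know a module was revised.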
My prompt - “UI – Step By Step AutoHotkeys Script Builder to… instantly detect what kind of activity I’m doing like gaming, productivity, or communication—and automatically adjust system resources in real time”

Also very little in the way of details or examples… Some screenshots might help this user.
I’m not saying that you shouldn’t be able to read and modify a little code… I’m just saying…
You don’t have to be a machinist to drive a tractor.
I’m not even claiming it’s commercially viable. I am suggesting that a lot of stuff in this space (the control/orchestration of at least some local context) is not only technically viable but an essential skill, like reading, or tool use, or talking to other people. It’s also, I think, a skill business can’t support.
I am particularly interested as a Developer in simplifying Human/AI/Computer Interface.
I deleted my last post, I don’t like to share my own personal projects out there so easily lol.
But i do think this thread is worth digging deeper into.
I think a big part of being a developer is intentionally moving the boundary between operator and controller.
The goal isn’t to turn everyone into a machinist, it’s to design systems where the machinist knowledge is embedded in the tool itself, so the operator can act effectively without needing to internalize all of it.
That’s where I agree with you that control and orchestration of local context is an essential skill, but I’d frame it as a developer responsibility, not something every user should have to learn directly. Just like reading or tool use, the complexity gets pushed down into the interface over time.
So when I say “someone still needs to know where the controls are,” I don’t mean the user has to. I mean the system has to make those controls explicit, legible, and safe. So that the intent flows cleanly from human → interface → execution without hidden assumptions.
I agree with you on collapsing the operator/controller gap by design, not by making the AI autonomous, but by making the interface do more of the work humans shouldn’t have to.
I’m not claiming that’s commercially viable — just that it’s technically viable and, long term, probably unavoidable.
I get that, I tend to see things a bit differently though.
I see projects more like seeds. I’m very aware that a lot of people here are doing far cooler things with AI and neural nets than I am, and equally I’ve seen threads where people are still wrestling with basic API errors. That spread is kind of the point for me.
I’ve got six or seven threads out there where I’ve been weighing this idea from different angles. For me, everything is a project, and every concept is really just a question I’m trying to answer. Putting things out there early helps me stress-test the idea, see how others interpret it, and sometimes watch it grow into something I wouldn’t have arrived at alone.
So it’s less about showcasing a finished thing, and more about sharing the thinking process while it’s still alive.
That’s fair, and I get that approach.
I think where I’m slightly orthogonal is that I’ve found the interface only really works once there’s something underneath it doing the unglamorous work: intent normalization, policy decisions, arbitration between competing actions, that sort of thing.
I pulled my post mostly because I realized this thread was naturally drifting toward interface semantics, and the thing I’m exploring lives a bit deeper in the stack. I didn’t want to oversimplify it just to keep up with the direction of the discussion.
That said, I agree with you that collapsing the operator/controller gap by design is the right goal. I just tend to come at it from the bottom up rather than the top down.
I think this might be drifting a bit beyond the scope of the thread, but conceptually I don’t see this as unfamiliar territory.
At a basic level, intent normalization and policy constraints often collapse down to something like an allowed_type or capability relation on a module (disable_functions in PHP) - I was doing versions of that 20 years ago in early CLI tree systems in MASM. The mechanics themselves aren’t especially exotic.
```
id, parent_id, type_id, name, description
allowed_types: id, type_a, type_b
rules: id, allowed_type_id, rule
```
Where arbitration comes in is exactly where simple trees stop being enough, and you start needing clearer rules about precedence, scope, and cancellation. I spent quite a bit of time back then exploring branching function trees and how control flows through them, all my code runs recursively through a controlled eval-style execution loop. The CLI version I built could even split branches off into a single ‘executable’ (not yet tried that with my browser version).
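As a rough illustration only, here is a toy version of that kind of controlled recursive execution over an id/parent_id tree with an allowed-types gate, written in Python rather than the MASM/PHP originals described above. Node names and type pairs are invented:

```python
# Nodes mirror the id, parent_id, type_id, name schema sketched above.
nodes = {
    1: {"parent": None, "type": "root",   "name": "session"},
    2: {"parent": 1,    "type": "task",   "name": "organize-files"},
    3: {"parent": 2,    "type": "action", "name": "move-pdfs"},
    4: {"parent": 1,    "type": "action", "name": "rogue-action"},
}

# allowed_types: which parent type may contain (and thus execute) which child type.
# Note ("root", "action") is absent, so node 4 is gated out.
allowed_types = {("root", "task"), ("task", "action")}

def children(node_id):
    return [nid for nid, n in nodes.items() if n["parent"] == node_id]

def run(node_id):
    """Controlled recursive execution: a child runs only if the
    (parent type, child type) pair appears in allowed_types."""
    node = nodes[node_id]
    executed = [node["name"]]
    for cid in children(node_id):
        child = nodes[cid]
        if (node["type"], child["type"]) in allowed_types:
            executed.extend(run(cid))
    return executed

print(run(1))  # → ['session', 'organize-files', 'move-pdfs']
```

This shows the capability-relation mechanics; it deliberately leaves out the harder arbitration questions (precedence, scope, cancellation) that start once branches compete.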
I’m mostly just holding the door open here and seeing where the discussion naturally wants to go.
That’s fair, and I agree with you that none of the individual mechanics are exotic; ie. trees, capability constraints, precedence rules, recursive eval loops, etc. have all existed for a long time.
Where I’ve found things start to diverge a bit now is less in the structure itself and more in the inputs. When intent is coming from a probabilistic system rather than an explicit command, normalization and arbitration stop being purely syntactic problems and start drifting into trust, scope, and reversibility questions.
In older CLI or tree-based systems, the operator’s intent was already well-formed by the time it hit the execution layer. With AI in the loop, a lot of the work shifts toward deciding how confident you are in the intent, how much authority it’s given, and how aggressively it’s allowed to act or unwind.
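One way to make that shift concrete: gate each model-proposed intent on confidence and reversibility before granting it any authority. The thresholds and fields below are invented for illustration, not from any real framework:

```python
# Each intent arrives from the model with a confidence score and
# a flag saying whether the action can be cleanly undone.
THRESHOLD_AUTO = 0.9     # act without asking, if also reversible
THRESHOLD_CONFIRM = 0.6  # act only after explicit user confirmation

def decide(intent):
    """Map a probabilistic intent to an authority level."""
    conf = intent["confidence"]
    reversible = intent["reversible"]
    if conf >= THRESHOLD_AUTO and reversible:
        return "execute"
    if conf >= THRESHOLD_CONFIRM:
        return "ask-user"
    return "reject"

print(decide({"confidence": 0.95, "reversible": True}))   # → execute
print(decide({"confidence": 0.95, "reversible": False}))  # → ask-user
print(decide({"confidence": 0.30, "reversible": True}))   # → reject
```

The point is that irreversible actions never auto-execute regardless of confidence, which is exactly the trust/unwind question that older explicit-command systems never had to ask.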
I think that’s where some of the renewed interest comes from. Not because the primitives are new, but because the assumptions about intent and agency underneath them have changed.
Whether an operating system includes AI is the decision of the vendor (i.e., Microsoft, Apple, etc.). If a user installs AI software on the operating system, that software is not part of the operating system.
Do you agree?
I tend to assume the renewed interest comes less from the primitives changing and more from the fact that we can finally implement this stuff at scale. The discussion is everywhere now — media, tooling, APIs — whereas 20 years ago a lot of it stayed theoretical or niche.
When I was doing this kind of work back then, I personally hit a wall around the time CUDA came out. I didn’t have the right trust levels, access, or infrastructure to really push it further. My mentor was an old-school ASM programmer with a deep interest in geopolitics, so a lot of our conversations were already about power, control, and who gets to wield these systems.
It feels a bit like we’re going through a fractal wave — similar to SIMD in terms of the technical shift, but at a scale that affects everyone. The primitives aren’t new, but their reach and consequences definitely are.
Trust is earned through representation before execution.
