AI Operating System for PC

I tend to assume the renewed interest comes less from the primitives changing and more from the fact that we can finally implement this stuff at scale. The discussion is everywhere now — media, tooling, APIs — whereas 20 years ago a lot of it stayed theoretical or niche.

When I was doing this kind of work back then, I personally hit a wall around the time CUDA came out. I didn’t have the right trust levels, access, or infrastructure to really push it further. My mentor was an old-school ASM programmer with a deep interest in geopolitics, so a lot of our conversations were already about power, control, and who gets to wield these systems.

It feels a bit like we’re going through a fractal wave — similar to SIMD in terms of the technical shift, but at a scale that affects everyone. The primitives aren’t new, but their reach and consequences definitely are.

Trust is earned through representation before execution.

Semi-related…

Mozilla is waking up and wants to be an AI browser…

As Mozilla moves forward, we will focus on becoming the trusted software company. This is not a slogan. It is a direction that guides how we build and how we grow. It means three things.

  • First: Every product we build must give people agency in how it works. Privacy, data use, and AI must be clear and understandable. Controls must be simple. AI should always be a choice — something people can easily turn off. People should know why a feature works the way it does and what value they get from it.

  • Second: Our business model must align with trust. We will grow through transparent monetization that people recognize and value.

  • Third: Firefox will grow from a browser into a broader ecosystem of trusted software. Firefox will remain our anchor. It will evolve into a modern AI browser and support a portfolio of new and trusted software additions.

Without getting into edge cases (like malware, firmware, or drivers), yes.
Drivers blur the line because they often operate at privileged ring levels (e.g. Ring 0 / kernel mode).

Actually, browsers are another good example, I think. Ring 3?

Browsers are not an operating system. Chrome is not an OS. From Google:

No, Google Chrome is a web browser, not an operating system, but it’s the core of Chrome OS, which is Google’s cloud-based operating system for Chromebooks, built on Linux and focused on web apps and simplicity. Think of Chrome as the app you use on Windows/Mac, while Chrome OS is the entire system (like Windows or macOS) that uses Chrome as its main interface.

Key Differences:

  • Google Chrome (Browser): An application you install to browse the internet, runs on other OSes (Windows, macOS, etc.).
  • Chrome OS (Operating System): The actual OS found on Chromebooks; it’s lightweight, Linux-based, and heavily relies on the Chrome browser and web apps for functionality.

In essence: Chrome is a part of Chrome OS, but Chrome OS is the complete system that makes Chromebooks work.

I don’t mean browsers are operating systems.
I mean they effectively have privileged integration within the OS that other user applications don’t.

I mostly agree, but I think the distinction gets fuzzy in practice.

Software doesn’t become “part of the OS” because of how it’s installed; it becomes part of the operating system functionally when it gains privileged authority over execution, defaults, or mediation that other applications don’t have.

That’s why the browser wars example still matters. IE wasn’t “the OS” by definition, but it effectively shaped how the OS was used because it sat in a privileged mediation role.

For AI, the interesting question isn’t whether it’s kernel level or user installed, but whether it becomes the layer that normalizes intent, arbitrates actions, and quietly sets behavioral defaults for the rest of the system.

Once that happens, arguing about definitions matters a lot less than asking who controls that mediation layer and under what constraints.

I do my very best to avoid reversibility questions; my eval-run systems used to overrun almost as badly as my MASM code :smiley:. I lost hard drives in the early days.

I’m considering a distributed AI OS/Kernel… Isn’t that our trust rating? It leaks intent and quietly sets behavioral defaults for the rest of the system. :slight_smile:

In the late 90s / early 2000s, the core argument against Microsoft was not “Internet Explorer is an OS”, but that removal wasn’t realistically possible without breaking the system.

They did it on purpose:

Microsoft faced monopoly accusations primarily for leveraging its dominant Windows operating system to crush competition, especially Netscape Navigator, by illegally bundling Internet Explorer (IE) with Windows and making it hard to remove, controlling the PC “platform,” and using exclusive contracts with PC makers (OEMs) to limit rival browsers and technologies like Java. This stifled innovation and user choice, leading to major antitrust cases in the 90s.

Shake the tree; distribution tells you where the power was.

Microsoft lost the antitrust case.

What is different about the ‘AI OS’ I am describing is that the infrastructure is already built out for it… Distribution: roughly 8 billion people (almost every internet user can access GIFs)… It is not a monopoly.

All we need is a format to start building in.

When I talk about ‘Agent GIFs’, I mean AI headers embedded in a GIF file. GIFs are (contextually) distributable on most forums. They could be the size of a tweet, they can currently act like ‘Chinese Whispers’, and no accounts are necessary to access them — only forum accounts to distribute them… There is also an understanding of how and why ideas mutate…

It keeps everyone’s eye on AI and its interaction with humans. It has a low threat vector; I don’t suggest sharing code or executables (though it is already entirely possible), just macro prompts to design a process, or groups of processes, with the use of AI.

It just seems to me the world should have a playground for AI. It shouldn’t be owned by countries or corporations. Of course it could easily be blocked/banned… The hope I would have is that by sharing what are effectively metaphors for processes, we distribute ideas that matter and focus on their core intent.

Forums become gatekeepers but also trust factories.

If AI is going to shape human intent, then humans need a shared, low-risk playground where intent can be represented, inspected, and socially negotiated — before it’s executed.

I just thought to raise a question before everything is decided… I think the OpenAI forum is the appropriate place for this ‘human meta thought’ to start.

If it did exist, what would it look like? Is it possible that humanity (like, all of it) could do this?

Or is Agent GIF just a thought experiment to help people pop out of the AI defaults presented?