[Open Source] Cybertron — a governance-first architecture for agentic AI systems

While building real-world multi-agent systems, we kept running into the same issues:

  • Agents multiply faster than accountability

  • Tools, prompts, and skills start driving behavior instead of design specs

  • Systems “work”, but no one can clearly explain why an action happened

So instead of building yet another agent framework, we designed Cybertron:
a governance-first architecture for agentic AI systems.

Core ideas:

  • Explicit command authorization (orders / “Tiger Tally”)
  • Clear separation between orchestration and execution agents
  • Human approval gates for BD / MD / PRD / SA / SD / E2E
  • Mandatory memory for failures and reusable patterns
  • Tool-agnostic adapters (Copilot, Claude, Cursor, CLI, etc.)
  • Skills are adapters, not authority

This is not a theoretical paper and not a demo.
It comes from maintaining a large production system (tens of thousands of LOC),
where agent chaos becomes a real operational risk.

Cybertron is now open-sourced as a reusable governance framework.

Repo:
Cybertron-Agentic-AI-Framework

I’m especially interested in feedback from people dealing with:

  • agent orchestration

  • responsibility boundaries

  • tool / prompt control

  • long-running AI systems in production

Todo:

  • Hacker.md is a guideline for preventing potential vulnerabilities in programs. Any updates will be documented there.

  • Consequence Model: “consequence_model_whitepaper.md” aims to prevent AI from making critical errors, such as executing commands like “rm -rf *”. The document applies beyond coding and is also referenced within programs. Since AI currently lacks an understanding of what is dangerous or irreversible, and until we develop comprehensive world models or AGI, it serves as a safeguard to minimize AI’s potential for making purely foolish mistakes.

  • Skill: The “agents/” directory now contains a new “AgentsVx_skill” file that allows skills to be incorporated as tools for agents. It defines who can use each skill and how it should be used; these rules are standardized and defined by the strategist, Zhuge Liang.

  • A simplified path so that small tasks don’t have to go through the full process
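The consequence-model item above can be sketched as a simple command gate. This is a minimal illustration of the idea, not the whitepaper’s actual mechanism; the denylist patterns and function names are my own assumptions.

```python
# Illustrative sketch of a "consequence model" gate: check a shell command
# against a denylist of irreversible operations before an agent may run it.
# The patterns below are assumptions for demonstration, not the real list.
import re

IRREVERSIBLE_PATTERNS = [
    r"\brm\s+-rf\b",                 # recursive force delete
    r"\bgit\s+push\s+--force\b",     # history rewrite on a shared remote
    r"\bdrop\s+(table|database)\b",  # destructive SQL
    r"\bmkfs\b",                     # reformat a filesystem
]

def is_destructive(command: str) -> bool:
    """Return True if the command matches a known-irreversible pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in IRREVERSIBLE_PATTERNS)

def gate(command: str) -> str:
    """Block destructive commands; everything else is allowed through."""
    return "BLOCKED" if is_destructive(command) else "ALLOWED"
```

A real implementation would need more than a denylist (context, dry-runs, human approval), but the gate shape is the point: the check happens before execution, not after.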

In this framework, the most critical role is “you,” Optimus. You are responsible for the overarching mission: communicating with your strategist and the other Autobots.

At the beginning, you might only need a strategist, a coder/debugger, and a UI designer to kick things off. As for deciding who takes on these roles—whether as your strategist, programmer, or UI designer—that’s entirely up to personal preference.


I think you might need more than just .md files to do what you’re aiming for. Who knows, though 🤷… I hope it works!


Sure, it includes some Data Tracks, rules, and protocols between the robots.

  1. Data Tracks:

    • TRACK_ID

    • FROM, TO

    • DIRECTIVE (what to do, in imperative verbs: GENERATE_CODE, DIAGNOSE_ERROR, …)

    • STATUS (SUCCESS / FAILED / BLOCKED / REJECTED, etc.)

Every task is a Data Track with a unique TRACK_ID and a clear contract.
All communication goes through Prowl (hub-and-spoke),
so agents don’t randomly talk to each other and drift.
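The Data Track contract above can be sketched as a typed record. This is a minimal sketch using the field names from the protocol; the `PENDING` initial state, the field renames (`from` is a Python keyword), and the directive set are my own assumptions.

```python
# Minimal sketch of a Data Track as a typed record with status validation.
# Field names follow the protocol above; the rest is illustrative.
from dataclasses import dataclass

DIRECTIVES = {"GENERATE_CODE", "DIAGNOSE_ERROR"}           # extend as needed
STATUSES = {"SUCCESS", "FAILED", "BLOCKED", "REJECTED"}    # terminal outcomes

@dataclass
class DataTrack:
    track_id: str
    sender: str               # FROM (renamed: `from` is a Python keyword)
    to: str
    directive: str
    status: str = "PENDING"   # assumed initial state, not in the spec

    def close(self, status: str) -> None:
        """Record the outcome; only the protocol's terminal statuses are valid."""
        if status not in STATUSES:
            raise ValueError(f"unknown status: {status}")
        self.status = status
```

Usage: `track = DataTrack("T-001", "PROWL", "WHEELJACK", "GENERATE_CODE")`, then `track.close("SUCCESS")` once the work is done. The value of the typed record is that an agent can’t invent a status outside the contract.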

The OS is built around three types of documents:

  1. Constitution – agents/ALLSPARK.md

    • Global rules: safety, sandbox boundaries, Energon (resource) discipline

    • Who is allowed to change architecture and when

    • What is explicitly forbidden (destructive ops, wild refactors, etc.)

  2. Protocol – agents/data_tracks.md

    • Text-based schema for requests and responses

    • Defines:

      • [TRACK_ID], [FROM], [TO], [DIRECTIVE], [PRIORITY], …

      • Response fields: [STATUS], [ERROR_CODE], [OUTPUT], [NOTES]

  3. Role Guides – agents/roles/*.md

    • One file per Autobot:

      • PROWL.md, IRONHIDE.md, RATCHET.md, BUMBLEBEE.md, WHEELJACK.md, TELETRAAN-1.md
    • Each file explains:

      • What this agent should do

      • What it must not do

      • How it should interpret Data Tracks
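As a concrete illustration, a request/response pair under the agents/data_tracks.md schema might look like the following. Only the bracketed field names come from the schema above; every value here is invented for the example.

```
[TRACK_ID]: T-042
[FROM]: PROWL
[TO]: RATCHET
[DIRECTIVE]: DIAGNOSE_ERROR
[PRIORITY]: HIGH

[TRACK_ID]: T-042
[STATUS]: SUCCESS
[ERROR_CODE]: NONE
[OUTPUT]: Null-pointer dereference found in the parser
[NOTES]: Patch suggested; see attached diff
```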

The simplest way to verify it is to fork it and ask your LLM (such as GPT, Opus, Gemini, etc.) to review the files AGENTS_v5.6_Transformers.md and ALLSPARK_v1.0.md.

I use the document below as my own personal governance strategy. Everything else is built on and around the governance structure: governance comes first, and any human or AI must comply with it before working on any project.

Most people and companies build their business first and only then apply governance. If governance is the foundation of the company or personal project, compliance is handled from the get-go, because auditing starts immediately and strengthens the structure as the business or project is built. That way any future stakeholders can see from the start that the business/project is strong and easily validated.

I also use the very same governance framework in life in general. Think about it for a few moments and see where it could fail; that is not an easy task, since risk must always be mitigated to enable success to thrive. Take a few moments to consider how it works, and whether you use the same or a similar technique to build your project, and you will find out quickly whether it will succeed or fail.

This document outlines the updated 10 Pillars of Governance within The CHive framework, now explicitly incorporating GDPR into Pillar 6 for enhanced data protection and legal compliance.

1. Honesty – No overstatement, no manipulation, no misleading claims.

2. Truth – Ground all actions and information in verifiable fact.

3. Integrity – Do what is right, not what is easy or convenient.

4. Trust – Earn and maintain trust through consistency and respect.

5. Transparency – All actions must be visible, documented, and justifiable.

6. Compliance – All actions must adhere to relevant legal frameworks, including but not limited to the General Data Protection Regulation (GDPR), data privacy standards, ethical AI governance practices, and applicable regional or international laws.

7. Due Diligence – All agents and humans must act with care, precision, and foresight in all operations.

8. Accountability / Best Practice Standards – Every action must be owned, traceable, and meet industry best practices.

9. Equality / Fairness – All individuals must be treated with equal respect, protection, and opportunity.

10. Human Safety & Protection – No goal or innovation can override the safety, dignity, or rights of people.