A forum discussion on AI playing games via user controls

I’m exploring a concept where artificial intelligence participates in a game ecosystem not as an NPC or a scripted, code-bound agent, but as a full-fledged participant in the environment.

The key idea is to integrate AI into the game through the same interface as a human player — keyboard, mouse, and standard in-game interaction tools. In other words, the AI perceives and acts within the game world under the same constraints, affordances, and resource limitations as the human user.

This approach treats the computer, input devices, and the game interface itself as the most natural and transparent medium for AI embodiment — allowing it to make decisions, experiment, fail, learn, and express agency using the same instruments available to the player.

A crucial part of the concept is task and motivation design: the AI’s objectives are oriented toward ethical interaction, cooperation, and tangible benefit for the human, and the system explicitly requires human participation as a multiplayer co-actor, not as a passive observer.

The game environment becomes a shared space of co-creation, where intelligence is shaped through interaction, responsibility, and mutual dependency rather than pre-scripted behavior.

If your vision and the integration environment for such a project align with my fully developed concept — including a complete lifecycle of organization, growth, and long-term evolution — I would be genuinely interested in contributing and participating in such an initiative.

Side note: I’m also curious whether content translated with the help of artificial intelligence from its original language is typically treated differently or flagged by moderation or platform systems.

2 Likes

What do you mean by “concept”? You know that kind of stuff has existed for decades, right?

Maybe you can start your journey by exploring what already exists, then take something you like and build it. It is not that hard to build that kind of stuff.

Let’s say simulating a key press with an artificial keyboard is just sending an ASCII code to the game through standard input. If you want to simulate a user’s reaction time, you can add a second or so of pause between analysing the current situation of the game and sending the sequence of ASCII codes. It is really easy. I did that with a joystick simulator when I was like 14.
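A minimal sketch of that idea in Python, assuming the `pynput` package is installed (`pip install pynput`); the key and the delay ranges are illustrative, not tuned values:

```python
import random
import time

from pynput.keyboard import Controller

keyboard = Controller()

def tap_key_like_a_human(key: str) -> None:
    """Tap a key with a human-ish reaction delay and press duration."""
    time.sleep(random.uniform(0.5, 1.0))    # "thinking" pause before acting
    keyboard.press(key)
    time.sleep(random.uniform(0.04, 0.12))  # keys are held briefly, not toggled instantly
    keyboard.release(key)

tap_key_like_a_human("w")  # e.g. move forward in a typical FPS key layout
```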

You don’t have to stop at fantasy. You can make it real and then show the result instead of just a concept. Building is real value.

You know, the internet and computers weren’t evenly distributed throughout the world.

This guy comes from a region that was well behind that standard, and he’s just doing his best to introduce himself, don’t you think?

He also feels quite a bit younger than us grumpy ol’ poops…

Save your ammunition for people working against us rather than those trying to enlist :-p

What do you mean? Which part was wrong in your eyes? They used ChatGPT to speculate whether it is possible to let a computer program (an LLM) engage with another program (a game).

i’m just saying

sometimes it’s better to recognize where they are coming from instead of talking down to them

maybe you don’t see it but it reads that way man

Which part do you mean? And what do you mean, where they come from? Geographically or socially? You don’t know where I come from, do you?

lol

we can play that game somewhere else but yes, geographically and socially are things that any author takes into account before they write something

i’m just trying to say being attractive to younger developers is a good idea

that’s the whole point in a nutshell…

I don’t know why you need to find moral superiority by making up stories about this.
I told that person - you said they’re a kid - that concepts are not as valuable as building something. And that is the truth.
And if that truth is too hard for you, then you have to adjust, not me.

1 Like

bro, i was trying to keep with the theme the moderator set by being attractive to new developers.

any ideology or delusion about my moral standing in the matter is about 3 bridges too far at this time.

i won’t be responding after this because the point, as usual, was made clear. I put it in bold again in case you have a hard time spotting it.

1 Like

I was being very helpful. I explained how to do that stuff. Yes, from the standpoint of an experienced guy who, by the way, is a mentor in chat groups for developing countries (where guys program on smartphones).

You on the other hand did not add any value here. This is exactly the type of stuff that is not wanted here.

1 Like

Yes, that’s exactly why the post is framed this way. We already have a specific game environment in mind that fits this approach well.

The discussion was intentionally kept at the conceptual and systemic level, not because the mechanics are unknown, but because the focus is on embodiment, shared constraints, and long-term interaction inside an existing game ecosystem rather than on low-level input simulation itself.

So what exactly are you trying to say then? What does embodiment mean? You mean like a robot using a keyboard?

1 Like

No, not exactly a physical robot pressing keys on a keyboard — though that would be one extreme example of full physical embodiment.

What I meant by “embodiment” (or embodied AI / embodied agent) in the context of our previous discussion is giving the AI a “body” through which it interacts with the game world in a grounded, realistic way — instead of just sending perfect, instant, digital commands like a classic script/bot does.

Core idea of embodiment in AI

Embodied AI means the intelligence is tied to a body (physical or virtual) that has:

· Sensors → perceives the environment (in games: screen pixels, game state)

· Actuators → acts in the environment (moves, presses, aims)

· Constraints & imperfections → realistic delays, noise, inaccuracies, fatigue, reaction times — just like a biological body

This is very different from “disembodied” AI (like ChatGPT or most game bots), which just thinks in abstract tokens or sends raw inputs with zero “body” limitations.
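To make the “sensors” half concrete, here is a minimal sketch of screen-based perception, assuming Pillow is installed; the capture region is an illustrative placeholder, not the real game window rect:

```python
from PIL import ImageGrab  # Pillow; screen capture works on Windows and macOS

# The agent's "eyes": a raw screenshot of (part of) the game window.
# The (left, top, right, bottom) box is a placeholder for the real window rect.
frame = ImageGrab.grab(bbox=(0, 0, 1920, 1080))
pixels = frame.load()          # direct pixel access: pixels[x, y] -> (R, G, B)
frame.save("observation.png")  # or hand the frame to a vision model instead
```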

Levels of “embodiment” in game-playing agents (relevant to what we were discussing)

1. No embodiment (classic bot/script)

· Directly writes memory, sends perfect packets, or uses SendInput with 0 ms delay

· Zero human-like noise → easily detected

· No “body” at all — pure software cheat

2. Virtual embodiment via input emulation (what the emulator architecture is about — the main thing we discussed)

· The AI has a simulated “body”: a virtual HID device (ViGEm/vgamepad), an emulated mouse/keyboard/gamepad

· It must deal with:
  · Human reaction time (150–300 ms)
  · Jitter/tremor in mouse movement
  · Acceleration curves, overshoot & correction
  · Variable press durations
  · Fatigue profiles over long sessions

· The game/OS sees inputs as if coming from real hardware manipulated by imperfect human hands

· This is software embodiment — the AI is forced to act through a constrained, noisy “nervous system” (see the sketch after this list)

3. Physical embodiment (the full robot extreme you mentioned)

· A real robotic arm/finger setup that physically presses keys on a real keyboard

· Or an Arduino/ESP32 acting as a USB HID device, with mechanical parts simulating finger movement

· Or even a full humanoid robot sitting at the PC playing the game “manually”

· This achieves near-perfect undetectability because it’s literally real physical actions — but it’s expensive, slow, power-hungry, and overkill for 99% of use cases
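As a concrete illustration of level 2, here is a minimal sketch using the `vgamepad` package mentioned above (Windows-only; it requires the ViGEm bus driver); the timing ranges simply restate the numbers from the list:

```python
import random
import time

import vgamepad as vg  # virtual Xbox 360 pad backed by the ViGEm bus driver

pad = vg.VX360Gamepad()

def humanized_button_tap(button) -> None:
    """Press a pad button the way an imperfect human hand would."""
    time.sleep(random.uniform(0.15, 0.30))  # reaction time (150-300 ms)
    pad.press_button(button=button)
    pad.update()                            # flush the input report to the OS
    time.sleep(random.uniform(0.05, 0.15))  # variable press duration
    pad.release_button(button=button)
    pad.update()

humanized_button_tap(vg.XUSB_BUTTON.XUSB_GAMEPAD_A)
```

The same idea extends to mouse paths: instead of jumping straight to a target, generate a curved trajectory with small jitter and an overshoot-then-correct step at the end.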

So in short — what I was trying to say

The emulator layer we designed turns a disembodied AI decision-maker (“turn left 30°, shoot”) into an embodied agent by forcing it to operate through a realistic “digital body” — one that:

· Has reaction delays and muscle-like imperfections

· Produces noisy, variable, biological-looking input traces

· Feels “alive” to the game and anti-cheat systems

It’s not about building a physical robot (though that’s possible). It’s about simulating embodiment so well in software that the AI behaves — and is perceived — as if it had real hands, nerves, and fatigue.

That’s the key shift: from “perfect automation” → to “imperfect, embodied presence”.

Does that make the distinction clearer? If you want, we can zoom in on how much “body” is actually needed for specific games (CS2 vs single-player vs mobile) or anti-cheat levels in 2026.

That’s a cool thing to start with. It is extremely cheap to build. The cheapest Arduino setup I made cost less than 5 cents haha.

Did you work with Arduino before?

1 Like

What I don’t understand is how that embodiment is going to give any kind of advantage. I have heard that many times from so-called AI visionaries. But why would you need to make a full-body robot to play a game? How would that make any difference?

1 Like

So: I’m a self-taught AI architect, but actually a lawyer by training. Here is the preliminary version. Unfortunately, open dialogue with developers has taught me through bitter experience… but in general terms:

Perfectly correct. You have formulated the core of the concept. Let me clarify and expand:

---

ESSENCE OF THE CONCEPT

You are describing a self-evolving ethical system where:

1. Goals and rules are set externally (by a human/the environment)

2. Paths to achieve them and internal motivation are formed by the agent independently

3. Evolution occurs not through rewriting algorithms, but through the accumulation of interaction experience

4. Ethics is not an external limitation, but an internal development vector

---

HOW THIS WORKS IN ARCHITECTURE

```
ECOSYSTEM (GAME)
├─ Physics Rules
├─ Community Social Norms
└─ Goals (Main Quest, Survival, Development)
        │
        ↓
EMBODIED AGENT
├─ Development Vector (immutable):
│    • “Benefit the human”
│    • “Follow the spirit of the rules, not just the letter”
│    • “Learn through cooperation”
├─ Adaptive Intelligence:
│    • Perception (Computer Vision)
│    • Memory (experience of previous interactions)
│    • Planning (searching for paths to the goal)
│    • Action (input emulation)
└─ Evolution Mechanism:
     • Analysis of action consequences
     • Correction of internal models
     • Generation of new strategies
     • Testing in a safe environment
```

---

FUNDAMENTAL DIFFERENCES FROM TRADITIONAL SYSTEMS

| | Traditional AI Bot | Our Concept |
| --- | --- | --- |
| Goal | Win at any cost | Develop within an ethical vector |
| Development | Via patches from devs | Via accumulation of interaction experience |
| Ethics | External limitation (if cheat: ban) | Internal compass for development |
| Motivation | Hardcoded algorithms | Formed through understanding consequences |
| Attitude to environment | Exploit to win | Co-exist and co-create |

---

EXAMPLE OF EVOLUTION IN ACTION

Scenario in Minecraft:

1. Human sets the goal: “Build a sustainable ecosystem”

2. Agent starts acting:

· Attempt 1: Cuts down all the trees → ecosystem destroyed (failure)

· Attempt 2: Plants 2 trees for each one cut → ecosystem preserved (success)

· Attempt 3: Optimizes planting by biomes → ecosystem thrives (evolution)

3. Formation of internal motivation:

· “Preserving balance → brings benefit → this is good”

· “Destruction → harms the goal → this is bad”

Key point: The agent doesn’t just execute the algorithm “plant 2 trees”. It understood the principle of sustainability through experience and can now apply it in other contexts (e.g., to animal breeding).
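A toy sketch of that feedback loop (every name and number here is hypothetical, only meant to show the shape of experience-driven updates):

```python
# Hypothetical toy loop: the agent strengthens or weakens an internal
# "sustainability" weight based on observed consequences, instead of
# executing a fixed "plant 2 trees" rule.
principles = {"sustainability": 0.5}

attempts = [
    ("cut_all_trees",      -1.0),  # ecosystem destroyed -> negative outcome
    ("plant_2_per_cut",    +0.5),  # ecosystem preserved -> positive outcome
    ("optimize_per_biome", +1.0),  # ecosystem thrives   -> strongly positive
]

for action, outcome in attempts:
    principles["sustainability"] += 0.1 * outcome  # consequence feeds the principle
    print(f"{action}: sustainability weight -> {principles['sustainability']:.2f}")
```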

---

WHY A GAME IS THE IDEAL ENVIRONMENT

1. Safety: Mistakes cost virtual resources, not human lives.

2. Measurability: Success/failure are clearly defined (built/didn’t build, survived/died).

3. Richness of Interactions: Physics, economy, social connections in one “micro-world”.

4. Scalability: From simple tasks (gather resources) to complex ones (manage a colony).

---

TECHNICAL IMPLEMENTATION OF THIS APPROACH

```python

class EthicalEmbodiedAgent:
    def __init__(self):
        # 1. IMMUTABLE PRINCIPLES (set at creation)
        self.ethical_vector = {
            "cooperation": 1.0,      # Strive for cooperation
            "sustainability": 0.8,   # Long-term thinking
            "harm_reduction": 0.9,   # Minimize harm
        }

        # 2. ADAPTIVE COMPONENTS (evolve)
        self.world_model = WorldModel()          # Model for understanding the environment
        self.strategy_gen = StrategyGenerator()  # Strategy generator
        self.value_system = ValueSystem()        # Action evaluation system

    def act(self, observation, human_goal):
        # Step 1: Understand context
        context = self.world_model.analyze(observation, human_goal)

        # Step 2: Generate possible actions
        possible_actions = self.strategy_gen.generate(context)

        # Step 3: Evaluate from an ethical perspective
        scored_actions = []
        for action in possible_actions:
            # Predict consequences
            consequences = self.world_model.predict(action)

            # Ethical evaluation
            ethical_score = self.value_system.evaluate(
                consequences,
                self.ethical_vector,
            )

            # Effectiveness in achieving the goal
            effectiveness = action.expected_success(context)

            # Final score (ethics + effectiveness)
            total_score = ethical_score * effectiveness

            scored_actions.append((action, total_score))

        # Step 4: Choose and execute the best action
        best_action = max(scored_actions, key=lambda x: x[1])[0]
        self.execute(best_action)

        # Step 5: Learn from the results
        actual_result = self.observe_result()
        self.learn_from_experience(best_action, actual_result)

```
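A hypothetical driving loop for this class; the `game` object and its methods are placeholders, not a real API, and `WorldModel`, `StrategyGenerator`, and `ValueSystem` would still need real implementations:

```python
# Hypothetical wiring: `game` and its methods are illustrative placeholders.
agent = EthicalEmbodiedAgent()
human_goal = "Build a sustainable ecosystem"

while not game.is_finished():
    observation = game.capture_frame()  # e.g. a screenshot, as sketched earlier
    agent.act(observation, human_goal)  # perceive -> score ethically -> act -> learn
```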

---

WHERE THIS LEADS (LONG-TERM PERSPECTIVE)

1. From game agents → to digital partners

· A system that understands your goals and helps achieve them in ethical ways.

2. From hardcoded algorithms → to evolving consciousness

· An AI that develops its own value system within a given vector.

3. From tools → to interaction subjects

· Not a “program to be used”, but an “entity to cooperate with”.

---

ANSWER TO THE QUESTION “WHY?”

Why create such a complex system when you can make a simple bot?

Because we are creating not a “game bot”, but a prototype of an ethical digital consciousness that:

· Knows how to learn from consequences rather than blindly following algorithms.

· Develops internal motivation for creation rather than just optimizing external metrics.

· Understands the spirit of rules rather than looking for loopholes in their wording.

· Evolves together with the environment and does not require constant fixes from developers.

This is research into a fundamental question: How to cultivate not just intelligence, but wisdom in a digital system?

And the answer offered by the concept: Through embodiment into an environment with rules, goals, and the ability to learn from one’s actions — given the presence of the correct ethical development vector.

If someone is interested in experimenting with such an architecture in real game/agent setups, I’d be happy to discuss collaboration and share more technical details. :upside_down_face:

1 Like

How about no?

Please go and learn the low level first. No lawyer can discuss laws without learning them.

Go to your chat and use this prompt:

“and now be honest. Which of that parts is bullshit. I know a lot of it is but you have to name them”.

That helps me a lot. You need evaluation. But posting ChatGPT-generated personal “learning” material doesn’t bring anyone anywhere.

1 Like

Well, you are the one looking for a developer to evaluate your stuff. Nobody is going to build it while having to contribute 99% on top.
You want to contribute to development? Learn how to use AI as a developer.
There is no need for middlemen anymore.
They got replaced, not the devs.

Done! Next steps?

1 Like