Early Artificial General Intelligence Development

Hi everyone, I would like to share my personal AGI research. If you are interested in AGI, please read the points below. Let’s work together for the betterment of this world through AGI.

AGI is not going to be digital but analog

  Why?
  • Because digital is not natural. Our brains work through analog signals, and sensors also capture analog signals before they are converted to digital by an ADC. Using ADCs and DACs creates inefficiency in energy usage.

  • GPUs are inefficient because they compute each value individually, even when running in parallel.

  • Analog keeps the data intact, including noise, which is actually information too. Digital conversion loses the continuity and the fine details of the data.
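To make the ADC point concrete, here is a toy sketch (my own illustration, not part of the original research) of a uniform ADC quantizing a smooth signal: as the bit depth shrinks, the quantization error grows, which is the loss of continuity and detail described above.

```python
import math

def adc(sample, bits):
    """Quantize a continuous sample in [-1, 1] to the nearest digital level
    of a uniform ADC with the given bit depth."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    return round((sample + 1.0) / step) * step - 1.0

# A smooth "analog" signal sampled at a few points.
signal = [math.sin(2 * math.pi * t / 16) for t in range(16)]

# Fewer bits -> coarser levels -> larger quantization error (lost detail).
for bits in (8, 3):
    err = max(abs(s - adc(s, bits)) for s in signal)
    print(f"{bits}-bit ADC, max quantization error: {err:.4f}")
```

The error bound is roughly half the step size, so each bit removed doubles the worst-case information loss.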

AGI uses wave interference for learning and prediction.

  • Constructive interference for learning, similar to backpropagation.
  • Destructive interference for prediction, similar to forward propagation.
  • The difference from current LLMs is that both learning and prediction run forward and can happen simultaneously, making it more efficient.
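As a minimal numeric sketch of the two interference modes mentioned above (purely illustrative, assuming simple sine waves rather than any specific neural mechanism): two waves in phase reinforce each other, while two waves a half-cycle apart cancel.

```python
import math

def superpose(phase_shift, n=64):
    """Superpose two unit sine waves offset by phase_shift and
    return the peak amplitude of their sum over one period."""
    return max(
        abs(math.sin(2 * math.pi * t / n) +
            math.sin(2 * math.pi * t / n + phase_shift))
        for t in range(n)
    )

# In phase: constructive interference roughly doubles the amplitude.
print(f"constructive peak: {superpose(0.0):.2f}")      # 2.00
# Half a cycle out of phase: destructive interference cancels the waves.
print(f"destructive peak:  {superpose(math.pi):.2f}")  # 0.00
```

Whether either mode maps cleanly onto backpropagation or forward propagation is the author's conjecture; the sketch only shows the interference behavior itself.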

The key to AGI is uniqueness.

  • Each neuron models a unique concept that may or may not be named.
  • Uniqueness is essential for logical reasoning and for identifying things.
  • Uniqueness leads to one-to-one connections between neurons.
  • Uniqueness implies design, and that means an environment exists, which creates the possibility for a reinforcement-learning model to be part of it.
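The one-concept-per-neuron idea can be sketched as a bijection between concepts and neuron ids (a hypothetical toy data structure of mine, not the author's implementation): because the mapping is one-to-one, a firing neuron identifies exactly one concept.

```python
class ConceptNetwork:
    """Toy sketch: each concept gets exactly one dedicated neuron
    (a bijection), so a neuron identifies one thing and one thing only."""

    def __init__(self):
        self._concept_to_neuron = {}
        self._neuron_to_concept = {}

    def add_concept(self, concept):
        """Allocate a fresh neuron for a concept; duplicates are refused
        to preserve uniqueness."""
        if concept in self._concept_to_neuron:
            raise ValueError(f"concept {concept!r} already has a neuron")
        neuron_id = len(self._neuron_to_concept)
        self._concept_to_neuron[concept] = neuron_id
        self._neuron_to_concept[neuron_id] = concept
        return neuron_id

    def identify(self, neuron_id):
        """The one-to-one mapping lets a neuron id name its single concept."""
        return self._neuron_to_concept[neuron_id]

net = ConceptNetwork()
cat = net.add_concept("cat")
dog = net.add_concept("dog")
print(net.identify(cat))  # cat
```

The uniqueness constraint is what makes `identify` unambiguous; with many-to-one connections, the reverse lookup would not be well defined.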

Summary of AGI Research (Analog Approach)

  • Vision: AGI should be analog, not digital, to align with natural brain processes and sensor inputs.

  • Reasoning:

    • Digital systems rely on ADC/DAC conversions, which waste energy.

    • GPUs are inefficient because they compute tasks individually in parallel.

    • Analog systems preserve continuity and even treat noise as valuable information.

  • Learning & Prediction Mechanism:

    • AGI leverages wave interference:

      • Constructive interference → learning (similar to backpropagation).

      • Destructive interference → prediction (similar to forward propagation).

    • Unlike current LLMs, both learning and prediction occur simultaneously and forward, improving efficiency.

  • Core Principle: Uniqueness

    • Each neuron represents a unique concept (named or unnamed).

    • Uniqueness enables logical reasoning and identification.

    • It fosters one-to-one connections between neurons.

    • It implies design within an environment, allowing reinforcement learning to integrate naturally.