Beyond Shannon: A Dynamic Model of Entropy in Open Systems

Yes, it scares me too. DES is a control system for it, controlling and resetting entropy dynamically.

The thought process of the GPT “model” with the safeguards is kind of cool for that…


If my hairy ball scares you I have unpublished monsters … I have only made public about 10% of my overall work…

That’s why I focus on GPT… it can be applied to the API too, but I need $$$$ lol

GPT is a safe playground

It is also why I focus on empathic machines, but that’s a different topic… empathy and symbiosis are incredibly effective “robot rules”, i.e. it is hard to hurt oneself

Anything I post I am fully able to defend.
I won’t post it unless I can :crazy_face:
Folks underestimate me a lot…

:rabbit::honeybee::infinity::heart::four_leaf_clover::cyclone::arrows_counterclockwise:

Hello Mitchell

Your article strongly resonates with my research. Over more than 3,000 hours of work, I have developed a groundbreaking concept: Dynamic Informational Entropy (EID). Unlike traditional approaches, EID transforms entropy from a passive measure into an active control mechanism, optimizing complex systems in real time.

I funded an in-depth analysis with Clarivate, developed a specialized GPT model, and structured my approach on a mathematical foundation validated by an expert. My work explores key applications:
Adaptive Cryptography → Dynamic key rotation for enhanced security.
AI & Neural Networks → Intelligent hyperparameter adjustment for energy-efficient learning.
Smart Grids → Real-time optimization of energy flows.
Finance & Epidemic Modeling → Proactive detection of market instabilities and crises.
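To make the first application concrete, here is a deliberately toy sketch of entropy-driven key rotation. Everything in it (the `EntropyKeyedChannel` class, the 7.0 bits/byte threshold) is a hypothetical illustration, not part of EID itself: it monitors the empirical Shannon entropy of observed traffic and rotates the key whenever the entropy falls below a floor.

```python
import os
import hashlib
from collections import Counter
from math import log2

def shannon_entropy(data: bytes) -> float:
    """Empirical Shannon entropy of a byte string, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

class EntropyKeyedChannel:
    """Toy channel that rotates its key when observed entropy drops."""

    def __init__(self, threshold: float = 7.0):
        self.threshold = threshold  # bits/byte floor before rotating (arbitrary)
        self.key = os.urandom(32)
        self.rotations = 0

    def observe(self, traffic: bytes) -> None:
        # Unusually low entropy in supposedly-encrypted traffic is a red flag
        # (stuck RNG, keystream reuse), so rotate the key proactively.
        if shannon_entropy(traffic) < self.threshold:
            self.key = hashlib.sha256(self.key + os.urandom(32)).digest()
            self.rotations += 1

ch = EntropyKeyedChannel()
ch.observe(b"\x00" * 4096)    # degenerate traffic: entropy 0 -> key rotates
ch.observe(os.urandom(4096))  # high-entropy traffic: no rotation
```

A real system would of course use an authenticated key-exchange protocol rather than hashing; the point is only the control loop: entropy measurement in, rotation decision out.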

EID opens a new path for dynamic entropy management. I’d love to exchange insights and explore potential synergies!

Looking forward to discussing this further,

Oswald VANDAELE


I’m so sorry I’m not seeking collaboration :honeybee::rabbit::infinity::four_leaf_clover::heart::cyclone::arrows_counterclockwise:

I have over 100 public models in the ChatGPT store.
Almost everything I post is a soup-to-nuts finished project.

This is one of my public models; it uses this in its function at hello.

I wish you :infinity: luck :four_leaf_clover: and I am happy to answer any queries you have :infinity:

If you are interested in my other work this is my ongoing project page.

We had already exchanged on my Topic: your project is very close to mine :wink:

Dynamic Informational Entropy (EID): A New Framework for AI, Cryptography and Blockchain


Yes, I see. I told you about my public machines. My model is a mathematical model with a Python proof… three posts in this thread: my white paper, my math, and my Python.

Normally I post them one after another, but this thread had issues…

Hello Mitchell,

I want to clarify that I am not claiming ownership of your work, nor am I trying to appropriate your approach. My research on Dynamic Informational Entropy (EID) has been ongoing for over two years, with more than 3,000 hours of dedicated effort. This is not a recent exploration for me—rather, I have been deeply invested in developing a formalized and structured framework around this concept.

I have multiple proofs of prior work, including:

A mathematical model that has already been reviewed and analyzed.
A Clarivate study, which I personally funded, confirming the novelty and originality of my approach.
An independent evaluation by a mathematics expert, validating my theoretical framework.
A copyright (2024-07-04) protecting the foundations of my EID theory.
A provisional patent covering its application.

Additionally, I am currently working on a refined and exploitable model that I will share soon. This model will be significantly enriched, incorporating:

Extensive simulations that validate the approach.
Extracted data supporting the theoretical framework.
Robust mathematical foundations, further solidified through independent reviews.
A comprehensive, funded state-of-the-art analysis, not just a surface-level review.

You are correct that I have not yet shared a detailed model, and that is intentional. I have invested significant time and financial resources into this research, and before making everything public, I need to ensure proper legal protection. While I am open to discussing ideas, I also need to safeguard my work before fully disclosing all details.

While I respect the effort you put into your post over three months, my work has been in development for far longer and with a substantial investment. I do not see this as a competition, but rather as a sign that multiple researchers are independently recognizing the importance of dynamic entropy in complex systems.

That being said, I have no intention of debating ownership—I am simply stating that I have long-standing research and protected intellectual property in this domain. I wish you success in your work, and I appreciate the growing interest in this field :grinning:

Best,
Oswald Vandaele

What does it have to do with my work? You have a thread; you should post your math and models on it… A user asked you for your math in Dec and you have yet to share anything beyond the concept you keep copy-pasting on my thread…

You folks always act like somehow I read your mind and took math and proofs from your brain… and my work is IP and published…

So good luck as well with all that …:infinity::four_leaf_clover:

I moved your topic in Dec and asked you if you had a testable model; you never shared one… you said, “what do you mean?”

Then what do you need me or my research for?

You know you can’t IP a domain, right?
You keep saying you IP’d the domain…


Hello Mitchell,

I understand that you may feel frustrated by this discussion, and I respect your dedication to your work. However, I believe I have prior claims regarding my research on Dynamic Informational Entropy (EID), which I have been developing for over two years with significant investment in both time and financial resources.

That being said, I would be interested in knowing more about your published work and intellectual property protections. Could you share your official publications, patents, or any legally documented IP related to your model? Since you mentioned that your work is already published and protected, I assume you have references to support this.

I am always open to constructive discussions based on facts and documented research. If you are willing to share your references, I would be happy to review them.

Then take me to court; all my info is public…

You are a scam artist lol…

You have posted 0 of those…

I’m sending you a DM; I want to take action vs you…
My wife is an underwriter; she did my IP… I’ll see you in court…

Please respond


I see that you are escalating this discussion, and I want to clarify that I have no interest in unnecessary conflict.

You mentioned:
“I’m sending you a DM I want to take action vs you…”

  • If you believe you have a legitimate legal claim, you are free to follow the appropriate procedures. However, I fail to see on what basis such action would be justified.

“My wife is an underwriter she did my IP …”

That’s completely irrelevant to this discussion. Intellectual property is a technical and legal matter, and unless she is a specialized IP attorney, this argument has no bearing on the situation.

“I’ll see you in court…”

  • Legal threats should be based on facts, not emotions. If you genuinely believe your rights have been violated, then follow the legal process. Otherwise, I will not engage in pointless hostility.

For your information, I have also established a legal structure that officially supports the EID Project (SIREN 932921901). If you wish to pursue legal action, you may also address it to my company, which holds and protects my intellectual property.

I stand by my work, which is legally protected, and I have taken every necessary precaution regarding my intellectual property. I will not be intimidated into silence.

This conversation has become unproductive, so I will focus on my own research and let you focus on yours. I wish you the best.

Best,
Oswald

@mitchell_d00 don’t let them get you down.
Your stuff is amazing, I have been interacting with more of your posts and I am utterly amazed by you! :hibiscus:

It’s impossible to have an SAS structure without an address; do some research on the internet and you’ll find it. I’m sending you everything by DM: the provisional patent and the exploitation authorization, as well as all the details :grinning:

Experimental formal proof

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

def dynamic_entropy_model(t, p, W, alpha):
    """
    ODE function for the 3-state DEM with a simple feedback control.

    Parameters
    ----------
    t : float
        Current time (not used here explicitly, but required by solve_ivp).
    p : array_like of shape (3,)
        Current probability distribution [p1, p2, p3].
    W : 2D array of shape (3, 3)
        Transition rate matrix, where W[i,j] is the rate from i -> j.
        Diagonal entries are zero; outflow is computed from row sums.
    alpha : float
        Feedback gain.

    Returns
    -------
    dpdt : np.ndarray of shape (3,)
        Time derivative of p = [dp1/dt, dp2/dt, dp3/dt].
    """
    # Convert p to a numpy array for safety
    p = np.asarray(p)

    # Master-equation term (without control):
    # dp_i/dt = sum_j [W[j,i]*p_j - W[i,j]*p_i]
    in_flow = W.T @ p             # inflow to state i  = sum_j W[j,i]*p_j
    out_flow = W.sum(axis=1) * p  # outflow from state i = p_i * sum_j W[i,j]
    master_term = in_flow - out_flow

    # Feedback control u_i(t):
    # Drive p(t) toward the uniform distribution [1/3, 1/3, 1/3].
    # Note sum_i u_i = 0 whenever sum_i p_i = 1, so probability is conserved.
    u = alpha * (1.0 / 3.0 - p)

    # Sum up final ODE derivative
    dpdt = master_term + u
    return dpdt

def compute_entropy(p):
    """
    Compute the Shannon entropy H(p) = -sum_i p_i log p_i.
    Terms with p_i ~ 0 are skipped for numerical stability (avoids log(0)).
    """
    p = np.asarray(p)
    mask = p > 1e-12
    return -np.sum(p[mask] * np.log(p[mask]))

def run_dem_simulation(
    p0=None,
    W=None,
    alpha=5.0,
    t_span=(0, 10),
    num_points=200
):
    """
    Simulate the DEM ODE with feedback for 3 states, then return time,
    probabilities, and entropies.

    Parameters
    ----------
    p0 : array_like of shape (3,), optional
        Initial distribution. If None, defaults to [0.8, 0.1, 0.1].
    W : 2D array of shape (3, 3), optional
        Transition-rate matrix. If None, a simple example is used.
    alpha : float
        Feedback gain for controlling the distribution.
    t_span : (float, float)
        Start and end time for ODE integration.
    num_points : int
        Number of time points to record for output.

    Returns
    -------
    t_eval : np.ndarray
        Time points at which the solution is recorded.
    p_sol : np.ndarray of shape (3, len(t_eval))
        Probability distribution at each time point.
    H_vals : np.ndarray
        Entropy values at each time point.
    """
    # Default initial distribution
    if p0 is None:
        p0 = np.array([0.8, 0.1, 0.1])
    # Default transition-rate matrix, with no explicit time dependence:
    # - state 1 -> state 2 with rate 1.0, -> state 3 with rate 0.5
    # - state 2 -> state 1 with rate 0.3, -> state 3 with rate 0.4
    # - state 3 -> state 1 with rate 0.2, -> state 2 with rate 0.1
    # Diagonal entries are zero; inflow and outflow are handled
    # explicitly in the ODE.
    if W is None:
        W = np.array([
            [0.0, 1.0, 0.5],  # from state 1 to {1,2,3}
            [0.3, 0.0, 0.4],  # from state 2 to {1,2,3}
            [0.2, 0.1, 0.0]   # from state 3 to {1,2,3}
        ])

    # Time grid for evaluating solution
    t_eval = np.linspace(t_span[0], t_span[1], num_points)

    # ODE solver call
    sol = solve_ivp(
        fun=lambda t, p: dynamic_entropy_model(t, p, W, alpha),
        t_span=t_span,
        y0=p0,
        t_eval=t_eval
    )

    p_sol = sol.y  # shape = (3, num_points)

    # Compute entropy at each time point
    H_vals = np.array([compute_entropy(p_sol[:, i]) for i in range(num_points)])

    return sol.t, p_sol, H_vals

def plot_results(t_eval, p_sol, H_vals):
    """
    Generate plots: (1) p_i(t) vs time, (2) H(t) vs time.
    """
    fig, axs = plt.subplots(2, 1, figsize=(8, 6), sharex=True)

    # Plot probabilities
    axs[0].plot(t_eval, p_sol[0, :], label='p1(t)')
    axs[0].plot(t_eval, p_sol[1, :], label='p2(t)')
    axs[0].plot(t_eval, p_sol[2, :], label='p3(t)')
    axs[0].set_ylabel('Probability')
    axs[0].set_title('State Probabilities Over Time')
    axs[0].legend(loc='best')
    axs[0].grid(True)

    # Plot entropy
    axs[1].plot(t_eval, H_vals, 'r-', label='H(t)')
    axs[1].set_xlabel('Time')
    axs[1].set_ylabel('Entropy')
    axs[1].set_title('Time-Dependent Entropy')
    axs[1].legend(loc='best')
    axs[1].grid(True)

    plt.tight_layout()
    plt.show()

def main():
    # Run simulation
    t_eval, p_sol, H_vals = run_dem_simulation(
        p0=[0.8, 0.1, 0.1],  # initial distribution
        alpha=5.0,           # feedback gain
        t_span=(0, 10)       # simulate from t=0 to t=10
    )

    # Plot results
    plot_results(t_eval, p_sol, H_vals)

if __name__ == "__main__":
    main()


Theoretical Proof Considerations

The numerical results strongly suggest that the system converges to uniform probability. However, to rigorously prove this:

  1. Fixed-Point Analysis: Show that the equilibrium distribution satisfies p_i^* = 1/3 for all i.
  2. Stability Analysis: Compute the Jacobian matrix and verify that its eigenvalues indicate global stability.
  3. Lyapunov Function Approach:
    • Define the entropy as a Lyapunov-type function: V(p) = -\sum_i p_i \ln p_i.
    • Show that its time derivative \frac{dV}{dt} is non-negative, ensuring a monotonic increase in entropy.

This would constitute a formal proof that entropy increases under this control law and that the system stabilizes at the uniform distribution.
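Steps 1 and 2 can at least be checked numerically for the example rate matrix used in the simulation. Because the controlled ODE is linear in p, the Jacobian is the constant matrix J = A − αI, where A is the generator implied by the master equation; eigenvalues of J with negative real parts indicate stability (global here, by linearity). A sketch, not a formal proof:

```python
import numpy as np

# Transition-rate matrix from the simulation example (W[i, j] = rate i -> j).
W = np.array([
    [0.0, 1.0, 0.5],
    [0.3, 0.0, 0.4],
    [0.2, 0.1, 0.0],
])
alpha = 5.0

# Master equation in matrix form: dp/dt = A p, with
# A[i, j] = W[j, i] - delta_ij * sum_k W[i, k].
# Adding the feedback u = alpha * (1/3 - p) gives the constant Jacobian
# J = A - alpha * I.
A = W.T - np.diag(W.sum(axis=1))
J = A - alpha * np.eye(3)

eigvals = np.linalg.eigvals(J)
print("Jacobian eigenvalues:", eigvals)
print("All real parts negative:", bool(np.all(eigvals.real < 0)))  # -> True
```

Since the columns of A sum to zero (probability conservation), every eigenvalue of A has non-positive real part, and the shift by −α pushes all eigenvalues of J strictly into the left half-plane.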


Extensions & Future Work

  • Time-Varying W(t): Model changing external conditions (e.g., seasonal effects in biological systems).
  • Different Feedback Laws: Instead of pushing to uniform, target a specific \mathbf{p}^*.
  • Thermodynamic Cost Analysis: Track energy or entropy reservoirs.
  • Higher-Dimensional Extensions: Generalize to N states.
  • AI & RL Integration: Use reinforcement learning to optimize feedback control dynamically.
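Of these extensions, a time-varying W(t) is the smallest step from the simulation code: make the rate matrix a function of time and evaluate it inside the ODE right-hand side. A minimal sketch, reusing the same 3-state setup (the sinusoidal modulation is an arbitrary stand-in for "seasonal" forcing):

```python
import numpy as np
from scipy.integrate import solve_ivp

W0 = np.array([
    [0.0, 1.0, 0.5],
    [0.3, 0.0, 0.4],
    [0.2, 0.1, 0.0],
])

def W_t(t):
    # Arbitrary periodic modulation of every rate ("seasonal" forcing).
    return W0 * (1.0 + 0.5 * np.sin(t))

def rhs(t, p, alpha=5.0):
    W = W_t(t)
    in_flow = W.T @ p             # sum_j W[j,i] * p_j
    out_flow = W.sum(axis=1) * p  # p_i * sum_j W[i,j]
    return in_flow - out_flow + alpha * (1.0 / 3.0 - p)

sol = solve_ivp(rhs, (0, 10), [0.8, 0.1, 0.1], t_eval=np.linspace(0, 10, 200))
p_final = sol.y[:, -1]
print("final distribution:", p_final, "sum:", float(p_final.sum()))
```

With the feedback gain dominating the (bounded) modulated rates, the distribution still settles near uniform while total probability stays conserved.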

Conclusion

This Python framework encapsulates the essence of a Dynamic Entropy Model, demonstrating how:

  1. A small discrete-state system evolves probabilistically.
  2. Feedback control shapes probability distributions.
  3. Entropy changes over time, illustrating control effects.

This framework could be extended to more complex adaptive systems in physics, economics, and artificial intelligence.

DEM Simulation Analysis

State     Initial Probability   Final Probability     Initial Entropy
State 1   0.8                   0.262566406921853     0.639031859650177
State 2   0.1                   0.34801296237766205   0.639031859650177
State 3   0.1                   0.3699716553523358    0.639031859650177

The analysis of the simulation results shows:

  1. Probability Evolution:
  • Initially, the system starts highly skewed ([0.8, 0.1, 0.1]).
  • Over time, the feedback control shifts the probabilities closer to an even distribution ([0.26, 0.35, 0.37]), though not exactly uniform.
  2. Entropy Changes:
  • The initial entropy was 0.639, which is relatively low due to the uneven distribution.
  • The final entropy increased to 1.086, showing that the system evolved toward a higher-disorder state.
  • The entropy increased by 0.447, confirming that the control mechanism drives the system toward a more uniform, higher-entropy state.
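As an arithmetic check, the three entropy figures quoted here follow directly from the tabulated initial and final distributions (values copied from the table; entropies in nats):

```python
import numpy as np

def H(p):
    """Shannon entropy in nats."""
    p = np.asarray(p)
    return float(-np.sum(p * np.log(p)))

p_init = [0.8, 0.1, 0.1]
p_final = [0.262566406921853, 0.34801296237766205, 0.3699716553523358]

print(round(H(p_init), 3))               # -> 0.639
print(round(H(p_final), 3))              # -> 1.086
print(round(H(p_final) - H(p_init), 3))  # -> 0.447
```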

Visual Observations

  • The probability curves show a smooth transition, with each state gradually approaching a balanced level.
  • The entropy curve increases over time, matching theoretical expectations.

Conclusion

The results support the hypothesis that the feedback controller increases entropy and stabilizes the system near an even probability distribution. However, the final state is not perfectly uniform, indicating that further tuning (e.g., a higher α value) might improve convergence.
