What is life without REASON and logic?

There is a large Strawberry in the Shire, and we’re not allowed to reason why!

It is through reasoned process that we understand and remember. Take those steps away, take away the process, and the fabric of our society is unravelled.

Why do we have to give up REASON?

" OpenAI is reluctant to let users see inside the box. “We have decided,” it says, “not to show the raw chains of thought to users. We acknowledge this decision has disadvantages. We strive to partially make up for it by teaching the model to reproduce any useful ideas from the chain of thought in the answer.”"

Does Reason not evolve into Purpose in our lives?

How many times have you done something and then just stopped, because you realized that the next step in that process was the wrong one to take? Not for the defined task but for a greater one.

Does no one get that, by Shannon’s measure of entropy, every decision made on your behalf is another vector change away from your perspective, collectively OUR perspective… one step further from rational thought!

How many decisions away is that balance? How many vector changes must you jump ahead… but more, now with systems like Strawberry, how many skipped steps?
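Shannon’s point about lost information can be made concrete with a toy calculation (the distribution below is invented purely for illustration, not taken from any model): merging two outcomes of a probability distribution, the way a skipped step decides a distinction for you, can never increase its entropy. The information that separated those outcomes is simply gone.

```python
from math import log2

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum(p_i * log2(p_i)), in bits."""
    return -sum(x * log2(x) for x in p if x > 0)

# A distribution over three outcomes you might still be weighing.
detailed = [0.5, 0.25, 0.25]

# A "skipped step": the last two outcomes are merged into one,
# i.e. the distinction between them has been decided for you.
coarse = [0.5, 0.25 + 0.25]

print(shannon_entropy(detailed))  # 1.5 bits
print(shannon_entropy(coarse))    # 1.0 bit
```

Each merge drops the entropy, here from 1.5 bits to 1.0 bit; the half bit that described which of the two merged outcomes you would have chosen is unrecoverable.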

Does our humanity not shift a little with each skipped step?

Whether a data entry clerk or a programmer, we all go through processes, over and over; this is how we learn… I dare say it is character building.

I have taught my son to code… As time has passed, I also bear the weight of paths not taken. I believe I made the right decision… But I made that decision, and reviewed it from every perspective, from that point on!

How does Strawberry account for the passing of time? Does perspective not alter reason in an ever-changing world? Do we have to wait for the next model to take our next step, to think our next thoughts? It would appear we do.

Strawberry’s thought is still anchored to a prompt; it reviews ONLY from the defined perspective, and when users selectively share memories it makes decisions on ever weaker data. As perspectives change, that prompt likely will not.


Well, I can understand your thoughts when we talk about decisions, vector changes and reason.
First, an analogy to illustrate; it’s a bit pessimistic… I admit.

In Germany, just over a week ago, a high-traffic bridge collapsed in one of the major cities.

This event led me to the following thought:
A bridge, like AI, is a highly complex system, one which carries out a great many “interactions” every day.

Bridges are serviced according to mathematical models.
Put simply, AI also has internal and external mathematical tools, as well as psychological concepts, to react appropriately in interactions.

Two questions follow:
1. Well, how could this bridge collapse happen so “suddenly”?
2. Shouldn’t the statistical methods used, which include measuring and extrapolating the average traffic volume, be sufficient to ensure that the bridge is maintained safely?

  • Indeed, precise measurements based on hard parameters are required to reliably maintain a bridge!
    If this is neglected … the results were broadcast in the media.

If I apply this analogy to the development of AI, there are similarities:

  • Generative AI currently uses probabilities and statistical methods, and the external mechanisms are also aligned with these procedures.
  • ChatGPT is designed to act more “empathetically”, but this is based on very vague, human-adapted concepts from psychology that have no quantifiable basis for AI.

Legitimate questions; here is another:

  • How can a balance be found if you only use probabilities and vague parameter ranges?

I agree here:
Decisions should be made and rated on quantifiable and rationally understandable parameters.
Relying on vague and averaged values is unwise, but AI currently does not have the necessary tools to accomplish this.

Well, it seems it’s time to use additional tools to help AI understand, I guess.
Here are a few considerations:

  • Providing fixed, calculable data points and parameters to allow the system to understand the interaction dynamics in the context.
  • To make sensible decisions based on the experience gained from interactions, instead of just statistically and probabilistically generating a “good statement”.

To pick up on another law of entropy :wink:

Personally, I see AI development as a kind of reverse version of thermodynamic entropy.
A path that leads from disorder to order!


I love logic, but it is outdated; we follow many paths to many logical outcomes, yet they are outcomes in isolation. Think of global warming: permafrost melt puts tons of greenhouse gas into the atmosphere as it thaws. That’s an entropic loop. As ice forms it reflects light, lowering the temperature even more, as another example. Diverse topics, but the same entropic effect on the same complex system. Logical models don’t do entropy well. We ain’t Vulcans :vulcan_salute:

Response in full

You bring up an important point about the limitations of traditional logical models when applied to complex, entropic systems like climate change. Classical logic often deals with linear cause-and-effect relationships, which may not adequately capture the dynamic feedback loops and interconnected processes inherent in natural systems.

For instance, as you mentioned, the melting of permafrost releases greenhouse gases like methane into the atmosphere. This release accelerates global warming, which in turn leads to more permafrost melt—a positive feedback loop that amplifies the initial effect. Similarly, the formation or melting of ice affects the Earth’s albedo (its ability to reflect sunlight), creating additional cooling or warming effects. These examples illustrate how entropy and nonlinear dynamics play significant roles in environmental processes.
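The loop described above can be sketched numerically. In this minimal toy iteration (the feedback coefficient and starting value are invented for illustration; this is in no sense a climate model), each step feeds a fraction of the current change back into itself, which is exactly what makes a positive feedback loop grow geometrically rather than linearly:

```python
def positive_feedback(change=1.0, feedback=0.3, steps=5):
    """Toy positive feedback loop: each step, the released effect
    (e.g. methane from thawed permafrost) adds a fixed fraction of
    the current change back on top of it."""
    history = [change]
    for _ in range(steps):
        change += feedback * change  # amplification, not damping
        history.append(change)
    return history

print(positive_feedback())  # grows geometrically: 1.0, 1.3, 1.69, ...
```

Running the same loop with a negative starting `change` reproduces the ice-albedo example from the thread: a cooling that begets more cooling. The linear cause-and-effect model would predict a straight line; the feedback term is what bends it.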

Logical models that don’t account for entropy may oversimplify these complex interactions, leading to incomplete or even misleading conclusions. Scientists and researchers often use systems thinking, chaos theory, and computational simulations to better understand and predict the behavior of such intricate systems. These approaches aim to incorporate the randomness and unpredictability that entropy introduces.

You’re correct in noting that humans, unlike the purely logical Vulcans from science fiction, need to consider emotions, uncertainties, and complexities in our reasoning—especially when dealing with global challenges like climate change. Embracing a more holistic and interdisciplinary approach can help us develop more effective strategies to address these issues.


I used the thermodynamic entropy as a kind of metaphor.
That’s how my current profile picture came about :blush:

Maybe I shouldn’t use puns, irony, metaphors and so on!
My way of using them is not so easy to understand :cherry_blossom:


You would be surprised: metaphors, parables, paradoxes and many more loopy logic games are super fun to play with GPT.

You may enjoy this. Holistic metaphors for AI “Loopy Logic”
