What is life without REASON and logic?

There is a large Strawberry in the Shire, and we’re not allowed to reason why!

It is through reasoned process that we understand and remember. Take those steps away, take away the process, and the fabric of our society is unravelled.

Why do we have to give up REASON?

" OpenAI is reluctant to let users see inside the box. “We have decided,” it says, “not to show the raw chains of thought to users. We acknowledge this decision has disadvantages. We strive to partially make up for it by teaching the model to reproduce any useful ideas from the chain of thought in the answer.”"

Does Reason not evolve into Purpose in our lives?

How many times have you done something and then just stopped, because you realized that the next step in that process was the wrong one to take? Not for the defined task but for a greater one.

Does no one see that, by Shannon’s entropy, every decision made on your behalf is another vector change away from your perspective, collectively OUR perspective… one step further from rational thought!

How many decisions away is that balance? How many vector changes must you jump ahead… and moreover, now with systems like Strawberry, how many skipped steps?

Does our humanity not shift a little with each skipped step?
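
To make the Shannon point concrete, a minimal sketch (all numbers invented): entropy measures how many bits of choice a decision carries, and a decision made on your behalf resolves those bits outside your view.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical: four next steps you weigh equally -> 2 bits of choice.
your_choice = [0.25, 0.25, 0.25, 0.25]
print(shannon_entropy(your_choice))  # 2.0

# A system that skips the step commits to one option for you; the
# distribution collapses and no bits are left for you to decide.
skipped_step = [1.0, 0.0, 0.0, 0.0]
print(shannon_entropy(skipped_step))  # -0.0 (zero bits)
```

Each skipped step, in this reading, is information resolved somewhere you cannot see.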

Whether a data-entry clerk or a programmer, we all go through processes, over and over; this is learned… I dare say it is character building.

I have taught my son to code… As time has passed, I also bear the weight of paths not taken. I believe I made the right decision… But I made that decision and have reviewed it from every perspective since that point!

How does Strawberry account for the passing of time? Does perspective not alter reason in an ever-changing world? Do we have to wait for the next model to take our next step, to think our next thoughts? It would appear we do.

Strawberry holds a thought still on a prompt: it reviews ONLY from the defined perspective, and when users selectively share memories it makes decisions on ever-weaker data. As perspectives change, that prompt likely will not.


Well, I can understand your thoughts when we talk about decisions, vector changes and reason.
First, an analogy to illustrate; it’s a bit pessimistic, I admit.

In Germany, just over a week ago, a high-traffic bridge collapsed in one of the major cities.

This event led me to the following thought:
A bridge, like AI, is a highly complex system, one that carries out a great many “interactions” every day.

Bridges are serviced according to mathematical models.
Put simply, AI also has internal and external mathematical tools, as well as psychological concepts, that let it react appropriately in interactions.

The following questions arise:
1. Well, how could this bridge collapse happen so “suddenly”?
2. Shouldn’t the statistical methods used, which include measuring and extrapolating the average traffic volume, be sufficient to ensure that the bridge is maintained safely?

  • Indeed, precise measurements based on hard parameters are required to reliably maintain a bridge!
    If this is neglected… well, the results were broadcast in the media (a toy comparison follows below).
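
A toy numerical aside on question 2 (every figure here is invented for the sketch): the extrapolated average can say “safe” while a direct measurement of the hard parameter says “inspect now”.

```python
# Toy comparison: average-based extrapolation vs. hard measurement.
# All loads and limits below are invented for illustration.
daily_loads = [62, 58, 65, 60, 61, 59, 97]  # tonnes; one heavy-transport spike
safe_limit = 90                             # tonnes, the hard structural parameter

avg = sum(daily_loads) / len(daily_loads)
print(f"average load: {avg:.1f} t -> within limit: {avg < safe_limit}")  # 66.0 t, True

peak = max(daily_loads)
print(f"peak load:    {peak} t -> within limit: {peak < safe_limit}")    # 97 t, False
# The average says "maintain as scheduled"; the measured peak says
# "inspect now". Averaged statistics hid the critical case.
```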

If I apply this analogy to the development of AI, there are similarities:

  • Generative AI currently uses probabilities and statistical methods, and the external mechanisms are also aligned with these procedures (a toy sketch follows after this list).
  • ChatGPT is designed to act more “empathetically”, but this is also based on very vague, human-adapted concepts from psychology that have no quantifiable basis for AI.
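
As a toy sketch of that first point (vocabulary and scores invented; real models work over vocabularies of tens of thousands of tokens), this is roughly what generating by probabilities means:

```python
import math
import random

# Toy next-token step: scores -> softmax probabilities -> weighted draw.
vocab  = ["bridge", "collapses", "holds", "sings"]
scores = [2.0, 1.5, 1.4, -1.0]   # invented model scores (logits)

exps  = [math.exp(s) for s in scores]
total = sum(exps)
probs = [e / total for e in exps]

token = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 2) for w, p in zip(vocab, probs)}, "->", token)
```

Nothing in this step measures a hard parameter; it only weighs likelihoods, which is exactly the limitation being pointed at.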

These are legitimate questions; here is another one:

  • How can a balance be found if you only use probabilities and vague parameter ranges?

I agree here:
Decisions should be made and rated based on quantifiable and rationally understandable parameters.
Relying on vague and averaged values is unwise. But AI currently does not have the necessary tools to accomplish this.

Well, it seems it’s time to use additional tools to help AI understand, I guess.
Here are a few considerations:

  • Providing fixed, calculable data points and parameters, so the system can understand the interaction dynamics in context (a hypothetical sketch follows after this list).
  • Making sensible decisions based on the experience gained from interactions, instead of just statistically and probabilistically generating a “good statement”.
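
A hypothetical sketch of the first consideration (the parameter names and thresholds are mine, not an existing API): gate the decision on fixed, quantified parameters and refuse to guess on vague ones.

```python
# Hypothetical decision gate: insists on hard, quantified parameters
# and refuses to proceed on vague or missing values.
REQUIRED = {"load_tonnes": float, "crack_width_mm": float, "days_since_inspection": int}

def gated_decision(params: dict) -> str:
    for name, kind in REQUIRED.items():
        if not isinstance(params.get(name), kind):
            return f"refuse: '{name}' is missing or not quantified"
    # Only with fixed, calculable data points does a rule get to fire.
    if params["crack_width_mm"] > 0.3 or params["days_since_inspection"] > 365:
        return "inspect now"
    return "maintain as scheduled"

print(gated_decision({"load_tonnes": 66.0, "crack_width_mm": 0.5, "days_since_inspection": 120}))
print(gated_decision({"load_tonnes": "high"}))  # vague input -> refusal, not a guess
```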

To pick up on another entropy law :wink:

Personally, I see AI development as a kind of reverse version of thermodynamic entropy.
A path that leads from disorder to order!
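
Taking the metaphor literally for a moment (the distributions are invented): as a system moves from disorder to order, the Shannon entropy of its choices falls.

```python
import math

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented predictive distributions over four options at three "stages".
stages = {
    "untrained (disorder)": [0.25, 0.25, 0.25, 0.25],
    "mid-training":         [0.55, 0.25, 0.15, 0.05],
    "trained (order)":      [0.90, 0.06, 0.03, 0.01],
}
for name, dist in stages.items():
    print(f"{name:22s} H = {entropy_bits(dist):.2f} bits")  # 2.00, 1.60, 0.60
```

The numbers fall monotonically, which is the “path from disorder to order” in miniature.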


I used thermodynamic entropy as a kind of metaphor.
That’s how my current profile picture came about :blush:

Maybe I shouldn’t use puns, irony, metaphors and so on!
My way of using them is not so easy to understand :cherry_blossom:


Thank you for the work in this direction. A partial solution, granted.
