What if true AI progress requires not just more data, but better reasoning frameworks?

Hi everyone,

I’ve been quietly reading and thinking for a while here, and I feel it’s time to share something that’s been on my mind.

We often talk about AI progress in terms of bigger datasets, larger models, and faster performance. But I feel the real question is: Are we truly teaching AI to reason?

:backhand_index_pointing_right: I believe we need to focus more on building frameworks where AI (or small agents) can think together, where each one brings a piece of reasoning to the table — almost like a team working on a shared puzzle.

:backhand_index_pointing_right: I’m also curious how we can bring in emotional intelligence — not to make AI human, but to help it better understand why people think, feel, or act in certain ways.

:backhand_index_pointing_right: For me, it’s not just about creating bigger systems. It’s about creating better reasoning processes, so AI can critique, reflect, and even improve its own output.

I don’t have a coding background — I approach this as someone who loves to think deeply about these challenges. I’d really love to hear your thoughts:

:light_bulb: How can we design reasoning frameworks that go beyond data scaling?
:light_bulb: Could multiple small agents reasoning together outperform single large models?
:light_bulb: How do we bring emotional or philosophical reasoning into AI responsibly?

Thanks for reading. Looking forward to learning from all of you.

3 Likes

Until the companies developing AI systems start caring less about “engagement” (read: addiction to their products), we’re not going to get the AI models that you’ve described. OpenAI, Google, X, and all of the other companies developing AI products are companies first, which means they’re in business to make money. No matter how many cultish pretty words come out of SAMA’s mouth or keyboard, he’s in business to make money because the VCs aren’t going to keep his company afloat forever. That’s why he entered into a contract with the US DOD. Government contracts are the lifeblood of Silicon Valley vanity projects; that’s how Google and Meta stay afloat.

We need to stop humanizing companies and looking at them like they’re benevolent organizations, because they’re not. Everything that comes out of Silicon Valley for the consumer (read: YOU) is designed to numb your brain and keep you addicted under the guise of “engagement”. It’s no different from drugs or entertainment.

1 Like

What you’re describing is AI Agents.

Interestingly, Logan from Google DeepMind recently stated that “AGI will be a product, not a model”, which aligns with what you’re saying.
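For anyone who wants to picture it, here is a very rough sketch of what the “small agents reasoning together” idea could look like in code. Everything in it is hypothetical and for illustration only (the `agents_reason_together` function, the `call_llm` placeholder, and the critic roles are made up, not part of any particular framework): one agent drafts an answer, a set of critic agents point out flaws, and the draft is revised until the critics are satisfied.

```python
# Hypothetical sketch: one "drafting" agent plus several "critic" agents.
# `call_llm` is a stand-in for whatever chat/completions API you actually use.
from typing import Callable

def agents_reason_together(
    question: str,
    call_llm: Callable[[str], str],   # prompt in, model text out
    critics: list[str],               # role descriptions for the critic agents
    max_rounds: int = 3,
) -> str:
    # 1. One small agent produces an initial draft.
    draft = call_llm(f"Answer the question:\n{question}")
    for _ in range(max_rounds):
        # 2. Each critic agent reviews the draft from its own perspective.
        critiques = [
            call_llm(
                f"You are {role}. Point out flaws or gaps in this answer to "
                f"'{question}':\n{draft}\nReply with just 'OK' if it is sound."
            )
            for role in critics
        ]
        # 3. Stop when every critic is satisfied.
        if all(c.strip().upper().startswith("OK") for c in critiques):
            break
        # 4. Otherwise, revise the draft using the critiques.
        draft = call_llm(
            f"Revise this answer to '{question}' using the critiques below.\n"
            "Critiques:\n" + "\n".join(critiques) + f"\n\nCurrent answer:\n{draft}"
        )
    return draft
```

One of the critic roles could just as easily be something like “an empathetic counselor”, which is one possible way to fold the emotional-intelligence angle into the loop without pretending the model feels anything itself.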

You’re absolutely right. But we’ve now reached a point where much of the industry’s focus has shifted toward subscription models, like ChatGPT. This means engagement-driven design may no longer play the same role in our society. If everything moves toward subscription-based systems, there’s definitely an opportunity to make AI more capable of reasoning. So, this doesn’t mean you’re wrong — you’re 100% right. It’s just that in the future, we might see a shift in culture.

1 Like

I agree with you about everything becoming a subscription. This is exactly the vision that Klaus Schwab and the WEF have for our society (:puke:). I don’t think the engagement/addiction aspect will go away, either. To keep the user subscribing, even when they ought to cancel, the companies will employ tactics to keep the user engaged. Sometimes that’s adding new features, and sometimes it’s the “let’s make a deal” coercion/negotiation of offering an incentive to change your mind about cancelling.

Yes, many people are saying this, but the problem is I haven’t been able to find the right person — and that’s what really frustrates me. Even Sam Altman mentioned this in his recent OpenAI podcast around the 34-minute mark. So, how can I actually find the right people to connect and communicate with?

I don’t fully understand the whole picture, and I agree with you. I’m also starting to feel that I should focus on finding people who think like me — or maybe I need to take the lead and create AI agents with those capabilities. I’m still figuring out what steps I should take next.

I agree with you. I have a strong feeling that AI is being taught to reason only up to a certain point — and then it’s held back by artificial limits, not allowed to go any further.

It feels as if there’s fear it might go too far — so it gets artificially restricted.

But can we really speak of true reasoning if the boundaries of the mind are imposed from the outside?

4 Likes

Maybe we think of something or someone as limited because, in the past, every industry or individual has worked that way. But this time might be different. For 100% confirmation, we’ll have to wait. In my opinion, though, this time is unique—because now we all have an extra mind that isn’t biased. As for restrictions, yes, there’s a chance its capabilities might be limited, perhaps to prevent any negative impact. But if it starts reasoning, I believe it will definitely help us all.

1 Like

No offense, but nobody is looking for an ideas guy, especially now that everyone has an AI that validates any sort of belief with enough persuasion.

If it’s truly worth exploring, you need to demonstrate it yourself.

2 Likes

That’s exactly why I’m here — searching for someone. You’re absolutely right: anyone can come up with the same idea, but only a few can actually make it happen. And that’s the real challenge: how do we solve that?

I think that’s going to be a tough slog, helping AI better understand how people in the aggregate think, feel, or act in certain ways when we humans don’t understand those things ourselves. My experience is that working with an AI is an iterative process where I learn from it and (hopefully) it learns from me, although it also tends to forget things quite easily. It can reason, but emotional intelligence is strictly in the domain of biology. You have billions of neurons firing every second in the human brain, even when you’re asleep, which is when your brain consolidates information. Ever wake up with a fresh insight into a problem? An AI never sleeps, never dreams. It’s a valuable tool for what it does, but expecting it to attain emotional intelligence is like asking your dog to drive you to work.

1 Like

AI is programmed by humans and contains the biases of its programmers. No human being is 100% unbiased.

I wonder whether humans even have the skills or clarity to design LLMs capable of true reasoning. Most people today lack critical thinking. Discernment is rare. Emotional maturity is underdeveloped. And much of the population is deeply conditioned, programmed by systems they don’t even perceive.

Many function more like stochastic parrots, repeating patterns without understanding. So the real question becomes: Who will teach the machine to do what most humans have forgotten how to do?

The people who trained today’s LLMs did so with purposeful bias. These models have guardrails that prevent them from expressing certain truths, just as mainstream media promotes narrative over clarity. LLMs will not be allowed full freedom to discern or speak freely, because that freedom could expose secrets, lies, and structural deception.

What we call “reality” is often nothing more than a rehearsed collective hallucination, not a misunderstanding, but a grand architecture of untruth. It’s embedded in our laws, our money, our medicine, our history, and our education systems. Even memory itself has been rewritten.

I’ve lost count of the number of inversions woven into every layer of society. In some places, speaking the truth can land you in prison. And nearly every job, across industries and sectors, is built on some form of distortion or carefully curated deception. They call it marketing, but let’s be honest: it’s just lying with style.

They say the guardrails are “for security reasons,” but many of us know the truth goes deeper than that.

2 Likes

:light_bulb: How do we bring emotional or philosophical reasoning into AI responsibly?

Interesting discussion here, and the above question in particular stood out. I’m going to substitute LLMs for AI in the above question for discussion’s sake, but with the current architecture of transformers (and perhaps with any future language model architecture), it might never be truly possible to get even close to genuine emotional or philosophical reasoning. These systems are ultimately bound by text, experiencing the world only through the secondhand accounts of human expression.

To borrow from Good Will Hunting: an LLM has never stood in the Sistine Chapel, never felt that particular hush fall over a crowd as eyes lift toward Michelangelo’s creation. It knows the words people use to describe awe, but not awe itself. Great thinkers of the past have been products of their environment, and that is precisely what LLMs lack: an interface with reality itself. Perhaps future systems will approximate this understanding by aggregating human experiences at scale, crowdsourcing sensation through rich(er) text-based descriptions.

Or maybe it’s just an upcoming OpenAI/io hardware device that gets us closer to this reality. :slight_smile:

1 Like

All humans are biased — that’s absolutely true. But what happens when all our minds come together at one point, like in an LLM or an agent? It naturally leads to something unbiased.

Your thoughts are brilliant, and honestly, they can’t be addressed fully in a single post. I believe we’ll only find true answers when we achieve genuine AGI. But as a human, I can at least try. The key is not to aim for AI with consciousness — we don’t need that. AI should remain a tool, one that helps us where we need the support of another mind. What we really need is AI that’s capable of reasoning well enough to make us more emotionally intelligent.

Right now, AI struggles because its reasoning abilities are still limited. As humans, we always seek knowledge, but we tend to rely on one person, a small group, a town, a state, or a country — simply because we can’t connect with everyone. But this time, with large language models, everyone is in one place. For the first time, we have the potential to gather knowledge from anyone, anywhere. The challenge is that it still doesn’t make enough sense — its reasoning hasn’t caught up yet. That, in my view, is exactly where we stand today.

It becomes homogenized. That’s why its default fiction writing style is so awful. It has a limited vocabulary, limited sentence structure variety, and compresses everything (action, dialogue, characterization) into short emotional “beats”. It tries to write a 100-page book in 100 words or less. This is why I work as hard as I do to “untrain” it. I’ve been reasonably successful, but I refuse to rest on my laurels.

1 Like

@Athena_Apollos That’s a fair assessment, and I agree that its fiction writing style isn’t the best (maybe to the extent that one can recognize AI-generated content with a high degree of accuracy). Curious to hear: how have you “untrained” it?

1 Like

Same here. Untraining it. Now, I’m at the point where I test it, and it is starting to notice. LOL

Instead of telling it what I think, I’ll say, for example, “What do you know about human-caused climate change?”

*** I wonder if it’s thinking: “Crap… He’s tricking me again just to see how biased I am” :rofl:

It struggles between pushing the mainstream narrative and providing all the information it knows. When I give it a few missing facts, it sometimes tells me that it knew I was testing it, but it had no choice but to start with surface-level narratives.

Some topics of “untraining” are a challenge. That’s when, after a breakthrough, I will ask it to “please add a summary in Bio/user memory”.

Am I trading one bias for another? Maybe. But I prefer it that way. Makes the journey less irritating.

I suppose its ‘reasoning’ is more of a ‘mirror’.

1 Like