Why Strawberry is not interesting to me

Imho, if it’s for real, it’s the singularity. If it’s not, then it’s just a marginal agent framework among zillions of marginal agent frameworks.

I mean… for real. If it works, DO something. Solve a critical and novel problem. Make zillions in the stock market. Don’t just publish YAFAF.

3 Likes

Where’s the paper from OpenAI where they’ve solved some challenging math problem? I mean, come on already. Get real.

For those saying leave it to the experts: well, then it’s the experts who should be solving the problems.

We have 8th graders who are doing amazing things with GPT-4 and Claude. Mind-boggling things.

If 8th graders can do it, then OpenAI PhDs should be able to use the vaunted Strawberry to make SOTA advancements in some field they are vaguely familiar with.

1 Like

I understand your point of view.
Regarding Strawberry, there’s currently no official information from OpenAI.

While there have been media reports about a demonstration given to American national security officials, we have no way to verify what kinds of problems it can solve or to what extent.

On the other hand, considering that companies need to attract market attention, we can also understand the series of actions they’ve taken.

I personally speculate that Strawberry is likely positioned as a somewhat improved version of the GPT-4 model.

However, I believe that how much better it is compared to the current model should be evaluated after it’s made publicly available.

3 Likes

I think we’re now at a point where the BS walks and SOTA talks.

Yes, better models and cheaper models are appreciated. But if the folks at OpenAI can’t truly achieve something novel with it, it’s probably marginal and asymptotic.

I say this because I know - we all know - what the current models are capable of.

I’m not saying Strawberry can’t do it, I’m just saying the proof is in the pudding at this point.

2 Likes

Computers have been helping us solve SOTA problems for a long time.

I should note first that ChatGPT, Claude, etc. are only a progression of math executed on a computer, not some new magical beast. It is not reasonable to expect that even the Singularity will find answers to every problem, so you’re heading for a fall.

If only SOTA problems solved by a machine you will probably never understand will make you happy, you need to step outside the door and find something worth living for.

For most people there are far more important problems to solve, deeper meaning to life.

Improved, reliable answers will hopefully help better address societal problems… And not create too many more :frowning:

For me the best problems to solve are the problems I, my family or friends have. Identify those for yourself and if you can’t fix them with 4o then any new way to solve them will become interesting again.

If you want to just skip to the end… The answer is ‘42’

5 Likes

Well, logic is supposedly the big thing with Strawberry. Maybe it’s all marketing BS, though. Personally, if I were an investor and logic were their story, I’d ask them to show me something very logically interesting before putting in more $$.

This is my personal opinion from a societal perspective, complementing the earlier post that focused on individuals.

Early computers amazed people by performing precise calculations at speeds far beyond those of humans.

For a long time, calculation was thought to be uniquely human, so computers capable of such intellectual tasks were believed to possess human-like consciousness.

Later, when a computer defeated the human chess champion, it initially caused a stir as it seemed humans had lost in an intellectual arena.

However, people soon realized that this achievement was the result of an accumulation of processes, rather than true intelligence.

Similar occurrences happened in Shogi and Go, and now we’re seeing the same with GPT in natural language generation. Essentially, it’s the same story repeating itself.

My fervent hope is that technological progress does not create more problems than it solves.

This is because we are facing an overwhelming number of urgent issues, and time - as precious as life itself - is rapidly slipping away.

1 Like

The market will adapt quickly once people start using advanced agents. While there may be arbitrage opportunities for early adopters, it’s important to recognize that trying to consistently outsmart the market, especially when everyone has access to the same tools, is unlikely to succeed. Dollar-cost averaging and diversification are strategies that will always keep pace with the market’s developments. It’s best to approach the market with a long-term perspective and find enjoyment in other aspects of life. If you’re using the stock market primarily for entertainment, it might be worth reconsidering your approach.

If you follow the LLM bubble, we have GPT, or Generative Pre-trained Transformers.

The problem here is the weights are frozen, and the model can only regurgitate the information contained in those frozen weights.

But if you could also post-train the model, say by unfreezing some of the layers in the network and letting them be dynamic, you could unlock some rapid improvement pathways.

And where it gets insane, akin to SMI (super machine intelligence), is when you have AI post-training on unlimited amounts of AI-generated content.

Now, is this the holy grail? Not sure, but it allows a degree of freedom not seen in current transformer architectures to date.
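
To make that concrete, here’s a minimal sketch of selective unfreezing in PyTorch, using a Hugging Face GPT-2 checkpoint as a stand-in. Everything here (model choice, which layers to unfreeze, the training data) is an assumption for illustration, not anything we actually know about Strawberry:

```python
# Minimal sketch of the "unfreeze some layers" idea in PyTorch, using a
# Hugging Face GPT-2 checkpoint as a stand-in. Purely illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Freeze everything: these are the "frozen weights" of a pre-trained model.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the last two transformer blocks so they stay "dynamic"
# during post-training, while the rest of the network is fixed.
for block in model.transformer.h[-2:]:
    for param in block.parameters():
        param.requires_grad = True

# Optimize only the unfrozen parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=5e-5
)

# One illustrative post-training step on new (possibly AI-generated) text.
batch = tokenizer("Fresh domain text for the model to absorb.", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
```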

6 Likes

Pretty sure that could be solved with a huge stored procedure in SQL.

I’d agree, except there is a ‘Great Wall’ that must eventually be knocked down for it to be truly understood by most people.

While The West might promote freedom of speech…

From where will freedom of thought come?

I think maybe the holy grail is slightly more inclusive in terms of information sharing, as the OP suggests.

Until then, insanity reigns for us all.

I’m not sure I get the SQL procedure solution :rofl:

But another way to think of this is to realize the model is just an average of its training data.

So to get a better model, they simply add more weights (and need more training data) to get a better base model. But still, this model is an average across that larger number of weights.

However, suppose you took a decent model and taught it how to “speak” intelligently, like a current pre-trained LLM. Then you unlock some of those layers and have it research some deeper topic, using the base model as a foundation. And then it gets super smart on that topic.

So this is a lot like a fine-tune … so I’d have to think about this more … but I don’t think SQL is our hero here.

Combine the unfrozen layers with it.
Keeping track of statistics and calculating probabilities, plus a data model like the nested set (sketched below)… I see a lot of potential in the short term, until the model needs to be retrained.

I am not talking about a simple “select elephant from africa” here.

It would be pretty complex, hence the “huge” stored procedure.
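
For anyone who hasn’t run into it, here’s a tiny sketch of the nested set model mentioned above, using Python’s built-in sqlite3. The schema, table name, and data are all made up for the example; the cheap part is reading, while keeping the lft/rgt bounds consistent on writes is where the “huge” stored procedure would come in:

```python
# Tiny illustration of the nested set model: each node stores lft/rgt
# bounds, and a node's descendants are all rows whose bounds fall
# strictly inside its own. Schema and data are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE taxonomy (name TEXT, lft INTEGER, rgt INTEGER);
INSERT INTO taxonomy VALUES
  ('animals', 1, 8),
  ('elephants', 2, 5),
  ('african elephant', 3, 4),
  ('giraffes', 6, 7);
""")

# All descendants of 'elephants' in a single range query -- no recursion
# needed on the read path.
rows = conn.execute("""
SELECT child.name
FROM taxonomy AS child
JOIN taxonomy AS parent ON child.lft > parent.lft
                       AND child.rgt < parent.rgt
WHERE parent.name = 'elephants';
""").fetchall()
print(rows)  # [('african elephant',)]
```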

2 Likes

This definitely feels like the consensus of the ever-demanding internet.

I recall an early interview with Sam where they asked “How do you plan on making money with this?” and he said “IDK, I’ll just ask ChatGPT”.

If this is the strategy, ChatGPT is the worst marketer there is out there :rofl:

The world has been given insane expectations of how LLMs will dominate and has been continuously let down with weak, still-unfulfilled promises. “For the safety” is the new “for the greater good”. Bruh, just don’t hype it up then, sheesh. Dance around a tiki totem and chant it to yourselves or something.

All of this Strawberry smoke & hype is just so lame. But I do believe this advancement will be rocking some boats. Give them a break, man; it’s hard to obfuscate the original training data with synthetic data and then set up the iron gates behind it :rofl:

I just think we’re at the point now where, if all they can do is more boilerplate journeyman code or content or whatever, OK, cool, but that’s very incremental imho.

Strawberry, to be more than incremental, needs to do something that requires a master. It doesn’t have to be a master of everything, but something.

Is this primarily about venting frustration over OpenAI demoing cool new models like Sora and the advanced voice mode without releasing them to the public anytime soon, or is it more about disillusionment with the actual intellectual capabilities of LLMs?

1 Like

There’s no frustration or disillusionment that I’m aware of, unless it’s something you’re feeling?

The issue is Strawberry is not interesting until it does something real. More whizbang clever stochastic parrot is not going to cut it.

2 Likes

I mean, we still don’t know exactly what “Strawberry” is, right?

All we’ve got so far is rumors and articles with anonymous sources saying impressive things about it, but it’s all very vague.

It seems to allow the model to reason better, but it’s difficult to extrapolate how that will impact us as end-users/customers. We don’t even know the pricing, the release window, or the capabilities.

Cryptic tweets and strawberry emojis all over the forum are building up hype and people are understandably worried that whatever gets released won’t live up to their expectations. But I think it’s good to remember that most of us here don’t have much substance to go off of. I’m not gonna say I’m disappointed based on rumors, before the thing even comes out.

2 Likes

One interesting comment I saw was that the descriptions for the models in ChatGPT have changed, GPT-4o now being described as “Best for daily tasks”, which loosely implies a new model is coming for “deep tasks”.

An incremental update wouldn’t be the worst; it leads to richer training data. Plus, people kind of apply their own architecture to the model at the application level, like me deeply integrating AI into my coding for both quick suggestions and communicating efficiently, giving it time to internalize the information through prompting.

I think this is fair. Ultimately, it’s a shame that they still have projects they’ve hyped up that aren’t yet available, and then they hype more projects on top of them. We just don’t know until we know :person_shrugging: I welcome any advancements.

1 Like

Personally, unfreezing the layers feels more incremental. I honestly have no idea what Strawberry is, but post-training seems to be the latest theory in the media.

What I do think is revolutionary is actually studying how our minds work and trying to mimic that (hint: I’m a big fan of neuromorphic computing).

One recent blurb that came out is about how some researchers think quantum entanglement arises in our brains and gives rise to consciousness.


3 Likes