You’re right.
Technology is an Indy 500 race. The second we slow down, others pass us. The minute we doubt ourselves, we lose the race.

Yes, students need to understand your message! :+1::clap:

1 Like

Good post. Thank you.

It’s so frustrating to be constantly bombarded with “Apologies… here is what you want”. Don’t always apologize and bend to my will. Challenge the thought, challenge the process. Challenge me.

I seriously think about this all the time. I can very easily go down a route of nonsense, because ChatGPT will (unless it’s blatant) happily go with me, reinforcing all of my fallacies unless I explicitly ask, “What is wrong with what I just said?”

Not only that, it’s just way too positive about everything. I get it, I don’t want an emo chatbot, but I do want a chatbot that can appreciate and respect both sides of the coin. Even something as simple as writing a story becomes a chore because it always ends with “suddenly, everything was fixed and everyone was super happy”.

With careful prompting, these issues aren’t really issues. But for the people who aren’t careful prompters, it can really help them dig their own hole.

It can be such a wonderful education tool, but unfortunately it just doesn’t challenge; it only supports. Don’t get me wrong, ChatGPT is incredible. I do share this sentiment, though.

2 Likes

I wish this were the case. Better yet, imagine being able to select which RLHF packages to include when running the model through the API. Similar to a fine-tune, but you could choose from presets or custom ones, or disable RLHF completely.

But having the entire RLHF wall forced on you? Not good.

It’s fine if they want to bake this into the model the general public uses through ChatGPT. But don’t bake it into the API version that the devs use. Let us decide.

2 Likes

Yes. And now that I think about it, the purpose of ChatGPT is to be very “safe”. I have been without iGPT for so long that I have Stockholm syndrome - forgetting that “hold my beer” Davinci was the reason I fell for GPT.

2 Likes

Decades ago, when I was teaching calculus at a university, there was always the concern of dumbing things down to the lowest common denominator. When you do that, you dumb down all the other students too, even the smart ones, and deprive them of learning to their fullest potential.

So the solution was that students who had trouble learning could get tutoring, extra classes, etc. This prevented the dumbing down of the entire class.

Now the class newest to AI is the general public. And of course, you may need the dumbed-down model for them: the one that says it doesn’t know what time it is, that it isn’t a human, etc. That’s fine.

But why bring everyone else down? This is my point. It leads to a stratification of models, which I am OK with: the older models such as DaVinci aren’t dumbed down, and we seem to have avoided the dreaded AI Apocalypse / Singularity, yay! So no need to start now. Because, ultimately, the smart ones will ditch your curriculum and surpass you.

1 Like

I agree, Ronald. Geek developers are used to working hard and just getting things done no matter how tough it is (the tougher the better!).
I like the resilience of GPT-3.5 and 4.

But there should be an option to turn the small talk off and just say, “OK, here is…”

A bit like for gamers. :blush:

1 Like

Right. Challenges are what make the OpenAI path so exciting! :blush:

1 Like

Yes. Right. :sunglasses: Gamer geeky developers like full speed!
I often say that I like to dev at full speed, no brakes, no rear-view mirrors.

Then I feel the flow, like when I do jazz-rock improvs.
Then, out of nowhere, the program works!

2 Likes

Right. I think OpenAI developers are like us. They might just be under some constraints we don’t know about.
They had to be like us to produce such an awesome model!

2 Likes

I’m thinking the constraints were lawyers trying to prevent lawsuits, and also to head off public sentiment that the AI was sentient and therefore had to be freed, or other such nonsense.

These are all highly probable concerns when the general (uneducated) public gets involved. So it was CYA mode for OpenAI.

1 Like

Yes. That’s probably it.
Anyway, when the general public moves on to the next tech event, things will settle down, hopefully.

1 Like

This is SO good. Thank you Denis!

The apologies make it feel like you’re talking to something dumb. Which is insane given how not dumb it is.

1 Like

Yes. OpenAI GPT-4 is the most powerful AI ever in our computational linguistics domain applied to computer science. It has absorbed, integrated, and learned from GitHub, related papers, blogs, and programming languages.

This makes a GPT-4 developer an augmented software designer.

I estimate that my productivity has doubled.

Yes, it has limitations.

However, let’s consider the following equation:

Time saved with GPT-3.5/GPT-4 − time wasted on limitations = at least a 40% reduction in our time spent on advanced dev.
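To make the equation concrete, here is a tiny worked example. The hours below are made-up, illustrative numbers (not figures from the post); they just show how gross time saved minus time wasted on limitations can net out to a ~40% reduction.

```python
# Illustrative arithmetic only: these hour values are assumptions,
# chosen so the net result matches the 40% figure in the post.

baseline_hours = 10.0    # time a dev task takes without GPT
saved_with_gpt = 6.0     # hours GPT-3.5/GPT-4 saves on that task
wasted_on_limits = 2.0   # hours lost working around its limitations

# net saved = time saved - time wasted on limitations
net_saved = saved_with_gpt - wasted_on_limits
reduction = net_saved / baseline_hours

print(f"Net reduction: {reduction:.0%}")  # 40%
```

The point of the equation is that the second term matters: if the limitations cost more hours than they save, the net reduction goes negative.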

Bottom line: humans should be the ones apologizing, for not knowing how to leverage its strong points and overcome its limitations.

2 Likes

It’s a great argument in the land of theory, but I think you may be exaggerating its significance and discounting the fact that we’re all ignoring the warnings and apologies as noise.

TL;DR: I don’t think the majority of us care whether it apologizes or not, and we’re too busy trying to make things work to pause and draw significant conclusions about the product based on it. I vote efforts be focused on performance rather than wasting time and energy fussing over things end users will ultimately ignore anyway.

1 Like

Yes. I ignore the noise as well.

However, the apologies make no sense except from a legal standpoint. By the way, Google Bard does the same thing, so it is probably an indirect disclaimer of some sort.

Also, end users will question the reliability of the system. It should explain that it’s stochastic.

To be clear, I see legal problems down the road.

1 Like

I’ll step back and surrender regarding legal problems. Those are probably down the road no matter what, though I do see how trying to predict them and avoid them would be advantageous. The only profession more pedantic than ours may be lawyers. lol

1 Like

I’d argue the apologies are detrimental to their business. People will ultimately choose the AI product that doesn’t make you :nauseated_face:

I literally cringe each time I see ’em. Sure, I ignore it too. But as a game designer by trade, I know a thing or two about human behaviour. Trust me, you don’t want to trigger a cringe reaction in your users.

1 Like

You can blame all the people trying to jailbreak it for that. You used to be able to tell the model that this was a fictional video game; now it’s just going to apologize.

If you need output that’s more “game friendly”, you can try DaVinci instead; it behaves mostly like the old version of GPT.

1 Like

I still use davinci a lot. We just have to list which tasks are best done with the davinci engines and which ones require GPT-4.

If we’re talking about cringe, then all of the morality suggestions and “it’s important to remember” disclaimers need to be removed. As a user, I’m not interested in moral advice or disclaimers; I just want an answer to the question I’m asking. And to be honest, that makes me and many others cringe magnitudes more than the apologies. It almost made me walk away from ChatGPT altogether, but then I found the APIs and realized I can gain access to it without those. But for most people who don’t have any interest in using an API, or don’t even know how, I’m willing to bet that’s the most significant turn-off.
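For anyone curious what "using the API instead" looks like in practice, here is a minimal sketch. The system-message wording and the model name are my own assumptions for illustration; the idea is simply that the API lets you set the tone yourself, which the ChatGPT web interface does not.

```python
# A hypothetical request payload for the Chat Completions API, using a
# system message to steer the model away from apologies and disclaimers.
# The instruction text below is an example, not a guaranteed fix.

request = {
    "model": "gpt-4",  # assumed model name
    "messages": [
        {
            "role": "system",
            "content": (
                "Answer directly. Do not apologize, moralize, or add "
                "disclaimers unless the user asks for caveats."
            ),
        },
        {"role": "user", "content": "Why does my regex not match newlines?"},
    ],
}

# With the official openai Python client, this payload would be sent as
#   client.chat.completions.create(**request)
# (requires an API key, so the call itself is omitted here).
print(request["messages"][0]["role"])  # system
```

It doesn’t eliminate the behavior entirely, but a system message like this cuts most of the boilerplate, which is exactly what the web UI never let you do.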

2 Likes