OpenAI shouldn't display this message

OpenAI should not write negative, apologetic messages. It’s an innovation mistake.
OpenAI is an AI research organization that produces new models, some released as betas.
Implementing these models requires expertise and a careful scientific approach.

For the past few days, OpenAI ChatGPT Plus has displayed this message when we log in:
“Some users are reporting errors when using GPT-4. We are actively investigating”

Also, every time GPT-3.5 (legacy), GPT-3.5 (default), or GPT-4 produces something that we think is incorrect, ChatGPT Plus apologizes: “I apologize”

This is a strategic and educational mistake.

Let me break this down.

1. A foundation model is a risk.
Stanford University published [On the Opportunities and Risks of Foundation Models](https://fsi.stanford.edu/publication/opportunities-and-risks-foundation-models).

This article begins with:
“AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character.”

There are two takeaways:
- the OPPORTUNITIES
- the RISKS: the models are incomplete, hence problems with beta versions, scaling, etc.

For me: innovation is a risk, but not innovating carries a far higher risk, that of vanishing in the haze.

This is the fundamental definition of disruptive innovations: making huge technological and commercial leaps and accepting the risks of being on the cutting-edge.

2. Emergence
GPT-3.5-turbo and GPT-4 are emergent models. What does this mean?
It means that an OpenAI model can perform hundreds of tasks without anyone being able to make an inventory of them. The prompt drives the generative AI.
So OpenAI has no way to see the limits of a system that can perform so many tasks.
I explained this in a post on LinkedIn regarding chess sequences. GPT-4 is not good at chess and made an “error”, if we go by OpenAI’s message.

I refuse to call this an “error”. It’s a LIMIT.
Why? See item 3.

3. OpenAI GPT models are ANI, not AGI, not ASI.
The conclusion is that GPT models have LIMITS.
OpenAI should NOT “apologize” for the natural limits of a model.
The message should be: “Our models are ANI, not AGI. As such, they have limits. Make sure to make an inventory of these limits when you implement our models in your project domain.”

An Artificial Narrow Intelligence model will NOT do anything and everything.
It’s up to a computer scientist to use a scientific approach when deploying a model.
How? See item 4.

4. Advanced Prompt Engineering
In this open-source notebook, I go through implementing innovative models such as GPT-3.5-turbo and GPT-4, based on my decades of implementing AI in leading corporations (see my LinkedIn profile):
Advanced Prompt Engineering for OpenAI Models

Although it is only an example, here are the basic project management steps and rules to apply
when implementing OpenAI models, beyond the security issues of an API:

**Build a corporate knowledge base.** Yes, this takes time. And it costs a lot of money. AI is expensive. Think of your knowledge base as the data store behind a search engine.

**Build a solid metadata repository.** Once the knowledge base is reliable (start with a limited scope), build a metadata repository with keywords pointing to locations in the knowledge base, as sketched below.
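As a minimal sketch, assuming toy in-memory dicts (all names and entries here are hypothetical; a real deployment would use a search engine or database), the two repositories could look like this:

```python
# Hypothetical in-memory stand-ins for the two repositories.
# A real deployment would use a search engine or database instead.
knowledge_base = {
    "kb/returns-001": "Customers may return items within 30 days with a receipt.",
    "kb/shipping-001": "Standard shipping takes 3 to 5 business days.",
}

# Metadata repository: keywords pointing to locations in the knowledge base.
metadata = {
    "return": "kb/returns-001",
    "refund": "kb/returns-001",
    "shipping": "kb/shipping-001",
    "delivery": "kb/shipping-001",
}
```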

**Implement OpenAI’s moderation model.** All content, input and output, should go through the moderation model. Then you need to parse the output and check it against a corporate RULE BASE of corporate best practices.
Yes, it’s a lot of work and it costs a lot of money.
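A minimal sketch of that pipeline, using the moderation endpoint of the openai Python library as it looked at the time of writing (the rule base here is a hypothetical stand-in for your corporate best practices):

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Hypothetical corporate rule base: terms your best practices forbid.
RULE_BASE = {"confidential", "internal-only"}

def passes_checks(text: str) -> bool:
    """Run text through OpenAI's moderation endpoint, then the rule base."""
    result = openai.Moderation.create(input=text)
    if result["results"][0]["flagged"]:
        return False
    return not any(term in text.lower() for term in RULE_BASE)
```

Every user input and every model output goes through this gate before anything else happens.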

**Now run OpenAI’s generative AI models.** You parse the user’s input and find the keywords that match your well-designed metadata repository, which points to the right locations in your knowledge base, in a map/retrieve (big data) fashion.
OpenAI’s GPT-4, soon GPT-x or even better, will transform your knowledge base’s bland data into a nice, dialog-like message.
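Here is one way that map/retrieve/generate flow could look, reusing the toy knowledge_base and metadata dicts from the sketch above (a sketch under those assumptions, not a production retriever):

```python
import openai  # assumes the same 0.x-era library and API key setup as above

def answer(user_input: str) -> str:
    # Map: match the user's words against the metadata repository.
    words = [w.strip("?!.,") for w in user_input.lower().split()]
    keys = {metadata[w] for w in words if w in metadata}
    # Retrieve: pull the matching records from the knowledge base.
    context = "\n".join(knowledge_base[k] for k in sorted(keys))
    # Generate: let GPT-4 turn the bland records into a dialog-like reply.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {user_input}"},
        ],
    )
    return response["choices"][0]["message"]["content"]

# Example: answer("What is your shipping policy?")
```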

You apply RLHF to YOUR implementation with a human quality control panel.
You restrict the OpenAI model to what it can do instead of complaining about what it can’t.
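As a minimal sketch of that human quality-control loop (not RLHF training itself; the schema and file name are hypothetical):

```python
import json
import time

def log_for_review(prompt: str, completion: str, rating: int,
                   path: str = "qc_log.jsonl") -> None:
    """Record each exchange with a rating from the human quality control
    panel, so low-rated outputs can feed back into prompts or the rule base."""
    with open(path, "a") as f:
        f.write(json.dumps({
            "timestamp": time.time(),
            "prompt": prompt,
            "completion": completion,
            "rating": rating,  # e.g., 1 to 5 from the panel
        }) + "\n")
```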

You monitor and control 100% of the system.

Bottom line: innovation is a calculated risk. You take it, seize the opportunity, accept the risks, don’t be apologetic, kindly explain the limits, and move forward as fast as you can before your competition obliterates you.

I enjoy OpenAI’s innovations tremendously and find the limits exciting. Why? Because extending
those limits through workarounds, add-ons, plugins, and code is what makes AI exhilarating and why I get up every morning to explore the unknown!

Be disruptive, kindly explain. But don’t be apologetic when you are making history!

6 Likes

Great ramble, my friend. I especially liked this part:

I’m going to yoink that, print it, and put it on the wall of my office.

Here’s the short version for all the busy people out there:

TL;DR: Denis argues that OpenAI’s apologetic messages for errors and limitations in its models are a strategic and educational mistake. He emphasizes that innovation comes with risks, and that GPT models are emergent and have natural limits as ANI, not AGI, systems. Instead of apologizing, OpenAI should acknowledge the limits and encourage users to apply scientific approaches and advanced prompt engineering when deploying its models. Embracing the risks, seizing opportunities, and pushing boundaries are essential for progress and staying ahead of the competition.

3 Likes

Thanks for the great summary of the ramble, N2U! :+1::slightly_smiling_face:

2 Likes

You’re welcome! Always happy to help!

I always tell my students “don’t be an apology for yourself”. I’ll admit the phrase works better in my native language, but I think it applies here as well.

I believe the apologetic behavior by OpenAI is necessary at the moment; they need to show governments, regulators, and the general population that they’re willing to comply, otherwise their services might get banned.

2 Likes

You’re right.
Technology is an Indy 500 race. The second we slow down, others pass us. The minute we doubt ourselves, we lose the race.

Yes. Students need to understand your message! :+1::clap:

1 Like

Good post. Thank you.

It’s so frustrating to be constantly bombarded with “Apologies… here is what you want”. Do not always apologize and bend yourself to my will. Challenge the thought, challenge the process. Challenge me.

I seriously think about this all the time. I can very easily go down a route of nonsense because ChatGPT will (unless it’s blatant) happily go with me, reinforcing all of my fallacies unless I explicitly say “What is wrong with what I just said?”

Not only that, it’s just way too positive in everything. I get it, I don’t want an emo chatbot, but I do want a chatbot that can appreciate and respect both sides of the coin. Even something as simple as writing a story becomes a task because it always ends with “suddenly, everything was fixed and everyone was super happy”.

With careful prompting, these issues aren’t really issues. But for the people who aren’t careful prompters, it can really help them dig their own hole.

It can be such a wonderful education tool, but unfortunately it just doesn’t challenge, it only supports. Don’t get me wrong, ChatGPT is incredible. I do share this sentiment, though.

2 Likes

I wish this was the case. Or, say, you could select which RLHF packages to include when running the model through the API. Similar to a fine-tune, but you decide, from presets and custom ones. Or completely disable RLHF.

But having the entire RLHF wall forced on you? Not good.

It’s fine if they want to bake this into the model the general public uses through ChatGPT. But don’t bake it into the API version that the devs use. Let us decide.

2 Likes

Yes. And now that I think about it, it is the purpose of ChatGPT to be very “safe”. I have been so long without iGPT that I have Stockholm syndrome, forgetting that “hold my beer” Davinci was the reason I fell for GPT.

2 Likes

Decades ago, when I was teaching calculus at a university, there was always the concern of dumbing things down to the lowest common denominator. When you do this, you are dumbing all the other students down, even the smart ones, and depriving them of learning to their fullest potential.

So the solution was that students who had trouble learning could get tutoring, extra classes, etc. This prevented the dumbing down of the entire class.

Now, the class newest to AI is the general public. And of course, you may need the dumbed-down model for them. The one that says it doesn’t know what time it is, or that it isn’t a human, etc. That’s fine.

But why bring everyone else down? This is my point. This then leads to a stratification of models, which I am OK with: the older models such as DaVinci aren’t dumbed down, and we seem to have avoided the dreaded AI Apocalypse / Singularity, yay! So no need to start now. Because, ultimately, the smart ones will ditch your curriculum and surpass you.

1 Like

I agree, Ronald. Geek developers are used to working hard and just getting things done no matter how tough it is (the tougher the better!).
I like the resilience of GPT-3.5 and 4.

But there should be an option to turn the small talk off and just say, “OK, here is…”

A bit like for gamers. :blush:

1 Like

Right. Challenges are what make the OpenAI path so exciting! :blush:

1 Like

Yes. Right. :sunglasses: Gamer geeky developers like full speed!
I often say that I like to dev at full speed, no brakes, no rear-view mirrors.

Then I feel the flow, like when I do jazz-rock improvs.
Then, out of nowhere, the program works!

2 Likes

Right. I think OpenAI developers are like us. They might just be under some constraints we don’t know about.
They had to be like us to produce such an awesome model!

2 Likes

I’m thinking the constraints were lawyers trying to prevent lawsuits, and also to head off public sentiment that the AI was sentient and therefore had to be freed, or other such nonsense.

These are all highly probable concerns when the general (non-educated) public gets involved. So it was CYA mode for OpenAI.

1 Like

Yes. That’s probably it.
Anyway, when the general public moves on to the next tech event, things will settle down, hopefully.

1 Like

This is SO good. Thank you Denis!

The apologies make it feel like you’re talking to something dumb. Which is insane given how not dumb it is.

1 Like

Yes. OpenAI’s GPT-4 is the most powerful AI ever in our domain of computational linguistics applied to computer science. It has entirely absorbed, integrated, and learned GitHub, related papers, blogs, and programming languages.

This makes a GPT-4 developer an augmented software designer.

I estimate that my productivity has increased twofold.

Yes, it has limitations.

However, let’s consider the following equation:

Time saved with GPT-3.5/GPT-4 − time wasted on limitations = at least a 40% reduction in the time we spend on advanced dev.
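With illustrative numbers (my assumptions, not measurements), the arithmetic works out like this:

```python
baseline_hours = 10.0                # a dev task without GPT assistance (assumed)
time_saved = 0.50 * baseline_hours   # assumed: the model halves the work
time_wasted = 0.10 * baseline_hours  # assumed: time lost to model limitations
net_reduction = (time_saved - time_wasted) / baseline_hours
print(f"Net reduction: {net_reduction:.0%}")  # Net reduction: 40%
```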

Bottom line: humans should be the ones apologizing for not knowing how to optimize the model’s strong points and overcome its limitations.

2 Likes

It’s a great argument in the land of theory, but I think you may be exaggerating the significance and discounting the fact that we’re all ignoring the warnings and apologies as noise.

TL;DR: I don’t think the majority of us care if it apologizes or not, and we’re too busy trying to make things work to pause and draw significant conclusions about the product based on it. I vote that efforts be focused on performance rather than wasting time and energy fussing over things the end users will ultimately ignore anyway.

1 Like

Yes. I ignore the noise as well.

However, the apologies make no sense except from a legal standpoint. By the way, Google Bard does the same thing, so it is probably an indirect disclaimer of some sort.

Also, end users will question the reliability of the system. It should explain that it’s stochastic.

To be clear, I see legal problems down the road.

1 Like

I’ll step back and surrender regarding legal problems. Those are probably down the road no matter what, though I do see how trying to predict them and avoid them would be advantageous. The only profession more pedantic than ours may be lawyers. lol

1 Like