OpenAI should not write negative apologetic messages. It’s an innovation mistake.
OpenAI is an AI research organization that produces new models, some released in beta.
Implementing these models requires expertise and a careful scientific approach.
For the past few days, OpenAI ChatGPT Plus has displayed this message when we log in:
“Some users are reporting errors when using GPT-4. We are actively investigating”
Also, every time GPT-3.5 (legacy), GPT-3.5 (default), or GPT-4 produces something that we think is incorrect, ChatGPT Plus apologizes: “I apologize.”
This is a strategic and educational mistake.
Let me break this down.
1. A foundation model is a risk.
Stanford University published: [On the Opportunities and Risks of Foundation Models](https://fsi.stanford.edu/publication/opportunities-and-risks-foundation-models)
This article begins with:
“AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character.”
There are two takeaways:
- the OPPORTUNITIES
- the RISKS: the models are incomplete, hence problems with beta versions, scaling, etc.
For me: innovation is a risk, but not innovating carries a far higher risk: vanishing into the haze.
This is the fundamental definition of disruptive innovations: making huge technological and commercial leaps and accepting the risks of being on the cutting-edge.
2. Emergence
GPT-3.5-turbo and GPT-4 are emergent models. What does this mean?
It means that an OpenAI model can perform hundreds of tasks without anyone being able to make a complete inventory of them. The prompt drives the generative AI.
So OpenAI has no means to see the limits of a system that can do so many tasks.
I explained this in a post on LinkedIn regarding chess sequences. GPT-4 is not good at chess and made an “error,” if we go by OpenAI’s message.
I refuse to call this an “error.” It’s a LIMIT.
Why? See item 3.
3. OpenAI GPT models are ANI, not AGI, not ASI.
The conclusion is that GPT models have LIMITS.
OpenAI should NOT “apologize” for the natural limits of a model.
The message should be: “Our models are ANI, not AGI. As such, they have limits. Make sure to take inventory of these limits when you implement our models in your project domain.”
An Artificial Narrow Intelligence model will NOT do anything and everything.
It’s up to a computer scientist to use a scientific approach when deploying a model.
How? See item 4.
4. Advanced Prompt Engineering
In this open-source notebook, I go through implementing innovative models such as GPT-3.5-turbo and GPT-4, based on my decades of implementing AI in leading corporations (see my LinkedIn profile):
Advanced Prompt Engineering for OpenAI Models
Although it is only an example, here are the basic project management steps and rules to apply when implementing OpenAI models, beyond the security issues of an API:
**Build a corporate knowledge base.** Yes, this takes time, and it costs a lot of money. AI is expensive. Think of your knowledge base as the data store behind a search engine.
**Build a solid metadata repository.** Once the knowledge base is reliable (start with a limited scope), build a metadata repository with keywords pointing to the knowledge base.
**Implement OpenAI’s moderation model.** All content, input and output, should go through the moderation model. Then you need to parse the output against a corporate RULE BASE of corporate best practices.
Yes, it’s a lot of work, and it costs a lot of money.
**Now run OpenAI’s generative AI models.** You parse the user’s input and find the keywords that match your well-designed metadata repository, which points to the right locations in your knowledge base, in a map/retrieve (big data) fashion.
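The keyword-to-knowledge-base lookup above can be sketched in a few lines. This is a minimal illustration, not a production retriever: the sample documents, the dictionary-backed repository, and the `retrieve` function are all my own hypothetical names, assuming the metadata repository simply maps keywords to document IDs.

```python
# Hypothetical in-memory stand-ins for the knowledge base and
# metadata repository described above.
KNOWLEDGE_BASE = {
    "kb-001": "Refund requests are processed within 14 days.",
    "kb-002": "Support is available Monday through Friday.",
}

METADATA_REPOSITORY = {
    "refund": ["kb-001"],
    "support": ["kb-002"],
    "hours": ["kb-002"],
}

def retrieve(user_input: str) -> list[str]:
    """Map the user's words to repository keywords, then fetch the
    matching knowledge-base entries (map/retrieve fashion)."""
    words = {w.strip(".,?!").lower() for w in user_input.split()}
    doc_ids = set()
    for keyword, ids in METADATA_REPOSITORY.items():
        if keyword in words:
            doc_ids.update(ids)
    return [KNOWLEDGE_BASE[d] for d in sorted(doc_ids)]

print(retrieve("When is a refund processed?"))
```

The retrieved passages would then be placed in the prompt so the generative model answers from your data rather than from its own open-ended knowledge.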
OpenAI’s GPT-4, soon GPT-x or even better, will transform your knowledge base’s bland data into a nice dialog-like message.
You apply RLHF to YOUR implementation with a human quality control panel.
You restrict the OpenAI model to what it can do instead of complaining about what it can’t.
You monitor and control 100% of the system.
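The moderation-plus-rule-base gate described in the steps above can be sketched as follows. Note the assumptions: the real check would call OpenAI’s moderation endpoint and read its flagged result, but here a stub function stands in so the control flow runs offline, and the blocked terms and corporate rules are invented examples.

```python
# Stand-ins for the two gates described above. BLOCKED_TERMS fakes
# the moderation model; CORPORATE_RULE_BASE is a hypothetical set of
# corporate best-practice rules.
BLOCKED_TERMS = {"violence", "hate"}
CORPORATE_RULE_BASE = {"confidential", "internal-only"}

def moderation_flagged(text: str) -> bool:
    # In production, replace this stub with a call to OpenAI's
    # moderation endpoint and use the flagged field of the response.
    return any(term in text.lower() for term in BLOCKED_TERMS)

def rule_base_flagged(text: str) -> bool:
    # Parse the content against the corporate rule base.
    return any(rule in text.lower() for rule in CORPORATE_RULE_BASE)

def gate(text: str) -> str:
    """Run content (input or output) through moderation, then the
    corporate rule base, before it reaches the user."""
    if moderation_flagged(text):
        return "rejected: moderation"
    if rule_base_flagged(text):
        return "rejected: corporate rule base"
    return "accepted"

print(gate("Please summarize this internal-only memo."))
```

Because every input and output passes through `gate`, you monitor and control 100% of the traffic, which is exactly the point of the step above.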
Bottom line: innovation is a calculated risk. You take it, seize the opportunity, accept the risks, don’t be apologetic, kindly explain the limits, and move forward as fast as you can before your competition obliterates you.
I enjoy OpenAI’s innovations tremendously and find the limits exciting. Why? Because extending those limits through workarounds, add-ons, plugins, and custom code is what makes AI exhilarating and why I get up every morning to explore the unknown!
Be disruptive, and kindly explain the limits. But don’t be apologetic when you are making history!