The ability to write code was discovered, not designed.
In-Context Learning was discovered, not designed.
It is entirely possible that the models we are using can do things we do not know about, even GPT-3.
It is entirely possible that someone, somewhere, will type something in and the model will do something so unexpected that the API has to be turned off until there is a mitigation strategy.
I'm not a doomer; I'm putting all my eggs in OpenAI's basket. I'm a huge fan, and I believe my products will ship and be amazing and reliable.
The problem is how much you can control it. What I've found is that GPT can be a constructive or a destructive tool depending on how you use it.
The problem is not the tools, but the people and the methods.
Like using a pen to kill: John Wick is far from a journalist.
For sure. And because we don't know how they work (see: mechanistic interpretability), we can't truly know how they will behave.
Even if you are the one who created it, you will never know what it will become.
Knowing the science well does not mean knowing how to solve the problem. I come from a management background, a field that often fails to look at problems from the perspective of the people inside it. Oftentimes, taking the perspective of the consumer or end user is necessary to find the important problems.
Here, for example, I see outsiders asking for information, such as psychologists or people who study financial instruments, and often receiving responses that are prohibited by the usage policies. People forget that in some cases these are things that are already seen and happening in society, such as models being used to predict various investments. Even though I don't have much knowledge of AI, for financial management I need both quantitative and qualitative data for analysis, and it may be necessary to convert it into a form that is usable in other ways.
The field of psychology is labeled as inappropriate. But few recognize that psychology as a science has only been around for a few decades, is developing quickly, and still works well with ChatGPT. It's marketing.
We should follow the rules, but if we always reject questions simply because of the rules we are told, and cannot understand questions from outsiders, are we humans or AI?
Thinking critically about the rules doesn't mean you have to break them. It may be better to take possible problems into consideration when looking for solutions.
I know that human beings have limitations. That is why we need diversity to help answer questions, find solutions, and develop in the areas where I am lacking. But in the end, I still haven't found anyone suitable: someone who can take the problems described in these messages and turn them into code.
And even though I share OpenAI's attitude, I don't have the qualifications to apply for the job. But I will still do the best I can.
Kind of lost you at the end there but, yeah, this is nascent stuff. Tread with care.