How is it that GPT-3 produces different answers for the same question asked on separate instances?
Is the model still changing and learning?
Or, if it was trained once and only once, are there random choice operations in its code?
Otherwise, how could a computer program output different results from the same input, without some random elements somewhere within?
What was GPT-3 trained on?
Are there any systematic studies of what GPT-3 is good at and what it is not good at; i.e., guidelines for applications where it could be effective? For example, it appears to be very good at writing code. I have found it to be good at answering simple factual questions, like facts about dates, if it knows that specific fact. But I don't know how to anticipate which facts it does and doesn't know. The docs say that giving an example plus a beginning to complete can be successful. What else is known about what GPT-3 can and cannot do?
I think GPT-3 was trained on Wikipedia, among other sources (a filtered web crawl and some book corpora, if I remember correctly), but I'm not sure of the exact mix. What's for sure is that it was trained on a huge amount of data.
When you say that it's good at writing code, are you referring to Codex? Codex is a fine-tuned version of GPT-3 trained on a large amount of GitHub code. If you're not referring to Codex: I've tried getting GPT-3 to write basic code, and a lot of the time it impressed me, but other times it did a poor job. However, the more context/examples I gave it, the better it did.
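To make the "more context/examples" point concrete, here's a minimal sketch of how I'd assemble a few-shot prompt before sending it to the API. The task, the examples, and the Q:/A: format are all made up for illustration; any consistent pattern works.

```python
# Sketch: assembling a few-shot prompt from worked examples.
# The examples and the Q:/A: layout are invented for illustration.
def build_prompt(examples, query):
    """Join (question, answer) examples and a new query into one prompt.

    The model is nudged to continue the pattern after the final 'A:'.
    """
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {query}\nA:")
    return "\n\n".join(parts)

examples = [
    ("Reverse the string 'abc' in Python", "'abc'[::-1]"),
    ("Get the length of a list xs", "len(xs)"),
]
print(build_prompt(examples, "Sum the numbers in a list xs"))
```

The prompt string this builds would then go into a completion request; in my experience two or three examples like this already help a lot.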
I find that GPT-3 is good at basically anything language-related as long as you give it enough context. And I think its potential is even greater considering that you can fine-tune it: with enough work and training data, you can make basically any topic its strength.
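For fine-tuning, the training data is just a file of prompt/completion pairs, one JSON object per line (JSONL). A small sketch of preparing such a file, with invented example pairs:

```python
import json

# Sketch: writing fine-tuning data as JSONL, one prompt/completion
# pair per line. The translation examples here are invented.
training_examples = [
    {"prompt": "Translate to French: Hello ->", "completion": " Bonjour"},
    {"prompt": "Translate to French: Thank you ->", "completion": " Merci"},
]

with open("train.jsonl", "w") as f:
    for ex in training_examples:
        f.write(json.dumps(ex) + "\n")
```

You'd then upload a file like this when creating the fine-tune; the more pairs you provide, the more the topic becomes the model's strength.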
Also, different GPT-3 engines are better at different things; some are better at writing than others, depending on what you want to do. I recommend experimenting with the engines and settings, as those can make a real difference in the quality of the responses.
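One of those settings, temperature, also answers your first question about randomness: the model itself is fixed after training, but the next token is sampled from the model's output probabilities rather than always taking the most likely one, so the same prompt can give different completions. A minimal sketch of temperature sampling over made-up token scores:

```python
import math
import random

def sample_with_temperature(logits, temperature=0.7):
    """Softmax over scores scaled by temperature, then sample an index.

    Lower temperature sharpens the distribution toward the top token;
    higher temperature flattens it, increasing variety.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

# Made-up scores for three candidate tokens. Repeated calls can
# return different indices, which is why identical prompts can
# produce different outputs even from a fixed, deterministic model.
logits = [2.0, 1.5, 0.5]
print(sample_with_temperature(logits))
```

Setting temperature to 0 in the API makes the output (near-)deterministic, which is the knob to turn if you want repeatable answers.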
Those are just some things I noticed when playing around with GPT-3.
Hopefully I answered some of your questions.