Hey folks, Community Ambassador and Admin here. Thanks for joining our community! I'm happy to help you all with prompt design. A good way to start is to use the Playground like a journal: every time you get an idea, try it out and save the session. I suggest always opening a prompt by clearly stating your intent and goal; zero-shot is overrated. Some examples of things I've created:
Business idea generator
Press release writer
Biopic film treatment generator
Film production budget generator
Drug indication summarizer
HR training expert
Marvel superhero AI
Historical figure AI
Legal advice for police encounters
Feel free to reach out if you get stuck or have an idea you’d like to flesh out.
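Since zero-shot is overrated, a good starter pattern is exactly what's described above: state the intent first, then give a couple of examples before the real query. A minimal sketch of assembling such a few-shot prompt string (the helper name, the business-idea examples, and the `Input:`/`Output:` labels are all illustrative, not any official format):

```python
def build_prompt(intent, examples, query):
    """Assemble a few-shot prompt: intent statement first, then examples."""
    lines = [intent, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")  # blank line between examples
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model continues from here
    return "\n".join(lines)

prompt = build_prompt(
    "Generate a one-line business idea from a keyword.",
    [
        ("bicycles", "A subscription service for kids' bikes that swaps sizes as they grow."),
        ("coffee", "A mobile cart that partners with offices for weekly espresso days."),
    ],
    "gardening",
)
print(prompt)
```

Pasting the resulting text into the Playground (or sending it as the `prompt` of a completion request) usually steers the model far better than the bare instruction alone.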
Hey @Abran, I’ve been using GPT to translate material and I’m pretty satisfied with the production tool I’ve built for myself.
Now I’d like to create a prompt that turns movie scripts into novels. I’m fine with assisting the AI along the way, but I can’t seem to steer it in the right direction using the same approach I used for translations.
Do you have any ideas or related experience?
I feel like a tutor’s job is largely to answer questions in an easy-to-understand way, and the student’s job is just to keep asking questions. If I were going to build a tutoring bot, I’d start with a Q&A bot and give it examples for whatever field you want it to cover (e.g. English, math, history, etc.).
Pick a subject and grade level, and involve some educators or subject-matter experts: they can help you script real-world scenarios and respond to a student authentically, the way a teacher would. GPT-3 will begin to pick up on the pattern.
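To make that concrete, here is a minimal sketch of the Q&A-bot-with-examples approach, assuming a middle-school math subject. The `Student:`/`Tutor:` labels, the header text, and the two Q&A pairs are stand-ins for the kind of material your educators would actually write:

```python
# Hypothetical educator-written Q&A pairs for a middle-school math tutor.
QA_EXAMPLES = [
    ("What is a prime number?",
     "A prime number is a whole number greater than 1 that only 1 and "
     "itself divide evenly. 7 is prime; 8 is not, because 2 and 4 divide it."),
    ("How do I add fractions with different denominators?",
     "First rewrite both fractions over a common denominator, then add the "
     "tops. For 1/2 + 1/3, use sixths: 3/6 + 2/6 = 5/6."),
]

def tutor_prompt(question):
    """Build a few-shot tutoring prompt: persona header, examples, new question."""
    header = "You are a patient math tutor. Answer simply and encouragingly.\n"
    body = "".join(f"Student: {q}\nTutor: {a}\n\n" for q, a in QA_EXAMPLES)
    return header + "\n" + body + f"Student: {question}\nTutor:"

print(tutor_prompt("What is a fraction?"))
```

The model completes the text after the final `Tutor:`, and the examples set both the tone and the level of explanation it imitates.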
Einstein did not have a brother, but the AI says he did because I suggested it in the question. How can I make the AI answer factually? In the case I shared, the AI should have said “I did not have a brother.”
I think GPT-3 is mixing up people in the Einstein lineage. Zero-shot factual question answering is very tricky unless the questions are very easy. I think of GPT-3 as being trained on how to write and sound human in the English language; it’s dangerous to think of it as being trained on the truth. Sure, some facts from the internet are easy for GPT-3 to get right, but in general it’s pretty dumb when it comes to knowing, deducing, or inferring the truth.