Here’s my next billion-dollar idea:
I don’t have a heck of a lot of time lately, but if you have any questions or ideas I’m happy to chat here.
Edit: I think I tried to reply to a direct message but I’m on mobile. Lol.
Didn’t get a chance to watch your video yet, but I thought this would be a cool idea for learning languages or chatting with a bot to practice. It would correct your grammar and let you practice speaking at whatever level the learner is at.
If you add the prompt “Hello, my name is Ali. (Merhaba, benim adım Ali)”, the response will be in the same format, that is, with the translation attached in parentheses.
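A tiny sketch of how that bilingual format could be assembled programmatically (the `bilingual` helper name is my own, not something from the original prompt):

```python
def bilingual(line: str, translation: str) -> str:
    """Pair a line with its translation in parentheses, matching the
    format shown to the model in the prompt."""
    return f"{line} ({translation})"

# Seed the prompt with one example turn; GPT-3 tends to continue the pattern.
prompt = bilingual("Hello, my name is Ali.", "Merhaba, benim adım Ali")
print(prompt)
# Hello, my name is Ali. (Merhaba, benim adım Ali)
```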
That’s a great idea. I’ll add that as a prompt when I expand on this idea. I think I’m onto something really valuable here.
I’m going to add a few cases:
the user is distracted for various reasons (sleep deprived, drama at home, hungry, etc.). Teachers often have to engage with students who can’t fully show up for one reason or another. In this case, I’ll add compassionate listening to the toolkit.
the user is reluctant to engage because they are shy, insecure, or detached from their passions. In this case, I’ll use the reference interview to investigate what the user really wants and needs.
the user is mischievous and keeps trying to talk about unrelated things like video games and gossip, but the chatbot uses it as a teachable moment by subtly redirecting the conversation. For instance, if the user wants to gossip, the chatbot might discuss healthy communication techniques and boundaries. If the user wants to talk about video games, the chatbot might discuss the art of storytelling. The idea is to meet the user where they are.
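The three cases above could be wired up as a simple lookup that prepends state-specific guidance to the tutoring prompt. This is a hypothetical sketch of mine; detecting which state the user is in (a classifier, a keyword heuristic, or a human flag) is a separate problem not covered here:

```python
# Hypothetical mapping from a detected user state to extra instructions
# prepended to the tutoring prompt.
GUIDANCE = {
    "distracted": "The student seems distracted. Use compassionate listening "
                  "and acknowledge their situation before teaching.",
    "reluctant": "The student is shy or disengaged. Use a reference interview: "
                 "ask open questions to find what they really want and need.",
    "off_topic": "The student keeps changing the subject. Meet them where they "
                 "are and redirect their interest into a teachable moment.",
}

def build_prompt(state: str, base_prompt: str) -> str:
    """Prepend state-specific guidance (if any) to the base tutoring prompt."""
    extra = GUIDANCE.get(state, "")
    return f"{extra}\n\n{base_prompt}".strip()

print(build_prompt("off_topic", "You are a patient, encouraging tutor."))
```

An unrecognized state simply falls through to the base prompt unchanged.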
Let me know if there’s any way I can help. I’m working on developing business ideas and building working prototypes full-time.
I’ve gauged interest in the language learning idea on Reddit and it was received well, so I think there’s something there if you’re serious about commercialization.
That’s really great! I encourage you to borrow my code and join a startup. I am not presently interested in startups for a number of reasons, but primarily because I will do more good for the world by focusing all my energy on research. However, if you have any ideas, proposals, or research problems, I’m happy to make a YouTube video about them!
Hi, I’m new to OpenAI. I’m working on a medical QA project and have a problem: our fine-tuning set has 100 examples, but the last sentence in the picture exceeds the maximum length. How do I fix this problem?
Increase your token limit with the slider
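The slider in the Playground maps to the `max_tokens` parameter, which only controls the response length; if the prompt itself plus the completion exceeds the model’s context window (2048 tokens for the original GPT-3), you have to shorten the prompt. A rough sketch of trimming examples to fit a budget, using a crude character-based token estimate (my own heuristic, not the real tokenizer):

```python
def rough_token_count(text: str) -> int:
    """Very rough heuristic: ~4 characters per token for English text.
    For exact counts, use the tokenizer the model actually uses."""
    return max(1, len(text) // 4)

def trim_examples(examples: list[str], budget: int) -> list[str]:
    """Keep as many examples as fit in the token budget, dropping from the end."""
    kept, used = [], 0
    for ex in examples:
        cost = rough_token_count(ex)
        if used + cost > budget:
            break
        kept.append(ex)
        used += cost
    return kept
```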
Hi Dave, This is great! Would it be possible to include a textbook chapter in the prompt so the student can ask questions about it? I wrote an OER textbook at howargumentswork.org and want to create followup noncommercial interactive opportunities for students.
I doubt it would be necessary to include a chapter. GPT-3 was trained on hundreds of gigabytes of text.
The follow-up video is done now that fine-tuning is working. This video covers adding edge cases and adversarial behavior. It’s pretty solid. It can’t yet handle everything a real teacher would be confronted with, but it handled inappropriate sexual topics, anger, and frustration with flying colors.
Okay, I just want it to respond in ways that are consistent with the textbook’s approach to the material. I’m testing it.
Hi @daveshapautomator ,
A few months ago I was playing with a similar idea. I coded a script to convert audio (from the microphone) to text, translate it to English, send the question to GPT-3, translate the answer back to the original language, and speak it as audio again.
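The pipeline described above can be sketched as five stages. Every function below is a hypothetical placeholder of mine that only shows the data flow; in practice each would wrap a real service (a speech-to-text API, a translation API, the GPT-3 completions endpoint, and a text-to-speech engine):

```python
# Placeholder stages: each is a stand-in for a real external service.
def speech_to_text(audio: bytes) -> str:
    return audio.decode("utf-8")          # real STT call goes here

def translate(text: str, target: str) -> str:
    return text                           # real translation call goes here

def ask_gpt3(prompt: str) -> str:
    return f"Answer to: {prompt}"         # real completions call goes here

def text_to_speech(text: str, lang: str) -> bytes:
    return text.encode("utf-8")           # real TTS call goes here

def answer_in_users_language(audio: bytes, user_lang: str) -> bytes:
    question = speech_to_text(audio)                  # 1. transcribe
    question_en = translate(question, target="en")    # 2. to English
    answer_en = ask_gpt3(question_en)                 # 3. ask the model
    answer = translate(answer_en, target=user_lang)   # 4. back to user language
    return text_to_speech(answer, lang=user_lang)     # 5. speak it
```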
My prompt was engineered to answer questions to children on safe topics (it would redirect children to talk with their parents on more sensitive topics like sex, violence or religion), and using simple language.
My problem was: how can I check that the information GPT-3 produces is in fact correct? There were times when, even though the prompt focused on factual info and the temperature was not too high, the information was still a bit controversial. How would you approach that in your billion-dollar education idea?
This is a nontrivial problem! Maintaining ground truth and “knowing what you know” requires theory of mind as well as a repository of facts. This was one of my earliest experiments in NLP and GPT-3, where I tried to create an offline repository of knowledge by downloading Wikipedia and indexing it in SOLR.
However, I found that GPT-3 is pretty well versed in a lot of facts and ideas. I will have to think about how to get GPT-3 to be more reliable on facts, and how to verify its output. For instance, you can simply ask it whether a statement is true or false.
It even corrected me on the population of Toronto:
This is similar to my video on reducing confabulation.
Basically, what you do is split a task up into several parts. You can ask for facts, figures, ideas, and data. Then you can ask about its veracity. These are distinct cognitive tasks for humans, so it should be no surprise that they are separate tasks for GPT-3.