Sounds great! Helping to review papers can save many hours of work. I’m looking forward to seeing how far you can go using GPT-3 to approach the enormous amount of information that is available and humanly impossible to review as a whole. Another idea that occurs to me would be to use these abstracts to create explanatory diagrams or drawings of the molecular signaling pathways using DALL-E 2, though maybe we will have to wait until DALL-E 3 for that… It would be cool to try if you eventually have access to this tool.
The video you made on getting GPT3 to critique arguments really got me thinking, and now I’m focused on a new direction for this: a program where the user inputs a short argument and then chooses whether to generate a list of main points, possible objections, strengths of the argument, description of its tone, questions about its meaning, and questions for further discussion and exploration. It seems like a prompt for each of those wouldn’t be hard to write–I played around with tone and discussion questions already, and your video shows how well it summarized main points in a list.
This would be incredibly helpful as a critical thinking tool for anyone reading something and wanting to engage with it: the general public, academics, students. It wouldn’t be creating an end product, so perhaps that would make it more likely to be used to stimulate critical thinking rather than substituting for it.
Since I’m not a software developer (though I’m starting to learn Python), I don’t know how to go about trying this out! Any suggestions?
Programming does take a while, so I’m not sure what advice to give. You can find folks to help on places like Fiverr and Upwork.
As far as formatting, it sounds like you’re imagining an automated Kialo. I recommend looking that platform up; it’s built for constructing and organizing formal arguments.
Thanks! I had forgotten about Kialo. Just took a look–that’s a great tool, but I wasn’t quite so debate-focused. I’m thinking of something that looks like an AI summarizer (e.g. Quillbot) but instead of just a summary option it has “list of points,” “discussion questions,” “possible implications,” “argument strengths” and “argument weaknesses” options as well.
This might not be the kind of thing you are looking for, but I wonder if this is something you could address: how do you get GPT-3 to error-correct? Here’s my interaction with GPT-3.
Write a version of the following chorus for a professor's syllabus. Notice that it has an alternating rhyme scheme. This means the last word in the first line and the last word in the third line should rhyme. Likewise, the last word in the fifth and seventh lines should also rhyme. Make sure that your version has the same rhyme pattern and meter as the following example:
If you want to call me "baby"
Just go ahead now
And if you like to tell me "maybe"
Just go ahead now
And if you wanna buy me flowers
Just go ahead now
And if you would like to talk for hours
Just go ahead now.
-----
If you have questions for me
Just go ahead now
And if you'd like to talk more daily
Just go ahead now
And if you need more clarity
Just go ahead now
Before our time gets too cavalier
Just go ahead now.
List 10 words that rhyme with clarity.
A: charity, clarity, disparity, filigree, gallery, guarantee, longevity, malady, marauder
Redo the assignment and incorporate one of those words as the rhyme for the next to last line.
If you have questions for me
Just go ahead now
And if you'd like to talk more daily
Just go ahead now
And if you need more clarity
Just go ahead now
Before our time gets too cavalier
Just go ahead now.
You can see that I tried to help it, but it did the same thing. I had the temperature set at 1 and everything else at the playground defaults.
I think it has difficulty with word rules like that. See this other thread where we tried getting it to find a word without certain letters.
Hey everyone, thanks for sending me ideas and data. I’m a Renaissance Soul, which means I cycle through projects and phases and so now I’m cycling out of research, YouTube, and GPT-3 for a while. Instead, I’ll be working on my books for the next few months.
I have a trilogy I’m working on, which are written but in revisions. I also have a philosophical work that is also written, and is with beta readers. Then there’s the proverbs book I wrote with GPT-3 in the final stages of revision. Lastly, I have a book about artificial cognition that is in early stages (I’ll be incorporating much of what I’ve learned from this recent surge of research).
Many of you have provided fascinating insights and interesting problems, all of which have been invaluable fuel for thought and new ideas. My growing following on YouTube has been equally valuable. I hope to give back to the community in the form of my YouTube videos, GitHub repos, and books. Typically, I go through 3 or 4 cycles per year, so they average about 3 or 4 months. Accordingly, I should be back to research mode sometime in the autumn.
Thank you all once again. Looking forward to the next phase of the journey. Take care.
Thank you so much for the high-quality content. Best of luck in all your future endeavors, and keep us updated on the status of your books.
Will you still be checking in on this forum or are you doing a hard checkout for now?
I’m having trouble engineering the correct prompt for Codex to create a game of Hangman in JavaScript.
Thanks for sharing this update–it seems like a great model for a Renaissance person to work in cycles. I so appreciate your ideas, teaching, and desire to collaborate. I learned a lot from the way you approached the argument analysis with GPT-3. Looking forward to learning more from your past videos while you’re away.
Best of luck with the fiction, philosophy, and proverbs. I hope your next projects are enjoyable.
Take care, Anna
Dave-
We are following your advice on using embeddings (not fine-tuning) for our chatbot. We have a 200-page document, and we want to use Babbage (2,048-token max). Do you have any video (I could not find one on your YouTube) on how to divide the file into 100 documents, each two pages long (under 2,048 tokens), and create 100 embeddings?
I do have to confess that recommendation was based on intuition and some experience with chatbots and cognitive architecture.
The best way I’ve found to break up a document is Python’s textwrap module (`textwrap.wrap`). It lets you rapidly break any document into roughly equal-sized chunks, which means you can get the most mileage out of your embeddings.
You can see an example of word wrap being used in my “compress anything” video.
Hope that helps.
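A minimal sketch of that chunk-and-embed flow, assuming Python’s standard-library textwrap. The embedding call is shown commented out since it needs an API key; the file name and engine name are placeholders, not from the original posts.

```python
import textwrap

def chunk_document(text, width=2000):
    """Split a long document into chunks of at most `width` characters.

    textwrap.wrap breaks on whitespace, so words are never split and
    each chunk stays readable enough to embed on its own.
    """
    return textwrap.wrap(text, width=width)

# Usage sketch (embedding call commented out; requires the openai package
# and an API key, and the engine name below is an assumption):
# import openai
# with open("document.txt") as f:
#     chunks = chunk_document(f.read())
# embeddings = [
#     openai.Embedding.create(
#         input=chunk, engine="text-similarity-babbage-001"
#     )["data"][0]["embedding"]
#     for chunk in chunks
# ]
```

With ~2,000-character chunks you stay comfortably under Babbage’s 2,048-token limit for typical English text, though you may want a margin depending on your tokenizer.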
How do you fine-tune Codex?
You don’t; it’s not a supported feature.
I want to generate YouTube video ideas with the help of OpenAI and the YouTube Data API v3 to get genuine next-video ideas. I’m using the Flask framework. Please make a video tutorial on this.
I don’t know if the original message is still valid, but I asked something which can be the actual title of a tutorial.
I’ve seen that others have struggled with the fine-tuned models.
Can’t view.
video is private
Hi David,
thanks again for the video experiments you share on your channel! The problem I’d like to submit to you and the community is this:
Title
How can a chatbot built on a GPT-3 model call an external API during the dialog flow?
Context
Suppose you build a chatbot as described in your nice video, Python & GPT-3 for Absolute Beginners #2 - Your first chatbot - YouTube. A Python program follows the basic idea you live-coded in that video: it gets the user’s input sentence from the terminal, appends it to the bottom of the dialog, calls the OpenAI completion API, and appends the GPT-3 response to the bottom of the growing text:
You are my personal assistant BOT, an expert on weather forecasts. Every day you give me advice and ideas related to the weather.
USR: blablabla
BOT: blablabla
USR: blablabla
BOT: blablabla
Now suppose we want to make the chatbot aware of today’s REAL weather forecast in a specific location/city (e.g. Genoa, Italy).
Suppose you have a web service API that returns the weather forecast, and let’s encapsulate that service in a Python function with the signature request_weather(city). Usage example:
request_weather('Genova, Italy')
# 'today (25/12/2022) in Genoa the weather is beautiful, there is the sun and the temperature is 20 degrees.'
A first idea, for dialog initialization, is to call the above function before the chatbot turn exchange begins, injecting the weather forecast statement (and any other useful DATA) into the prompt, like this:
You are my personal assistant BOT, an expert on weather forecasts. Every day you give me advice and ideas related to the weather.
Weather forecast: today (25/12/2022) in Genoa the weather is beautiful, there is the sun and the temperature is 20 degrees.
USR: blablabla
BOT: blablabla
OK, but what if the user asks, in the middle of the dialog, about the weather in another location, e.g. New York?
Suppose you want to answer the user not with a fake answer (GPT-3 is able to invent anything if you leave it free) but with the real weather forecast for New York. In this case you need to call the function request_weather('New York, USA').
In general, you want to call an external API inside the dialog. How do you implement that feature?
Ideas/approaches
1 - What comes to mind is to catch the user’s “intent” before the GPT-3 completion request, i.e. implement a usual intent classifier on top of GPT-3. Maybe, but I don’t like this solution because I’d need to build that classifier by hand, with the usual effort (training on intent/entity phrase datasets, etc.).
2 - Instruct the GPT-3 model to reply with a “command” (in a specified, known syntax) when it matches a known intent, as roughly shown here: OpenAI API. The idea is to call the function request_weather when the model returns this command inside its answer (let’s call it a back-command). The Python program could then catch the back-command, run the function, and append its return text (say, a description of the weather) to the dialog prompt as the response to the user. That sounds good, but I couldn’t make it work.
Note
The general problem here is how to create conversational applications based on GPT-3 that can also accomplish task-oriented, deterministic activities, while taking advantage of the partially non-deterministic output of generative systems. Practical examples? Consider a chatbot that helps customers with the usual informative answers but can also open a ticket on a help-desk system, or a question-answering system that needs to retrieve some information in real time, etc.
Any ideas? Any suggestion is welcome.
Thanks
Giorgio
Wait, with text-davinci-003, the second approach seems to work! I inserted some pseudocode instructions in the initial prompt. See the example below (translated from Italian):
YOU:
You are my personal assistant, an expert on weather and forecasts. Every day you give me alerts and ideas about the weather conditions. To answer me, follow the instructions below.
DATA:
I am in Genoa, Italy.
INSTRUCTIONS:
If I ask you for the weather forecast, take the attributes <where> and <when> from the previous conversation, or ask me for them if you don't have them.
Answer with the expression: request_weather('<where>', '<when>')
CONVERSATION:
YOU: hi, I'm your weather assistant. You can ask me for the forecast or any information about the weather!
ME: What can you tell me about the weather in the next few hours?
YOU: request_weather('Genova', 'next few hours')
ME: and in Palermo?
YOU: request_weather('Palermo', 'next few hours')
ME: tiritera tiritera
YOU: What did you want to know?
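A minimal Python sketch of this back-command loop, under a few stated assumptions: request_weather here is a hard-coded stand-in for a real weather API, the regex matches the exact command syntax used in the example prompt, and the GPT-3 completion call itself is left out.

```python
import re

# Matches the back-command syntax the prompt instructs the model to emit,
# e.g. request_weather('Palermo', 'next few hours')
BACK_COMMAND = re.compile(r"request_weather\('([^']*)',\s*'([^']*)'\)")

def request_weather(where, when):
    # Placeholder: call a real weather service here
    return f"In {where}, {when}: sunny, 20 degrees."

def handle_reply(reply, prompt):
    """If the model's reply is a back-command, run the real function and
    append its result to the prompt as the bot's answer; otherwise append
    the reply unchanged. Returns (text shown to user, updated prompt)."""
    match = BACK_COMMAND.search(reply)
    if match:
        where, when = match.groups()
        result = request_weather(where, when)
        return result, prompt + f"\nBOT: {result}"
    return reply, prompt + f"\nBOT: {reply}"
```

The key design point is that the user never sees the raw command: the program intercepts it, substitutes the function's return text, and that text becomes part of the growing dialog so the model can refer back to it in later turns.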
Hi David. Not sure if you are interested but I’m looking for help building my YouTube channel presenting research on the microbiome. I’m a dietitian.
Hi David,
Thank you for what you’re doing for the community!
I am actively trying to create an Excel and PowerPoint add-in that uses the Codex API.
Would you be able to make a detailed step-by-step guide of how to do this?
Thanks!!