Send me your GPT problems, I'll solve them for free and make a YouTube video

Can't view
The video is private.

Hi David,
thanks again for the video experiments you share on your channel! The problem I'd like to share with you and the whole community is this:

Title
How can a chatbot built on a GPT-3 model call an external API during the dialog flow?

Context
Suppose you build a chatbot as described in your nice video: https://www.youtube.com/watch?v=ePdmv4ucmb8. A Python program follows the basic idea you live-coded there: it reads the user's sentence from the terminal, appends it to the bottom of the dialog, calls the OpenAI completion API, and appends the GPT-3 response to the bottom of the growing text (a minimal code sketch follows the prompt example below):

You are my personal assistant BOT, an expert on weather forecasts. Every day you give me advice and ideas related to the weather.

USR: blablabla
BOT: blablabla
USR: blablabla
BOT: blablabla
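
For concreteness, here is a minimal sketch of that loop using the legacy openai Python package and the Completions endpoint (the model name, stop sequence, and generation parameters are my own assumptions, not taken from the video):

import openai

openai.api_key = "sk-..."  # your API key

# Initial prompt: the assistant description, followed by the growing dialog.
prompt = (
    "You are my personal assistant BOT, an expert on weather forecasts. "
    "Every day you give me advice and ideas related to the weather.\n\n"
)

while True:
    user_input = input("USR: ")
    prompt += f"USR: {user_input}\nBOT:"
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
        stop=["USR:"],  # stop before the model writes the user's next turn
    )
    bot_reply = response["choices"][0]["text"].strip()
    print(f"BOT: {bot_reply}")
    prompt += f" {bot_reply}\n"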

Now suppose we want to make the chatbot aware of today's REAL weather forecast in a specific location/city (e.g. Genova, Italy).

Suppose you have some web service API that returns the weather forecast. Let's encapsulate that service in a Python function with the signature request_weather(city). Usage example:

request_weather('Genova, Italy')
# 'today (25/12/2022) in Genoa the weather is beautiful, there is the sun and the temperature is 20 degrees.'
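
A possible implementation of that wrapper, assuming a generic HTTP weather service (the endpoint URL and response fields below are purely hypothetical placeholders for whatever provider you use):

import requests

def request_weather(city: str) -> str:
    # Hypothetical endpoint and JSON fields; replace with your real weather provider.
    resp = requests.get("https://example.com/api/forecast", params={"q": city})
    resp.raise_for_status()
    data = resp.json()
    return (
        f"today ({data['date']}) in {city} the weather is {data['summary']} "
        f"and the temperature is {data['temperature']} degrees."
    )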

A first idea, for the dialog initialization, is to call the above function before the turn exchange starts, injecting the weather forecast statement (and any other useful DATA) into the above prompt, like this (a short code sketch follows the example):

You are my personal assistant BOT, an expert on weather forecasts. Every day you give me advice and ideas related to the weather.

Weather forecast: today (25/12/2022) in Genoa the weather is beautiful, there is the sun and the temperature is 20 degrees.

USR: blablabla
BOT: blablabla
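
In code, that initialization step could look like this (a sketch built on top of the loop above; the prompt wording is again illustrative):

weather_today = request_weather('Genova, Italy')

prompt = (
    "You are my personal assistant BOT, an expert on weather forecasts. "
    "Every day you give me advice and ideas related to the weather.\n\n"
    f"Weather forecast: {weather_today}\n\n"
)

# ...then run the same USR/BOT loop as before on top of this prompt.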

OK, but what if the user asks, in the middle of the dialog, about the weather in another location, e.g. New York?
Suppose you want to answer the user without a fake answer (GPT is able to invent anything if you let it run free), and instead reply with the real weather forecast for New York. In this case you need to call the function request_weather('New York, USA').

In general, you want to be able to call an external API from inside the dialog. How can you implement that feature?

Ideas/approaches
1- What comes to mind is to catch the user "intent" before the GPT-3 completion request, so you could implement a usual intent classifier on top of GPT-3. Maybe, but I don't like this solution because I would need to build that classifier by hand, with the usual effort (collecting and training an intent/entities phrase dataset, etc.).

2- Instruct the GPT-3 model to reply with a 'command' (with a specified, known syntax) when it matches a known intent, roughly as shown here: https://beta.openai.com/examples/default-text-to-command. The idea is to call the function request_weather when the model returns this command inside its answer (let's call it a back-command). The Python program could then catch the back-command, run the function, and append the function's return text (say, a text describing the weather) to the dialog prompt as the response to the user (see the sketch after this list). That sounds good, but I couldn't make it work.
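
A rough sketch of how the Python program could catch such a back-command in the model's output (the command syntax and regex below are assumptions; they must match whatever syntax you instruct the model to emit):

import re

# Matches e.g. request_weather('Genova') or request_weather('Genova', 'next few hours')
BACK_COMMAND = re.compile(r"request_weather\(\s*'([^']*)'(?:\s*,\s*'([^']*)')?\s*\)")

def handle_bot_output(bot_text: str) -> str:
    match = BACK_COMMAND.search(bot_text)
    if match is None:
        return bot_text  # an ordinary reply, pass it through unchanged
    city = match.group(1)
    # Run the real API call and use its result as the text shown to the user.
    # (The optional second argument, e.g. a time expression, is ignored here for simplicity.)
    return request_weather(city)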

Note
The general problem here is how to create conversational applications based on GPT-3 that can also carry out task-oriented, deterministic activities, while taking advantage of the 'partially non-deterministic' elaborations of generative systems. Practical examples? Consider a chatbot that helps customers with the usual informative answers but can also open a ticket on some help-desk system. Or consider a question-answering system that needs to retrieve some information in real time, etc.

Any idea? Any suggestion is welcome.
Thanks
Giorgio


Wait, with text-davinci-003, the second approach seems to work! I inserted some pseudocode instructions in the initial prompt. See the example below (translated from the original Italian); a sketch of how the program can catch the returned command follows it:

YOU:
You are my personal assistant, an expert on weather and forecasts. Every day you give me alerts and ideas about the weather conditions. To answer me, follow the instructions below.

DATA:
I am in Genova, Italy.

INSTRUCTIONS:
If I ask you for a weather forecast, take the attributes <where> and <when> from the previous conversation, or ask me for them if you don't have them.
Reply with the expression: request_weather('<where>', '<when>')

CONVERSATION:
YOU: hi, I'm your weather assistant. You can ask me for the forecast or any information about the weather!
ME: What can you tell me about the weather in the next few hours?

YOU: request_weather('Genova', 'next few hours')

ME: and in Palermo?

YOU: request_weather('Palermo', 'next few hours')

ME: blah blah blah

YOU: What did you want to know?
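
Tying this back to the dialog loop, the turn handling could become something like the following (a sketch; BACK_COMMAND and request_weather are the assumed helpers sketched earlier in the thread):

# Inside the dialog loop, after obtaining bot_reply from the completion call:
match = BACK_COMMAND.search(bot_reply)
if match:
    where = match.group(1)
    forecast = request_weather(where)
    # Show the real forecast and append it to the prompt instead of the raw command,
    # so the model can refer to the actual data in later turns.
    print(f"BOT: {forecast}")
    prompt += f" {forecast}\n"
else:
    print(f"BOT: {bot_reply}")
    prompt += f" {bot_reply}\n"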


Hi David. Not sure if you are interested but I’m looking for help building my YouTube channel presenting research on the microbiome. I’m a dietitian.

Hi David,

Thank you for what you’re doing for the community!
I am actively trying to create an Excel and PowerPoint add-in that uses the Codex API.
Would you be able to make a detailed, step-by-step guide on how to do this?

Thanks!!

Hi David, thank you for what you're doing for this community,

I have a problem with the API in this topic:

thanks.

Hello David,

My name is Ricardo Fernandes and I am a researcher at the Max Planck Institute for Geoanthropology. More information on our research group here: Systems Archaeology | Max Planck Institute for the Science of Human History

We are interested in a problem that has appeared previously in this thread: summarizing and querying collections of academic articles and books on a certain topic. In our case, we are interested in research questions concerning the study of the human past and of its environmental context.

A problem that we face in compiling historical information is that the study of the past is dispersed among multiple disciplines (e.g., history, archaeology, palaeoclimatology, etc.). In addition, historical research is often written in local languages. Since English is the academic lingua franca, a lot of this research never becomes mainstream. This is also a barrier for historical researchers who are non-native English speakers: their research is less visible and, in turn, they have less access to funding.

The above is the motivation for our AI research: to have a system in place that can be queried on historical questions using academic sources written in multiple languages. We have a workflow for this that involves scientific search engines, document formatting, translation, and prompt-driven text writing using an NLM (not GPT-3). However, the performance of the latter has not been the best, and we are interested in testing GPT-3.

Would you be interested in helping us? We have funding for this project and all code that it generates can be made freely available (results also to be presented in a publication). If you are interested please contact me: fernandes@shh.mpg.de

All the best,

Ricardo Fernandes

Hi, I saw your vids and they are impressive. I am looking for someone to build an MVP using GPT. Could I send you an email to describe what I am looking to do, or how can I DM you?

@daveshapautomator Hi David, I want to fine-tune GPT-3 on my business context. I have a document with my organization's context, policies, and guidelines, and I have question/answer pairs for fine-tuning. I cannot figure out from the OpenAI documentation how to point my fine-tuning data (i.e. prompts and completions) to my file. Can you please guide me here?


@lalituor the Embeddings API is what you’re looking for, not fine-tuning.

After embedding your documents you can combine search queries with GPT-3 to make the UI/UX conversational, if that's your objective.
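
For reference, a minimal sketch of that pattern with the legacy openai Python package (the models named are the ones current at the time; the chunking and prompt format are my own assumptions):

import numpy as np
import openai

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [np.array(item["embedding"]) for item in resp["data"]]

# 1. Embed your document chunks once and keep the vectors.
chunks = ["policy text chunk 1...", "policy text chunk 2..."]
chunk_vectors = embed(chunks)

# 2. At query time, embed the question, pick the most similar chunk,
#    and let GPT-3 answer from that context.
def answer(question):
    q_vec = embed([question])[0]
    sims = [q_vec @ v / (np.linalg.norm(q_vec) * np.linalg.norm(v)) for v in chunk_vectors]
    context = chunks[int(np.argmax(sims))]
    prompt = f"Answer the question using only this context:\n{context}\n\nQ: {question}\nA:"
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt, max_tokens=200)
    return resp["choices"][0]["text"].strip()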

You might find this video helpful.


Happy Cake Day!

Thanks for all your positive assistance on the forum, @wfhbrian …Good to have you here with us.


Thanks for the warm welcome :hugs: I’m happy to be here!


Creating a tool inside my web app using GPT3 and PHP (GitHub - orhanerday/open-ai: OpenAI PHP SDK : Most downloaded, forked, contributed, huge community supported, and used PHP (Laravel, Symfony, Yii, Cake PHP or any PHP framework) SDK for OpenAI GPT-3 and DALL-E. It also supports chatGPT-like streaming. (ChatGPT AI is supported))

The tool writes service-page content for local sites. When I paste a longer outline into the textarea and ask it to write the full content in a single prompt, I always get a 500 server error.

But when the outline is short, I get a response back.

Sometimes it doesn't return a good amount of content for me. What is the best way to approach this?

Is there any option other than the Embeddings API for document question answering? For example, could I upload all my documents to OpenAI as my personal database for search, so that when I ask a question OpenAI answers from the saved documents? Is that possible?


Hello David, I like your channel.
Problem: I can’t access Bing Chat, GPT and ChatGPT from China.
Insufficient solutions:
VPN - not enough; OpenAI also asks for a phone number, and China/Hong Kong numbers are rejected.
Opera, Snapchat - they said they embedded GPT for free, but it doesn't seem so, looking from China…
Perplexity and You.com work; perhaps something similar would satisfy me for the short term.
Thank you for reading and, maybe, addressing this problem.

Hi Dave, can you please help get this kick-started? Thank you in advance.

  1. Project Title: AI Chatbot for Top 10 Popular APIs
  2. Project Summary: The proposed project aims to create an AI chatbot that can assist users in operating the top 10 most popular APIs. The chatbot will be designed to communicate with users using natural language processing and to operate the APIs automatically to fulfill the user’s requirements. The project will be significant in democratizing the use of APIs by enabling even non-technical users to benefit from their functionality.
  3. Objectives: a. To develop an AI chatbot that can communicate with users using natural language processing. b. To integrate the top 10 most popular APIs with the chatbot. c. To develop a user-friendly interface for the chatbot that can assist users in operating the APIs automatically. d. To test and evaluate the chatbot’s performance in fulfilling user requirements.
  4. Scope of Work: a. Research and identify the appropriate AI and natural language processing technologies to use in the chatbot. b. Develop a design and architecture for the chatbot that can integrate with the top 10 most popular APIs. c. Develop and test the chatbot’s user interface for ease of use and functionality. d. Integrate the chatbot with the top 10 most popular APIs and test its performance in fulfilling user requirements. e. Evaluate the chatbot’s performance using user feedback and metrics such as accuracy and response time.
  5. Deliverables: a. AI chatbot software integrated with the top 10 most popular APIs. b. User-friendly chatbot interface. c. User documentation for the chatbot. d. Test results and performance evaluation of the chatbot.
  6. Budget: a. Personnel: [list personnel required, e.g., developers, project manager, etc.] b. Technology: [list technology required, e.g., cloud services, APIs, software licenses, etc.] c. Other expenses: [list any other expenses, e.g., travel, equipment, etc.]
  7. Timeline: [provide a project timeline, including milestones and deliverables]
  8. Conclusion: The proposed project will create an AI chatbot that can assist users in operating the top 10 most popular APIs, democratizing their use by enabling non-technical users to benefit from their functionality. The project will be significant in promoting digital inclusivity and the wider use of APIs for various applications.

Thank you for your help and support,

It was amazing. I was looking for an opportunity to share my thoughts with you, but I will wait until you come back again with the same aim of solving our problems; then I will share mine with you.

Please share your trilogy with us, I would love to take a look at it.
Best regards.

Hello,
First, let me tell you that you make great educational videos that help me a lot in my work.
Can you show, step by step, how to make an AI chatbot with our own questions and answers, put them in a vector database, and then query them with Python code? And how can we save each conversation with the chatbot for later reference, along with all the questions the chatbot couldn't answer?