How do I get ChatGPT to give recommendations based on a database?

Hello guys, with the ChatGPT API, is there any way to instruct ChatGPT to search from a certain database?

I mean, I want to use ChatGPT on my restaurant website, but I want it to only answer based on my data.

Is this possible?

Thanks in advance.

7 Likes

I am looking for a similar answer. I only want it to provide answers based on what I give it.

4 Likes

Yes, that’s it. I already used the “system” message in the prompt, but that takes too many tokens.

2 Likes

One relatively easy way to solve this could be to get a Codex model to transform the user request into a query for your database (e.g. SQL) by giving it the pseudo schema of your database, then query the database and return the results to ChatGPT as the context for the user request (this is basically how Bing Chat works as well):

For example:

  1. User → App: “Does the onion soup have mushrooms in it?”
  2. App → OpenAI API (using codex model completion): “Given this database schema: [insert the structure of your tables], write me a SQL query for this question: [insert user question]”.
  3. OpenAI API → App: “Here is your SELECT statement: SELECT x, y, z FROM table1 JOIN table2 ON…” etc.
  4. App → Restaurant Database: [execute SQL and retrieve results]
  5. App → OpenAI API (using text model completion this time): “Act as the manager of a restaurant, who wants to answer user queries about the restaurant and only about the restaurant. Given the following user question and the related search results, respond to the user using only the results as context: User question: ‘Does the onion soup have mushrooms in it?’ - Search Results: [insert the results table of what the codex model gave you]”
  6. OpenAI API → App: “Yes, our onion soup does contain traces of mushrooms. Do you not like them? Would you like me to recommend alternative dishes without mushrooms?”
    … and so on.

This is simplistic and you’d need to make sure you construct and tweak the right prompts, add error handling, and maybe also use the moderation endpoint that OpenAI offers.
But roughly, this is how I’d do it; see the sketch below.
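
An untested sketch of the whole loop in Python, assuming the pre-1.0 openai library, a local SQLite database and a made-up schema (adapt the model names, schema and prompts to your setup):

    import openai
    import sqlite3

    openai.api_key = "YOUR_API_KEY"

    # Made-up pseudo schema - replace with the structure of your own tables.
    SCHEMA = """
    dishes(id, name, description)
    ingredients(id, name)
    dish_ingredients(dish_id, ingredient_id)
    """

    def question_to_sql(question):
        # Step 2: ask a code model to translate the question into SQL.
        prompt = (
            f"Given this database schema:\n{SCHEMA}\n"
            f"Write a single SQL query for this question: {question}\nSQL:"
        )
        response = openai.Completion.create(
            model="code-davinci-002", prompt=prompt, max_tokens=150, temperature=0
        )
        return response["choices"][0]["text"].strip()

    def answer_from_results(question, rows):
        # Step 5: have a text model answer using only the retrieved rows.
        prompt = (
            "Act as the manager of a restaurant. Answer the user's question "
            "using only the search results below as context.\n"
            f"User question: {question}\nSearch results: {rows}\nAnswer:"
        )
        response = openai.Completion.create(
            model="text-davinci-003", prompt=prompt, max_tokens=200, temperature=0.3
        )
        return response["choices"][0]["text"].strip()

    question = "Does the onion soup have mushrooms in it?"
    sql = question_to_sql(question)  # validate this before running it!
    with sqlite3.connect("restaurant.db") as conn:  # step 4
        rows = conn.execute(sql).fetchall()
    print(answer_from_results(question, rows))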

13 Likes

Thanks for your amazing answer, I will take a look at this and try to get it working.

1 Like

That is great. How could step 2 be made permanent, so that it does not have to be repeated on each call?

1 Like

I’d be terrified to run Codex-generated queries on a production server without verification, but that’s just me.

1 Like

You can probably cache the returned SQL query, e.g. in Redis, keyed on the user question.
But the problem you’ll have is that the same question won’t be repeated all that often (and when it is, it may be phrased differently, and it’s then hard to tell if you still need the same SQL statement).

So while you may save the occasional API call this way, it’s probably rare and you still make actual API calls most of the time.

But at least in the case of a restaurant, you won’t have that many people interacting with your chatbot, so the volumes will be small and affordable.

For much larger use cases, you could perhaps work on extracting some sort of normalised version of every user request, to make it more likely to match a cached SQL query. That may also decrease your API calls a bit, but it’s trickier and won’t be foolproof the way exact-match caching is in the non-ML world.
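
A rough sketch of this with redis-py (untested; `generate_sql` stands for whatever function wraps your Codex call):

    import hashlib
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def cached_sql_for(question, generate_sql):
        # Trivially normalise (lowercase, strip), then hash for the cache key.
        key = "sql:" + hashlib.sha256(question.strip().lower().encode()).hexdigest()
        cached = r.get(key)
        if cached is not None:
            return cached.decode()
        sql = generate_sql(question)      # the API call you're trying to avoid
        r.set(key, sql, ex=60 * 60 * 24)  # expire the cached query after a day
        return sql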

2 Likes

To offer another idea that’s probably harder to build but more token-efficient and safer than generating SQL: you can get embeddings for your menu items and the services you offer, store them in a database, then chunk up the user input and get embeddings for that. Using cosine distance, you get the k nearest items/services from your menu embeddings and include those in your system prompt.
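
Something like this rough sketch, assuming the pre-1.0 openai library and an in-memory store (the menu lines are made up; in practice you’d persist the vectors somewhere):

    import numpy as np
    import openai

    openai.api_key = "YOUR_API_KEY"

    MENU = [
        "French onion soup - caramelised onions, beef broth, gruyère crouton",
        "Wild mushroom risotto - arborio rice, porcini, parmesan",
        "Margherita pizza - tomato, mozzarella, basil",
    ]

    def embed(texts):
        response = openai.Embedding.create(
            model="text-embedding-ada-002", input=texts
        )
        return [np.array(item["embedding"]) for item in response["data"]]

    # Embed the menu once up front and store the vectors.
    menu_vectors = embed(MENU)

    def top_k(query, k=2):
        # Rank menu items by cosine similarity to the user's question.
        q = embed([query])[0]
        scores = [
            float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in menu_vectors
        ]
        ranked = sorted(zip(scores, MENU), reverse=True)
        return [item for _, item in ranked[:k]]

    # These matches then go into your system prompt as context.
    print(top_k("Does the onion soup have mushrooms in it?"))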

3 Likes

Thank you! But sorry, I was not clear. I actually meant the schema part, which probably won’t change that often.

“Given this database schema: [insert the structure of your tables]”

Embeddings are interesting, but then you would have to update the embeddings for any new insert or update.

Actually, embeddings are the way to go.
There are plenty of discussions on this forum, and for this use case it’s most likely the best option.

1 Like

Yes, each time you change your menu database you’ll have to update the embeddings. Ideally menu changes don’t happen that often, but no menu is so large that you couldn’t get the whole thing chunked and embedded every day without paying more than a few pennies. The chats are going to be your token-cost leader, with both an embedding request for the user input and then a completion request once you’ve put together your system message.

That being said, I think this would overall be cheaper than having Codex create queries, because then you’d need to send your whole schema along with every code completion request (i.e. with every user message). Plus, going through a remote SQL server is going to add a ton of latency, if that was the plan.

Ah I see.
Yes, I’m afraid you will probably have to pass in your schema every time, with every new completion. Your schema may not change, but the model API doesn’t remember it.

Even with the new chat model, the API is “stateless”, i.e. it requires you to re-send your entire context with every request, even if you are continuing an existing conversation.
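
For example, with the new chat endpoint you keep the message list yourself and re-send all of it each turn (a minimal sketch with the pre-1.0 openai Python library):

    import openai

    openai.api_key = "YOUR_API_KEY"

    # The whole conversation so far has to be re-sent on every request.
    messages = [
        {"role": "system", "content": "You are a helpful restaurant assistant."},
        {"role": "user", "content": "Does the onion soup have mushrooms in it?"},
    ]
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    answer = reply["choices"][0]["message"]["content"]

    # Next turn: append the assistant's answer AND the new question, then send
    # everything again - the API keeps no state between calls.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": "And the risotto?"})
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)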

You could create a new “fine-tuned” model, using OpenAI’s API to upload training examples based on your menu and restaurant information, and then just query that.
But I don’t know how accurate that would be.
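
For reference, fine-tuning currently takes JSONL prompt/completion pairs (the examples below are made up):

    {"prompt": "What is in the onion soup? ->", "completion": " Caramelised onions, beef broth and a gruyère crouton.\n"}
    {"prompt": "Do you have vegan options? ->", "completion": " Yes - ask about our vegan margherita pizza.\n"}

You’d upload the file and start the job with `openai api fine_tunes.create -t menu.jsonl -m davinci`. Fine-tuning tends to teach style more than facts, though, which is part of why accuracy is the concern.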

Embeddings are a great, safe method for information retrieval.

You may like to read this:

3 Likes

No. ChatGPT cannot search a DB; but as mentioned, you can take the response from the API and use that response to drive actions which query the DB.

I do not recommend having the chatbot generate the SQL queries on the fly, however. Wearing my systems engineering hat, it’s better in my view to write your own SQL queries, because you cannot trust a text-generating language model to produce error-free queries. On-the-fly generation adds a layer of complexity which is not necessary.
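
For example, you could keep a small library of hand-written, parameterised queries and let the model (or a simple classifier) pick only the intent and values. A rough sketch, with made-up table names:

    import sqlite3

    # Hand-written, parameterised queries; the model only picks the intent
    # and supplies the values - it never writes SQL itself.
    QUERIES = {
        "dish_ingredients": (
            "SELECT i.name FROM ingredients i "
            "JOIN dish_ingredients di ON di.ingredient_id = i.id "
            "JOIN dishes d ON d.id = di.dish_id "
            "WHERE d.name = ?"
        ),
        "dishes_without": (
            "SELECT d.name FROM dishes d WHERE d.id NOT IN ("
            "SELECT di.dish_id FROM dish_ingredients di "
            "JOIN ingredients i ON i.id = di.ingredient_id "
            "WHERE i.name = ?)"
        ),
    }

    def run_query(intent, value):
        if intent not in QUERIES:
            raise ValueError(f"unknown intent: {intent}")
        with sqlite3.connect("restaurant.db") as conn:
            return conn.execute(QUERIES[intent], (value,)).fetchall()

    # e.g. the model maps the user question to ("dish_ingredients", "onion soup"):
    # run_query("dish_ingredients", "onion soup")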

HTH

🙂

1 Like

Can we add our own dataset here? I have a CSV file. In the above link they have added a dataset from the Hugging Face datasets hub. Is it possible to modify the code slightly so that we can add our own dataset? If yes, then please tell me how to go about it. Thanks!

I’ve never used the collections before so I wouldn’t know, sorry.

I imagine you would need to process your CSV file into an acceptable format which matches the JSON format they accept through the API. A very basic example is below:

    [
      {
        "id": "A",
        "values": [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
      },
      {
        "id": "B",
        "values": [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]
      },
      {....}
    ]
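
If it is that same id/values shape, you could probably convert a CSV with something like this untested sketch (pre-1.0 openai library; the file and column names are just placeholders):

    import csv
    import json
    import openai

    openai.api_key = "YOUR_API_KEY"

    records = []
    with open("menu.csv", newline="") as f:  # hypothetical CSV with id,text columns
        for row in csv.DictReader(f):
            embedding = openai.Embedding.create(
                model="text-embedding-ada-002", input=row["text"]
            )["data"][0]["embedding"]
            records.append({"id": row["id"], "values": embedding})

    print(json.dumps(records[:2], indent=2))  # same shape as the snippet above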