How can I optimize my GPT-4 workflow?

Hi,
I have a text-to-SQL workflow with 3 OpenAI API calls (sketched below).

  • Re-writing the original question to make it simpler and more understandable.
  • Converting the re-written question to SQL (this SQL is used to fetch data from the DB).
  • Using the question from step 1 and the result from step 2 to create a comprehensible answer.
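In code, the pipeline is roughly this shape (a simplified sketch: "gpt-4" stands in for whichever model I use, the prompts are trimmed, and run_query is my own DB helper):

from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    # One chat completion per step of the workflow
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content

question = "..."  # the user's original question
rewritten = ask("Re-write the question in a simpler format.", question)
sql = ask("Convert the question to SQL for this schema: <schema>", rewritten)
rows = run_query(sql)  # our own helper that runs the SQL against the DB
answer = ask("Answer the question using the data provided.",
             f"Question: {rewritten}\nData: {rows}")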

I feel these calls sometimes take a lot of time to complete.
Can this workflow be optimized?

It seems you could try to combine the first two of these steps.

If the AI has the capability to rewrite a question, then it should have the ability to understand that question as it stands.

If having the simple version of the question in context helps, you can simply have the AI summarize what the question is asking before it produces the query, in a special format that you can parse out. Then the AI has both the original question and the rephrased question to shape its thinking even better.
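For instance, a single call can emit the rephrased question and the query in labeled sections that you parse out afterwards (a sketch only; the labels, prompt, and regex are arbitrary choices, not a fixed recipe):

import re
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "First, restate the user's question in one simple sentence after the label "
    "REPHRASED:. Then write the SQL query for it after the label SQL:."
)

def rewrite_and_generate_sql(question: str) -> tuple[str, str]:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
        ],
    )
    text = resp.choices[0].message.content
    # Parse the two labeled sections out of the single response
    rephrased = re.search(r"REPHRASED:(.*?)SQL:", text, re.S).group(1).strip()
    sql = re.search(r"SQL:(.*)", text, re.S).group(1).strip()
    return rephrased, sql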

Exact queries to provide exact data really can’t be beat if you have non-knowledge data that embeddings wouldn’t work on and that has little semantic meaning that could match up automatically with user inputs.

Improvements depend on your time budget for awaiting output, your monetary budget, and the quality your tasks must have. Then you can find AI models that do what you need at high token production rates (like gpt-3.5-turbo) or with reduced steps (which might require GPT-4).

Fine-tuning also may be an option to make a specialist AI that doesn’t need as much instruction and can be faster than GPT-4.

I hope my lack of answers was helpful!


Thanks for your answer!

I am already doing the part of your answer quoted below.

If having the simple version of the question in context helps, you can simply have the AI summarize what the question is asking before it produces the query, in a special format that you can parse out. Then the AI has both the original question and the rephrased question to shape its thinking even better.

However, I have tried fine-tuning (after reading the OpenAI documentation and various articles) with 50 question-and-answer pairs, but my results did not improve.
Can you please point me to a good resource on fine-tuning, if you know of one?

Thanks

How did you approach the fine-tuning for this case?

I once - albeit just for test purposes - created a fine-tuned model for text-to-SQL, so I may have a couple of pointers if you share more details about your approach.

I created a dataset using the template given below for all 50 questions.
{"system": "<prompt>", "user": "<question>", "assistant": "<model answer>"}

  • I used the same custom-made prompt for all 50 questions.
  • I had about 30 different questions; I used variations of the original 30 for the remaining 20.
    For example: Q1: What is the capital of the USA?
    Q2: What is the capital of Canada?

I added this dataset to the Azure fine-tuning pipeline and re-deployed the model.
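Expanded, each line of the JSONL file follows the standard chat fine-tuning format; a sketch with placeholder contents:

import json

# One training example per JSONL line, in the chat fine-tuning format
example = {
    "messages": [
        {"role": "system", "content": "<the custom prompt, identical for all 50>"},
        {"role": "user", "content": "What is the capital of Canada?"},
        {"role": "assistant", "content": "<the model answer, e.g. the SQL query>"},
    ]
}

with open("train.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")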

Please let me know if more info is needed.

Taking the two questions as examples, can you share the specific prompt and the assistant response, i.e. the SQL query, you used for the training data set?

And separately back to a point that @_j made above, what is the actual nature of questions you are looking for answers to? The two examples indeed make me wonder whether you may not be better off with an embeddings-based approach.

I will create a similar example (just in a different domain), as I cannot share the direct use case.

I have 2 huge tables.
Customer - all detailed info on the customer (age, birthdate, when they joined the website, how much time they spend on the website, favourite department, etc.)

Products - all products the customer has purchased (cost, manufacturer details, name, classification category, etc.)

My chatbot is for an Admin:
Exact question: What percentage of customers with furniture as their favourite department bought the xyz couch in the last 6 months?

Steps:

1. First, I pass the question to an embeddings model, which helps me get the IDs for the furniture department and the xyz couch. I get back a list of the closest embeddings (see the sketch after these steps).

(xyz - product_id=5
 xyy - product_id=10
 furniture - department_id=3
 food - department_id=1)

2. OpenAI API call 1: I append these IDs to the question and send it to the OpenAI API with the following prompt.

a. Send the question.
Re-write the question in a simpler format.
# instructions on how to add dates to the question
# instructions on how to map IDs to the question
# examples to resolve mapping-related ambiguities
# some other instructions, e.g.: if no mathematical operation is mentioned in the question, use average.

Result:

  1. Calculate the total number of customers in the furniture (department_id=3) department between dates, based on the instructions.
  2. Calculate the total number of customers in the furniture (department_id=3) department who also bought xyz (product_id=5) between those dates.
  3. percentage = (value in point 2 / value in point 1) * 100 (I have to lay this all out, as tool_calls on its own does not produce great results for percentages.)

3. OpenAI API call 2: I send this re-written question to OpenAI to generate SQL. This prompt contains the table structure and instructions on which tables to use, which joins, etc.

4. Use the returned SQL to fetch data from the DB.

5. OpenAI API call 3: Use the results from steps 2, 3, and 4 to generate the final answer.
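As mentioned in step 1, the ID lookup is essentially a nearest-neighbour search over pre-computed entity embeddings. A simplified sketch (load_entity_embeddings is a stand-in for however the vectors are actually stored):

import numpy as np
from openai import OpenAI

client = OpenAI()

# entity name -> pre-computed vector, e.g. "furniture (department_id=3)" -> array
entity_vectors: dict[str, np.ndarray] = load_entity_embeddings()  # stand-in helper

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

def closest_entities(question: str, top_k: int = 4) -> list[str]:
    q = embed(question)

    def cosine(v: np.ndarray) -> float:
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))

    # Rank every known entity by cosine similarity to the question
    ranked = sorted(entity_vectors, key=lambda n: cosine(entity_vectors[n]), reverse=True)
    return ranked[:top_k]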

I hope I didn’t confuse you!
Once again, thanks!

Since every application is specific and the development of a training set is usually proprietary, there are few examples of “here’s what to do”. In essence, you must show the AI, over and over, how it should respond to the entire breadth of inputs you would provide, by training on examples like those that would actually be used, paired with the desired responses.

I posted a topic to gather more data, given how little information there is about fine-tuning (especially on top of chat models), but it hasn’t received a lot of feedback.


Thanks, I will check this out and provide my feedback.

Thanks for sharing - I am away from my laptop for a couple of hours but will take a closer look later.


You might want to consider creating a model that is fine-tuned for function calling. Your sequence of steps already mimics the common sequence of steps involved in function calling - even though you currently don’t have a function specified.

The challenge in function calling - especially when using it to perform SQL queries - is to properly extract the function’s arguments. This is where the fine-tuning part can come in handy as you feed the model a long list of examples. In the function description you can provide similar instructions that you would normally include in your prompt. However, you can reap the benefits of fine-tuning and omit or reduce the level of detail for certain instructions provided they are exemplified in your training set (e.g. how data needs to be formatted).

OpenAI has shared an example of what the training data structure for fine-tuning for function calling should look like (Source: https://platform.openai.com/docs/guides/fine-tuning/fine-tuning-examples):

{
    "messages": [
        {"role": "user", "content": "What is the weather in San Francisco?"},
        {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}},
        {"role": "function", "name": "get_current_weather", "content": "21.0"},
        {"role": "assistant", "content": "It is 21 degrees celsius in San Francisco, CA"}
    ],
    "functions": [...] // same as before
}

When you combine this with the OpenAI cookbook example on function calling (scroll down in the cookbook for the specific SQL example), it should become clear how to create a training data set specifically for text-to-SQL function calling.
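Put together, one training example for text-to-SQL function calling might look like the sketch below (shown as a Python dict before JSON serialization; the ask_database function mirrors the cookbook’s pattern, while the question, query, and answer are placeholders):

# Illustrative training example in the legacy function-calling format
training_example = {
    "messages": [
        {"role": "user", "content": "What percentage of customers with furniture as "
                                    "their favourite department bought product 5 in "
                                    "the last 6 months?"},
        {"role": "assistant", "function_call": {
            "name": "ask_database",
            "arguments": "{\"query\": \"SELECT ... FROM Customer c JOIN Products p ON ...\"}",
        }},
        {"role": "function", "name": "ask_database", "content": "12.5"},
        {"role": "assistant", "content": "About 12.5% of those customers bought it."},
    ],
    "functions": [{
        "name": "ask_database",
        "description": "Run a fully formed SQL query against the database.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "A complete SQL query."},
            },
            "required": ["query"],
        },
    }],
}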


All this to say there is one final caveat here:

Function calling has been deprecated with the introduction of tools. Under the new approach, a function is a type of a tool. Fine-tuning however still relies on the old function calling approach. Unfortunately, we do not know when OpenAI will switch to tools for fine-tuning and how long you may be able to use a fine-tuned model that is based on the legacy function calling approach.
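For comparison, under the newer tools approach the same exchange is expressed roughly as follows (a sketch; the call id is a placeholder) - though, again, this structure is not yet accepted for fine-tuning:

# The assistant turn and the result message in the newer "tools" style
assistant_turn = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {
            "name": "ask_database",
            "arguments": "{\"query\": \"SELECT ...\"}",
        },
    }],
}
tool_result = {"role": "tool", "tool_call_id": "call_abc123", "content": "12.5"}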


Hey! So you’ve got a slow workflow with three OpenAI API calls? Batch requests together where possible, cache previous results so repeated questions are answered without new calls, and run calls concurrently where one doesn’t depend on another’s output. Also, tighten your question-rewriting prompt and consider fine-tuning a model for better performance. That should help!
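For example, memoizing the rewrite step means a repeated question skips one API call entirely (a minimal sketch; the prompt and model name are placeholders):

from functools import lru_cache
from openai import OpenAI

client = OpenAI()

@lru_cache(maxsize=1024)
def rewrite(question: str) -> str:
    # Identical questions are only ever sent to the API once
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Re-write the question in a simpler format."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content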
