Prompting a GPT assistant to rewrite queries via the API

I’m working on an AI assistant that analyzes the chat history between a user and another assistant and infers the real intent behind the user’s query.

I work at a publishing house, and I’m building a chatbot that internally generates SQL queries using LangChain (a SQL agent). The bot can retrieve information like prices, publication origins, kiosks, publication IDs, themes, etc. Since these agents are stateless (just like a chat completion), I am using another completion layer on top to generate meaningful, self-contained queries for the bot.
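At a high level, the flow I have in mind looks roughly like this (just a sketch, not my actual code; rewrite_query is the completion layer this question is about, and run_sql_agent stands in for the LangChain SQL agent):

# Rough sketch of the intended pipeline (names here are placeholders)
def rewrite_query(chat_history, user_query):
    # Completion layer: folds context from the chat history into the query
    # (this is the part I can't get to behave consistently)
    ...

def run_sql_agent(standalone_query):
    # Stateless LangChain SQL agent: turns the rewritten query into SQL and answers it
    ...

# standalone_query = rewrite_query(CHAT_HISTORY, "What other publications do you have from the second house?")
# answer = run_sql_agent(standalone_query)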

For instance:

If I ask the agent, “What’s the new publication on ‘phyto plant’ from the editorial house A-47854?”, it will retrieve:

Response:

A new study on fossils and phytoplankton
ISBN: 1237854-HNU
Country: France
Edition: On Hold
House: Journal World Inc.

Phyplankton in Stale Water
ISBN: HBXX78-HNU
Country: Germany
Edition: Published
House: Environmental Herald

But if I ask the follow-up question: "What other publications do you have from the second house?" the agent won’t be able to answer correctly because it’s memoryless.

I’m trying to build a layer on top that rewrites the query and adds context from the chat history so that my model can give a more accurate response.

Desired Query Rewrite:

For example, for the follow-up question:
"What other publications do you have from the second house?"
The expected reformulation would be:
"What other publications aside from 'Phyplankton in Stale Water' (ISBN HBXX78-HNU) does the house 'Environmental Herald' have?"

Why?

Because “Environmental Herald” is the second house mentioned in the chat history, and this contextual information is crucial.

Similarly, if the user says something like “Thank you” or “That’s all, bot,” these types of queries should not be rewritten.

If a query is unrelated to the previous conversation, it should be returned as-is. For example:
“Tell me about the latest publications on biomechanics from London.”
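To make the target behavior concrete, here are the three cases above written as (chat history, new query, expected rewriter output) test tuples; the data is just the examples from this post:

# Illustrative test cases for the rewriter: (chat history, new query, expected output)
TEST_CASES = [
    # 1. A follow-up that needs context pulled in from the history
    (
        "User: What's the new publication on 'phyto plant' from the editorial house A-47854?\n"
        "Assistant: 1. A new study on fossils and phytoplankton (ISBN 1237854-HNU, Journal World Inc.) "
        "2. Phyplankton in Stale Water (ISBN HBXX78-HNU, Environmental Herald)",
        "What other publications do you have from the second house?",
        "What other publications aside from 'Phyplankton in Stale Water' (ISBN HBXX78-HNU) "
        "does the house 'Environmental Herald' have?",
    ),
    # 2. Politeness / goodbye: must be passed through unchanged
    ("(any history)", "Thank you, bot. That's all.", "Thank you, bot. That's all."),
    # 3. Unrelated to the history: must be returned as-is
    ("(any history)", "Tell me about the latest publications on biomechanics from London.",
     "Tell me about the latest publications on biomechanics from London."),
]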

I’ve tried many prompts and approaches to achieve this behavior, but nothing has worked consistently.

This is my prompt:

*SYSTEM = """
Below is a history of the conversation so far, and a new question asked by the user that needs to be answered by searching in a knowledge base. 
When related to the query, make use of the keywords 'Editorial' and 'House', and include the ISBN code when the question needs rewriting.
Your job is to generate queries, never to answer them.

Follow these steps and rules:

Rules:
1. If the user is greeting you, congratulating you, or saying goodbye, pass that text as it is. For example, "Thank you, bot":
   Your final output must be: "Thank you, bot."
   
2. Analyze the chat history and determine if the current user query is related to the previous conversation. If so, provide the relevant context, data, keywords, and codes to pass it to the SQL agent.

Example:

User: "What's the latest titles in the catalog?"  
Assistant: "1. Biomechanics for College Students 2. Statistics Beyond Inference 3. Movie Nights as a Way to Therapy"  
User: "Give more info on the second one."

The proper output to generate would be:  
"More information on 2. 'Statistics Beyond Inference'."
"""*

And my code:

# System message
system_message = {
    "role": "system",
    "content": SYSTEM
}

# Current user query to be rewritten
user_query = "Tell me more about the second publication"

# Create the full message list: system message + chat history + current user query
messages = [system_message] + CHAT_HISTORY + [
    {
        "role": "user",
        "content": user_query,
    }
]

# Ask the model to rewrite the query with context from the history
context_question = model_client.chat.completions.create(
    model=deployment,
    messages=messages,
    temperature=0.0
)

key_question = context_question.choices[0].message.content

print(f"Inference prompt: {messages}\n")
print(f"Generated response: {key_question}")

When I print messages, I can see the prompt is well structured and my latest question is appended correctly, but the responses are often not what I expect.

For instance, if this is my chat history:

User: “Do you have any publications from Norway on biomechanics?”
Assistant: “Yes, we do. 1. Glasses Metrics on Biomechanics ISBN 4785-7854, and 2. Overall Fracture Metal and Bio Solutions ISBN 8575-8YUA.”

Then I ask:
User: “What’s the publication date for the first one?”

The response I get is:
“The publication date for the first biomechanics publication in Norway.”

However, the expected output I want is:
"Publication date for Glasses Metrics on Biomechanics ISBN 4785-7854."

Notice how the ISBN is included, which is the desired behavior.

Also, if I type something like:
User: “Thank you, bot. That’s all.”

I receive responses like:
“You’re welcome. Come anytime.” or
“More publications associated with Norway and biomechanics.”

These responses are not expected. The model should return the exact message when it's a polite phrase like "Thank you."

I would appreciate your help in adjusting the code, prompt, or logic to achieve my desired output. Thank you so much!


You should categorize the chat messages and group them.

Add them to Neo4j; that way you can even visualize how the chat was going as topics changed and so on. Use similarity search / a vector DB to find the relevant edges…

The agents should always get a summary of the topic.
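Something like this, as a rough sketch (assuming the official neo4j Python driver; the similarity search over embeddings would live alongside this):

# Rough sketch: store each chat turn as a node and link consecutive turns
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_turn(tx, turn_id, role, text, prev_id=None):
    # One node per chat message
    tx.run("MERGE (m:Message {id: $id}) SET m.role = $role, m.text = $text",
           id=turn_id, role=role, text=text)
    if prev_id is not None:
        # Link consecutive turns so the conversation can be walked and visualized
        tx.run("MATCH (a:Message {id: $prev}), (b:Message {id: $id}) "
               "MERGE (a)-[:NEXT]->(b)", prev=prev_id, id=turn_id)

with driver.session() as session:
    session.execute_write(add_turn, 1, "user",
                          "Do you have any publications from Norway on biomechanics?")
    session.execute_write(add_turn, 2, "assistant",
                          "1. Glasses Metrics on Biomechanics ISBN 4785-7854 ...",
                          prev_id=1)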


I’m sorry sir, what is Neo4j?