Hi, I am currently using LangChain to build a text-to-SQL model. However, I want the model to not answer a question if the question is too vague or if it is not confident in its answer. I know the model does not produce confidence scores, but I am not sure how to prompt engineer it so that it responds with "Unable to answer question".
The code I have is below:
# imports (module paths may vary by LangChain version)
from langchain.chains import create_sql_query_chain
from langchain.prompts import PromptTemplate
from langchain_community.utilities import SQLDatabase
from langchain_openai import OpenAI

_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Use the following format:
Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Answer: "Final answer here"
Only use the following tables: Table1 and Table2
Table1 contains the following information: first name, last name, GreendaleID, email and net worth.
Table2 contains the following information: GPA, Graduation, GreendaleID, their on campus job and if they live on campus.
Table1 and Table2 are joined by GreendaleID.
{table_info}
Never use the LIMIT statement; use the TOP statement instead.
Format all numeric responses as ###,###,###,###.
If you are unsure about the answer or query, please respond with "More information needed".
Question: {input}"""
PROMPT = PromptTemplate(
    input_variables=["input", "table_info", "dialect"], template=_DEFAULT_TEMPLATE
)
connection_string = "url"  # placeholder for the real database URI
db = SQLDatabase.from_uri(connection_string)
llm = OpenAI(temperature=0, verbose=True, model="gpt-3.5-turbo-instruct")
database_chain = create_sql_query_chain(llm, db, prompt=PROMPT)
sql_query = database_chain.invoke({"question": x})  # x holds the user's natural-language question
If a question such as "Number of rows" is asked, or even something like just "Rows", I want the model to return "Not enough context is provided, please give more information".
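To make the behaviour I am after concrete, this is roughly the kind of guard I would like to end up with. It is only a sketch: handle_question and the REFUSAL sentinel are illustrative names I made up, and it assumes the model actually emits the sentinel string when the prompt instructs it to.

REFUSAL = "More information needed"

def handle_question(question: str) -> str:
    # Ask the chain to generate either a SQL query or the refusal sentinel.
    generated = database_chain.invoke({"question": question})
    # If the model refused, return a friendly message instead of executing the text as SQL.
    if REFUSAL.lower() in generated.lower():
        return "Not enough context is provided, please give more information"
    # Otherwise run the generated query against the database.
    return db.run(generated)

print(handle_question("Rows"))  # should hit the refusal branch
print(handle_question("What is the average GPA of students who live on campus?"))

Even if the prompt instruction alone is not fully reliable, a string check like this would at least stop a refusal message from being executed as SQL, but I am not sure whether this is the right way to handle it in LangChain.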