I would like to limit the scope of questions that can be answered by ChatGPT. I am trying to code something like this:
session["chat_history"] = [
{"role": "system", "content": "You can only answer some stuff in literature and arts subject"},]
new_message = {"role": "user", "content": question}
session["chat_history"].append(new_message)
response = client.chat.completions.create(
model=model,
messages=session["chat_history"],
temperature=temperature,
max_tokens=max_tokens,
)
Variables are defined as model="gpt-3.5-turbo", temperature=0.7, max_tokens=1000.
The problem is that even when something is out of scope, it still answers… for example, a question about basic maths or even how to build a pool. How do I limit it?
1. Improve your system prompt and give it a better persona
Build out your system prompt: tell it exactly what it's for, what it's supposed to do, and how it should react if the conversation goes off topic.
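For example (the exact wording here is just a sketch; tune it to your use case):

session["chat_history"] = [
    {
        "role": "system",
        "content": (
            "You are a literature and arts tutor. You only answer questions "
            "about literature and the arts. If the user asks about anything "
            "else, politely refuse and steer the conversation back to your "
            "subject."
        ),
    },
]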
2. You can use a content filter
2.1. With a chat/completion model
Run a parallel call to a model of your choice, with a system prompt something like:
"You are a content identification agent. Given the conversation above, does the user's last query pertain to literature and arts? Only answer Yes or No, otherwise the system will break."
and max_tokens=1, something along those lines.
If that passes, continue with your normal workflow.
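As a rough sketch (the classify helper name and the refusal handling are just placeholders here, reusing the client and variables from your code):

def classify(question):
    # Parallel classification call: temperature=0 and max_tokens=1 so the
    # model can only emit a single "Yes" or "No" token.
    result = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a content identification agent. Given the conversation above, does the user's last query pertain to literature and arts? Only answer Yes or No, otherwise the system will break."},
            {"role": "user", "content": question},
        ],
        max_tokens=1,
        temperature=0,
    )
    return result.choices[0].message.content.strip().lower() == "yes"

if classify(question):
    # On topic: continue with the normal workflow.
    response = client.chat.completions.create(
        model=model,
        messages=session["chat_history"],
        temperature=temperature,
        max_tokens=max_tokens,
    )
else:
    response = None  # return a canned "out of scope" message instead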
filter = OpenAI(
    api_key='',  # this is also the default, it can be omitted
)

def classified(question):
    session["chat_history"] = [
        {"role": "system", "content": "You are a content identification agent. Given the conversation above, does the user's last query pertain to literature and arts? Only answer Yes or No, otherwise the system will break"}
    ]
    new_message = {"role": "user", "content": question}
    session["chat_history"].append(new_message)
    try:
        validate = filter.chat.completion.create(
            model="gpt-3.5-turbo",
            messages=session["chat_history"],
            max_tokens=1,
            temperature=0
        )
        ans = validate.choices[0].message.content
        print(ans)
        # the classifier was told to answer Yes or No
        return "yes" in ans.lower()
    except Exception as e:
        print(f"Error in question: {e}")
        return False
But it is outputting something like this:
'Chat' object has no attribute 'completion'
Would be funny if the model actually said that. (Un)fortunately, you just got an error in your code. It’s not completion, it’s completions. Have you considered using an IDE with syntax highlighting?
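That is, the classifier call should be:

validate = filter.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=session["chat_history"],
    max_tokens=1,
    temperature=0,
)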