Limit the scope of a chat between a person and ChatGPT using the openai lib

Hi all!

I would like to limit the scope of questions that ChatGPT can answer. I am trying to code something like this:

    session["chat_history"] = [
                                  {"role": "system", "content": "You can only answer some stuff in literature and arts subject"},]
    new_message = {"role": "user", "content": question}
    session["chat_history"].append(new_message)

    response = client.chat.completions.create(
               model=model,
               messages=session["chat_history"],
               temperature=temperature,
               max_tokens=max_tokens,
               )

The variables are defined as `model="gpt-3.5-turbo"`, `temperature=0.7`, `max_tokens=1000`.

The problem is that even when something is out of scope, it still answers. For example, a question about basic maths, or even how to build a pool. How do I limit it?

Thank you

You have tons of options!

1. Improve your system prompt and give it a better persona

Build out your system prompt: tell it exactly what it's for, what it's supposed to do, and how it should react if the conversation drifts off topic.
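
For example (a minimal sketch; the exact wording here is just an illustration, only `session["chat_history"]` comes from your code):

    # A stricter system prompt: name the persona, the allowed topics,
    # and the exact refusal behaviour for everything else.
    session["chat_history"] = [
        {
            "role": "system",
            "content": (
                "You are a literature and arts tutor. You ONLY answer questions "
                "about literature and the arts. If the user asks about anything "
                "else (maths, construction, coding, and so on), reply exactly: "
                "'Sorry, I can only help with literature and arts questions.' "
                "Do not answer the off-topic question, even partially."
            ),
        },
    ]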

2. You can use a content filter

2.1. with a chat/completion model

Run a parallel call to a model of your choice, with something like

“You are a content identification agent. Given the conversation above, does the user’s last query pertain to literature and arts? Only answer Yes or No, otherwise the system will break.”

and `max_tokens=1`, something along those lines.

If that passes, continue with your normal workflow.
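
Putting it together, a minimal sketch (assuming the official openai Python client; `is_on_topic` and `answer_question` are hypothetical names, not from your code):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    FILTER_PROMPT = (
        "You are a content identification agent. Given the conversation above, "
        "does the user's last query pertain to literature and arts? "
        "Only answer Yes or No, otherwise the system will break."
    )

    def is_on_topic(history):
        # Parallel classification call: the same history plus the filter instruction.
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=history + [{"role": "system", "content": FILTER_PROMPT}],
            max_tokens=1,
            temperature=0,
        )
        answer = response.choices[0].message.content
        return answer.strip().lower().startswith("yes")

and then gate the main call:

    if is_on_topic(session["chat_history"]):
        response = answer_question(session["chat_history"])  # your normal workflow
    else:
        response = "Sorry, I can only help with literature and arts questions."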

2.2. using embedding models

If you have enough data (maybe from 2.1), you can do something like the Fine-tuning classification example in the OpenAI Cookbook,

where you can use an embedding model instead of a chat/completion model for the same task.
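
The cookbook example trains a classifier on embeddings; here is a simpler nearest-centroid sketch of the same idea (the example questions and the `text-embedding-3-small` model choice are assumptions, not from this thread):

    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(texts):
        response = client.embeddings.create(
            model="text-embedding-3-small",
            input=texts,
        )
        return np.array([item.embedding for item in response.data])

    # Hypothetical labelled examples, e.g. collected from the 2.1 filter.
    on_topic = ["Who wrote Madame Bovary?", "What defines Impressionism?"]
    off_topic = ["How do I build a pool?", "What is 2 + 2?"]

    on_centroid = embed(on_topic).mean(axis=0)
    off_centroid = embed(off_topic).mean(axis=0)

    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def is_on_topic(question):
        # Embed the question and compare it against each topic centroid.
        q = embed([question])[0]
        return cosine(q, on_centroid) > cosine(q, off_centroid)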

3. Use a better model

Consider using GPT-4, if you can!

Hi! Thank you a lot for your explanation.

You mean something like this:

    filter = OpenAI(
        api_key='',  # this is also the default, it can be omitted
    )

    def classified(question):
        session["chat_history"] = [
            {"role": "system", "content": "You are a content identification agent. Given the conversation above, does the user's last query pertain to literature and arts? Only answer Yes or No, otherwise the system will break"}
        ]
        new_message = {"role": "user", "content": question}
        session["chat_history"].append(new_message)

        try:
            validate = filter.chat.completion.create(
                model="gpt-3.5-turbo",
                messages=session["chat_history"],
                max_tokens=1,
                temperature=0,
            )
            ans = validate.choices[0].message.content
            print(ans)
            return "yes" in ans.lower()
        except Exception as e:
            print(f"Error in question: {e}")
            return False

but it is outputting something like this:

    'Chat' object has no attribute 'completion'

regards

:rofl:

Would be funny if the model actually said that. (Un)fortunately, you just got an error in your code. It's not `completion`, it's `completions`. Have you considered using an IDE with syntax highlighting?
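
For reference, the corrected call from the snippet above (only the attribute name changes):

    validate = filter.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=session["chat_history"],
        max_tokens=1,
        temperature=0,
    )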