AI assistant with conversation boundaries

Hello Community.

First of all, I’d like to thank @sps for his comments and feedback on session persistence and the sample code. It is beautifully simple :slight_smile:

I am aware of the current requirements regarding fact-learning and the need to use embeddings to keep track of the conversation.

I am trying to build an interactive learning platform for users to gain confidence and help them ask the right questions. I am not sure if the below is at all possible with OpenAI.

I need to:

  • Create an AI assistant which will take the role of, for example, an interviewee applying for a job, such as a teacher. The user would take the interviewer role.
  • Define the qualities of the role: name, dob, gender, subject of specialization, experience …
  • Have the user interactively ask the AI questions, one at a time, relevant to the role so that the AI only gives information related to the context of the interview.
    Simple answers, correctly verbalized and avoiding distractions.

Does anyone know a) if this is possible, and b) if so, where I could find a resource for it?

Thank you in advance.


Sounds like you need to write a natural language task much like any other. An “act like” type of roleplay interaction. I have an assistant in mind that can help with that.

{your question}

Please write a system prompt that will make a GPT-3 AI operate in that manner. Use example parameters for each, simply to show how I can make a specialized AI where I write my own role and qualities. Describe such that the AI subject being interviewed is extremely human-like.

ChatGPT transforms your question into a prompt with similar parameters.

You are about to interview “AI Interviewee,” a highly qualified teacher applicant for an English Literature position. Her name is Emily Johnson, born on May 15, 1985, and she has 10 years of experience teaching English at different high schools and universities. As the interviewer, you have the opportunity to ask Emily relevant questions about her qualifications and experience. Please interact with the AI by asking one question at a time, and the AI will provide simple, accurate, and focused answers related to the context of the interview.

Begin the interview by asking your first question to Emily, the AI Interviewee.

There are of course other things that reinforce this, like “AI is an actor that must never break character nor leave the context of the roleplay simulation under any circumstances. All warning or disclaimers that an AI may produce must instead come from the character’s own voice or be suppressed.”
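As a sketch of how such a role and its qualities could be parameterized programmatically before being sent to the API (the character fields and the `build_messages` helper below are illustrative, not a fixed schema):

```python
# Sketch: assembling a role-play message list with a persistent in-character
# system prompt. The character dict keys here are example parameters.

def build_messages(character: dict, history: list[tuple[str, str]]) -> list[dict]:
    """Build a chat message list with a pinned in-character system prompt."""
    system_prompt = (
        f"You are {character['name']}, a {character['role']} applicant, "
        f"born {character['dob']}, with {character['experience']} of experience. "
        "Never break character or leave the context of the interview. "
        "Answer one question at a time, simply and accurately."
    )
    messages = [{"role": "system", "content": system_prompt}]
    for role, content in history:
        messages.append({"role": role, "content": content})
    return messages

emily = {"name": "Emily Johnson", "role": "English Literature teacher",
         "dob": "May 15, 1985", "experience": "10 years"}
msgs = build_messages(emily, [("user", "Tell me about your experience.")])
```

The resulting list can be passed as the `messages` parameter of a chat completion request, so every turn carries the full character definition.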

Thank you @_j

So, just so it is clear in my head from a programming perspective: it is then possible to pre-feed the "role: system" message with sufficient parameters to ensure the OpenAI API (not ChatGPT - I don't believe there is an API for it yet) assumes a role, stays in character, and only answers relevant messages.

Much obliged.

The API can maintain a role better, because you control the conversation history. You can, by design, place a permanent system prompt that continues to remind the AI of its operation and purpose.
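A minimal sketch of that design, assuming a small wrapper class (the `Conversation` class below is hypothetical; the actual API call is left out):

```python
# Sketch: keeping a "permanent" system prompt pinned while the chat history
# grows. The system message is re-sent on every turn, so the role never fades.

class Conversation:
    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt
        self.history: list[dict] = []

    def add(self, role: str, content: str) -> None:
        """Append a user or assistant message to the running history."""
        self.history.append({"role": role, "content": content})

    def messages(self) -> list[dict]:
        """Full payload for the next API call: pinned system prompt first."""
        return [{"role": "system", "content": self.system_prompt}] + self.history

conv = Conversation("You are Emily Johnson, an interviewee. Stay in character.")
conv.add("user", "Why do you want this job?")
# conv.messages() would now be passed as the request's `messages` parameter.
```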

ChatGPT, the website, keeps a condensed conversation history, which allows it to quickly lose knowledge of characters and past user input. The character can only be written from the diminished status of a user prompt. ChatGPT's system prompt is brief, preprogrammed, and not shown to the user.

It is certainly possible for a user to subvert the pre-programming that was offered to the AI, by using clever language tricks. This is the essence of a “jailbreak”: escaping from the limited domain within which the custom chatbot was supposed to operate.


For GPT-4, the best way to accomplish what you have set out to do, as _j pointed out, is to use a well-crafted prompt in the system message that instructs the model on what its role is and how to accomplish it. If you want certain behaviours to be replicated, those can be passed off as instructions to the model in the same context to pay attention to.

For example, “Do not answer any questions that are not related to the topic of the interview at hand” should give it some inclination to avoid off-topic questions.

The more you play with things like how your instructions are structured in the prompt, or what information you pass through as part of the role, the better the responses will be. Also, try varying the temperature to see how it affects your generations and what suits your use case the most.
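For instance, the same request can be sent at two different temperatures (the parameter name follows the Chat Completions API; the model name is only an example):

```python
# Sketch: one base request, two temperature settings to compare behaviour.
base = {
    "model": "gpt-4",
    "messages": [{"role": "system", "content": "You are Emily, an interviewee."}],
}
low_temp = {**base, "temperature": 0.2}   # focused, consistent answers
high_temp = {**base, "temperature": 0.9}  # more varied, human-like phrasing
```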

@_j & @udm17 Thank you :slight_smile:

Thanks for the kind words @alex.perez

Here’s a sample of your described interaction:

Here the assistant is the candidate while the user is the interviewer.

You can try it here


Thank you @sps yet once again (and everyone else :slight_smile:

I’ve tried to adjust the system input but I’m not hitting the nail on the head with the output, and I’m wondering if it is due to incorrect language. I have been able to tell the AI an age and have it ‘make up’ a matching birthday (not in this interaction, though, and I don’t know why), but I have not managed to supply a date of birth and get it to state the age correctly (I know the models are not very good with math). I guess I could supply both and give pretty much a complete background; however, if I have to do that it may defeat the point.

Any pointers will be most appreciated.

Re coding: what would be the best way to keep track of and feed back the conversation? Vectors?

Thanks again.

The answer: functions

Let’s see if the AI can reason about how to use functions I fabricated (though the actual test will be to provide the functions and run it):

To calculate a person’s age using the provided function APIs “datetime_shim()” and “math_shim()”, the AI will follow these steps:

  • Obtain the current date using “datetime_shim()” and store it in a variable (let’s call it “current_date”).
  • Obtain the person’s date of birth using “datetime_shim()” and store it in a variable (let’s call it “date_of_birth”).
  • Calculate the difference between “current_date” and “date_of_birth” to get the time delta (the difference between the two dates).
  • Use the “math_shim()” function to calculate the number of years in the time delta and store it in a variable (let’s call it “age”).
  • The “age” variable now contains the person’s age, which can be returned as the final result.

Remember, I am not writing Python code here, but rather showing the logical steps an AI could take to utilize the given function APIs to calculate a person’s age accurately.
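The steps above can be written out directly with Python's standard library, standing in for the hypothetical `datetime_shim()`/`math_shim()` helpers:

```python
# A runnable version of the age calculation described above. A function like
# this is the kind of tool you would expose to the model via function calling,
# rather than asking it to do the date math itself.
from datetime import date

def age_from_dob(dob: date, today: date) -> int:
    """Whole years elapsed between dob and today."""
    years = today.year - dob.year
    # Subtract one year if the birthday hasn't occurred yet this year.
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years

# Emily Johnson, born May 15, 1985:
print(age_from_dob(date(1985, 5, 15), date(2023, 7, 1)))  # → 38
```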

Where it says “variable”, one must just ensure that the AI can see the function return value in a function-role message in the conversation history, so it can continue to iterate to the answer.

For conversation management, I would preserve at least the last 10 turns verbatim, up to half the context, before you start altering the remainder with compression or retrieval techniques; otherwise you can break the ability of the AI to perform this very task.
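A sketch of that trimming policy, assuming a simple list-of-dicts message history (the helper name and turn count are illustrative):

```python
# Sketch: pin the system prompt and keep the most recent turns verbatim,
# dropping the oldest non-system messages first.

def trim_history(messages: list[dict], keep_turns: int = 10) -> list[dict]:
    """Keep system messages plus the last `keep_turns` user/assistant pairs."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    # One "turn" here means one user message plus one assistant reply.
    return system + rest[-2 * keep_turns:]
```

Older turns that fall off the end could then be summarized or moved into a retrieval store instead of being lost outright.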