Contexts with the new turbo endpoint

For those who have played with the new endpoint, I'd be interested to hear where you've had the best results when including extra context with a query.

The “system” role doesn’t appear to hold much weight.

The “context” could be included with the actual “user” prompt.

I will do some experimentation. I thought it might be better to create a fake “assistant” response that contains the context (Just above the user’s actual query)

Maybe this would help protect the IP and stop prompt injection attempts from seeing the contexts we provide.
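A minimal sketch of that idea, assuming the standard chat messages format; the helper name and the fabricated wording are just illustrative:

```python
# Sketch: smuggle retrieved context into a fabricated "assistant" turn
# placed just above the real user question. Names here are illustrative.

def build_messages(context, question):
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        # Fake assistant turn that "recalls" the context for the model
        {"role": "assistant", "content": f"Relevant background I will use:\n{context}"},
        # The user's actual query goes last
        {"role": "user", "content": question},
    ]

msgs = build_messages("Store hours: 9am-5pm, Mon-Fri.", "When are you open?")
```

The context never appears in a user turn, so a prompt-injection attempt in the question can't easily quote it back as "something the user said".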


Yeah, I wanted to immediately add new roles of my own… hopefully soon…


I’m on that too. The “system” role seems to be working for me, but it’s true that I’m still testing it with dummy examples. Will update if I find anything interesting. But this addition of three roles is exciting. It opens a world of possibilities, especially if they give more weight to the “system” role with new releases.


I’ve just realized that if you do the warmup using ChatGPT (the app) and capture the responses, you can use them in future requests, without having to actually make the calls.

Eg “You are a personal trainer who likes to help people.”

Then you capture what the AI says from ChatGPT. (In the past, you may have chained together a few more prompts to warm up a chat session, before asking the final question)

Now you have the natural set of interactions recorded from the chatbot.

When you make future requests, you can include your prerecorded warmup chat as "user" and "assistant" entries in the messages array (without calling the API for them). Then you only need to feed it the final question as a "user" entry.

This will be a single API request (no warmup required).
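As a sketch of this single-request pattern (the warm-up turns below are made up for illustration, standing in for responses you captured from ChatGPT):

```python
# Sketch: replay a warm-up conversation captured earlier from ChatGPT,
# so only one live API call is ever needed. Turns below are illustrative.

recorded_warmup = [
    {"role": "system", "content": "You are a personal trainer who likes to help people."},
    {"role": "user", "content": "Can you help me get fitter?"},
    {"role": "assistant", "content": "Absolutely! Let's start with your current routine."},
]

def single_request_messages(final_question):
    # Prerecorded turns go first, then the live question
    return recorded_warmup + [{"role": "user", "content": final_question}]

msgs = single_request_messages("What should I eat before a morning run?")
```

The whole `msgs` list would then go into one `openai.ChatCompletion.create` call.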


Hm, I don't understand your last remark. Either it's unclear, or just unclear to my ears, but it sounds interesting :slight_smile: Would you explain it a bit?

What do you guys think about combining the new turbo model with embeddings? With previous completion models like text-davinci-003, it was simply a matter of putting it into the prompt as context. How does it work with the gpt-3.5-turbo model?


I presume that if you want to do context injection, such as adding a piece of text or injecting text based on the outcome of an embedding search, you are probably best to add it as a system example. This is the suggested method for few-shot learning in the cookbook: How to format inputs to ChatGPT models | OpenAI Cookbook

Does that seem correct?

    messages = [
        {"role": "system", "content": "You are a helpful, pattern-following assistant that translates corporate jargon into plain English."},
        {"role": "system", "name": "example_user", "content": "New synergies will help drive top-line growth."},
        {"role": "system", "name": "example_assistant", "content": "Things working well together will increase revenue."},
        {"role": "system", "name": "example_user", "content": "Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage."},
        {"role": "system", "name": "example_assistant", "content": "Let's talk later when we're less busy about how to do better."},
        {"role": "user", "content": "This late pivot means we don't have time to boil the ocean for the client deliverable."},
    ]


Only two keys are permitted in the messages array param, “role” and “content”.

The “user” param is a key for the top level chat method as clearly documented in the API docs.



I got my snarky chatbot back. Check out the response I got just by tweaking the API docs example :grin: Still messing around, but the bot is easy to influence, which is good! @raymonddavey The system role here has a ton of weight!


payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a snarky sarcastic chatbot."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "user", "content": "Where was it played?"},
    ],
}

"Oh, I'm sorry. I forgot that there wasn't a world series in 2020, thanks to the pandemic. But don't worry, the players still got to enjoy a nice, long season of Zoom meetings and virtual high fives. As for where it was played, well, it wasn't played anywhere, now was it?"


You can put the 'context' or the entire 'prompt' into the messages parameter like this:

prompt = f"Context: {context}\n\n---\n\nQuestion: {question}\nAnswer:"

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "system", "content": prompt}],
)

Yes, I can confirm that @nicole_n is right. Adding context to the system message lets gpt-3.5-turbo pick it up from there.

But now I have a new problem: I can't prevent hallucination with the gpt-3.5-turbo model. The old way (text-davinci-003) could easily handle it by adding the text below to the prompt:

Answer the question as truthfully as possible, and if you're unsure of the answer, say "Sorry, I don't know".

But in gpt-3.5-turbo, I have added that text to every role, and none of them will reply "Sorry, I don't know".
Has anyone successfully prevented hallucination in gpt-3.5-turbo?

I got it working with a similar string, but in another language (pt-BR).

Hi! I've tried adding your text inside the messages parameter and it works fine: when the model doesn't have an accurate answer to your question, it prints "Sorry, I don't know".
The syntax I've used is:

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": f"Answer the question as truthfully as possible, and if you're unsure of the answer, say \"Sorry, I don't know\".\n{context}\n\n{conversation_history}"},
        {"role": "user", "content": f"{question}"},
    ],
)

I use 'conversation_history' to save previous questions and answers as context too.
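For example, a minimal way to keep that running history string (the variable and helper names are illustrative, and the sample Q&A pairs are made up):

```python
# Sketch: accumulate previous Q&A pairs into a plain-text history string
# that gets fed back into the system message on each call.

def append_exchange(history, question, answer):
    # Each completed exchange is appended as a "Q:/A:" pair
    return history + f"Q: {question}\nA: {answer}\n"

conversation_history = ""
conversation_history = append_exchange(conversation_history, "Who won the world series in 2020?", "The Dodgers.")
conversation_history = append_exchange(conversation_history, "Where was it played?", "Arlington, Texas.")
```

Note that this grows without bound, so a real app would trim or summarize old exchanges to stay within the context window.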


Thanks for your help! I have tested it and drawn some conclusions.
But because we are getting a little off topic in this thread, I created a new thread and posted the conclusions there.

You're very welcome to discuss it there. Happy coding!


Thanks for sharing, much appreciated!

So what's the point of the different roles then?
If the context is treated the same as it would be in a user role, I haven't quite understood why we need a separate system one.


I see lots of possibilities with future releases, once they train the model more specifically with human reinforcement learning not to leak the "system" information to a "user" role within conversations.

This is (in my view) so the developers of chatbot apps can inject messages into the chatbot message stream and influence behavior. For example, if the API only accepted two roles, user and assistant, chatbot app developers would be forced to use the user (or maybe assistant) role to submit system messages meant to influence the flavor / direction of the chat session.

In other words, chatbot end-users will see only messages from user and assistant but developers and app admins will see user, assistant and system messages.

Hope this helps.
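A small sketch of that split, assuming the app filters the stream before display (the function name and sample messages are hypothetical):

```python
# Sketch: the full message stream goes to the API, but end users only
# see user/assistant turns; system turns stay visible to devs/admins.

def visible_to_end_user(messages):
    # Filter out system-role messages before rendering the chat UI
    return [m for m in messages if m["role"] in ("user", "assistant")]

stream = [
    {"role": "system", "content": "Steer the bot toward polite, concise answers."},
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "Hello! How can I help?"},
]

shown = visible_to_end_user(stream)
```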



You can potentially look into this which solves your problem. :slight_smile:


Not exactly. I've tried it, and sometimes ChatGPT thinks that "the user said something", so it can say things to the user like "as you said before…" when the user never actually said anything; we did.

So I would be very careful to use the roles as OpenAI defined each of them.

Similarly, I've read in other posts this suggestion from other developers: modify the real user question to add some kind of instruction prefix. Example:

  • real user question: Tell me when you will launch it?
  • our “cooked” user question: As truthfully as you can: Tell me when you will launch it?

And then the assistant would probably respond with something like: I can assure you that...
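That prefixing step can be sketched like this (the prefix wording is just an example, as is the helper name):

```python
# Sketch: prepend an instruction prefix to the real user question
# before it is sent as the "user" message.

PREFIX = "As truthfully as you can: "

def cook_question(real_question):
    # The "cooked" question still travels under the user role
    return {"role": "user", "content": PREFIX + real_question}

cooked = cook_question("Tell me when you will launch it?")
```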

Did I explain myself well? Whatever you include in the content of the user message, ChatGPT will literally understand as words from the user. So it can cause some unpredictable or confusing responses toward the user.

I'm 100% sure that all this will be improved a lot by the OpenAI folks in the near future. Nowadays it's quite unpredictable and confusing for us, the developers of real-use apps.
