How to encode examples into GPT-3.5-turbo prompts

The guides for GPT-3.5-turbo presently do not (so far as I can tell) explain how to provide “example exchanges” to the chatbot. You have to specify a “role” for each message, but it’s not obvious which role should be used for examples. Should these be “system” messages? Or should you submit “user” and “assistant” messages as the examples?

From what I can tell, only system, user, and assistant are available as roles at this time.

So, you would send user and assistant filled with the appropriate messages.

Hope that helps.
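To make that concrete, here is a minimal sketch of building such a request body. The translation task and wording are just illustrative; the point is that the example exchange is encoded as ordinary “user”/“assistant” turns placed before the real question.

```python
import json

# Few-shot examples are encoded as ordinary "user"/"assistant" turns
# that precede the real question in the messages array.
messages = [
    {"role": "system", "content": "You translate English to French."},
    # Example exchange (the demonstration the model should imitate):
    {"role": "user", "content": "Translate: Good morning"},
    {"role": "assistant", "content": "Bonjour"},
    # The actual question:
    {"role": "user", "content": "Translate: Thank you"},
]

payload = json.dumps({"model": "gpt-3.5-turbo", "messages": messages})
print(payload)
```

The payload is then POSTed to the chat completions endpoint as usual.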

According to this link, you can try to submit them as “system” messages with a special “name” field, or as “user” messages. No idea about what works better.

I just tried this and it seems to work nicely, at least for answering the first user message:

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You answer questions factually based on the context provided"},
    {"role": "system", "name": "context", "content": "ChatGPT and Whisper models are now available on our API, giving developers access to cutting-edge language (not just chat!) and speech-to-text capabilities. Through a series of system-wide optimizations, we've achieved 90% cost reduction for ChatGPT since December; we're now passing through those savings to API users. Developers can now use our open-source Whisper large-v2 model in the API with much faster and cost-effective results. ChatGPT API users can expect continuous model improvements and the option to choose dedicated capacity for deeper control over the models. We've also listened closely to feedback from our developers and refined our API terms of service to better meet their needs."},
    {"role": "user", "content": "Give me a brief one sentence summary of the article"}
  ],
  "temperature": 0.2,
  "max_tokens": 256,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0,
  "user": "Testing"
}

And the response:

{
  "id": "chatcmpl-6pVV8SvN5EGopPttvukN7rXuOf5Tx",
  "object": "chat.completion",
  "created": 1677733838,
  "model": "gpt-3.5-turbo-0301",
  "usage": {
    "prompt_tokens": 172,
    "completion_tokens": 37,
    "total_tokens": 209
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The article announces the availability of ChatGPT and Whisper models on their API, with cost reductions and improvements, and refined API terms of service to better meet developers' needs."
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}
Hi @t.mcgregor

The above is not correct. You have a key you call “name” in your messages array with value “context”. That key is not permitted in the messages array.

There are only two keys permitted in the messages array, “role” and “content”.

“user” is an optional param (key) for the chat completion.

See the docs (very good at helping):

Chat: Introduction



Curiously though, the last example in the OpenAI Cookbook indicates you can do something like this, and indeed it did work to answer accurately. I am not saying the value “context” actually means anything, but it seems “name” is a valid key.

Scroll down to the final example in this cookbook page

Yes, I saw that, but my best guess is that it might be an error in the cookbook, because the API clearly states that only two keys are allowed in the messages, which are “role” and “content”.

The only place in the API docs where “user” is a key for the chat completion method is as a top-level key for the request (outside of the messages array).

However, I have not tested what happens if I send a “user” key, or some other key, inside a message.

Hold on… let me test. Give me a sec…



OK, I added the “user” key and tested it like so:

 line = {"role": role, "user": "test", "content": prompt}

and here was the response:

{ "error": { "message": "Additional properties are not allowed ('user' was unexpected) - 'messages.0'", "type": "invalid_request_error", "param": null, "code": null } }

Hope this helps.

Note, I’m a big fan of testing the API and answering questions from actual hands-on coding and testing, and honestly speaking, I do not use the cookbook (have never used it) as I write all my own code from the API reference only … so glad to have helped you out @t.mcgregor


Thanks, I see what you are saying.

I tested it also by making the API call with no library, i.e. using Postman. My message works and returns a result that uses the context to answer the question accurately. Maybe the API simply ignores unknown keys.

Try my construction. It works.


I have already tested the API, and it errors on all “unknown” keys I tested except the two documented in the API reference.

In all cases, the error is clearly like this:

{ "error": { "message": "Additional properties are not allowed ('user' was unexpected) - 'messages.0'", "type": "invalid_request_error", "param": null, "code": null } }

However, if some folks are using an API wrapper, it is certainly possible that the wrapper library filters out the incorrect keys.

That’s not how I code… I code per the API docs as mentioned.

Best of luck!


Hi again, sorry to keep pushing on this. I can confirm the use of the key “name” works. If the additional key is anything other than “name”, you will get an error. See the Postman screenshot below, where the use of the key “name” gives a successful response.

What is interesting (and repeatable) is that if you omit this “name” key and only add the plain system message, it can’t answer the question. Thus I do believe the cookbook is correct. The cookbook has this kind of example as a way to do few-shot prompting. See below how, when the second system message carrying the context is omitted, ChatGPT does not know the context and cannot answer.
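To make the construction concrete, here is a sketch of such a cookbook-style messages array. The values “example_user” and “example_assistant” are illustrative (the cookbook uses names like these; the thread above suggests other values such as “context” are also accepted):

```python
# Cookbook-style few-shot prompting: "system" messages carry the example
# exchanges, distinguished from the instruction by the optional "name" field.
messages = [
    {"role": "system", "content": "You answer questions factually based on the context provided."},
    # Example exchange, encoded as named system messages:
    {"role": "system", "name": "example_user", "content": "What was announced?"},
    {"role": "system", "name": "example_assistant", "content": "ChatGPT and Whisper models are now available on the API."},
    # The real question:
    {"role": "user", "content": "Give me a brief one sentence summary of the article."},
]
```

Each message still only uses the keys “role”, “content”, and (optionally) “name”.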


Thank you for the pointers @t.mcgregor !

Role and name occupy the same slot: if you put a name, the API ignores the role field.
ChatGPT models have an underlying format that they use for reading messages, which at the moment we can’t interact with directly.
See Here

Also, the example code for counting tokens for the chat API here states:

  1. every message follows <im_start>{role/name}\n{content}<im_end>\n
  2. if there’s a name, the role is omitted
  3. there is also an example with name
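Those rules can be sketched as a small renderer. This is only an illustration of the layout quoted above (the `<im_start>`/`<im_end>` markers as described in the token-counting example), not the actual tokenizer:

```python
def render_chatml(messages):
    """Render messages in the layout the token-counting example describes:
    <im_start>{role/name}\n{content}<im_end>\n per message."""
    out = []
    for m in messages:
        # Rule 2: if there's a "name", the role is omitted.
        speaker = m.get("name", m["role"])
        out.append(f"<im_start>{speaker}\n{m['content']}<im_end>\n")
    return "".join(out)

print(render_chatml([
    {"role": "system", "content": "You are helpful."},
    {"role": "system", "name": "context", "content": "Some context."},
]))
```

This also shows why “role” and “name” occupy the same space: only one of them ends up in the rendered prompt.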