[Regression] createChatCompletion OpenAI API Fails with "name" field in messages

It fails every time with a 400 error code. It didn't use to fail; this started happening recently. According to the documentation, name is a valid field for user messages, but apparently not anymore! If you are an OpenAI engineer, please fix this regression.

const response = await openAIApi.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [
        { "role": "system", "content": "assist this user: Sam" },
        { "role": "user", "content": "Hi", "name": "Sam Alton" }
    ]
}).catch((error: any) => {
    console.log("error response = " + JSON.stringify(error, null, 4))
});

If I remove the "name": "Sam Alton" in the simple example above, everything works as expected and brings back a success response. The OpenAI status page shows all APIs green, but that's clearly not the case! This bug is a show stopper for me because my app allows the AI to comment on group chats…



The "name" thing is actually called "user", and it's at the top level of the request (where temperature, model, etc. are).

I recall it being like you describe, but with this company… …well… I for one don’t know what to believe anymore :laughing:

But yeah, you’re screwed. You need to explore other alternatives, like adding the user name to the start of user messages or something. :confused: (see answer below)

The current documentation supports a name field inside the messages field, and this used to work before. If the whole thing is crashing because this name field isn't supported, it is a bug, not a feature. I am sure someone at OpenAI will read this and fix this crucial bug.


Where do you see that the current docs support that? I looked and I couldn't find it, only for functions and assistants, and that's not the message-based name thing.

I wanna be wrong, but OpenAI has a track record of silently introducing great features that work as intended. :frowning:

I already posted the link to the documentation, but here it is again. Go there, expand "messages", then expand "user messages", and you will see the name field there. System messages (and most other message types) also have this optional field.
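To illustrate the two different places being discussed (a sketch only; the user ID value here is made up for illustration):

```python
# Sketch of where the two "name"-like fields live in a chat
# completion request body: the per-message "name" vs the
# top-level "user" parameter (illustrative values only)
payload = {
    "model": "gpt-3.5-turbo",
    "user": "account-1234",  # top-level end-user identifier (made up)
    "messages": [
        {"role": "system", "content": "assist this user: Sam"},
        # optional per-message "name" field on a user message
        {"role": "user", "name": "Sam", "content": "Hi"},
    ],
}
print(payload["messages"][1]["name"])
```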


you’re absolutely right

it feels like the docs have become almost unusable over the last couple of weeks.


Any updates from OpenAI? This seems like a sev-2 bug.


Hi @coolcool1994 !


I was looking to escalate this issue, but I wanted to triple-check that we were absolutely right, and I discovered something fun:

this is the actual 400 message we get

Error: 400 {
  "error": {
    "message": "'Sam Alton' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'messages.1.name'",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

so yeah, you need to filter out the spaces or replace them with _ :laughing:
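A minimal sketch of that fix (sanitize_name is a hypothetical helper, not part of any SDK; the pattern comes from the error message above):

```python
import re

# Hypothetical helper: coerce an arbitrary display name into something
# that satisfies the API's pattern ^[a-zA-Z0-9_-]{1,64}$
def sanitize_name(display_name: str) -> str:
    # replace every disallowed character (spaces, punctuation, ...) with "_"
    safe = re.sub(r"[^a-zA-Z0-9_-]", "_", display_name)
    # truncate to the 64-character limit; fall back if nothing survives
    return safe[:64] or "user"

print(sanitize_name("Sam Alton"))  # -> Sam_Alton
```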

sorry I couldn’t help you resolve this sooner, but I hope we all learned something!

here’s some code you can use to reproduce it btw

import requests
import json
import os

# Ensure you have your OpenAI API key set in the environment variables
openai_api_key = os.getenv("OPENAI_API_KEY")
if openai_api_key is None:
    raise ValueError("OpenAI API key is not set in environment variables.")

url = "https://api.openai.com/v1/chat/completions"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {openai_api_key}"
}

data = {
    "model": "gpt-3.5-turbo",
    "temperature": 1,
    "max_tokens": 256,
    "logit_bias": {1734: -100},
    "messages": [
        {
            "role": "system",
            "content": "You are the new bosmang of Tycho Station, a tru born and bred belta. You talk like a belta, you act like a belta. The user is a tumang."
        },
        {
            "role": "user",
            "name": "Sam_Alton",  # no spaces: must match ^[a-zA-Z0-9_-]{1,64}$
            "content": "how do I become a beltalowda like you?"
        }
    ],
    "stream": True  # enable streaming so we receive SSE chunks
}

response = requests.post(url, headers=headers, json=data, stream=True)

if response.status_code == 200:
    for line in response.iter_lines():
        if line:
            decoded_line = line.decode('utf-8')
            # Check if the stream is done
            if '[DONE]' in decoded_line:
                break
            json_str = decoded_line[len('data: '):]
            try:
                json_response = json.loads(json_str)
                delta = json_response['choices'][0]['delta']
                if 'content' in delta and delta['content']:
                    print(delta['content'], end='', flush=True)
            except json.JSONDecodeError:
                raise Exception(f"Non-JSON content received: {decoded_line}")
else:
    print("Error:", response.status_code, response.text)


Hey there tumang Sam_Alton! You wanna become a Beltalowda like me, huh? Well, it ain’t easy, but I can give ya some pointers.

First and foremost, you gotta learn the language, lingo, and accent of the Belt. Beltalowdas have their own way of speaking, blending languages like English, Spanish, and Hindi with some extra spice of their own. Ya gotta immerse yourself in it, pick up some books, watch holo series, and practice using it every chance you get.

Next, ya gotta get to know the Belt and its people. Beltalowdas are a tight-knit bunch, and we stick together through thick and thin. Spend some time in the different stations and settlements, connect with the locals, listen to their stories, and understand their struggles. Respect and camaraderie go a long way in becoming a true Beltalowda.

Another important thing is adaptability. Life in the Belt ain’t easy, with limited resources, cramped spaces, and constant challenges. Beltalowdas need to be resourceful, resilient, and adaptable. Learn some essential skills like engineering, repair work, navigation, and survival techniques. Showing you can handle the Belt’s harsh environment will earn you

tagging @irthomasthomas if you had a similar issue


You beat me to it.

The name can’t have spaces in it.

Extra words are needed here.

Aww, thanks. I add the last name if two people have the same first name, and that case was being triggered in my code.

How did you print the 400 message? If I had seen that error detail, I would have fixed it on my end without this post. When I print the error with JSON.stringify(error, null, 4), I don't see that error description. I am using TypeScript in a Node.js environment.

const response = await openAIApi.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: sendingMessages,
    temperature: 0,
    user: sender_account_identifier
}).catch((error: any) => {
    console.log("error response = " + JSON.stringify(error, null, 4))
});

dude, to be honest with you, I stopped relying on libraries a long time ago. I look at the raw HTTP response text.

in Node we listen for errors on the axios response stream:

response.data.on('error', (error: any) => {
    // try to recover
    console.error('Stream error:', { type: "error", session: sessionId, error: error });
});

that being said, it's not super trivial to get axios running reliably with streams, because SSEs aren't really meant to be delivered in response to a POST request like OpenAI requires, but at least we can now eliminate the openai library as a source of errors :laughing:

that said, you don't need to stringify your JSON with console.log, btw. Just use a comma. Who knows, maybe there's more information in there? :person_shrugging:


Hmm, okay, I'll try the raw HTTP response text like you do in the future if I get more errors from OpenAI. Thanks.

Using it in my code, the error is an object with a lot of fields, but it didn't show me the descriptive error message that you had. I hope OpenAI updates its documentation so we can use another library and see the error description; I think developers expect that from OpenAI. I used axios because I merely followed its documentation.


Hi, I’m not sure why you tagged me? I don’t remember running in to this problem.

You liked one of his posts, so I assumed you might have had a similar issue. Apologies!