C# Bug: Trying new ChatGPT API on gpt-3.5-turbo model (bug in developer code)

I had GPT-3 API working, so now I am adding ChatGPT API (3.5) and having problems:

Basically I have the same code as I had for completions, with the following changes:

  • the endpoint changed to "https://api.openai.com/v1chat/completions"
  • model changed to "gpt-3.5-turbo"
  • prompt changed to: "[{"role":"system","content":"You are an avid story teller. You specialize in science fiction stories."},{"role":"user","content":"Traveling Mars"}]"

The rest of the parameters (temperature, etc.) are the same, except that unused parameters were removed… and yet,
I am getting 400 Bad Request.

Any ideas?

Can you post some of the code?
For example, has the prompt property been renamed to messages, containing the array you mention?

curl https://api.openai.com/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role":"system","content":"You are an avid story teller. You specialize in science fiction stories."},{"role":"user","content":"Traveling Mars"}]
  }'

Also maybe it was a typo in your post, but you seem to be missing a slash in your URL, between v1 and chat.


There’s an error message too that tells you what you got wrong. My mistake was adding prompt when I converted it from old code.

Here is the code:

            request = new ChatCompletionRequest(request) { Stream = true };
            HttpClient client = new HttpClient();

            ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
            // Note: this disables server certificate validation entirely.
            ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };

            string jsonContent = JsonConvert.SerializeObject(request, new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore });
            var stringContent = new StringContent(jsonContent, Encoding.UTF8, "application/json");
            string url = String.Format("{0}/chat/completions", Api.BaseUrl);
            using (HttpRequestMessage req = new HttpRequestMessage(HttpMethod.Post, url))
            {
                req.Content = stringContent;
                req.Headers.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", Api.Auth.ApiKey);
                req.Headers.Add("User-Agent", "GSS/OpenAI_GPT3");

                var response = await client.SendAsync(req, HttpCompletionOption.ResponseHeadersRead);

                if (response.IsSuccessStatusCode)

The ChatCompletionRequest looks like this:
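For illustration only (this is not the OP's actual class, and all names here are hypothetical), a request DTO that serializes messages as the array the API expects could look like this, assuming Newtonsoft.Json:

```csharp
// Hypothetical sketch of a chat request DTO; names are illustrative.
public class ChatMessage
{
    [JsonProperty("role")]
    public string Role { get; set; }

    [JsonProperty("content")]
    public string Content { get; set; }
}

public class ChatCompletionRequest
{
    [JsonProperty("model")]
    public string Model { get; set; } = "gpt-3.5-turbo";

    // Must serialize as a JSON array of objects, not as a single string.
    [JsonProperty("messages")]
    public List<ChatMessage> Messages { get; set; }

    [JsonProperty("stream")]
    public bool Stream { get; set; }
}
```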

I must say I’m wondering if I’m doing something wrong because I’m not finding the system content value as influential as I expected?

The chat API docs state that the “system” content is “gentle” and if you want strong influence then use “user” content.




@Securigy can you post the content of the jsonContent string if you are still having a problem?

I’m using C# with streaming and it works fine. I will cross check against my code

Try reading the error message in response? It should be in response.data.error.message
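In the C# code above, a sketch for surfacing that message might look like this (assuming the standard `{"error": {"message": ...}}` error envelope and Newtonsoft.Json's JObject):

```csharp
// Sketch: log the API's error message on a non-success status code.
if (!response.IsSuccessStatusCode)
{
    string body = await response.Content.ReadAsStringAsync();
    var error = JObject.Parse(body)["error"];
    Console.WriteLine($"{(int)response.StatusCode}: {error?["message"]}");
}
```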

This actually should go to Raymond Davey:

{"messages":"[{\"role\":\"system\",\"content\":\"You are an avid story teller. You specialize in real life stories.\"},{\"role\":\"user\",\"content\":\"Traveling Canada\"}]","model":"gpt-3.5-turbo","max_tokens":4000,"temperature":0.9,"top_p":1.0,"presence_penalty":0.5,"frequency_penalty":0.5,"n":1,"stream":true}
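Note that in this payload messages is serialized as a JSON string (hence the escaped quotes) rather than as a JSON array of objects, which would itself cause a 400. For comparison, the expected shape is:

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are an avid story teller. You specialize in real life stories."},
    {"role": "user", "content": "Traveling Canada"}
  ],
  "stream": true
}
```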

Try removing max_tokens from your JSON. I just let it use the default max value.

If that doesn’t work, remove all the parameters from your JSON except model, stream, and messages to see if that helps.

Then make sure BaseUrl doesn’t have a trailing /
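A stripped-down body along those lines (illustrative values) would be:

```json
{
  "model": "gpt-3.5-turbo",
  "stream": true,
  "messages": [{"role": "user", "content": "Traveling Mars"}]
}
```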

I’m confused a bit. I thought the OP @Securigy was solving this problem (from his original post):

I don’t think the params are causing a 400 Bad Request problem.

The JSON params are valid, per the JSON spec, FYI:



Hi @Securigy

I’m not sure if your issue is fixed. I think you were sending me the JSON string I requested in my original post. If that is the case, put a try block around this line and get the Exception message.

Also, can you confirm the JSON string was copied from the Watch window (or similar)

I don’t know who confused you… me?
I simply stated that I have a problem because I use all the ‘right’ parameters and am getting Bad Request in response…

Yes, I was sending the JSON string to you. Trying now with the default max_tokens, which I am not sure what the value is…
Also, this call does not throw an exception - it just returns 400, Bad Request.

Error 400 can happen for a lot of reasons.

For example, when your account at OpenAI hits a hard limit (a possibility), or a request is too long (not in this case), or max_tokens is too high (a small possibility).

try {
    var response = await client.SendAsync(req, HttpCompletionOption.ResponseHeadersRead);
} catch (Exception ex) {
    Console.WriteLine(ex.Message);
}

What is in response.Content with the changes shown above?

I just tried with setting max_tokens to null - same result…

Here is the response.Content - a couple of strange things there…

No @raymonddavey

I was replying to @raymonddavey, who was asking you to change or remove params from a valid JSON params file when you were having a 400 error; normally it is not valid params which, when “changed” or “fixed”, will solve a 400 error.

If you search the community using the magnifying glass icon in the top right corner, @Securigy , and search for “400 errors”, you will see that the majority of these errors are caused by either:

  • Issues with account credits and limits.
  • Issues with malformed JSON params or an invalid API call.

In other words, tweaking params which are valid (and without JSON errors in your params data) will more than likely not solve the 400 errors. You need to confirm your account is working, and that your code is not throwing errors or sending malformed params.



As I mentioned, my requests using the GPT-3 API with “text-davinci-003” work just fine, so it does not look like I am hitting the hard limit…

I checked my account to be sure: $12.85 out of $18.00 before they start charging me…


I had repeated 400 errors last night getting Whisper to work from the OpenAI Python bindings. So I reverted back to the curl example in the docs, and it worked; then I just made a direct call from the Python libraries (pythonized the curl call). Same thing with ChatGPT Turbo: I went straight to code with no bindings (after reverse engineering the curl call).

For me it’s better to have low dependency overhead anyway. Win-win.