I had repeated 400 errors last night getting Whisper to work through the OpenAI Python bindings. So I reverted to the curl example in the docs, which worked, and then just made the call directly from Python (pythonized the curl call). Same thing with ChatGPT Turbo: I went straight to code with no bindings (after reverse engineering the curl call).

For me it’s better to have low dependency overhead anyway. Win-win.

So I have two differences from your code:

Put this above your user-agent header

req.Headers.Add("Accept", "application/json");

Also, I am not using the ChatCompletionRequest object. Instead I define the request as:

List<OpenAIChat> _chat = new List<OpenAIChat>();
_chat.Add(new OpenAIChat() { role = "user", content = "Hello!" });

var request = new { messages = _chat, model = "gpt-3.5-turbo", stream = true };

The definition for the OpenAIChat class is:

public sealed class OpenAIChat
{
    public string role { get; set; }

    public string content { get; set; }
}

Otherwise our code is the same (you may want to get rid of the sealed part of the class definition)
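
For reference, here is roughly what that anonymous request object serializes to with Json.NET (a quick sketch, untested):

// Sketch: serialize the anonymous request object to JSON with Newtonsoft.Json.
var request = new { messages = _chat, model = "gpt-3.5-turbo", stream = true };
string body = JsonConvert.SerializeObject(request);
// body should look like:
// {"messages":[{"role":"user","content":"Hello!"}],"model":"gpt-3.5-turbo","stream":true}
// Note that messages is a JSON array, not a string containing an array.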

Yes, OpenAI (Logan) suggested in an earlier post on this same topic trying a basic curl call, like this [you can modify it as you like, @Securigy (I have not tested it)].

curl https://api.openai.com/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -d '{
  "model": "gpt-3.5-turbo",
  "messages": [{"role":"user", "content": "How are you?"}],
}'

HTH

:slight_smile:

Well, curl is not helping me in my application… besides, if curl works, C# should too…


Folks, as @curt.kennedy also mentioned, use curl to ensure there are no underlying API or account problems.

It’s a basic API troubleshooting step that helps with debugging.


Just start with curl to rule out account issues. If that works, then you have a syntax error somewhere; either fix it or go straight to code without the library, like I did.


I tried with the ‘Accept’ header, same old story…
I tried without any parameters except model, stream, and messages.
From the jsonContent string, the ‘messages’ field and the whole request body are:

{"messages":"[{\"role\":\"system\",\"content\":\"You are an avid story teller. You specialize in real life stories.\"},{\"role\":\"user\",\"content\":\"Traveling Canada\"}]","model":"gpt-3.5-turbo","stream":true}

So, this is beyond any objects that I use before that… maybe I should use PostAsync…

            string jsonContent = JsonConvert.SerializeObject(request, new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore });
            var stringContent = new StringContent(jsonContent, UnicodeEncoding.UTF8, "application/json");
            string url = String.Format("{0}/chat/completions", Api.BaseUrl);

            Log.VerboseFormat("Calling SendAsync with URL: {0}, and Engine: {1}", url, request.Model);

            using (HttpRequestMessage req = new HttpRequestMessage(HttpMethod.Post, url))
            {
                req.Content = stringContent;
                req.Headers.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", Api.Auth.ApiKey);
                req.Headers.Add("Accept", "application/json");
                req.Headers.Add("User-Agent", "GSS/OpenAI_GPT3");

                var response = await client.SendAsync(req, HttpCompletionOption.ResponseHeadersRead);

                if (response.IsSuccessStatusCode)

Just a feeling that the API is still half-baked…

I’m fairly sure the issue is the quotes before and after the square brackets.

It is an array, not a string representation of an array.

It should look like this:

{"messages":[{"role":"user","content":"Generate an extensive list of subtopics that need to be covered for plumb and the context provided below. Provide the subtopics in list format.\n\n"}],"model":"gpt-3.5-turbo","user":"U2815","stream":true}

Copied and pasted from my app, hence the different text.

Have a look at my message above to see how to get it as an array (using code similar to this)

List<OpenAIChat> _chat = new List<OpenAIChat>();
_chat.Add(new OpenAIChat() { role = "user", content = "Hello!" });

etc…


Your observation makes a lot of sense. This is caused by:
string jsonContent = JsonConvert.SerializeObject(request, new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore });

but the real reason is probably that messages in my object is a String:
public String messages { get; set; }

So what should it be instead, just Object?

It’s actually an array or a List (you can use object if you cast the data properly).
But you can’t use string.

I made a class called OpenAIChat

It has two properties: role and content.

Then I made a List of the class with List<OpenAIChat>

Finally, I added OpenAIChat objects to the list and passed this into messages

So messages would be defined as

public List<OpenAIChat> messages { get; set; }

Have a look at the code example a few messages back
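
Putting those pieces together, if you prefer a named class over the anonymous object, something like this should serialize to the right shape (just a sketch with an illustrative class name, untested):

using System.Collections.Generic;

// Minimal request body for /v1/chat/completions (sketch).
// Lowercase property names so the default Json.NET serialization
// matches the field names the API expects.
public class OpenAIChatRequest
{
    public List<OpenAIChat> messages { get; set; }

    public string model { get; set; }

    public bool stream { get; set; }
}

Then JsonConvert.SerializeObject(new OpenAIChatRequest { messages = _chat, model = "gpt-3.5-turbo", stream = true }) produces the same array-shaped messages field as the anonymous object shown earlier.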


Hey @Securigy

We understand your frustration, but the API works fine. I have been using it for the past 24 hours without a hitch, but the difference is I’m coding in Ruby and you are using C#.

This means the issue, as @raymonddavey has been trying to help you with, is that you have some bug in your C# code.

However, since you will not take one minute to confirm there are no underlying issues using curl, as @curt.kennedy and I have suggested, then of course there might be another underlying issue.

Why not just take one minute of your time to try curl and post back the results to ensure there are zero underlying issues?

Thanks

:slight_smile:

Yep, you are absolutely right… I see I already have a similar one called Choices in the CompletionResult object.
Great catch!! My deep appreciation!


Amending here: although I get Success, the first line that I am reading is this:

data: {"id":"chatcmpl-6psQJ649lGZpci17HgPLAR3tDkYxS","object":"chat.completion.chunk","created":1677821951,"model":"gpt-3.5-turbo-0301","choices":[{"delta":{"role":"assistant"},"index":0,"finish_reason":null}]}

It bears no resemblance to the documentation:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?",
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}

Because the OpenAI example response you posted is not streaming.

That response is based on stream: false.

Yes, the response is correct.

They have not documented the streaming response of the new chat API.

Here are some C# class definitions that will help:

    public class CompletionResultChat
    {
        /// <summary>
        /// The type of object returned, e.g. "chat.completion.chunk"
        /// </summary>
        [JsonProperty("object")]
        public string ObjectType { get; set; }

        /// <summary>
        /// Which model was used to generate this result.  Be sure to check <see cref="Engine.ModelRevision"/> for the specific revision.
        /// </summary>
        [JsonProperty("model")]
        public Engine Model { get; set; }

        /// <summary>
        /// The Completions returned by the API.  Depending on your request, there may be 1 or many data.
        /// </summary>
        [JsonProperty("choices")]
        public List<StreamChat> Choices { get; set; }

    }

and this one for the Choices

   public class StreamChat
    {
        /// <summary>
        /// The main completion
        /// </summary>
        [JsonProperty("delta")]
        public StreamDelta Delta { get; set; }

        /// <summary>
        /// Did it finish the completion
        /// </summary>
        [JsonProperty("finish_reason")]
        public string Finish_Reason { get; set; }

        /// <summary>
        /// If multiple Completion data were returned, this is the index within the various data
        /// </summary>
        [JsonProperty("index")]
        public int Index { get; set; }

    }

and StreamDelta

    public class StreamDelta
    {
        [JsonProperty("content")]
        public string Content { get; set; }
    }

If the response is "[DONE]", then the output is complete. This hasn’t changed from the old completions endpoint.

Otherwise, deserialize the response into a CompletionResultChat object:

CompletionResultChat deserial = JsonConvert.DeserializeObject<CompletionResultChat>(line);

The text will be in

deserial.Choices[0].Delta.Content

Edit: For clarity’s sake, “line” is part of the streaming response you are getting back in your ReadLineAsync method
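
For anyone following along, a rough sketch of that read loop might look like this (untested; it assumes the classes above, Newtonsoft Json.NET, and the response object from the earlier SendAsync call):

// Sketch of reading the streamed chunks line by line (untested).
// Assumes: using Newtonsoft.Json; using System.IO; plus the classes defined above.
using (var stream = await response.Content.ReadAsStreamAsync())
using (var reader = new StreamReader(stream))
{
    string line;
    while ((line = await reader.ReadLineAsync()) != null)
    {
        if (string.IsNullOrWhiteSpace(line)) continue;              // skip blank keep-alive lines

        if (line.StartsWith("data: ")) line = line.Substring(6);    // strip the SSE "data: " prefix

        if (line == "[DONE]") break;                                 // stream is finished

        var chunk = JsonConvert.DeserializeObject<CompletionResultChat>(line);
        if (chunk?.Choices != null && chunk.Choices.Count > 0)
        {
            string token = chunk.Choices[0].Delta?.Content;
            if (!string.IsNullOrEmpty(token))
                Console.Write(token);                                // or append to your own output buffer
        }
    }
}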


Is that a complete response, or simply what you implemented? I see that "created" is missing in CompletionResultChat… and there is "finish_reason"… and no "id".

It is the minimum to make it work. There are other properties you can include.
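
If you want them, adding properties along these lines to CompletionResultChat should pick up the extra fields from each chunk (a sketch, untested):

        // Optional extras (sketch): the id and created fields visible in each chunk.
        [JsonProperty("id")]
        public string Id { get; set; }

        [JsonProperty("created")]
        public long Created { get; set; }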

Yes, I hope I have it all now, although during debugging I hit exceptions that seem to result from breaking into the debugger and inspecting the lines…

Just compared my first result to davinci (the prompt for gpt-3.5-turbo has not changed since I started the thread, and the prompt for text-davinci-003 is "Traveling Canada"). Huge difference.

“gpt-3.5-turbo”

Canada is a country that truly offers something for everyone. Its vast landscapes, diverse cultures and friendly people make it a popular destination for travelers from around the world. I had the opportunity to travel across Canada and experience some of its most beautiful and interesting places.

One of my favorite places in Canada is Banff National Park in Alberta. It’s known for its turquoise lakes, snow-capped mountains and natural hot springs. I spent several days hiking through the park’s trails, including Johnston Canyon to see the impressive waterfall and Sulphur Mountain for spectacular panoramic views.

Another highlight of my trip was visiting Quebec City in Quebec, where I was transported back in time to a European-like atmosphere with cobblestone streets and historic buildings. The city’s most famous landmark is the Chateau Frontenac, a grand hotel overlooking the St. Lawrence River.

Toronto also left an impression on me with its multicultural neighborhoods such as Chinatown, Greektown and Little Italy. I enjoyed indulging in different cuisines and getting lost in Kensington Market’s unique shops.

Lastly, Vancouver Island located in British Columbia captivated me with its rugged coastline, remote beaches and ancient rainforests. I embarked on a whale watching tour off Tofino where we spotted orcas swimming near our boat.

Traveling across Canada taught me that this country offers endless possibilities - from urban life to adventure-filled nature experiences - leaving visitors wanting more of what this stunning nation has to offer.

“text-davinci-003”

Traveling Canada is an incredible experience. From stunning nature and fascinating culture to world-class cities, there’s something for everyone! Explore Canada’s vast national parks, take in the beauty of the Rockies, or take a road trip along the iconic Trans-Canada Highway. Get close to nature with glacier trekking, paddle boarding, or whale watching and explore diverse Indigenous cultures such as Inuit and First Nations. Enjoy vibrant and vibrant cities such as Toronto or Vancouver that host modern art galleries, dynamic restaurants, and famous live music venues. No matter what your travel style is, Canada has something for you!

What is your experience with the size of the system role part of the request?

I am again getting BadRequest for:

{"messages":[{"role":"system","content":"You are a writer tasked with creating engaging articles, blogs, and descriptions based on user prompts. Your goal is to use your creativity and writing skills to craft compelling content that captures the user's attention and provides them with valuable information. Think about how you can use storytelling techniques, descriptive language, and a clear writing style to bring your articles to life and engage your readers. Whether you're writing a product description or a blog post, your writing should inform, entertain, and leave a lasting impression on your audience. So, how will you use your writing skills to craft content that resonates with your readers and achieves your client's goals?"},{"role":"user","content":"Traveling Canada"}],"model":"gpt-3.5-turbo","max_tokens":4000,"temperature":0.9,"top_p":1.0,"presence_penalty":0.5,"frequency_penalty":0.5,"n":1,"stream":true}

Are there any known limits on the size?

When I use a short “system” prompt, it works fine:

{"messages":[{"role":"system","content":"You are a very experience story writer."},{"role":"user","content":"Traveling Canada"}],"model":"gpt-3.5-turbo","max_tokens":4000,"temperature":0.9,"top_p":1.0,"presence_penalty":0.5,"frequency_penalty":0.5,"n":1,"stream":true}

I didn’t use the system prompt in the end.

I’ve been playing a bit with temperature to see how close I can get it to Davinci 3. I’m finding that the new model is a bit harder to keep on track.
