Still Chat Completion API extremely slow and hanging

Hello,

I've been having the same issue for the past two days, as described in this post:
chat-completion-api-extremely-slow-and-hanging/524604

Everything was running fine until two days ago; my code and API requests were working smoothly. For the past two days, however, I've been hitting the issue above. I'm also Tier 3.

My code looks like this:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# PROMPT and contents are defined elsewhere in my script
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},  # JSON mode
    temperature=0.9,
    messages=[
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": str(contents)},
    ],
)

I run the request in a loop to process multiple files, and it mostly hangs.
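For reference, the loop is roughly like the sketch below. It is simplified, and the timeout / max_tokens guards are things I'm experimenting with to stop the hangs, not part of my original script; FILES, the 120-second timeout, and the 2048-token cap are placeholders:

from openai import OpenAI, APITimeoutError

client = OpenAI()

PROMPT = "..."                      # my real system prompt goes here
FILES = ["file1.txt", "file2.txt"]  # placeholder list of input files

for path in FILES:
    with open(path, encoding="utf-8") as f:
        contents = f.read()
    try:
        response = client.chat.completions.create(
            model="gpt-4-1106-preview",
            response_format={"type": "json_object"},
            temperature=0.9,
            max_tokens=2048,  # cap the output so a runaway response cannot stream forever
            timeout=120,      # per-request timeout (seconds) instead of waiting indefinitely
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": contents},
            ],
        )
    except APITimeoutError:
        print(f"{path}: request timed out, will retry later")
        continue
    print(response.choices[0].message.content)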


I see you have response_format={ "type": "json_object" } specified.

That gives you a different model, but one that will print system-hanging garbage if you don't specify the JSON structure, keys/values, and output you desire.

Almost as if "guaranteed JSON" was a big fib, and you still need just as much prompting as before.

So that's where I'd start: the inputs.
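For example, something along these lines in the system prompt; the keys and structure here are made up, so adapt them to whatever your files actually contain:

PROMPT = """You are a data extraction assistant.
Respond with a single valid JSON object only, with no prose before or after it.
Use exactly this structure (example keys, adapt to your data):
{
  "title": "<string>",
  "summary": "<string, one or two sentences>",
  "keywords": ["<string>", "..."]
}
If a field cannot be determined, set it to null. Do not add extra keys."""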

Hi

In your prompt, do you explicitly state that the output must be in JSON format?

Did you check the post below?
chat-completion-api-extremely-slow-and-hanging/524604

Yes, absolutely, I state that, and I also state the expected format of the JSON.

I used to have the same behaviour for 3.5-1106, but only when I wasn’t clear enough in the system prompt about the output format.

I have the same issue. It is very unreliable. I may switch to another model provider if this continues for too long.

I switched to 3.5-1106 for now.
