GPT-4 Turbo regression over GPT-4

We’re experimenting with GPT-4 Turbo.
We have a simple chat completion request that has always returned a correct result, but with “gpt-4-1106-preview” we’re getting a wrong one.
We ask for a JS function without any formatting: gpt-4 always succeeds, but with gpt-4-turbo we’re seeing weird formatting (the response starts with ```javascript).


    openAIClient createChatCompletion response: return function(x, y, z) {
        return [];
    }

    return function(x, y, z) {
        // code removed
        return spawnPositions;
    }
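Until the formatting issue settles, one workaround is to strip a leading Markdown code fence from the completion before using it as raw code. A minimal sketch; the helper name `stripCodeFence` is hypothetical, not part of any SDK:

```javascript
// Hypothetical helper: if the model wraps its answer in a Markdown
// code fence (e.g. ```javascript ... ```), return only the inner code;
// otherwise return the trimmed text unchanged.
function stripCodeFence(text) {
  const trimmed = text.trim();
  const match = trimmed.match(/^```[\w-]*\n([\s\S]*?)\n?```$/);
  return match ? match[1] : trimmed;
}
```

This keeps the happy path (plain code with no fence) untouched, so it is safe to apply to responses from both models.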

I noticed major regressions for the first few days after launch, but it settled out for me, and now I’m seeing better performance from Turbo. I’m finding its code is better, and its reasoning, especially when debugging, is a lot better.

We’ve noticed exactly the same with GPT-4 Turbo, especially in cases where a structured response format (e.g. JSON) is required.

Simply put, I have the feeling that it ignores instructions, and this can’t be compensated for by adjusting e.g. temperature.
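For the JSON case specifically, gpt-4-1106-preview supports the `response_format` parameter in the Chat Completions API, which constrains the model to emit syntactically valid JSON (the prompt itself must also mention JSON, or the API rejects the request). A minimal request-payload sketch; the message contents here are illustrative, and sending it via an assumed Node `openai` client is shown only as a comment:

```javascript
// Chat Completions payload using JSON mode on gpt-4-1106-preview.
// Note: at least one message must mention "JSON", or the API errors out.
const payload = {
  model: "gpt-4-1106-preview",
  response_format: { type: "json_object" }, // forces valid JSON output
  temperature: 0,
  messages: [
    { role: "system", content: "Reply with a JSON object only." },
    { role: "user", content: "Describe the function's inputs as JSON." },
  ],
};
// Usage (assumed v4 SDK client):
// const res = await openAIClient.chat.completions.create(payload);
// const data = JSON.parse(res.choices[0].message.content);
```

JSON mode guarantees well-formed JSON, not adherence to a particular schema, so instruction-following on field names still matters.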