JSON responses from gpt-3.5-turbo-1106 much shorter than without JSON mode?

I can do a test a little later today and share the results, but my guess would be that nothing will happen related to the behaviour I mentioned. What’s your hypothesis, Curt?

I don’t think it influences the model, but I’m not sure where this parameter comes into play, so test and make sure.
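To make the test concrete, here is a minimal A/B sketch that builds two otherwise-identical chat-completion requests, differing only in the `response_format` field that enables JSON mode. This assumes the official `openai` Python SDK (v1.x) and `gpt-3.5-turbo-1106`; the prompt and helper name are illustrative, not from the thread.

```python
def build_request(prompt: str, json_mode: bool) -> dict:
    """Build chat-completion kwargs; only `response_format` differs between runs."""
    kwargs = {
        "model": "gpt-3.5-turbo-1106",
        "messages": [
            # JSON mode requires the word "JSON" to appear in the messages.
            {"role": "system", "content": "Answer in JSON."},
            {"role": "user", "content": prompt},
        ],
    }
    if json_mode:
        # This is the documented switch for JSON mode.
        kwargs["response_format"] = {"type": "json_object"}
    return kwargs


# Illustrative usage (requires an API key, so not run here):
# from openai import OpenAI
# client = OpenAI()
# for mode in (False, True):
#     resp = client.chat.completions.create(
#         **build_request("List five facts about Mars.", mode)
#     )
#     print("json_mode =", mode, "->", len(resp.choices[0].message.content), "chars")
```

Comparing the content lengths across several prompts, with everything else held constant, should show whether JSON mode itself shortens the output.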

I know that GPT-4-Turbo is really long-winded compared to GPT-4, but I haven’t done much with the GPT-3.5 version you are working with.

Worst case, use GPT-4-Turbo :rofl: :man_shrugging: Assuming you want longer responses, that is. In general, I don’t like them.


For the projects I’m working on now, the response times of the gpt-4 family are unfortunately prohibitively long.


From my tests GPT-4-Turbo is 20-40% faster than vanilla GPT-4.

But GPT-4-32k is king and is 100% faster than GPT-4.


How did you get access to GPT-4-32k, man?

I could access it through OpenRouter obviously but I’m reluctant to use it in production.

I don’t know exactly. Probably for being a forum Regular.


@curt.kennedy I just ran this test — same results, as expected.
