New models too slow - strategy for falling back to old models?

Hi all - I’m finding that response times from the new GPT-4 model via the API are just too unreliable: the same prompt can come back quickly or take more than 5 minutes.
I see the suggestion in the forum is to fall back to using old models until OpenAI can provide a usable service.
I understand that the completion function differs between the old and new models, and I gather I have to downgrade to a previous library version to use the old models - is that correct?
Can anyone suggest something that allows an easier config switch to swap between old and new models?
Thanks for any suggestions :wink:

Depends on your code. I’m using both the old and new models with version 1.2.3 of the official library and it works fine.
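To illustrate the kind of config switch the OP asked about, here is a minimal sketch of a fallback strategy using one code path for both models. The model names are assumptions (check which ones you have access to), and `call_model` is injected so the strategy is testable; in real code it would wrap `client.chat.completions.create` from the `openai` library.

```python
# Sketch: try the primary (new) model, fall back to the old one on any
# error, e.g. a timeout. Model names below are assumptions -- swap in the
# models your account actually has, ideally loaded from config or env vars.

PRIMARY_MODEL = "gpt-4-1106-preview"   # assumed new-model name
FALLBACK_MODEL = "gpt-3.5-turbo"       # assumed old-model name

def complete_with_fallback(call_model, messages,
                           primary=PRIMARY_MODEL, fallback=FALLBACK_MODEL):
    """Try the primary model; on any exception, retry on the fallback."""
    try:
        return call_model(model=primary, messages=messages)
    except Exception:
        return call_model(model=fallback, messages=messages)
```

In real use you would also set a request timeout on the slow call so the fallback actually triggers instead of hanging.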

Thanks - I’m using 1.2.4; I didn’t realise I could use 1.2.3 with the latest models.

I believe 1.2.4 should work with the previous models just fine.

Thanks very much Tony - you’re a Champ! When I got the response format error I assumed the whole function was incompatible - removing that param does fix it.
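For anyone hitting the same error: a minimal sketch of keeping one code path and only adding `response_format` for models that accept it. The `SUPPORTS_RESPONSE_FORMAT` set is an assumption - adjust it to the models you actually use.

```python
# Build the request kwargs once, including response_format only for models
# known to accept it, so older models don't reject the request.
# The set below is an assumption -- update it for your own model list.

SUPPORTS_RESPONSE_FORMAT = {"gpt-4-1106-preview", "gpt-3.5-turbo-1106"}

def build_request(model, messages):
    kwargs = {"model": model, "messages": messages}
    if model in SUPPORTS_RESPONSE_FORMAT:
        kwargs["response_format"] = {"type": "json_object"}
    return kwargs
```

The resulting dict can be splatted straight into the completion call, e.g. `client.chat.completions.create(**build_request(model, messages))`.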

No problem man, we are all here to learn from each other in this new exciting world!
