Drop in quality of 3.5-turbo-16k

Hello

Has anyone observed a recent drop in the quality of gpt-3.5-turbo-16k's results? We have had a pipeline using it since early December last year and never had an issue. We have started seeing a drop in output quality; it's as if the model no longer understands the question or has become lazy. We use it to extract terms from a document, and since last Thursday it has only been outputting "1".
Could this be linked to the model's deprecation?
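For reference, here is a minimal sketch of the kind of call our pipeline makes, using the current openai Python SDK. The prompt and the pinned snapshot name (gpt-3.5-turbo-1106 here) are illustrative rather than our exact setup; one thing we are considering is pinning an explicit snapshot instead of the rolling gpt-3.5-turbo-16k alias, to rule out changes tied to deprecation:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder document; in the real pipeline this comes from our ingestion step.
document_text = "Example document text to extract terms from..."

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",  # pinned snapshot instead of the rolling 16k alias (illustrative choice)
    temperature=0,               # deterministic-ish output for extraction tasks
    messages=[
        {
            "role": "system",
            "content": "Extract the key terms from the document and return them as a comma-separated list.",
        },
        {"role": "user", "content": document_text},
    ],
)

print(response.choices[0].message.content)
```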

Thanks
