I’ve also experienced degradation when using GPT-3.5 1106.
Most of my old prompts that worked with 0613 simply no longer do.
To add just one example, here's a prompt that uses function calling: I pass it a person's name and ask two basic questions:
{
  "model": "gpt-3.5-turbo-1106",
  "temperature": 0.2,
  "messages": [
    {
      "role": "system",
      "content": "The user will give a JSON containing a name. You need to determine if that name is a valid person's full name. A valid full name contains at least one fully written first name and one full written last name."
    },
    {
      "role": "user",
      "content": "{\"name\":\"John Doe\"}"
    }
  ],
  "functions": [
    {
      "name": "is_person_name",
      "parameters": {
        "properties": {
          "is_person_name": {
            "description": "Does the value of 'name' look like a person's valid full name (contains both a complete first and last name)?",
            "type": "boolean"
          },
          "name_count": {
            "description": "How many full names are there?",
            "type": "number"
          }
        },
        "required": [
          "is_person_name",
          "name_count"
        ],
        "type": "object"
      }
    }
  ]
}
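(For anyone who wants to reproduce this, here's a minimal Python sketch of the same call. It assumes the legacy 0.x openai SDK, which is what this functions-style payload corresponds to, and an OPENAI_API_KEY in the environment.)

import json
import openai  # legacy 0.x SDK; reads OPENAI_API_KEY from the environment

# Same payload as above, sent through the SDK so the two models can be
# compared by just swapping the model name.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-1106",  # swap in "gpt-3.5-turbo-0613" to compare
    temperature=0.2,
    messages=[
        {
            "role": "system",
            "content": (
                "The user will give a JSON containing a name. You need to "
                "determine if that name is a valid person's full name. A valid "
                "full name contains at least one fully written first name and "
                "one fully written last name."
            ),
        },
        {"role": "user", "content": json.dumps({"name": "John Doe"})},
    ],
    functions=[
        {
            "name": "is_person_name",
            "parameters": {
                "type": "object",
                "properties": {
                    "is_person_name": {
                        "type": "boolean",
                        "description": "Does the value of 'name' look like a person's valid full name (contains both a complete first and last name)?",
                    },
                    "name_count": {
                        "type": "number",
                        "description": "How many full names are there?",
                    },
                },
                "required": ["is_person_name", "name_count"],
            },
        }
    ],
)
print(response["choices"][0]["message"])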
And 1106 replies with:
"message": {
"role": "assistant",
"content": null,
"function_call": {
"name": "is_person_name",
"arguments": "{\"name\":\"John Doe\"}"
}
},
"finish_reason": "function_call"
Note that 1106 simply echoed the user's input back as the function arguments instead of answering the two questions in the schema. The old 0613, by contrast, was consistently good on this task.
And this is just one of many prompts that now fail.
So I'm sticking with 0613 for now.
For completeness, here is the correct 0613 response:
"message": {
"role": "assistant",
"content": null,
"function_call": {
"name": "is_person_name",
"arguments": "{\n \"is_person_name\": true,\n \"name_count\": 1\n}"
}
}
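If you'd rather catch this failure in code than by eyeballing responses, checking the returned arguments against the schema's required keys is enough. This helper is my own hypothetical sketch, not anything from the SDK:

import json

REQUIRED_KEYS = {"is_person_name", "name_count"}

def arguments_are_valid(message: dict) -> bool:
    """Return True only if the function_call arguments parse as JSON
    and contain every key the schema marks as required."""
    call = message.get("function_call")
    if call is None:
        return False
    try:
        args = json.loads(call["arguments"])
    except (KeyError, json.JSONDecodeError):
        return False
    return REQUIRED_KEYS.issubset(args)

# The 1106 reply above fails this check (it echoed {"name": "John Doe"}),
# while the 0613 reply passes.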