GPT-3.5 responds with code in chat use cases

I wanted to check whether others have seen this too. When I use GPT-3.5 for chat use cases, the model sometimes responds with Python code. My guess is that because the foundation model was fine-tuned on code, the weights and biases of the earlier layers were disturbed, so it occasionally gets confused about how to respond. It would be a great help to know if others have faced the same issue.
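For context, the kind of chat call involved looks roughly like the minimal sketch below, assuming the pre-1.0 openai Python SDK (the prompt text is a stand-in, since the real one is confidential). Pinning the expected format in the system message is one partial mitigation:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # Constraining the format up front reduces (but does not eliminate)
        # the chance of a code-shaped reply:
        {"role": "system", "content": "You are a helpful assistant. Always answer in plain prose; never respond with code."},
        {"role": "user", "content": "What information do you need from me to proceed?"},
    ],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])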

Hi Joyee,

Do you have any examples you can post of this happening?

It is sporadic, and I can't share the exact output since it is confidential, but what it spits out looks like the snippet below. I have implemented CoT (chain-of-thought), so some of these fragments show up in the thought and action steps as well:

def need_to_find_information():

Also, the content is sometimes shown within a code block (the </> formatting). When I use text-davinci, the output is fine.
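A possible workaround in the meantime is to strip any code formatting from the reply before it reaches the user. A minimal sketch, assuming the model's reply is already in a string; strip_code_fences is a hypothetical helper, not part of any SDK:

import re

def strip_code_fences(text: str) -> str:
    # Drop fenced blocks (``` ... ```) the model sometimes emits.
    text = re.sub(r"```.*?```", "", text, flags=re.DOTALL)
    # Drop inline code spans (`...`) as well.
    text = re.sub(r"`[^`]+`", "", text)
    return text.strip()

reply = "Sure!\n```python\ndef need_to_find_information():\n    pass\n```\nAnything else?"
print(strip_code_fences(reply))  # prints "Sure!" and "Anything else?" with the code removed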