Why Is GPT-4-1106-preview Giving Unrelated Outputs After Working Fine Before?

When I request a function call from GPT-4-1106-preview, it generates output that has no connection to the input I provide. In previous runs it worked correctly, and I haven't made any changes to the code. Can someone please explain why this is happening?

Below is my code:
from typing import List

from pydantic import BaseModel, Field
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain.utils.openai_functions import convert_pydantic_to_openai_function


class json_data(BaseModel):
    """Retrieved data from JSON."""
    key: str = Field(description="key of the flattened JSON data")
    value: str = Field(description="corresponding value for the key")


class Information(BaseModel):
    """List of extracted data."""
    chart_raw_data: List[json_data] = Field(
        description="List of info about the raw data which is required to process before drawing a given chart"
    )


system_message1 = """As a data scientist, your role involves analyzing flattened JSON data for plotting.

Your first step is to make decisions on which specific statistical data points are suitable for visual representation on graphs.
Your objective is to identify key elements that can be quantified and represented clearly on plots.
Consider the characteristics of the data that will contribute meaningfully to your visual analysis.
"""


def get_plottable_data(file_path):
    # read_json_file and flatten_json are helper functions defined elsewhere
    raw_json = read_json_file(file_path)
    flat_json = flatten_json(raw_json)

    # Force the model to call the Information function on every request
    model = ChatOpenAI(model_name='gpt-4-1106-preview', temperature=0)
    extraction_functions = [convert_pydantic_to_openai_function(Information)]
    extraction_model = model.bind(functions=extraction_functions, function_call={"name": "Information"})

    prompt = ChatPromptTemplate.from_messages([
        ("system", system_message1),
        ("human", "{input}")
    ])

    extraction_chain = prompt | extraction_model | JsonOutputFunctionsParser()
    response = extraction_chain.invoke({"input": flat_json})
    return response

Likely related to this.

The model's output quality when used with tools (function calling) appears to have degraded.
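
One way to narrow this down is to inspect the raw function-call payload before the parser runs. The sketch below is only an illustration, assuming the same prompt, extraction_model, and flat_json objects from the question; it prints the arguments the model actually returned, so you can see whether the model itself is producing unrelated data or the parsing step is at fault.

# Minimal debugging sketch, assuming the prompt, extraction_model and
# flat_json from the question above are in scope.
raw_message = (prompt | extraction_model).invoke({"input": flat_json})
function_call = raw_message.additional_kwargs.get("function_call", {})
print(function_call.get("name"))       # should be "Information"
print(function_call.get("arguments"))  # the raw JSON string the model returned

If the arguments already look unrelated to your input at this point, the problem is on the model side rather than in the LangChain parsing.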