How can we handle OpenAI model finetuning when a tool's function signature has changed? Do we need to restart finetuning from scratch, or is there a way to make the finetuned model agnostic to specific function definitions?

I have an OpenAI GPT-4o-mini model that has been finetuned to use specific tool calls. However, the tool's function signature (its input parameters, return types, and other aspects of the API contract) has recently changed, and I'm concerned about how this affects the model's performance. Do I need to restart the finetuning process from scratch to accommodate the updated tool, or is there a way to finetune the model so that it is agnostic to specific function definitions? Essentially, I'm looking for strategies to handle situations where the tools or APIs the model interacts with are subject to change, without having to redo the entire finetuning run every time.
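
For context, my training data follows the chat-format JSONL that (as I understand it) function-calling finetuning expects, with the tool schema embedded in every example. The sketch below shows roughly what one of my examples looks like; `get_current_weather`, its parameters, and the sample values are placeholders rather than my actual tool:

```python
import json

# Illustrative only: "get_current_weather" and its parameters are placeholders,
# not my real tool. This mirrors the chat-format JSONL I used for finetuning,
# where the tool schema is embedded in each training example.
training_example = {
    "messages": [
        {"role": "user", "content": "What's the weather like in Paris right now?"},
        {
            "role": "assistant",
            "tool_calls": [
                {
                    "id": "call_abc123",
                    "type": "function",
                    "function": {
                        "name": "get_current_weather",
                        # Arguments are a JSON-encoded string matching the OLD signature.
                        "arguments": json.dumps({"location": "Paris, France", "unit": "celsius"}),
                    },
                }
            ],
        },
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a location.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string", "description": "City and country"},
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        }
    ],
}

# Each training example becomes one line of the JSONL file uploaded for finetuning.
with open("train.jsonl", "a") as f:
    f.write(json.dumps(training_example) + "\n")
```

Since the parameter schema is baked into every training example like this, my assumption is that renaming a parameter or adding a required one would leave the finetuned behavior out of sync with the tool definition I now pass at inference time, which is what prompts the question.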