Not able to fine-tune gpt3.5 for parallel function calling

Hey Team,

I am trying to fine-tune gpt-3.5-turbo-1106 for my parallel function calling use case, but it looks like that is not supported yet. The function_call property in the fine-tuning dataset must be a dictionary, but it would need to accept a list in order to represent parallel function calls.
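For illustration, here is roughly the kind of training example I would want to provide (a minimal sketch — the function name and arguments are made up, and the list-valued function_call is exactly the shape the current schema rejects, since it only accepts a single dictionary):

```json
{"messages": [
  {"role": "user", "content": "What's the weather in Paris and Tokyo?"},
  {"role": "assistant", "content": null, "function_call": [
    {"name": "get_weather", "arguments": "{\"city\": \"Paris\"}"},
    {"name": "get_weather", "arguments": "{\"city\": \"Tokyo\"}"}
  ]}
]}
```

Uploading a file with a list there fails validation, so there is currently no way to show the model examples of issuing multiple calls in one assistant turn.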

Please see the attached image. I am currently using gpt-4 for function calling, but it is quite slow, and I would like to switch to a gpt-3.5 model fine-tuned for my use case. Please let me know if there are any known workarounds or if I am doing something incorrectly.