What is the final message (string input) generated from chat.completions?

This was more for my understanding of how the LLM handles such inputs, specifically the function calls.

I'm assuming the models with function-calling capability were fine-tuned on a very specific prompt template to handle function calls, and any different prompt structure would probably not give the same level of results.
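For illustration, here's a rough sketch of what such a flattening *might* look like. OpenAI has described a ChatML-style format (`<|im_start|>role ... <|im_end|>`) for chat messages, but the exact template — especially how function definitions are injected — is not public, so the `render_chatml` helper, the system-block wording, and the token layout below are all assumptions, not the actual implementation:

```python
# Hypothetical sketch: flattening a chat.completions-style request into a
# single prompt string using a ChatML-like template. The real template
# (especially for function schemas) is not public; everything here is an
# illustrative guess.
import json

def render_chatml(messages, functions=None):
    parts = []
    if functions:
        # Assumption: function schemas get injected into a system block
        # as JSON so the model can "see" the available tools.
        parts.append(
            "<|im_start|>system\n"
            "You have access to the following functions:\n"
            + json.dumps(functions, indent=2)
            + "\n<|im_end|>"
        )
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave an open assistant block for the model to continue from.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = render_chatml(
    [{"role": "user", "content": "What's the weather in Paris?"}],
    functions=[{
        "name": "get_weather",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
        },
    }],
)
print(prompt)
```

If something like this is what happens under the hood, it would explain why the model is sensitive to the prompt structure: a fine-tune that always saw function schemas in one fixed position and format would degrade if you moved or reworded them.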

I don't know if this info has been shared anywhere, e.g. in a paper, or whether people have managed to fine-tune their own models to work with function calls.
