Fine-tuning with function calling capability

This has been reported before, and has continued since the release of functions.

In normal function calling, the assistant's response is only the to= message emitted to the function recipient. So when the user input would trigger a function, the final assistant message in your training example should be nothing but the function-call JSON.
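As a rough sketch, one line of a chat-format fine-tuning JSONL file following that pattern might look like the example below. The function name, arguments, and schema here are hypothetical, purely for illustration:

```python
import json

# One JSONL training example where the assistant's entire response is the
# function call, with no chat text (function name/schema are hypothetical).
example = {
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
        {
            "role": "assistant",
            "content": None,
            "function_call": {
                "name": "get_weather",
                "arguments": json.dumps({"location": "Paris"}),
            },
        },
    ],
    "functions": [
        {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    ],
}

# Serialize to a single line for the training file.
line = json.dumps(example)
print(line[:60])
```

The point is that the assistant turn carries only the function_call object; any conversational text in that turn would teach the model to mix chat with function emissions.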

The fine-tuning may be over-training on the function stop token (or its absence may be an oversight), so that the AI produces it even in normal chat.

One workaround to consider is training with your own stop sequence at the end of assistant responses, such as ######, so that it is emitted before any repetition begins. It is then removed from the response when you pass it via the stop= parameter.
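A minimal sketch of that workaround, assuming ###### as the custom sequence: append it to every plain-chat assistant turn in the training data, and pass the same string as stop= at inference so it never reaches the user. The helper name and example messages are made up for illustration:

```python
STOP = "######"

def add_stop(example: dict) -> dict:
    """Append the custom stop sequence to plain-chat assistant turns.
    Function-call turns (no content) are left in their native format."""
    for msg in example["messages"]:
        if msg["role"] == "assistant" and msg.get("content"):
            msg["content"] = msg["content"].rstrip() + " " + STOP
    return example

train_example = add_stop({
    "messages": [
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hi, how can I help?"},
    ]
})

# At inference time you would pass the same sequence, e.g.:
#   client.chat.completions.create(model=..., messages=..., stop=[STOP])
# so the API trims it from the returned text before any repeat begins.
print(train_example["messages"][1]["content"])
```

Applying this uniformly to chat turns, but never to function-call turns, keeps the two response modes distinguishable during training.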

A personal stop sequence only works as a workaround when the AI can infer that it applies, so you may also need training coverage of user input scenarios and tasks beyond the ones you anticipate. More training on non-function-call examples could also get the AI back on track.
