Preferred way for (reliable) structured chat

In the simplest sense, I would like to monitor a user's conversation with an LLM.
To do that properly, the model should send the data in a structured format, e.g.

{
    "content": "How are you today",
    "hasFinished": false
}

There are multiple ways to do this; I am wondering which one is preferred.

  1. Use the system prompt to inform the model about the expected format.
    This works, but it can also go wrong, especially if the expected JSON format is more complicated.
    There is also JSON mode, but it still seems a bit brittle: it only guarantees that the output is valid JSON, not that it follows any particular schema (see the first sketch after this list).

  2. Use function calling to force the response into a defined format.
    Instead of calling an actual function, we parse the arguments from the tool-call message that the model produces (see the second sketch after this list).
    This forces the model to adhere to the function's parameter schema, so the data can be parsed reliably.
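
For reference, here is a minimal sketch of option (1) using the OpenAI Python SDK. The model name and prompt wording are just placeholder assumptions, and it assumes OPENAI_API_KEY is set in the environment; note that JSON mode only guarantees syntactic validity:

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any JSON-mode-capable model
    response_format={"type": "json_object"},  # JSON mode
    messages=[
        {
            "role": "system",
            # The schema only exists as prose in the prompt.
            "content": (
                "Reply only with JSON of the shape "
                '{"content": string, "hasFinished": boolean}.'
            ),
        },
        {"role": "user", "content": "How are you today?"},
    ],
)

data = json.loads(response.choices[0].message.content)  # valid JSON is guaranteed...
print(data.get("content"), data.get("hasFinished"))     # ...but these keys are not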
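
And a minimal sketch of option (2). The send_reply function name is hypothetical and the tool is never executed; we only declare a parameter schema and read back the arguments the model generates:

import json
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "send_reply",  # hypothetical name; never actually called
            "description": "Deliver the assistant's reply to the user.",
            "parameters": {
                # JSON schema mirroring the desired reply format
                "type": "object",
                "properties": {
                    "content": {"type": "string"},
                    "hasFinished": {"type": "boolean"},
                },
                "required": ["content", "hasFinished"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "How are you today?"}],
    tools=tools,
    # Force the model to call this specific tool on every turn.
    tool_choice={"type": "function", "function": {"name": "send_reply"}},
)

args = json.loads(response.choices[0].message.tool_calls[0].function.arguments)
print(args["content"], args["hasFinished"])

The tool_choice parameter is what makes this reliable for every turn: without it, the model may answer in plain text instead of calling the tool.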

Does (2) have any disadvantages over (1), or is there an even better way that I have not considered?

Thanks for any input!