There’s now a very weak version of this in place. The model can be forced to adhere to JSON syntax, but not to follow a specific schema, so it’s still fairly useless. We still have to validate the returned value; all this change brings is that we no longer have to handle syntactically invalid output. Except that we do, because apparently it can technically still spew out infinite whitespace.
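For anyone wiring this up today, the schema check still has to live in your own code. Here’s a minimal sketch of what that looks like, assuming the Python `openai` client and the `jsonschema` package; the model name, schema, and prompt are just placeholders, not anything OpenAI specifies:

```python
import json

from jsonschema import ValidationError, validate
from openai import OpenAI

client = OpenAI()

# Hypothetical schema for the structure we actually want back.
EXPECTED_SCHEMA = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "temperature_c": {"type": "number"},
    },
    "required": ["city", "temperature_c"],
    "additionalProperties": False,
}

resp = client.chat.completions.create(
    model="gpt-4-1106-preview",  # placeholder model name
    response_format={"type": "json_object"},  # JSON mode: syntax only, no schema
    messages=[
        {
            "role": "user",
            # JSON mode expects the word "JSON" to appear somewhere in the prompt.
            "content": "Give me the current weather for Berlin as JSON with "
                       "keys 'city' and 'temperature_c'.",
        }
    ],
)

data = json.loads(resp.choices[0].message.content)  # syntax is (supposedly) guaranteed

# JSON mode does not enforce structure, so we still validate the shape ourselves.
try:
    validate(instance=data, schema=EXPECTED_SCHEMA)
except ValidationError as exc:
    # The model ignored our requested shape: retry, repair, or fail loudly.
    raise RuntimeError(f"Valid JSON, wrong shape: {exc.message}")
```

If the output were constrained to the schema itself, this whole fallback path could go away.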
Picking on an employee I’ve seen active here recently: @ted-at-openai, do you think we’ll get a proper constrained schema adherence feature sometime soon? It’s clear you’ve got most of the pieces in place now with the constrained JSON syntax, so it feels a little silly to announce that “GPT-4 Turbo is more likely to return the right function parameters” when you should be able to fairly easily make it 100% accurate. By constraining output to the supplied schema, you’d also fix that infinite whitespace issue. This would hugely improve the usability of this API.