I’m testing tool/function calling.
The tools/functions I need to declare can have many parameters (e.g. 5–7 each).
With the model gpt-3.5-turbo-1106 I get very non-deterministic arguments: roughly every 3–4 API calls it produces totally different outputs for the same input.
When defining the arguments in the tools field, I specify a
Do you have any recommendation to improve the accuracy and determinism of the arguments in the API response?
What is your experience with it?
The fix seems to be moving to gpt-4-1106-preview (its results are much more reliable, consistent, and deterministic), but that model is not yet ready for production use.
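For concreteness, here is the shape of declaration I’m talking about. This is only a sketch: the `create_ticket` function and all of its parameter names are hypothetical placeholders, not my real schema.

```python
import json

# Sketch of a tool declaration with 6 parameters, in the format the
# Chat Completions API expects in its "tools" field. The function name
# and every parameter below are made-up placeholders.
create_ticket_tool = {
    "type": "function",
    "function": {
        "name": "create_ticket",
        "description": "File a support ticket with the given details.",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string", "description": "Short summary of the issue."},
                "body": {"type": "string", "description": "Full description of the issue."},
                "priority": {
                    "type": "string",
                    # an enum constrains the values the model may choose
                    "enum": ["low", "medium", "high"],
                    "description": "Ticket priority.",
                },
                "assignee": {"type": "string", "description": "Username to assign the ticket to."},
                "labels": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Labels to attach to the ticket.",
                },
                "due_date": {"type": "string", "description": "Due date in ISO 8601 (YYYY-MM-DD)."},
            },
            "required": ["title", "body", "priority"],
        },
    },
}

# This dict would be passed to the API as tools=[create_ticket_tool].
print(json.dumps(create_ticket_tool, indent=2))
```

It is with declarations of roughly this size — half a dozen parameters, a mix of required and optional — that the argument values the model generates vary so much from call to call.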