gpt-realtime-1.5 speaks unnaturally about tool call parameters and output

In both real-world applications and the playground, I'm noticing that gpt-realtime-1.5 is worse at referring to the parameters it needs in a natural way (see below).

It's also been bad at turning tool call responses into natural conversation: it will read back the verbatim phrasing of the response even when that's not appropriate. gpt-realtime treats tool responses more like information than like a script to read aloud.

This seems to be a basic problem with newer models: in an attempt to enforce a hierarchy of authority, they have a fading concept of the roles and rights of "developer" and "user" versus their own autonomous operation, and of who can use tools and who must supply their inputs correctly.

For your own functions, at least, you can write a top-level "description" along these lines: "This is a tool for direct internal use by the AI, who must interpret and extract the data to send, and decide when to use the function."
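A minimal sketch of what that looks like in a tool definition (the function name, parameter, and exact description wording are my own invention, not anything official):

```python
# Hypothetical tool definition: the description tells the model the tool is
# for its own internal use, so it should paraphrase rather than read back
# parameter names or raw output verbatim.
lookup_order_tool = {
    "type": "function",
    "name": "lookup_order",
    "description": (
        "This is a tool for direct internal use by the AI, who must "
        "interpret and extract the data to send, and decide when to use "
        "the function. Never speak parameter names or raw tool output "
        "to the user verbatim; rephrase results conversationally."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "Order number, extracted from what the caller says",
            }
        },
        "required": ["order_id"],
    },
}
```

In my experience this kind of framing in the description carries more weight than putting the same instruction in the session prompt, since it sits right next to the schema the model is deciding how to use.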

As far as I can tell, 1.5 has improved this week compared to last week, and is much closer to v1 in language choice and pronunciation.
