Prompts playground: auto-generated (tab) function responses show no understanding of the function

The Playground has an AI-powered function-call return simulator. Press Tab in a function-response box, and it fills in the values that an API or code might return.

It seems to write only JSON, even though many developer function patterns return natural language.

Shortcoming: the simulator seems to know the function name, but does NOT follow the function description's statement of what the function returns, even when that is explicitly spelled out. This holds even when the description specifies a JSON format that would be reproducible.

A developer writes function descriptions carefully enough that the chatbot knows what to expect back as a return, so it can judge the function's usefulness. The "AI remote tool simulator" ignores that careful description.

Example

Let's elicit a parallel tool call. This needs my example preset's function:

user: Player one shoots an arrow at the orc, requiring 1d20>15, and delivering 2d6 damage.

Then press Tab in each of the return boxes in turn:

Parallel call 1 filled out by AI:

Parallel call 2 filled out by AI:
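For contrast, here is what a simulator that honored the declared return schemas (from the function definition shown below) could produce for these two calls. This is a hypothetical sketch; the function names here are my own illustration, and the dice values are random:

```python
import json
import random

# Hypothetical schema-conforming simulated returns for the two parallel calls,
# matching the declared schemas:
#   {"NdX": int, "index": int} | {"hit": boolean, "index": int}

def simulate_hit_check(sides: int, threshold: int, index: int) -> str:
    """Simulate the to-hit roll (1d20 > 15) using the boolean schema."""
    roll = random.randint(1, sides)
    return json.dumps({"hit": roll > threshold, "index": index})

def simulate_damage(rolls: int, sides: int, index: int) -> str:
    """Simulate the damage roll (2d6) using the dice-total schema."""
    total = sum(random.randint(1, sides) for _ in range(rolls))
    return json.dumps({f"{rolls}d{sides}": total, "index": index})

print(simulate_hit_check(20, 15, 0))  # e.g. {"hit": true, "index": 0}
print(simulate_damage(2, 6, 1))       # e.g. {"2d6": 7, "index": 1}
```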

It is understandable for an unguided AI to make up something. However, the function is as deliberately guided as can be:

```json
{
  "name": "get_random_dice",
  "description": "Returns the total of a random generator in the form of dice roll results. The function will return a JSON in either of these schemas: {\"NdX\": int, \"index\": int} | {\"hit\": boolean, \"index\": int}",
  "strict": false,
  "parameters": {
```

The definitions of N and X also accompany the "rolls" and "sides" parameters in the function, in case they aren't inherently inferred.
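Because the description pins the return to one of two concrete JSON shapes, conformance is trivially machine-checkable. A minimal sketch of such a check (my own code, not anything the Playground does):

```python
import json
import re

def conforms(payload: str) -> bool:
    """Check a simulated return against the two schemas declared in the
    function description: {"NdX": int, "index": int} | {"hit": boolean, "index": int}."""
    try:
        obj = json.loads(payload)
    except json.JSONDecodeError:
        return False
    if not isinstance(obj, dict) or not isinstance(obj.get("index"), int):
        return False
    # Besides "index", exactly one other key is allowed.
    extra = {k: v for k, v in obj.items() if k != "index"}
    if len(extra) != 1:
        return False
    key, value = next(iter(extra.items()))
    if key == "hit":
        return isinstance(value, bool)
    # "NdX" keys like "2d6": an integer total of N rolls of X-sided dice.
    return bool(re.fullmatch(r"\d+d\d+", key)) and isinstance(value, int) \
        and not isinstance(value, bool)

print(conforms('{"2d6": 7, "index": 1}'))   # True
print(conforms('{"result": "you hit!"}'))   # False
```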

Better

The simulator AI appears to be following only the JSON of the call itself. The simulated "returns" could instead be generated from the full function definition, description included, and from the earlier completed function-call "tool" messages in the conversation (with any JSON-only requirement relaxed), so that the Prompts playground isn't simply demonstrating how to confuse an AI.
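The suggestion above can be sketched as prompt construction. Everything here is an assumption about how such a simulator could be fed, not the playground's actual implementation:

```python
import json

def build_simulator_prompt(function_def: dict, call_args: dict,
                           prior_tool_messages: list) -> str:
    """Hypothetical: assemble the simulator's context from the full function
    definition (description included) plus earlier completed tool returns,
    rather than from the call's JSON alone."""
    parts = [
        "Simulate the return value of this function.",
        "Follow the return format stated in its description exactly:",
        json.dumps(function_def, indent=2),
        f"Arguments of the current call: {json.dumps(call_args)}",
    ]
    if prior_tool_messages:
        parts.append("Earlier returns from this function, for consistency:")
        parts.extend(prior_tool_messages)
    return "\n".join(parts)

prompt = build_simulator_prompt(
    {"name": "get_random_dice",
     "description": "Returns JSON: {\"NdX\": int, \"index\": int} | "
                    "{\"hit\": boolean, \"index\": int}"},
    {"rolls": 2, "sides": 6},
    ['{"hit": true, "index": 0}'],
)
print(prompt)
```

With the description and prior returns in context, the simulator has every hint it needs to stay on schema.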

Severity: inconsequential, but niggling
