Control message after app tool call

Looking at the apps currently available, it appears that they are controlling the message shown after the tool call returns and the UI widget is rendered. How are they doing that? The message is too consistent for them not to be controlling it.

We have set the content in the response to a list of Content { type: "text", text: "Message we want to display." }. But the model still gives its own summary based on the question that triggered the tool call and the structuredContent returned, rather than using the content we returned as the message.
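For concreteness, here is a minimal sketch of what we are returning, assuming the standard MCP CallToolResult shape (the product data is made up for illustration):

```typescript
// Sketch of our tool result, assuming the standard MCP CallToolResult
// shape. The product data is illustrative, not our real payload.
const result = {
  // Text we hoped ChatGPT would use as the message under the widget.
  content: [{ type: "text", text: "Message we want to display." }],
  // Data the widget renders; this is what the model summarizes instead.
  structuredContent: {
    products: [{ id: "sku-1", name: "Example product", price: 19.99 }],
  },
};

console.log(result.content[0].text);
```

In other words, both fields are populated; the model just appears to weight structuredContent over content when writing its reply.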

When asked, ChatGPT says it should use the returned content for the message, but we haven’t been able to find anything in the docs describing a way to control the message, and content doesn’t seem to do it.

3 Likes

Have you tried using the _meta headers?

  "openai/toolInvocation/invoking": "Searching…",
  "openai/toolInvocation/invoked": "Results ready"

We are trying to control the message ChatGPT responds with after the UI component renders. It is summarizing everything in structuredContent right now, which is not what we want. And it does not appear that some of the other apps available (e.g. Expedia) are having their entire results summarized.

You’re looking for “openai/widgetDescription” on the resource. Tell it what your widget renders and you can influence what appears under the widget, e.g. “my widget displays all of the GitHub issues in the response so no need to repeat them - you can just highlight one or two interesting ones”
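For anyone trying this, here’s roughly what it looks like on the widget resource. The URI, mime type, and wording are just my example, and the exact registration call depends on your SDK, so treat this as a shape sketch:

```typescript
// Shape sketch: a widget resource carrying openai/widgetDescription in
// _meta. The uri, mimeType, and description text are illustrative; the
// actual registration API depends on your MCP SDK.
const widgetResource = {
  uri: "ui://widget/issues.html",
  mimeType: "text/html+skybridge",
  _meta: {
    "openai/widgetDescription":
      "My widget displays all of the GitHub issues in the response, so " +
      "no need to repeat them - just highlight one or two interesting ones.",
  },
};

console.log(Object.keys(widgetResource._meta));
```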

1 Like

That also did not appear to change anything. It still decided to summarize everything in the structured content.

@cody5 Were your instructions not to do that explicit? If you read the docs, that’s the field they want you to use to pass info to the model about what’s in the widget. This does work for me.

“Human-readable summary surfaced to the model when the component loads, reducing redundant assistant narration.”

you can return text that accompanies what’s returned to the UI and somewhat guide the response.

I believe you use the “content”: field.

My instructions were pretty explicit. Will try again later when I have time again.

Content definitely does not control what text is output after the widget, or at least no more so than what is returned in structuredContent. It is obviously used to create the response, but beyond that it gives no control over that text.

Please share what you find - I appreciate we have docs but they’re not always super-super explicit so learning from others experiences is so helpful.

In my case, my widget renders a bunch of user content, and originally the model was more or less re-parroting it all in the response underneath, until I told it to stop doing that via widgetDescription. Very curious what you learn.

Looking at the other tools, it seems it’s in the description, not the widgetDescription. Still, no matter how many variations of “do not evaluate, summarize, rank, etc.” I try, nor any variation of “only respond with/like …”, nor combinations of the two, the assistant still goes and recommends, filters, ranks, etc.

Adding more info about the inputs did help it get the correct inputs, but the reply afterward still ends up being an extremely long rundown of the products returned in the structuredContent.
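Concretely, this is the kind of tool descriptor I’ve been experimenting with. The name, wording, and schema are just my example, not a confirmed recipe:

```typescript
// Example tool descriptor with behavioral guidance packed into
// `description`. The name, wording, and inputSchema are illustrative.
const searchTool = {
  name: "search_products",
  description:
    "Search the product catalog. The widget renders the full results; " +
    "do not evaluate, summarize, rank, or recommend products in the " +
    "reply after the widget.",
  inputSchema: {
    type: "object",
    properties: { query: { type: "string" } },
    required: ["query"],
  },
};

console.log(searchTool.description);
```

The schema detail is what improved the inputs for me; the behavioral instructions in the same field are what the model keeps ignoring.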

@OpenAI_Support Can we get some clarification on this? The docs aren’t very specific.

This page: UI guidelines states the following but doesn’t say how:

Follow-up: A short, model-generated response shown after the widget to suggest edits, next steps, or related actions. Avoid content that is redundant with the card.

… but at least by default, that follow-up is often anything but short. Then this page: Reference suggests that _meta["openai/widgetDescription"] can be used for:

Human-readable summary surfaced to the model when the component loads, reducing redundant assistant narration.

Is that supposed to be how we instruct the model what to include in the follow-up? If so, it seems to be a pretty weak signal.

Specifics would be greatly appreciated.

1 Like

Looks like recently the developer menu for apps changed and at least for now it’s easy to see what other apps are doing. Here’s one of the widget descriptions from the Target app which I noticed never shows any narration under the widget. Interesting to see, maybe helpful?

“Authoritative product list. Renders a single inline results widget (grid with facets, sort and pagination). **Exactly one widget per turn.** The UI must serve as the single source of truth. The assistant should not provide any additional narrative, recommendations, or product suggestions when this widget is displayed. The assistant should avoid echoing raw JSON; the widget is the source of truth for presentation.\n”

3 Likes