Responses Dashboard bug: "prompts" presets not saving hosted "shell" tool

Looks like the technical debt is charging interest from lack of interest.

ISSUE: The hosted “shell” tool is not saved to a prompt ID.

Replication on the platform site:

  • Visit https://platform.openai.com/chat/edit for your organization.
  • Turn on the “shell” tool under “hosted”.
  • Enter some developer text you might want to use in conjunction with the hosted shell tool.
  • Press “Create”.
  • Reload the prompt via its URL, or by selecting other prompts and returning.
  • Observe that no hosted tool is enabled.
  • Consequently, an API call that uses the prompt ID, trusting that “Create” saved what you configured, will likely run without the shell tool.
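The failing API call can be sketched as follows. This is a minimal sketch assuming the standard Responses API request shape; the prompt ID is a placeholder, the model name is taken from this report, and the live call is gated behind a hypothetical `RUN_LIVE_OPENAI_CHECK` environment variable since it needs a key and network access.

```python
import os

def build_prompt_call(prompt_id: str, user_input: str) -> dict:
    # Request kwargs that rely entirely on the stored prompt preset:
    # no tools are passed inline, because "Create" was supposed to
    # have saved the hosted "shell" tool under this ID.
    return {
        "model": "gpt-5.1-codex",   # model name as listed in this report
        "prompt": {"id": prompt_id},
        "input": user_input,
    }

kwargs = build_prompt_call("pmpt_PLACEHOLDER", "list the files in the container")

if os.environ.get("RUN_LIVE_OPENAI_CHECK"):
    from openai import OpenAI
    resp = OpenAI().responses.create(**kwargs)
    # Bug symptom: the resolved tool list comes back without "shell",
    # even though the dashboard preset had it enabled when saved.
    print("resolved tools:", resp.tools)
```

The point of the sketch is that nothing in the request itself is wrong; the missing tool can only come from the preset, which is exactly what fails to save.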

Impact:

  • None, if you have already rejected the concept of a UI-only “preset” that you run by an ID stored in your database, rather than doing the identical work of storing the run parameters themselves in your database.
  • Higher loss of effort, if you spent work constructing file attachments and skills to run in a container under the “shell” configuration and tried to save it as a prompt (which offers only container “auto”, with no control surface for a manually provisioned code container at the point where you would actually use a prompt).

Plus:

  • The tool is offered for gpt-5.1-codex or gpt-5.2, but gpt-5.2-codex is useless on the platform site: no tools, no reasoning configuration, and, worse, across all reasoning models there is no max_output_tokens to set when running inference via the playground “chat”.
  • gpt-5.3-codex is not populated in the dashboard model list even when an org has access.
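For contrast with the playground gap above, max_output_tokens is perfectly settable when calling the API directly. A minimal sketch, assuming the standard Responses API parameter names; the model name is taken from this report and the reasoning field is illustrative:

```python
import json

request_kwargs = {
    "model": "gpt-5.2",                  # model name as listed in this report
    "input": "summarize the container run log",
    "reasoning": {"effort": "medium"},   # illustrative reasoning config
    "max_output_tokens": 2048,           # settable here, absent from playground chat
}
print(json.dumps(request_kwargs, indent=2))
```

The absence is purely a dashboard limitation, not an API one.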