bug
There seems to be a problem with reusable prompts: when a max_output_tokens value is not defined, it appears to default to 2048, causing empty responses with reasoning models like gpt-5 (the budget can be consumed entirely by reasoning tokens before any visible output is produced).
I first thought it was a leftover from older prompts, created back when the UI had a setting for choosing a max_output_tokens value, but a reusable prompt newly created in the dashboard shows the same issue.
Here is an example response produced by a newly created prompt:
And here is a minimal run without a reusable prompt (no max_output_tokens value set):
You can work around it by setting the max_output_tokens parameter explicitly, but that leaves people puzzled, as the behavior is completely unexpected.
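For anyone hitting this in the meantime, here is a minimal sketch of the workaround: override max_output_tokens explicitly in the request so the reusable prompt's low default never applies. The prompt id, input text, and token value below are placeholder assumptions, and the helper just builds the request payload rather than calling the API:

```python
def build_request(prompt_id: str, user_input: str,
                  max_output_tokens: int = 16000) -> dict:
    """Build a request payload that overrides max_output_tokens.

    Reasoning models like gpt-5 spend part of the output budget on
    reasoning tokens, so a low cap such as 2048 can be exhausted
    before any visible text is emitted, yielding an empty response.
    Passing an explicit, larger value avoids that.
    """
    return {
        "model": "gpt-5",
        "prompt": {"id": prompt_id},          # reusable prompt reference
        "input": user_input,
        "max_output_tokens": max_output_tokens,  # explicit override
    }

# Placeholder prompt id for illustration only.
payload = build_request("pmpt_example123", "Hello")
```

This only masks the symptom, of course; the underlying default still needs fixing on the platform side.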
@dmitry-p If someone can take a look at this, it would be much appreciated.
It has been reported here:

