Playground returning "invalid_value" error for every request

Every request I send in the Playground, for any prompt and any model, returns the same error. The error text I get is:

Invalid value: ‘web…ces’. Supported values are: ‘file_search_call.results’, ‘web_search_call.results’, ‘message.input_image.image_url’, ‘computer_call_output.output.image_url’, ‘reasoning.encrypted_content’, and ‘message.output_text.logprobs’.

I’ve already tried clearing my cache and using a different browser. I even tried creating a new project to see if that was the problem, and nothing changed. I’m using just the default Playground settings here. It appears to be happening only in the Playground, because when I interact with the API using Node.js, everything works as expected.

Is anyone else having this problem, or is it just me?

For the technically inclined, the specific error that I get in the HTTP response is the following:

{
  "error": {
    "message": "Invalid value: 'web...ces'. Supported values are: 'file_search_call.results', 'web_search_call.results', 'message.input_image.image_url', 'computer_call_output.output.image_url', 'reasoning.encrypted_content', and 'message.output_text.logprobs'.",
    "type": "invalid_request_error",
    "param": "include[1]",
    "code": "invalid_value"
  }
}
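Since the error body is valid JSON, you can pull out the offending field programmatically. A small sketch (the payload is copied verbatim from the response above):

```python
import json

# Error body copied from the Playground's HTTP response
raw = """
{
  "error": {
    "message": "Invalid value: 'web...ces'. Supported values are: 'file_search_call.results', 'web_search_call.results', 'message.input_image.image_url', 'computer_call_output.output.image_url', 'reasoning.encrypted_content', and 'message.output_text.logprobs'.",
    "type": "invalid_request_error",
    "param": "include[1]",
    "code": "invalid_value"
  }
}
"""

err = json.loads(raw)["error"]
print(err["param"], err["code"])  # include[1] invalid_value
```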

Seeing the same issue here


Yeah, I got the same issue. Must be a bug, because I can’t find the actual setting it’s talking about.


Same, cleared my cookies/cache on Brave and then tried on Firefox with the same error.


Same error. I’m also getting this when I try to view the Responses logs in the Dashboard.

There’s a strange include in the generated code, btw:

from openai import OpenAI
client = OpenAI()

response = client.responses.create(
  model="gpt-5",
  input=[],
  text={
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  },
  reasoning={
    "effort": "medium",
    "summary": "auto"
  },
  tools=[],
  store=True,
  include=[
    "reasoning.encrypted_content",
    "web_search_call.action.sources"  # the value the server rejects
  ]
)

Same here


+1, same error.

I encountered it when looking into API errors which gave `model gpt4-mini not supported or API not configured` , which might or might not be related to the web UI issue.


Same error :frowning:
Your thread is the only one talking about it, so I guess it’s a recent bug.


Invalid value: ‘web…ces’. Supported values are: ‘file_search_call.results’, ‘web_search_call.results’, ‘message.input_image.image_url’, ‘computer_call_output.output.image_url’, ‘reasoning.encrypted_content’, and ‘message.output_text.logprobs’.

I’m getting this error. What could be the problem?

The issue seems to be related to the “code” that the Playground generates to create the response. Thanks to @sergey.petrenko for pointing that out.

If we look at the code using the “View code” window, we can see there is an array called include at the end of the parameter object, with two elements:

{
  "include": [
    "reasoning.encrypted_content",
    "web_search_call.action.sources"
  ]
}

Recall that the truncated text the error gives us as the invalid value is ‘web…ces’. This fits perfectly with web_search_call.action.sources.
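If the server truncates long values to their first and last three characters (an assumption on my part, but it matches here), the offending string reproduces the error text exactly:

```python
value = "web_search_call.action.sources"

# Assumed truncation scheme: first 3 chars + "..." + last 3 chars
truncated = value[:3] + "..." + value[-3:]
print(truncated)  # web...ces
```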

This becomes even more likely when you look at the full JSON response from the server when we get the error:

{
  "error": {
    "message": "Invalid value: 'web...ces'. Supported values are: 'file_search_call.results', 'web_search_call.results', 'message.input_image.image_url', 'computer_call_output.output.image_url', 'reasoning.encrypted_content', and 'message.output_text.logprobs'.",
    "type": "invalid_request_error",
    "param": "include[1]",
    "code": "invalid_value"
  }
}

The response calls out ‘include[1]’ as the invalid parameter; since the array is zero-indexed, that is the second item in the include array we saw before.
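To make the mapping concrete, indexing the include array from the generated code with the index the error reports yields exactly the unsupported value:

```python
# The include array from the Playground-generated request
include = [
    "reasoning.encrypted_content",     # include[0] -- accepted
    "web_search_call.action.sources",  # include[1] -- rejected
]
print(include[1])  # web_search_call.action.sources
```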

So, for whatever reason, the server does not accept web_search_call.action.sources as a value for the include array. I don’t know whether it’s code that got pushed to the frontend but not the backend, or something typed incorrectly, but my hope is that it’s an easy fix on their end. I rely on the Playground for a lot of testing, and this has really thrown a wrench into that.
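Until it’s fixed server-side, API users hitting the same error can filter their include list against the supported values listed in the error message before sending the request. A local workaround sketch (the supported set is copied from the error text, and sanitize_include is a hypothetical helper, not part of the SDK):

```python
# Supported include values, copied from the error message
SUPPORTED_INCLUDE = {
    "file_search_call.results",
    "web_search_call.results",
    "message.input_image.image_url",
    "computer_call_output.output.image_url",
    "reasoning.encrypted_content",
    "message.output_text.logprobs",
}

def sanitize_include(values):
    """Drop any include values the server currently rejects."""
    return [v for v in values if v in SUPPORTED_INCLUDE]

safe = sanitize_include([
    "reasoning.encrypted_content",
    "web_search_call.action.sources",  # the value the server rejects
])
print(safe)  # ['reasoning.encrypted_content']
```

Passing `include=safe` to `client.responses.create(...)` should then avoid the invalid_value error, assuming the rest of the request is unchanged.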