Hey, I’ve noticed that the presence and frequency penalty parameters are missing from the new playground UI, but they are not marked as deprecated in the API. Is there any way to set them in the playground while building my prompt?
You can compare the new Responses API view you might be looking at (endpoint selection at upper right):

to the view after you choose Chat Completions:

Switching gives you the larger set of sampling parameters, which still exist on that endpoint. Also missing on the new endpoint: logprobs, logit_bias, audio models, o1-mini, etc.
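Since the playground no longer surfaces them, the fallback is to send those parameters directly to the Chat Completions endpoint. A minimal sketch, assuming the current `openai` Python SDK; the model name, prompt, and the suppressed token ID are placeholders, not values from this thread:

```python
# Sampling parameters the Chat Completions endpoint still accepts
# but the new Responses playground no longer exposes.
request = {
    "model": "gpt-4o",  # placeholder model
    "messages": [{"role": "user", "content": "Write a short poem."}],
    "presence_penalty": 0.5,   # range -2.0..2.0: penalize tokens already present
    "frequency_penalty": 0.8,  # range -2.0..2.0: penalize by occurrence count
    "logprobs": True,
    "top_logprobs": 3,
    "logit_bias": {"1734": -100},  # placeholder token ID; -100 ~ ban outright
}

# With the official SDK this payload would be sent as:
# from openai import OpenAI
# client = OpenAI()
# completion = client.chat.completions.create(**request)
```

Building the payload as a dict first also makes it easy to reuse the same prompt across both endpoints, dropping the keys the Responses API rejects.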
A bigger concern could be having no way to send max_completion_tokens
to a reasoning model - say, if you wanted to cap your o1-pro “high” output spending at $60 per call (100k tokens).
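The $60-per-call ceiling above translates into a token cap you can compute from the output price. A sketch of that arithmetic, assuming the $600-per-million output rate implied by the figures in this post (treat the rate and the `max_completion_tokens` routing as assumptions to verify against current pricing and endpoint docs):

```python
# Derive a token cap from a dollar budget for a reasoning model's output.
# ASSUMPTION: $600 per 1M output tokens, per the $60 / 100k figure above.
PRICE_PER_MILLION_OUTPUT_USD = 600.00
budget_usd = 60.00

max_completion_tokens = int(budget_usd / PRICE_PER_MILLION_OUTPUT_USD * 1_000_000)
# 60 / 600 * 1,000,000 = 100,000 tokens, matching the $60-per-call cap

# On Chat Completions the cap would be passed as max_completion_tokens;
# the Responses API uses a differently named output-limit parameter,
# so check the endpoint reference before relying on either.
request = {
    "model": "o1-pro",  # placeholder reasoning model
    "messages": [{"role": "user", "content": "Prove the claim step by step."}],
    "max_completion_tokens": max_completion_tokens,
}
```

Note that for reasoning models the cap counts hidden reasoning tokens as well as visible output, so a tight cap can truncate the answer before any text is returned.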