- the Clear button clears the system prompt of my preset (most annoying)
- the preset selection dropdown has become so translucent against its surroundings that it's annoying
- the width proportions of the right sidebar are off; it looks weird in the Edge browser
- no option to switch back to the older version if unsatisfied
You are correct about the clearing.
The “sweep” button did not clear the system message on the chat playground I had open. After a shift-refresh to get the newest code, though, the new behavior wipes everything. (Sorry about your ten minutes of typing, poof.)
The sidebar has nice breathing room, maybe even too much vertical whitespace. You can almost see the full name of a fine-tune.
Can you show the “translucent”? I don’t see that. Only the description that appears with some models in the list has opacity: .7 (instead of setting the text color lighter, or using a text color with alpha transparency).
What would “annoy you” now is the event action taken upon right-click in the drop-down (when trying to go inspect it).
My continuing, unaddressed complaint is the 800px conversation width vs. a 1920px virtual viewport on 4K, so you can’t even see a full PEP 8 line of code.
Well, on Tuesday there was also an issue with text wrapping in the message fields, which made it really difficult to handle long text input.
I’m glad, though, that this got fixed already.
Absolutely agree on the issue with the Clear button deleting the preset instructions. I hope something will be done about it ASAP.
All my presets are saved with generic names, since the dropdown had clear demarcation in the previous version.
You see how my preset “default” looks as if it’s some heading?
Thanks for your attention to detail. The chat playground system prompt preservation is back!
It also behaves well when switching between o1 models that don’t support this.
If you thought you had lost a system message input of significance, a browser-associated history has been recording everything submitted. You can click the history icon and get back one of your submissions to the API.
Hi,
We use the Playground daily to update prompts and evaluate the outputs, and noticed yesterday that the output behavior changed even though we reused the same prompts.
When we checked via the API with the same prompts, we got the expected outputs.
It also seems the Playground UI implicitly reuses old prompts even after we clear all prompts in the UI, because we sometimes observe output that is apparently influenced by the prior ones.
Has anybody encountered these two issues in the past few days?
I totally agree with you on the first issue. I’ve noticed the same thing with the output behavior changing in Playground. As for the second issue, I’m not entirely sure. I haven’t observed the old prompts affecting new outputs, but I’ll keep an eye out.
Is the behavior you are experiencing similar to what is being described here?
Thanks for sharing the info! I didn’t know about the token caching feature.
However, it seems to be available only via the API, and I am not sure whether it applies to the Playground UI as well.
I also found some threads related to side effects of the implicit caching and will monitor their progress.
Thanks again.
The Playground uses the API, and can be considered the developer version of ChatGPT. While I’m not sure if the users in the linked topic are also using the Playground, they are definitely using the API.
If you can provide links to the other topics you mentioned, it would be helpful when reporting the issue to OpenAI.
Thank you!
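If it helps with the investigation, the API response itself reports how much of a prompt was served from the cache, so you can check whether caching is in play for a given request. Here is a rough sketch, assuming the official `openai` Python client, an `OPENAI_API_KEY` in the environment, and the `prompt_tokens_details.cached_tokens` usage field described in the prompt-caching docs (the model name is just an example):

```python
from openai import OpenAI

# Rough sketch: inspect whether the API reports cached prompt tokens.
# Assumes the official `openai` Python package (v1+) and OPENAI_API_KEY
# set in the environment; the model name is only an example.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello."},
    ],
)

usage = response.usage
details = getattr(usage, "prompt_tokens_details", None)
cached = details.cached_tokens if details else 0
print(f"prompt tokens: {usage.prompt_tokens}, cached: {cached}")
```

As I understand the docs, caching only kicks in above a minimum prompt length (on the order of 1024 tokens), so a short test prompt like this one should report 0 cached tokens; a long, repeated system prompt is where you would expect the number to climb.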