I’m experiencing an error any time I try to use Assistants inside the Playground. I’m just running a few routine tests on an Assistant I’ve used for months with no issue. All of a sudden I’m getting an array size error.
I’ve tested in Brave, Chrome, and Firefox. All browsers have been updated, too. Ideas?
Hey, man. I appreciate your quick response. Unfortunately, removing the carriage returns/extra paragraph returns isn’t working. I even went as far as escaping them in JSON just to try something else and that didn’t work either.
So, to be clear, removing extra paragraph returns didn’t fix it… nor did escaping the input before running the query.
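For clarity, here’s roughly what I mean by escaping - a minimal sketch using Python’s `json` module, not my exact test input:

```python
import json

# Sketch: escaping carriage returns / newlines before sending the text.
raw = "Some natural language.\r\n\r\ndef example():\n    return 42\n"

# json.dumps turns the literal \r and \n characters into \r / \n escape
# sequences inside a quoted JSON string.
escaped = json.dumps(raw)

print(escaped)
```

Even with the input escaped like that, the array error still comes back.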
I am working with code and natural language here. So, even if that fix worked it would be a moot point because of my use case.
I’m using Assistants in the Playground and have just updated the browser again to make sure it wasn’t something like that. I’ve switched browsers twice and still no luck.
This is kind of a big deal for me because I’ve loaded a few hundred dollars into my account (which I do weekly) and I need to run these tests. So, I deeply appreciate any help or direction.
I’ve been to Discord but it’s just an echo chamber. There isn’t any actual problem solving going on there.
Can I assume this is a known bug and it’s being escalated to the dev team responsible for the Playground? I’ve noticed that I also can’t select “JSON response” in the same Playground environment, and using “CMD + Return” doesn’t run a query anymore either. I have to actually click “Run,” which is weird.
I thought those two things might be related to a breaking change.
Yeah, sounds like it’s something on their end. It’s been happening for a day or so, and they might already know, but I’d recommend reaching out to help.openai.com with as many details as possible.
I am observing this issue as well, and deleting extra returns does not fix it. I’m in the same boat - I use this to help with code, so deleting returns is not at all a viable workaround.
Hi! Thank you for reporting this problem - we made some updates to the text editor, and I’ve since rolled out fixes for the issue. Apologies, and thanks for your patience!
Hey! I appreciate the response. I pushed a bug report with a lot of detail to help.openai.com the same day I started this thread - still no response there.
For the record, it’s still not working. I gave it a day or so to iron out and tonight it’s down. It’s essentially just not usable.
It’s frustrating, but as an engineer I understand. So, the fixes you’ve pushed haven’t cleared the issue.
I just tested on Chrome, Brave, and Firefox; as well as changing the model selection around to see if that would solve it. No luck. Same “Array Error” for all of the above. It’s also still not allowing me to toggle the “JSON Response” switch either.
Any chance we could get you to go back and run tests with code to ensure it works?
Thanks for getting back to me - looking into this now. Can you give me some more details on the type of input that fails? I’ve been testing with multiple paragraphs and long code input, and I’m seeing the text handled as a single text content input when passed to the messages endpoint. We do have limits on the combined length of text + image chunks that can be sent in a single message, so I can repro if I pass in a bunch of images, but it doesn’t sound like that’s what you’re doing.
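For reference, here’s roughly the input shape I’m testing against on my end - a sketch with placeholder text, not your exact payload:

```python
# Sketch: a multi-paragraph input that mixes prose and code, passed as a
# single text content part in one message (not one chunk per paragraph).
user_input = (
    "Here is some context in natural language.\n\n"
    "And a code block:\n\n"
    "def handler(event):\n"
    "    return {'ok': True}\n"
)

message = {
    "role": "user",
    "content": [{"type": "text", "text": user_input}],
}

# One message, one text part - the paragraph breaks stay inside the string.
print(len(message["content"]))
```

If your failing input looks materially different from that shape, that detail would help narrow it down.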
On the JSON response toggle - the option should be disabled if you’re using incompatible tools (the tooltip flags that, but we’ll look at making that interaction more obvious). JSON mode is only supported when the model can be constrained to produce JSON only, so it isn’t compatible with all the tools we offer. Can you check if that’s the case for your run?
Hey! I wanted to give you an update - I’m sorry I’m a little late.
As of today, the core issue I had with the “Array Error” seems fixed. Thank you. I will keep you updated in the event there’s a breaking change, but it seems to be working correctly now. Great job.
There is something weird about the JSON Response toggle, though. I can’t seem to toggle it (Brave/Chrome) at all - for any model. I’m also using a prompt/test that specifically requests the JSON response… and the model returns it without toggling the selector.
Either way, the error is fixed. Let me know if I can help with clarity or whatever. Have a great day, and thank you for getting to this. It speaks volumes.
For JSON response - it’s actually not tied to a particular model; it depends on the tools attached to your run + assistant. If you have only functions attached, we can require the model to output only JSON, because it can still call a function or respond to the user in JSON format successfully. If you have other tools enabled, like code interpreter, we can’t require JSON-only output, because the model wouldn’t be able to write code in JSON format. That’s why prompting is the best solution for now - we’re exploring how to improve response format options in the future.
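As a sketch of that rule - `json_mode_allowed` is an illustrative helper, not our actual implementation:

```python
# Illustrative only: JSON mode can be required when every attached tool is a
# function. Tools like code_interpreter need non-JSON output (e.g. Python
# source code), so they rule JSON mode out.
def json_mode_allowed(tools: list) -> bool:
    return all(tool.get("type") == "function" for tool in tools)

# Functions only -> the toggle can be enabled.
print(json_mode_allowed([{"type": "function", "function": {"name": "lookup"}}]))

# code_interpreter attached -> the toggle stays disabled.
print(json_mode_allowed([{"type": "code_interpreter"}]))
```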