Assistants' Performance Goes Downhill

So I’m building a custom application. I have an older RAG pipeline and I’ve been experimenting with Assistants. So far, so good: I was excited to adopt Assistants because they worked so well and I didn’t have to deal with managing a vector DB. However, these past few days the Assistants’ responses have been TERRIBLE, worse than ever. I’ve tried the API, the Playground, and different prompts and models. They are way, way worse than my “traditional” RAG implementation.

Is there any reason for this? Has anyone experienced something similar?
I understand it’s a beta, so maybe the team is trying different configurations?
The problems seem to arise when the assistant accesses the documents it has been given (I use the .txt format since I found it faster and simpler for the model to work with, but it has truly gone downhill, at least for me).
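For context, the “traditional” RAG setup I’m comparing against is roughly this shape. This is a minimal, illustrative sketch, not my actual pipeline: it uses a bag-of-words cosine similarity over .txt chunks in place of real embeddings and a vector DB, and the chunk texts and query are made-up placeholders.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words token counts (placeholder for an embedding)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = tokenize(query)
    return sorted(chunks, key=lambda c: cosine(q, tokenize(c)), reverse=True)[:k]

# Illustrative .txt chunks; in a real pipeline these come from your documents.
chunks = [
    "refund policy: customers may request a refund within 30 days",
    "shipping times vary by region and carrier",
    "our office hours are 9am to 5pm on weekdays",
]
print(top_chunks("what is the refund policy", chunks, k=1))
```

The retrieved chunks would then be pasted into the prompt as context, which is the part Assistants’ built-in retrieval was supposed to replace.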


You were just lucky (or unlucky) that you hadn’t seen Assistants working badly before. They are in beta and nowhere close to being production-ready.


The Assistants are powerful and great, but definitely still a bit flaky. I’ve reverted to my own systems for now (RAG, steps and runs, a local code interpreter), but I’m still working with Assistants to learn how to use them most effectively. I’d bet it won’t be long before things stabilise.