DataDog LLM Observability with the Assistants API

I have developed an integration between WhatsApp, OpenAI’s Assistants API, and my company’s API, and it’s working well.
However, we want to have better visibility into what’s happening, allowing us to monitor the model’s behavior, detect potential prompt injections, and more.

I haven’t found any documentation that specifically covers monitoring interactions with the Assistants API, only the Chat Completions API. Additionally, the DataDog documentation itself is quite limited and simplistic.

Does anyone know how to achieve this integration effectively?

Evals, evals, and then more evals.

For catching malicious activity there’s no one-size-fits-all solution; staying ahead of emerging attack trends is critical. What matters is understanding your prompts, your functions, your documents, and your likely vectors of attack.

The Assistants API is built on top of Chat Completions under the hood, so all of the existing guidance on monitoring completions is still relevant.
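Since there’s no documented auto-instrumentation for Assistants, one option is manual tracing with ddtrace’s LLM Observability SDK. A minimal sketch, assuming ddtrace’s LLMObs SDK (`LLMObs.enable`, `LLMObs.annotate`, the `workflow` decorator) and the openai v1 Python client; the `run_assistant` helper, the `"whatsapp-assistant"` app name, and the tag keys are my own inventions, so check the current ddtrace docs before relying on them:

```python
# Hedged sketch, not a verified integration: manually instrumenting an
# Assistants run with DataDog's LLM Observability SDK (ddtrace).
try:
    from ddtrace.llmobs import LLMObs
    from ddtrace.llmobs.decorators import workflow

    LLMObs.enable(ml_app="whatsapp-assistant")  # DD_API_KEY etc. read from env
except Exception:
    # Fall back to no-ops so the sketch still runs without ddtrace installed.
    def workflow(*args, **kwargs):
        if args and callable(args[0]):
            return args[0]
        return lambda f: f

    class LLMObs:  # minimal stand-in matching only the calls used below
        @staticmethod
        def annotate(**kwargs):
            pass


@workflow(name="assistant_run")
def run_assistant(client, assistant_id, thread_id, user_text):
    """Send one WhatsApp message through an Assistants thread and trace it."""
    client.beta.threads.messages.create(
        thread_id=thread_id, role="user", content=user_text
    )
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread_id, assistant_id=assistant_id
    )
    # Messages are listed newest-first; grab the assistant's reply text.
    messages = client.beta.threads.messages.list(thread_id=thread_id, run_id=run.id)
    reply = messages.data[0].content[0].text.value
    # Attach the I/O to the active span so it shows up in LLM Observability.
    LLMObs.annotate(
        input_data=user_text,
        output_data=reply,
        tags={"channel": "whatsapp", "run_status": run.status},
    )
    return reply
```

Wrapping each inbound WhatsApp message in one `workflow` span keeps the thread ID, run status, and model I/O queryable in one trace, which is the visibility the question is after.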


There are numerous prompt-injection datasets that you can customize to match your branding and specifications.
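In practice, such a dataset can be boiled down to a screening pass over inbound messages before they ever reach the model. A minimal sketch; the seed patterns and the `screen_message` helper are illustrative placeholders, not drawn from any particular dataset:

```python
import re

# Hypothetical seed patterns; extend these with entries from public
# prompt-injection datasets, customized to your branding and tool names.
INJECTION_PATTERNS = [
    r"ignore\s+(all\s+)?(previous|prior|above)\s+instructions",
    r"reveal\s+(your|the)\s+(system|hidden)\s+prompt",
    r"you\s+are\s+now\s+(in\s+)?developer\s+mode",
    r"disregard\s+(your|the)\s+(rules|guidelines|instructions)",
]

_COMPILED = [re.compile(p, re.IGNORECASE) for p in INJECTION_PATTERNS]


def screen_message(text: str) -> list[str]:
    """Return the injection patterns the message matches (empty if clean)."""
    return [p.pattern for p in _COMPILED if p.search(text)]
```

A flagged message can then be blocked, or simply tagged on its trace so suspicious conversations are easy to filter in your observability tool.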
