Bounty announcement for Mitigating Prompt Injection Attacks on GPT3 based Customer support App

Head over to the medium post on Mitigating Prompt Injection Attacks on an LLM based Customer support App to participate in a bug bounty for oHandle.

While developing oHandle, which is an application to help businesses automate their customer support operations, we delved deep into the world of mitigating Prompt Injection attacks.

As a lot of you might be aware, Prompt injection was in quite a bit of news recently due to Bing Chat being hijacked to leak its prompt. [AI-powered Bing Chat spills its secrets via prompt injection attack [Updated] | Ars Technica]

We were able to mitigate a lot of these possibilities, scouring the published and reported texts across the LLM landscape. We have come up with some novel mitigations and, in the interest of full transparency, have documented them on the medium post. Head over to the post and the application to give it a whirr.