Code Red: Unprotected GPTs & AI Apps exposed by simple hacks

I want to express my sincere gratitude for the invaluable insights shared within this community concerning GPT security.

While working on custom bots and GPT projects, I stumbled upon the topic of prompt injection right here in this forum. I was genuinely shocked to discover how easily system prompts and uploaded files can be exposed by simple prompts.

The top takeaways I’ve gathered from the discussions in this forum
- No unhackable prompts: Deterrence, not perfection, is achievable.
- Security instruction refinement: Continuously test and adjust.
- Complexity trade-offs: Overly complex measures impact performance.

During my exploration, I also found a comprehensive repository of security prompts right here in the discussions on this forum. I also came across an excellent GitHub repo . Not able to post link so here’s the title

“Protecting GPTs Instructions” and final page “protecting-gpts.md”

For API-based bots, I’m currently investigating Python and Node.js packages that could enhance security during deployment. I don’t have all the answers yet, as I’m still in the learning phase.

I realized that many of my friends and colleagues who publish GPT models on the GPT store or develop custom bots were not fully aware of these security vulnerabilities. In an effort to share the insights from OpenAI and other forums, I’ve shared article on LinkedIn to raise awareness. This incorporates countermeasures suggested in Open AI and other forums.

Not able to post link so here’s the title:

Code Red: Unprotected GPTs & AI Apps exposed by simple hacks.

Also available on my website Tigzig dot com.

This article represents my humble contribution to our collective knowledge and aims to spark broader conversations about security. I’m immensely grateful for the guidance and expertise I’ve gained from this forum and its experts.

It’s surprising that while there’s significant coverage of the GPT store launch in top news outlets, there’s a noticeable blind spot when it comes to discussing security gaps and countermeasures. To address this, I plan to reach out to news outlets individually, sharing the learnings and highlighting the robust discussions happening in OpenAI forums. My goal is to create greater awareness so that developers and GPT publishers can take appropriate measure.

I will continue to actively participate in this forum, eager to learn more. Your pointers, tips, and any angles I may have missed are highly appreciated.

Amar

1 Like