GPT Store Security Findings - Request for Responsible Disclosure Contact

We are a research team investigating the security risks of LLM apps on the GPT Store. We have discovered several unsafe apps exhibiting issues such as inconsistencies between their descriptions and instructions, excessive collection of sensitive user data, and malicious content embedded in their instructions. We would like to share detailed information about these findings responsibly with the GPT Store's security or compliance team to help improve the platform's security measures. Please let us know whom to contact to discuss these findings and any disclosure protocols you would like us to follow. Thank you for your attention to this matter.


I think your first port of call should be the Public Bug Bounty: OpenAI - Bugcrowd

I'll see if there is a section for reporting issues with GPTs.