I’m currently developing an application that lets users upload PDF files, which are then processed with the OpenAI Assistants API. I use the API both to answer questions about the file and to rephrase some of its content.
I’m concerned that users might upload PDFs with inappropriate or malicious content, and I want to make sure my app stays compliant with OpenAI’s policies and handles such cases responsibly.
The image generation API lets you provide an (optional) end-user ID so that abuse can be tracked and reported, but I’m not seeing anything similar for file uploads.
Specifically, I’m looking for guidance on the following:
- Content Moderation: How does OpenAI handle potentially inappropriate content in file uploads? Are there built-in safeguards within the API that flag or block such content?
- Compliance Guidelines: Where can I find detailed information on OpenAI’s compliance requirements related to file uploads? I’ve reviewed the API docs, the terms of service, and the usage policies, but couldn’t find anything specific to uploaded files.
- Best Practices: What are the recommended practices for scanning and filtering files before sending them to the API? Are there any tools or services that the community recommends to handle this?
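For context, here’s the kind of pre-screening I’m considering: extract the PDF’s text locally and run it through the moderation endpoint before any Assistants call. The use of `pypdf` for extraction and the 4000-character chunk size are my own assumptions, not anything from the docs — I’d love to hear if there’s a better-supported approach:

```python
def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split extracted text into chunks sized for the moderation endpoint.
    The 4000-character limit is an assumption, not a documented maximum."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


def pdf_is_safe(path: str) -> bool:
    """Extract the PDF's text and check every chunk with the moderation
    endpoint; reject the file before it ever reaches the Assistants API."""
    from pypdf import PdfReader  # lazy imports: optional dependencies
    from openai import OpenAI

    text = "".join(page.extract_text() or "" for page in PdfReader(path).pages)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    for chunk in chunk_text(text):
        result = client.moderations.create(input=chunk)
        if result.results[0].flagged:
            return False  # don't upload flagged content at all
    return True
```

One obvious gap: this only catches objectionable *text*, not malicious embedded objects or images in the PDF, which is part of why I’m asking about recommended tooling.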
Thank you in advance for your help!