Hi,
We are heavy users of the OpenAI API and have recently started processing a large number of images with GPT-4o's vision capabilities. We currently upload these images inline, encoded as Base64 in the request body. At the volume of images we process in parallel, however, this approach is causing performance and memory problems. Using public URLs is not a viable option for us either, given the sensitive nature of the patient data we handle.
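For context, this is roughly what our current request construction looks like (a minimal sketch using the documented Base64 data-URL format for the Chat Completions API; the file path and prompt are placeholders). Every image is encoded and held in memory inside the request body, which is where the overhead comes from:

```python
import base64


def build_image_message(image_path: str, prompt: str) -> dict:
    """Build a chat message embedding a local image as a Base64 data URL.

    Each image is inlined in the request body, so a large parallel batch
    multiplies memory use -- the issue described above.
    """
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{encoded}"},
            },
        ],
    }
```

The resulting message is passed as-is in the `messages` list of a chat completion request; the Base64 payload inflates each image by roughly a third on top of the raw file size.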
We believe this challenge is common among OpenAI clients dealing with sensitive image data. One possible approach would be an integration with the major cloud providers: clients authenticate securely and grant the API temporary access to specific resources, such as storage buckets, so that images can be fetched directly from a secure location without ever being exposed to the public internet.
Is this something OpenAI has considered, or do you have other solutions to address these kinds of challenges?