Regarding:
https://openai.com/index/response-to-nyt-data-demands/
So the company I’m working for is located in Sweden (EU), and our customers will run into compliance problems if data is stored longer than 30 days.
With this lawsuit, I currently see no option other than migrating everything to Azure-hosted OpenAI instead. I really don’t want to do this because we would lose functionality like web search etc…
I assume the only way out is a zero-data-retention (ZDR) agreement, since that exempts you from the lawsuit’s preservation order. Is that possible if you don’t have an API Enterprise plan?
Are there any other options here?
If you aren’t an enterprise customer then OpenAI is non-compliant. Azure could be an option, or you could move to a competitor.
You could do something like this and bolt it onto your search engine: tag the files that get uploaded to your database from the start and delete them after a certain time. Just don’t use this directly; it was a lazy attempt to get the point across that I had an AI make more coherent, so please check it first.
# Polling watcher: delete a file once it has aged past a threshold
import os
import time

filename_location = 'your_file_name.ext'  # Replace with actual filename
MAX_AGE_SECONDS = 10  # For demonstration; use your real retention window

tick = 0
while True:  # Polling loop
    tick += 1
    # Age in seconds since the file was last modified
    file_age = time.time() - os.path.getmtime(filename_location)
    print(f"Tick: {tick}, File Age: {file_age:.0f}s")
    if file_age >= MAX_AGE_SECONDS:
        os.remove(filename_location)  # Delete the file once it exceeds the window
        break
    time.sleep(1)
This is of course assuming you have a system running long enough for the files to age out. An easier way, now that I think about it more, would be to simply take the difference between the file’s creation time and the current time. Either is a simple solution; you’ll have to tie it into your code, though.
Ok, thank you for your answers. I’m not sure, @liam.dxaviergs, that this only covers files; I assume all requests are logged as well, and you cannot delete chat history etc. We are only using the Responses API with `store` set to `False`.
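For reference, the `store` flag mentioned above is a per-request parameter on the Responses API. A minimal sketch of the request shape, shown as a plain dict rather than a live SDK call; the model name and input are placeholders:

```python
# Hypothetical Responses API request parameters; `store=False` asks
# OpenAI not to persist the request/response pair at rest.
request_params = {
    "model": "gpt-4o",             # placeholder model name
    "input": "customer question",  # placeholder input
    "store": False,                # opt out of response storage
}
```

Whether this opt-out also shields the data from the lawsuit’s preservation order is exactly the open question in this thread.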
Now I found this:
https://openai.com/index/introducing-data-residency-in-europe/
From the link:
“API requests initiated through these Projects will be handled in-region by OpenAI with zero data retention, meaning model requests and responses are not stored at rest on our servers.”
If the project is set up with the Europe region, will the lawsuit’s demands then be a non-issue?
I mean, if you’re not recording the responses or the searches, then wouldn’t the data-deletion policy not kick in? And you don’t have to go through the API to do this; this would simply be a looping container watcher separate from the API. You can’t trust generalistic programs to delete stuff, it’s not how they function. You’ll need a separate algorithm to make sure that works as you intend. For searches and summarising content, AI is great, but for hard limits like data-retention laws you’ll need something separate, or make some tools that the agent can use.
Hmmm, it seems we don’t have an option to select a region when creating projects, so this might not work for us anyway.