What steps can organizations take to ensure data security and privacy when fine-tuning ChatGPT to improve its natural language generation abilities, and how can they ensure ownership and control over the resulting model? Additionally, are there any limitations or considerations organizations should be aware of in terms of OpenAI’s rights to the fine-tuned model?
Was the training data saved by OpenAI during the fine-tuning process of the ChatGPT model?
Does OpenAI retain exclusive ownership of the fine-tuned model, or are there other parties who may have ownership rights?
Does OpenAI have the right to use the fine-tuned model to improve future versions of ChatGPT, or are there limitations on its use?
This is a question for OpenAI, not for a community of software developers. Try:
support@openai.com
Yes. If you read the terms of service on the OpenAI website, OpenAI, their contractors, etc., have full rights; and you, the user, have no warranty, implied or otherwise, that your data will be private.
You would get better answers, I think, reading the OpenAI Privacy Policy:
(c) Use of Content to Improve Services. We do not use Content that you provide to or receive from our API (“API Content”) to develop or improve our Services. We may use Content from Services other than our API (“Non-API Content”) to help develop and improve our Services.
We know that data privacy and security are critical for our customers. We take great care to use appropriate technical and process controls to secure your data. We remove any personally identifiable information from data we intend to use to improve model performance. We also only use a small sampling of data per customer for our efforts to improve model performance.
We understand that in some cases you may not want your data used to improve model performance. You can opt out of having your data used to improve our models by filling out this form. Please note that in some cases this will limit the ability of our models to better address your specific use case.
If a form is submitted, how long until we get an acknowledgment that the opt-out has been confirmed?
If an org signs up on the platform (API Content, I think), there is no link to access the chat console (Non-API Content) directly. However, you can create an account using the same login from the platform, so are the opt-out policies also maintained on Non-API Content when using the same logins?
Found this under “Can I fine-tune OpenAI models using my own data?”
Your fine-tuned models are for your use alone and never served to or shared with other customers or used to train other models. Data submitted to fine-tune a model is retained until the customer deletes the files.
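Since the FAQ says fine-tuning data is retained only until the customer deletes the files, here is a minimal sketch of what that deletion call looks like, assuming the standard `DELETE /v1/files/{file_id}` endpoint from OpenAI's API reference. The file ID and API key below are placeholders, and the request is built but not sent:

```python
# Sketch: deleting an uploaded fine-tuning file via the OpenAI REST API,
# per the quoted policy ("retained until the customer deletes the files").
# FILE_ID and API_KEY are placeholders; in practice, read the key from an
# environment variable rather than hard-coding it.
import urllib.request

API_BASE = "https://api.openai.com/v1"
API_KEY = "sk-..."          # placeholder
FILE_ID = "file-abc123"     # placeholder ID of an uploaded training file

def build_delete_request(file_id: str) -> urllib.request.Request:
    """Build (but do not send) a DELETE request for an uploaded file."""
    return urllib.request.Request(
        url=f"{API_BASE}/files/{file_id}",
        method="DELETE",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )

req = build_delete_request(FILE_ID)
print(req.get_method(), req.full_url)
# To actually send it: urllib.request.urlopen(req)
```

Note that deleting the training file removes the uploaded data from OpenAI's storage but does not delete the fine-tuned model itself; the model is a separate resource.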