I’m currently planning a Custom GPT for a friend who has raised concerns about how much privacy can actually be enforced once the model is created and properly tuned. Ideally, any information that is confidential or restricted within an organization should not be retrievable, directly or by inference, by users who are not approved to access those documents or data.
My obvious first step was to turn off the “Chat History & Training” toggle under Settings → Data Controls.
Reference: “How do I turn off chat history and model training?” (OpenAI Help Center)
I then discovered that specific GPTs remain reachable even after this setting is turned off (which removes sidebar navigation to Conversations and GPTs): by using the direct “Share Link” associated with a GPT, you can still visit and interact with the model as you normally would.
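For reference, the link I used follows the usual GPT share-URL pattern; the ID and slug below are placeholders I made up, not the real ones:

```
https://chat.openai.com/g/g-XXXXXXXXXX-my-custom-gpt
```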
After that interaction, with “Chat History & Training” still set to “Disabled” in my preferences, I noticed that the conversation count shown next to the chat-bubbles icon on the GPT had still incremented.
My question is this: even with “Chat History & Training” set to “Disabled,” my interaction was still logged as a counted instance, which makes me wonder whether disabling the setting is a legitimately viable privacy safeguard. Does that counter imply the conversation itself is still being recorded somewhere?
Any feedback, alternative methods, or workarounds would be greatly appreciated; they would go a long way toward dispelling end users’ fears about the potential risks of working with OpenAI GPT models.
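For context, here is the rough fallback I’ve been sketching in case the ChatGPT-side controls turn out to be insufficient: moving the sensitive workflow onto the API, where (as I understand OpenAI’s current data-usage policy) request data is not used for training by default, and enforcing access control in our own code before anything reaches the model. This is a minimal sketch, not a definitive implementation; the model name, the allow-list, and the helper functions are placeholders I invented for illustration:

```python
# A minimal sketch, assuming the OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY in the environment. The model name, allow-list, and helper
# functions below are placeholders, not a real auth or document system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

APPROVED_USERS = {"alice"}  # placeholder allow-list


def is_approved(user_id: str) -> bool:
    # Placeholder check; in practice this would call our own auth layer.
    return user_id in APPROVED_USERS


def load_approved_context(user_id: str) -> str:
    # Placeholder loader: return only the documents this user is cleared to see.
    return "Internal FAQ: (redacted example content)"


def answer_for_approved_user(user_id: str, question: str) -> str:
    # Access control happens on our side, before anything reaches the model,
    # so unapproved users never get a completion over restricted documents.
    if not is_approved(user_id):
        raise PermissionError("User is not cleared for this knowledge base.")

    context = load_approved_context(user_id)
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": "Answer strictly from the provided context:\n" + context,
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer_for_approved_user("alice", "What does the internal FAQ say?"))
```

That said, I would much rather keep everything inside ChatGPT if the privacy controls there are actually sufficient, hence this post.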