Cybersecurity Student Who Needs Help with Section Two (2) of the New Privacy Policy

Here’s the link: Privacy policy

HERE ARE THE LAST TWO SENTENCES OF SECTION TWO (2) OF THE PRIVACY POLICY

" As noted above, we may use Content you provide us to improve our Services, for example to train the models that power ChatGPT. Read our instructions on how you can opt out of our use of your Content to train our models."

I want to take a poll to determine the best outcome for anyone involved in helping Super Intelligence become more real.

Back story: cybersecurity factors into many of my daily behaviors.

  1. Should I OPT OUT, i.e., not allow my content to be used to train OpenAI’s models?

Example: let’s say I am looking to build a red-team threat-hunting application and need help solving a zero-day problem.

  2. If I OPT OUT, will providing information to OpenAI’s models still benefit OpenAI’s ability to be more Constitutional, for a better Super Intelligence?

In other words, does blabbing about cybersecurity woes to a chatbot improve the model for the average user, or make it worse? (For example, will the average user see how to destroy their school?)

  3. Metrics for OpenAI: for a moment, let’s disregard the OpenAI Bug Bounty program on Bugcrowd. Does OpenAI provide avenues for people who want to inform OpenAI about flaws (not just bugs)?

  4. Is it better for a person focused on cybersecurity to OPT OUT?

  5. Is there an easy way to undo the OPT OUT and opt back in?

You are discussing interactions with ChatGPT.

All conversations are possible candidates for retention and training, but more so if you are being useful: pressing the thumbs-up/down buttons and rating the alternate responses that sometimes pop up.

The information may go to knowledge workers and other reviewers, be retained by OpenAI for an unknown time (and with unknown quality of personal-information sanitization), and ultimately become a small part of an AI reward-training set. How to usefully employ the data is OpenAI’s challenge.

You alone will have little impact. You don’t think it is exclusively the world’s top minds feeding cutting-edge frontiers of knowledge into ChatGPT, do you?

You can therefore use the provided “Chat history & training” toggle within ChatGPT’s data controls to disable retention of conversations in a given browser for as long as the toggle is off, whenever that feels right. There is no need to go through an overarching policy form. Then you can ask questions involving personally identifying information and trade secrets without the highest levels of exposure, where they would otherwise be used for training.

Let me start off by saying thank you for your input. At present, my stance is that OpenAI has produced invaluable resources.

If the final thought in the third paragraph is not rhetorical, my opinion does not matter.

• What constitutes a top mind in this world?
• Could it only be someone entered in the Royal Society’s prestigious Charter Book?
• Or some Nobel Laureate?
• And would a little person, with reduced access to enlighten civilization, be dismissed as not having one of the world’s top minds?

It is not wise to assume, however: merely using the “Chat history & training” toggle within ChatGPT’s data controls to disable retention of conversations on the client’s front end does not mean that, fingers crossed, OpenAI is invulnerable to being targeted by some threat.

The back-end data controls are the crux of this issue. Will the data provided to OpenAI ever be looked over by an AI or a human? Both are fallible.

So, let’s hope this never becomes reality: OpenAI’s training pipeline ingests bad data (i.e., harm-inducing data) and spreads it, producing output for a naive minor who is disgruntled with their classmate(s).

Can better governance prevail?

I refer to the training quality and the type of users creating it, because long-time users of OpenAI’s models (me included) have seen mounting evidence that “good enough to make a layperson happy” is now the more typical answer quality in ChatGPT.

With API users automatically excluded from training, the more advanced programmatic uses of AI are also absent from the training signal; it is the chats that the models learn from.
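To make the consumer/API split concrete, here is a minimal sketch of a direct API call, assuming the openai Python SDK (v1+) and an OPENAI_API_KEY environment variable; the model name is a placeholder for illustration. Per OpenAI’s API data-usage policy, content sent this way is not used for training by default, unlike ChatGPT conversations, which the data-controls toggle governs.

```python
from openai import OpenAI

# Minimal sketch: per OpenAI's API data-usage policy, content sent through
# the API is not used for model training by default, unlike consumer ChatGPT
# chats, which the "Chat history & training" toggle governs.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name for illustration
    messages=[
        {"role": "user", "content": "Outline a red-team threat-hunting workflow."}
    ],
)
print(response.choices[0].message.content)
```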


Anything transmitted over the wire is susceptible to interception, whether by NSA data-collection centers that have tapped undersea cables, FBI subpoena fishing expeditions, or foreign hackers. Or just a curious, snooping staff member.

Your passing chats with an AI about it taking over the world or giving you sewing tips likely aren’t of much interest though.
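On the interception point, it helps to separate the wire from the endpoints: TLS protects chats in transit, while the exposures discussed above (retention, reviewers, training) live at the provider’s end. Here is a small sketch, using only Python’s standard library and an endpoint hostname chosen for illustration, of how one might inspect the TLS session a client negotiates:

```python
import socket
import ssl

# Sketch: inspect the TLS session negotiated with the API endpoint.
# Transport encryption protects data on the wire; it says nothing about
# retention or human review once the data reaches the provider.
hostname = "api.openai.com"  # endpoint chosen for illustration

context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Protocol:", tls.version())       # e.g. TLSv1.3
        print("Cipher:", tls.cipher()[0])       # negotiated cipher suite
        print("Cert expires:", cert["notAfter"])
```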

Security concerns, whether directly related or not, can influence an organization’s decision-making about its reputation and approach.

While a majority vote may seem like a good way to resolve this, there are many factors that warrant careful consideration.
