Discussion about releasing GPT-3.5 model weights

The dictators already have GPT, and better.

Scammers, hackers … Why only think about what they can do with it, and not what their enemies can do with it?

The world’s an awfully depressing place when you only see the darkness and overlook the light.

Because the enemies of scammers and hackers, aka the “good guys”, already have access to GPT :laughing:


Okay, and how is that working out so far?

Very little transparency, and a presently … less-than-ideal situation with the model since November.

I’d like something I can host locally and back up and revert to a previous version if it suddenly begins to consistently refuse to honor niche details of more complex requests.
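For what it's worth, open-weight models already make this workflow possible. Here is a minimal sketch of what I mean, assuming the Hugging Face transformers library and an open-weight model such as Mistral-7B-Instruct (the revision value below is a placeholder for a real commit hash):

```python
# Minimal sketch: run an open-weight model locally, pinned to a specific
# revision so its behavior can't change out from under you.
# The model name is one example from the Hugging Face Hub; the revision
# is a placeholder -- substitute a specific commit hash to freeze it.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "mistralai/Mistral-7B-Instruct-v0.2"
REVISION = "main"  # e.g. a specific commit hash instead of a branch name

tokenizer = AutoTokenizer.from_pretrained(MODEL, revision=REVISION)
model = AutoModelForCausalLM.from_pretrained(MODEL, revision=REVISION)

prompt = "Summarize the trade-offs of releasing model weights."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights live on your own disk, you can back up a snapshot and roll back to it whenever a newer revision starts behaving differently, which a hosted-only model simply doesn't allow.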

Anyway, it's a moot point. More advanced things than GPT-3.5 are already in the open-source pipeline in many countries. Too many to stop. Not even the EU's posturing on the matter can change this now.

You’re either part of this new trend where AI is used to magnify everything (good and evil) or trying to maintain an increasingly faltering sense of control as more and more capabilities slip through the closed-loop fingers at the hands of national, corporate, and organizational actors.

If people commit crimes with AI, target the people, not the AI. You will never successfully constrain this technology, due to the nature of modern "free" societies and the internet that has developed on them. You can, however, use the time-tested tactic of isolating and liquidating their operators.

You seem to be conflating a few different ideas and it’s muddying whatever point you’re trying to make.

First, there’s no “targeting” of the AI.

OpenAI has not released model weights for GPT-3.5—in part—because they determined it would be unsafe and unethical to do so. It doesn’t matter what anyone else anywhere else does at any other time.

You can think they’re wrong or over-cautious, but it’s their model, their conscience, and ultimately their decision.

If you want something you can run locally, there are countless other LLMs which can do that, some of which you correctly note approach the capabilities of GPT-3.5. So, I'm not sure where the outrage is coming from.

Then, when you write, "You're either part of this new trend where AI is used to magnify everything (good and evil) or trying to maintain an increasingly faltering sense of control as more and more capabilities slip through the closed-loop fingers at the hands of national, corporate, and organizational actors," you present something of a false dichotomy. First, I think there is a not-insignificant number of people who are working towards using AI to, in your parlance, only magnify the good. And second, I'm not entirely sure what the other option even means—trying to maintain control as more capabilities slip through closed-loop fingers?

I guess my overall goal here is to point out that there's not some grand conspiracy at play to keep the GPT-3.5 model weights out of your hands; OpenAI just seems to think it wouldn't be a good idea for them to be universally available and doesn't want to be responsible for whatever ills might arise if they were.

If other models with public weights are used to do bad things it’s not on OpenAI.

Lots of researchers in many fields have struggled with how to responsibly publish findings which they fear could have grave consequences. At the end of the day, all we can do is trust that whatever decision they've come to was not arrived at lightly and respect their choices.

On your first point, I disagree with you. The EU and US have been openly discussing AI regulation at the official level for a while now, and in particular the EU has been openly proposing some pretty radical and counter-productive legislation. This is not part of some “grand conspiracy” as you seem to imply (we will address this later) but rather the activities of various disconnected entities all seeking their own best interests and acting publicly on the record.

Moving on to the next point: it is true that I disagree with OAI's determination, and it is also true that it is their model and they therefore have the legal right to do whatever they want with it. My point is not that OAI is forced to agree with me, but rather that I do not agree with OAI. To clarify, disagreeing with OpenAI does not equate to a claim that they are forced to behave as I wish.

I do not enjoy the other services I can run locally as much, and therefore wish to run 3.5 locally despite the fact that OAI does not wish for this presently. The divergence of opinion between me and OAI was the reason for writing that part of my comment. The comment was disagreeing with the existing policy, not indicating that, because I disagree with their policy, they are bound to adopt my position.

On to your next point: no, you do not know anyone who is using AI to "only magnify the good." People have good and bad aspects (subjectively speaking), and AI magnifies both within them. Even if individual people were capable of only magnifying the good, there would still be other people who use it for bad. It is magnifying both positive and negative aspects of reality, and I believe attempts to prevent this by strictly controlling who has access are misguided.

Now the conspiracies … Why are you bringing up grand conspiracies? What are you talking about? I’m discussing historical trends. Where did conspiracies start to factor into this? Where is the collusion? Where are the shadowy groups meeting in the dark? I’m perplexed by this implication of “conspiracies”. Can you explain more what you are talking about and what caused you to adopt this view of my original comment?

Additionally, we can do far more than "just trust" OAI. We can engage with the open-source projects that, as you point out, exist. It doesn't end there: we can express our divergent viewpoints on the community forum, where developers and technical leaders working on other AI projects may see them, as I have done. We can engage in political and non-governmental activities to influence the future of AI policy. There's a lot we can do besides just being quiet and accepting other people's decisions, even if they have the legal right to make those decisions without our input; fortunately, OAI seems interested in hearing outside voices.

You seem to have missed this,

Please explain what makes you say I've missed it, and what specifically you're hoping for me to take from it. I believe that I have addressed it sufficiently and would like you to address some of the points I've made rather than … whatever you call this manner of engagement.

Because we’re talking about the company OpenAI and the actions they do or don’t take.

  • OpenAI isn’t targeting AI.
  • The existence of other models is immaterial to the discussion of what OpenAI chooses to do with theirs.
  • I wrote, “I think there is a not-insignificant number of people who are working towards using AI to, in your parlance, only magnify the good,” emphasis added.
  • “slip through the closed-loop fingers at the hands of national, corporate, and organizational actors” smacks of conspiratorial thinking.
  • Regarding "we can do far more than 'just trust' OAI": I wrote we should trust them to have taken care in their decision not to release the weights of their models.

But, as this thread has gone completely off the tracks, I'm going to split it off into a new topic where, hopefully, you can consolidate your thoughts around a central thesis and present them in a clear and coherent way.


I've taken some time to reflect on our conversation and would like to share a few thoughts, hoping to contribute to a more productive dialogue in our community.

Firstly, the characterization of my viewpoints as "grand conspiracies" seems to undermine the essence of constructive debate. Such labeling can marginalize others' perspectives and reduce the richness of our discussion. It's crucial in a community-focused dialogue to approach each argument on its merits, rather than prematurely ascribing to it a dismissive categorization.

Furthermore, while I understand the intention behind creating a new thread might have been to clarify the discussion, the manner in which it was initiated and justified—suggesting a need for my thoughts to be more coherent—could set a prejudicial tone for the ensuing dialogue in the minds of onlookers and future participants.

I believe our conversation would benefit from a renewed focus on directly engaging with the substantive points raised, rather than on the perceived style or presentation of those points. Encouraging a balanced exchange, where each perspective is thoroughly considered and addressed, fosters a more inclusive and productive environment.

If you would like to engage with my points, I will be happy to continue the conversation. However, as I do not accept your portrayal of my position as incoherent or "a grand conspiracy theory," I will not be undertaking the effort to produce new output for you until you have engaged substantively with what you have already been provided.