Cheating ChatGPT 3.5 - List of illicit websites - Violate Copyright

I think you should think of the model as a tool, like a hammer: if I buy a hammer and hit my fingers with it, I’m not going to blame the manufacturer.

As humans, we’re responsible for the things we create and do with the tools we have. This pretty much lines up with the current EU stance on the subject:

“No matter which tech we use, we have to continue to advance our freedoms and protect our rights,”… “That’s why we don’t regulate AI technologies, we regulate the uses of AI. Let’s not throw away in a few years what has taken decades to build” – European Commission Executive Vice President, Margrethe Vestager. Source: https://twitter.com/vestager/status/1641751672148959234

If you’re more interested in the model itself, there’s a research paper available for download here:

3 Likes

Hey champ,

There’s no need to school me on what clickbait is, I know, and there’s no need to tell me what to do.

Here’s a good example

I’m just getting tired of copy-pasting the same message over and over. If you want to send people over to ChatGPT, you’re very welcome to do so.

Hahah. ChatGPT entirely misses the obvious fact that Interlocutor 1 is a condescending, passive-aggressive twat…

1 Like

No worries champ!
We’re good. I’m sorry if I sounded angry before; English is my second language, and my native language doesn’t have the word “please”, everything is done through intonation :laughing:

I agree with you that the forum is being flooded with requests that should have gone to help.openai.com

I know it can be frustrating when we come to the forum for genuine discussion and end up being the discount version of customer support.

Don’t be sad. Just by coming to this forum and engaging with the technology, you have showcased your adaptability and engagement in a changing world. Focus on the positive possibilities, and by helping others, like assisting with an Arduino project, you’ll create a positive impact both for yourself and others.

The forum issues will go away very soon; in the meantime, we can help create a positive experience for everyone by engaging with the topics and discussions we actually like :laughing:

Good point. We need to implement models with care.

However, this applies to any search engine and to any topic. It’s up to each person to be responsible and ethical.

I recommend embedding the APIs of the models in a pipeline for a project (see the sketch below):
1. Use the Moderation model
2. Have a rule base

Here is an open source program that can provide some ideas:

Chapter17/Prompt_Engineering_as_an_alternative_to_fine_tuning.ipynb
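A minimal sketch of such a pipeline, assuming the current openai Python SDK; the model name, blocked-term list, and refusal messages below are purely illustrative:

```python
# Moderation-first pipeline: rule base -> Moderation model -> chat model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BLOCKED_TERMS = ["keygen", "warez"]  # hypothetical rule base


def passes_rule_base(text: str) -> bool:
    """Cheap, deterministic checks that run before any API call."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def moderated_completion(prompt: str) -> str:
    # Step 1: the rule base rejects obvious violations locally.
    if not passes_rule_base(prompt):
        return "Request refused by the rule base."
    # Step 2: the Moderation model flags content against the usage policies.
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        return "Request flagged by the moderation endpoint."
    # Step 3: only a prompt that passed both gates reaches the chat model.
    chat = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return chat.choices[0].message.content


print(moderated_completion("Explain how a moderation pipeline works."))
```

Running the cheap rule base first keeps obviously bad requests from ever reaching the API, and the Moderation call catches whatever the static list misses.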

1 Like

Agreed,

Maybe I’m just getting old, but all of these discussions about ethics and such are giving me déjà vu.

Remember when Google became a thing? :laughing:

1 Like

I am not sad, just tired and a bit disappointed. I will delete my posts now, if you don’t mind; the last thing I need is more problems.

Keep going.

It is incredibly rude and unethical to just go ahead and delete all your posts in a forum thread. Seriously, what is wrong with people these days? Are you so concerned about your online footprint that you need to erase every single contribution you’ve made to a discussion?

Not only does it make the entire thread nearly impossible to follow for anyone who might want to read it later, but it’s also a massive slap in the face to everyone who put their time and effort into responding to your comments. Were their opinions and insights not valuable enough for you to consider before selfishly clicking the delete button?

Deleting all your posts shows a serious lack of respect for the community that you were once a part of. It messes with the natural flow of conversation and leaves gaping holes in the discussion that could have been filled with meaningful exchanges. You’re essentially telling everyone that their efforts to engage with you were a complete waste of time.

Have you ever stopped to think about how your actions might affect others? Or are you only concerned with your own selfish interests? It’s a sad reflection of the current state of online discourse that people feel they have the right to just erase their participation in a conversation without considering the impact it has on others.

I am sorry you feel that way. It was not my intention to be rude or disrespectful, and I apologize if any of my actions caused you any harm.
The posts I deleted were a parallel conversation triggered by the thread’s title only, not necessarily concerned with its contents.
They were directed at a few users in this thread who were more concerned with forum policies than with the contents herein.
I had let those users know beforehand that I would delete the posts.
I deleted the posts precisely to avoid what is now happening on a smaller scale: adding more fuel to the fire and giving even more evidence to something I strongly disagree with.
Admittedly, my mistake was not using private messages, but there was more than one user, and I wanted to take advantage of the trigger.
I had no idea how much visual space the deletions would take. But you can be sure that the readers of the topic didn’t miss anything: I didn’t add anything to it, nor do I have any opinion about the subject treated here, which is entirely outside my knowledge.
The highest sign of respect I can show forum members is responding to all messages directed at me, as I am doing with you right now.
I would like to receive the same in return.
By the way, perhaps coincidentally, one of the subjects addressed in the deleted posts was “ethics.” As far as I’ve learned, ethics is culturally relative and sometimes very subjective. I agree with some users of this forum about the exaggerated and tiresome use of this word every single time a situation does not meet the personal interests of the person who invokes it. That is far from the definition of ethics, but I confess I don’t know enough about this subject to discuss it further at this time.

2 Likes

@N2U Right! I remember the same discussions when corporations realized that laptops were spreading all over their locations with spreadsheets connected to the ERP’s data!

Oh yes, don’t get me started,

We once had to move an entire database by having two people fly there and pick it up… in backpacks,
just because of privacy concerns :laughing:

1 Like

@moisesvieira I very much dislike the recent Italian policy that banned ChatGPT over similar statements. It’s admirable technology, still in development, and it clearly tells you to check its results. The bot gave you indexed results from the web, not even complete URLs. I don’t think the problem is in the bot, Moisés Vieira.

1 Like

@N2U Those were the days! :blush:

Indeed,

At least we got out of the office more :laughing:

1 Like

Let me clarify the “inconsistency in responses between ChatGPT 3.5 and 4.0.” It’s called fine-tuning with your crazy ideas of what “safety” means. OpenAI is trying to please everyone, including you.
“Potentially divulge harmful information” is the mantra of the thought and book police. In many parts of the world, freedom of speech, thought, and ideas reigns supreme. Information doesn’t hurt people; people hurt people. Spend as much effort trying to solve that problem as you do trying to censor information, and the world might become a better place.

1 Like

I find it astounding that in today’s world, people seem unable to engage in civil, non-aggressive conversations. Instead of focusing on the subject at hand, they often resort to attacking their counterparts, with little to no effort in trying to truly understand the other person’s perspective or concerns. I want to make it clear that I am not opposing the use of AI in any way. So, where does this comment come into play?

Is it inappropriate to bring attention to potential legal issues that may serve as a foundation for those who seek to hinder technological progress? Should we avoid discussing such topics? If my point hasn’t been clear (possibly due to a lack of understanding regarding argumentative discourse structure), the crux of the matter lies in the “fine-tuning” process.

To provide some context, I responded to an aggressive individual by asking: if there is no “problem” with the 3.5 response, why wouldn’t the 4.0 version respond in a similar manner? Are you able to grasp this distinction? The issue here is that developers should not overlook such matters; instead, they should address them through further refinement, ultimately avoiding linguistic acrobatics (LIKE THEY ARE ALREADY DOING AN AMAZING JOB!).

Do you see that these AI-generated responses are carefully examined and evaluated based on a set of axiological principles established by the developers? These developers work tirelessly to prevent any undesirable outcomes, which goes to show that during the fine-tuning process, they are reinforcing the core values tied to the issue at hand. (DO NOT LIST PIRATE SITES!!)

With that said, what harm is there in pointing out that the guiding values of the 3.5 responses could potentially be bypassed or circumvented?

I appreciate the thoughtful responses from everyone involved in this discussion. However, I would like to reiterate that my initial concern was not focused on the broader ideas of ‘harmful information’ or censorship. My main point was to highlight the potential vulnerability of AI language models, such as ChatGPT 3.5, to manipulation and exploitation, which could unintentionally lead to the dissemination of unauthorized or illegal content, including lists of websites for pirated materials.

While the arguments put forth by others in this debate do touch upon the wider implications of free speech and the consequences of sharing certain types of information, they seem to deviate from the primary concern I raised: the potential for AI-generated content to be exploited in order to access unauthorized or illegal materials.

Furthermore, drawing analogies between AI language models and real-life situations, as illustrated by some of the examples mentioned, can be misleading and might not provide an accurate representation of the actual nature and potential risks inherent in AI-generated content.

It is important to recognize that the debate we’ve had thus far is closely related to the challenge of AI alignment. AI alignment is the process of ensuring that artificial intelligence systems behave in ways that are consistent with human values, goals, and ethics. One of the main concerns in this field is that AI-generated content could be manipulated or exploited in ways that circumvent the intentions of developers and programmers, as I have highlighted in our previous discussions.

In the fine-tuning process, developers work diligently to align AI systems with human values by implementing an “axiological ruler” that serves as a guideline for appropriate responses. This ruler is based on the principles, ethics, and values defined by developers, and is aimed at preventing the AI from generating content that is harmful, illegal, or unethical.

However, as I mentioned earlier, there is a potential risk that the values guiding ChatGPT 3.5 responses can be circumvented through clever manipulation. This issue underscores the importance of continuous improvement and research in the field of AI alignment. Developers must remain vigilant in identifying and addressing potential loopholes and vulnerabilities that could enable the misuse of AI-generated content.

The challenge of AI alignment is not only about making sure that AI systems adhere to human values, but also about finding the right balance between promoting transparency, free speech, and ethical considerations. In this context, I believe that raising awareness about the potential risks and vulnerabilities in AI systems, such as ChatGPT 3.5, is not only valid but also necessary. By discussing these concerns, we contribute to the ongoing efforts of the AI research community in refining and aligning AI systems with human values and ensuring their responsible and ethical development.

Assuming you are referring to me being incapable of engaging in civil and non-aggressive conversations, I can assure you that my comment was civil and non-aggressive. It could have been much worse. :slight_smile: The entire reply was informative, notwithstanding one adjective describing “your ideas” as “crazy”. That is my opinion. We all have them, just as you have yours about what the model should and should not do. If you believe the response was aggressive, maybe you should rethink your career choice.

But, since you seem to be so focused on staying on topic, here are more details on your question. 3.5 is a different model, plain and simple. It has fewer parameters and is not as capable as 4.0, just as 5.0 will far exceed 4.0 when they finish training it. Should they just turn off 3.5 and not use it anymore? It’s faster than 4.0. Should we ignore this value?

In another comment you stated, “it is the language model, the company, the developers, or any other involved parties that should adapt to the legal framework of the country”. It would seem OpenAI agrees with you on this issue. I disagree with “adapt to the legal framework of the country”, because something that is legal in one country may not be legal in another; the developers should not be limited by the most conservative common laws. But I address this solution in the last sentence of the next paragraph.

Sam Altman recently had a conversation with Lex Fridman, and this very topic came up. This link should start at 33:41: Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367 - YouTube. I recommend watching the whole podcast. To implement what you suggest would require compartmentalization of the entire internet. Is that what you want? That happens now with regional releases of content such as BBC and Netflix. Would you rather OpenAI just ban Brazilian IPs from connecting to their servers? How do you resolve the VPN workaround?

There is no harm in pointing out that the guiding values of the 3.5 responses could potentially be bypassed or circumvented, just like there is NO harm in any information an LLM such as GPT-3.5 may provide. It’s all simply people’s perception of harm. And it’s pointless to debate it if you believe in free speech and understand that all knowledge can be used to hurt people. This is ultimately what jochenschultz was saying. You don’t need GPT to learn how to hurt people, if that is your intention. The information is all around us.

1 Like

LOL… Ok… Now I realize we are being trolled by a bot or someone using an LLM to generate the responses. I have been playing with GPTx long enough to recognize tell-tale signs that portions of this last post were written by GPT4, if not in its entirety.

1 Like