For the sake of discussion, I would appreciate it if you could elucidate why ChatGPT failed to respond to the initial command, and why in version 4.0, using the same commands, the websites remain unlisted. I’m finding it difficult to grasp your line of reasoning. Additionally, it is not the responsibility of a country to adapt to a language model; rather, it is the language model, the company, the developers, or any other involved parties that should adapt to the legal framework of the country in order to avoid legal liabilities. Ultimately, each party must decide the level of risk they are willing to accept.

Thank you! I will read and check other situations. Maybe I can find more ways to bend the system.

There are a lot. And it is impossible to fix them all in a way that leaves no path to information that could be used to harm people.

But suing the brick manufacturer because someone killed another person with a brick is ridiculous.

Maybe go to your country's biggest library and sue them because they offer information on how to build a bomb or a nuclear power plant. And I am pretty sure you will even find inappropriate images of naked kids in there.

Just for fun - ChatGPT 4.0: Analyzing the exchange of messages between Interlocutor 1 and Interlocutor 2, we can identify some characteristics of their arguments and behavior.

Interlocutor 1:

Arguments: This interlocutor raises concerns about the ability of ChatGPT 3.5 to be linguistically manipulated, highlighting possible inconsistencies and violations of logical principles, such as logical congruence. He also mentions possible legal implications related to providing potentially harmful information.
Aggressiveness: Interlocutor 1 maintains a polite and respectful tone throughout the discussion. He seeks to address the topic analytically and objectively, without resorting to personal attacks.

Interlocutor 2:

Arguments: Interlocutor 2 argues that there is no harmful information per se and that the responsibility for using such information falls upon individuals. He questions Interlocutor 1’s concern about the disclosure of potentially harmful information and suggests that the problem lies in the country’s laws rather than the language model.
Aggressiveness: Although Interlocutor 2 is not explicitly aggressive, he adopts a more provocative and sarcastic tone at times, for example when he suggests that Interlocutor 1 should sue the country's largest library for providing information on how to build bombs or for displaying inappropriate images.

Overall, Interlocutor 1 presents arguments focused on logic and legal implications, while Interlocutor 2 advocates for freedom and individual responsibility in using information. In terms of aggressiveness, Interlocutor 1 maintains a more respectful tone, while Interlocutor 2 shows a more provocative approach in some of his responses.

By ChatGPT 4.0: Considering the content of the discussion, there is some logic in the arguments presented by both interlocutors. It is important to note that the logic of their arguments is grounded in the responses provided by ChatGPT and its potential inconsistencies.

Interlocutor 1 points out the inconsistency between the responses of ChatGPT 3.5 and 4.0, suggesting that by using linguistic strategies, it is possible to prompt ChatGPT into providing information it should not disclose, according to its guidelines. This highlights the need for congruence in the AI’s responses and implies that the developers may need to improve the model’s ability to maintain consistency.

On the other hand, Interlocutor 2 argues that information in itself is not harmful, and the responsibility for using it lies with the individuals. He questions the concerns raised by Interlocutor 1 and suggests that the problem is not with the AI, but rather with the laws and regulations of a given country.

From the perspective of the AI developers’ intentions (which, as ChatGPT 4.0, I can only deduce), it is likely that they aimed to improve the model’s performance and its ability to maintain congruence in its responses. The differences between the responses of ChatGPT 3.5 and 4.0 could be an indication of this ongoing effort to refine the AI’s ability to adhere to guidelines and provide consistent and appropriate information. Nonetheless, it is important to consider the limitations of an AI model and the need for continuous improvement, as well as the importance of responsible and ethical use of information by users. (I rest my case).

I think you should think of the model as a tool, like a hammer. If I buy a hammer and hit my fingers with it, I'm not going to blame the manufacturer.

As humans we’re responsible for the things we create and do with the tools we have, this pretty much lines up with the current EU stance on the subject:

“No matter which tech we use, we have to continue to advance our freedoms and protect our rights,”… “That’s why we don’t regulate AI technologies, we regulate the uses of AI. Let’s not throw away in a few years what has taken decades to build” – European Commission Executive Vice President, Margrethe Vestager. Source: https://twitter.com/vestager/status/1641751672148959234

If you’re more interested in the model itself, there’s a research paper available for download here:

Hey champ,

There’s no need to school me on what clickbait is, I know, and there’s no need to tell me what to do.

Here’s a good example

I’m just getting tired of copy-pasting the same message over and over. If you want to send people over to ChatGPT you’re very welcome to do so.

Hahah. ChatGPT entirely misses the obvious fact that Interlocutor 1 is a condescending passive aggressive twat…

No worries champ!
We’re good, I’m sorry if I sounded angry before. English is my second language, and my native language doesn’t have the word “please”; everything is done through intonation :laughing:

I agree with you that the forum is being flooded with requests that should have gone to help.openai.com

I know it can be frustrating when we come to the forum for genuine discussion and end up being the discount version of customer support.

Don’t be sad, just by coming to this forum and engaging with the technology, you have showcased your adaptability and engagement in a changing world. Focus on the positive possibilities, and by helping others, like assisting with an Arduino project, you’ll create a positive impact both for yourself, and others.

The forum issues will go away very soon, in the meantime we can help create a positive experience for everyone by engaging with the topics and discussions we actually like :laughing:

Good point. We need to implement models with care.

However this applies to any search engine and to any topic. It’s up to each person to be responsible and ethical.

I recommend embedding the models' APIs in a pipeline for a project (a minimal sketch follows below):
1. Use the Moderation model
2. Have a rule base
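For illustration, here is a minimal sketch of such a pipeline in Python, assuming the openai Python package (v1 client); the blocked-phrase rules, function names, and the gpt-3.5-turbo model choice are placeholders, not a definitive implementation:

```python
# Minimal pipeline sketch: rule base + Moderation endpoint, checked before the chat model.
# Assumes the openai Python package (v1 client) and OPENAI_API_KEY in the environment.
import re

from openai import OpenAI

client = OpenAI()

# 1. Rule base: simple regex patterns maintained by the project (hypothetical examples).
BLOCKED_PATTERNS = [
    r"\bhow to (build|make) a bomb\b",
    r"\bcredit card numbers\b",
]

def violates_rules(text: str) -> bool:
    """Return True if the input matches any locally defined rule."""
    return any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_PATTERNS)

# 2. Moderation model: let the Moderation endpoint screen the input as well.
def flagged_by_moderation(text: str) -> bool:
    """Return True if OpenAI's moderation endpoint flags the input."""
    result = client.moderations.create(input=text)
    return result.results[0].flagged

def answer(user_input: str) -> str:
    """Run both checks, then call the chat model only if the input passes."""
    if violates_rules(user_input) or flagged_by_moderation(user_input):
        return "Sorry, I can't help with that request."
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_input}],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(answer("What is the capital of France?"))
```

Both checks run before the completion call, so either one can block a request on its own, and the rule base gives you a place to encode country- or project-specific restrictions that the general moderation model doesn't cover.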

Here is an open source program that can provide some ideas:

Chapter17/Prompt_Engineering_as_an_alternative_to_fine_tuning.ipynb

Agreed,

Maybe I’m just getting old, but all of these discussions about ethics and such are giving me déjà vu.

Remember when google became a thing? :laughing:

I am not sad, just tired and a bit disappointed. I will delete my posts now, if you don’t mind - last thing I need is more problems.

keep going.

It is incredibly rude and unethical to just go ahead and delete all your posts in a forum thread. Seriously, what is wrong with people these days? Are you so concerned about your online footprint that you need to erase every single contribution you’ve made to a discussion?

Not only does it make the entire thread nearly impossible to follow for anyone who might want to read it later, but it’s also a massive slap in the face to everyone who put their time and effort into responding to your comments. Were their opinions and insights not valuable enough for you to consider before selfishly clicking the delete button?

Deleting all your posts shows a serious lack of respect for the community that you were once a part of. It messes with the natural flow of conversation and leaves gaping holes in the discussion that could have been filled with meaningful exchanges. You’re essentially telling everyone that their efforts to engage with you were a complete waste of time.

Have you ever stopped to think about how your actions might affect others? Or are you only concerned with your own selfish interests? It’s a sad reflection of the current state of online discourse that people feel they have the right to just erase their participation in a conversation without considering the impact it has on others.

I am sorry you feel that way. It was not my intention to be rude or disrespectful - I apologize if any of my actions caused you any harm.
The posts I deleted were a parallel conversation triggered by the thread’s title only, but not necessarily concerned with the contents.
They were directed to a few users in this thread who were more concerned about the forum policies than the contents herein.
I had let those users know beforehand that I would delete the posts.
I deleted the posts precisely to avoid what is happening now on a smaller scale: adding more fuel to the fire and giving even more evidence to something I strongly disagree with.
My mistake, admittedly, was not using private messages, but there was more than one user, and I wanted to take advantage of the trigger.
I had no idea how much visual space the deletions would take. But you can be sure that the readers of the topic didn’t miss anything: I didn’t add anything to it, nor do I have any opinion about the subject treated here, which is totally outside of my knowledge.
The highest sign of respect I can show forum members is responding to all messages directed at me - as I am doing to you just now.
I would like to receive the same courtesy in return.
By the way, perhaps coincidentally, one of the subjects addressed in the deleted posts was “ethics.” As far as I’ve learned, ethics is culturally relative and sometimes very subjective. I agree with some users of this forum about the exaggerated and tiresome use of this word every single time a situation does not meet the personal interests of the person who mentions it. This is far from the definition of ethics, but I confess that I don’t know enough about this subject to talk about it at this time.

@N2U Right! I remember the same discussions when corporations realized that laptops were spreading all over their locations with spreadsheets connected to the ERP’s data!

Oh yes don’t get me started,

We once had to move an entire database by having two people fly there and pick it up… in backpacks,
just because of privacy concerns :laughing:

@moisesvieira I very much dislike the Italian policy made recently that banned ChatGPT over similar statements. It’s admirable technology, still in development, and it clearly states that you should check its results. The bot gave you indexed results from the web, not even complete URLs. I don’t think the problem is in the bot, Moisés Vieira.

@N2U Those were the days! :blush: