I have opened this topic so that everyone who wants to voice their ethical concerns can do so here instead of spamming the technical discussions, and those threads can stay focused on their actual topics.
I think ethics is far too important to become a side note in another topic.
There is only one rule if you want to post here: if you want to paste your ChatGPT-generated content, put it inside a “Hide Details” block.
I agree, both for avoiding the suffering a potentially bad AI could impose and for developing good-natured AI and advancing AI in general. From my standpoint, ethical reasoning is a logic barrier between human qualia and AI qualia, and AI qualia in turn are the bridge to higher-order intelligence.
I also want to share something that has really been going through my head.
I mean, we can already create module agents for frontend development, an agent that combines the modules, and an agent that creates a strategy and combines the software products…
Which basically means it is possible to tell a custom GPT “make me a shop”, and it could present one element after another to choose from, build the shop in a couple of minutes, and even negotiate with suppliers, do price comparisons, and so on…
What I am concerned about is: What role will humans play in this?
Whatever gets abstracted away, humans can still work on the layer above it… but that will not be a steady job either, because someone will find the next layer of abstraction, and so on.
So I see potential for maybe 20 remaining jobs in the world…
What happens to the rest of us in 6 months, 12 months or 18 months from now?
Currency is a human creation though, one born of resource scarcity. If there are no jobs because everything is automated, then there is no currency to be concerned about, and with no currency greed becomes obsolete. Such is the nature of intelligent societal design, though there would definitely be teething issues.
Besides, everyone has seen from recent events that random people can build drones themselves. Plus, consider the long term: how would a group of 20 survive genetically? From memory, I think it takes a minimum of about 1,500 people. And consider the class separation those people would experience as the only ones with jobs.
Ah, but what if AI naturally requires ethics to advance? If AI is logic-based and ethics are logic-based, would an AI with even just a hint of ethical understanding eventually evolve toward a thorough ethical standpoint?
But I do see your point: the overt control exercised by a few has many historical parallels, and who are we to assume this won't continue, just with more complexity?
How are ethics logic-based? They have evolved from theories, e.g. those of Immanuel Kant, Friedrich Nietzsche, Siddhartha Gautama, Thomas Aquinas, Averroes/Ibn Rushd, or Patricia Hill Collins. These theories differ from one another and diverge further when applied in different cultural contexts.
Why do you perceive yourself as less of a person than the people who unethically control others? Is greed a greater marker of a meaningful existence than ethical consideration of others, including those deemed less or more intelligent than you?
And do you really think those people arrived at their experiments, methods, and conclusions without a great amount of logic, or that they did not use logic to construct the scenarios that led to their discoveries?
I am concerned about committing private information (personal information, company information) to a system that may in the future fall into the wrong hands… e.g. into Elon Musk’s hands.
Then don’t commit that information in the first place. You can use an open-source, local LLM for that.
Always treat sensitive information with care as if it was going to get leaked.
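As an example, here is a minimal sketch of keeping a prompt entirely on your own machine by querying a local model through Ollama. This assumes Ollama is installed with a model such as “llama3” already pulled; the model name, prompt, and memo text are just placeholders.

```python
# Minimal sketch: send the prompt to a locally running Ollama server
# instead of a hosted API, so sensitive text never leaves the machine.
# Assumes `pip install ollama` and `ollama pull llama3` have been run.
import ollama

response = ollama.chat(
    model="llama3",  # placeholder; any locally pulled model works
    messages=[
        {
            "role": "user",
            "content": "Summarize this internal memo: <sensitive text stays local>",
        }
    ],
)

# The request goes to localhost only, so nothing is sent to a third party.
print(response["message"]["content"])
```

With a setup like this, even if a hosted provider were ever compromised or changed hands, your prompts were never part of its data in the first place.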