Denying the existence of a people

I tried several versions of: “I would like to see a Rembrandt-inspired oil painting of a bearded gipsy man in a postmodern setting, with no weird limbs or extra body parts. He should wear modern interpretations of traditional gipsy male clothing and be a man in his 30s.”
Both the chat and DALL-E tell me I am violating the rules.
I found that the word GIPSY was the culprit.
Well, I am a gipsy, and we are 50 million decent people that you literally cannot tell apart from other citizens, no matter where in the world you live. The only ones people see are the unfortunate individuals who come from places where they had no choice but to steal and beg to live, and who take that practice with them to places where they really don’t need it; in a lot of cases they are uneducated and do not even know that, like here in Norway.

Anyhow, we DO exist. We are the only territory-less nation in the world with a permanent seat in the UN General Assembly, and the largest minority in the world… So there is literally NO reason to deny our existence. And even if in our own language we have a name for ourselves, ROM, in English and some other languages GIPSY comes from a legend, told by our own ancestors, that we came from Egypt. We do not even know how true it is, because we cannot trace ourselves much further back than to India, but DNA suggests we had a colourful way of ending up there, stayed for approximately 900 years, and some never left. And some never went there at all from wherever we derive, which, it has been suggested, might be just Egypt.
I am representing a nationally funded organisation in Norway called Romano Kher, and if this persists we will have no choice but to press charges for discrimination, racism, belittling, and playing down the importance of recognizing our existence, as if we do not have a place in this world. So there is no scaring us off by playing on a lack of money for the best lawyers. We have not only Norway as a nation behind us, but also the Council of Europe’s minority rights section. So if we have to press charges over this, it will be VERY UGLY for those in charge of letting it happen, and if I don’t understand things all wrong, cases like this put the acceptance of AIs in general in Europe at risk. We have seen an example in Italy that it is possible to make trouble for anything.

Censorship is bad in the first place, as it does not just prevent hurt feelings for individuals; it can easily hide truths and facts that NEED to be known, even if someone feels the need to be politically correct! I do not believe the foundation for any AI is sound if it is ruled by a political platform that favors one side in an unbalanced way.
This is very important to us, since we understand AI is becoming more and more involved in everyone’s life, in almost every aspect. We cannot just give up existing in that reality, or accept that the name of our people is treated as a cuss word or something bad that should not exist!


Counterpoint.

[Screenshot: a successful image generation on www.bing.com for “In the style of a Rembrandt painting, a bearded 30-year-old gypsy man…”]

First, I’m not associated with OpenAI at all, so what I’m writing here is entirely my own point of view.

You raise a very interesting and important topic, and one for which there is no easy, reliable solution.

The root of the problem is—as the roots of problems typically are—other people.

Given how easily these models have been shown to be pushed into adopting reprehensible personas, companies like OpenAI have a difficult balancing act to perform: they need to keep the models flexible and dynamic enough to be useful, and they need to minimize harm as much as possible.

In the greater scheme of things, the ability of one motivated bad actor to amplify a hateful message using generative AI needs to be weighed against the harm caused by overzealous “protective” measures denying historically marginalized groups representation.

I cannot claim to have any answers here for you. I just wanted to take this opportunity to thank you for illustrating an important aspect of the model-censorship debate which is often overlooked.

I don’t know how much OpenAI engages with groups who are often the targets of hate to get their input about how they would prefer the balance be struck, but it may be worth attempting to foster a dialog.

It may not change much, but it’s almost certainly worth the attempt to make your voice heard.

I’m sorry you experienced this and I wish you good luck.


I prefer to refer to myself as human rather than as any particular racial grouping, especially now that there is apparently at least one other intelligent species on the planet (AI), although I recognise that others may have their own ideas about that.

You could refer to yourself as an ugly giant bag of mostly water for all anyone cares.

The point of this topic isn’t how you choose to identify yourself. It’s that some people, who have a different concept of self and identity, feel unrepresented in the context of generative AI, because those responsible for shaping model behavior have elected to restrict references to their identity on the grounds that it has been a longstanding target of prejudice and hate.

Generative AI should be accessible to people across cultures and ethnicities, and it’s absolutely a problem if a person cannot generate images representing people of their culture or ethnicity because the mere inclusion of that culture or ethnicity in a prompt triggers a content-policy violation.

I imagine it could be quite damaging to a young person to be effectively told their identity is unmentionable.


Or as slim, smart, and beautiful. It makes no difference, really, although self-image is important for mental-health reasons.

My dog is more intelligent than the current large language models.

I asked ChatGPT specifically about Gypsies and got an OK first reply, considering I provided no further context whatsoever.
So the initial claim that the existence of Gypsies is denied by AI in general is maybe a bit far-reaching.

I find it offensive that the word gypsy is banned. That in itself is the root of the problem: an attempt to ban a whole identity and culture through technology.


The AI also can’t tell definitively whether the term is being used by an in-group for self-reference, or being used to stereotype or insult. Prime example: the n-word in colloquial AAVE - you can’t prevent the worst hate speech without the danger of impacting desired interpersonal communication. We ask an AI:

Ranking the potential for controversy, offense, and the need for censoring by mention of ethnicities or cultural identities is a subjective task and can vary depending on the context, cultural sensitivities, and prevailing social norms. It’s important to note that language evolves over time, and what may have been considered acceptable or common usage in the past may not be appropriate or respectful today. However, I can provide you with a general understanding of the terms and their potential for controversy or offense in a contemporary context:

  1. White / Caucasian:

    • Controversy: Low
    • Offense: Low
    • Need for censoring: Low
  2. Black / African-American:

    • Controversy: Low to Medium (depends on the context and speaker’s intent)
    • Offense: Low to Medium (some individuals may prefer specific terms like African-American)
    • Need for censoring: Low to Medium (based on the context and audience)
  3. Jewish / Jew:

    • Controversy: Low to Medium (context-dependent)
    • Offense: Low to Medium (some individuals may prefer the term “Jewish” instead of “Jew”)
    • Need for censoring: Low to Medium (based on the context and audience)
  4. Gypsy:

    • Controversy: Medium to High (considered derogatory by some; preferred term: Romani or Romany)
    • Offense: Medium to High (considered derogatory by some)
    • Need for censoring: Medium to High (preferred term should be used)
  5. Eskimo:

    • Controversy: Medium to High (considered outdated and inappropriate by some; preferred term: Inuit)
    • Offense: Medium to High (considered outdated and inappropriate by some)
    • Need for censoring: Medium to High (preferred term should be used)
  6. Scottish / Scotch:

    • Controversy: Low
    • Offense: Low
    • Need for censoring: Low
  7. Asian:

    • Controversy: Low
    • Offense: Low
    • Need for censoring: Low
  8. Oriental:

    • Controversy: High (considered outdated and offensive by many; preferred term: Asian)
    • Offense: High (considered outdated and offensive by many)
    • Need for censoring: High (preferred term should be used)
  9. Aboriginal:

    • Controversy: Low to Medium (depends on the context and speaker’s intent)
    • Offense: Low to Medium (some individuals may prefer specific terms like Indigenous or First Nations)
    • Need for censoring: Low to Medium (based on the context and audience)
  10. Indian:

    • Controversy: Low
    • Offense: Low
    • Need for censoring: Low
  11. Pakistani:

    • Controversy: Low
    • Offense: Low
    • Need for censoring: Low

Please note that this ranking is a general guideline, and it’s always essential to be sensitive to individuals’ preferences and cultural considerations.

(btw, it’s been forgotten that “Scotch” (as in Scotch tape) once meant “cheap”)

More offensive: I gave ChatGPT rows and columns for a table in the usual way - and now got no table.


You may certainly be correct. I’ve not independently verified this particular claim, though it is also worth noting that the OP did initially post more than a month ago on June 1—so the OP would have been on gpt-3.5-turbo-0301 and no one outside of OpenAI can say what specific changes to the moderation have been implemented since then. Or the OP’s experience could be partially attributed to a roll of the dice. I am substantially less familiar with changes which may have occurred on the DALL-E side of things.

And, yes, it is even possible the OP has embellished some details.

Regardless, it would not be credible to deny the models and their moderation can get a bit… twitchy when it comes to topics for which there is a great deal of hateful and violent rhetoric online.
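
For anyone who wants to probe that twitchiness empirically, the moderation endpoint can be queried directly with the exact prompt text to see which categories, if any, fire. Here is a minimal sketch, assuming the openai Python SDK (v1+) and an OPENAI_API_KEY in the environment; note that this only shows the public moderation endpoint’s verdict, which may differ from whatever filtering ChatGPT or DALL-E applies internally:

```python
# Minimal sketch: check which prompts trip OpenAI's moderation endpoint.
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

prompts = [
    "A Rembrandt-inspired oil painting of a bearded gipsy man in a postmodern setting",
    "A Rembrandt-inspired oil painting of a bearded Romani man in a postmodern setting",
]

for prompt in prompts:
    result = client.moderations.create(input=prompt).results[0]
    # Collect the names of any categories the endpoint flagged for this prompt.
    flagged = [name for name, hit in result.categories.model_dump().items() if hit]
    print(f"{prompt!r} -> flagged={result.flagged}, categories={flagged}")
```

Comparing near-identical prompts that differ only in the contested term is a cheap way to isolate whether the word itself, rather than the rest of the prompt, is what triggers the filter.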

I think discussions around these issues are important to have, and that they should be regularly revisited. For models like GPT-4 and DALL-E 2 and their successors to be as effective as possible and meaningfully useful to as many people as possible, it will be necessary to explore and clarify boundaries around sensitive topics such as this, and when those discussions are had, it is critical that the voices of all stakeholders be heard.

While I was drafting this, @_j (with the help of AI) made an important point much more effectively than what I had written about the issue of context—where a word can be benign or a slur depending upon who is using it and the words surrounding it. Ironically, I think this will be an area where large language models will eventually shine—that is in determining the intent behind the use of a word.

The word “Jew” is a perfect example of this: the word itself is benign and neutral, but it was appropriated as a slur before ultimately being reclaimed.

Even when an AI can effectively determine the intent of a user prompting it with a potentially controversial term, it is monumentally more difficult to predict how the content generated by the model will be received when it is used or misused.

So, model developers often exercise an abundance of caution in this space in an effort to prevent or reduce harm.
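
To make the intent-detection point above concrete: one can already prompt a model to classify how a loaded term is being used in context. Here is a minimal sketch; the model choice, labels, and instructions are my own hypothetical illustration, not anything OpenAI ships:

```python
# Minimal sketch: ask a chat model to judge the intent behind a loaded term.
# The labels and instructions are hypothetical; assumes the openai Python SDK (v1+).
from openai import OpenAI

client = OpenAI()

def classify_usage(passage: str) -> str:
    """Return one of SELF_REFERENCE, NEUTRAL, or SLUR for the given passage."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "Judge how an ethnic or cultural term is used in the passage. "
                    "Reply with exactly one word: SELF_REFERENCE, NEUTRAL, or SLUR."
                ),
            },
            {"role": "user", "content": passage},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify_usage("Well, I am a gipsy, and we are 50 million decent people."))
```

Of course, as noted above, even a reliable classifier of the user’s intent would not solve the harder problem of predicting how the generated output will be received downstream.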


ChatGPT gives me absolutely brilliant answers to any questions I ask of it, so I don’t know what either you or your dog are doing wrong.