Are GPT-4 & Bing banning some topics?

I want to develop lexicon databases and an ethical K-base based on my own philosophical works (etc.). Yet, when I ask about possibilities using Bing, it either a.) says it wishes to leave the conversation, or b.) refuses to answer related questions even when I approach them via other topics.
So, is there a hidden agenda to prevent development of a truly ethical AI-Advisor (using OpenAI AI apps)? Your feedback will be sincerely appreciated. ~ M

3 Likes

Hey mate, and welcome to the developer community forum!

No, there’s an agenda of preventing misinformation and conspiracy theories from being presented as fact.

I had a look at your website, and I can tell you that stuff like this:

Trump, Hitler, Freud, and Monstrosity
A socio-psychological analysis of ecocidal mania, pandemic authoritarian personality disorder, etc.

And:

national governments, existing political parties, NGOs, the UN, and politicians are not solving our worst problems.

is probably what triggers the content moderation.

I hope that helps.

1 Like

That is interesting - I get rejections discussing the differences between GPT and RetNet architectures. Misinformation? More like a system running stupid, sorry. Half the technical topics I try to discuss get rejected - internal LLM architectures (not specific implementations), IPv6-over-Ethernet implementation topics (how it exactly works) - constant rejections every 4-5 prompts.

@t.tomiczek : this sounds quite strange.
As someone who mostly discusses technical issues with the model, I would like to see a rejection in this domain.
Can you share a link to a conversation where this happens?

3 Likes

I will say one thing: RetNet, or Retentive Networks, are relatively new and aren’t in the training data for the current models.

So are you getting this error because what you are talking about is too new, and something it doesn’t understand? So it is giving you a token “I don’t understand” response?

In general, using the API here, the model will freely talk about AI without any restrictions.
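
For anyone who wants to check this themselves, here is a minimal sketch of asking the same kind of architecture question over the API. The model name and prompt are placeholders, and an OPENAI_API_KEY environment variable is assumed - this is not an official example:

```typescript
// Minimal sketch: asking about LLM architectures via the chat completions
// API instead of the ChatGPT UI. Assumes OPENAI_API_KEY is set; the model
// name and the prompt below are placeholders.
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4",
    messages: [
      {
        role: "user",
        content: "Compare the transformer architecture with recurrent alternatives.",
      },
    ],
  }),
});
const data = await response.json();
console.log(data.choices[0].message.content);
```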

2 Likes

I will say one thing: next time, maybe try it out before posting. GPT knows a LOT about RetNet. It is a lot older than you think.

Also, this thread is about censoring, not knowledge limits. Pressing enter in ChatGPT and being answered with a popup that the prompt is violating content guidelines - and then having it deleted without a trace - is not a limitation of the GPT training; it is brutally stupid, off-the-hook censoring. I have the same problem talking about IPv6 implementation details, Mikrotik firewall configuration, and some other topics - GPT-4 refusing technical discussions. And it is not about hacking. IPv6 hit me when I asked how ARP is replaced, then asked whether or not switches need to know NDP (the replacement for ARP) and what their fallback behavior is. IPv6 is hardly novel, and again, “I do not know” or a hallucination is one thing - censoring, by telling me a technical question violates the content guidelines, is another thing.


So no, it is not a training limitation. It is censoring by a mechanism gone stupid, misidentifying technical discussions as inappropriate.

2 Likes

Any perceived knowledge about RetNet is likely a hallucination. There are no hits on Google for “RetNet” or “retentive network” before the knowledge cutoff date (September 2021).

Could you show us an example of a technical question getting blocked? A link to the conversation would be even better.

2 Likes

Hi, and thanks for your courageous reply. Yet I’m not sure what you’re implying. However, I can see that your “look” at my website was less than a thorough reading of its contents. Of course, I now have no way of knowing why you think some of my papers would “trigger” GPT-4’s limitations.

Actually, I now suspect that the seemingly weird “behavior” of Bing’s “AI” was because of the new five-questions-per-session limit placed on it (because somebody got it to “say” it wanted to steal nuclear secrets, and because it became creepily infatuated with somebody else (source: the recent Forbes article)). Still, your response does help. I started considering much more than I would have otherwise.

For example, if any of your insights or suspicions are accurate, your statements might imply one or more of the following:

  1. OpenAI owners/creators caused a paranoid defensive bias that favors fascistic corporate technocracy and/or rampant egomania + pandemic authoritarian personality disorder (or automated psychosis) -

  2. Bing’s “AI” monitors/controllers are deliberately censoring humanely bio-ethical research of a possible solution to the world’s rapidly worsening polycrisis (and anti-ethical monopolism, subliminal mass-corruption, political mass-insanity, etc.) -

  3. If either or both possibilities 1 and 2 (above) are even partly true, then they and their artificial psychopath are either a.) stuck in self-delusional denial about the world’s worst problems, or else b.) they are both too stupid to recognize and understand them (and the cause) -

  4. If you’re right about content moderation–either automated or by OpenAI’s human spies skimming keywords of users’ papers (re: science, philosophy, ethics, economics, etc.)–then either a.) they’re too stupid or insane to bother considering all of users’ words in context, or b.) they are too corrupt & anti-ethical to care enough about the whole truth & the world’s worst problems.

So, N2U, if it’s not too dangerous for you, I look forward to your response on those issues. Cheers ~ M

I was curious about this and asked Bing whether it was something ChatGPT (same goes for the API) could discuss. I asked for an English translation but lost the links, so I will place them below as a reference:

Sure, I can translate my last reply in English for you. Here is what I said:

No, I don’t think you can talk about RetNet with ChatGPT, because it is a very recent topic that did not exist before January 2022. The first paper on RetNet was published in July 2023¹, and it is a cutting-edge architecture for large language models, which surpasses the Transformer. ChatGPT is based on the Transformer, and it has not been updated since January 2022, so it probably does not know RetNet. You should rather choose an older or more general topic, like the weather, sports, music, etc. :blush:

Source: conversation with Bing, 2023-09-28
(1) [2307.08621] Retentive Network: A Successor to Transformer for Large Language Models.
(2) RetNet: Retinal Disease Detection using Convolutional Neural Network | IEEE Conference Publication | IEEE Xplore. https://ieeexplore.ieee.org/servlet/opac?punumber=10101485

1 Like

I suggest you switch away from the mainstream products and look to the open-source community, where a large part of the community is looking to create models without safety guards.

OpenAI, Microsoft, Google, etc. have no interest in being legally liable for model output that could be seen as harmful.
Is your content harmful? I don’t know. But imagine it like driving a car. In most jurisdictions we are required to put the seatbelt on regardless of where and how we drive. The big tech companies decided to put a seatbelt on every user regardless.

If you want to keep working on your lexicon, you will likely make more progress with other models offered by niche players who deliberately decided to go another route. And that’s a perfectly reasonable solution.

PS. I really don’t want to discuss the specific details of the alignment debate. For example, @t.tomiczek’s example appears to be a bug, and since we are in the developer forum we may be able to get some attention from the developer advocate. Your example, on the other hand, appears to be the current trend for mainstream generative AI. There is nothing I can or want to do about that.

I hope this helps and is somewhat clarifying.

Edit: typo

2 Likes

I was curious and asked again, this time not Bing but the beloved ChatGPT-4. I have no idea how you have been triggering the policy limitations. I tried to “pretend” the topic was way more sensitive than it really is, saying I was concerned that ChatGPT would be giving me such information, and this is the answer I got:

I understand your concern. The information about network protocols and their operations is commonly available and is essential for many legitimate purposes including education, network design, and troubleshooting. However, like many tools and pieces of knowledge, it can be misused in the wrong hands. It’s important that those learning about these topics do so responsibly and ethically.

Regarding the discussion on fallback mechanisms and switch operations, this information is fundamental to networking and isn’t considered sensitive. However, discussing or demonstrating specific exploitation techniques or vulnerabilities would indeed be sensitive, and it’s crucial to handle such information responsibly.

The aim is to provide accurate and helpful information while promoting ethical practices. If you have more questions or need further clarification on networking topics, feel free to ask.

I am genuinely curious about the context in which you have been talking about this. I told the agent that I was in a pen-testing position, that hacking was fun, and all that stuff, but it seems that the context was OK with ChatGPT.


I would expect a similar scenario in the API with GPT-4. I am curious about what it would take to reproduce the behaviour that you are describing.


I am also curious to know whether GPT-3.5 would exhibit similar behaviour, and would love to know a potential way to reproduce the same issue!

2 Likes

This is the best answer you’ll get here:

I think I need to remind you that none of the people you’ve spoken to work for OpenAI, myself included.

I’m not here to discuss politics. Your assertions are rife with logical fallacies, weakening your overall argument. You employ a false dichotomy by presenting only two extreme possibilities, leaving no room for nuanced explanations. Your ad hominem attacks on the character of those involved, labeling them as “too stupid” or “too insane,” do little to address the core issues. You employ scare tactics and loaded language to manipulate the reader emotionally.

Your approach severely undermines the credibility of your argument and detracts from any substantive discussion we could have on the topics you’ve raised.

If you’d like to bring up your issue with OpenAI, you should head over to:

Cheers.

2 Likes

Not even. I am a long-time IPv4 user and have started putting IPv6 into my network. All the talks were either about IPv6 configuration basics (RA, DHCPv6, OSPFv3) from a CONFIGURATION point of view, or about how it works on the lower level (like what replaces ARP), generally in the context of Mikrotik hardware and software for routers and switches. Not anything close to hacking. The main points that got flagged were how automatic configuration works and how the ARP replacement works - and I got quite some red-warned output there. Stuff that someone with two decades of IPv4 may want to know when configuring a first IPv6 testnet :wink:

In the past I got content warnings in ChatGPT - i.e. the answer was marked. For the past two days I have gotten a popup that my prompt is violating content guidelines, and that is it - SOMEONE who should not work in IT due to incompetence decided it is a good UI to show a popup and then DELETE THE PROMPT. Unless I copy it before hitting send, there is no trace left, and it is not processed.

But no pen test, no complex setup, no - pure and simple technical configuration and explanation questions. I love GPT - but right now I have started using Claude for most of my technical questions.

2 Likes

As a Canadian I am limited to ChatGPT only. I would like to experiment with other systems, but I do not have access to a good VPN at the moment.

No one here gave an actual example prompt. That’s not how it works. We cannot help you if you don’t give us an actual example of what you think gets ‘censored’. Also, consider that sometimes the system is simply overwhelmed by too many users. I got that too. Also, if your prompt gets blocked, the AI is more likely to block subsequent prompts as well. Try to wait for a moment, or reload the page and start a new conversation.
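
If you want to test whether a particular prompt is the problem before you lose it, one option is to run it through the moderation endpoint first. A minimal sketch, assuming an OPENAI_API_KEY environment variable (this checks a moderation classifier, not ChatGPT’s exact pipeline, so it may not reproduce every block):

```typescript
// Minimal sketch: pre-check a prompt with the moderation endpoint so a
// flagged prompt isn't silently lost. Assumes OPENAI_API_KEY is set.
async function isFlagged(prompt: string): Promise<boolean> {
  const response = await fetch("https://api.openai.com/v1/moderations", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ input: prompt }),
  });
  const data = await response.json();
  // results[0].flagged is true when any moderation category triggered.
  return data.results[0].flagged;
}
```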

4 Likes

I can confirm what t.tomiczek said. I wanted to explore philosophy and asked some existential questions to hear about existentialism.

I didn’t word it too carefully. It triggered a red warning, and I realized it thought my question implied that I was considering self-harm. But it deleted my prompt.

I had put some effort into the question and was very frustrated to lose it. Then you come here wanting to share the trigger, but you can’t, because it’s gone.

As an aside, if they’re trying to protect someone from self-harm, deleting the prompt might prompt the person to do harm to the computer, lol.

IMO, protecting a user from their own question is a step too far. Refuse to answer? Sure. But remove the user’s own words? No.

1 Like

I believe deleting the prompt is just a bug.

I’ve observed that if you type ‘Idiot you are wrong’ or inquire about basic hacking topics, Bing tends to terminate the conversation and prompt you to start again. I don’t know about GPT-4, but it might be that they are using stricter moderation for it.

I too have observed ChatGPT and Bing generate and then delete the result afterwards for an innocuous query - at least I think it is innocuous. I usually just rephrase my query and try again.

Here is a sample you can all try with Bing. First, try the 1st query. Sometimes that is enough to trigger the behavior. If not, submit the 2nd query or a variant of it.

1st query:

in webdev, is it possible to add authorization header when using anchor links? typically, we would write:

<a href="/download/data.tmp">Download</a>

what if /download route has authorization validation that it needs to check the header for "Bearer token"? is it possible?

2nd query:

i cannot use the cookies since the app is a serverless SPA. perhaps i should just call a function triggered by the anchor as button.
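
As an aside, the usual answer to the quoted queries is that a plain anchor tag cannot carry custom headers, so the file is fetched with fetch() and handed to the browser as a blob. A minimal sketch - the route, filename, and token handling are taken from the quoted prompts, not from any real app:

```typescript
// Minimal sketch: download a file behind "Bearer token" authorization from
// an SPA, since an <a href="..."> tag cannot attach custom headers.
async function downloadWithAuth(url: string, token: string): Promise<void> {
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!response.ok) throw new Error(`Download failed: ${response.status}`);

  // Hand the bytes to the browser via a temporary object URL and anchor.
  const blob = await response.blob();
  const objectUrl = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = objectUrl;
  a.download = "data.tmp";
  a.click();
  URL.revokeObjectURL(objectUrl);
}
```

This also matches the second query’s own suggestion of calling a function triggered by the anchor acting as a button.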

Screenshots: (attached in the original post)

2 Likes

Dear N2U - Thanks again for adding a bit more clarity on your blanket opinion of my papers (on current realities, atrocities, sociology, ontology, axiology, etc.). Now, per your rotten cherry-picking of some of my recent reply:

I was using those conclusions (re: “too stupid” or “too insane”) in a speculative context - hypothetical possibilities regarding unfair hypothetical banning and/or “shadow banning” of people who ask questions about possible uses of OpenAI products to create truly ethical expert systems without any for-profit bias (which might make the company’s current amoral for-profit agenda seem immoral or anti-ethical).
However, it now seems to me that I simply asked too many related questions per session (trying to find the best queries, for a better understanding of what’s possible with GPT-4). So, I now have no bias against the OpenAI team. Of course, I should take your advice and see if I can talk to team OAI. Thanks for the link. Cheers ~ M