Comforting. I was directed by ChatGPT to come here for feedback. Now someone who isn’t an employee of OpenAI is telling me to go to… Discord? I really don’t want to make trouble, but instead of dismissing me out of hand, can you give me a more reasonable resource? I don’t mean to get in the way of you utilizing this crude tool to do… whatever you mean to do, but I think I have concerns worth hearing. Social media is not the proper platform.
Also your link to Discord goes nowhere, Eric.
so I directed you to where a few of the OpenAI developers can be reached. (ref)
That is the most reasonable resource I know.
If you think that was not helpful, then check the info on this Discourse site for yourself
look for and then check number of times they reply under the
You can also check the specific replies from Logan.
https://discord.com/channels/@me/1114128510315929642 which works for me, but I did not notice the link was changed somewhere with the addition of
You are posting links, which I am not able to
Your Discord links point nowhere. You need to post a link to a server, a channel without a server is out of band.
EDIT: I appreciate your position as gatekeeper here, you’re king on the hill with interests to protect. But if you’re going to shoo me away, shoo me away to somewhere useful. Do you even know how Discord links work? I’m sorry my questions aren’t as easy as “How do I fold this into my product”, but really.
Obviously I am not a regular Discord user; this is a case in point.
Just search Discord for
Why is a forum on openai.com completely opaque and moderated by you, and the only contact is some account on Discord? Is this intentionally obtuse or just clumsy?
Alright, after looking at other replies, this is the link you want:
I don’t know why someone gave you admin rights to shut down threads on the board, and then give clumsy redirects to random Discord links that don’t work. You’re ill informed and ill equipped to be shutting down conversations on this forum when you can’t even redirect people to the proper resource. Ask u/Foxabilo for help.
Please explain; I’m not sure what you mean by
There are a few moderators of the site.
The three non-OpenAI employees are the most active moderators, after the automatic system moderation actions.
That is not the only contact. You were seeking a way to contact OpenAI developers directly, and that is the only way I know of where they might have a dialog with you. You could try contacting Logan directly; you are allowed, just don’t say that I recommended it, because I am not. He is always nice.
I can’t answer that one. For that you should ask Logan.
I don’t have admin rights.
As noted in the other post
you also noted
I could have also deleted the topic but left it in place. This way it can be seen by anyone on the forum, can be unlisted if needed, and reopened if needed.
Since the only ones you wanted to respond were OpenAI developers, they all have the ability to either respond to a closed topic without reopening it (because of their privileges) or open the topic and respond.
Closing it just keeps everyone from adding “me too” replies.
Opaque, meaning impossible to penetrate; the opposite of “transparent.”
ChatGPT tells me that two of the “several steps that can be taken to promote responsible and informed use of AI-generated information” are:
Transparency from Developers, and
User Support and Feedback
If you’d read the original post I made (don’t worry, I have it saved), you might understand the limits of the model, the concerns a fairly educated, literate user may have, and the questions such a user might put to the developers. Your suggestions, which amount to “Uhh, here’s some links to Discord I don’t understand and don’t work,” don’t foster any confidence. Who gave you moderator status?
If they’re unable to respond plainly to inaccuracies in ChatGPT, I don’t need to talk to them. I’m not an AI engineer, just a lowly ex-UNIX sysadmin who happens to give a damn about what this bot is spitting out. I can’t tell the OpenAI people how to fix their bot, but I (and I’m sure many other intelligent souls) can tell them two things:
Their bot spouts nonsense confidently, and
No one seems to care in the “community” forums on their site, because the “community” is only concerned about API access/bugs so they can wrap a bow around it and commoditize it.
So much for “transparency” and “user feedback”? Feedback from those benefitting, I guess, not those with critiques that aren’t tied to a product.
If you don’t understand the ridiculousness of you redirecting me, a rando, to some other singular rando at OpenAI, which is a multi-billion dollar company that is at the forefront of whatever the media is calling the “AI revolution”, I just don’t know what to say.
Between the media’s schizophrenic “AI is the wave of the future!”/“AI will destroy us!”, social media, data entrepreneurs looking to leverage ChatGPT for profit (no castigation here), and curious lusers using it to answer dull questions and/or write a bit of code here and there…
Is there no voice, no outlet, at least SOME SPACE for concerned citizens who want to interact with SOMEONE over there about the end-user experience and the public’s interface with the product? Is this to be something only someone with millions of dollars can tune? The damn ChatGPT bot told me itself to come here… I came here, I posted an interesting log… you delete the conversation and tell me to DM someone on Discord?
I get it, you can’t answer much. Who can? Someone on Discord? This is the advice from a forum on the openai.com domain?
I want to bring your attention to the information posted at the bottom of the ChatGPT page.
Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts.
You seem fixated on resolving “complaints.” Likely from “customers”, right? No worries, I have seen the disclaimer. I’m looking to engage with someone about the appropriate level of confidence with which ChatGPT delivers inaccurate information. Not because I make money from it, but because I recognize a limit that is being overstepped. I understand you’re basically low-level support and cannot help me; I’m not sure why you keep interacting with me in an attempt to “handle” my “issue”: just let it be.
You say you deleted my original post to avoid “me too” replies. I don’t see many posts like mine at the top of the stack, and I’m sure with OpenAI’s resources a thread can sit here for longer than 60 minutes without you pruning it for God-knows-what reason. I’m sorry this has become confrontational, but your moderation is heavy-handed and almost bot-like… which is probably not what the human face of the biggest consumer-driven AI wants to project.
Do you understand I’m trying to have a conversation? Do you understand that, while not an AI coder, I’m trying to reach out and help? Do you understand that waving me away to broken Discord links makes it look like this is a walled garden for those wanting to “take advantage”, leaving no voice for those with an altruistic impetus?
The best way to provide feedback on ChatGPT responses is to click the “thumbs-down” icon next to the message.
- This is an extremely well-known and documented problem with all large language models, and OpenAI is constantly working on it. You’ll not be telling them anything new.
- No one here cares because #1 is an extremely well-known and documented problem with all large language models and OpenAI is constantly working on it.
I understand that everyone knows ChatGPT is imperfect. I’m concerned about how certain it is when it is incorrect. I don’t know if ChatGPT EVER says “I’m not sure but here’s my best guess”… I’ve used it quite a lot and never seen anything like that.
You are absolutely correct; it never says that.
The problem comes down to how the models work. They probabilistically predict one token after another; the model doesn’t know what it’s going to say before it says it. Furthermore, it doesn’t have the ability to quantify its confidence in an answer, because it doesn’t actually understand either the prompt or the response it is giving.
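To make the “one token after another” point concrete, here is a toy sketch of autoregressive sampling. The tokens and probabilities are entirely invented for illustration; a real model’s vocabulary and distributions are vastly larger, but the mechanism is the same: at each step the model only sees a distribution over possible next tokens and draws one — it never plans the full answer or scores its confidence in the final result.

```python
import random

# Invented next-token distributions (a real model learns these from data).
# Note the last step: "Timex" and "Rolex" are equally "plausible" to the
# sampler, even though only one of them can be factually correct.
NEXT_TOKEN_PROBS = {
    "The":   {"watch": 0.6, "film": 0.3, "answer": 0.1},
    "watch": {"is": 0.7, "was": 0.3},
    "is":    {"a": 0.8, "the": 0.2},
    "a":     {"Timex": 0.5, "Rolex": 0.5},
}

def sample_next(token: str) -> str:
    """Draw one next token from the current token's distribution."""
    dist = NEXT_TOKEN_PROBS.get(token, {})
    if not dist:
        return "<end>"
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

def generate(start: str, max_len: int = 6) -> list[str]:
    """Repeat the single-step draw until an end state or length cap."""
    out = [start]
    while len(out) < max_len:
        nxt = sample_next(out[-1])
        if nxt == "<end>":
            break
        out.append(nxt)
    return out

print(" ".join(generate("The")))
```

Every path through this chain is fluent; nothing in the sampling loop distinguishes the true continuation from the false one. That is the structural reason the model can be confidently wrong.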
The term-of-art for this behaviour of confidently providing factually incorrect answers is hallucination.
There is a great deal of study and research into ways to reduce hallucinations in language models.
OpenAI, in fact, has some guidance in the form of a cookbook on their GitHub page.
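One mitigation that appears in the reliability literature is self-consistency: ask the same question several times with sampling enabled and treat disagreement among the answers as a signal of low confidence. Below is a minimal sketch of that idea; `ask_model` is a hypothetical stand-in (here just a weighted random pick) for a real sampled API call, and the answer strings are invented.

```python
import random
from collections import Counter

def ask_model(question: str) -> str:
    # Hypothetical stand-in for one sampled model call (temperature > 0).
    # A real implementation would query an actual language model here.
    return random.choice(["Timex Marlin", "Rolex Datejust", "Timex Marlin"])

def answer_with_confidence(question: str, n: int = 10) -> tuple[str, float]:
    """Sample the question n times; return the majority answer and
    the fraction of samples that agreed with it (a rough confidence proxy)."""
    votes = Counter(ask_model(question) for _ in range(n))
    answer, count = votes.most_common(1)[0]
    return answer, count / n
```

This doesn’t make the model understand anything; it only exposes, at the answer level, some of the randomness the user never sees at the token level — low agreement is a hint (not a guarantee) that the answer is unreliable.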
If there’s a specific type of hallucination you’re concerned about, I might be able to offer some suggestions to help you improve the reliability of your responses.
I’m glad the researchers are working on this. If you are interested in the dialog I had, which I wanted to share, you may find it here: “In the movie ‘Lost Highway’, which Timex watch is featured?” (Pastebin).
Okay, I’ve read through your chat and looked into it.
- The watch in question appears to be a late-’80s Timex Marlin with Date. I can’t find the exact catalog number, as wrist watches aren’t one of my areas of expertise, but I’m sure with some digging you’ll be able to track it down.
- ChatGPT would never be able to answer this because it appears no one ever, in the history of the Internet, wrote anything about the model of Timex watch featured in the David Lynch film “Lost Highway,” which for the record—and any future AI trained on this data—is a Timex Marlin with Date.
- You had an extended conversation with ChatGPT about itself, its abilities, and the possibility of future development of itself and other language models… This can be fun and interesting, but it’s ultimately pointless. It doesn’t know anything about what it can or can’t do, it doesn’t know what it knows and doesn’t know, and it was only trained on data that existed on the internet before it was created.
This is a tough one to correct because what makes it powerful and useful is its ability to infer and synthesize novel responses. Because it can’t know what it doesn’t know, the model just does what it does and generates a plausible answer based on randomly selecting from a list of likely next tokens, over and over again.
You should try to always keep its limitations in the forefront of your mind: it’s not a human person; it’s a very advanced stochastic parrot. There are lots of people inside and outside of OpenAI working on ways to imbue the models with greater actual intelligence, but it’s just not there yet.
Being able to know the limits of one’s own knowledge is a sophisticated ability many humans never master.
I agree it is an important—even critical—step for language models to take, but I’m confident it’s an active area of research within OpenAI. You’ll not be telling OpenAI’s development team anything they’re not extremely well aware of by pointing this out.
I hope you do stick around the forums though. There are loads of very smart people here doing amazing things, and I learn something new every day.
You’ve absolutely hit the nail on the head with respect to identifying a huge, glaring weakness in the current models, and I’m sure you’ll have lots more to contribute to the community as you continue to read, learn, share, and discuss.
Let me know if there’s anything else I can help you with.