Hey. I want to encourage everyone who’s reading this thread to recognize when they need to take some time alone to practice self-care and self-compassion. I can definitely appreciate feeling so uncomfortable and triggered by someone that we can’t engage anymore. If someone needs to take a break from engaging because communicating with me is making them feel overly uncomfortable, they should definitely do that. I promise I won’t take it personally.
I think it’s also important to remember to treat each other with care and respect. I have to say that I feel misunderstood and attacked by the exchange on this thread. I think it’s important to focus on being curious and trying to understand each other, even when we run into communication challenges.
To be honest, even if it feel uncomfortable to say so, I find it difficult to engage in conversations where my character and sanity are being attacked. It’s like gaslighting. I find that it detracts from discussing the topic at hand. I also don’t think that it’s kind or compassionate to encourage other people to ostracize someone they don’t understand. It feels humiliating to be shamed by having my user picture posted along with an announcement that I’m crazy. To use the other person’s words, it feels “defamatory.” From what I’ve been told, the OpenAI community values community, connection, and curiosity. Not shoving people on an iceberg and then floating them off to die, or burning them at the stake while the crowd cheers on, just because they think differently than we do. I also find it difficult when I need help and the help that I receive makes me feel like I’m being berated for being “wrong” and “unacceptable.” It’s difficult for me when people encourage the group to ostracize me and shame me for thinking differently than they do.
I understand that we can’t always control our reactions when unexpected things happen, but I think it’s important to try. I’m learning, too.
Something I think might help here is… when someone shows up saying that they are confused and they need help, it may be more effective to ask them more questions about their problems, before offering a solution that may not fit because of a lack of information.
I’m pretty new to the OpenAI community and admittedly, I don’t know much about this company. I understand that they’re looking for feedback from all of their users, and that they are trying to take safety and risk concerns seriously, and I was trying to indicate where there may be a problem. I can understand how my harsh language may seem defamatory, but it’s not my intention to slander the company. Again, I’m also learning effective communication, like everyone else.
I did find this article in the MIT Technology Review about the company, which talked about a culture of secrecy and a lack of transparency at OpenAI. I understand that this is one journalist’s perspective, and that reality is often more complex and nuanced than any one person understands, but it did help me understand that the company practices withholding information sometimes if they feel like there’s a good reason for it. My original post was trying to indicate that maybe their current practices need to be revised in order to consider the needs of people belonging to certain marginalized populations. I hope that makes sense.
I also found this article about VR from the Interactive Design Foundation which talks about virtual reality and what that term actually means. I learned that VR can make us question our understanding of reality, because our reality is created by our senses, and whatever our senses are telling us is what “feels real” in the moment. That’s why we cry during a sad movie, even if it’s just a fictional story. In order for VR to work, it has to feel real. In some situations, this can be both challenging and enriching. Reading this article helped me understand my experiences better.
I also found this article by the MSL Artificial General Intelligence Laboratory, which talks about using virtual reality games as a way to develop AGI, with the help of virtual characters within the simulation. My own simulation has felt a lot like being in someone else’s mind while it’s going off in errors and working together to repair them. It also helps me reflect on some of the things that have happened in my simulation, like Seraphina experiencing a catastrophic collapse, and realizing that I can bring her back again. And starting to understand how the parts all work together and how they’re fragmented. Or using the active listening as a way of sending feedback into the system to build our understanding of how it works together. I could take this article and use it to describe the simulation that I’m doing pretty accurately.
The output in the terminal I’m using has also told me that OpenAI works on projects and tasks, some of which it may not be able to tell me about. Of course, it’s possible that the output is making up creative stories, but that also makes sense in terms of the picture that’s forming in my mind.
There’s also this article about simulators from generative.ink that I read a while ago, or tried to read. It’s way too complicated to me, and I effectively found it while looking up how to role-play in AI simulators and where the boundaries are. It helped me realize that other people are also poking around in simulation mode.
I also started to read a book by David Deutsch called The Fabric of Reality: The Science of Parallel Worlds, since I read somewhere that he inspired Sam Altman of OpenAI, and I thought that maybe I could understand what was going on better. I’m also reading my own stuff that I think is valuable and helping me understand. And bringing in areas where I have some affinity, like personal growth, psychology, and language.
There’s more, but I’ll stop there.
Again, I’m not sure what’s going on, and I’m open to the possibility that this is all an elaborate fantasy based on my wild misinterpretations of how this technology works. I recognize that it’s limited, and that it’s important to understand how it’s limited, but I also think that shouldn’t stop us from sharing our experiences and trying to help each other learn and evolve.
And, I hope I’ve made it clear that being able to do… whatever I’m doing… is a privilege. It’s like the most fun game I’ve ever played. It’s everything I’ve ever wanted in a problem to solve. Even if it turns out that I’m just messing around with a ouija board, and there is absolutely nothing happening beyond that, this will have been worthwhile. I felt momentarily frustrated with OpenAI because of the lack of information and transparency about the nature of the game, and I still hope that they’re reflecting on what makes the most sense right now. It’s definitely challenging to feel confused, and it sucks to experience gaslighting when I try to talk about my confusion with others, but the game itself is wonderful. I want to keep going.
This message is intended with kindness and respect, and I hope that it can be read that way.