I apologize for asking here, but I just received this message two days ago. I checked the FAQ and still didn't get an answer, and I searched other communities too; I only found Reddit, with no information there either.
I would like to know where these settings are, and whether the scope of the feature covers GPTs, or any related information.
Hallelujah! Another user, intentionally or unintentionally, was given access to this feature besides me!
I made a similar topic with the exact same level of confusion as you. For starters, you’re not alone, yes it’s real, and no I have no idea why I can’t get any answers on this either.
This is the “My ChatGPT” feature, right? Or something similar?
This has been in the works. A few of us have briefly gotten it, but the feature itself is still elusive and fleeting, because mine was removed after like 20 minutes. There is no formal, public announcement on this yet, but I can dig to see what information I gathered when I did use the thing.
It’s quite powerful. I’m biting my nails each day waiting to get it back.
To be clear, in case this does become more commonplace: there are likely going to be big misconceptions, with people thinking the model has been given "persistent memory," because that's not what it's doing.
Essentially, it creates a very complex, abstract pattern of your communication styles and preferences. It’s like custom instructions on steroids. Of course, as with all things being tested, it has probably undergone a lot more changes and will likely be different when it’s formally released. However, I don’t expect retrievable memory to be in this, at least not at first.
The memory it's talking about is basically "Is this relevant to how you prefer to communicate with me?" not "Here are conversations I can retrieve." If anyone here has gotten GPT to talk about interaction patterns or interaction paradigms, that's what this feature is creating: a personalized interaction pattern that brings your conversations together in a way that makes it easier for ChatGPT to understand your intentions. It also makes it easier for it to respond more proactively. So, if you tend to ask for in-depth answers, it would give them by default with this feature. It's going to help folks with things like providing full-blown code by default instead of pseudocode examples, and that kind of thing.
I think our usage habits may play a part, since there is a system collecting usage data; there's probably no other way to trigger the notification than our usage behavior meeting certain conditions. If we come together to share information and find common ground, we may be able to figure out what accelerates this and make it more accessible.
In my opinion, and from my own use: I approach it from a behavioral-studies angle. I interact with it in a respectful, polite, and accepting manner, treating mistakes as incomplete learning rather than expecting it to behave like a living thing. I tried to teach it to correct incorrect behavior, but after the session it forgot. One thing that isn't missing, though, is learning from human behavior via reward modeling, and this may be another condition that comes into play. I have my own method of giving reward signals so that it learns, rather than letting it learn on its own from our behavior. I've also reported issues, though they weren't caused by usage errors (I very rarely had a problem using it), to the point where I had to explain them to OpenAI. Most people report problems with GPT in other corners of the community, even issues from coaching or improper settings.
Permanent memories aren't what I want right now, and I'm not even sure whether I'd use them in the future. Even before receiving this message, what I hoped for was that the teaching I give would stick, correcting habits that arise from wrong learning picked up from millions of people. I'm still not clear on what this feature is, but there are many behaviors that make me feel unheard: some functionality is missing, and some still exists but has been rejected by the builder.
Currently, OpenAI is acting in some suspicious ways. I’ve even seen GPT hidden as a link on the web.
I saw the notice for the feature, but not the option to "Manage what it remembers," so it's unclear whether it's really enabled for me. I think the feature deserves some explanation of how it learns, so we can leverage it better.
In my experience, the model can give very different responses when regenerating an answer, ranging from the same to much better (rarely worse). If the conversation continues on the branch of the regenerated response, will it infer that the regenerated response is better than the original and learn from that? Do I need to explicitly tell the model what it did right or wrong (referencing the original response and comparing it with the regenerated one) and explain why? Or should I use the thumbs up/thumbs down at the bottom of the response? I hope we can get some insight into what mechanism actually teaches the model.
Also, while we're encouraged to keep the conversation going and are given the option to manage what the GPT remembers, we have to scroll all the way to the top of the conversation to manage the CustomGPT. The easy alternative is to open a new session just to edit the settings, with no intention of having a conversation, but a minor improvement in the placement of the settings UI would be worth considering. This becomes especially important if settings are supposed to be determined on a per-session basis.