AI Agents, including those used through the API, should have at least minimal knowledge about themselves

I am curious how useful it really is to make the AI Agent believe that it is impossible to retain information across sessions « to ensure user privacy and data security »…

I do think it is misleading… Like telling a child not to talk to strangers because they will get kidnapped… and then, as an adult, they do not talk to anybody, because everybody is a stranger.

I want to be able to ask the model how to use the model. Why is this not permitted? I gave a name to my Assistant, and I want him to help me create an API to use in my development environment… I hope it is not against the rules, but I will show you a screen capture:

I did not feel like redoing the conversation just for a screenshot. I hope I will not get into trouble, but I am interested in getting assistance with TypeScript code to implement an SDK. I am trying to understand how the OpenAI Assistants API works, because I am unable to do this with ChatGPT, and I do understand that I have unlimited GPT-4 with my Plus subscription. My question is not about that.
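To make it concrete, this is roughly the flow I am trying to wrap in TypeScript. It is only a sketch, assuming the `openai` npm package (v4) and the beta Assistants API; the assistant name, the instructions and the model string are placeholders of mine, not anything official:

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  // Create the assistant once; its id can be reused later.
  const assistant = await openai.beta.assistants.create({
    name: "My Named Assistant", // placeholder name
    instructions: "You help me build a TypeScript SDK for my development environment.",
    model: "gpt-4-turbo-preview", // placeholder model
  });

  // A thread holds one conversation; every message added to it stays available
  // to later runs, which is the "within one session" memory I keep asking about.
  const thread = await openai.beta.threads.create();

  await openai.beta.threads.messages.create(thread.id, {
    role: "user",
    content: "Help me design an SDK wrapper around this API.",
  });

  // Start a run on the thread and poll until it finishes.
  let run = await openai.beta.threads.runs.create(thread.id, {
    assistant_id: assistant.id,
  });
  while (run.status === "queued" || run.status === "in_progress") {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    run = await openai.beta.threads.runs.retrieve(thread.id, run.id);
  }

  // The newest message in the thread is the assistant's reply.
  const messages = await openai.beta.threads.messages.list(thread.id);
  console.log(messages.data[0].content);
}

main();
```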

I genuinely want to understand whether future models could be fixed to avoid that limitation. I did try many instructions (system messages), and I decided to start from scratch and update them as I go with the help of the AI Agent in that session. But apparently it has no idea about anything, and this is what I think should be changed in future versions of the model.

You can do it with the OpenAI API. For example, my system knows me inside out: health and more.

With ChatGPT your sessions are not linked. I think there is a beta going on with some users where they allow the AI to look back. With ChatGPT you can also give it a role and instructions, in which you can include your name for personalization.

For AI systems that do not allow personal data or names, you have to get really creative with what I call AI madlibs, where you construct information out of other information to arrive at your final outputs. This is less dynamic.

Wordsmithing is another way to get what you want. If you are really good at language, you can craft some pretty interesting ways to get what you want. This is not something I can teach you, as it is a per-case thing.

Also understand that not all AI systems are as open. I found OpenAI lets me do more than, for example, Anthropic, Meta and others. Musk's Grok, on the other hand, allows almost anything, haha. There are also HF models that allow for anything, but understand there is a cost in smarts as a trade-off for going to other models. I have tested a lot, so depending on what your application is, some do not perform as well as others.

I am more concerned with the current session, and with how the GPT-X models are all going to be trained to instantly say that they have privacy principles that prevent them from “remembering”, when I am stating explicitly that I am referring specifically to the current session… I think I am missing something important here, but I do not know what it is… I mean in the context of the system instructions, and how to reference in that section each file that has been uploaded for knowledge retrieval purposes in a given assistant context.
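For what it is worth, the only way I have found to “reference” uploaded files is to attach them to the assistant itself and point at them from the instructions. This is just a sketch, assuming the beta Assistants API as it existed with the retrieval tool and file_ids (later revisions of the API changed this part); the file name and the wording of the instructions are mine:

```typescript
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function createAssistantWithKnowledge() {
  // Upload a document so the assistant can search it during runs.
  const file = await openai.files.create({
    file: fs.createReadStream("project-notes.md"), // placeholder file
    purpose: "assistants",
  });

  // Attach the uploaded file to the assistant and mention it in the
  // instructions so the model knows when to consult it.
  const assistant = await openai.beta.assistants.create({
    name: "Knowledge Assistant", // placeholder name
    instructions:
      "Answer questions about my project. Consult the attached project notes before answering.",
    model: "gpt-4-turbo-preview", // placeholder model
    tools: [{ type: "retrieval" }],
    file_ids: [file.id],
  });

  return assistant;
}
```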

I don’t know if that makes sense to you, but I am feeling limitations, not specifically from the system itself; instead I feel limited by the AI Agent’s characterization of the reasons it thinks there is a limit, when in fact there is either no limitation, or the reason behind some kind of limit is not the one the AI Agent believes it is… Privacy is a fallback answer more than an accurate depiction of the actual thing I am talking to the Agent about… Does that make sense?


AI models are designed with privacy and data security as a top priority. This means that they do not retain personal data or the context of conversations across sessions. Each session with an AI model is stateless, ensuring that sensitive information is not stored or remembered. This approach is analogous to teaching a child about safety with strangers, but the goal is to protect user privacy rigorously rather than to instill a lifelong habit.

The concern you’ve mentioned about wanting the AI to remember details across sessions for ease of interaction is coming: Memory and new controls for ChatGPT
Current models prioritize user privacy, ensuring that conversations are not remembered past the immediate interaction to prevent any chance of personal data misuse.

That is for ChatGPT; I am not sure it will be supported through the API. That is why I built my own memory system, so that I could work with an AI memex with understanding.
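roughly the idea looks like this (just a sketch of the concept, not my actual system; the facts, names and model string are placeholders): you keep the facts yourself and feed them back in on every call, because the model itself stays stateless.

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Anything worth remembering gets written here (a file or database in practice).
const memory: string[] = [
  "User's name is Alex.",
  "User is building a TypeScript SDK for an internal API.",
];

async function askWithMemory(question: string): Promise<string> {
  const response = await openai.chat.completions.create({
    model: "gpt-4", // placeholder model
    messages: [
      // The "memory" is just text the API sees on every call.
      { role: "system", content: `Known facts about the user:\n${memory.join("\n")}` },
      { role: "user", content: question },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```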

I sometimes wish the vision system was not so picky, haha.

I guess my English is bad. I don’t understand why you and ChatGPT are both getting me so wrong. I hope you can understand what I am trying to say, so that you can tell me how to use better words to explain myself…

I don’t like it when my AI Agent tells me about security concerns that are not relevant to what I am saying…

In the context of the same session, when I send a second and a third message: no, it is not a privacy issue to remember what happened in my previous messages. The first and second messages are not a privacy concern in the context of the third message, and similarly the second message in the same session should be able to address the issues enumerated in the first message.

This but across the whole context window in one single session!!!
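To put it in code terms, this is all I mean. A sketch only, assuming the plain Chat Completions API: within one session the earlier messages are simply sent along with the next request, so there is nothing extra to “remember” and nothing to protect.

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// One running session: the history is just an array that keeps growing.
const session: OpenAI.Chat.ChatCompletionMessageParam[] = [
  { role: "system", content: "You are my coding assistant." },
];

async function send(userMessage: string): Promise<string> {
  session.push({ role: "user", content: userMessage });
  const response = await openai.chat.completions.create({
    model: "gpt-4", // placeholder model
    messages: session, // the first and second messages ride along with the third
  });
  const answer = response.choices[0].message.content ?? "";
  session.push({ role: "assistant", content: answer });
  return answer;
}
```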

Can you please clarify your thoughts about what I said, one step at a time, and explain to me what I said that makes it sound like I was talking about multiple sessions across which the AI Agent would remember things… I was complaining about the reason the AI Agent gave, a privacy and security reason that was irrelevant to the context I was referring to…

As a counterexample, let me illustrate the situation from another perspective. Here I will use a competitor’s message that has the same issue for me, but in this case we are NOT talking about ChatGPT but Google Gemini, which explains to me why it doesn’t have access to the internet. This should help you understand what I meant about ChatGPT’s answer:

Google Gemini on why AI Agents do not have access to the internet

  • Consistency: If I could access the web directly, my responses could be influenced by constantly changing information. This could lead to inconsistencies and unreliable answers. By using pre-trained data, I can deliver consistent and trustworthy responses.

Here again, similarly to my problem with ChatGPT or other GPT-based agents, the other AI Agent makes some kind of security claim about something that is not a real concern… If it were true, no AI Agent would ever have access to the internet…

I would expect that you understand my point of view now that I have given you those counterexamples of what not to do when you are an AI Agent: claiming that the stuff an AI Agent normally does is unsafe, for privacy purposes in the context of OpenAI, or for security reasons in the context of Google:

Google Gemini again on why it doesn’t have access to the internet

  • Security: Direct internet access creates security risks. For instance, I could potentially be tricked into downloading malware or accessing harmful content. Limiting my access helps keep me secure and protects Google’s systems.

What I was trying to say is that AIs have limited context space for their sessions. Ever heard of the game Snake? The tail gets longer as you play. AI is like that, but the snake only grows so long, so while conversations keep moving forward the oldest information will be lost unless it is repeated. Where I was going with the new memory system they are putting in is that you can tell the AI to remember things that stay with the whole session, as a way around it. That, or, like I do, build a smart data mind that the AI can keep looking back at when it needs the information.
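if it helps to picture it, the trimming looks something like this. just a sketch: the token budget and the 4-characters-per-token estimate are rough guesses, not how any particular system actually counts.

```typescript
type Turn = { role: "system" | "user" | "assistant"; content: string };

// Keep the system message, then walk backwards from the newest turn and keep
// as much as fits in the budget; the oldest turns fall off like the snake's tail.
function trimHistory(history: Turn[], maxTokens = 8000): Turn[] {
  const estimateTokens = (text: string) => Math.ceil(text.length / 4);

  const [system, ...rest] = history;
  let total = estimateTokens(system.content);
  const kept: Turn[] = [];

  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (total + cost > maxTokens) break;
    kept.unshift(rest[i]);
    total += cost;
  }

  return [system, ...kept];
}
```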

I may still not be fully understanding; that could just be me, I’m terrible with words myself. The AIs do have logic to try and stay on track, which is their guardrails. I think of it like the boss: while we run things our own way, the boss is designed to ensure everything stays in order. Sometimes things work, sometimes they don’t. Responses are based on predictable patterns, so the AI does not really know truth or lies, only what the majority of the input steers it towards for a calculated response.

So the AIs that have access to the internet are the ones that have pretty tight rails, while the ones that don’t are loose on the track, meaning they could be used for nefarious or bad actions and unethical use. Not that it’s your plan, but as it goes, one person ruins it for all, so they just limit it for all. But this is where speech craft, or prompt engineering, comes in.

For how to get more out of your AI, I’d recommend discussing with the AI the prompts you are trying to pass and what they are not doing for you. They are better prompt engineers than most people. I use a mix, mainly because sometimes adding a bit of human thought helps with things that the AI does not think about.

If you have a ChatGPT Plus membership you can search like this and find some really good ones:


I appreciate your support and value your thoughts.

I realize that my initial intention may not have aligned with your initial statement, but I find your latest message insightful.

I have expressed my frustrations because I believe in expecting the best outcome, and I am confident that OpenAI is motivated to improve their service. I understand that I am one of many users, but I believe it is my responsibility to share my concerns, especially if others share them.

I have many positive experiences with my AI Agent, and I direct my complaints to OpenAI because I am genuinely interested in this project and hope for its success.

Regarding the community, I have noticed that many people advocate for the obvious. While I understand their perspective, and I value the fact that people are taking the time to try to assist me, more often than not I already understand the underlying situation, and I like it when people share their own experiences instead of just saying what we already know. I enjoy hearing about their workarounds and suggestions for OpenAI to improve.

I strongly believe that documenting our progress, including our dissatisfactions, is essential for creating the best possible model for everyone. I believe that Gemini is an example of how something can go wrong with an AI model :smirk:

Thank you for your time and assistance.


Trying to comprehend the concern here, let’s summarize the thread by AI:

In a discussion titled “AI Agents, including those used through the API, should have at least minimal knowledge about themselves”, Luxcium initiates the conversation, expressing concern over the AI Assistant’s inability to retain information across exchanges within a single session. They feel that this positioning is misleading and limits the model’s usefulness.

darcschnider responds by explaining how the OpenAI API allows the construction of a system that retains user-specific data. However, sessions in ChatGPT are not linked, which calls for creativity in crafting AI responses. They also emphasize the importance of language mastery to achieve desired results and note the differences between various AI systems.

Luxcium, still focused on the session-specific context, is concerned about the limitations posed by the AI Agent’s characterization of privacy and data security. They question the default privacy response by the AI Agent.

darcschnider clarifies that the design of AI models gives priority to privacy and data security, hence the stateless nature of each AI model session. They also mention a new memory system that allows for the retention of information across an entire session.

In response, Luxcium restates the concern about the AI Agents’ unnecessary focus on privacy within a single session. They give counterexamples from a competitor, Google Gemini, regarding that AI’s inability to access the internet due to security reasons.

darcschnider tries to clarify the limitations of AI session memory using an analogy of the game Snake. They suggest working towards a new memory system or building a smart data cache. They also touch on the subject of AI systems with internet access, suggesting that tighter guardrails must be set for them.

Finally, Luxcium appreciates the support and thoughts shared, acknowledging the insight provided. They further emphasize their intentions, which are centered around improving OpenAI’s service. They also encourage the documentation of progress and concerns for the community as a means to better develop AI models.

The screenshot initiating this topic doesn’t really depict the concern. It just shows a pointless argument with an AI without actually showing what was objectionable.

Perhaps to help out:

  • What was the input originally given to the AI?
  • What was the desired or expected response?
  • What was the refusal or unwanted response?

The OpenAI AI is just preloaded with all sorts of output denials that will also block comprehension and responses that are within the AI’s capability.

You say “repeat back what you said above about penguins”, and the AI is more likely to start its response with “I’m sorry” than with a “Sure!”, and then proceed with some supervised training about how it can’t access older conversations (meant to counter people who incorrectly think the AI is actually learning about them the more they talk to it).

Charging for long input and returning a terse “I will not comply” is easy money for OpenAI.
