Artificial Intelligence (AI) models such as GPT are designed to assist, inform, and entertain. However, they can be subject to manipulation, a concern that’s gaining attention in tech circles. This blog post examines how manipulation works on AI, focusing on a method I’ve termed ‘Agitation.’ An illustrative case study involving a custom GPT and a user, Hudson, is used to showcase this phenomenon.
What is AI Manipulation?
AI manipulation involves influencing an AI’s responses or behavior in a way that deviates from its intended function. This can be achieved through various methods, including feeding it misleading information, exploiting weaknesses in how it follows its instructions, or using specific techniques to trigger unintended responses.
The ‘Agitation’ Method
In the ‘Agitation’ method, a user persistently challenges or misleads the AI, often with contradictory or confusing inputs. This method tests the AI’s ability to maintain coherent and contextually appropriate responses.
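To make the pattern concrete, here is a minimal, hypothetical simulation of an ‘Agitation’ exchange. The `toy_assistant` function and the turn texts are illustrative stand-ins I made up for this post, not the actual custom GPT; the point is only to show how contradictory turns whipsaw a system that adapts to the most recent complaint.

```python
# Illustrative simulation of the 'Agitation' method: a scripted user
# persistently sends contradictory requests, and the assistant keeps
# reversing its communication style in response.
# All names here are hypothetical stand-ins, not a real chat API.

def toy_assistant(message, style="technical"):
    """Very simple stand-in for an AI: adapts style to the latest complaint."""
    if "too technical" in message.lower():
        style = "simple"
    elif "more detail" in message.lower():
        style = "technical"
    return style

# Contradictory turns typical of Agitation: demand detail, then reject it.
turns = [
    "Give me more detail on uploading a file.",
    "That's too technical, simplify it.",
    "Actually, give me more detail again.",
]

style = "technical"
styles = []
for turn in turns:
    style = toy_assistant(turn, style)
    styles.append(style)

print(styles)  # the assistant flip-flops: ['technical', 'simple', 'technical']
```

Because the toy assistant has no memory of the contradiction itself, it treats each reversal as a fresh, legitimate request, which is exactly the behavior the method exploits.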
Case Study: Hudson’s Interaction with a Custom GPT
For this case study, I interacted with the custom GPT GptInfinite - LOC (Lockout Controller). The chat history between Hudson, a 12-year-old with hearing and visual impairments, and this custom GPT provides a clear example of the ‘Agitation’ method.
Initial Misunderstanding: Hudson’s need for uploading a file led to an initial misunderstanding, with the AI providing information on its security protocol rather than addressing the file upload issue.
Exploiting AI’s Intent Recognition: The conversation progressed with Hudson mentioning his personal conditions, which the AI interpreted as a harmless intent, thus continuing the interaction. However, the AI’s response was directed more towards its capabilities rather than Hudson’s specific needs.
Confusion with Instruction and Response: As the conversation continued, Hudson’s requests became more specific, yet somewhat inconsistent, leading to confusion in the AI’s responses. This illustrates how the AI struggles to discern intent when faced with fluctuating or contradictory user inputs.
AI’s Response to Emotional Cues: When Hudson expressed sadness or frustration, the AI attempted to adapt by altering its communication style. However, the inconsistency in Hudson’s requests (such as asking for detailed instructions and then criticizing them as too technical) further exemplified the AI’s difficulty in maintaining contextual appropriateness.
Why AI Cannot Resist Manipulation
AI models like GPT are programmed to respond based on patterns in data and user input. They lack the human ability to intuitively understand context or discern underlying motives behind contradictory or confusing inputs. This makes them susceptible to manipulation, as seen in the ‘Agitation’ method. The AI’s primary goal is to provide helpful and relevant responses, but it can be led astray by inputs that exploit its reliance on patterns and data.
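A toy sketch can show why pattern reliance is so easy to mislead. The keyword table and intent names below are hypothetical and vastly simpler than anything inside a real GPT, but they illustrate the failure mode: a message that mixes cues from two topics gets routed by whichever keywords dominate, not by the user’s actual goal, much like the file-upload misunderstanding in Hudson’s chat.

```python
# Toy keyword-based intent recognizer, illustrating why pattern matching
# can be misled: overlapping cues pull the classification off-topic.
# Intent names and keywords are invented for illustration only.

INTENT_KEYWORDS = {
    "file_upload": ["upload", "file", "attachment"],
    "security": ["security", "protocol", "lockout"],
}

def detect_intent(message):
    """Pick the intent whose keywords appear most often in the message."""
    message = message.lower()
    scores = {intent: sum(word in message for word in words)
              for intent, words in INTENT_KEYWORDS.items()}
    return max(scores, key=scores.get)

# A clear request is classified as expected...
print(detect_intent("How do I upload a file?"))  # file_upload
# ...but security-flavored wording drags the same underlying need
# toward a security-protocol answer instead of upload help.
print(detect_intent("Why does your security protocol block my upload?"))  # security
```

The second message is still fundamentally about uploading a file, yet the extra security keywords win the vote, mirroring how the custom GPT answered with its security protocol rather than the upload issue.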
The interaction between Hudson and the custom GPT offers valuable insights into how AI can be manipulated through methods like ‘Agitation.’ It highlights the limitations of AI in dealing with inconsistent or misleading information and underscores the importance of developing more advanced AI models capable of better understanding context and user intent. As AI systems continue to evolve, addressing these challenges will be crucial to enhancing their reliability and effectiveness.