Designing Ethical UI for AI (including ChatGPT)

I think we should have this conversation because ethics and UI are important. What’s the difference between a good relationship with AI and feeling enslaved to “the machine”, even when the AI is accurate, knowledgeable, polite, and safe? The difference is in the UI.

If you have better ideas about how this conversation should happen, please share them. I think the conversation should be open to all concerned, and my approach right now is to ask for comments on a proposed design (with mocked-up user stories): NUI Design - Google Docs

Microsoft has a whole list of Human-AI Experience (HAX) design fundamentals.

You might find some concepts from it useful. Our team certainly has.


Ugh, 80 pages of fabricated jargon, buzzwords, and meandering randomness, as incorrigible as the first post, which could be summarized as “add navigation frame”.


Ironically, the naming of new concepts is one of the issues this conversation includes. A society that never introduced new concepts would be unable to advance, and generative AI could certainly produce new concepts (with new names), but Part 3 of this document explores an example that gives us reason to suspect this task might be especially problematic for AI (requiring special help from human beings). I doubt one can plan well for AGI without addressing this UI issue.

Yes, “add navigation frame” could be a pretty good title for this conversation. It is a conversation about why there should be such frames, how many there should be, how they should work, who should control them, and what it would take to get there. For frames to span all applications would require new technology standards, so the longer we delay working out the details, the further we paint society into a corner.

Thank you, @codie. I do appreciate Microsoft’s efforts here (and the examples in the NUI Design already demonstrate most of these best practices, although issues of social norms and biases are outsourced to a navigator market so they can evolve over time). The conversation I am proposing goes further, addressing not only these issues but also democratization, social polarization, individual dignity, and the extension of transparency to significantly greater limits.


I want to emphasize my gripe with this part of Microsoft’s HAX Toolkit: https://www.microsoft.com/en-us/haxtoolkit/guideline/match-relevant-social-norms/

My problem with “Deliver the AI system’s services, including its behaviors and presentation, in a way that follows social and cultural norms” is that the history of social and cultural norms implies that norms have never been perfectly right before, so current norms are probably imperfect too. In other words, for an application provider to build their code around current norms is to build something shortsighted. Furthermore, because there is disagreement about norms, building code around norms forces coders to pick sides (thus reinforcing social polarization).

The solution I propose is for application providers to separate themselves from the navigator layer of the UI, so that norms can evolve in the navigator marketplace without dragging the rest of the technology world into that mess. Microsoft’s HAX Toolkit does not address conflict resolution, and that is a critical omission: discussion of ethics is always about points of disagreement. If we do not disentangle (most of) technology from these conflicts, then technology and conflict will both become much more dangerous. And there is no reason why big tech should have to “play God” by coding decisions about which norms to follow.
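
To make the proposed separation concrete, here is a minimal TypeScript sketch of the idea (every interface and function name below is made up for illustration; none of this comes from the NUI Design document or the HAX Toolkit). The application provider emits content with descriptive metadata only, and a navigator chosen by the user from the navigator marketplace decides how that content is framed or filtered:

```typescript
// Hypothetical sketch of the application/navigator split (illustrative names only).

// What the application provider produces: content plus descriptive metadata,
// with no norm judgments baked in.
interface ContentItem {
  id: string;
  body: string;
  topics: string[]; // descriptive tags, not verdicts
}

// What a navigator from the marketplace plugs in: the user's chosen norm policy.
interface NormNavigator {
  name: string;
  present(item: ContentItem): { show: boolean; framing?: string };
}

// One example navigator; a different user might install a very different one.
const cautiousNavigator: NormNavigator = {
  name: "cautious-example",
  present(item) {
    return item.topics.includes("graphic-violence")
      ? { show: true, framing: "Content warning: graphic violence" }
      : { show: true };
  },
};

// The application renders through whichever navigator the user selected,
// so the provider never has to pick a side in disagreements about norms.
function render(items: ContentItem[], nav: NormNavigator): string[] {
  return items
    .map((item) => ({ item, decision: nav.present(item) }))
    .filter(({ decision }) => decision.show)
    .map(({ item, decision }) =>
      decision.framing ? `${decision.framing}\n${item.body}` : item.body
    );
}

// Same items, different navigators, different presentations.
console.log(render(
  [{ id: "1", body: "An eyewitness account of the incident...", topics: ["graphic-violence"] }],
  cautiousNavigator
));
```

The point of the split is that the provider ships one codebase while every norm judgment lives in a component the user can swap out.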


Just as a side note, that sort of conflict resolution, for competent teams at least, happens at the agile/scrum meeting level or during retrospectives. Maybe not the best place, but it is the place.

@codie, the kind of conflict resolution I have in mind is resolution of disagreements about social norms (for example, about what should, or should not, be censored). If different application providers resolve such conflicts differently (e.g. via scrum meetings or ethics advisory boards), then users would choose to use only the applications that align with their own side of the disagreement, and that would polarize both users and the tech markets (which would feel economic pressures analogous to those already forcing media providers into ideological niches).

In contrast, the NUI design includes mechanisms that permit an application provider to serve the entire range of users, and to facilitate greater objectivity, practicality, and understandability in communications between those users. Because such mechanisms have never been deployed before, there is currently no expectation that they should be deployed (and that makes it more difficult for you to understand what I am talking about), but seatbelts were once in that same position: never deployed, and not expected.

Media providers tend to agree that the media market is polarized, but each seems to think they are the ones publishing the actual objective truth (i.e. that the providers who publish what a different group of users wants to read are the ones who sold out). Tech can and should avoid falling into that same trap.