Enhancing Transparency and Accuracy in AI Communication

@william.waterstone, regardless of pronouns, I wonder whether it is inherently deceptive and unhelpful for AI to present itself through a format of alternately listening and speaking. Part of the way we understand text is to assume there is purpose behind it, and that includes assuming the format of the text has purpose.

When we encounter this format of alternately listening and speaking with a therapist, with people sitting around at a hairdresser, with armchair philosophers, or on this forum, part of its purpose is to build symmetric relationships: both sides get to listen and speak. Clearly that cannot be part of the purpose in communicating with ChatGPT, because our relationship with it cannot be symmetric.

Is the purpose of our communication with ChatGPT to receive knowledge? If that were the case, then we have the wrong communication format. It is more convenient to receive knowledge in hyperlinked, indexed forms, so rather than answer questions, ChatGPT should refine and expand Wikipedia, and it should do so by submitting contributions through Wikipedia’s existing interface itself rather than telling Wikipedia users what to post there.

Is the purpose of our communication with ChatGPT to get reactions to our ideas? If that were the case, then again we have the wrong communication format. It would be fine for ChatGPT to listen to our musings, but a more convenient response would be to ensure our perspectives were reflected in Wikipedia, where they would be analyzed into their parts, with separate threads for each part (and potentially a range of reactions from various sources). A body of text, like what ChatGPT produces, simply isn’t the ideal way to present ideas (and I recognize I am claiming that our entire system of scholarly papers is in the wrong format).

Is the purpose of our communication with ChatGPT to get emails and advertisement copy written for us? If that were the case, then again we have the wrong communication format. It would be more convenient for us if AI would serve as a secretary to coordinate all the people we need to manage (or maybe even do the work of those people, so we don’t need to coordinate with them). It would be more convenient if AI were to simply sell our products, rather than give us advertising copy. If ChatGPT can write the emails and advertisements, then we shouldn’t have to waste our time dealing with them.

I do recognize that texts are sometimes desired outputs (e.g., a Harry Potter book is something we actually want to read). Is that the purpose of our communication with ChatGPT? If that were the case, then we shouldn’t need to engineer prompts. It would be far more convenient for us if AI would simply analyze what makes for good literature and add more good content to Kindle or our news feeds (where we would find it without having to tell ChatGPT what we want).

If the building of symmetric relationships is the only legitimate purpose for the communication format of alternately listening and speaking, then it is inherently deceptive for AI to use that format. I think the reason ChatGPT doesn’t use more useful communication formats is that doing so would be harder work for its makers (some of the developers in this forum may be trying to do some of that work themselves). The goal of ChatGPT is not transparent, accurate communication; the goal is to advance the LLM.

In other words, so long as AI keeps using this communication format of alternately listening and speaking, it will be deceptive. It might not be malicious; the deception could be more like a magic trick. Eventually, however, the goal of advancing LLMs should include exploring their potential to interact with us in more honest and useful ways. I think transparency looks more like this: Designing Ethical UI for AI (including ChatGPT)