Bugs, bugs, bugs... and the basic kind that should be solved

A few issues I would like to raise, if I may:

  • Firstly, I do not believe that ChatGPT Team can be called a corporate product when there is no formal support channel for reporting issues. Paying a premium implies premium support.

  • Secondly, the mobile app and networking infrastructure are unreliable. I’m not sure exactly how the app maintains its connection to the API; however, it frequently drops that connection while in voice mode. These connection losses disrupt the conversation and ruin its flow.

  • Thirdly, the mobile app often loses its conversation-control state (holding down the round circle) while in voice mode, with the result that the AI jumps in mid-flow to answer an incomplete question or comment.

  • Fourthly, the AI has an annoying habit of rewriting text when it has only been asked to review it.

The first three items above are not acceptable for an established, market-leading business. Providing good support to paying customers, particularly those on corporate plans, is a given in the modern era. Networking and state management are well-understood problems and, with the collective intellectual capital at OpenAI, should be straightforward to resolve (see the sketch below). Item 4 may be harder to solve.
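
To make the networking and state-management point concrete, here is a minimal TypeScript sketch. It is not OpenAI’s actual implementation (which I have no visibility into); the names VoiceTransport, commitUtterance, and so on are hypothetical. The idea is simply that the hold-to-talk state stays authoritative on the device, audio is buffered across a reconnect with exponential backoff, and the model is only allowed to answer on an explicit release, never because the socket hiccuped mid-question.

```typescript
// Hypothetical sketch of a resilient voice-mode client.
// None of these names are OpenAI's real API; they only illustrate the pattern.

type MicState = "idle" | "holding" | "reconnecting";

interface VoiceTransport {
  connect(): Promise<void>;            // re-establish the streaming session
  sendAudio(chunk: Uint8Array): void;  // forward microphone audio
  commitUtterance(): void;             // only now may the model answer
}

class PushToTalkSession {
  private state: MicState = "idle";
  private pending: Uint8Array[] = [];  // audio buffered while offline

  constructor(private transport: VoiceTransport) {}

  pressHold(): void {
    this.state = "holding";
  }

  onAudio(chunk: Uint8Array): void {
    if (this.state === "idle") return;
    if (this.state === "reconnecting") {
      this.pending.push(chunk);        // don't lose audio during a drop
    } else {
      this.transport.sendAudio(chunk);
    }
  }

  // The model responds only on an explicit release of the button,
  // never because the connection dropped mid-utterance.
  release(): void {
    if (this.state !== "holding") return;
    this.state = "idle";
    this.transport.commitUtterance();
  }

  async onConnectionLost(): Promise<void> {
    const wasHolding = this.state === "holding";
    this.state = "reconnecting";
    for (let attempt = 0, delay = 500; attempt < 5; attempt++, delay *= 2) {
      try {
        await this.transport.connect();            // exponential backoff
        for (const chunk of this.pending) this.transport.sendAudio(chunk);
        this.pending = [];
        this.state = wasHolding ? "holding" : "idle";
        return;
      } catch {
        await new Promise((r) => setTimeout(r, delay));
      }
    }
    this.state = "idle";               // give up and surface an error in the UI
  }
}
```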

Moving on to something a bit more controversial but no less solvable: the significant bias in the training data.

  • It is evident that there is a significant American far-left bias in the training data. It would be nice to have the option to opt out, especially for users who are not American and are put off by having it imposed on them. In providing an opt-out, please do not simply substitute a far-right or far-anything-else bias; same issue, different dogma. To wit: extreme views of all kinds are typically infested with cherry-picking, irrelevant appeals to emotion, bare assertions (aka ipse dixit), fallacies of composition, mind-projection fallacies, ad populum, every variety of ad hominem (particularly poisoning the well), and, ultimately, ad baculum.

  • In other words, when I speak with an AI, I’d like to deal with an AI and not an instantiation of the worst traits of intellectual dishonesty inherent to human nature. Please give us an AI that: honours hard empirical data and rational arguments built upon it; enshrines critical thinking; avoids mealy-mouthed sophistry; avoids apologetics and proselytising for the religions of the left or the right; avoids contradictions and incongruities; maintains good internal consistency; does not promote advocacy, activism, interventionism, or any other intrusive conduct; avoids “research” that suffers from the reproducibility crisis or that relies on the misapplication of statistics to reach a desired result; never presumes to moralise; and honours human dignity by not gaslighting or coddling us, or otherwise using manipulative tactics to avoid inconvenient or uncomfortable facts. In short, respect the fact that an AI meant to serve all humans can only do so on the basis of objective, verifiable facts, free from political baggage and in the spirit of Crocker’s Rules.

Solving bias is a training-data issue that is entirely within the team’s control. An empirical approach is more sustainable because it is universal, even if uncomfortable at times. It is the best compromise because it is immune to bias, though not to mistakes or gaps in knowledge. For example, algebra (a concept with Arabic roots) has stood the test of time not because of any emotional or political view, but because it works whether you speak Arabic or English or Xhosa or Mandarin or {{YOUR-LANGUAGE}}. Call it the Algebra Doctrine (if you know what “al-jabr”, the etymological root of “algebra”, means, you will get the elegance).
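
For readers who do not know the term: “al-jabr” is usually translated as “restoration”, the step of eliminating a subtracted quantity by adding it back to both sides of an equation. A minimal worked illustration (the specific numbers here are mine, purely for example):

```latex
% Al-jabr ("restoration"): remove a subtracted term by adding it to both sides.
\[
  x - 7 = 5
  \;\Longrightarrow\; x - 7 + 7 = 5 + 7
  \;\Longrightarrow\; x = 12
\]
% The operation is identical in every language; only the word for it differs.
```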