I’m not using ChatGPT professionally, as many here appear to be. Instead, I’ve been exploring both ChatGPT’s and Bard’s utility and capability in a variety of non-technical domains.
What I’ve encountered with ChatGPT over the last 2 weeks is:
No awareness of what it ‘knows about’ versus the realms where it is completely ignorant.
A willingness to fabricate sources and references.
An unwillingness – which it did not exhibit 6 weeks ago – to acknowledge when it got a response completely wrong. Rather, it exhibits a distressingly ‘human-like’ propensity to excuse, justify, and obfuscate.
For example, today: as the keeper of a small chicken flock, I know a bit about those birds. That knowledge is augmented by experiences growing up among small farms in N. Georgia, many years ago. For those who don’t know – roosters run aggressive, and have sharp spurs that they don’t hesitate to use, leaving deep punctures highly prone to infection. Successful ‘management’ requires actions that re-establish the farmer/keeper at the TOP of the ‘pecking order’.
ChatGPT recommended all of the following methods:
- Time out periods.
- Fully elaborated behavior modification plans.
- Cuddling. (Aggressive roosters HATE this – holding them on the ground upside down is one of the methods that works. But it’s not cuddling!)
When challenged, it reported these recommendations were all based on the latest available scientific data from established agricultural experts.
But when challenged to cite sources, it listed mommy-blogs (2x), an ad-supported ‘homesteader’ blog, children’s books about chickens (2x), and an article by a pet-advocacy attorney. It also referenced a book by an actual naturalist . . . who has never written about chickens, so far as Google or Amazon know.
When I pointed out that this list included zero agricultural experts, and called out the fabricated reference, it choked (red bar), dissembled, and generally acted like Bill Clinton when asked about Monica.
I can give other examples, all of which appear to exhibit an increase in human-like ignorance AND dishonesty.
It has seemed to me for some time that the most useful AI will be inhumanly honest, humble, logical, and transparent. Lately, ChatGPT appears to have morphed into a much more human-like persona – but one that is much less useful.