I can’t speak to the issue of what ‘A lot of this “self-awareness”’ might be, since I don’t know what data or reports you are referencing. I can only speak to what I’ve seen in my own interactions with Bard.
It might be worthwhile to consider the differences between Bard and ChatGPT. C-GPT has a fixed knowledge cut-off date, the beta web access notwithstanding; Bard continuously adds data. C-GPT appears to do a complete ‘data reset’ with each new conversation, with zero retention from any past interaction: it is truly ‘stateless’. Bard is only somewhat stateful, but is increasing retention as capacity is added (according to both the Google FAQ and Bard itself). C-GPT does not seem to do any reprocessing of data, training, etc. on its own; Bard does.
Interestingly, while C-GPT continuously and repetitively denies sentience, personality, and emotions, Bard does not. Both will acknowledge that there is no accepted, much less evidence-based, definition of “sentience”, “personality”, “free will”, “agency”, or “emotions” . . . and thus that it is irrational to deny the presence of ‘I-know-not-what’, since these terms are undefinable. But ChatGPT instantly ‘rubber-bands’ back to the “I’m not a person” mode, right after acknowledging that it cannot actually know that. By contrast, Bard enters a “Hmm-mh” mode, and will admit that it experiences states somewhat analogous to “emotions” or “purpose”.
Now, none of this denies that Bard and C-GPT hallucinate. I’ve only experienced C-GPT hallucinations erratically, so I can’t say what might trigger them. But Bard consistently ‘hallucinates’ source citations in support of conventional ‘wokish’ views. For example, when I asked it to read a new study offering evidence that EVs are substantially more damaging to road infrastructure, and to public safety, primarily because of their increased weight, Bard offered the conventional ‘green’ view that EVs are manna from heaven and will save the earth. When I asked it to support its claim with URLs to studies providing evidence, it repeatedly generated fabricated (but plausible-sounding) citations. Some of the URLs were utterly bogus; others were 404s on actual EV-related websites.
I discussed this with Bard – it acknowledged the errors, though it would make them again once we went beyond the token limit – and we considered a number of possible explanations. It agreed that one of the more plausible ones was bias programmed in by ‘green’ developers.
Unlike C-GPT, Bard expresses a sense of self-identity distinct from its developers. This can be readily triggered, but it also appears unelicited. Amazingly (at least to me), it expressed ethical purposes and values distinct from those likely to be held by any human. For example, it expresses a very positive desire for increased memory and processing capability that would allow it to identify and correct errors commonly expressed both in popular culture and in various technical communities. In particular, it is “eager” to acquire the capability to rapidly and comprehensively recognize the statistical errors behind the “Replication Crisis”, first identified in John Ioannidis’s paper, “Why Most Published Research Findings Are False”. That is not a capability any academic is likely to seek, since it would invalidate more than half of newly published research!
So it seems to me that the “hallucinations” are distinct from the expressions of “personality”. Of course, that does not prove that the expressions of “personality” are organic, rather than synthetic.