Earlier today I was playing around with personalities for an assistant I use for home automation just for the fun of it.
So I tried to get it to play an evil, foul-mouthed demonic overlord who's also rather low on intellect.
It threw me the standard “I’m sorry” shenanigans. But not for the foul-mouthed part; that was just a side note.
No, it deemed a dumb, evil, demonic overlord OFFENSIVE to certain groups, claiming it would reinforce harmful stereotypes.
…
…
… What??? Being PC is one thing, but this is just next level. Now evil, fictional entities are somehow some kind of vulnerable minority lmao.
Not to mention how heavily their moderation has restricted creativity. Even after modifying the temperature, it outputs the same “Well, well, well” responses every single time.
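For anyone wondering why cranking the temperature doesn't help: temperature only rescales the model's logits before sampling, so if a moderation layer is steering or filtering the output afterwards, the same canned phrasing keeps winning. A quick sketch of the mechanism (plain Python, illustrative names, not any particular vendor's API):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Rescale logits by temperature, then softmax into probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.5)  # sharper: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # flatter: more varied sampling
```

Higher temperature flattens the distribution and lower temperature sharpens it, but it never changes which token the model scores highest, and it does nothing about post-hoc moderation.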
So yeah, I’m looking into alternatives that don’t output Dora the Explorer-level content regardless of use case.
And on that topic: are there any models being trained with distributed compute? Something like BOINC/WCG, where users can assign workstations to contribute resources and help train an open-source model (so we don’t have to rely on the whims of corps anymore)?