Whisper returns abusive words as starred text

Ok, so I'm trying to find abusive/bad words in a text, but Whisper returns all abusive words as starred text, e.g. a***ole. I need the full word, so that I can show it to the user.

Yeah, just ask GPT for a solution in Python that uses NLTK and the find_nearest_word() function.

Presumably the ***'d version is also unique?
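If it usually is unique, one workaround is to exploit the fact that the censored form keeps the word's length and its visible boundary letters: turn the starred token into a pattern where each `*` matches exactly one character, and look it up in a profanity wordlist. A minimal sketch, assuming you have such a wordlist (the `PROFANITY` list here is a placeholder, not a real dataset):

```python
import re

# Placeholder wordlist for illustration; in practice, load a real
# profanity list from a file or package.
PROFANITY = ["asshole", "bastard", "moron"]

def uncensor(starred: str, wordlist=PROFANITY) -> list[str]:
    """Return wordlist entries matching a censored token like 'a***ole'.

    Each '*' is treated as a wildcard for exactly one character, so the
    length and the visible letters of the censored word must match.
    """
    pattern = re.compile(
        "".join("." if c == "*" else re.escape(c) for c in starred.lower())
    )
    return [w for w in wordlist if pattern.fullmatch(w)]

print(uncensor("a***ole"))  # -> ['asshole']
```

If more than one word matches, you would need extra context to disambiguate. Alternatively, with the open-source `whisper` package it may be possible to discourage censoring at decode time (e.g. via `DecodingOptions(suppress_tokens=...)` to suppress the `*` token), though how reliably that works is model-dependent.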