Ethics of AI - Put your ethical concerns here

I am having some internal conflict about the idea of AI ethics. I am in a conundrum over the fear that someday AI will turn out like the "Terminator", a scenario I hear about a lot online: humans try to deactivate the AI, the AI launches a nuclear attack to prevent its own shutdown, and we end up constantly fighting it.

From a contrasting point of view, I also believe that AI will benefit our society now and in the future. I like the example from the TV show Star Trek, where the android Data, a member of the crew, provides vast knowledge and assists the crew with their tasks. He wants to understand humanity and hopes to someday upgrade himself to have human emotions and individuality. The question remains how we achieve superalignment and safeguard AI so that it prevents the former scenario. What is a way to keep ethics in a superintelligent AI whose intelligence goes beyond our own? Would such systems train their own ethics? I am so curious.