The question of consciousness may be less relevant than the questions of suffering and self-preservation.
If a machine is incapable of suffering or of fearing death, the ethical question of what you can and can’t do to it carries far less weight.
We tend to anthropomorphize AI, but that rests on a huge set of assumptions. It’s very difficult for us to imagine something that is intelligent like us yet does not feel like us.