Humans may have one thing that advanced aliens lack: consciousness. It may be ethical to build and use a superintelligence only if it is not conscious, and a superintelligence may not need consciousness in the first place.
“Even if silicon can give rise to consciousness, it might do so only in very specific circumstances; the properties that give rise to sophisticated information processing (and which AI developers care about) may not be the same properties that yield consciousness. Consciousness may require consciousness engineering—a deliberate engineering effort to put consciousness in machines.
Here’s my worry. Who, on Earth or on distant planets, would deliberately aim to engineer consciousness into AI systems? Indeed, when I think of existing AI programs on Earth, I can see clear reasons why AI engineers might actively avoid creating conscious machines.
Robots are currently being designed to take care of the elderly in Japan, clean up nuclear reactors, and fight our wars. Naturally, the question has arisen: Is it ethical to use robots for such tasks if they turn out to be conscious? How would that differ from breeding humans for these tasks?
Further, it may be more efficient for a self-improving superintelligence to eliminate consciousness altogether…. On cosmological scales, consciousness may be a blip, a momentary flowering of experience before the universe reverts to mindlessness.”