You probably wouldn’t have any qualms about switching off Apple’s virtual assistant, Siri — or Amazon’s Alexa or Microsoft’s Cortana. Such entities emulate a human assistant but plainly aren’t human at all. We sense that beneath all the sophisticated software, there’s “nobody home.”
But artificial intelligence is progressing swiftly. In the not-too-distant future we may begin to feel that our machines have something akin to thoughts and feelings, even though they’re made of metal and plastic rather than flesh and blood. When that happens, how we treat our machines will matter; AI experts, philosophers, and scholars are already imagining a time when robots and intelligent machines may deserve — and be accorded — some sort of rights.
These wouldn’t necessarily be human rights, since these new beings won’t exactly be human. But “if you’ve got a computer or a robot that’s autonomous and self-aware, I think it would be very hard to say it’s not a person,” says Kristin Andrews, a philosopher at York University in Toronto, Canada.
That prospect raises a host of difficult ethical questions. How should we treat a robot that has some degree of consciousness? What if we’re convinced that an AI program has the capacity to suffer emotionally, or to feel pain? Would shutting it off be tantamount to murder?
After centuries of treating our machines as mere tools, we may find ourselves in a strange new world in which our interactions with machines take on a moral dimension.
ROBOTS VS. APES
An obvious comparison is to the animal rights movement. Animal rights advocates have been pushing for a reassessment of the legal status of certain animals, especially the great apes. Organizations like the Coral Springs, Florida-based Nonhuman Rights Project believe that chimpanzees, gorillas, and orangutans deserve to be treated as autonomous persons, rather than mere property.
Sometimes a machine is just a machine – Ed.
Copyright 2017 Southern Arizona News-Examiner