Do robots have rights?
We don’t have to wait for the arrival of the first self-aware artificial general intelligence (“AGI,” which is sometimes called “strong AI”) to ask the question, because that threshold will not give us rock-solid guidance to follow.
Our ideas about rights aren’t dependent on consciousness, but rather on variable externalities of time, culture, and circumstance. As such, we human beings have been horribly inconsistent and often quite cruel in how we’ve recognized or ignored the rights of others. Sliding scales of what’s required or prohibited have granted blessings to one community and curses to another. Race, gender, age, economic status, and geopolitics are only some of the criteria that have been used to decide whether someone deserves one right or another.
Slaves as property, workers as machines…we still debate these definitions today, and people suffer under them. It doesn’t help that science has reached no settled conclusion on what constitutes consciousness in the first place. “I think, therefore I am” tells us everything and nothing. We have no way of ascertaining what others think, or how, beyond what we believe they do when they talk and act.
It’s just as complicated when considering our perceptions of other living things, whether animal or plant. Bear-baiting was once considered funny, experimenting on monkeys a necessity, and still today we consign certain species to entire lifetimes spent in little cages before we eat them. Animals are conscious of their surroundings and one another, even though we have no reliable insight into what they think or feel about things. Intelligence emerges from colonies of ants, and plants react to sunlight and sounds much as we do. There’s no single model to define it, but rather endless categories of nuanced detail.
Do they have rights? It seems the answer, like beauty, rests in the eyes of the beholder.
So how should we treat the machines that already possess many of the attributes we normally associate with humans and other living things, like recognition of objects and environments, the ability to learn, recollect, and infer, and the capacity to take novel action? Are they simply tools to be used and discarded when we’re done? Should our opinions evolve as their skills grow?
Do our ethics toward robots affect our societies and ourselves? Put more melodramatically, what if the robot rebellion imagined in sci-fi movies like The Terminator isn’t the product of maniacal desires for control, but revenge for years of oppression?
This blog is intended to help prompt that conversation. A podcast will follow shortly, on which I hope to interview experts in science, politics, history, and religion.
Jonathan Salem Baskin
October 1, 2019