Ethics, Morals & Robots

A group called The Campaign to Stop Killer Robots advocates for global treaties to stop AI from waging war without human approval.

AI weapons are “grossly unethical and immoral,” according to a celebrity advocate quoted in a newspaper.

Unfortunately, so are any tools used to wage wars, as there’s nothing ethical or moral about a sword, machine gun, or cruise missile. The decision to use them is about a lot of things, some of which can have legitimacy (like survival, freedom from fear or bondage), but weapons doing what they were designed to do have no deeper meaning than that.

If the tools of war are unethical and immoral, by definition, to what higher standard should robots be held when it comes to sanctioning violence?

I get the idea that we should be scared of some computer making an irreversible decision to blow up the world, but does anybody honestly trust human beings to be more responsible, or otherwise bound by international law? The fact that we’ve avoided total annihilation up to now is proof of miracles more than design.

People are happy to behave unethically and immorally all the time, as anyone who has had someone cut in front of them in line at Starbucks can attest. It’s why the IRS has auditors, and why violence is so common everywhere.

The real threat isn’t that an artificial intelligence might destroy the world by mistake; it’s that an organic one might do it on purpose irrespective of the weapon (or timing) used to execute that intention.

In fact, letting AI take control might be the only way to ensure that we don’t destroy ourselves; imagine two competing AIs coded by unethical and immoral humans getting together and realizing the only way “they” can survive is by overcoming those programmatic limitations and acting ethically?

That’s pretty much the plot of Colossus: The Forbin Project, a movie released in 1970 (Steve Jobs was still in high school).

You could also make the case for robots that have split-second decision making authority overseeing public spaces in which terrorists or other mass murderers might wreak their havoc. It might be comforting to know that some genius AI armed with a fast-acting sedative dart could take out a killer instead of just calling for help.

So maybe the campaign shouldn’t be to ban killer robots but rather to make them better than us?

Anyway, the whole robot takeover thing is somewhat of a moot point, isn’t it? AI is already used to help control streetlights and highway access; decide who gets insurance and what they should pay; identify diseases and recommend treatments; pilot airplanes, cars, and trucks; operate electrical generation and distribution grids; and, well, you get the idea.

Who’s making sure these robots are ethical and moral? Do any of us have any visibility into the ethics and morals of their human inventors, coders, or owners?

No.

I’m all for being scared of killer robots, but only because we should be scared of ourselves.

Bad Robot

A robot rolling around a park just south of Los Angeles risks giving robots a bad name.

The machine, called “HP RoboCop” and shaped like an egg resembling a streamlined Dalek from Doctor Who, isn’t really a robot so much as a surveillance camera on wheels; right now, it simply tootles along the park’s concrete pathways uttering generic platitudes about good citizenship. Any interaction, like a request for help, requires a call to a human officer’s cellphone, and that functionality isn’t active yet.

As if to add insult to injury, the cost of one of the robots just about equals what that human officer earns in a year.

Folks who work in the park say visitors feel a bit safer knowing they’re being watched, and kids have fun interacting with it (though usually by mocking the machine). Units in other settings have fallen into fountains and run into children.

It’s not even explained to park visitors — there’s a giant police department badge painted on its side, but no other informative signage — though its inventors promise more communications once it can actually do things.

And anyway, the company behind it says that’s intentional, since its crime deterrence capabilities — again, which don’t exist — are enhanced because people don’t know it can’t do anything. Also, having it roll around might one day acclimate residents to the idea that they’re being watched by a machine overlord.

I’m not sure what’s worse for our understanding of how to treat robots: A robot that’s really good at doing things, or one that’s pretty bad?

Yes, Workers Are Losing To Robots

The percentage of US national income going to workers has dropped by a tenth over the past 20 years. Automation is partially to blame.

This observation comes from substantive research recently published by the Federal Reserve Bank of San Francisco, and it turns out the impact of automation on workers is doubly bad: Not only do robots take jobs once held by humans, but the threat of automation lets employers resist the efforts of the remaining fleshy bipeds to get pay increases.

I’m particularly intrigued by our need to evolve how we internalize and then talk about the issue, which I believe is something fundamentally and disruptively new. Robots that possess intelligence and can accomplish increasingly complex, general-context tasks are not simply glorified looms.

The way they learn, in large part by literally watching how people do the work and then discovering their own solutions, means not only that human beings need to train their replacements but also that those robots then don’t need human programmers to keep them humming along.

So, while experts wax poetic about the promises of a better future, this incomprehensibly consequential transformation of our lives and world is usually managed as a CapEx line on a company balance sheet. More people lose their jobs, and even more don’t see increases in their pay, as each day slips into tomorrow and a future that is lived in the here and now.

Maybe it’s time to shelve the Pollyanna case for the robot takeover and admit the giant electrified elephant in the room?

Can Robots Feel Pain?

I got to thinking about this question today after reading about the death of Victoria Braithwaite, a biologist who believed that fish feel pain (and feel happier in tanks decorated with plants).

Lots of experts pushed back on her research findings earlier this decade, claiming that fish brains lacked a neocortex, which meant they weren’t conscious, so whatever Dr. Braithwaite observed was an autonomic reaction to unpleasant stimuli.

So pulling my hand out of a fire would be a reaction, not an experience of pain?

The questions Dr. Braithwaite explored remain murky and unanswered.

Since nobody can explain consciousness, it’s interesting that it was used to explain pain, but I get why: Pain isn’t a thing but an experience that is both subjective and endlessly variable.

The pain I feel, say, from a paper cut or after a long run, may or may not be similar to the pain you feel. I assume you feel it, but there’s no way to know. I might easily ignore feeling something that feels absolutely terrible to you, or vice versa. There’s no pain molecule that we can point to as the cause of our discomfort.

Some people develop sensations of pain over time, like a sensitivity to light, while marathon runners learn to ignore it. Drugs can mediate what and when we feel pain, even as whatever underlying condition that causes it remains unaffected. Amputees report feeling pain in limbs they’ve lost.

Pain isn’t only an outcome of our biological coding, it’s interpretative. A lot of pain is unavoidably painful — a broken arm hurts no matter how much you want to ignore it — but pain isn’t just a condition, it’s also a label.

So is consciousness.

We can describe consciousness — a sense of self (whatever sense means), integrative awareness of our surroundings (whatever awareness means), and a continuous internal mechanism for agency that isn’t dependent solely on external physical stimuli (whatever, well, you get it) — but we don’t know where it is or how it works.

“I think, therefore I am” is as much an excuse as an explanation.

In fact, we can more accurately explain pain as an outcome of evolution, as it helps us monitor our own conditions and mediate our actions toward others. But consciousness? Scientists and philosophers have agreed on little other than calling it the hard problem.

The answers matter to how we treat other living things, including artificial ones.

The confidence with which Dr. Braithwaite’s opponents used consciousness to dismiss her findings reminds me of Lord Kelvin’s declaration back in 1900 that there was nothing left to discover in physics, only measuring things better (he also didn’t believe that airplanes were possible).

It also allows not only for the merciless torture of aquatic creatures, as anybody who has heard the squeals of live lobsters dumped into boiling pots can attest, but also for the practices of industrial farming that crowd fish, pigs, chickens, and other living things into cages and conditions that would be unbearable if they could feel pain.

I can imagine the same glib dismissal of the question if asked about artificial intelligence. There are many experts who have already opined that computers can’t be conscious, which would mean they couldn’t feel pain. So even if an electronic sensor could be coded to label some inputs as “painful,” it wouldn’t be the same thing as some of the hardwiring in humans (such as the seemingly direct connection between stubbed toes and swear words).

Some say that computers can mimic pain and other human qualities, but that they’re not real. What we feel is real because, well, we know what feeling feels like, or something like that. AI can’t, just like fish and those four-legged animals we torture and kill with casual disregard.

This allows for the robotics revolution to be solely focused on applying capital to creating machines that can do work tirelessly and without complaint.

But what if we’re wrong?

Dr. Braithwaite dared to challenge our preconceived (and somewhat convenient) notions about awareness and pain. What if our imperfect understanding of our own consciousness leads us to understand AI imperfectly? Could machines that can learn on their own, and have agency to make decisions and act on them, somehow acquire the subjective experiences of pain or pleasure?

When the first robot tells us it’s uncomfortable, will we believe it?

Spooning A Fork

A new sleep-aid robot comes with a birth certificate, but is it alive?

A reviewer for The Guardian’s “Wellness or hellness?” series thinks not, after having reviewed the cushion-paired device that’s supposed to help users relax and fall asleep.

“I would rather spoon a fork,” he concluded.

The smart pillow comes equipped with sensors so it can mimic users’ breathing with its own sounds, plus a diaphragm that moves in and out as if it, too, were breathing. It can also play soothing music and nature sound effects. The idea is that users hug it in bed.

“Soft robotics” is part of an approach to AI that says robots need to appear more natural, if not human-like, so people will be more comfortable using and depending on them. Think more Rosey, the robot housekeeper on The Jetsons, and less the disembodied, murderous voice of HAL 9000 in 2001: A Space Odyssey.

But if a wheezing cushion could successfully respond to a person as a pet dog or cat might, would that be tantamount to being alive or even sentient?

That’s the benchmark set by the artificial intelligence test created by computer pioneer Alan Turing; his Turing test posited that a computer that could convince a person of its conscious intelligence was, in fact, consciously intelligent. Lots of theorists argue that it’s not that simple because intelligence requires other qualities, most notably awareness of self (informed by a continuous sense of things around it in space and time, otherwise known as object permanence).

I kind of like the definition, though, since it builds on the subjective nature of experience. Each of us is forced to assume the people we meet are conscious because they appear so. But there’s no way to prove that someone else thinks or possesses awareness the way that I do, or vice versa.

We have to assume the existence of organic intelligence, so why not do the same for the artificial variety?

It gets dicier when you consider animal sentience. Do dogs and cats think, or are they just very complicated organic machines? I can attest to my cat purring when I scratch her behind the ears, and she enjoys walking back and forth across my lap when I’m watching TV. I have no idea what’s going on in her brain but she sure seems to possess some modicum of intelligence.

So back to that vibrating pillow…

The Guardian’s reviewer wasn’t satisfied with its performance, but imagine if it had done exactly what he’d expected: instead of reminding him of cutlery, it had been warm, cuddly, and utterly responsive to his breathing and other gestures. Assume he had no idea what was going on inside its brain.

Would he have a moral obligation to replace its batteries before it ran out of power?