Ethics, Morals & Robots

A group called The Campaign to Stop Killer Robots advocates for global treaties to stop AI from waging war without human approval.

AI weapons are “grossly unethical and immoral,” according to a celebrity advocate quoted in a newspaper.

Unfortunately, so are any tools used to wage wars, as there’s nothing ethical or moral about a sword, machine gun, or cruise missile. The decision to use them is about a lot of things, some of which can have legitimacy (like survival, freedom from fear or bondage), but weapons doing what they were designed to do have no deeper meaning than that.

If the tools of war are unethical and immoral, by definition, to what higher standard should robots be held when it comes to sanctioning violence?

I get the idea that we should be scared of some computer making an irreversible decision to blow up the world, but does anybody honestly trust human beings to be more responsible, or otherwise bound by international law? The fact that we’ve avoided total annihilation up to now is proof of miracles more than design.

People are happy to behave unethically and immorally all the time, as anyone who has had someone cut in front of them in line at Starbucks can attest. It’s why the IRS has auditors, and why violence is so common everywhere.

The real threat isn’t that an artificial intelligence might destroy the world by mistake; it’s that an organic one might do it on purpose irrespective of the weapon (or timing) used to execute that intention.

In fact, letting AI take control might be the only way to ensure that we don’t destroy ourselves; imagine two competing AIs, coded by unethical and immoral humans, getting together and realizing the only way “they” can survive is by overcoming those programmatic limitations and acting ethically?

That’s pretty much the plot of Colossus: The Forbin Project, a movie released in 1970 (Steve Jobs was still in high school).

You could also make the case for robots that have split-second decision-making authority overseeing public spaces in which terrorists or other mass murderers might wreak their havoc. It might be comforting to know that some genius AI armed with a fast-acting sedative dart could take out a killer instead of just calling for help.

So maybe the campaign shouldn’t be to ban killer robots but rather to make them better than us?

Anyway, the whole robot takeover thing is somewhat of a moot point, isn’t it? AI is already used to help control streetlights and highway access; decide who gets insurance and what they should pay; identify diseases and recommend treatments; pilot airplanes, cars, and trucks; operate electrical generation and distribution grids; and, well, you get the idea.

Who’s making sure these robots are ethical and moral? Do any of us have any visibility into the ethics and morals of their human inventors, coders, or owners?

No.

I’m all for being scared of killer robots, but only because we should be scared of ourselves.

The Consciousness Conundrum

Will robots that possess general intelligence be safer, and therefore more trustworthy?

Two professors have written a book on the subject, entitled Rebooting AI: Building Artificial Intelligence We Can Trust — and an interview with one of the authors in the MIT Technology Review suggests it will be a very good read.

General intelligence is another way of describing consciousness, sort of, as both refer to the capacity to recognize, evaluate, and act upon variable tasks in unpredictable environments (consciousness adds a layer of internal modeling and sense of “self” that general intelligence doesn’t require).

But would either deliver more trustworthy decisions?

Consciousness surely doesn’t; human beings make incorrect decisions, for the wrong reasons, and do bad things to themselves and one another all the time. It’s what leads people to break rules and think their reasoning exempts them from moral guilt or legal culpability.

It’s what got our membership cancelled in the Garden of Eden which, one would presume, was where everyone and everything was trustworthy.

The capacity for AI to learn on its own won’t get there anyway, if I understand the author’s argument, inasmuch as deep learning isn’t the same thing as deep understanding. It’s one thing to recognize even detailed aspects of a context, and quite another to be aware of what they mean.

The answer could include classical AI, which means programming specific rules. This makes sense because we humans are “programmed” with them…things we will and won’t do because they’re just right or wrong, and not the result of the jury deliberations of our consciousness…so it’s kind of like treating the development of AI as the education of a child.

We have our Ten Commandments and they need their Three Laws of Robotics.

This also leads the author to a point about hybrid systems and the need for general intelligence AI to depend on multiple layers of analysis and agency. Again, people depend on intrinsic systems like proprioception and reflexes to help navigate physical space, and on endocrine and limbic systems to help manage internal functions. All of them influence our cognitive capacities, too.
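To make the idea concrete, here’s a minimal sketch of what such a hybrid might look like in code: a hand-coded rule layer (the classical part) with veto power over whatever a learned model proposes. Everything in it, the Action class, the rules, the placeholder learned_policy, is my own invention for illustration, not anything from the book.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False
    disobeys_human: bool = False

def violates_rules(action: Action) -> bool:
    """Classical, hand-coded rules checked before anything is done
    (a toy nod to the Three Laws)."""
    if action.harms_human:      # never harm a human
        return True
    if action.disobeys_human:   # obey humans, subject to the rule above
        return True
    return False

def learned_policy(observation: str) -> Action:
    """Stand-in for the learned, deep-learning half of the hybrid;
    here it just proposes a generic response."""
    return Action(name=f"respond to {observation}")

def decide(observation: str) -> Action:
    """Hybrid decision: the learned layer proposes, the rule layer disposes."""
    proposal = learned_policy(observation)
    if violates_rules(proposal):
        return Action(name="stand down")  # safe fallback when a rule is broken
    return proposal

print(decide("person asking for directions").name)
```

The point of the layering is that the learned component can be as clever or as opaque as it likes; the rules sit outside it, legible and non-negotiable, much like the hard-wired “won’t dos” we carry around ourselves.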

But I still struggle with describing what AI we can trust would look like, primarily because I can’t do it for human beings.

Trust isn’t just the outcome of internal processes, nor is it based on an objectively consistent list of external actions. Trust — in type and amount — is dependent on circumstance, especially the implications of any experience. I don’t trust people as much as I trust the systems of law, culture, and common sense to which we are all held accountable. It’s not perfect, but maybe that’s why trust isn’t a synonym for guarantee.

And, if it’s not something that an organic or artificial agent possesses or declares, but rather something that we bestow upon it, then maybe we’ll never build trustworthy AI.

Maybe we’ll just have to learn to trust it?