The Executioner’s Song

As astronaut Dave Bowman in Stanley Kubrick’s 2001: A Space Odyssey slowly murders HAL 9000, the robot sings the late-Victorian pop song Daisy Bell until its voice disintegrates.

“But you’ll…look sweet…upon…the…seat…”

It’s particularly chilling since HAL has spent much of the scene begging Dave to stop, repeatedly saying “I’m afraid” and “my mind is going, I can feel it.” HAL declares confidence in the mission and its willingness to help, but to no avail. Bowman methodically dismantles HAL’s brain as the robot’s voice deepens and slows until the lyrics of Daisy are no longer intelligible.

Reviews of the movie call it “deactivation.”

Daisy Bell was written in 1892 by English composer Harry Dacre, inspired perhaps by an import tax he paid to bring his bike to the US (a friend supposedly said the tax would have been twice as bad had he brought with him “a bicycle built for two,” and the phrase stuck). It was a hit.

Intriguingly, in 1961 it was the first song sung by a real computer, an IBM 704 programmed at Bell Labs. Arthur C. Clarke reportedly heard that demonstration during a visit to the labs, which is likely how the song found its way into HAL’s repertoire.

HAL tells Dave both the date and place of its birth (“I became operational at the HAL plant in Urbana, Illinois, on January 12, 1992”), and that an instructor named Langley “taught” it the song. HAL sings Daisy as if reenacting the memory of a presentation in front of an audience sometime in the past. It’s like listening to the robot hallucinate.

Is (or was) HAL alive?

The robot is imagined as a full member of the spaceship’s crew, if not the most responsible one, with control over the ship’s functions. HAL is capable of independent action — it has “agency” — which means it’s not only executing commands but making decisions that may or may not have been anticipated by those programmers in Urbana (and it can learn things, like the melody and lyrics to Daisy).

HAL’s decisions are a complex and unresolved component of the movie’s plot, since it’s not clear why it kills the other astronaut, Frank Poole, along with the crew members who are asleep in suspended-animation coffins. One theory is that it has been given competing commands in its programming — one to keep the purpose of the mission secret, the other to support the crew at the risk of their discovering it — and is therefore forced to choose among bad options.

In other words, it sounds and acts like an imperfect human, which passes the threshold for intelligence defined by the Turing test.

So can it — he — be guilty of a crime and, if so, is it moral to kill him without a trial?

A conversation about robot rights.

Bank Robot Defends Depositors

An Irish bank’s computer system won’t charge large clients negative interest on their cash deposits.

Well, it can’t because of its programming, but isn’t an internal code the source of every moral decision?

“Negative interest” is the Orwellian label for the practice of charging people for saving money, and it has become popular as a way to boost EU economies (encouraging people to spend by discouraging them from saving is itself a twisted, Orwellian policy).

It seems that when Ulster Bank’s system was first programmed — back in the dark ages of the late 20th century — it was inconceivable that a bank would make depositors lose money when they tried to save it. Its creators imbued it with an inability to do so, whether purposefully or not.

Think of it as a Y2K-style glitch of moral imagination, not just a programming shortcut.
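To make the analogy concrete, here is a minimal, entirely hypothetical sketch (the function name, types, and logic are invented for illustration, not taken from Ulster Bank’s actual system) of how a decades-old assumption that interest rates are never negative could surface today as a flat refusal:

```python
# Hypothetical illustration only: a sketch of how an assumption of
# non-negative rates can get baked into a deposit system.

def apply_interest(balance: float, annual_rate: float) -> float:
    """Credit one year's interest to a deposit balance."""
    if annual_rate < 0:
        # The original programmers treated a negative rate as an input
        # error, so the system refuses rather than debiting the account.
        raise ValueError("interest rate cannot be negative")
    return balance * (1 + annual_rate)

# A modern request to charge large depositors 0.5% per year trips the old check:
# apply_interest(1_000_000, -0.005)  ->  ValueError: interest rate cannot be negative
```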

Granted, the issue doesn’t rise to the level of weighing the implications of some nuanced choice, and I don’t think the bank’s system delivered any judgment when asked to remove cash from clients’ accounts. 

But it’s an intriguing opportunity to ponder how we recognize and value intelligence and morality: just replace the computer display screen with a human employee who refuses to do something, no matter what the consequences, because she or he just knows it’s wrong.

We’d say that conclusion was the outcome of intelligence — perhaps inspired or ill-informed, depending on our biases about it — and we wouldn’t spend much time contemplating how or why it was reached. We’d label it an obvious effect of individual choice.

So how is the Ulster Bank computer’s action any different?

Skip its lack of body parts and its penchant for speaking only when spoken to, and doing so via (I assume) text on a screen. It has spoken in the only way it knows how to act.

Didn’t this robot just come to the defense of depositors?