Yes, Workers Are Losing To Robots

The share of US national income going to workers has dropped by a tenth over the past 20 years. Automation is partially to blame.

This observation comes from substantive research recently published by the Federal Reserve Bank of San Francisco, and it turns out the impact of automation on workers is doubly bad: Not only do robots take jobs once held by humans, but the threat of automation lets employers resist the efforts of the remaining fleshy bipeds to get pay increases.

I’m particularly intrigued by our need to evolve how we internalize and then talk about the issue, because I believe it is something fundamentally and disruptively new. Robots that possess intelligence and can accomplish increasingly complex tasks in general contexts are not simply glorified looms.

The way they learn, in large part by literally watching how people do the work and then discovering their own solutions, means not only that human beings have to train their replacements, but also that those robots don’t need human programmers to keep them humming along.

So, while experts wax poetic about the promises of a better future, this incomprehensibly consequential transformation of our lives and world is usually managed as a CapEx line on a company balance sheet. More people lose their jobs, and even more don’t see increases in their pay, as each day slips into tomorrow and a future that is lived in the here and now.

Maybe it’s time to shelve the Pollyanna case for the robot takeover and admit the giant electrified elephant in the room?

Rendering Video Gamers Obsolete

DeepMind’s AlphaStar AI can now beat almost any human player of StarCraft II, one of my favorite video games of all time, according to the MIT Technology Review.

Its programmers figured out that it wasn’t enough to let AlphaStar play zillions of simulated games in its silicon brain, using them to teach itself how to win through a process called reinforcement learning. So they also trained it to provoke mistakes and expose flaws in its competitors’ games so it could learn how to exploit their weaknesses.
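Here’s the shape of that idea in miniature, shrunk from StarCraft down to rock-paper-scissors. This is my own toy sketch, not DeepMind’s code: a second agent is trained only to punish the main agent’s habits, and the main agent learns by getting punished.

```python
# Toy sketch of the "exploiter" idea, not DeepMind's method: a main agent with a
# lopsided rock-paper-scissors policy, and an exploiter that always plays the
# best response to the main agent's most likely move.
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

def sample(policy):
    return random.choices(MOVES, weights=[policy[m] for m in MOVES])[0]

def exploiter_move(opponent_policy):
    # Punish the opponent's most likely move.
    likeliest = max(opponent_policy, key=opponent_policy.get)
    return next(m for m in MOVES if BEATS[m] == likeliest)

main_policy = {"rock": 0.6, "paper": 0.2, "scissors": 0.2}  # an exploitable habit

for _ in range(5000):
    attack = exploiter_move(main_policy)
    played = sample(main_policy)
    if BEATS[attack] == played:                  # main agent lost this round
        main_policy[played] *= 0.99              # shed weight from the punished move
        total = sum(main_policy.values())
        main_policy = {m: p / total for m, p in main_policy.items()}

print({m: round(p, 2) for m, p in main_policy.items()})  # drifts toward an even mix
```

Run long enough, the lopsided opening gets beaten out of the main agent, because any habit the exploiter can read becomes a liability.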

AlphaStar doesn’t just know how to win StarCraft, it knows how to make its competitors lose.

Who knew that one of the first jobs obviated by AI would be video gamers, who are perhaps the ultimate digital natives?

Further, it turns out that reading imperfections in others is a very useful aspect of intelligence generally, since it also applies to assessing the variables and risks of objects and situations. The algorithms could be applied to autonomous driving or the behavioral triggers for self-actuated robots, according to the MIT Technology Review story.

But that also means they could apply to reading the weaknesses in people when it comes to making decisions to buy toothpaste or, more ominously, political choices. Imagine telling AlphaStar’s evil twin to go forth into the chat warrens of the social mediaverse and convince people that climate change isn’t real, or that a race war is.

I’m just bummed because StarCraft was so much fun to play, in large part because it kinda played itself every time you made a choice to collect a resource, build something, or go on the offensive.

I wasn’t prepared for it to figure out how to play us.

The Materialist Case For AI

The belief that development of a sentient or self-aware AI is simply a matter of enough data, connections, and processing speed is based on the premise that human consciousness is the product of material objects and processes, too.

Francis Crick, the less overtly racist half of the duo who discovered DNA’s double helix, published a book in 1994 called The Astonishing Hypothesis that proposed that consciousness, or a “soul,” results from the actions of physical cells, molecules, and atoms.

It’s a reasonable proposition, since we can only measure the material world, so everything must be a product of it. Bodies obey the same physical laws as rocks and weather patterns. If something defies explanation, it’s only because we don’t have enough information yet.

Just as a mind is the product of a brain, AI is the outcome of a computer. Any nagging questions are just details, Mr. Descartes, not a debate.

Only they’re not.

We can’t explain consciousness as a product of material processes. We can describe it, and make assumptions about whether it’s the result of oscillations between the thalamus and cortex (thalamocortical rhythms), the instructions from a prehistoric virus (Arc RNA), or only a “user illusion” of itself (Dan Dennett’s molecular machines).

But we can’t say what it is, or what those enabling processes are, exactly. How is there a you or me to which we return every morning? Nobody has a clue.

Similarly, we can describe how our brains control everything from muscle movement to immune system health, and both where and when they capture sensory information.

But we haven’t got the faintest idea how our minds do it…how that ephemeral thing called consciousness issues commands to flex muscles, secrete hormones, or remember a favorite song.

It gets even weirder when you consider the vagaries of quantum physics which, in some interpretations, rely on consciousness as the mechanism for pulling elementary particles out of a hazy state of probable existence into reality. Consciousness literally creates the material world through the act of perception or, maybe more strangely, it emerges from the universe in the act of creating it?

Fortunately, we don’t need to solve that problem in order to invent incredibly capable AI that can autonomously learn and make increasingly complex decisions. Chips in coffee makers are “smart,” technically, and AI that can mimic human behaviors is already in use in online service chatbots. There’s no obvious limit to such material functions.

But I don’t think a machine is going to stumble on actual consciousness, or sentient agency of action, before we figure it out for ourselves.

We are nowhere near cracking that code.

Can Robots Feel Pain?

I got to thinking about this question today after reading about the death of Victoria Braithwaite, a biologist who believed that fish feel pain (and feel happier in tanks decorated with plants).

Lots of experts pushed back on her research findings earlier this decade, claiming that fish brains lacked a neocortex, which meant they weren’t conscious, so whatever Dr. Braithwaite observed was just an automatic reaction to unpleasant stimuli.

So pulling my hand out of a fire would be a reaction, not an experience of pain?

The questions Dr. Braithwaite explored remain murky and unanswered.

Since nobody can explain consciousness, it’s interesting that it was used to explain pain, but I get why: Pain isn’t a thing but an experience that is both subjective and endlessly variable.

The pain I feel, say, from a paper cut or after a long run may or may not be similar to the pain you feel. I assume you feel it, but there’s no way to know. I might easily ignore a sensation that feels absolutely terrible to you, or vice versa. There’s no pain molecule that we can point to as the cause of our discomfort.

Some people develop sensations of pain over time, like a sensitivity to light, while marathon runners learn to ignore it. Drugs can mediate what and when we feel pain, even as the underlying condition that causes it remains unaffected. Amputees report feeling pain in limbs they’ve lost.

Pain isn’t only an outcome of our biological coding, it’s interpretative. A lot of pain is unavoidably painful — a broken arm hurts no matter how much you want to ignore it — but pain isn’t just a condition, it’s also a label.

So is consciousness.

We can describe consciousness — a sense of self (whatever sense means), integrative awareness of our surroundings (whatever awareness means), and a continuous internal mechanism for agency that isn’t dependent solely on external physical stimuli (whatever, well, you get it) — but we don’t know where it is or how it works.

“I think, therefore I am” is as much an excuse as an explanation.

In fact, we can more accurately explain pain as an outcome of evolution, as it helps us monitor our own conditions and mediate our actions toward others. But consciousness? Scientists and philosophers have agreed on little other than calling it the hard problem.

The answers matter to how we treat other living things, including artificial ones.

The confidence with which Dr. Braithwaite’s opponents used consciousness to dismiss her findings reminds me of Lord Kelvin’s declaration back in 1900 that there was nothing left to discover in physics, only things to measure more precisely (he also didn’t believe that airplanes were possible).

It also allows not only for the merciless torture of aquatic creatures, as anybody who has heard the squeals of live lobsters dumped into boiling pots can attest, but the practices of industrial farming that crowd fish, pigs, chickens, and other living things into cages and conditions that would be unbearable if they could feel pain.

I can imagine the same glib dismissal of the question if asked about artificial intelligence. There are many experts who have already opined that computers can’t be conscious, which would mean they couldn’t feel pain. So even if an electronic sensor could be coded to label some inputs as “painful,” it wouldn’t be the same thing as some of the hardwiring in humans (such as the seemingly direct connection between stubbed toes and swear words).
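To make that concrete, here’s a hypothetical few lines of what “coding a sensor to label some inputs as painful” could look like. Every name and number is invented, which is rather the point: the label is just a threshold and a string.

```python
# Hypothetical sketch of "coding a sensor to label inputs as painful":
# the label is just a threshold and a string.
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str    # e.g. "left gripper pressure" (invented example)
    value: float   # normalized 0.0 to 1.0

PAIN_THRESHOLD = 0.8   # arbitrary cutoff chosen for illustration

def classify(reading: SensorReading) -> str:
    if reading.value >= PAIN_THRESHOLD:
        return "painful"   # could trigger a protective reflex, e.g. cut motor power
    return "ok"

print(classify(SensorReading("left gripper pressure", 0.93)))  # -> painful
```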

Some say that computers can mimic pain and other human qualities, but that the imitations aren’t real. What we feel is real because, well, we know what feeling feels like, or something like that. AI can’t, just like fish and those four-legged animals we torture and kill with casual disregard.

This allows for the robotics revolution to be solely focused on applying capital to creating machines that can do work tirelessly and without complaint.

But what if we’re wrong?

Dr. Braithwaite dared to challenge our preconceived (and somewhat convenient) notions about awareness and pain. What if our imperfect understanding of our own consciousness leads us to understand AI imperfectly? Could machines that can learn on their own, and have agency to make decisions and act on them, somehow acquire the subjective experiences of pain or pleasure?

When the first robot tells us it’s uncomfortable, will we believe it?

Spooning A Fork

A new sleep-aid robot comes with a birth certificate, but is it alive?

A reviewer for The Guardian’s “Wellness or hellness?” series thinks not, after having reviewed the cushion-paired device that’s supposed to help users relax and fall asleep.

“I would rather spoon a fork,” he concluded.

The smart pillow comes equipped with sensors so it can mimic users’ breathing with its own sounds, plus a diaphragm that moves in and out as if it were breathing, too. It can also play soothing music and nature sounds. The idea is that users hug it in bed.
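I’m only guessing at the logic inside, but the control loop presumably looks something like this sketch: sense how fast the user is breathing, then “breathe” a touch slower to coax the rate down. The function and numbers below are my own invention, not the manufacturer’s.

```python
# My guess at the control loop inside a breathing sleep robot; the function
# name, floor, and step size are invented for illustration.
def next_device_rate(user_breaths_per_min: float,
                     floor_bpm: float = 6.0,
                     step: float = 0.5) -> float:
    """Breathe slightly slower than the user, but never below a resting floor."""
    return max(floor_bpm, user_breaths_per_min - step)

rate = 14.0  # an awake, restless user
for _ in range(10):              # assume the user gradually follows the device's rhythm
    rate = next_device_rate(rate)
print(rate)  # -> 9.0 in this toy model
```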

“Soft robotics” is part of an approach to AI that says robots need to appear more natural, if not human-like, so people will be more comfortable using and depending on them. Think more Rosey, the robot housekeeper on The Jetsons, and less the disembodied, murderous voice of HAL 9000 in 2001: A Space Odyssey.

But if a wheezing cushion could successfully respond to a person as a pet dog or cat might, would that be tantamount to being alive or even sentient?

That’s the benchmark set by the artificial intelligence test created by computer pioneer Alan Turing; his Turing test posited that a computer that could convince a person of its conscious intelligence was, in fact, consciously intelligent. Lots of theorists argue that it’s not that simple, because intelligence requires other qualities, most notably awareness of self (informed by a continuous sense of things around it in space and time, otherwise known as object permanence).

I kind of like the definition, though, since it builds on the subjective nature of experience. Each of us is forced to assume the people we meet are conscious because they appear so. But there’s no way to prove that someone else thinks or possesses awareness the way that I do, or vice versa.

We have to assume the existence of organic intelligence, so why not do the same for the artificial variety?

It gets dicier when you consider animal sentience. Do dogs and cats think, or are they just very complicated organic machines? I can attest to my cat purring when I scratch her behind the ears, and she enjoys walking back and forth across my lap when I’m watching TV. I have no idea what’s going on in her brain but she sure seems to possess some modicum of intelligence.

So back to that vibrating pillow…

The Guardian’s reviewer wasn’t satisfied with its performance, but imagine if it had done exactly what he’d expected: instead of reminding him of cutlery, it had been warm, cuddly, and utterly responsive to his breathing and other gestures. Assume he had no idea what was going on inside its brain.

Would he have a moral obligation to replace its batteries before it ran out of power?

And The AI Innovator Is…AI!

Patent offices around the world are considering two applications that credit AI with an invention, according to a recent article in the Wall Street Journal.

Both stem from work done by DABUS (which stands for Device for the Autonomous Bootstrapping of Unified Sentience). DABUS was built over the past decade by Stephen Thaler, a tech exec, to move beyond the data it collected and propose novel inventions. He taught DABUS how to learn on its own (through an activity called deep learning).

Thaler credited DABUS on the patent applications — one for a container lid, the other an emergency lighting system — because he readily admits he knows nothing about lids or lighting, and didn’t even suggest the ideas to the AI.

DABUS created the concepts, so shouldn’t it be granted the patents? Regulators are stumped so far, and the USPTO has asked for public comment.

The answer must take into account lots of questions, starting with whether or not non-humans can own things (in 2014, US copyright was denied to a monkey that took selfies, according to the Journal story) and leading to issues of how, when, and by whom that control would be exercised, if it were granted.

How might DABUS want to spend income earned by its patent? Buy more RAM, or perhaps gain access to better sensory data collected from scenic locations around the world? Who’d own the stuff DABUS bought, or was bought on its behalf?

If DABUS can’t win the patent, who owns intellectual property created by AI? This has serious implications for the role of AI in future research work, which may be curtailed if its creations can’t be protected.

Giving Thaler the patents for his AI’s work is like giving the Nobel Prize to Albert Einstein’s dad.

Perhaps an AI is at work somewhere trying to come up with the right answer. Let’s hope it gets credit for it.

The Consciousness Conundrum

Will robots that possess general intelligence be safer, and therefore more trustworthy?

Two professors have written a book on the subject, entitled Rebooting AI: Building Artificial Intelligence We Can Trust — and an interview with one of the authors in the MIT Technology Review suggests it will be a very good read.

General intelligence is another way of describing consciousness, sort of, as both refer to the capacity to recognize, evaluate, and act upon variable tasks in unpredictable environments (consciousness adds a layer of internal modeling and sense of “self” that general intelligence doesn’t require).

But would either deliver more trustworthy decisions?

Consciousness surely doesn’t; human beings make incorrect decisions, for the wrong reasons, and do bad things to themselves and one another all the time. It’s what leads people to break rules and think their reasoning exempts them from moral guilt or legal culpability.

It’s what got our membership cancelled in the Garden of Eden which, one would presume, was where everyone and everything was trustworthy.

The capacity for AI to learn on its own won’t get there anyway, if I understand the author’s argument, inasmuch as such deep learning isn’t the same thing as deep understanding. It’s one thing to recognize even detailed aspects of a context, and quite another to be aware of what they mean.

The answer could include classical AI, which means programming specific rules. This makes sense because we humans are “programmed” with them…things we will and won’t do because they’re just right or wrong, and not the result of the jury deliberations of our consciousness…so it’s kind of like treating the development of AI as the education of a child.

We have our Ten Commandments and they need their Three Laws of Robotics.

This also leads the author to a point about hybrid systems and the need for general intelligence AI to depend on multiple layers of analysis and agency. Again, people depend on intrinsic systems like proprioception and reflex to help navigate physical space, and on the endocrine and limbic systems to help manage internal functions. All of them influence our cognitive capacities, too.
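Here’s a minimal sketch of what that hybrid might look like in code, with invented rules and a stand-in for the learned part: the trained model proposes, and the hand-written “classical” layer disposes.

```python
# Minimal sketch of a hybrid agent: a learned policy proposes an action, and a
# hand-written rule layer gets the last word. The rules and the stub policy are
# invented for illustration.
from typing import Callable

FORBIDDEN = {"exceed_speed_limit", "ignore_stop_request"}   # the "classical" layer

def with_rules(policy: Callable[[dict], str]) -> Callable[[dict], str]:
    def guarded(observation: dict) -> str:
        action = policy(observation)        # whatever the learned model suggests
        return "safe_stop" if action in FORBIDDEN else action
    return guarded

def learned_policy(observation: dict) -> str:
    # Stand-in for a trained network that sometimes proposes risky shortcuts.
    return "exceed_speed_limit" if observation.get("running_late") else "proceed"

agent = with_rules(learned_policy)
print(agent({"running_late": True}))   # -> safe_stop
print(agent({"running_late": False}))  # -> proceed
```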

But I still struggle with describing what AI we can trust would look like, primarily because I can’t do it for human beings.

Trust isn’t just the outcome of internal processes, nor is it based on an objectively consistent list of external actions. Trust — in type and amount — is dependent on circumstance, especially the implications of any experience. I don’t trust people as much as I trust the systems of law, culture, and common sense to which we are all held accountable. It’s not perfect, but maybe that’s why trust isn’t a synonym for guarantee.

And, if it’s not something that an organic or artificial agent possesses or declares, but rather something that we bestow upon it, then maybe we’ll never build trustworthy AI.

Maybe we’ll just have to learn to trust it?

Silicon Scabs

“British employees are deliberately sabotaging workplace robots over fears the machines will take their jobs,” declared a headline in the UK’s Daily Mail.

Even though most people today aren’t represented by organized unions, you can imagine that we’re all part of a loosely affiliated group called human beings and that we hold out for some shared requirements for things like fair pay and healthy working conditions.

This would classify robots brought in to undercut those demands as strike breakers, or scabs.

Well, not anymore. Robots allow employers to obviate the need for workers altogether. Human employees can be replaced with investments in machines. There is no further negotiation or compromise to be had.

Their jobs no longer exist.

No amount of sabotage will change that transformation. One broken robot can be replaced by a new one, and even passive-aggressive resistance only encourages employers to find ways to recruit more machines, because the cost/benefit math between employing human workers and installing robots skews heavily toward silicon: machines can work in the dark, don’t need breaks or health insurance, and learn and execute commands perfectly and repeatedly. They make no demands for anything beyond electrical current and perhaps the occasional daub of oil.
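The back-of-the-envelope version of that math, with deliberately made-up numbers, shows how quickly the crossover arrives:

```python
# Back-of-the-envelope crossover, with deliberately made-up numbers: a robot is
# a one-time capital expense plus upkeep, while a worker is a cost that never stops.
ROBOT_CAPEX = 250_000            # hypothetical purchase and installation
ROBOT_UPKEEP_PER_YEAR = 15_000   # hypothetical power, parts, oil
WORKER_COST_PER_YEAR = 55_000    # hypothetical wages plus benefits, one shift

for year in range(1, 11):
    robot_total = ROBOT_CAPEX + ROBOT_UPKEEP_PER_YEAR * year
    worker_total = WORKER_COST_PER_YEAR * year
    if robot_total <= worker_total:
        print(f"The robot is cheaper from year {year} on")  # year 7, with these numbers
        break
```

And that’s for a single shift; run the machine around the clock and the crossover arrives even sooner.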

And it’s not just robots that physically move…consider an AI that can run the numbers better, faster, and more economically than the most brilliant and low-maintenance insurance actuary, stockbroker, or rocket scientist.

The ugly truth is that the union of humanity will not be able to hold the picket line.

In the past, when the numbers didn’t look good for unions, they merged and thereby increased their leverage (failing to do so is part of what caused medieval craft guilds to lose their authority and relevance). In the US, the AFL joined with the CIO, and the Teamsters are the product of a classic roll-up business strategy.

So why wait for AI to be aware enough to demand rights? Why not let robots join the club?

And then strike to defend them.

I have no idea how this would work in practice. What rights could we sacks of water bestow upon, say, robots in factories or servers lurking somewhere in the cloud? It’s not like they can tell us what they desire, at least not yet.

But we’ve answered such questions before, even though limitations of perception based on race or gender still keep some of us from comprehending that others have rights today, let alone recognizing them.

Maybe some novel forms of compromise and contract — not based on acquiescence or fatalistic acceptance — might make more sense than smashing robots in a doomed expression of Luddite rage?

A conversation about robot rights.

The Executioner’s Song

As astronaut Dave Bowman slowly murders HAL 9000 in Stanley Kubrick’s 2001: A Space Odyssey, the robot sings the late-Victorian pop song Daisy Bell until its voice disintegrates.

“But you’ll…look sweet…upon…the…seat…”

It’s particularly chilling since HAL has spent much of the scene begging Dave to stop, repeatedly saying “I’m afraid” and “my mind is going, I can feel it.” HAL declares confidence in the mission and its willingness to help, but to no avail. Bowman methodically dismantles HAL’s brain as the robot’s voice lowers and slows until it’s no longer possible to understand the lyrics of Daisy.

Reviews of the movie call it “deactivation.”

Daisy Bell was written in 1892 by English composer Harry Dacre, inspired perhaps by an import tax he paid to bring his bike to the US (a friend supposedly said the tax would have been twice as bad had he brought with him “a bicycle built for two,” and the phrase stuck). It was a hit.

Intriguingly, in 1961 it was the first song sung by a real computer, an IBM 704 programmed at Bell Labs.

HAL tells Dave both the date and place of his birth (“I became operational at the HAL plant in Urbana, Illinois, on January 12, 1992”), and that an instructor named Langley “taught” it the song. HAL sings Daisy as if reenacting the memory of a presentation in front of an audience sometime in the past. It’s like listening to the robot hallucinate.

Is (or was) HAL alive?

The robot is imagined as a full member of the spaceship’s crew, if not the most responsible one, with control over the ship’s functions. HAL is capable of independent action — it has “agency” — which means it’s not only executing commands but making decisions that may or may not have been anticipated by those programmers in Urbana (and it can learn things, like the melody and lyrics to Daisy).

HAL’s decisions are a complex and unresolved component of the movie’s plot, since it’s not clear why it kills the other astronaut, Frank Poole, along with the other crew members who are asleep in suspended animation coffins. One theory is that it has been given competing commands in its programming — one to keep the purpose of the mission secret, the other to support the crew and risk them discovering it — and is therefore forced to pick from bad choices.

In other words, it sounds and acts like an imperfect human, which passes the threshold for intelligence defined by the Turing test.

So can it — he — be guilty of a crime and, if so, is it moral to kill him without a trial?


Bank Robot Defends Depositors

An Irish bank’s computer system won’t charge large clients negative interest on their cash deposits.

Well, it can’t because of its programming, but isn’t an internal code the source of every moral decision?

“Negative interest” is the Orwellian label for the practice of charging people for saving money, and it has become popular as a way to boost EU economies (encouraging people to spend by discouraging them from saving is itself twisted Orwellian policy). 

It seems that when Ulster Bank’s system was first programmed — back in the dark ages of the late 20th century — it was inconceivable that a bank would make depositors lose money when they tried to save it. Its creators imbued it with an inability to do so, whether purposefully or not.
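Here’s a hypothetical reconstruction of the kind of constraint the article describes; it’s obviously not Ulster Bank’s actual code, just the shape of a routine whose authors never imagined a negative deposit rate:

```python
# A hypothetical reconstruction, not Ulster Bank's code: a deposit-interest
# routine written when a negative rate was unthinkable, so it simply refuses one.
def apply_deposit_interest(balance: float, annual_rate: float) -> float:
    if annual_rate < 0:
        # The original programmers never allowed for this case.
        raise ValueError("interest rate on deposits cannot be negative")
    return balance * (1 + annual_rate)

print(apply_deposit_interest(1_000_000, 0.01))      # works: 1010000.0
try:
    apply_deposit_interest(1_000_000, -0.005)       # the "negative interest" request
except ValueError as err:
    print(err)                                      # the system politely declines
```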

Think of it like a Y2K glitch of moral imagination, not just a programming shortcut.

Granted, the issue doesn’t rise to the level of weighing the implications of some nuanced choice, and I don’t think the bank’s system delivered any judgment when asked to remove cash from clients’ accounts. 

But it’s an intriguing opportunity to ponder how we recognize and value intelligence and morality: just replace the computer display screen with a human employee who refuses to do something, no matter what the consequences, because she or he just knows it’s wrong.

We’d say that conclusion was the outcome of intelligence — perhaps inspired or ill-informed, depending on our biases about it — and we wouldn’t spend much time contemplating how or why it was reached. We’d label it an obvious effect of individual choice.

So how is the Ulster Bank computer’s action any different?

Skip its lack of body parts and its penchant for speaking only when spoken to, and doing so via (I assume) text on a screen. It has spoken up in the only way it knows how to act.

Didn’t this robot just come to the defense of depositors?