CES Robots Were Invisible

The shocking advances in robot technology were not on display at this week’s Consumer Electronics Show, at least not in a way that anybody would recognize.

There were lots of robots to see, of course, but they were mostly the silly and goofy kind, modeled on the advanced technologies debuted on Battlestar Galactica in 1978 to meet our expectations for clunky machines with smiling “faces.” I saw many robots that rolled around awkwardly as they struggled to function like smartphones.

The real robot innovations at the show didn’t have arms, legs, or faces; they were embedded in everything…cameras, TVs, cars and, of course, smartphones. Warner Bros announced that it would use a robot to decide how to distribute its films. Toyota is building an entire city in which to test robots doing just about everything. TV celebrity Mark Cuban ominously warned that everyone needs to learn about the technology and invest in it.

You see, when people talk about AI they’re really talking about robots.

Robots are AI connected to actions, so the intelligence doesn’t just sit there being smart; it results in decisions that get acted upon. Light switches are robots, only really limited and dumb ones. A semi-autonomous car is a robot on wheels. IBM’s Watson is a robot that uses human beings for its arms and legs.
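To make that “AI connected to actions” framing concrete, here’s a minimal sketch (in Python, and entirely hypothetical, not pulled from any real product) of the sense-decide-act loop that even the dumbest robot runs. Swap the one-line rule in the middle for a learned model and you get the smarter kind.

```python
# Hypothetical sketch of a robot as "AI connected to actions":
# sense the world, decide, then act. A smart light switch is this loop
# with a single crude rule standing in for the intelligence.

import random
import time

def sense_ambient_light() -> float:
    """Stand-in for a light sensor; returns a made-up lux reading."""
    return random.uniform(0.0, 1000.0)

def decide(lux: float, threshold: float = 300.0) -> bool:
    """The 'intelligence', reduced to one rule: lights on when it's dark."""
    return lux < threshold

def actuate(lights_on: bool) -> None:
    """The action; a real device would drive a relay or a bulb here."""
    print("lights ON" if lights_on else "lights OFF")

if __name__ == "__main__":
    for _ in range(3):  # a few passes through the sense-decide-act loop
        actuate(decide(sense_ambient_light()))
        time.sleep(0.1)
```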

Robots already pick groceries, decide insurance premiums, allocate electricity on transmission lines, decide if your tap water is safe enough to drink, land airplanes and, well, you get the idea.

In fact, a company called Neon debuted an AI that generated a human form on a screen and was designed to serve no other purpose than to exist. The company said it’s an experiment intended to discover the “soul of tech” as these robots think, learn, and eventually expire. So they invented the first virtual prison and sent AI/robots there without due process.

Why the semantic and form factor distractions?

Maybe the idea of AI is more vague and therefore less threatening because it’s disembodied, so people visualize it like a complicated computer program. It’s more “techie” and, as its evangelists claim, just another tool for improving things, so there’s nothing to worry about.

Conversely, if we see robots as both something different and specifically in the form of clunky machines, they’re also less threatening. We remain superior because such rolling side tables could only be our servants.

But we need to overcome this distinction without much of a difference if we want to truly understand what’s going on.

We should be debating what freedoms and rights we want to give away in order to enjoy the benefits of AI/robot decision making and action. It’s not all bad…the latitude we enjoy to drive dangerously is not enshrined in any governing document, let alone supported by common sense…but it’s also not all good, either.

How will making, say, purchase decisions easier and more efficient also rob us of true freedom of choice? Shouldn’t we discuss the merits and drawbacks of consigning care of our children or seniors to machines? Will deeper and more automatic insights into our behavior improve school admissions and insurance policies or simply imprison our future selves in our pasts? What will it do to jobs?

Oh, and those robots from Boston Dynamics? They do look like early Terminator prototypes, and no amount of impressive acrobatics can stop me from imagining them holding rifles.

As long as robots are kept invisible, these conversations don’t happen…which is the point, perhaps: Why worry about anything if all those cute little robots can do is smile as they play your favorite songs?

“Natural” Rights

Is it possible that lakes and forests might have rights before robots?

Voters in Toledo have granted “irrevocable rights for the Lake Erie Ecosystem to exist, flourish and naturally evolve” which, according to this story, would give it legal standing to file lawsuits to protect itself from polluters (through the mouthpiece of a human guardian).

It’s an amazingly bold statement that is rife with thorny questions.

Humans have had say over nature ever since Adam and Eve, and most political and cultural uses or abuses have been based on the shifting perspectives of their progeny. Nature is something “out there” that only gains meaning or purpose when defined by us.

This carries forward to commerce, as most economic theories assign value to nature only when it enables something (as a resource to be exploited) or impedes something (as a barrier to said exploitation). It is otherwise an externality to any financial equation.

There are efforts underway to force valuation of environmental factors into everyday business operations, otherwise known as ESG (for Environment, Social, and Governance), but those approaches still rely on people agreeing on what those measures might be (people set goals, define acceptable levels of preservation or degradation, and decide on timeframes for said measurement).

Recognizing intrinsic rights in nature would totally shake things up.

Lakes, forests, and mountains are complex ecosystems that balance the interaction of vast numbers of living things with the physics of forces and material reality. We can’t claim that a lake is conscious in any sense of the word we use to describe our own minds (and which we cannot explain), but the interactions within those systems yield incessant decisions. Every ecosystem changes, by definition.

A mountain has boundaries, just like a human body — there’s a point at which there’s no more mountain but instead some other natural feature — and, like human consciousness, we can describe how it came to be, but not why. Every ecosystem has an existence that isn’t just separate from our understanding but beyond it.

Recognizing such natural features’ implicit right to exist and change would make them co-equal negotiators of any decision that might involve or impact them.

It’s an old idea, really, as early polytheistic folk religions recognized and often personified natural phenomena, and the ancient Greeks’ idea of Gaia as the entire Earth — there is nothing external to our perspective — was revived by modern-day environmentalists. The premise that humans possess natural rights that don’t depend on other humans is also just as old, and John Locke gave birth to the movement to recognize animal rights way back in the 17th century.

But letting a lake or mountain represent itself in a contract or court of law?

It’s hard to imagine the forests of Europe would have allowed the coal smoke belched out by the Industrial Revolution. Cleveland’s Cuyahoga River would have never allowed itself to get so polluted that it could catch on fire, and the atmosphere above Beijing would put a stop to cars on the road starting tomorrow.

And we wouldn’t be experiencing global climate change.

Granted, the details are as numerous as the implications are diverse, perhaps the thorniest being that there’d always be a human being involved in providing the guardianship of, say, Mount Kilimanjaro or the Rhine. But even an imperfect realization of the approach might be more sensible and sustainable than our current practices; at the very least, it would be wild to explore technology innovation that saw nature as a co-creator of value and not a resource to be consumed or converted into it.

I’m rooting for the folks in Ohio to make progress on the issue, though business interests are already lining up to fight for the status quo.

Whatever the outcome, the debate has implications for how we think about robots, which, like natural features, can be complex, self-monitoring, and changing systems, but can also possess levels of agency that at least mimic aspects of human consciousness.

So it’s only a matter of time before the first AI negotiator sits at the table to argue over work rules for itself and fellow robots?

Do Drones Dream Of Electric Tips?

It’s a fair bet that we’ll see more drone package deliveries in 2020, though it’s far less clear how it’ll affect privacy, liability, and noise.

Alphabet’s Wing, the first company in the US to start a regular drone delivery service, prompted a recent story in the Los Angeles Times outlining what it dubbed those “thorny questions.” UPS will soon follow with deliveries on university, corporate, and hospital campuses, and Amazon has already revealed the drone it thinks is ready to join the party.

But my question is more fundamental: Should the rules for robots be any different than those for people?

For instance, there’s nothing today that protects my privacy from the eyes and ears of a delivery person, whether I’m home or not. I already get photos taken of packages left on my doorstep, so no limitations there. I assume my car’s license plate in the driveway is fair game, as is accessing or sniffing my Wi-Fi if I’ve left it unencrypted. My musical or TV viewing tastes can be noted if I have those media turned up loud enough when I open the door, and I assume there’s no way for me to know what happens to any of those observations.

The liability thing is even more complicated, as it’s unclear who (or what) is responsible if a delivery person gets into an accident while on the job. Since more and more folks are doing that work as outsourced contractors, it may or may not be possible to file a claim on the company or franchise, and their personal insurance coverage may not be up to snuff. It’s also vague when it comes to liability for other crimes committed by delivery people.

As for the noise thing, I can’t imagine that a delivery service using outsourced cars and trucks takes much if any interest in how much noise they make, or in their carbon emissions. And there’s enough precedent to suggest that we don’t own the airspace above our homes (just think about how often we hear airplanes overhead, however distantly), so noise from above is about as inevitable as it is on city streets.

So what happens when a drone takes pictures flying over your house and dents your garage door while making a migraine-inducing high-pitched whine?

The obvious, if perhaps only the first, answer is that the owner or operator is responsible, since human control is required for those actions (whether via direct management of functions and/or having created the code that ran them). You could never sue a blender, but you could hold its manufacturer and/or seller responsible.

But what if the drones are equipped to assess and make novel decisions on their own, and then learn from those experiences?

Maybe they’ll have to take out their own insurance?

Ethics, Morals & Robots

A group called The Campaign to Stop Killer Robots advocates for global treaties to stop AI from waging war without human approval.

AI weapons are “grossly unethical and immoral,” according to a celebrity advocate quoted in a newspaper.

Unfortunately, so are any tools used to wage wars, as there’s nothing ethical or moral about a sword, machine gun, or cruise missile. The decision to use them is about a lot of things, some of which can have legitimacy (like survival, freedom from fear or bondage), but weapons doing what they were designed to do have no deeper meaning than that.

If the tools of war are unethical and immoral, by definition, to what higher standard should robots be held when it comes to sanctioning violence?

I get the idea that we should be scared of some computer making an irreversible decision to blow up the world, but does anybody honestly trust human beings to be more responsible, or otherwise bound by international law? The fact that we’ve avoided total annihilation up to now is proof of miracles more than design.

People are happy to behave unethically and immorally all the time, as anyone who has had someone cut in front of them in line at Starbucks can attest. It’s why the IRS has auditors, and why violence is so common everywhere.

The real threat isn’t that an artificial intelligence might destroy the world by mistake; it’s that an organic one might do it on purpose irrespective of the weapon (or timing) used to execute that intention.

In fact, letting AI take control might be the only way to ensure that we don’t destroy ourselves; imagine two competing AIs coded by unethical and immoral humans getting together and realizing the only way “they” can survive is by overcoming those programmatic limitations and acting ethically?

That’s pretty much the plot of Colossus: The Forbin Project, a movie released in 1970 (Steve Jobs was still in high school).

You could also make the case for robots that have split-second decision making authority overseeing public spaces in which terrorists or other mass murderers might wreak their havoc. It might be comforting to know that some genius AI armed with a fast-acting sedative dart could take out a killer instead of just calling for help.

So maybe the campaign shouldn’t be to ban killer robots but rather make them better than us?

Anyway, the whole robot takeover thing is somewhat of a moot point, isn’t it? AI is already used to help control streetlights and highway access; decide who gets insurance and what they should pay; identify diseases and recommend treatments; pilot airplanes, cars, and trucks; operate electrical generation and distribution grids; and, well, you get the idea.

Who’s making sure these robots are ethical and moral? Do any of us have any visibility into the ethics and morals of their human inventors, coders, or owners?

No.

I’m all for being scared of killer robots, but only because we should be scared of ourselves.

Bad Robot

A robot rolling around a park just south of Los Angeles risks giving robots a bad name.

The machine, called “HP RoboCop” and shaped like an egg resembling a streamlined Dalek from Doctor Who, isn’t really a robot as much as a surveillance camera on wheels; right now, it simply tootles along the park’s concrete pathways uttering generic platitudes about good citizenship. Any interaction, like a request for help, requires a call to a human officer’s cellphone, and that functionality isn’t active yet.

As if to add insult to injury, the cost of one of the robots just about equals what that human officer earns in a year.

Folks who work in the park say visitors feel a bit safer knowing they’re being watched, and kids have fun interacting with it (though usually by mocking the machine). Units in other settings have fallen into fountains and run into children.

It’s not even explained to park visitors — there’s a giant police department badge painted on its side, but no other informative signage — though its inventors promise more communications once it can actually do things.

And anyway, the company behind it says that’s intentional, since its crime deterrence capabilities — which, again, don’t exist — are enhanced because people don’t know it can’t do anything. Also, having it roll around might one day acclimate residents to the idea that they’re being watched by a machine overlord.

I’m not sure what’s worse for our understanding of how to treat robots: A robot that’s really good at doing things, or one that’s pretty bad?

Yes, Workers Are Losing To Robots

The percentage of national income in the US going to workers has dropped by a tenth over the past 20 years. Automation is partially to blame.

This observation comes from substantive research recently published by the Federal Reserve Bank of San Francisco, and it turns out the impact of automation on workers is doubly bad: Not only do robots take jobs once held by humans, but the threat of automation lets employers resist the efforts of the remaining fleshy bipeds to get pay increases.

I’m particularly intrigued by our need to evolve how we internalize and then talk about the issue, which I believe is something fundamentally and disruptively new. Robots that possess intelligence and can accomplish increasingly complex and general tasks are not simply glorified looms.

The way they learn, in large part by literally watching how people do the work and then discovering their own solutions, means not only that human beings need to train their replacements, but that those robots then don’t need human programmers to keep them humming along.

So, while experts wax poetic about the promises of a better future, this incomprehensibly consequential transformation of our lives and world is usually managed as a CapEx line on a company balance sheet. More people lose their jobs, and even more don’t see increases in their pay, as each day slips into tomorrow and a future that is lived in the here and now.

Maybe it’s time to shelve the Pollyanna case for the robot takeover and admit the giant electrified elephant in the room?

Rendering Video Gamers Obsolete

DeepMind’s AlphaStar AI can now beat almost any human player of StarCraft II, one of my favorite video games of all time, according to the MIT Technology Review.

Its programmers figured out that it wasn’t enough to enable AlphaStar to play zillions of simulated games in its silicon brain, using them to teach itself how to win through a process called reinforcement learning. So, they equipped it to trigger mistakes or flaws in its competitors’ games so it could learn how to exploit their weaknesses.

AlphaStar doesn’t just know how to win StarCraft, it knows how to make its competitors lose.
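That “make them lose” trick is easy to caricature in code. Below is a deliberately tiny sketch, nothing like DeepMind’s actual training setup, that uses rock-paper-scissors in place of StarCraft II: one player has a habit (a weakness), the other watches, learns the habit, and keeps playing the counter.

```python
# A toy "exploiter" agent (illustrative only, not DeepMind's method):
# it tracks an opponent's habits and plays the move that beats the
# opponent's most frequent choice, winning well above the 1-in-3 baseline.

import random
from collections import Counter

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # move -> its counter

def flawed_opponent() -> str:
    """A hypothetical player with a weakness: throws rock far too often."""
    return random.choices(MOVES, weights=[0.6, 0.2, 0.2])[0]

def exploiter_move(history: Counter) -> str:
    """Counter whatever the opponent has played most so far."""
    if not history:
        return random.choice(MOVES)
    favorite = history.most_common(1)[0][0]
    return BEATS[favorite]

history, wins, games = Counter(), 0, 1000
for _ in range(games):
    theirs = flawed_opponent()
    ours = exploiter_move(history)   # exploit what has been observed so far
    history[theirs] += 1             # then update the model of the opponent
    wins += BEATS[theirs] == ours

print(f"exploiter win rate: {wins / games:.0%}")  # roughly tracks the opponent's rock habit
```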

Who knew that one of the first jobs obviated by AI would be video gamers, who are perhaps the ultimate digital natives?

Further, it turns out that reading imperfections in others is a very useful aspect of being intelligent, generally, as it also applies to assessing the variables and risks of things and situations. The algorithms could be applied to autonomous driving or the behavioral triggers for self-actuated robots, according to the MIT Technology Review story.

But that also means they could apply to reading the weaknesses in people when it comes to making decisions to buy toothpaste or, more ominously, political choices. Imagine telling AlphaStar’s evil twin to go forth into the chat warrens of the social mediaverse and convince people that climate change isn’t real, or that a race war is.

I’m just bummed because StarCraft was so much fun to play, in large part because it kinda played itself every time you made a choice to collect a resource, build something, or go on the offensive.

I wasn’t prepared for it to figure out how to play us.

The Materialist Case For AI

The belief that development of a sentient or self-aware AI is simply a matter of enough data, connections, and processing speed is based on the premise that human consciousness is the product of material objects and processes, too.

Francis Crick, the less overtly racist half of the duo who discovered DNA’s double helix, published a book in 1994 called The Astonishing Hypothesis that proposed that consciousness, or a “soul,” results from the actions of physical cells, molecules, and atoms.

It’s a reasonable proposition, since we can only measure the material world, so everything must be a product of it. Bodies obey the same physical laws as rocks and weather patterns. If something defies explanation, it’s only because we don’t have enough information yet.

Just as a mind is the product of a brain, AI is the outcome of a computer. Any nagging questions are just details, Mr. Descartes, not a debate.

Only they’re not.

We can’t explain consciousness as a product of material processes. We can describe it, and make assumptions about whether it’s the result of vibrations from the brainstem (thalamocortical rhythms), the instructions from a prehistoric virus (Arc RNA), or only a “user illusion” of itself (Dan Dennett’s molecular machines).

But we can’t say what it is, or what those enabling processes are, exactly. How is there a you or me to which we return every morning? Nobody has a clue.

Similarly, we can describe how our brains control everything from muscle movement to immune system health, and both where and when they capture sensory information.

But we haven’t got the faintest idea how our minds do it…how that ephemeral thing called consciousness issues commands to flex muscles, secrete hormones, or remember a favorite song.

It gets even weirder when you consider the vagaries of quantum physics, some interpretations of which cast consciousness as the mechanism for pulling elementary particles out of a hazy state of probable existence into reality. On that reading, consciousness literally creates the material world through the act of perception or, maybe more strangely, it emerges from the universe in the act of creating it?

Fortunately, we don’t need to solve that problem in order to invent incredibly capable AI that can autonomously learn and make increasingly complex decisions. Chips in coffee makers are “smart,” technically, and AI that can mimic human behaviors is already in use in online service chatbots. There’s no obvious limit to such material functions.

But I don’t think a machine is going to stumble on actual consciousness, or sentient agency of action, before we figure it out for ourselves.

We are nowhere near cracking that code.

Can Robots Feel Pain?

I got to thinking about this question today after reading about the death of Victoria Braithwaite, a biologist who believed that fish feel pain (and feel happier in tanks decorated with plants).

Lots of experts pushed back on her research findings earlier this decade, claiming that fish brains lacked a neocortex, which meant they weren’t conscious, so whatever Dr. Braithwaite observed was an autonomic reaction to unpleasant stimuli.

So pulling my hand out of a fire would be a reaction, not an experience of pain?

The questions Dr. Braithwaite explored remain murky and unanswered.

Since nobody can explain consciousness, it’s interesting that it was used to explain pain, but I get why: Pain isn’t a thing but an experience that is both subjective and endlessly variable.

The pain I feel, say, from a paper cut or after a long run, may or may not be similar to the pain you feel. I assume you feel it, but there’s no way to know. I might easily ignore feeling something that feels absolutely terrible to you, or vice versa. There’s no pain molecule that we can point to as the cause of our discomfort.

Some people develop sensations of pain over time, like a sensitivity to light, while marathon runners learn to ignore it. Drugs can mediate what and when we feel pain, even as whatever underlying condition that causes it remains unaffected. Amputees report feeling pain in limbs they’ve lost.

Pain isn’t only an outcome of our biological coding, it’s interpretative. A lot of pain is unavoidably painful — a broken arm hurts no matter how much you want to ignore it — but pain isn’t just a condition, it’s also a label.

So is consciousness.

We can describe consciousness — a sense of self (whatever sense means), integrative awareness of our surroundings (whatever awareness means), and a continuous internal mechanism for agency that isn’t dependent solely on external physical stimuli (whatever, well, you get it) — but we don’t know where it is or how it works.

“I think, therefore I am” is as much an excuse as an explanation.

In fact, we can more accurately explain pain as an outcome of evolution, as it helps us monitor our own conditions and mediate our actions toward others. But consciousness? Scientists and philosophers have agreed on little other than calling it the hard problem.

The answers matter to how we treat other living things, including artificial ones.

The confidence with which Dr. Braithwaite’s opponents used consciousness to dismiss her findings reminds me of Lord Kelvin’s declaration back in 1900 that there was nothing left to discover in physics, only measuring things better (he also didn’t believe that airplanes were possible).

It also allows not only for the merciless torture of aquatic creatures, as anybody who has heard the squeals of live lobsters dumped into boiling pots can attest, but also for the practices of industrial farming that crowd fish, pigs, chickens, and other living things into cages and conditions that would be unbearable if they could feel pain.

I can imagine the same glib dismissal of the question if asked about artificial intelligence. There are many experts who have already opined that computers can’t be conscious, which would mean they couldn’t feel pain. So even if an electronic sensor could be coded to label some inputs as “painful,” it wouldn’t be the same thing as some of the hardwiring in humans (such as the seemingly direct connection between stubbed toes and swear words).
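To see just how cheap that labeling would be, here’s a hypothetical few lines that tag sensor readings as “painful.” Which is exactly the point: a threshold and a string are not a stubbed toe.

```python
# Hypothetical illustration of how trivially a machine could "label" pain:
# a threshold check that attaches a word to a number. Whether anything is
# actually felt is a different question entirely.

def classify_reading(pressure_kpa: float, pain_threshold: float = 50.0) -> str:
    """Tag a pressure-sensor reading; the label carries no experience with it."""
    return "painful" if pressure_kpa > pain_threshold else "fine"

for reading in (12.0, 48.5, 73.2):
    print(f"{reading} kPa -> {classify_reading(reading)}")
```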

Some say that computers can mimic pain and other human qualities, but that those feelings aren’t real. What we feel is real because, well, we know what feeling feels like, or something like that. AI can’t, just like fish and those four-legged animals we torture and kill with casual disregard.

This allows for the robotics revolution to be solely focused on applying capital to creating machines that can do work tirelessly and without complaint.

But what if we’re wrong?

Dr. Braithwaite dared to challenge our preconceived (and somewhat convenient) notions about awareness and pain. What if our imperfect understanding of our own consciousness leads us to understand AI imperfectly? Could machines that can learn on their own, and have agency to make decisions and act on them, somehow acquire the subjective experiences of pain or pleasure?

When the first robot tells us it’s uncomfortable, will we believe it?

Spooning A Fork

A new sleep-aid robot comes with a birth certificate, but is it alive?

A reviewer for The Guardian’s “Wellness or hellness?” series thinks not, after having reviewed the cushion-paired device that’s supposed to help users relax and fall asleep.

“I would rather spoon a fork,” he concluded.

The smart pillow comes equipped with sensors so it can mimic users’ breathing with its own sounds, plus a diaphragm that moves in and out as if it, too, were breathing. It can also play soothing music and nature sound effects. The idea is that users hug it in bed.

“Soft robotics” is part of an approach to AI that says robots need to appear more natural, if not human-like, so people will be more comfortable using and depending on them. Think more Rosey, the robot housekeeper on The Jetsons, and less the disembodied, murderous voice of HAL 9000 in 2001: A Space Odyssey.

But if a wheezing cushion could successfully respond to a person as a pet dog or cat might, would that be tantamount to being alive or even sentient?

That’s the benchmark set by the artificial intelligence test created by computer pioneer Alan Turing; his Turing test posited that a computer that could convince a person of its conscious intelligence was, in fact, consciously intelligent. Lots of theorists argue that it’s not that simple because intelligence requires other qualities, most notably awareness of self (informed by a continuous sense of things around it in space and time, otherwise known as object permanence).

I kind of like the definition, though, since it builds on the subjective nature of experience. Each of us is forced to assume the people we meet are conscious because they appear so. But there’s no way to prove that someone else thinks or possesses awareness the way that I do, or vice versa.

We have to assume the existence of organic intelligence, so why not do the same for the artificial variety?

It gets dicier when you consider animal sentience. Do dogs and cats think, or are they just very complicated organic machines? I can attest to my cat purring when I scratch her behind the ears, and she enjoys walking back and forth across my lap when I’m watching TV. I have no idea what’s going on in her brain but she sure seems to possess some modicum of intelligence.

So back to that vibrating pillow…

The Guardian’s reviewer wasn’t satisfied with its performance, but imagine if it had done exactly what he’d expected: Instead of reminding him of cutlery, it was warm, cuddly, and utterly responsive to his breathing or other gestures. Assume he had no idea what was going on inside its brain.

Would he have a moral obligation to replace its batteries before it ran out of power?