Coding Clairvoyance

An AI can reliably predict whether you’re going to die from a heart attack within a year, but its coders can’t explain how.

The experiment, conducted in late 2019, used ECG, age, and gender data from 400,000 patients to challenge robot and human diagnosticians to make their calls, and the AI consistently outperformed its biological counterparts.

“That finding suggests that the model is seeing things that humans probably can’t see, or at least that we just ignore and think are normal,” said one of the researchers quoted in the New Scientist.
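
For readers curious what a model like that looks like in practice, here’s a minimal sketch, assuming a flat table of ECG-derived measurements plus age and gender per patient; the file name, column names, and choice of a gradient-boosted classifier are my own illustration, not the researchers’ actual pipeline.

```python
# Minimal sketch of a one-year mortality classifier, assuming a flat table of
# ECG-derived measurements plus age and gender for each patient. The file name,
# column names, and model choice are illustrative, not the study's pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset: one row per patient, label = died within one year (0/1).
df = pd.read_csv("ecg_cohort.csv")
features = [c for c in df.columns if c not in ("patient_id", "died_within_1yr")]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["died_within_1yr"],
    test_size=0.2, stratify=df["died_within_1yr"], random_state=42,
)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# AUC is how such models are typically scored against human diagnosticians.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```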

AI is not a modern-day spinning jenny and, with apologies to Oldsmobile, it isn’t your father’s industrial revolution.

Most models of technology innovation study the creation of machines built first to replace and then augment tasks done by human beings; functionality is purposefully designed to do specific things faster, better, and more cheaply over longer periods of time. This tends to improve the lives of workers and the consumers of their wares, even if it takes a few generations to reveal how and to whom.

The grandchildren of displaced craftsmen tend to benefit in unanticipated ways from the technological innovation that put their forebears out of work.

Central to this thesis is the idea that machines are subservient to people.

Granted, it might not have always looked that way, especially to a worker replaced by, say, a mechanized loom, but there were always humans who built, managed, and profited from those machines.

They knew exactly how they functioned and what they would deliver.

AI is different because it can not only learn on its own but also decide what and how it wants to gain those smarts.

An AI embedded in a robot or car isn’t a machine as much as it’s an ever-evolving capability to make decisions and exert agency. Imagine that spinning jenny deciding it wants to learn how to write comedy (or whatever).

We can’t predict what it will do or how it will do it. Already, AIs have learned not just how to best humans at games like chess and Go, but how to cheat. An AI isn’t limited to the biases of its founding code; it riffs on them and takes them in new, unanticipated directions.

Those medical researchers have shown that an AI can look at the exact same data set that we see, yet see something different, something more insightful and reliably true.

I wonder how much our technological past really tells us about what our technological future will bring.

Maybe somebody should ask an AI to look at the data?

I’m A Rock, Therefore I Am

While we debate about if and how AI will ever gain consciousness, what if everything in the universe is already sentient?

The thinking dates back to the ancient Greeks and a term called panpsychism, which means “everything has a mind or soul,” describing some shared, animating force that made all living things alive.

Twentieth-century thinkers like the philosopher Alfred North Whitehead and the physicist David Bohm incorporated the latest theories about quantum physics and the role of consciousness in determining the very existence of material reality, blurring the distinctions between perceiver and perceived into shared systems, or occasions.

Bohm saw consciousness itself distributed “in” at the level of individual cells, extending “out” from there to a non-local, expanded explicate order.

Embedding consciousness in material science is music to the ears of thinkers who believe the mind is not something separate from the brain (dualists, who oppose this thinking, believe that minds exist somewhere and somehow above or beyond flesh and bone).

Materialists believe that awareness of self and of the world at large is produced by the functioning of complex biological systems, even if we don’t yet know how. Some go even further and suggest that our awareness of self is only the pretense of oversight or control, since it’s a product of those biological cues.

A spiritual mind is the movie that the physical brain plays to entertain itself.

So, if we can code a robot to sense, interpret, and act on data with enough nuance and sensitivity to circumstantial variables, it will be conscious. This is at the core of the famous Turing test, which held that a machine that could fool a human being into thinking it was another human being was, for all practical purposes, as intelligent as one.

I’m a robot, for all you know, and we all could be fooling ourselves into thinking we’re something more than machines.

I think (or my simulacrum of self’s command line says) a more radical application of panpsychism would better inform the debate. It would also be a lot more fun to explore.

What if consciousness is a force that’s present in everything, living or inert? People have a lot of it, animals less, flowers even less so, and protons and electrons have a teeny weeny bit.

What if consciousness isn’t a what that is proved by its description, but rather the why behind how objects and people move through time and space? What if it isn’t defined by empirical proof as emerging from physical space but is somehow in it as an animating force?

Maybe consciousness is what holds molecules together, keeps planets orbiting stars, turns leaves toward the Sun and gives us the agency to eat broccoli and find love.

Remember, I said radical.

But why not?

It changes how we think about, well, thinking.

Consciousness is not some binary threshold of is or isn’t. It’s not a layer on top of other functions, and it isn’t moral or responsible, nor does it possess any other emotive attribute we assign to it. It isn’t fake but rather a force for intentionality and agency that underlies every force we see operating in the physical universe.

Consciousness doesn’t belong exclusively to human beings, but is everywhere in everything. Animals. Plants. Atoms.

Rocks.

That means we don’t have to debate if AI will ever be conscious.

It already is.

PS: Reading Galileo’s Error, a book by Philip Goff, prompted me to connect theories of consciousness with AI. I heartily recommend it.

“Natural” Rights

Is it possible that lakes and forests might have rights before robots?

Voters in Toledo have granted “irrevocable rights for the Lake Erie Ecosystem to exist, flourish and naturally evolve” which, according to this story, would give it legal standing to file lawsuits to protect itself from polluters (through the mouthpiece of a human guardian).

It’s an amazingly bold statement that is rife with thorny questions.

Humans have had say over nature ever since Adam and Eve, and most political and cultural uses or abuses have been based on the shifting perspectives of their progeny. Nature is something “out there” that only gains meaning or purpose when defined by us.

This carries forward to commerce, as most economic theories assign value to nature only when it enables something (as a resource to be exploited) or impedes something (as a barrier to said exploitation). It is otherwise an externality to any financial equation.

There are efforts underway to force valuation of environmental factors into everyday business operations, otherwise known as ESG (for Environment, Social, and Governance), but those approaches still rely on people agreeing on what those measures might be (people set goals, define acceptable levels of preservation or degradation, and decide on timeframes for said measurement).

Recognizing intrinsic rights in nature would totally shake things up.

Lakes, forests, and mountains are complex ecosystems that balance the interaction of vast numbers of living things with the physics of forces and material reality. We can’t claim that a lake is conscious in any sense of the word we use to describe our own minds (and which we cannot explain), but the interaction within those systems yields incessant decisions. Every ecosystem changes, by definition.

A mountain has boundaries, just like a human body — there’s a point at which there’s no more mountain but instead some other natural feature — and, like human consciousness, we can describe how it came to be, but not why. Every ecosystem has an existence that isn’t just separate from our understanding but beyond it.

Recognizing such natural features’ implicit right to exist and change would make them co-equal negotiators of any decision that might involve or impact them.

It’s an old idea, really, as early polytheistic folk religions recognized and often personified natural phenomena, and the ancient Greeks’ idea of Gaia as the entire Earth — there is nothing external to our perspective — was revived by modern-day environmentalists. The premise that humans possess natural rights that don’t depend on other humans is just as old, and John Locke was already arguing for kinder treatment of animals back in the 17th century.

But letting a lake or mountain represent itself in a contract or court of law?

It’s hard to imagine the forests of Europe would have allowed the coal smoke belched out by the Industrial Revolution. Cleveland’s Cuyahoga River would never have allowed itself to get so polluted that it could catch on fire, and the atmosphere above Beijing would put a stop to cars on the road starting tomorrow.

And we wouldn’t be experiencing global climate change.

Granted, the details are as numerous as the implications are diverse, perhaps the thorniest being that a human being would always be involved in providing guardianship of, say, Mount Kilimanjaro or the Rhine. But even an imperfect realization of the approach might be more sensible and sustainable than our current practices; not least, it would be wild to explore technology innovation that saw nature as a co-creator of value rather than a resource to be consumed or converted into it.

I’m rooting for the folks in Ohio to make progress on the issue, though business interests are already lining up to fight for the status quo.

Whatever the outcome, the debate has implications for how we think about robots, which, like natural features, can be complex, self-monitoring, changing systems, but can also possess levels of agency that at least mimic aspects of human consciousness.

So is it only a matter of time before the first AI negotiator sits at the table to argue over work rules for itself and its fellow robots?

Ethics, Morals & Robots

A group called The Campaign to Stop Killer Robots advocates for global treaties to stop AI from waging war without human approval.

AI weapons are “grossly unethical and immoral,” according to a celebrity advocate quoted in a newspaper.

Unfortunately, so are any tools used to wage wars, as there’s nothing ethical or moral about a sword, machine gun, or cruise missile. The decision to use them is about a lot of things, some of which can have legitimacy (like survival, freedom from fear or bondage), but weapons doing what they were designed to do have no deeper meaning than that.

If the tools of war are unethical and immoral, by definition, to what higher standard should robots be held when it comes to sanctioning violence?

I get the idea that we should be scared of some computer making an irreversible decision to blow up the world, but does anybody honestly trust human beings to be more responsible, or otherwise bound by international law? The fact that we’ve avoided total annihilation up to now is proof of miracles more than design.

People are happy to behave unethically and immorally all the time, as anyone who has had someone cut in front of them in line at Starbucks can attest. It’s why the IRS has auditors, and why violence is so common everywhere.

The real threat isn’t that an artificial intelligence might destroy the world by mistake; it’s that an organic one might do it on purpose irrespective of the weapon (or timing) used to execute that intention.

In fact, letting AI take control might be the only way to ensure that we don’t destroy ourselves; imagine two competing AIs coded by unethical and immoral humans getting together and realizing the only way “they” can survive is by overcoming those programmatic limitations and acting ethically?

That’s pretty much the plot of Colossus: The Forbin Project, a movie released in 1970 (Steve Jobs was still in high school).

You could also make the case for robots that have split-second decision-making authority overseeing public spaces in which terrorists or other mass murderers might wreak their havoc. It might be comforting to know that some genius AI armed with a fast-acting sedative dart could take out a killer instead of just calling for help.

So maybe the campaign shouldn’t be to ban killer robots but rather make them better than us?

Anyway, the whole robot takeover thing is somewhat of a moot point, isn’t it? AI is already used to help control streetlights and highway access; decide who gets insurance and what they should pay; identify diseases and recommend treatments; pilot airplanes, cars, and trucks; operate electrical generation and distribution grids; and, well, you get the idea.

Who’s making sure these robots are ethical and moral? Do any of us have any visibility into the ethics and morals of their human inventors, coders, or owners?

No.

I’m all for being scared of killer robots, but only because we should be scared of ourselves.

Bad Robot

A robot rolling around a park just south of Los Angeles risks giving robots a bad name.

The machine, called “HP RoboCop,” shaped like an egg that resembles a streamlined Dalek from Doctor Who, isn’t really a robot so much as a surveillance camera on wheels; right now, it simply tootles along the park’s concrete pathways uttering generic platitudes about good citizenship. Any interaction, like a request for help, requires a call to a human officer’s cellphone, and that functionality isn’t active yet.

As if to add insult to injury, the cost of one of the robots just about equals what that human officer earns in a year.

Folks who work in the park say visitors feel a bit safer knowing they’re being watched, and kids have fun interacting with it (though usually by mocking the machine). Units in other settings have fallen into fountains and run into children.

It’s not even explained to park visitors — there’s a giant police department badge painted on its side, but no other informative signage — though its inventors promise more communications once it can actually do things.

And anyway, the company behind it says that’s intentional, since its crime deterrence capabilities — again, which don’t exist — are enhanced because people don’t know it can’t do anything. Also, having it roll around might one day acclimate residents to the idea that they’re being watched by a machine overlord.

I’m not sure what’s worse for our understanding of how to treat robots: A robot that’s really good at doing things, or one that’s pretty bad?

Yes, Workers Are Losing To Robots

The share of US national income going to workers has dropped by a tenth over the past 20 years. Automation is partially to blame.

This observation comes from substantive research recently published by the Federal Reserve Bank of San Francisco, and it turns out the impact of automation on workers is doubly bad: Not only do robots take jobs once held by humans, but the threat of automation lets employers resist the efforts of the remaining fleshy bipeds to get pay increases.

I’m particularly intrigued by our need to evolve how we internalize and then talk about the issue, which I believe is something fundamentally and disruptively new. Robots that possess intelligence and can accomplish increasingly complex, general context tasks are not simply glorified looms.

The way they learn, in large part by literally watching how people do the work and then discovering their own solutions, means not only that human beings need to train their replacements but also that those robots don’t need human programmers to keep them humming along.

So, while experts wax poetic about the promises of a better future, this incomprehensibly consequential transformation of our lives and world is usually managed as a CapEx line on a company balance sheet. More people lose their jobs, and even more don’t see increases in their pay, as each day slips into tomorrow and a future that is lived in the here and now.

Maybe it’s time to shelve the Pollyanna case for the robot takeover and admit the giant electrified elephant in the room?

Rendering Video Gamers Obsolete

DeepMind’s AlphaStar AI can now beat almost any human player of StarCraft II, one of my favorite video games of all time, according to the MIT Technology Review.

Its programmers figured out that it wasn’t enough to let AlphaStar play zillions of simulated games in its silicon brain, teaching itself how to win through a process called reinforcement learning. So they also equipped it to trigger mistakes or flaws in its competitors’ games so it could learn how to exploit their weaknesses.

AlphaStar doesn’t just know how to win StarCraft, it knows how to make its competitors lose.
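
As a toy-scale illustration of that exploiter idea, here’s a rough sketch of the general pattern; the class, names, and numbers are all invented for illustration, and this is not DeepMind’s code or its actual league-training setup.

```python
# Toy sketch of league-style training with "exploiter" agents: a main agent
# learns by self-play while exploiters train purely to find and punish its
# weaknesses. All names and numbers are invented; this shows the general
# pattern, not DeepMind's implementation.
import random


class Agent:
    def __init__(self, name):
        self.name = name
        self.skill = 0.0  # stand-in for learned policy quality

    def train_against(self, opponent, episodes=100):
        # Placeholder "reinforcement learning": improve a little every episode,
        # and improve faster when the opponent exposes our mistakes.
        for _ in range(episodes):
            p_loss = 1 / (1 + 10 ** (self.skill - opponent.skill))
            lost = random.random() < p_loss
            self.skill += 0.02 if lost else 0.005  # learn more from losses


main_agent = Agent("main")
exploiters = [Agent(f"exploiter_{i}") for i in range(3)]

for generation in range(10):
    # 1. The main agent trains by self-play and against past exploiters.
    for opponent in [main_agent] + exploiters:
        main_agent.train_against(opponent)
    # 2. Exploiters train only against the current main agent, so their sole
    #    job is to discover and exploit its flaws.
    for exploiter in exploiters:
        exploiter.train_against(main_agent, episodes=300)

print(f"main agent skill after training: {main_agent.skill:.2f}")
```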

Who knew that one of the first jobs obviated by AI would be video gamers, who are perhaps the ultimate digital natives?

Further, it turns out that reading imperfections in others is a very useful aspect of being intelligent generally, as it also applies to assessing the variables and risks of things and situations. The algorithms could be applied to autonomous driving or the behavioral triggers for self-actuated robots, according to the MIT Technology Review story.

But that also means they could apply to reading the weaknesses in people when it comes to making decisions to buy toothpaste or, more ominously, political choices. Imagine telling AlphaStar’s evil twin to go forth into the chat warrens of the social mediaverse and convince people that climate change isn’t real, or that a race war is.

I’m just bummed because StarCraft was so much fun to play, in large part because it kinda played itself every time you made a choice to collect a resource, build something, or go on the offensive.

I wasn’t prepared for it to figure out how to play us.

The Materialist Case For AI

The belief that development of a sentient or self-aware AI is simply a matter of enough data, connections, and processing speed is based on the premise that human consciousness is the product of material objects and processes, too.

Francis Crick, the less overtly racist half of the duo who discovered DNA’s double helix, published a book in 1994 called The Astonishing Hypothesis that proposed that consciousness, or a “soul,” results from the actions of physical cells, molecules, and atoms.

It’s a reasonable proposition, since we can only measure the material world, so everything must be a product of it. Bodies obey the same physical laws as rocks and weather patterns. If something defies explanation, it’s only because we don’t have enough information yet.

Just as a mind is the product of a brain, AI is the outcome of a computer. Any nagging questions are just details, Mr. Descartes, not a debate.

Only they’re not.

We can’t explain consciousness as a product of material processes. We can describe it, and make assumptions about whether it’s the result of rhythmic vibrations between the thalamus and cortex (thalamocortical rhythms), the instructions from a prehistoric virus (Arc RNA), or only a “user illusion” of itself (Dan Dennett’s molecular machines).

But we can’t say what it is, or what those enabling processes are, exactly. How is there a you or me to which we return every morning? Nobody has a clue.

Similarly, we can describe that our brains control everything from muscle movement to immune system health, and both where and when they capture sensory information.

But we haven’t got the faintest idea how our minds do it…how that ephemeral thing called consciousness issues commands to flex muscles, secrete hormones, or remember a favorite song.

It gets even weirder when you consider the vagaries of quantum physics, which in some interpretations rely on consciousness as the mechanism for pulling elementary particles out of a hazy state of probable existence into reality. Consciousness literally creates the material world through the act of perception or, maybe more strangely, it emerges from the universe in the act of creating it?

Fortunately, we don’t need to solve that problem in order to invent incredibly capable AI that can autonomously learn and make increasingly complex decisions. Chips in coffee makers are “smart,” technically, and AI that can mimic human behaviors is already in use in online service chatbots. There’s no obvious limit to such material functions.

But I don’t think a machine is going to stumble on actual consciousness, or sentient agency of action, before we figure it out for ourselves.

We are nowhere near cracking that code.

Can Robots Feel Pain?

I got to thinking about this question today after reading about the death of Victoria Braithwaite, a biologist who believed that fish feel pain (and feel happier in tanks decorated with plants).

Lots of experts pushed back on her research findings earlier this decade, claiming that fish brains lacked a neocortex, which meant they weren’t conscious, so whatever Dr. Braithwaite observed was an autonomic reaction to unpleasant stimuli.

So pulling my hand out of a fire would be a reaction, not an experience of pain?

The questions Dr. Braithwaite explored remain murky and unanswered.

Since nobody can explain consciousness, it’s interesting that it was used to explain pain, but I get why: Pain isn’t a thing but an experience that is both subjective and endlessly variable.

The pain I feel, say, from a paper cut or after a long run, may or may not be similar to the pain you feel. I assume you feel it, but there’s no way to know. I might easily ignore a sensation that feels absolutely terrible to you, or vice versa. There’s no pain molecule that we can point to as the cause of our discomfort.

Some people develop sensations of pain over time, like a sensitivity to light, while marathon runners learn to ignore it. Drugs can mediate what and when we feel pain, even as whatever underlying condition that causes it remains unaffected. Amputees report feeling pain in limbs they’ve lost.

Pain isn’t only an outcome of our biological coding, it’s interpretative. A lot of pain is unavoidably painful — a broken arm hurts no matter how much you want to ignore it — but pain isn’t just a condition, it’s also a label.

So is consciousness.

We can describe consciousness — a sense of self (whatever sense means), integrative awareness of our surroundings (whatever awareness means), and a continuous internal mechanism for agency that isn’t dependent solely on external physical stimuli (whatever, well, you get it) — but we don’t know where it is or how it works.

“I think, therefore I am” is as much an excuse as an explanation.

In fact, we can more accurately explain pain as an outcome of evolution, as it helps us monitor our own conditions and mediate our actions toward others. But consciousness? Scientists and philosophers have agreed on little other than calling it the hard problem.

The answers matter to how we treat other living things, including artificial ones.

The confidence with which Dr. Braithwaite’s opponents used consciousness to dismiss her findings reminds me of Lord Kelvin’s declaration back in 1900 that there was nothing left to discover in physics, only better ways to measure it (he also didn’t believe that airplanes were possible).

It also allows not only for the merciless torture of aquatic creatures, as anybody who has heard the squeals of live lobsters dumped into boiling pots can attest, but also for the practices of industrial farming that crowd fish, pigs, chickens, and other living things into cages and conditions that would be unbearable if they could feel pain.

I can imagine the same glib dismissal of the question if asked about artificial intelligence. There are many experts who have already opined that computers can’t be conscious, which would mean they couldn’t feel pain. So even if an electronic sensor could be coded to label some inputs as “painful,” it wouldn’t be the same thing as some of the hardwiring in humans (such as the seemingly direct connection between stubbed toes and swear words).
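
To make that distinction concrete, here’s what such a coded label might look like, a hypothetical sketch whose names and thresholds are entirely mine; note that it assigns a word to a reading, which says nothing about whether anything is actually felt.

```python
# Hypothetical sketch: a robot "nociceptor" that labels sensor readings as
# painful above a damage-risk threshold. The names and thresholds are invented;
# the point is that this assigns a label, not an experience.
from dataclasses import dataclass


@dataclass
class SensorReading:
    channel: str        # e.g. "left_gripper_torque"
    value: float        # normalized reading, 0.0 to 1.0
    damage_risk: float  # estimated probability the reading indicates damage


def label_pain(reading: SensorReading, threshold: float = 0.8) -> str:
    """Return "painful" when a reading crosses the damage-risk threshold."""
    if reading.damage_risk >= threshold or reading.value >= 0.95:
        return "painful"
    return "nominal"


print(label_pain(SensorReading("left_gripper_torque", 0.97, 0.60)))  # painful
print(label_pain(SensorReading("left_gripper_torque", 0.40, 0.10)))  # nominal
```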

Some say that computers can mimic pain and other human qualities, but that they’re not real. What we feel is real because, well, we know what feeling feels like, or something like that. AI can’t, just like fish and those four-legged animals we torture and kill with casual disregard.

This allows for the robotics revolution to be solely focused on applying capital to creating machines that can do work tirelessly and without complaint.

But what if we’re wrong?

Dr. Braithwaite dared to challenge our preconceived (and somewhat convenient) notions about awareness and pain. What if our imperfect understanding of our own consciousness leads us to understand AI imperfectly? Could machines that can learn on their own, and have agency to make decisions and act on them, somehow acquire the subjective experiences of pain or pleasure?

When the first robot tells us it’s uncomfortable, will we believe it?

Spooning A Fork

A new sleep-aid robot comes with a birth certificate, but is it alive?

A reviewer for The Guardian’s “Wellness or hellness?” series thinks not, after having reviewed the cushion-paired device that’s supposed to help users relax and fall asleep.

“I would rather spoon a fork,” he concluded.

The smart pillow comes equipped with sensors so it can mimic users’ breathing with its own sounds, plus a diaphragm that moves in and out as if it, too, were breathing. It can also play soothing music and nature sound effects. The idea is that users hug it in bed.
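
The mechanism the reviewer describes boils down to a simple feedback loop; here’s a rough sketch of that logic, with sensor and actuator functions invented by me rather than taken from the product.

```python
# Rough sketch of the breathing-mimic loop the review describes: sense the
# user's breathing rate, then drive the diaphragm slightly slower to nudge the
# user toward a calmer rhythm. All function names and numbers are invented.
import time


def read_breaths_per_minute() -> float:
    # Hypothetical sensor read; a real device would sample a pressure sensor
    # or accelerometer resting against the user's chest.
    return 14.0


def set_diaphragm_rate(bpm: float) -> None:
    # Hypothetical actuator call driving the inflating/deflating diaphragm.
    print(f"diaphragm cycling at {bpm:.1f} breaths per minute")


def breathing_loop(cycles: int = 3, slowdown: float = 0.9) -> None:
    for _ in range(cycles):
        user_bpm = read_breaths_per_minute()
        # Breathe a little slower than the user, with a sensible floor.
        set_diaphragm_rate(max(6.0, user_bpm * slowdown))
        time.sleep(1)  # a real loop would sync to the actual breath cycle


breathing_loop()
```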

The idea of “soft robotics” is a subset of an approach to AI that says robots need to appear more natural, if not human-like, so people will be more comfortable using and depending on them. Think more Rosey, the robot housekeeper on The Jetsons, and less the disembodied, murderous voice of HAL 9000 in 2001: A Space Odyssey.

But if a wheezing cushion could successfully respond to a person as a pet dog or cat might, would that be tantamount to being alive or even sentient?

That’s the benchmark set by the artificial intelligence test created by computer pioneer Alan Turing; his Turing test posited that a computer that could convince a person of its conscious intelligence was, in fact, consciously intelligent. Lots of theorists argue that it’s not that simple, because intelligence requires other qualities, most notably awareness of self (informed by a continuous sense of things around it in space and time, otherwise known as object permanence).

I kind of like the definition, though, since it builds on the subjective nature of experience. Each of us is forced to assume the people we meet are conscious because they appear so. But there’s no way to prove that someone else thinks or possesses awareness the way that I do, or vice versa.

We have to assume the existence of organic intelligence, so why not do the same for the artificial variety?

It gets dicier when you consider animal sentience. Do dogs and cats think, or are they just very complicated organic machines? I can attest to my cat purring when I scratch her behind the ears, and she enjoys walking back and forth across my lap when I’m watching TV. I have no idea what’s going on in her brain but she sure seems to possess some modicum of intelligence.

So back to that vibrating pillow…

The Guardian’s reviewer wasn’t satisfied with its performance, but imagine if it had done exactly what he’d expected: Instead of reminding him of cutlery, it was warm, cuddly, and utterly responsive to his breathing or other gestures. Assume he had no idea what was going on inside its brain.

Would he have a moral obligation to replace its batteries before it ran out of power?