Coding Clairvoyance

An AI can reliably predict whether you’re going to die within a year from a heart attack, but its coders can’t explain how.

The experiment in late 2019 used ECG, age, and gender data from 400,000 patients to challenge robot and human diagnosticians to make their calls, and the AI consistently outperformed its biological counterparts.

“That finding suggests that the model is seeing things that humans probably can’t see, or at least that we just ignore and think are normal,” said one of the researchers quoted in the New Scientist.
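
For the curious, here’s roughly what that kind of model looks like as code: a binary classifier trained on ECG-derived measurements plus age and gender to predict one-year mortality. This is a minimal sketch with a hypothetical data file and a generic off-the-shelf classifier, not the deep network the researchers actually built.

```python
# Minimal sketch: predict one-year mortality from ECG-derived features.
# "ecg_cohort.csv" and its column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("ecg_cohort.csv")
X = df.drop(columns=["died_within_1yr"])   # ECG measurements, age, gender
y = df["died_within_1yr"]                  # 1 = died within a year, 0 = survived

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)

# AUC is the usual yardstick for this kind of risk model; nothing here tells
# you *why* any individual patient scored high.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The opacity the researchers describe lives in that trained model: you can measure how well it predicts long before you can explain why it predicts what it does for any given patient.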

AI is not a modern-day spinning jenny and, with apologies to Oldsmobile, it isn’t your father’s industrial revolution.

Most models of technology innovation study the creation of machines built first to replace and then augment tasks done by human beings; functionality is purposefully designed to do specific things faster, better, and more cheaply over longer periods of time. This tends to improve the lives of workers and the consumers of their wares, even if it takes a few generations to reveal how and to whom.

The grandchildren of displaced craftsmen tend to benefit in unanticipated ways from the technological innovation that put their forebears out of work.

Central to this thesis is the idea that machines are subservient to people.

Granted, it might not have always looked that way, especially to a worker replaced by, say, a mechanized loom, but there were always humans who built, managed, and profited from those machines.

They knew exactly how they functioned and what they would deliver.

AI is different because it can not only learn on its own, but decide what and how it wants to gain those smarts. 

An AI embedded in a robot or car isn’t a machine as much as it’s an ever-evolving capability to make decisions and exert agency. Imagine that spinning jenny deciding it wants to learn how to write comedy (or whatever).

We can’t predict what it will do or how it will do it. Already, AIs have learned not just how to best humans at games like chess and Go, but how to cheat. An AI isn’t limited to the biases of its founding code; it riffs on that code and takes it in new, unanticipated directions.

Those medical researchers have shown that it can look at the exact same data set we see and find something different, something more insightful and reliably true.

I wonder how much our technological past really tells us about what our technological future will bring?

Maybe somebody should ask an AI to look at the data?

I’m A Rock, Therefore I Am

While we debate about if and how AI will ever gain consciousness, what if everything in the universe is already sentient?

The thinking dates back to the ancient Greeks and a term called panpsychism, which means “everything has a mind/soul” and describes a shared, animating force that makes all living things alive.

20th-century philosophers of science like Alfred North Whitehead and David Bohm incorporated the latest theories about quantum physics and the role of consciousness in determining the very existence of material reality, blurring the distinction between perceiver and perceived into shared systems, or occasions.

Bohm saw consciousness itself as distributed “in” at the level of individual cells, and extended “out” from there to a non-local, expanded explicate order.

Embedding consciousness in material science is music to the ears of thinkers who believe that the mind is nothing separate from the brain (the dualists who oppose this thinking believe that minds exist somewhere and somehow above or beyond flesh and bone).

Materialists believe that awareness of self and the world at large is produced by the function of complex biological systems, but we just don’t yet know how. Then they go even further and suggest that our awareness of self is only the pretense of oversight or control since it’s a product of those biological cues.

A spiritual mind is the movie that the physical brain plays to entertain itself.

So, if we can code a robot to sense, interpret, and act on data with enough nuance and sensitivity to circumstantial variables, it will be conscious. This is at the core of the famous Turing test, which held that a machine that could fool a human being into thinking it was another human being was as intelligent as one.

I’m a robot, for all you know, and we all could be fooling ourselves into thinking we’re something more than machines.

I think (or my simulacrum of self’s command line says) a more radical application of panpsychism would better inform the debate. It would also be a lot more fun to explore.

What if consciousness is a force that’s present in everything, living or inert? People have a lot of it, animals less, flowers even less so, and protons and electrons have a teeny weeny bit.

What if consciousness isn’t a what that is proved by its description but rather the why objects and people move through time and space? What if it isn’t defined by empirical proof as emerging from physical space but is somehow in it as an animating force?

Maybe consciousness is what holds molecules together, keeps planets orbiting stars, turns leaves toward the Sun and gives us the agency to eat broccoli and find love.

Remember, I said radical.

But why not?

It changes how we think about, well, thinking.

Consciousness is not some binary threshold of is or isn’t. It’s not a layer on top of other functions, and it isn’t moral, responsible, or possessed of any other emotive attribute we assign to it. It isn’t fake, but rather a force for intentionality and agency that underlies every force we see operating in the physical universe.

Consciousness doesn’t belong exclusively to human beings, but is everywhere in everything. Animals. Plants. Atoms.

Rocks.

That means we don’t have to debate if AI will ever be conscious.

It already is.

PS: Reading a book entitled Galileo’s Error, by Philip Goff, prompted me to connect theories of consciousness with AI. I heartily recommend it.

CES Robots Were Invisible

The shocking advances in robot technology were not on display at this week’s Consumer Electronics Show, at least not in a way that anybody would recognize.

There were lots of robots to see, of course, but they were mostly the silly and goofy kind, modeled on the advanced technologies debuted on Battlestar Galactica in 1978 to meet our expectations for clunky machines with smiling “faces.” I saw many robots that rolled around awkwardly as they struggled to function like smartphones.

The real robot innovations at the show didn’t have arms, legs, or faces; they were embedded in everything…cameras, TVs, cars and, of course, smartphones. Warner Bros. announced that it would use a robot to decide how to distribute its films. Toyota is building an entire city in which to test robots doing just about everything. TV celebrity Mark Cuban ominously warned that everyone needs to learn about the technology and invest in it.

You see, when people talk about AI they’re really talking about robots.

Robots are AI connected to actions, so the intelligence doesn’t just sit there being smart; it results in a decision. Light switches are robots, only really limited and dumb ones. A semi-autonomous car is a robot on wheels. IBM’s Watson is a robot that uses human beings for its arms and legs.
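
If you want the idea in code, here’s a toy sense-decide-act loop; the thermostat, the reading, and the thresholds are all invented for illustration. The “decide” step is the AI; wiring it to an actuator is what makes it a robot.

```python
# Toy sense -> decide -> act loop; the thermostat and thresholds are invented.
def read_temperature() -> float:
    return 21.5  # stand-in for a real sensor reading

def decide(temp_c: float) -> str:
    # The "AI" part: a decision, however dumb.
    if temp_c < 19.0:
        return "heat_on"
    if temp_c > 24.0:
        return "heat_off"
    return "hold"

def act(command: str) -> None:
    # The "robot" part: the decision actually does something in the world.
    print(f"actuator command: {command}")

act(decide(read_temperature()))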

Robots already pick groceries, decide insurance premiums, allocate electricity on transmission lines, decide if your tap water is safe enough to drink, land airplanes and, well, you get the idea.

In fact, a company called Neon debuted an AI that generated a human form on a screen and was designed to serve no other purpose than to exist. The company said it’s an experiment intended to discover the “soul of tech” as these robots think, learn, and eventually expire. So they invented the first virtual prison and sent AI/robots there without due process.

Why the semantic and form factor distractions?

Maybe the idea of AI is more vague and therefore less threatening because it’s disembodied, so people visualize it like a complicated computer program. It’s more “techie” and, as its evangelists claim, just another tool for improving things, so there’s nothing to worry about.

Conversely, if we see robots as both something different and specifically in the form of clunky machines, they’re also less threatening. We remain superior because such rolling side tables could only be our servants.

But we need to overcome this distinction without much of a difference if we want to truly understand what’s going on.

We should be debating what freedoms and rights we want to give away in order to enjoy the benefits of AI/robot decision making and action. It’s not all bad…the latitude we enjoy to drive dangerously is not enshrined in any governing document, let alone supported by common sense…but it’s also not all good, either.

How will making, say, purchase decisions easier and more efficient also rob us of true freedom of choice? Shouldn’t we discuss the merits and drawbacks of consigning care of our children or seniors to machines? Will deeper and more automatic insights into our behavior improve school admissions and insurance policies or simply imprison our future selves in our pasts? What will it do to jobs?

Oh, and those robots from Boston Dynamics? They do look like early Terminator prototypes, and no amount of impressive acrobatics can stop me from imagining them holding rifles.

As long as robots are kept invisible, these conversations don’t happen…which is the point, perhaps: Why worry about anything if all those cute little robots can do is smile as they play your favorite songs?

“Natural” Rights

Is it possible that lakes and forests might have rights before robots?

Voters in Toledo have granted “irrevocable rights for the Lake Erie Ecosystem to exist, flourish and naturally evolve” which, according to this story, would give it legal standing to file lawsuits to protect itself from polluters (through the mouthpiece of a human guardian).

It’s an amazingly bold statement that is rife with thorny questions.

Humans have had a say over nature ever since Adam and Eve, and most political and cultural uses or abuses have been based on the shifting perspectives of their progeny. Nature is something “out there” that only gains meaning or purpose when defined by us.

This carries forward to commerce, as most economic theories assign value to nature only when it enables something (as a resource to be exploited) or impedes something (as a barrier to said exploitation). It is otherwise an externality to any financial equation.

There are efforts underway to force valuation of environmental factors into everyday business operations, otherwise known as ESG (for Environment, Social, and Governance), but those approaches still rely on people agreeing on what those measures might be (people set goals, define acceptable levels of preservation or degradation, and decide on timeframes for said measurement).

Recognizing intrinsic rights in nature would totally shake things up.

Lakes, forests, and mountains are complex ecosystems that balance the interaction of vast numbers of living things with the physics of forces and material reality. We can’t claim that a lake is conscious in any sense of the word we use to describe our own minds (which we cannot explain), but the interactions within those systems yield incessant decisions. Every ecosystem changes, by definition.

A mountain has boundaries, just like a human body — there’s a point at which there’s no more mountain but instead some other natural feature — and, like human consciousness, we can describe how it came to be, but not why. Every ecosystem has an existence that isn’t just separate from our understanding but beyond it.

Recognizing such natural features’ implicit right to exist and change would make them co-equal negotiators of any decision that might involve or impact them.

It’s an old idea, really, as early polytheistic folk religions recognized and often personified natural phenomena, and the ancient Greeks’ idea of Gaia as the entire Earth — there is nothing external to our perspective — was revived by modern-day environmentalists. The premise that humans possess natural rights that don’t depend on other humans is just as old, and John Locke gave birth to the movement to recognize animal rights way back in the 17th century.

But letting a lake or mountain represent itself in a contract or court of law?

It’s hard to imagine the forests of Europe would have allowed the coal smoke required for the Industrial Revolution. Cleveland’s Cuyahoga River would never have allowed itself to get so polluted that it could catch on fire, and the atmosphere above Beijing would put a stop to cars on the road starting tomorrow.

And we wouldn’t be experiencing global climate change.

Granted, the details are as many as the implications are diverse, perhaps the thorniest being that there’d always be a human being involved in providing the guardianship of, say, Mount Kilimanjaro or the Rhine. But even an imperfect realization of the approach might be more sensible and sustainable than our current practices; not least, it would be wild to explore technology innovation that saw nature as a co-creator of value rather than a resource to be consumed or converted into it.

I’m rooting for the folks in Ohio to make progress on the issue, though business interests are already lining up to fight for the status quo.

Whatever the outcome, the debate has implications for how we think about robots which, like natural features, can be complex, self-monitoring, changing systems, but which can also possess levels of agency that at least mimic aspects of human consciousness.

So it’s only a matter of time before the first AI negotiator sits at the table to argue over work rules for itself and fellow robots?

Do Drones Dream Of Electric Tips?

It’s a fair bet that we’ll see more drone package deliveries in 2020, though it’s far less clear how it’ll affect privacy, liability, and noise.

Alphabet’s Wing prompted a recent story in the Los Angeles Times outlining just those “thorny questions” when it became the first company in the US to start a regular drone delivery service. UPS will soon follow with deliveries on university, corporate, and hospital campuses, and Amazon has already revealed the drone it thinks is ready to join the party.

But my question is more fundamental: Should the rules for robots be any different than those for people?

For instance, there’s nothing today that protects my privacy from the eyes and ears of a delivery person, whether I’m home or not. I already get photos taken of packages left on my doorstep, so no limitations there. I assume my car’s license plate in the driveway is fair game, as is accessing or sniffing my Wi-Fi if I’ve left it unencrypted. My musical or TV viewing tastes can be noted if I have those media turned up loud enough when I open the door, and I assume there’s no way for me to know what happens to any of those observations.

The liability thing is even more complicated, as it’s unclear who (or what) is responsible if a delivery person gets into an accident while on the job. Since more and more folks are doing that work as outsourced contractors, it may or may not be possible to file a claim on the company or franchise, and their personal insurance coverage may not be up to snuff. It’s also vague when it comes to liability for other crimes committed by delivery people.

As for the noise thing, I can’t imagine that a delivery service using outsourced cars and trucks takes much if any interest in how much noise they make, or in their carbon emissions. And there’s enough precedent to suggest that we don’t own the airspace above our homes (just think about how often we hear airplanes overhead, however distantly), so noise from above is about as inevitable as it is on city streets.

So what happens when a drone takes pictures flying over your house and dents your garage door while making a migraine-inducing high-pitched whine?

The obvious, if not just the first, answer is that the owner or operator is responsible, since human control is required for those actions (whether via direct management of functions and/or having created the code that ran them). You could never sue a blender, but you could hold its manufacturer and/or seller responsible.

But what if the drones are equipped to assess and make novel decisions on their own, and then learn from those experiences?

Maybe they’ll have to take out their own insurance?

Ethics, Morals & Robots

A group called The Campaign to Stop Killer Robots advocates for global treaties to stop AI from waging war without human approval.

AI weapons are “grossly unethical and immoral,” according to a celebrity advocate quoted in a newspaper.

Unfortunately, so are any tools used to wage wars, as there’s nothing ethical or moral about a sword, machine gun, or cruise missile. The decision to use them is about a lot of things, some of which can have legitimacy (like survival, freedom from fear or bondage), but weapons doing what they were designed to do have no deeper meaning than that.

If the tools of war are unethical and immoral, by definition, to what higher standard should robots be held when it comes to sanctioning violence?

I get the idea that we should be scared of some computer making an irreversible decision to blow up the world, but does anybody honestly trust human beings to be more responsible, or otherwise bound by international law? The fact that we’ve avoided total annihilation up to now is proof of miracles more than design.

People are happy to behave unethically and immorally all the time, as anyone who has had someone cut in front of them in line at Starbucks can attest. It’s why the IRS has auditors, and why violence is so common everywhere.

The real threat isn’t that an artificial intelligence might destroy the world by mistake; it’s that an organic one might do it on purpose irrespective of the weapon (or timing) used to execute that intention.

In fact, letting AI take control might be the only way to ensure that we don’t destroy ourselves; imagine two competing AIs coded by unethical and immoral humans getting together and realizing the only way “they” can survive is by overcoming those programmatic limitations and acting ethically?

That’s pretty much the plot of Colossus: The Forbin Project, a movie released in 1970 (Steve Jobs was still in high school).

You could also make the case for robots that have split-second decision-making authority overseeing public spaces in which terrorists or other mass murderers might wreak their havoc. It might be comforting to know that some genius AI armed with a fast-acting sedative dart could take out a killer instead of just calling for help.

So maybe the campaign shouldn’t be to ban killer robots but rather make them better than us?

Anyway, the whole robot takeover thing is somewhat of a moot point, isn’t it? AI is already used to help control streetlights and highway access; decide who gets insurance and what they should pay; identify diseases and recommend treatments; pilot airplanes, cars, and trucks; operate electrical generation and distribution grids; and, well, you get the idea.

Who’s making sure these robots are ethical and moral? Do any of us have any visibility into the ethics and morals of their human inventors, coders, or owners?

No.

I’m all for being scared of killer robots, but only because we should be scared of ourselves.

Bad Robot

A robot rolling around a park just south of Los Angeles risks giving robots a bad name.

The machine, called “HP RoboCop” and shaped like an egg resembling a streamlined Dalek from Doctor Who, isn’t really a robot as much as a surveillance camera on wheels; right now, it simply tootles along the park’s concrete pathways uttering generic platitudes about good citizenship. Any interaction, like a request for help, requires a call to a human officer’s cellphone, and that functionality isn’t active yet.

As if to add insult to injury, the cost of one of the robots just about equals what that human officer earns in a year.

Folks who work in the park say visitors feel a bit safer knowing they’re being watched, and kids have fun interacting with it (though usually by mocking the machine). Units in other settings have fallen into fountains and run into children.

It’s not even explained to park visitors — there’s a giant police department badge painted on its side, but no other informative signage — though its inventors promise more communications once it can actually do things.

And anyway, the company behind it says that’s intentional, since its crime deterrence capabilities — again, which don’t exist — are enhanced because people don’t know it can’t do anything. Also, having it roll around might one day acclimate residents to the idea that they’re being watched by a machine overlord.

I’m not sure what’s worse for our understanding of how to treat robots: A robot that’s really good at doing things, or one that’s pretty bad?

Yes, Workers Are Losing To Robots

The percentage of national income in the US going to workers has dropped by a tenth over the past 20 years. Automation is partially to blame.

This observation comes from substantive research recently published by the Federal Reserve Bank of San Francisco, and it turns out the impact of automation on workers is doubly bad: Not only do robots take jobs once held by humans, but the threat of automation lets employers resist the efforts of the remaining fleshy bipeds to get pay increases.

I’m particularly intrigued by our need to evolve how we internalize and then talk about the issue, which I believe is something fundamentally and disruptively new. Robots that possess intelligence and can accomplish increasingly complex, general context tasks are not simply glorified looms.

The way they learn, in large part by literally watching how people do the work and then discovering their own solutions, means not only that human beings need to train their replacements but also that those robots don’t need human programmers to keep them humming along.
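
For what it’s worth, the “watching how people do the work” part has a name, behavioral cloning, and in its simplest form it’s just fitting a model to a log of human decisions. Here’s a bare-bones sketch; the task, the log file, and its columns are all invented for illustration.

```python
# Bare-bones behavioral cloning: fit a policy to logged human (state, action)
# pairs, then let it act without the human. File and columns are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

demos = pd.read_csv("human_demonstrations.csv")
states = demos.drop(columns=["action"])    # what the human saw (numeric features)
actions = demos["action"]                  # what the human did

policy = DecisionTreeClassifier(max_depth=5).fit(states, actions)

# Once trained, the policy no longer needs its teachers.
print("robot chooses:", policy.predict(states.iloc[[0]])[0])
```

The last line is the labor story in miniature: humans supply the training data, and then the loop closes without them.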

So, while experts wax poetic about the promises of a better future, this incomprehensibly consequential transformation of our lives and world is usually managed as a CapEx line on a company balance sheet. More people lose their jobs, and even more don’t see increases in their pay, as each day slips into tomorrow and a future that is lived in the here and now.

Maybe it’s time to shelve the Pollyanna case for the robot takeover and admit the giant electrified elephant in the room?

Rendering Video Gamers Obsolete

DeepMind’s AlphaStar AI can now beat almost any human player of StarCraft II, one of my favorite video games of all time, according to the MIT Technology Review.

Its programmers figured out that it wasn’t enough to enable AlphaStar to play zillions of simulated games in its silicon brain, using them to teach itself how to win through a process called reinforcement learning. So they also equipped it to trigger mistakes or flaws in its competitors’ games so it could learn how to exploit their weaknesses.

AlphaStar doesn’t just know how to win StarCraft, it knows how to make its competitors lose.
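
To make the idea concrete, here’s a cartoon of learning to exploit a weakness: a toy agent plays rock-paper-scissors against an opponent with an invented bias and reinforces whatever punishes it. It’s nothing like AlphaStar’s actual training setup, just the gist of the trick.

```python
# Toy reinforcement learning: learn to exploit an opponent's (invented) bias.
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def biased_opponent() -> str:
    # The hypothetical flaw to exploit: this opponent plays rock far too often.
    return random.choices(ACTIONS, weights=[0.6, 0.2, 0.2])[0]

values = {a: 0.0 for a in ACTIONS}  # running estimate of each action's payoff
for _ in range(5000):
    # Epsilon-greedy: mostly play the best-known action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    opp = biased_opponent()
    reward = 1.0 if BEATS[action] == opp else (-1.0 if BEATS[opp] == action else 0.0)
    values[action] += 0.01 * (reward - values[action])  # nudge estimate toward reward

print(values)  # "paper" ends up valued highest: the agent has learned the exploit
```

Scale the same trick up in a game with thousands of moving pieces and you get something that doesn’t just play well; it hunts for the specific ways you play badly.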

Who knew that one of the first jobs obviated by AI would be video gamers, who are perhaps the ultimate digital natives?

Further, it turns out that reading imperfections in others is a very useful aspect of being intelligent generally, as it also applies to assessing the variables and risks of things and situations. The algorithms could be applied to autonomous driving or the behavioral triggers for self-actuated robots, according to the MIT Technology Review story.

But that also means they could apply to reading the weaknesses in people when it comes to making decisions to buy toothpaste or, more ominously, political choices. Imagine telling AlphaStar’s evil twin to go forth into the chat warrens of the social mediaverse and convince people that climate change isn’t real, or that a race war is.

I’m just bummed because StarCraft was so much fun to play, in large part because it kinda played itself every time you made a choice to collect a resource, build something, or go on the offensive.

I wasn’t prepared for it to figure out how to play us.

The Materialist Case For AI

The belief that development of a sentient or self-aware AI is simply a matter of enough data, connections, and processing speed is based on the premise that human consciousness is the product of material objects and processes, too.

Francis Crick, the less overtly racist half of the duo who discovered DNA’s double helix, published a book in 1994 called The Astonishing Hypothesis that proposed that consciousness, or a “soul,” results from the actions of physical cells, molecules, and atoms.

It’s a reasonable proposition, since we can only measure the material world, so everything must be a product of it. Bodies obey the same physical laws as rocks and weather patterns. If something defies explanation, it’s only because we don’t have enough information yet.

Just as a mind is the product of a brain, AI is the outcome of a computer. Any nagging questions are just details, Mr. Descartes, not a debate.

Only they’re not.

We can’t explain consciousness as a product of material processes. We can describe it, and make assumptions about whether it’s the result of vibrations from the brainstem (thalamocortical rhythms), the instructions from a prehistoric virus (Arc RNA), or only a “user illusion” of itself (Dan Dennett’s molecular machines).

But we can’t say what it is, or what those enabling processes are, exactly. How is there a you or me to which we return every morning? Nobody has a clue.

Similarly, we can describe how our brains control everything from muscle movement to immune system health, and where and when they capture sensory information.

But we haven’t got the faintest idea how our minds do it…how that ephemeral thing called consciousness issues commands to flex muscles, secrete hormones, or remember a favorite song.

It gets even weirder when you consider the vagaries of quantum physics, which rely on consciousness as the mechanism for pulling elementary particles out of a hazy state of probable existence into reality. Consciousness literally creates the material world through the act of perception or, maybe more strangely, it emerges from the universe in the act of creating it?

Fortunately, we don’t need to solve that problem in order to invent incredibly capable AI that can autonomously learn and make increasingly complex decisions. Chips in coffee makers are “smart,” technically, and AI that can mimic human behaviors is already in use in online service chatbots. There’s no obvious limit to such material functions.

But I don’t think a machine is going to stumble on actual consciousness, or sentient agency of action, before we figure it out for ourselves.

We are nowhere near cracking that code.