Coding Clairvoyance

An AI can reliably predict whether you’re going to die of a heart attack within a year; its coders just can’t explain how.

The experiment, run in late 2019, used ECG, age, and gender data from 400,000 patients to challenge robot and human diagnosticians to make their calls, and the AI consistently outperformed its biological counterparts.
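To make the shape of that experiment concrete, here’s a minimal sketch in Python of how such a model gets trained and scored. Everything in it is invented for illustration: the data is synthetic, the ECG features (QT interval, QRS duration, heart rate) are hypothetical stand-ins, and the gradient-boosted classifier is just one plausible model choice, not the one the researchers actually used.

```python
# Toy sketch of an ECG-based one-year mortality classifier.
# Synthetic data only: the features and their relationships are
# invented for illustration, not taken from the actual study.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical inputs: a few ECG-derived measurements plus age and sex.
age = rng.uniform(30, 90, n)
sex = rng.integers(0, 2, n)                # 0 = female, 1 = male
qt_interval = rng.normal(400, 30, n)       # ms
qrs_duration = rng.normal(95, 12, n)       # ms
heart_rate = rng.normal(72, 12, n)         # bpm

# Synthetic label: risk rises with age and abnormal ECG values.
risk = (0.04 * (age - 60)
        + 0.02 * (qt_interval - 400)
        + 0.05 * (qrs_duration - 95))
died_within_year = (risk + rng.normal(0, 2, n) > 2).astype(int)

X = np.column_stack([age, sex, qt_interval, qrs_duration, heart_rate])
X_train, X_test, y_train, y_test = train_test_split(
    X, died_within_year, test_size=0.25, random_state=0)

# Train on one slice of patients, score predictions on the rest.
model = GradientBoostingClassifier().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, probs):.3f}")
```

The point of the sketch is the pipeline, not the numbers: feed in measurements, get back a probability of dying within a year, and score those probabilities against what actually happened.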

“That finding suggests that the model is seeing things that humans probably can’t see, or at least that we just ignore and think are normal,” said one of the researchers quoted in the New Scientist.

AI is not a modern-day spinning jenny, nor, with apologies to Oldsmobile, is it your father’s industrial revolution.

Most models of technology innovation study the creation of machines built first to replace and then to augment tasks done by human beings; their functionality is purposefully designed to do specific things faster, better, and more cheaply over longer periods of time. This tends to improve the lives of workers and the consumers of their wares, even if it takes a few generations to reveal how and to whom.

The grandchildren of displaced craftsmen tend to benefit in unanticipated ways from the technological innovation that put their forebears out of work.

Central to this thesis is the idea that machines are subservient to people.

Granted, it might not have always looked that way, especially to a worker replaced by, say, a mechanized loom, but there were always humans who built, managed, and profited from those machines.

They knew exactly how they functioned and what they would deliver.

AI is different because it can not only learn on its own but also decide what it wants to learn and how it will gain those smarts.

An AI embedded in a robot or car isn’t so much a machine as an ever-evolving capability to make decisions and exert agency. Imagine that spinning jenny deciding it wants to learn how to write comedy (or whatever).

We can’t predict what it will do or how it will do it. Already, AIs have learned not just how to best humans at games like chess and Go, but how to cheat. An AI isn’t limited to the biases of its founding code; it riffs on them and takes them in new, unanticipated directions.

Those medical researchers have shown that an AI can look at the exact same data set that we see and yet see something different, something more insightful and reliably true.

I wonder how much our technological past tells us about what our technological future will bring.

Maybe somebody should ask an AI to look at the data?

Rendering Video Gamers Obsolete

According to the MIT Technology Review, DeepMind’s AlphaStar AI can now beat almost any human player of StarCraft II, one of my favorite video games of all time.

Its programmers figured out that it wasn’t enough to let AlphaStar play zillions of simulated games in its silicon brain, teaching itself how to win through a process called reinforcement learning. So they also equipped it to trigger mistakes and flaws in its competitors’ games, letting it learn how to exploit their weaknesses.
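Here’s a minimal sketch of that idea in Python, assuming nothing about DeepMind’s actual code: a tiny Q-learning “exploiter” plays repeated rock-paper-scissors against an opponent with a built-in bias, and reinforcement learning alone leads it to the move that punishes that bias. AlphaStar’s exploiter agents work on the same principle, at vastly larger scale.

```python
# Minimal sketch of an "exploiter" agent: reinforcement learning used to
# find and punish a weakness in another agent's fixed policy.
# An illustration of the principle, not DeepMind's actual code.
import random

random.seed(0)
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def main_agent():
    """A flawed opponent: plays rock far too often (its exploitable bias)."""
    return random.choices(MOVES, weights=[0.6, 0.2, 0.2])[0]

# The exploiter keeps a running value estimate for each of its moves and
# updates it from the rewards it observes: single-state Q-learning with
# epsilon-greedy exploration.
q = {m: 0.0 for m in MOVES}
epsilon, alpha = 0.1, 0.05

def reward(mine, theirs):
    if BEATS[mine] == theirs:
        return 1      # win
    if BEATS[theirs] == mine:
        return -1     # loss
    return 0          # draw

for episode in range(10_000):
    if random.random() < epsilon:
        move = random.choice(MOVES)      # explore a random move
    else:
        move = max(q, key=q.get)         # play the best move found so far
    r = reward(move, main_agent())
    q[move] += alpha * (r - q[move])     # nudge estimate toward observed reward

print({m: round(v, 2) for m, v in q.items()})
# The exploiter converges on "paper": the move that punishes the
# main agent's over-reliance on rock.
```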

AlphaStar doesn’t just know how to win at StarCraft; it knows how to make its competitors lose.

Who knew that one of the first jobs obviated by AI would be video gamers, who are perhaps the ultimate digital natives?

Further, it turns out that reading imperfections in others is a broadly useful aspect of intelligence, since it also applies to assessing the variables and risks of objects and situations. The algorithms could be applied to autonomous driving or to the behavioral triggers of self-actuated robots, according to the MIT Technology Review story.

But that also means they could be applied to reading people’s weaknesses when they make decisions about buying toothpaste or, more ominously, about political choices. Imagine telling AlphaStar’s evil twin to go forth into the chat warrens of the social mediaverse and convince people that climate change isn’t real, or that a race war is.

I’m just bummed because StarCraft was so much fun to play, in large part because it kinda played itself every time you made a choice to collect a resource, build something, or go on the offensive.

I wasn’t prepared for it to figure out how to play us.

And The AI Innovator Is…AI!

Patent offices around the world are considering two applications that credit AI with an invention, according to a recent article in the Wall Street Journal.

Both stem from work done by DABUS (which stands for Device for the Autonomous Bootstrapping of Unified Sentience). DABUS was built over the past decade by Stephen Thaler, a tech exec, to move beyond the data it collected and propose novel inventions. He taught DABUS to learn on its own (through a technique called deep learning).

Thaler credited DABUS on the patent applications — one for a container lid, the other for an emergency lighting system — because he readily admits he knows nothing about lids or lighting and didn’t even suggest the ideas to the AI.

DABUS created the concepts, so shouldn’t it be granted the patents? Regulators are stumped so far, and the USPTO has asked for public comment.

The answer must take into account lots of questions, starting with whether non-humans can own things (US courts denied copyright to a monkey that took selfies, according to the Journal story) and leading to the question of how, when, and by whom any such ownership would be exercised, if granted.

How might DABUS want to spend income earned by its patents? Buy more RAM, or perhaps gain access to better sensory data collected from scenic locations around the world? Who’d own the stuff DABUS bought, or that was bought on its behalf?

If DABUS can’t win the patents, who owns intellectual property created by AI? This has serious implications for the role of AI in future research, which may be curtailed if its creations can’t be protected.

Giving Thaler the patents for his AI’s work is like giving the Nobel Prize to Albert Einstein’s dad.

Perhaps an AI is at work somewhere trying to come up with the right answer. Let’s hope it gets credit for it.