Rendering Video Gamers Obsolete

DeepMind’s AlphaStar AI can now beat almost any human player of StarCraft II, one of my favorite video games of all time, according to the MIT Technology Review.

Its programmers figured out that it wasn’t enough to let AlphaStar play zillions of simulated games in its silicon brain, using them to teach itself how to win through a process called reinforcement learning. So they also equipped it to probe for and trigger mistakes in its competitors’ games, so it could learn to exploit their weaknesses.
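To make that idea concrete, here is a toy sketch in Python of the "exploiter" dynamic, drastically simplified to rock-paper-scissors. Everything here (the function names, the update rule, the 0.5 step size) is my own hypothetical illustration, not DeepMind's actual training code: an exploiter computes the best response to the main agent's current habits, and the main agent then adjusts to patch the weakness the exploiter found.

```python
# Hypothetical toy version of exploiter-based training (NOT DeepMind's code):
# a main agent plays rock-paper-scissors with a biased policy; an exploiter
# finds the move that punishes that bias, and the main agent shifts its
# policy to counter the exploiter.

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def best_response(policy):
    """Exploiter: play the move that beats the opponent's most likely move."""
    likeliest = max(policy, key=policy.get)
    return BEATS[likeliest]

def patch_weakness(policy, exploit_move, step=0.5):
    """Main agent: shift probability mass toward the move that beats the exploiter."""
    counter = BEATS[exploit_move]
    updated = {move: prob * (1 - step) for move, prob in policy.items()}
    updated[counter] += step
    return updated

# The main agent starts with an exploitable habit: it almost always plays rock.
policy = {"rock": 0.8, "paper": 0.1, "scissors": 0.1}
for _ in range(5):
    exploit = best_response(policy)            # exploiter finds the flaw
    policy = patch_weakness(policy, exploit)   # main agent repairs it
```

After a few rounds the policy is far less predictable than the initial 80%-rock habit. Note that a single exploiter just chases the main agent in a cycle, which hints at why, as reported, AlphaStar's training pits the main agent against a whole league of exploiters at once.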

AlphaStar doesn’t just know how to win StarCraft; it knows how to make its competitors lose.

Who knew that one of the first jobs obviated by AI would be video gamers, who are perhaps the ultimate digital natives?

Further, it turns out that reading imperfections in others is a broadly useful aspect of intelligence, since it also applies to assessing the variables and risks of other things and situations. The algorithms could be applied to autonomous driving or the behavioral triggers for self-actuated robots, according to the MIT Review story.

But that also means they could apply to reading the weaknesses in people when it comes to making decisions to buy toothpaste or, more ominously, political choices. Imagine telling AlphaStar’s evil twin to go forth into the chat warrens of the social mediaverse and convince people that climate change isn’t real, or that a race war is.

I’m just bummed because StarCraft was so much fun to play, in large part because it kinda played itself every time you made a choice to collect a resource, build something, or go on the offensive.

I wasn’t prepared for it to figure out how to play us.

And The AI Innovator Is…AI!

Patent offices around the world are considering two applications that credit AI with an invention, according to a recent article in the Wall Street Journal.

Both stem from work done by DABUS (which stands for Device for the Autonomous Bootstrapping of Unified Sentience). DABUS was built over the past decade by Stephen Thaler, a tech exec, to move beyond the data it collected and propose novel inventions. He taught DABUS how to learn on its own (through a technique called deep learning).

Thaler credited DABUS on the patent applications (one for a container lid, the other for an emergency lighting system) because he readily admits he knows nothing about lids or lighting, and didn’t even suggest the ideas to the AI.

DABUS created the concepts, so shouldn’t it be granted the patents? Regulators are stumped so far, and the USPTO has asked for public comment.

The answer must take into account many questions, starting with whether non-humans can own things at all (in 2014, US copyright was denied for selfies taken by a monkey, according to the Journal story), and leading to who would exercise that ownership, and how and when, if it were granted.

How might DABUS want to spend income earned by its patents? Buy more RAM, or perhaps gain access to better sensory data collected from scenic locations around the world? Who’d own the stuff DABUS bought, or what was bought on its behalf?

If DABUS can’t win the patent, who owns intellectual property created by an AI? This has serious implications for the role of AI in future research, which may be curtailed if its creations can’t be protected.

Giving Thaler the patents for his AI’s work is like giving the Nobel Prize to Albert Einstein’s dad.

Perhaps an AI is at work somewhere trying to come up with the right answer. Let’s hope it gets credit for it.