“Natural” Rights

Is it possible that lakes and forests might have rights before robots?

Voters in Toledo have granted “irrevocable rights for the Lake Erie Ecosystem to exist, flourish and naturally evolve,” which, according to this story, would give it legal standing to file lawsuits to protect itself from polluters (through the mouthpiece of a human guardian).

It’s an amazingly bold statement that is rife with thorny questions.

Humans have held sway over nature ever since Adam and Eve, and most political and cultural uses or abuses of it have been based on the shifting perspectives of their progeny. Nature is something “out there” that only gains meaning or purpose when defined by us.

This carries forward to commerce, as most economic theories assign value to nature only when it enables something (as a resource to be exploited) or impedes something (as a barrier to said exploitation). It is otherwise an externality to any financial equation.

There are efforts underway to force valuation of environmental factors into everyday business operations, otherwise known as ESG (for Environmental, Social, and Governance), but those approaches still rely on people agreeing on what those measures might be: people set goals, define acceptable levels of preservation or degradation, and decide on timeframes for measurement.

Recognizing intrinsic rights in nature would totally shake things up.

Lakes, forests, and mountains are complex ecosystems that balance the interaction of vast numbers of living things with the physics of forces and material reality. We can’t claim that a lake is conscious in any sense of the word we use to describe our own minds (which we cannot explain), but the interactions within those systems yield incessant decisions. Every ecosystem changes, by definition.

A mountain has boundaries, just like a human body — there’s a point at which there’s no more mountain but instead some other natural feature — and, like human consciousness, we can describe how it came to be, but not why. Every ecosystem has an existence that isn’t just separate from our understanding but beyond it.

Recognizing such natural features’ implicit right to exist and change would make them co-equal negotiators of any decision that might involve or impact them.

It’s an old idea, really, as early polytheistic folk religions recognized and often personified natural phenomena, and the ancient Greeks’ idea of Gaia as the entire Earth (there is nothing external to our perspective) was revived by modern-day environmentalists. The premise that humans possess natural rights that don’t depend on other humans is just as old, and John Locke gave birth to the movement to recognize animal rights way back in the 17th century.

But letting a lake or mountain represent itself in a contract or court of law?

It’s hard to imagine the forests of Europe would have allowed the coal smoke belched out by the Industrial Revolution. Cleveland’s Cuyahoga River would never have allowed itself to get so polluted that it could catch fire, and the atmosphere above Beijing would put a stop to cars on the road starting tomorrow.

And we wouldn’t be experiencing global climate change.

Granted, the details are as numerous as the implications are diverse, perhaps the thorniest being that there’d always be a human involved in providing the guardianship of, say, Mount Kilimanjaro or the Rhine. But even an imperfect realization of the approach might be more sensible and sustainable than our current practices; not least, it would be wild to explore technology innovation that saw nature as a co-creator of value rather than a resource to be consumed or converted into it.

I’m rooting for the folks in Ohio to make progress on the issue, though business interests are already lining up to fight for the status quo.

Whatever the outcome, the debate has implications for how we think about robots, which, like natural features, can be complex, self-monitoring, and changing systems, but can also possess levels of agency that at least mimic aspects of human consciousness.

So is it only a matter of time before the first AI negotiator sits at the table to argue over work rules for itself and its fellow robots?

Rendering Video Gamers Obsolete

DeepMind’s AlphaStar AI can now beat almost any human player of StarCraft II, one of my favorite video games of all time, according to the MIT Technology Review.

Its programmers figured out that it wasn’t enough to let AlphaStar play zillions of simulated games in its silicon brain, using them to teach itself how to win through a process called reinforcement learning. So they also equipped it to trigger mistakes or flaws in its competitors’ games so it could learn how to exploit their weaknesses.
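
For readers curious about the mechanics, here’s a toy sketch of that league-style training idea, in which dedicated “exploiter” agents train solely against a main agent to surface its weaknesses, which the main agent then learns to patch. Everything here (the Agent class, play_match, the update rule) is a hypothetical stand-in for illustration, not DeepMind’s actual code or API.

```python
import random

class Agent:
    """Toy stand-in for a reinforcement-learning StarCraft II agent."""
    def __init__(self, name):
        self.name = name
        self.weights = {}  # crude placeholder for learned policy parameters

    def update_from_result(self, opponent, won):
        # Caricature of a reinforcement-learning update: nudge our "policy"
        # toward what worked (or away from what failed) against this opponent.
        delta = 1.0 if won else -0.5
        self.weights[opponent.name] = self.weights.get(opponent.name, 0.0) + delta

def play_match(agent_a, agent_b):
    # Placeholder for an actual simulated StarCraft II episode;
    # returns True if agent_a wins.
    return random.random() < 0.5

main_agent = Agent("main")
# Exploiters train *only* against the main agent, so their whole job is to
# find and hammer its flaws; losing to them teaches the main agent to adapt.
exploiters = [Agent(f"exploiter_{i}") for i in range(3)]

for step in range(1000):
    opponent = random.choice(exploiters)
    main_won = play_match(main_agent, opponent)
    main_agent.update_from_result(opponent, main_won)
    opponent.update_from_result(main_agent, not main_won)
```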

AlphaStar doesn’t just know how to win StarCraft; it knows how to make its competitors lose.

Who knew that one of the first jobs obviated by AI would be video gamers, who are perhaps the ultimate digital natives?

Further, it turns out that reading imperfections in others is a very useful aspect of being intelligent generally, as it also applies to assessing the variables and risks of things and situations. The algorithms could be applied to autonomous driving or the behavioral triggers for self-actuated robots, according to the MIT Technology Review story.

But that also means they could apply to reading the weaknesses in people when it comes to making decisions to buy toothpaste or, more ominously, political choices. Imagine telling AlphaStar’s evil twin to go forth into the chat warrens of the social mediaverse and convince people that climate change isn’t real, or that a race war is.

I’m just bummed because StarCraft was so much fun to play, in large part because it kinda played itself every time you made a choice to collect a resource, build something, or go on the offensive.

I wasn’t prepared for it to figure out how to play us.

The Materialist Case For AI

The belief that development of a sentient or self-aware AI is simply a matter of enough data, connections, and processing speed is based on the premise that human consciousness is the product of material objects and processes, too.

Francis Crick, the less overtly racist half of the duo who discovered DNA’s double helix, published a book in 1994 called The Astonishing Hypothesis that proposed that consciousness, or a “soul,” results from the actions of physical cells, molecules, and atoms.

It’s a reasonable proposition, since we can only measure the material world, so everything must be a product of it. Bodies obey the same physical laws as rocks and weather patterns. If something defies explanation, it’s only because we don’t have enough information yet.

Just as a mind is the product of a brain, AI is the outcome of a computer. Any nagging questions are just details, Mr. Descartes, not a debate.

Only they’re not.

We can’t explain consciousness as a product of material processes. We can describe it, and speculate about whether it’s the result of rhythmic loops between the thalamus and cortex (thalamocortical rhythms), the instructions of a prehistoric virus (Arc RNA), or only a “user illusion” of itself (Dan Dennett’s molecular machines).

But we can’t say what it is, or what those enabling processes are, exactly. How is there a you or me to which we return every morning? Nobody has a clue.

Similarly, we can describe how our brains control everything from muscle movement to immune system health, and both where and when they capture sensory information.

But we haven’t got the faintest idea how our minds do it…how that ephemeral thing called consciousness issues commands to flex muscles, secrete hormones, or remember a favorite song.

It gets even weirder when you consider the vagaries of quantum physics, some interpretations of which rely on a conscious observer as the mechanism for pulling elementary particles out of a hazy state of probable existence into reality. In that view, consciousness literally creates the material world through the act of perception or, maybe more strangely, emerges from the universe in the act of creating it?

Fortunately, we don’t need to solve that problem in order to invent incredibly capable AI that can autonomously learn and make increasingly complex decisions. Chips in coffee makers are “smart,” technically, and AI that can mimic human behaviors is already in use in online service chatbots. There’s no obvious limit to such material functions.

But I don’t think a machine is going to stumble on actual consciousness, or sentient agency of action, before we figure it out for ourselves.

We are nowhere near cracking that code.