The Consciousness Conundrum

Will robots that possess general intelligence be safer, and therefore more trustworthy?

Two professors have written a book on the subject, entitled Rebooting AI: Building Artificial Intelligence We Can Trust — and an interview with one of the authors in the MIT Technology Review suggests it will be a very good read.

General intelligence is, loosely, another way of describing consciousness: both refer to the capacity to recognize, evaluate, and act on variable tasks in unpredictable environments (consciousness adds a layer of internal modeling and a sense of “self” that general intelligence doesn’t require).

But would either deliver more trustworthy decisions?

Consciousness surely doesn’t; human beings make incorrect decisions, for the wrong reasons, and do bad things to themselves and one another all the time. It’s what leads people to break rules and think their reasoning exempts them from moral guilt or legal culpability.

It’s what got our membership cancelled in the Garden of Eden, which, one would presume, was a place where everyone and everything was trustworthy.

The capacity for AI to learn on its own won’t get us there either, if I understand the author’s argument, inasmuch as deep learning isn’t the same thing as deep understanding. It’s one thing to recognize even detailed aspects of a context, and another to be aware of what they mean.

The answer could include classical AI, which means programming specific rules. This makes sense because we humans are “programmed” with rules of our own: things we will and won’t do because they’re simply right or wrong, not the outcome of the jury deliberations of our consciousness. So it’s tempting to see the development of AI as something like the education of a child.

We have our Ten Commandments and they need their Three Laws of Robotics.

This also leads the author to a point about hybrid systems, and the need for generally intelligent AI to depend on multiple layers of analysis and agency. Again, people depend on intrinsic systems like proprioception and reflex to help navigate physical space, and on the endocrine and limbic systems to help manage internal functions. All of them influence our cognitive capacities, too.
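
To make the hybrid idea concrete, here is a minimal sketch of my own (an illustration, not anything taken from the book): a learned component proposes an action and scores it, while a layer of hard-coded rules gets the final veto, much as reflexes and hard limits constrain our own deliberations. Every name in it (learned_confidence, HARD_RULES, and so on) is hypothetical.

```python
# A toy "hybrid" decision-maker: a learned component proposes,
# hard-coded rules dispose. All names here are illustrative.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    involves_harm: bool = False
    violates_privacy: bool = False


def learned_confidence(action: Action) -> float:
    """Stand-in for a deep-learning model's score of how promising an
    action looks. In a real system this would come from a trained network."""
    return 0.9 if action.name == "share_user_data" else 0.6


# The "classical AI" layer: explicit rules that hold no matter what the
# learned component thinks.
HARD_RULES = [
    lambda a: not a.involves_harm,      # never act if harm is involved
    lambda a: not a.violates_privacy,   # never act if privacy is violated
]


def decide(action: Action) -> str:
    # Rules get the final say, regardless of the model's confidence.
    if not all(rule(action) for rule in HARD_RULES):
        return f"refuse {action.name} (rule violation)"
    if learned_confidence(action) < 0.5:
        return f"defer on {action.name} (low confidence)"
    return f"do {action.name}"


if __name__ == "__main__":
    print(decide(Action("recommend_article")))
    print(decide(Action("share_user_data", violates_privacy=True)))
```

The point of the sketch is only the layering: the statistical part can be as clever as you like, but the rules sit outside it, the way our own non-negotiable commitments sit outside our moment-to-moment reasoning.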

But I still struggle with describing what AI we can trust would look like, primarily because I can’t do it for human beings.

Trust isn’t just the outcome of internal processes, nor is it based on an objectively consistent list of external actions. Trust — in type and amount — is dependent on circumstance, especially the implications of any experience. I don’t trust people as much as I trust the systems of law, culture, and common sense to which we are all held accountable. It’s not perfect, but maybe that’s why trust isn’t a synonym for guarantee.

And, if it’s not something that an organic or artificial agent possesses or declares, but rather something that we bestow upon it, then maybe we’ll never build trustworthy AI.

Maybe we’ll just have to learn to trust it?
