Solomon’s Code: being human in a world of intelligent machines, and the need for a “Digital Magna Carta”
January 23, 2019
Popular perception tends to frame AI in terms of game-changing narratives: Deep Blue beating Garry Kasparov at chess, for example. But it’s the way these AI applications are “getting into our heads” and making decisions for us that really influences our lives. That’s not to say the big, headline-grabbing breakthroughs aren’t important; they are.
But it’s the proliferation of prosaic apps and bots that changes our lives the most, by either empowering or counteracting who we are and what we do. Today, we turn a rapidly growing number of our decisions over to these machines, often without knowing it—and even more often without understanding the second- and third-order effects of both the technologies and our decisions to rely on them.
There is genuine power in what we call a “symbio-intelligent” partnership between human, machine, and natural intelligences. These relationships can not only optimize economic interests but also improve human well-being, create a more purposeful workplace, and bring more fulfillment to our lives.
However, mitigating the risks while taking advantage of the opportunities will require a serious, multidisciplinary consideration of how AI influences human values, trust, and power relationships. Whether or not we acknowledge their existence in our everyday life, these questions are no longer just thought exercises or fodder for science fiction.
In many ways, these technologies can challenge what it means to be human, and their ramifications already affect us in real and often subtle ways. We need to understand how.
That’s the view of Olaf Groth and Mark Nitzberg, co-authors of a new book called Solomon’s Code: Humanity in a World of Thinking Machines. The book explores artificial intelligence and the broader human, ethical, and societal implications that all leaders need to consider, and argues that the shift in the balance of power between intelligent machines and humans is already here.
Olaf Groth is Professor of Strategy, Innovation and Economics at Hult International Business School and CEO of the advisory network Cambrian.ai, whilst Mark Nitzberg is Executive Director of UC Berkeley’s Center for Human-Compatible AI.
They call for a “Digital Magna Carta,” a broadly accepted charter developed by a multi-stakeholder congress that would help guide the development of advanced technologies to harness their power for the benefit of all humanity.
The book provides a useful distinction between the cognitive capability that we often associate with AI processes, and the more human elements of consciousness and conscience.
Could machines attain consciousness some day as they become more powerful and complex? It’s hard to say. But there’s little doubt that, as machines become more capable, humans will start to think of them as something conscious—if for no other reason than our natural inclination to anthropomorphize.
Machines are already learning to recognize our emotional states and our physical health. Once they start reflecting that back to us and adjusting their behavior accordingly, we will be tempted to develop a certain rapport with them, potentially becoming more trusting or more intimate because the machine recognizes us in our various states.
Consciousness is hard to define and may well be an emergent property, rather than something you can easily create or, in turn, reduce to its parts. So, could it happen as we put more and more elements together, from the realms of AI, quantum computing, or brain-computer interfaces? We can’t exclude that possibility.
Either way, we need to make sure we’re charting out a clear path and guardrails for this development through the Three Cs in machines: cognition (where AI is today); consciousness (where AI could go); and conscience (what we need to instill in AI before we get there). The real concern is that we reach machine consciousness—or what humans decide to grant as consciousness—without a conscience. If that happens, we will have created an artificial sociopath.
The future of humanity will be radically different from what we see today. As Ray Kurzweil put it, “We won’t experience 100 years of progress in the 21st century; it will be more like 20,000 years of progress.”