Runaway human intelligence

This piece is from 2013. An edited version appeared on TheConversation.com under the title "Super-Intelligent Machines Aren't to be Feared", and a longer version, with a slightly more polished academic argument, appeared in the 2013 Proceedings of the International Living Machines Conference as "The AI Singularity and Runaway Human Intelligence".



Fear of advancing Artificial Intelligence (AI) has captured the popular imagination for several decades now, and its roots go back even further. The age-old concern that humankind will overreach and seal its own doom stretches back at least as far as the Ancient Greeks, and was explored in Milton's Paradise Lost and, later, Mary Shelley's Frankenstein. Present-day Prometheans are, supposedly, the scientists who are advancing AI, and who therefore run the risk of creating machines more intelligent than us, capable of advancing beyond human control. The worry, epitomised by the sleek robots of films such as A.I., I, Robot and Terminator, is that such machines could take over the world and replace humans as the dominant 'life-form' on our planet.

This idea has long pervaded dystopian science fiction and even extends to much academic writing: a recent review by Luke Muehlhauser, Executive Director of the Machine Intelligence Research Institute, suggests that the default outcome of advanced AI is human extinction. If this is the case, then we should certainly be concerned.

Many of the worries voiced by Muehlhauser and others are based on the capacity of AI to surpass human intelligence at some future moment, often referred to as the AI "singularity". Beyond this point, AI might continue to improve itself (so-called "runaway AI"), potentially at an accelerating rate, leaving human intelligence ever further behind. The question then becomes: what use will humankind be to super-intelligent AI?

However, this oft-imagined and bleak future is based on a particular assumption about how we should compare artificial and human intelligence. Human intelligence, let's call it HI, is usually thought of as the 'raw brain power' of the average individual, which, having evolved to its current capacity around 100,000 years ago, hasn't changed much since and is not likely to improve any time soon. Based on this comparison, it does, indeed, seem plausible that AI could surpass HI in many of its major aspects in the foreseeable future. However, there are other comparisons we can make that might be more appropriate and more informative. For instance, perhaps we shouldn't be comparing AI with individual human intelligence, but with the collective intelligence of humanity at the group level. After all, AI, as an entity, can stretch across multiple machines; likewise, humans share their capabilities with each other, making us, collectively, much more than the sum of our parts (individual minds).

Furthermore, why strip us humans of our intelligence-enhancing artefacts when comparing AI and HI? Since the Stone Age (more precisely, the Upper Paleolithic period, roughly 10,000 to 50,000 years ago), humans have used external symbol systems to store and communicate knowledge and to boost their individual and collective reasoning capacity. Indeed, computers, the internet, even AI itself, are just the latest inventions in a set of technologies whose earlier members include red ochre (for cave painting), papyrus, the abacus, the typewriter and the telephone. By inventing these intelligence-boosting and knowledge-preserving technologies, we have exponentially increased our shared knowledge and our ability to apply it to control our environment according to our goals.

We might describe this historical acceleration as "runaway human intelligence" (to contrast with the notion of runaway AI), noting that it has happened at both an individual and a societal level, fuelled by positive feedback as cultural and scientific developments have led to a larger, longer-lived and better-educated world population.

Now, as a result of our most recent technologies (the internet, cloud and mobile computing), we are more connected and better able to share our intelligence than ever before, contributing the raw processing power of our individual brains to an "enhanced collective intelligence" (or ECI), or what Francis Heylighen has called a Global Brain. Further, this species-level, technology-enhanced intelligence has no obvious ceiling. We can continue to create technologies that complement our natural intelligence, allow us to communicate faster, and make us collectively smarter. Comparing future AI with ECI (rather than HI) then renders the notion that humanity will be outstripped and left behind much less plausible.

One question we might still ask is: how likely is it that humanity will cease to exploit advances in AI that have the potential to boost our collective intelligence? One worrying scenario is that a split could emerge between AI and ECI, with a sneaky and malevolent version of AI attempting to conceal its advances, biding its time until it is ready to eliminate all the unnecessary humans (Skynet, the self-aware AI in Terminator, is a fictional example of this).

Back to the extinction threat again. But this possibility underestimates the contribution of our biological intelligence to any future human-machine collective. There are many things that our brains and bodies do exceptionally well, which will be very hard for machines to master, and for which there will be little incentive to develop machine replacements. Like symbiotic systems in nature, the future partnership of people with intelligent machines will be successful by virtue of the complementarity of its two halves.

The more plausible scenario, then, is that our enhanced collective intelligence will continue its runaway path, but in an increasingly bio-hybrid (part biological, part artificial) form, due to greater and deeper integration between humans and our intelligence-enhancing technologies. What is good for AI, then, will also be good for us.

