So 2015 has come and largely gone and we still don’t have those hover-boards that the cinematic masterpiece Back to the Future II promised us. I know this is disappointing to many, so what if I told you about something better to look forward to? What if I opened your eyes to the impending robotic revolution?
By robotic revolution I mean the point at which artificial intelligence (AI) can be considered smarter than garden-variety human intelligence. This is commonly referred to as the technological singularity by people far more qualified than I am, who no doubt have many more letters after their names. These same people widely agree that the technological singularity is coming, though they disagree on exactly how far down the horizon it is and even whether it will be a force for good, bad or ugly.
There are people such as Vernor Vinge (professor of computer science, author and all-round high flyer) who, in his 1993 paper “The Coming Technological Singularity: How to Survive in the Post-Human Era” (paper here), estimated that the technological singularity will kick down the door and enter our lives around 2023 (with a range extending from 2005 to 2030). This estimate sits on the earlier side of the fence compared to others offered in the field (link here).
Another respected author who agrees with him is Ray Kurzweil, who in a 2001 article (essay here) echoed Vinge’s sentiment and explained it using the law of accelerating returns. In the simplest terms, this means that human technological progress undergoes exponential growth, driven by positive feedback loops that simultaneously refine technological processes and enable more resources to be thrown at them. As a result, the history of technological progress can’t be used to project future growth unless that exponential growth is accounted for.
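To make the point concrete, here is a toy projection (all numbers invented purely for illustration) contrasting a linear extrapolation of past progress with an exponential one:

```python
# Toy numbers: suppose some capability sits at 2 units today,
# having gained 1 unit over the previous decade.
current = 2
gain_per_decade = 1  # what a linear extrapolation assumes will continue

for decade in range(1, 5):
    linear = current + gain_per_decade * decade  # steady growth
    exponential = current * 2 ** decade          # doubling each decade
    print(f"{decade * 10} years: linear={linear}, exponential={exponential}")
```

After 40 years the linear forecast reaches 6 units while the exponential one reaches 32, which is why projecting from past averages badly understates an exponential process.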
A textbook example of this exponential growth is Moore’s law, which states that the number of transistors on a microchip, and hence its computational power, doubles every 18 months. This law is often spoken about as if it will hold true forever, which is far from the case: if transistors continue to be placed on a silicon base, only three or four more iterations of Moore’s law remain before they can’t be packed any closer together and a process called quantum tunnelling takes effect (this is a large topic and I will try to talk about it another time).
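As a back-of-the-envelope sketch of how that doubling compounds (the starting count here is a made-up round number, not a real chip):

```python
def projected_transistors(start_count, years, doubling_months=18):
    """Project a transistor count forward under Moore's law."""
    doublings = years * 12 / doubling_months
    return start_count * 2 ** doublings

# A hypothetical 1-billion-transistor chip, six years on:
# 6 years = 4 doubling periods, so the count grows 16-fold.
print(projected_transistors(1_000_000_000, 6))  # 16000000000.0
```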
The result of Moore’s law is a future in which the raw power of computation far exceeds that of the present day and blows human processing power out of the water. Raw power alone, however, may not be enough for artificial intelligence to supersede human minds for the top intellectual spot. In his 2005 book (link here), Ray Kurzweil popularised the distinction between narrow-sense (weak) AI and broad-sense (strong) AI. Weak AI is the AI that you and I know and love: a program that carries out a specific set of functions, often performing them very well and/or at lightning speed. Ask weak AI to do anything outside of its programming and it’ll stare back at you with blank puppy-dog eyes. Strong AI, however, is a (so far hypothetical) AI capable of a wide variety of functions, often with the ability to learn by extrapolating from the data available to it, much like the human mind.
This idea of strong AI was originally fleshed out in an earlier thought experiment devised in 1980 by the philosopher John Searle, called the Chinese room. In the experiment, a man who is not fluent in a particular language (Chinese in this example) receives written Chinese instructions and, by referring to an instruction manual written in a language he can understand, must produce a written Chinese response. Since the man has no understanding of the meaning of what he is reading and is only carrying out instructions from the manual, he is analogous to a weak AI blindly following its coding. If any written input deviates from the man’s manual (or the AI’s code), such as colloquialisms or spelling errors, then both flounder and can’t produce a meaningful response.
Conversely, a strong AI can be considered analogous to a man fluent in the language at hand: he can interpret the written instructions, respond accordingly and accommodate textual errors. Both the fluent man and the strong AI can be said to truly understand the instructions before them by utilising a mind (or something else approximating a mind).
The limits of weak AI are not the only roadblocks on the way to the technological singularity, as shown by a recent simulation of the human brain carried out by IBM (Wong et al. 2012, article here) using a new microchip dubbed “TrueNorth”. This microchip performs what IBM calls “cognitive computing” and simulates human behaviour more closely than conventional chips. Even so, the simulation ran 1,500 times slower than the human brain and consumed 12 gigawatts compared to the brain’s 20 watts. These results suggest that it’s the behaviour of AI, rather than its processing power, that may prevent the technological singularity from occurring (link here).
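Taking those figures at face value, the size of the gap between the simulation and the brain is easy to quantify:

```python
brain_watts = 20          # estimated power draw of the human brain
simulation_watts = 12e9   # 12 gigawatts quoted for the simulation
slowdown = 1500           # the simulation ran 1,500x slower than real time

power_gap = simulation_watts / brain_watts  # 600 million times more power
# Per unit of energy at real-time speed, the brain comes out ahead by:
combined_gap = slowdown * power_gap
print(f"power gap: {power_gap:.0e}, combined: {combined_gap:.0e}")
```

That works out to a power gap of about 6 x 10^8, and a combined speed-and-energy gap of about 9 x 10^11.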
Further evidence that processing power isn’t the main obstacle to the singularity is offered by the Chinese supercomputer ‘Tianhe-2’ (summary here), which clocks in at 3.4 x 10^17 calculations per second, faster than the human mind, which is estimated to manage around 10^16 calculations per second. Quantifying the power of the human mind is difficult, though, as there are few obvious metrics that produce meaningful results.
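Using the figures quoted above, the hardware lead is already substantial:

```python
tianhe2_ops = 3.4e17  # Tianhe-2, calculations per second
brain_ops = 1e16      # rough estimate for the human brain

# By this (admittedly crude) metric, Tianhe-2 is ~34x faster:
print(round(tianhe2_ops / brain_ops))  # 34
```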
As we come to the end of this rabbit hole, it seems to me that the technological singularity isn’t being held back by computational power so much as by the design and behaviour of AI. It’s not all bad news, though: there are solutions such as intelligence amplification that I will (try to) explain at a later time.