Kurzweil's Law of Accelerating Returns
[Feb. 2nd, 2013 | 10:37 pm]
My review of The Great Stagnation provoked sufficient discussion of Kurzweil and whether his ideas made sense that a friend (who wishes to remain anonymous) offered to do the research if I would write a post about him. This post is mostly zir research, which explains why it is unusually complete.
Kurzweil's Law of Accelerating Returns is the opposite of the Great Stagnation theory. It says that instead of technological change gradually sputtering to a halt as all the interesting ideas are exhausted, technological change builds on technological change to produce even faster technological change. Some technologies allow other technologies, as the combustion engine allowed the airplane. Other times they make research itself more efficient, like computer-assisted design of new circuits. And other times they allow society to become richer and more populous, driving future growth.
Most people think of technological advance as linear: technology advanced a little bit in our parents' generation, it's advancing another little bit in our generation, and it'll advance another little bit in our kids' generation. Kurzweil's ideas imply it's exponential: our generation will have more advances, and more exciting ones, than our parents', and our children's more still, until eventually things go crazy and we're having new world-shaking technologies every minute (this could either be a reductio ad absurdum, or an argument for a technological singularity).
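The difference between the two pictures is easy to make concrete. Here is a toy sketch (the numbers are made up for illustration, not Kurzweil's data) comparing "a little bit per generation" with compounding advance:

```python
# Toy comparison of linear vs. exponential technological advance.
# Units are arbitrary "progress units"; both start at the same place.

def linear_progress(generations, step=1.0):
    """A little bit of advance per generation, added on."""
    return 1.0 + step * generations

def exponential_progress(generations, rate=2.0):
    """Each generation's advance builds on the last, multiplicatively."""
    return rate ** generations

for g in [1, 2, 5, 10]:
    print(g, linear_progress(g), exponential_progress(g))
```

For the first couple of generations the two look similar, which is why lived experience doesn't settle the question; by generation ten the exponential picture is roughly a hundred times ahead.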
One of his most common arguments is that the obvious and basic insight that things happen slowly in geologic time, a little quicker in evolutionary time, quicker still in human time, and very quickly indeed in modern times is actually another way of saying there's exponential growth in novelty. One of his books includes this graph:
And despite agreeing with the basic insight, I want to mention just how terrible this graph is.
The horizontal axis is time before present, a perfectly valid measure. But the vertical axis is "time to next event". This moves cherry-picking from a potentially annoying but non-disastrous error to the force driving the entire argument. Once you use "time to next event" as a measure, your choice of which events to include doesn't just select points along some underlying trend; it determines both coordinates of every point, and the human ability to draw lines through a scatter plot does the rest.
For example, suppose that instead of using "art, early cities" as a single invention, he had split it into "art" and "early cities", with early cities being invented a few hundred years after art. In that case, "early cities" would be quite near the bottom of the graph, a huge deviation from the line. In fact, note that the closer together early cities and art are, the worse his graph looks, but that if he assumes they were simultaneous and lumps them together into a single "early cities, art" category, his graph looks perfect.
This is super sketchy.
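You can see the mechanics of the lumping trick in a few lines of code. The dates below are hypothetical round numbers chosen only to illustrate the effect, not taken from Kurzweil's chart:

```python
# "Time to next event" computed two ways: with "art" and "early cities"
# lumped into one event, and with them split 500 years apart.
# Dates are years before present, in descending order (all hypothetical).

def time_to_next(dates):
    """Return (date, gap until the following event) pairs."""
    return [(d, d - nxt) for d, nxt in zip(dates, dates[1:])]

lumped = [4_000_000, 100_000, 40_000, 5_000, 500]
split = [4_000_000, 100_000, 40_000, 39_500, 5_000, 500]

print(time_to_next(lumped))
print(time_to_next(split))
```

In the lumped version the point at 40,000 years ago gets a comfortable 35,000-year gap, right on a nice log-log line. Split the events and "art" suddenly has a 500-year gap, an enormous outlier on a log scale, even though nothing about history changed.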
But he has a response. In order to prove he's not just cherry-picking events to fit his theory, he does the same thing with lists of important events put together by other people, and comes up with the following more colorful plot:
Modis (the black diamond on the graph above) writes a paper critiquing Kurzweil's methodology here as well. In it he protests that only one of the data sets (Sagan's) covered and dated the entire range of events from the Big Bang to the present, and that in fact some of the data sets on there are based entirely on the other data sets yet presented as independent confirmation.
A more damning critique is that in many of these data sets, the whole point was to talk about things at specific intervals. One of them (Heidmann, the purple square) is from a book explaining scientific notation by listing important events that happened 10, 100, 1,000, etc. years ago, so of course those are going to follow exponential laws. Even when the bias wasn't that blatant, it may be that people want to seem "fair" by giving representative events from each "era" - a list of "cosmic events" that included ten different scientific discoveries from the twentieth century, plus the Cambrian explosion, would seem pretty non-cosmic.
Overall I think it's obviously true that if you define "technological advance" broadly enough, it happens more quickly in the 21st century than back in evolutionary times when it took ten million years to come up with a slightly different enzyme configuration, and most of these criticisms are haggling over exactly how neat the line is.
A Moore Objective Measure
Besides, there is a much more measurable, non-cherry-pickable area where Kurzweil seems to have an equally strong proof. This is Moore's Law, which gets formulated in a couple ways but is usually something along the lines of "the number of transistors on a chip doubles every two years".
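Moore's Law is simple enough to write down directly. A minimal sketch, using the well-known starting point of the Intel 4004 (about 2,300 transistors in 1971) as an example:

```python
# Moore's Law as usually stated: transistor counts double every two years.

def transistors(years_elapsed, start_count, doubling_period=2.0):
    """Projected transistor count after years_elapsed years of doubling."""
    return start_count * 2 ** (years_elapsed / doubling_period)

# Starting from the Intel 4004's ~2,300 transistors in 1971,
# forty years of biennial doubling predicts about 2.4 billion:
print(transistors(40, 2300))
```

That projection lands remarkably close to the multi-billion-transistor chips actually shipping around 2011, which is why the law has held up so well as a forecasting tool.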
Different formulations of Moore's Law have different doubling periods and different accuracy levels, but they all seem to be pretty much on track. Here are graphs of transistor counts and PC hard drive capacity, respectively:
There are some physical limitations on whether Moore's Law can continue in its current form indefinitely, but it might be possible for paradigm-shifting technologies to continue to capture the "spirit" of the law. For example, a quantum computer might not literally have more transistors, but it might be able to do more efficient calculations as if it did. In this spirit, Kurzweil replaces very specific measures like "number of transistors" with his own "million instructions per second per $1000" measure. Annnnd
This also looks pretty good, and his data have been mostly confirmed by various other sources. Although there's some confusion about the measures used, most sources agree that MIPS is likely, if anything, to understate growth, and that the trend may even be better than exponential.
You can generalize Moore's Law to lots of different aspects of electronics, from number of pixels on an average camera to capacity of optical fibers. Kurzweil wants to take it even further and say that exponential growth applies to all "information technologies". He believes that "information technologies" will eventually include all technologies even tangentially associated with data, which causes him to include for example medicine:
"Drugs are essentially an information technology, and we see the same doubling of price-performance each year as we do with other forms of information technology such as computers, communications, and DNA base-pair sequencing. AIDS drugs started out not working very well and costing tens of thousands of dollars per patient per year. Today these drugs work reasonably well and are approaching one hundred dollars per patient per year in poor countries such as those in Africa."
This does not really seem to mesh with what anyone else believes is happening in medicine right now. AIDS is sort of a huge exception in terms of being one of the biggest medical success stories of recent decades, and even AIDS drugs are not doing as well as he thinks. According to the WHO, the median treatment cost per year for AIDS drugs declined from $245 in 2003 to $140 in 2006 to $107 in 2009, and in the last three years has declined only slightly, to $93 in 2012. To fit Kurzweil's prediction, there would need to have been a more than 30-fold increase in the effectiveness of AIDS drugs in the last six years (it's not even clear what that would mean, given that by 2006 the drugs had long been effective enough to keep AIDS from being a death sentence for those who could afford them). And the declining price of AIDS drugs has had at least as much to do with successfully getting Third World countries exemptions from patent laws as with any tech increase.
MRI is also kind of like this. A paper by Sandberg and Bostrom claims that it's "impossible" to get a resolution better than about 8 micrometers unless you want to make people sit in an MRI machine for thirty hours. And in fact there has not been significant progress in this field in the last ten years.
(This is also a good example of how Kurzweil's book can be misleading. This version of the graph drops a data point from 2000 that was present in a previous version; that point was about the same level as the 2012 data point, and would have made it clear that, contrary to how it looks now, MRI progress has in fact stopped.)
Software is another information technology that just isn't doing as well as predicted. Computer scientist Ernest Davis writes of AI research:
Moreover, the success rates for such AI tasks generally reach a plateau, often well below 100%, beyond which progress is extremely slow and difficult. Once such a plateau has been reached, an improvement of accuracy of 3% — e.g. from 60% to 62% accuracy — is noteworthy and requires months of labor, applying a half-dozen new machine learning techniques to some vast new data set, and using immense amounts of computational resources. An improvement of 5% is remarkable, and an improvement of 10% is spectacular.
In a lot of these types of fields (machine translation is a similar one) it looks more like progress is approaching an asymptote than growing exponentially.
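A saturating curve like the one Davis describes is easy to model. This is a toy model of my own, not anything from Davis's paper: accuracy approaching a ceiling well below 100%, with each year's gain smaller than the last:

```python
import math

# Toy saturating-progress model: accuracy approaches a ceiling below 100%,
# so yearly gains shrink instead of compounding. Parameters are illustrative.

def accuracy(year, ceiling=0.7, rate=0.3):
    """Accuracy after `year` years, asymptoting toward `ceiling`."""
    return ceiling * (1 - math.exp(-rate * year))

gains = [accuracy(y + 1) - accuracy(y) for y in range(10)]
print([round(g, 3) for g in gains])  # each gain smaller than the one before
```

Early on this curve is hard to tell apart from exponential growth, which is part of why the disagreement persists; the two only come apart once the plateau starts to bite.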
Stuart Armstrong did a pretty complete evaluation of some of Kurzweil's more specific predictions that can be found here.
Overall the idea that there is more rapid change now than in deep geologic time seems correct, and although we can dispute particular data points there does seem to be something to the idea that the growth is exponential.
Moore's Law applies to various digital technologies, is at least exponential, and seems to still be in effect.
A lot of technologies are not growing exponentially, or started growing exponentially and then plateaued.
In general it does seem like in the best case technology can grow exponentially, and that an outside view that it will keep doing so can trump an inside view telling us that come on, computers can never have an entire ten megabytes of memory, that would be ridiculous. But it also seems that there is no hard-and-fast rule that all technologies will always grow exponentially and never plateau. I'm disappointed by how little research there is quantifying medical technology growth, as that seems to be one area that a lot of people think is plateauing.
If I had to take one lesson from Kurzweil, it would be that once again, the absurdity heuristic doesn't work. Exponential growth can go on a lot longer and change things a lot more radically than someone who believes in linear growth would think remotely possible. But I don't think it would be a good idea to count on it.