There is a lot of talk about exponential technologies nowadays. You had better take it with a pinch of salt, though it is becoming clear that many technologies do follow a generalised version of Moore’s law. This is very interesting because it opens up the possibility of making quantitative forecasts:
Technological progress is widely acknowledged as the main driver of economic growth, and thus any method for improved technological forecasting is potentially very useful. Given that technological progress depends on innovation, which is generally thought of as something new and unanticipated, forecasting it might seem to be an oxymoron. In fact there are several postulated laws for technological improvement, such as Moore’s law and Wright’s law, that have been used to make predictions about technology cost and performance. But how well do these methods work?
J. Doyne Farmer and François Lafond provide a quantitative answer to this question in a paper(1) published in 2016. The authors use a simple approach to forecasting motivated by Moore’s law. Gordon Moore famously predicted in 1965 that the number of transistors on integrated circuits would double every two years, i.e. improve at an annual rate of about 40%. Exponential improvement is a good approximation for other types of computer hardware as well, such as hard drives. In fact, exponential improvement is a much more general phenomenon that applies to many different technologies, even if in most cases the exponential rates are much slower (genomics being a striking exception!)
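The conversion from a doubling time to an annual rate is simple compound-growth arithmetic, easily checked in a couple of lines (purely illustrative):

```python
# A doubling every T years implies an annual growth factor of 2**(1/T).
doubling_time = 2.0                      # years, as in Moore's 1965 prediction
annual_rate = 2 ** (1 / doubling_time) - 1
print(f"{annual_rate:.0%}")              # prints 41%, i.e. roughly 40% per year
```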
Moore’s law is traditionally applied as a regression of the log of the cost on a deterministic time trend. Farmer and Lafond reformulate it as a geometric random walk with drift, and they apply their model to a data set of historical prices for 53 technologies. Motivated by the structure found in the data, they further extend Moore’s law to allow for the possibility that changes in price are positively autocorrelated in time. Their key assumption is that all technologies follow the same random process, even if the drift and volatility parameters of that process are technology specific.
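The basic idea of a random walk with drift on log cost can be sketched in a few lines. This is a minimal illustration with made-up numbers, not the paper’s actual estimator (which also handles autocorrelation and pools volatility estimates across technologies):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "observed" log-cost series generated by a random walk
# with drift mu and volatility sigma (values are made up).
mu, sigma = -0.10, 0.05          # ~10% annual cost decline
n_years = 20
log_cost = np.cumsum(mu + sigma * rng.standard_normal(n_years))

# Fitting the model: the drift and volatility are just the mean and
# standard deviation of the first differences of log cost.
diffs = np.diff(log_cost)
mu_hat = diffs.mean()            # estimated drift
sigma_hat = diffs.std(ddof=1)    # estimated volatility

# Point forecast h years ahead; the forecast standard error widens
# like sqrt(h), which is what makes the forecast errors predictable.
h = 5
forecast = log_cost[-1] + mu_hat * h
std_err = sigma_hat * np.sqrt(h)
```

The key property is the last line: because the model is a random walk, the uncertainty around the point forecast grows in a known way with the horizon, so the forecast comes with a distribution rather than a single number.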
They do not claim that the proposed generalisations of Moore’s law provide the most accurate possible forecasts of technological progress. They acknowledge that there is a large literature on experience curves, studying the relationship between cost and cumulative production, originally suggested by Wright in 1936, and that many authors have proposed alternatives and generalisations. In a previous paper(2), Nagy et al. tested some of those alternatives on a similar data set. It seems likely that methods using auxiliary data such as production, patent activity, or R&D could yield improvements over the simple method used in the paper.
We anticipate that theories will eventually provide causal explanations for why technologies improve at such different rates, and that this will result in better forecasts.
The approach of basing technological forecasts on historical data stands in sharp contrast to the most widely used method, which is based on expert opinions (e.g. the Delphi method). The use of expert opinions is clearly valuable, but it also has serious drawbacks: expert opinions are subjective and can be biased for a variety of reasons, including common information, herding, or vested interests.
The method proposed by Farmer and Lafond provides a benchmark against which other approaches can be measured. It provides a proof of principle that technologies can be successfully forecast and that the errors in the forecasts can be reliably predicted.
(1) Farmer, J. Doyne, and François Lafond. 2016. ‘How Predictable Is Technological Progress?’ Research Policy 45 (3): 647–665.
(2) Nagy, Béla, J. Doyne Farmer, Quan M. Bui, and Jessika E. Trancik. 2013. ‘Statistical Basis for Predicting Technological Progress’. PLoS ONE 8 (2): e52669.