Research on the science of performance curves

From Evo Devo Universe

Overview

Technology performance curves, known in engineering, economics, and manufacturing as progress or production functions, and in cognitive science as learning curves or experience curves, describe the growth of technological capability or efficiency with cumulative experience or production, whether in exponential, power-law, logistic, or other fashion. Technology is interpreted broadly here to include information, computation, communication, and physical production and transformation technologies. These curves have been studied by a small group of scholars around the world since the 1930s from physical, engineering, planning, manufacturing, management, policy, computational, psychological, philosophical, and other perspectives. A few collections exist, such as the Santa Fe Institute's Performance Curve Database, but there are no broadly representative databases yet.

Given their accelerating impact on the technology environment, performance curves seem a particularly useful topic for technology innovation, strategy, economics, and policy. Yet in spite of their increasing importance, we do not presently have a broadly accepted theory of the physical and informational basis of these curves, their limits, or the reliability of long-term forecasts based on them. Performance data are growing, but remain poorly organized, and many open questions remain. The scientific, technical, and policy potential for scholarship and collaboration in this emerging area has never been greater.
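The curve families named above can be written down directly. The following is a minimal sketch; the function names and parameters are illustrative choices, not from the source. Under Wright's law (the classic power-law experience curve), each doubling of cumulative production multiplies unit cost by a fixed "progress ratio":

```python
import math

def exponential(t, a, k):
    """Exponential growth: capability y = a * e^(k*t) grows by a
    fixed fraction per unit time (e.g., Moore's-law-style trends)."""
    return a * math.exp(k * t)

def wrights_law(x, a, b):
    """Power-law experience curve (Wright's law): unit cost falls as
    cumulative production x grows, c = a * x^(-b). A 20% learning rate
    (progress ratio 0.8 per doubling) corresponds to b = -log2(0.8)."""
    return a * x ** (-b)

def logistic(t, L, k, t0):
    """Logistic (S-curve): near-exponential early growth that
    saturates at a carrying capacity L as limits are approached."""
    return L / (1.0 + math.exp(-k * (t - t0)))
```

For example, with `b = -math.log2(0.8)`, `wrights_law(2*x, a, b) / wrights_law(x, a, b)` is 0.8 for any `x`: each doubling of experience cuts cost by 20%.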

Situation

We need a better understanding of the science of performance curves.

Problems

  • How can we best improve today's early scholarship of technology performance growth?
  • What models do we have for the physical and computational foundations of technology performance curves?
  • What are first-order implications of these models for technology innovation, strategy, sustainability, economics, and policy?
  • What are the limitations of performance curve models?
  • How can we make performance curve forecasting more precise and improve our data sets?
  • How can we relate performance curves to information theory, learning theory, and complex, hierarchically structured systems?
  • How are cognitive performance curves in individual and collective learning related to technology performance curves?
  • How do these models differ from performance curve models in socioeconomic, biological, ecological, and other complex systems domains?
  • What physical processes differentiate superexponential, exponential, logistic, life cycle, and other tech performance curves?
  • When can logistic, agent-based, cellular automata, and other modeling approaches explain technology performance curve behavior?
  • Can we develop unifying theories (physical, efficiency, computational, informational, psychological) among performance curve models?
  • How do non-computational (physical process, efficiency) performance curves differ from computational (computing, memory, communication) performance curves?
  • When does exponential performance end in any technology performance curve? Under what circumstances can we predict a transition to a logistic, catastrophic, or other regime?
  • Can we reliably differentiate non-persistently exponential performance curves (market-limited, etc.) from persistently exponential (scale reduction, ERD, etc.) curves?
  • What models (perceived and actual risk, etc.) explain decreasing technology performance in some social domains (e.g., Eroom's law for new drug introduction)?
  • What models explain growth rate switches (transitions to steeper or flatter exponential modes) in technology performance curves?
  • When does technology substitution (creating a composite technology performance curve) occur in any technology platform? How can we predict the rate and extent of substitution?
  • The most rapidly accelerating performance curves appear in technologies with the greatest rates of densification, efficiency increase, and virtualization (substitution of informational for physical process), e.g., nanotechnologies, computing, and communications technologies. How do we measure these processes?
  • Densification or localization of the nodes and edges of many technological, social, and information networks also occurs over time, following a power law (Leskovec 2005). As one example, increasingly dense metropolitan areas have outcompeted less dense cities and rural areas as civilization develops, with the densest environments usually delivering greater rates of innovation and life-services efficiency per dollar and per capita (Bettencourt et al. 2007). When and why can we expect densification/localization to occur, and how do we model its contribution to performance curves?
  • For exponential curves, learning proceeds at a fixed percentage of what remains to be learned; for power laws, learning slows with experience. When is each model valid?
  • Standard deviation and skew in performance times often show power law decreases with cumulative experience. Why and when does this occur?
  • How do computer hardware and software performance curves differ, and why does hardware exhibit consistently better long-term exponential performance improvement?
  • Why are technology product outliers so often market failures? How are outliers typically distributed (normal, log-normal, etc.) vs. the curve?
  • Do performance curves that appear hyperbolic (tending toward a finite-time singularity) signal an impending phase change in physical systems?
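Several of the questions above turn on telling curve families apart from data. One simple first pass is to compare linear fits in semi-log versus log-log coordinates: an exponential trend is linear in log(cost) against experience, while a power law is linear in log(cost) against log(experience). A minimal sketch, assuming clean cost-versus-cumulative-experience data; the function names are illustrative:

```python
import math

def _linfit(xs, ys):
    """Ordinary least squares y = m*x + c; returns (m, c, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    c = my - m * mx
    ss_res = sum((y - (m * x + c)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return m, c, 1.0 - ss_res / ss_tot

def classify_curve(experience, cost):
    """Crude discriminator between the two classic learning-curve forms:
    fit log(cost) linearly against experience (exponential model) and
    against log(experience) (power-law model), and return whichever
    model leaves the higher R^2."""
    logy = [math.log(c) for c in cost]
    _, _, r2_exp = _linfit(list(experience), logy)
    _, _, r2_pow = _linfit([math.log(x) for x in experience], logy)
    return "exponential" if r2_exp >= r2_pow else "power_law"
```

Real performance data are noisy and often switch regimes mid-series, so an R-squared comparison like this is only a screening step, not a substitute for the model-selection work the questions above call for.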

Progressing on these issues

We are looking for researchers to collaborate on investigating performance curves and their larger implications. Team members who could be particularly valuable to the Performance Curve Research Project include:

  • Physicists, systems and process engineers, functional performance capability planners, management and learning theorists
  • Neuroscientists, cognitive scientists, and scholars of technology substitution, miniaturization, densification, dematerialization, virtualization, simulation, and automation
  • Computer scientists, economists, complexity theorists, and scholars of technological evolution and development, along with their critics
  • Scholars who approach performance curve study from materials science, thermodynamic, computational, informational, evolutionary, developmental, economic, competitive, cognitive science, social science, systems-theoretic, and other perspectives are welcome
  • Anyone else who has studied these issues, or is interested in helping us improve the data sets, methodology, and validation or falsification of performance curve growth models

Benefits

An improved quantitative understanding of these processes, which will allow us to better characterize technological performance in a universal context.

EDU Scholars Interested

External scholars who have published on performance curve topics from a physical or informational perspective (starter list):

  • Robert Aunger
  • Robert Bryce
  • Quan Bui
  • Eric Chaisson
  • James P. Crutchfield
  • George Ellis
  • Doyne Farmer
  • Heebyung Koh
  • Chris Magee
  • Bela Nagy
  • Jessica Trancik
  • Geoffrey West

Tools

In 2010, Bela Nagy set up a prototype website at the Santa Fe Institute, the Performance Curve Database (PCDB), to explore learning/experience curves (also known as functional performance metrics) in technology and other learning systems. The website allows researchers to download and upload datasets, and a brief video introduction to the PCDB is available.

Bibliography