Technological Singularity
Quote:
Originally Posted by Wikipedia
Technological singularity is a term used with varying meanings related to self-improving artificial intelligence, superintelligence,[1] breakdowns in the predictability of the future and accelerating change.
In 1965, I. J. Good first wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers, and thus recursively augment themselves into far greater intelligences. The first such improvements might be small, but as the machine became more intelligent it would become better at becoming more intelligent, which could lead to a cascade of self-improvements and a sudden surge to superintelligence (or a singularity).
In 1982, Vernor Vinge proposed that the creation of smarter-than-human intelligence represented a breakdown in humans' ability to model the future, for the same reason that authors cannot write realistic characters much smarter than humans: if we knew what smarter-than-human intelligences would do, we would be that smart ourselves. Vinge named this event "the Singularity" in an analogy to how then-current models of physics broke down when they tried to model the gravitational singularity at the center of a black hole. In 1993, Vinge associated the Singularity more explicitly with I. J. Good's intelligence explosion, and tried to project the arrival time of artificial intelligence using Moore's law, an approach that thereafter came to be associated with the "Singularity" concept.
Futurist Ray Kurzweil generalizes singularity to apply to the sudden growth of any technology, not just intelligence; and argues that singularity in the sense of sharply accelerating technological change is inevitably implied by a long-term pattern of accelerating change that generalizes Moore's law to technologies predating the integrated circuit, and includes material technology (especially as applied to nanotechnology), medical technology, and others. Aubrey de Grey has applied the term the "Methuselarity"[2] to the point at which medical technology improves so fast that expected human lifespan increases by more than one year per year.
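De Grey's "Methuselarity" threshold can be made concrete with a toy calculation (my own illustration, not from the article): if medicine adds a fixed number of years of expected lifespan per calendar year, remaining life expectancy shrinks when that gain is below one year per year and grows once it exceeds it.

```python
# Toy model of de Grey's "Methuselarity" / longevity escape velocity.
# Assumption (mine, for illustration): medical progress adds a constant
# `gain_per_year` years of expected lifespan each calendar year, while
# each calendar year lived consumes one year of remaining expectancy.

def remaining_expectancy(initial_remaining, gain_per_year, years):
    """Remaining life expectancy after `years` calendar years."""
    return initial_remaining + (gain_per_year - 1) * years

# Below the threshold (gain < 1 year/year), expectancy falls as usual:
print(remaining_expectancy(40, 0.2, 10))  # 40 - 0.8*10 = 32.0
# Past the Methuselarity (gain > 1 year/year), it grows instead:
print(remaining_expectancy(40, 1.5, 10))  # 40 + 0.5*10 = 45.0
```

The crossover at exactly one year of gain per year is what makes de Grey's definition a sharp threshold rather than a gradual trend.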
Robin Hanson, taking "singularity" to refer to sharp increases in the exponent of economic growth, lists the agricultural and industrial revolutions as past "singularities". Extrapolating from such past events, Hanson proposes that the next economic singularity should increase economic growth between 60 and 250 times. An innovation that allowed for the replacement of virtually all human labor could trigger this event.[3]
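To get a feel for what a 60x to 250x jump in the growth rate means, here is a back-of-the-envelope sketch (the 15-year present-day doubling time is my assumption, not Hanson's figure): for exponential growth, multiplying the growth rate by k divides the doubling time by k.

```python
# Illustrative arithmetic for Hanson's projected economic singularity.
# Assumption (mine): the world economy currently doubles roughly every
# 15 years. A 60x-250x faster growth rate shrinks the doubling time
# proportionally, since doubling time = ln(2) / growth_rate.

current_doubling_years = 15  # assumed present-day doubling time

for speedup in (60, 250):
    new_doubling_days = current_doubling_years * 365 / speedup
    print(f"{speedup}x faster growth -> doubling every ~{new_doubling_days:.0f} days")
```

Under these assumptions, Hanson's range turns a 15-year doubling time into one of roughly three weeks to three months, which conveys how discontinuous such a transition would feel.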
Eliezer Yudkowsky has suggested[4] that many of the different definitions that have been assigned to Singularity are mutually incompatible rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or smarter-than-human intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.
Some prominent technologists, such as Sun Microsystems co-founder Bill Joy, have voiced concern over the potential dangers of the Singularity.
Please share your thoughts on this. Could the Technological Singularity be a good thing, or a bad thing? Will it increase the standard of living for humans, or will robots go on a rampage with the goal of destroying the human race? Some scientists have predicted that the Technological Singularity could happen as soon as 2045.