sin·gu·lar·i·ty: the theoretical emergence of superintelligence through technological means.
The essence of the singularity is that humans will capture the most powerful form of intelligence we know about—our brains—within a machine. With this context, the singularity becomes the single point in history at which computers become as intelligent as humans. And because electronics evolve far faster than biological systems, once we pass that point, our machines will grow more intelligent far faster than humans can.
It’s an extremely difficult phenomenon to grasp and raises obvious questions:
– Will it create a robotic society that rises up against and oppresses humanity?
– Will it allow us all to live lives of luxury and perpetual vacation?
– Will human beings become irrelevant?
These questions arise as we pursue machines that become as intelligent as, and then quickly more intelligent than, any other system on the planet.
Experts on the singularity believe we are drastically underestimating the rate at which technological evolution occurs. We consider outcomes as if things will progress at their current rate. We consider the evolution of the telephone from rotary to cellular to digital, and the decades it took to realize that advance. We assume technological evolution is linear. In fact, it happens exponentially.
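The gap between those two assumptions is easy to underestimate. A minimal sketch (with made-up units of "capability" purely for illustration): compare adding one unit per year against doubling every year, over 30 years.

```python
# Illustrative only: contrast linear growth (fixed increment per year)
# with exponential growth (doubling per year) over the same 30 years.
years = 30

linear = 1 + 1 * years        # one extra unit of capability each year
exponential = 1 * 2 ** years  # capability doubles each year

print(linear)       # 31
print(exponential)  # 1073741824
```

The linear forecast ends at 31 units; the exponential one ends at over a billion. This is the intuition behind the claim that projecting today's rate forward badly undershoots.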
If we look back to the dawn of technology (think: stone tools, the wheel, fire), we don't consider the thousands of years it took to get to that point. Since those first human tools, we've used each tool to create a new, even better tool. So each time we develop a new tool, we increase the rate and impact of the next advancement.
Ok. Breathe. …because now it gets real.
At some point in this evolution we reach the singularity: the point in history at which machines are as smart as humans. And then, on the very next day, we'll have an artificial intelligence system that is better at developing artificial intelligence systems than humans are. In essence, technology will then be able to create more advanced technology than humans can. The trend becomes irreversible.
So while it is a common belief that we will one day reach the singularity (the timeline is the debate), there are still a number of things that need to happen for computers—which are brittle machines—to become as intelligent as humans:
– Natural language understanding – empowering computers to reason and understand using human language.
– Vision – recognizing objects, how they are moving, and what to conclude from those perceptions.
– Consciousness – the ability to feel and experience the world around it.
With that, there are two competing theories as to where the future of humans and machines will lead.
Theory 1: We built computers in the '80s, networked them in the '90s, and in the last decade have taken advantage of cheap sensors (accelerometers, cameras, etc.) to empower machines to observe and manipulate the world around us. In essence, we've started a robot revolution that ends one of two ways: if we're lucky, they will treat us like pets; if we're unlucky, they will treat us like food.
Theory 2: Humans will never build these machines. Why bother? We already have the ability to procreate on our own, and we can augment our own capacities by building technology around us—while we remain the conscious managers of that system.
What’s your belief?