For decades, scientists, writers and visionaries have dreamt of a world in which robots make coffee for their bosses, or even eradicate life on the planet. Such robots do not yet exist, although in Japan, with the development of robots such as Asimo, that day may not be far off. In the United States, scientists are slowly waking up to the idea that the dream of artificial intelligence that can teach itself will one day become reality.
Intelligent machines are abundant, and they are getting smarter and smarter. The moment when their intelligence exceeds that of people is called the Singularity. How and when computers will attain superintelligence was the question posed at the Singularity Summit in San Francisco.
The scientists at the conference set up a framework for a discussion around many interesting questions. Will computers be more intelligent than people 20 years from now? How do we ensure that they remain “nice”? Do they need morality, emotion and consciousness in order to understand us? Or should we try to make them think strictly rationally?
The conference was organised by the Singularity Institute and was sponsored by, among others, Peter Thiel, founder of PayPal, the company that was sold to eBay in 2002 for one and a half billion dollars. The thirty-nine-year-old Thiel now uses his money and fame to put the Singularity on the agenda and bring it to the attention of the general public. “It is five minutes to midnight, and people underestimate it,” Thiel thinks. “There is still time to influence the process; there are choices that must be made.”
It is conceivable that the moment of the Singularity will pass unnoticed, because the question is whether people are intelligent enough to recognise superintelligence. Nobody drew attention to the moment when Google had indexed more pages than human memory can contain. Perhaps we will nevertheless experience the Singularity if machines share their intelligence with our brains once our bodies have been upgraded with nanotechnology. Or there may be a clearly perceptible explosion of intelligence, because a superintelligence turns out to improve itself indefinitely and very rapidly.
Technological acceleration is exponential: hardware becomes faster, software algorithms more powerful, and millions of people on the internet are able to contribute their knowledge and intelligence by means of Web 2.0 services. The company Artificial Development has software that can simulate 100 billion neurones, roughly the number of neurones in one human brain. The system is soon to be opened to the public to run independent brain simulations.
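To get a feel for what exponential acceleration means in practice, here is a back-of-the-envelope sketch in Python. The one-million-neurone starting point and the 18-month doubling time are illustrative assumptions, not figures from the article; only the roughly 100 billion neurones of a human brain comes from the text above.

```python
import math

# Hypothetical assumptions (not from the article):
start = 1_000_000          # assumed starting point: 1 million simulated neurones
doubling_months = 18       # assumed Moore's-law-style doubling time
# From the article: ~100 billion neurones in one human brain
target = 100_000_000_000

# Number of doublings needed to grow from start to target
doublings = math.log2(target / start)
years = doublings * doubling_months / 12

print(f"{doublings:.1f} doublings, about {years:.0f} years")
```

Under these assumed numbers, reaching a brain-scale simulation takes fewer than 17 doublings, on the order of a quarter of a century; the point of the sketch is only that exponential growth covers five orders of magnitude surprisingly quickly.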
The advantages of powerful artificial intelligence are enormous. In combination with other technologies such as gene technology and nanotechnology, it could improve and extend our lives drastically. We would be better able to predict the impact of our actions, and our political decision-making might become more rational.
A series of scientists spoke at the conference, such as Rodney Brooks, professor of robotics at MIT (Massachusetts Institute of Technology) and chief technology officer at iRobot, a firm that sells robots to the army; Peter Norvig, director of research at Google and co-author of “Artificial Intelligence: A Modern Approach”; and Christine Peterson, founder of the Foresight Nanotech Institute. Ray Kurzweil, author of the book “The Singularity Is Near”, spoke via a live video link, answering questions about his ideas on the future.
The book “The Singularity Is Near”, published in 2005, made an impression in Silicon Valley. It gives a clear picture of developments in gene technology, computer technology and nanotechnology, how they will combine, and what the outcome could be. Ray Kurzweil, an inventor who wrote his first computer program back in the 1960s, predicts in the book that these developments are accelerating. He expects the moment of the Singularity in the year 2029, a moment that will be difficult to recognise because by then most of us will already be equipped with nanotechnology, as a result of which we will not only be more intelligent and healthier but, above all, live longer. It is no coincidence that Kurzweil has written a book with a dietician about extending life through exercise and vitamins: he desperately wants to live to see that moment.
And that characterises this conference. On the one hand there are the idealists, who look to the future in the hope of a better life. They expect that superintelligence, which will emerge immediately after the moment of the Singularity, will improve life: there will be peace, world hunger and the climate problem will be solved and, above all, death will be overcome. On the other hand, the sceptics do not believe in such rapid growth, and they associate huge dangers with an explosion of intelligence. Machines that can teach themselves and whose intelligence exceeds human IQ could well turn out to be uncontrollable, not at all pleasant for people, and disruptive to all social processes.
Or, in the words of Paul Saffo, professor at Stanford University, essayist, consultant at the Samsung Advanced Institute of Technology and member of the Institute for the Future: “In the best case we will be like pets; in the worst case we will be used as food.”
If we want to be well prepared, we must build in sufficient safeguards. Christine Peterson of the Foresight Nanotech Institute argues for the same openness and transparency used in developing open-source software. In today’s society, agencies can check other agencies; by the same principle, it should be possible to have one artificial intelligence check another. We can also learn from the security of IT systems: the internet is attacked daily by numerous viruses, which are dealt with rather effectively.
A worrying question is whether intelligent computers need morality and consciousness. At first sight it seems safer to give them no consciousness, but the question is whether a robot can understand and serve people without knowing itself well. If we decide to make robots conscious, then we also need to give them norms and values. But who will do that, and how? Eliezer Yudkowsky, co-founder of the Singularity Institute, thinks that we could be in for some unpleasant surprises if we teach robots some, but not all, human values. A robot must be well brought up, like a Boy Scout, according to J. Storrs Hall, author of “Beyond AI”.
Whether the public will accept superintelligence depends on fear and faith. According to historian and futurist Paul Saffo, we need a positive, catchy story that everyone can understand. He read aloud “All Watched Over by Machines of Loving Grace”, written by Richard Brautigan in 1967, when computers were distrusted by many:
I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.
I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
as if they were flowers
with spinning blossoms.
I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.
This article is the first of four on the Singularity. The following parts will look more closely at the different schools of thought, at morality and consciousness, and at the threats and challenges of artificial intelligence.
Singularity Summit 2007
Google news concerning Singularity
What is the Singularity?
Wikipedia concerning Singularity
The Domo robot of MIT
Asimo, the Japanese robot
The world according to Ray Kurzweil
The world according to Paul Saffo
The world according to Peter Norvig
A contribution by Gerrit-Jan Wielinga and Bob Overbeeke
Gerrit-Jan Wielinga is a writer and freelance journalist. Bob Overbeeke is a business developer at XS4ALL and writes for www.netr.nl.