Benny Peiser / 12.09.2007 / 13:04

Humans as Pets?

By 2030, or by 2050 at the latest, will a super-smart artificial intelligence decide to keep humans around as pets? Will it instead choose to turn the entire Earth, including the messy organic bits like us, into computronium? Or is there a third alternative?

These were some of the questions pondered by the 600 or so technosavants meeting in the Palace of Fine Arts at the second annual Singularity Summit this past weekend. The meeting was convened by the Singularity Institute for Artificial Intelligence, whose chief goal is to make sure that whatever smarter-than-human artificial intelligence is eventually spawned by exponentially accelerating information technology will be friendly to humans.

What is the “Singularity”? As Eliezer Yudkowsky, cofounder of the Singularity Institute, explained, the idea was first propounded by mathematician and sci-fi writer Vernor Vinge in the 1970s. Vinge found it difficult to write about a future in which greater-than-human intelligence arose. Why? Because humanity would stand in relation to that intelligence as an ant does to us today. For Vinge it was impossible to imagine what kind of future such superintelligences might craft. He analogized that future to black holes, which are singularities surrounded by an event horizon past which outside observers simply cannot see. Once the Singularity occurs, the future gets very, very weird. According to Yudkowsky, the Event Horizon school is just one of three main schools of thought about the Singularity; the other two are the Accelerationist and the Intelligence Explosion schools.

The best-known Accelerationist is inventor Ray Kurzweil, whose recent book The Singularity Is Near: When Humans Transcend Biology (2005) lays out the case for how exponentially accelerating information technology will spark the Singularity before 2050. In Kurzweil’s vision of the Singularity, AIs don’t take over the world: humans will have so augmented themselves with computer intelligence that we essentially transform ourselves into super-intelligent AIs.

Yudkowsky identifies mathematician I.J. Good as the modern initiator of the idea of an Intelligence Explosion. To Good’s way of thinking, technology arises from the application of intelligence. So what happens when intelligence applies technology to improving intelligence? That produces a positive feedback loop in which self-improving intelligence bootstraps its way to superintelligence. How intelligent? Yudkowsky offered a thought experiment which compared current brain processing speeds with computer processing speeds. Speeded up a million-fold, Yudkowsky noted, “you could do one year’s worth of thinking every 31 physical seconds.” While the three different schools of thought vary on details, Yudkowsky concluded, “They don’t imply each other or require each other, but they support each other.”
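Yudkowsky's "31 physical seconds" figure is just unit conversion. A minimal sketch of the arithmetic (the million-fold speed-up factor is from his talk; the rest is ordinary calendar math):

```python
# Yudkowsky's speed-up thought experiment: a mind running a
# million times faster compresses a subjective year into seconds.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds
SPEEDUP = 1_000_000                    # million-fold speed-up from the talk

physical_seconds = SECONDS_PER_YEAR / SPEEDUP
print(f"One subjective year passes in {physical_seconds:.1f} physical seconds")
```

Dividing roughly 31.5 million seconds per year by a factor of one million gives the quoted figure of about 31 seconds.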

But is progress really accelerating? Google’s director of research Peter Norvig cast some doubt on this claim. Norvig briefly looked at past technological forecasts and how they went wrong. For example, in Arthur C. Clarke’s 1986 novel The Songs of Distant Earth, set 1,500 years in the future, the world was going to be destroyed as the sun went nova. So humanity had to cull through all the books ever written to decide which were good enough to scan and save for shipment in starships. Only a few billion pages could be stored, and only one user at a time could search those pages to get an answer back in tens of seconds. Norvig pointed out that only 20 years later, Google stores tens of billions of pages, and tens of thousands of users can query simultaneously and get answers back in tenths of a second.
http://www.reason.com/news/show/122423.html
