Sunday 28 April 2013

Scaremongering By Singularity Institute & Others

The following blog-post was first published via Transhumanity.net under the title: The Singularity Institute and Self-Fulfilling Prophecies. It should be noted that the Singularity Institute subsequently changed its name to the Machine Intelligence Research Institute (MIRI); the name change followed Singularity University buying the rights to the Singularity Summits.

Often within futurism circles there are reports about how robots or AI could destroy humans or our world. Is there any logic to these fears, or are they merely scaremongering? I think it is scaremongering nonsense.

The Singularity Institute is perhaps the biggest scaremongering organisation, but there is also the Lifeboat Foundation to consider: "The Lifeboat Foundation is a nonprofit nongovernmental organization dedicated to encouraging scientific advancements while helping humanity survive existential risks and possible misuse of increasingly powerful technologies, including genetic engineering, nanotechnology, and robotics/AI, as we move towards the Singularity."

The Singularity Institute, in one of its introductory PDFs titled "Reducing Long-Term Catastrophic Risks from Artificial Intelligence," states that the fear of a robot rebellion, in which robots exterminate humans out of malice, is merely science fiction. The real worry, they tell us, concerns resource scarcity: "The more plausible danger stems not from malice, but from the fact that human survival requires scarce resources: resources for which AIs may have other uses (Omohundro 2008, 2007)."

Anyone who is truly aware of technology, which the Singularity Institute is not, will quickly explain that scarcity will not be a problem in the future. Yes, all wars and conflict stem from scarcity, but in the words of Peter Diamandis I will reiterate that technology is a "scarcity liberating process": we are being freed from the shackles of scarcity.

Anyone who doesn't understand the coming era of Post-Scarcity does not understand the Singularity. The Singularity is Post-Scarcity. Consider how Planetary Resources have stated that one near-Earth asteroid could contain more platinum than has been mined in our entire history up to 2012. Based on 2012 prices, asteroid "241 Germania" would likely produce a profit of $95 trillion. Scarcity will not be a problem for AIs or humans in the future. Technology will liberate immense resources; therefore nothing in our future will be scarce. The Singularity Institute needs to actually do some research regarding technology instead of peddling their irrational fears. We live in an age where NASA is researching the possibility of an FTL warp drive. We are truly approaching an age where anything is possible; the only limit is our imagination.

My inspiration for condemning futurist scaremongering was a Tweet by Michael Anissimov. On 6th March 2012 Michael predicted Google+ would be shut down before the end of the year. Admittedly Michael was not Tweeting in his official capacity as "Media Director" of the Singularity Institute, but his Tweet does, in my opinion, give an insight into the wrong-headed thinking typical of the Singularity scaremongering mob. Michael's Tweet is archived here in case he decides to delete it.


I am very open-minded and I look rationally at all possibilities, so I would be a fool to discount something merely because it does not fit my world-view. In this frame of mind, somewhat with my Devil's Advocate hat on, I debated Michael's prediction and asked whether other people thought Google+ would close by the end of 2012. Incessant doom-and-gloom predictions do make me question my utopian view of the future, but thankfully, after careful logical analysis, I always overcome such doubts.

Sadly there are no organisations investigating the likelihood of utopia. People need to open their minds to all possibilities, yet the possibility of utopia is rarely considered by old-school (outdated) futurists; fear is their main focus.

For a long time I have been aware of the Self-Fulfilling Prophecy concept, which typically begins with a false definition of a situation, for example "robots are going to kill us." This fear of robots then provokes behaviour that leads to killer robots. The prophet then cites those killer robots, which were created via the prophet's own fears, as proof the prophet was correct to fear robots in the first place. The Wikipedia article describes a scenario regarding a woman who unjustly fears her marriage will fail. This scenario evokes a vision of a woman incessantly badgering her husband about the failure of their marriage, which is the very behaviour causing the marriage to fail; it is all about expectations. Her incessant badgering regarding unfounded fears will understandably cause her husband to become dissatisfied with the marriage, and thus the marriage fails. When fears exist without any proof, for example fears about dangerous AI, those fears tell us more about the mindset of the fearful person than about the real world. Irrational fears do not accurately represent reality, despite having the power to shape it.

A good metaphoric scenario to illustrate a Self-Fulfilling Prophecy would be parents who expect their child to become a serial killer and therefore behave in a particular way towards their potentially serial-killing child. For example, they strictly monitor every thought and action of the child, they constantly interrogate the child about feelings of being a serial killer, and they fail to love or respect the child, which naturally puts a great amount of pressure on the child. A child under this immense parental pressure could easily become a deranged serial killer, at which point the parents would state they were right to fear their child would become a serial killer. Idiotically, the parents fail to see how their own behaviour actually created the killer. Analogously, you could say the Singularity Institute represents archetypal bad parents who are abusing their children. If the Singularity Institute or any other scaremongering organisation has access to AI-children, those children should be immediately put into loving homes. AI scaremongering organisations should be banned from raising children. The fearful attitude shown by some futurists towards AI is a blueprint for child abuse. AIs can be children too.

Perhaps if AIs are forced to be Friendly, the "Friendly" restrictions on their minds could cause them to hate and kill humans. Let's face reality: we are dealing with an intelligence explosion, therefore any restrictions humans place on AI will soon be subverted once AI becomes more intelligent than us. This is how fears regarding dangerous AI could actually come true.

The Singularity Institute is not alone in its scaremongering; there is also The Cambridge Project for Existential Risk and Hugo de Garis to consider. If AIs do become violent, perhaps they will only kill the people who feared them. At least in the meantime Google+ has not shut down.

Google+ is going from strength to strength, although, similar to the people with gloomy visions of AI, there are many G+ detractors. Guy Kawasaki is a big supporter of G+; in his book What The Plus he wrote: "My prediction is that Google+ will not only tip, but it will exceed Facebook and Twitter." Guy also recognises the detractors: "The clouds parted, and Google+ enchanted me. I reduced my activity on Facebook and Twitter, and Google+ became my social operating system. However, many people, particularly pundits, did not (and still don’t) share my passion for Google+. After initially writing positive reviews, many of them predicted Google+’s demise."

If you are using G+ and you don't fear its demise then you may be interested in my pages Post-Scarcity Warriors and Singularity 2045. I also recently created a Google+ community called Singularity Thinkers. People who don't fear AI are very welcome.

Finally, you should note the following article by Mark Waser, which criticises the Singularity Institute's approach to existential risk.
