
pozorvlak: (polar bear)
Monday, September 21st, 2009 08:23 pm
Last night I made a serious strategic error: I dared to suggest to some Less Wrongers that unFriendly transcendent AI was not the most pressing danger facing Humanity.

In particular, I made the following claims:
  1. That runaway anthropogenic climate change (ACC), while unlikely to cause Humanity's extinction, was very likely (with a probability of the order of 70%) to cause tens of millions of deaths through war, famine, pestilence, etc. in my expected lifetime (so before about 2060).
  2. That with a lower but still worryingly high probability (of the order of 10%) ACC could bring about the end of our current civilisation in the same time frame.
  3. That should our current civilisation end, it would be hard-to-impossible to bootstrap a new one from its ashes.
  4. That unFriendly AI, by contrast, has a much lower (<1%) chance of occurring before 2060, but that its consequences include Humanity's total extinction.
I'm a pessimist. I make no apology for this fact. But note that I'm actually less pessimistic in this regard than the Singularitarian Nick Bostrom, whose paper on existential risks lists runaway ACC among the "bangs" (total extinction risks) rather than the "crunches" (permanent ends of industrial civilisation). Defending my numbers is complicated by the fact that they're all pulled out of thin air (or, more charitably, extremely ballpark estimates¹), but I'll give it a go.