Last night I made a serious strategic error: I dared to suggest to some Less Wrongers that unFriendly transcendent AI was not the most pressing danger facing Humanity.
These are extremely ballpark estimates pulled out of thin air¹, but I'll give it a go.
In particular, I made the following claims:
- That runaway anthropogenic climate change (ACC), while unlikely to cause Humanity's extinction, was very likely (with a probability of the order of 70%) to cause tens of millions of deaths through war, famine, pestilence, etc. within my expected lifetime (so before about 2060).
- That with a lower but still worryingly high probability (of the order of 10%) ACC could bring about the end of our current civilisation in the same time frame.
- That should our current civilisation end, it would be hard-to-impossible to bootstrap a new one from its ashes.
- That unFriendly AI, by contrast, has a much lower (<1%) chance of occurring before 2060, but that its consequences include Humanity's total extinction.