In the back half of the 20th century, everyone was worried about nuclear apocalypse. That cooled off for a while at the very end, but as the 21st century has gotten going, not only is it back on the table, but there are several new apocalypses to worry about. There is a minority but not uncommon attitude that doom is inevitable.
And it’s really unambiguously wrong.
I don’t want to overstate this case. There are a lot of very bad outcomes on the table and we should work to avoid them. There’s almost no scenario where all of humanity is destroyed, though, let alone a probable one, much less an inevitable one. There are also plausible scenarios where humanity gets through the whole 21st century largely unscathed, bloodied only by the usual wars and pandemics that have marred every century of history. For that matter, if the last ten years are the worst the 21st century has to throw at us, it’s going to unambiguously be a golden age. Covid is a huge improvement over the Spanish Flu and the cholera pandemics (of which the 19th century had five, each of which individually lasted at least five years and often more than a decade), and Iraq, Afghanistan, and Ukraine combined have nothing on the Great War or Napoleonic Wars. Even without taking population growth into account, the 21st century has been serving some pretty weak tea compared to the first 25 years of the 20th and 19th centuries. People call Francis Fukuyama (the End of History guy) the wrongest man in history, but if these doomer predictions ever get gathered up into a particularly notable book, Francis will have to settle for second behind that book’s author.
Certainly there is no reason to assume our luck in the next 80 years will be as good as it has been in the last 20 (2010 to 2020 was noticeably worse than 1990-2000 or 2000-2010), and we should not ignore potential dangers, but things are actually going quite well overall.
Let’s start with the old school apocalypse: There is basically no chance that nuclear weapons will wipe out humanity. There just aren’t enough of them left. Once upon a time the Soviet Union had nuclear missiles aimed at specific villages in Ireland, because after guaranteeing the total destruction of London and New York and Tokyo, it still had a bunch left over. Now, not only is Russia’s nuclear stockpile a small fraction of what it was, but improved interception technology means no one knows how many of those missiles would even get through. Early fears about things like nuclear winter are, in light of modern science, most likely misplaced. Even a general strategic nuclear exchange would not cause enough environmental damage to kill anyone outside the blast and firestorm radius of the initial impacts.
As will be a recurring theme in this post, that’s no reason to get cocky, because those interception systems might end up being completely ineffective, and there are still more than enough nukes to make New York, Moscow, and Beijing glow. Even in a worst case scenario, though, Latin America and Africa will keep on trucking, because there are very few nukes pointed at them, and even rural areas in nations that would be annihilated as political entities (like America, Russia, and China) will probably survive, provided their people don’t die as a consequence of no longer being part of a country capable of engaging in trade and border defense.
Nuclear war is still a scenario where 1 billion+ people die and an era of war and chaos is ushered in by the power vacuum left behind by the destruction of all global powers, and we should try not to do that. However, 1) humanity will survive, and 2) despite seeming comically incompetent at avoiding nuclear war, we’ve now gone eighty years without one, so the scoreboard suggests we’re a lot better at this than we seem. I think we’ll be fine as long as we don’t get cocky and let go of the wheel.
Moving on to the apocalypse most of us have probably spent most of our time worrying about: There is basically no chance that global warming will wipe out humanity. Like with nuclear winter, this is a situation where science has marched on, uncertainty has narrowed, and a lot of worst case scenarios have been pushed out of plausibility, but none of the activists or pundits have noticed. Like with nuclear armament, this is a situation where significant progress has been made towards decreasing the threat, but messages of initial success and hope get fewer clicks than fear-mongering that gradually induces despair. Since global warming has been topical for a lot longer, that process of constant fear-mongering has had a lot more time to work on its audience.
Current science is very confident that runaway global warming will not happen. Our civilization would collapse long before we could initiate a feedback loop that allows warming to continue after our factories have gone dark and our cars are all rusted and dead. Even in the worst case scenario, global warming simply is not capable of going that last inch and wiping out humanity entirely. Parts of the world will remain habitable, and humans will probably be able to rapidly rebuild civilization from them (it’s worth noting that previous civilization-ending calamities – like the fall of the Roman Empire – caused “civilization” to end so briefly that it had already restarted in one place by the time it got around to ending in another).
Not only that, but individual action has put us on course for a future that, while still calamitous and potentially civilization-ending, is much better than the one we were headed for in 2000. The rise of cryptocurrency, a system which by design will consume as much power as you throw at it, has been a serious setback to the gains we’ve made in things like solar and wind power and electric cars, and the ongoing hysteria about nuclear power still prevents us from using our most obvious and straightforwardly effective tool against global warming. Nevertheless, the decade from 2010 to 2020 was mostly a story of the global warming situation unambiguously getting better despite the failure to implement sweeping climate reforms (if you consider global warming, specifically, to be humanity’s most pressing issue, there’s a strong argument that 2010-2020 was actually a really good decade, despite having lost ground on many other issues). We’re starting to reach the limits of what’s possible through individual action, so at some point we have to do things like switch to nuclear power and bring corporations to heel, but climate activism has a demonstrable ability to accomplish things. A major reason for climate despair is the feeling that twenty years of effort hasn’t budged the needle, so even though we’ve got forty years left to figure something out, it seems like we can’t get anything done. But our efforts totally have moved the needle, and there’s no reason to believe we can’t move it more. Stockpiling food to survive a famine is a Bronze Age technology; we are entirely capable of building up the ten or twenty years of food supply needed to get through the realignment of farmland unscathed.
Two honorable mentions: Supervolcanoes and asteroid strikes stand almost no chance of happening before we become a multi-planet species. Both of these might plausibly cause environmental damage so severe that no part of Earth is habitable and humanity is totally wiped out. The thing is, that obviously hasn’t happened at any point in the last couple hundred thousand years, or else we’d already be gone. Neither of these is on a cooldown, either. An asteroid strike doesn’t become more likely next year because we didn’t have one this year. It’s the same probability every year, and clearly that probability is extremely low.
It’s not clear whether we’ll ever be able to travel to other stars, but there’s not really anything besides resource scarcity stopping us from colonizing Mars or, for that matter, recycling essentially all matter in our solar system into a Dyson swarm (that is, a Dyson sphere made out of billions or trillions of individually small platforms rather than one constructed all at once – since each individual platform is a manageable project whose benefits justify the effort regardless of how many other platforms are completed, it’s a plausible project to complete in a way that trying to build an entire Dyson sphere in one go is not). The odds that a supervolcano or asteroid impact will get us before we’re able to complete one of these off-world projects to the point of being self-sustaining are basically zero. It’s not so much that these projects are more plausible than people think, although that’s part of it; it’s mainly that the odds of these superdisasters are so puny that we’ve got plenty of time to become a multi-planet species before they occur, so if it takes us another 200 years to get there, that isn’t really a big deal.
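To put rough numbers on the “no cooldown” point: a hazard like this is memoryless, so the chance of it happening within some window is just the constant annual probability compounded over the years in that window. Here’s a minimal sketch of that arithmetic; the annual probabilities are illustrative assumptions chosen to be on the pessimistic side, not figures from this post:

```python
# Back-of-the-envelope check on the "no cooldown" logic: for a memoryless
# hazard with constant annual probability p, the chance of at least one
# occurrence in n years is 1 - (1 - p)**n.

def chance_within(p_annual: float, years: int) -> float:
    """Probability of at least one occurrence within `years` years."""
    return 1 - (1 - p_annual) ** years

# Assumed annual odds -- placeholders for the sake of the arithmetic,
# not estimates made in this post.
ASSUMED_ANNUAL_ODDS = {
    "civilization-ending asteroid impact": 1 / 500_000,
    "civilization-ending supervolcano": 1 / 100_000,
}

for disaster, p in ASSUMED_ANNUAL_ODDS.items():
    print(f"{disaster}: {chance_within(p, 200):.3%} chance in the next 200 years")
```

Even with these deliberately gloomy rates, the printout comes to roughly 0.04% and 0.2% respectively: a 200-year runway to become a multi-planet species costs well under one percent of survival probability, which is the whole argument above in two lines of arithmetic.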
Finally, the new hotness: it’s not especially likely that malevolent AI will wipe out humanity. This one does get the prize for being the most likely, and indeed the only one whose likelihood is even worth taking particularly seriously. Unlike nuclear war or global warming, malevolent AI is quite capable of dumping in the effort needed to kill not just the most vulnerable first one million humans, but the most durable last one million humans (where “durability” is mostly informed by things like being located somewhere remote or in an environment well suited to producing lots of food despite the calamity, rather than the humans themselves being personally harder to kill). Unlike supervolcanoes and asteroid impacts, malevolent AI is more likely to occur next year because of things which happened this year.
But it’s not inevitable. Look at space travel: during the 50s and 60s, the pace at which our reach into space expanded was incredibly rapid. Mars/moon colonies by 2000 seemed like perfectly reasonable estimates. And then it all stopped. From 1957 to 1976, we went from Sputnik to the first successful lander on Mars. From 1976 to 2022, the only comparably significant accomplishment was landing a probe on a comet. That was a genuine achievement, but having just one in 46 years, compared to everything accomplished in the 19-year space race, makes it clear that space travel has completely petered out.
AI is often treated like it will keep growing without limit, something helped along by the concept of the singularity: an AI that is better at improving itself than humans are, and which therefore accelerates the AI revolution even faster than it is already going. And that is a plausible outcome. But it is equally possible that any given advance in AI turns out to be the last major accomplishment of the AI boom, and that for the next half-century after it we struggle to come up with any significant advances. Also, none of the advances in AI so far, with the exception of self-driving cars, are particularly scary except in how they indicate that other, directly scary advances might be approaching. Even carbots don’t seem to be taking the jobs in long-haul driving and taxi services that they were supposed to, even though the technology seems to have entirely arrived. Carbots are better at driving than humans, even if they’re not perfect.
AlphaGo is better than humans at Go, but so what? Playing Go at the highest level was never a practical skill, and of course using machine assistance has always been considered cheating in professional competitions (and most other competitions). Nothing’s really changed because bots are now better than us at certain board games. They haven’t even gotten better than us at video games yet (as is common in AI journalism, AlphaStar’s success is strictly in playing as one faction against one faction on a specific map, despite having been reported as “AI is better than humans at StarCraft” – although in fairness to AlphaStar, it is better than the overwhelming majority of StarCraft players even on a level playing field). AI art and writing programs like ArtBreeder and AI Dungeon are useful only in extremely narrow niches (ArtBreeder is good at making portraits of people staring directly at the camera, AI Dungeon is good for Goat Simulator style hilarious chaos with no consistent plot or characters). DALL-E 2’s capabilities are not in the hands of the public, and I am skeptical of how strong it is because AI companies have overpromised and underdelivered so consistently, but even if all the company’s claims are totally true, it has yet to demonstrate an ability to accept revisions or stay on-model (i.e. to draw the exact same character in a new situation, or with only minor details altered), which means it has still failed to outpace human artists.
The breakthrough that renders human artists/writers/whatever obsolete could be here by the end of 2022, but it’s just as conceivable that we roll into the year 2100 with AI art still viewed as a cheap alternative to more consistent human work and fewer than 10,000 people living on Mars in research stations that rely on regular shipments from Earth to survive.
Given that AI has yet to demonstrate the ability to replace even one human profession (not even truck drivers, though I can’t for the life of me figure out why not), it’s completely baseless to assume it will inevitably be able to render all of humanity obsolete and kill us all because it has become better than us at everything, including war, and no longer needs human hands to replace its servers.
We’ve also not really begun to work on the problem of AI alignment. One of the things that prompted this post was MIRI making an April-Fools-but-not-really post about how humanity is doomed, which takes the hilariously narcissistic standpoint that because MIRI has reached the limits of its ability to solve AI alignment with no sign of success in the immediate future, the problem must be impossible and we should all assume we’re going to die whenever the machine god awakens. Totally absent from that post are two fairly obvious scenarios, ruled out not because they are unlikely, but because they render MIRI irrelevant to human history: that AI alignment turns out to be a non-issue because progress in AI levels off before it becomes powerful enough to be a significant threat independent of humans using it to do bad stuff, or that MIRI reaching its limits turns out to be a non-issue because twenty years from now the problem is solved by a bunch of guys currently in middle school, whose entire careers in AI alignment are driven by the general interest in the problem that currently exists in nerdy subcultures writ large and involve minimal interaction with MIRI. Indeed, it would not be surprising if a future AI alignment breakthrough started when some 25-year-old looked at MIRI’s work so far, scoffed at how flawed it was, and began working on a new paradigm that tosses out all of the (purely theoretical and untestable) work MIRI has done in favor of something else that actually succeeds.