Your View of the End Times Even Affects How You Give Money to Charity
You may have noticed over the past couple of years that effective altruism has become the hot new trend/buzzword in philanthropy. As the Centre for Effective Altruism explains,
Effective Altruism is a growing social movement that combines both the heart and the head: compassion guided by data and reason. It’s about dedicating a significant part of one’s life to improving the world and rigorously asking the question, “Of all the possible ways to make a difference, how can I make the greatest difference?”
As a broad concept, effective altruism is a refreshing change from the all-too-common strand of charity that puts more emphasis on good intentions than effectiveness. Rather than a consumer-driven, feelings-based approach to philanthropic activity (think: TOMS Shoes’ “buy one, give one” model), effective altruism (EA) tends to rely on evidence to maximize individual impact on solving problems. For example, some EA advocates choose to use their skills to get a high-paying job rather than work directly for a non-profit or charity. The thinking is that instead of earning $25,000 a year working for Oxfam, you can earn $100,000 on Wall Street, live on $25,000 a year, and donate the remaining $75,000 to hire other workers. Doing that allows an individual to triple their contribution to the solution. In general, this is likely to be a much better angle than pure do-goodism (though as Anne Bradley and Jay W. Richards explain, enterprise is the most effective altruism).

But this approach can become less effective, and even hindered, by a person’s worldview beliefs, such as what a person believes about the “end times.”

The phrase “end times” tends to conjure up images of Tim LaHaye’s apocalyptic Left Behind novels (and the movies with Kirk Cameron and Nicolas Cage). But while eschatology is frequently associated with religious believers, views about the end times are also held by secularists. A prime example is belief in the “singularity,” the period in the near future (100 years or less) when artificial intelligence reaches the point where each generation of computers and robots can create machines smarter than themselves. Some transhumanists who believe in this coming technological singularity even think they’ll be able to upload and store their consciousness on neural networks, similar to the way Gmail saves all your emails to the “cloud.” Dylan Matthews points out that this type of thinking is influencing the EA movement, especially in Silicon Valley:
Effective altruism (or EA, as proponents refer to it) is more than a belief, though. It’s a movement, and like any movement, it has begun to develop a culture, and a set of powerful stakeholders, and a certain range of worrying pathologies. At the moment, EA is very white, very male, and dominated by tech industry workers. And it is increasingly obsessed with ideas and data that reflect the class position and interests of the movement’s members rather than a desire to help actual people. In the beginning, EA was mostly about fighting global poverty. Now it’s becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse. At the risk of overgeneralizing, the computer science majors have convinced each other that the best way to save the world is to do computer science research. Compared to that, multiple attendees said, global poverty is a “rounding error.”
The recent Effective Altruism Global conference, Matthews adds, was “dominated by talk of existential risks, or X-risks. The idea is that human extinction is far, far worse than anything that could happen to real, living humans today.”
To hear effective altruists explain it, it comes down to simple math. About 108 billion people have lived to date, but if humanity lasts another 50 million years, and current trends hold, the total number of humans who will ever live is more like 3 quadrillion. Humans living during or before 2015 would thus make up only 0.0036 percent of all humans ever. The numbers get even bigger when you consider — as X-risk advocates are wont to do — the possibility of interstellar travel. Nick Bostrom — the Oxford philosopher who popularized the concept of existential risk — estimates that about 10^54 human life-years (or 10^52 lives of 100 years each) could be in our future if we both master travel between solar systems and figure out how to emulate human brains in computers. Even if we give this 10^54 estimate “a mere 1% chance of being correct,” Bostrom writes, “we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.” Put another way: The number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.00000000000000001 percent can be expected to save 100 billion more lives than, say, preventing the genocide of 1 billion people. That argues, in the judgment of Bostrom and others, for prioritizing efforts to prevent human extinction above other endeavors. This is what X-risk obsessives mean when they claim ending world poverty would be a “rounding error.”
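For readers who want to check the population arithmetic in the passage above, a minimal sketch using only the (rough, round-number) figures quoted:

```python
# Verify the share of all humans ever who would live during or before 2015,
# using the two estimates quoted above (both are rough figures).
humans_to_date = 108e9   # ~108 billion people have lived so far
humans_ever = 3e15       # ~3 quadrillion if humanity lasts 50 million years

share = humans_to_date / humans_ever
print(f"{share:.4%}")    # -> 0.0036%
```

The ratio does come out to the 0.0036 percent cited, which is the whole force of the argument: almost everyone who might ever live lives in the far future.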
Those of us who are Christians may be tempted to dismiss these views as silly and morally obtuse. But while they are indeed silly and morally obtuse, they shouldn’t be disregarded, since they serve as a glimpse of our post-Christian future. Many people subscribe to a sort of “subtraction” theory of secularization, in which society will continue to discard Christian doctrine and beliefs and yet retain, with some necessary tweaks, the elements gained from the Christian worldview (the importance of the individual, concern for human dignity, etc.). But as the intellectual vanguard continues to show, that is as naïve a belief as thinking we’ll be able to upload our souls to a thumb drive. The dismantling of the basic framework of the Christian worldview—creation, fall, redemption, restoration—will require replacing it with new scaffolding. Although this new eschatologically oriented framework will be fragile and rickety, it will be presented with an astounding level of confidence. For instance, as Kerry Vaughan, one of the organizers of EA Global, says, “I really do believe that effective altruism could be the last social movement we ever need.” This is the type of thing we can expect for decades to come: our brightest thinkers presenting the stupidest ideas with a maximal degree of hubris.

Fortunately, Christianity and its followers will be around to the end. While the nerds in Silicon Valley spend their time worrying about how to prevent a future in which Skynet ushers in the robot apocalypse, we Christians will continue to effectively apply our altruism to the “rounding errors.” They need us today.

This article originally appeared at the Acton Institute.