Poor Dylan Matthews.
Dylan spent a weekend at Google talking with nerds about charity ... and came away ... worried.
We've got great stuff today. Dylan's article is an Idiot's Guide To Flatland.
And Dylan, you should be worried.
There are some remarkable insights in Dylan's article, but there is also the usual stumbling around in the dark. I shall point out both, for Dylan got more right than he knows.
Dylan attended the Effective Altruism Global conference held in Mountain View, California earlier this month (Silicon Valley, home of Google).
"There's one thing that I have in common with every person in this room. We're all trying really hard to figure out how to save the world."
The speaker, Cat Lavigne, paused for a second, and then she repeated herself. "We're trying to change the world!"
I don't know if Dylan knows this, but right then and there, just after he heard Cat's "we're trying to change the world" line—the first time—he should have realized he was wading in some really deep shit.
Lavigne was addressing attendees of the Effective Altruism Global conference, which she helped organize at Google's Quad Campus in Mountain View the weekend of July 31 to August 2.
Effective altruists think that past attempts to do good — by giving to charity, or working for nonprofits or government agencies — have been largely ineffective, in part because they've been driven too much by the desire to feel good and too little by the cold, hard data necessary to prove what actually does good.
It's a powerful idea, and one that has already saved lives...
Just when I thought I was reading the clueless nonsense Vox typically serves up, I came across these truly remarkable paragraphs (re-formatted, emphasis added):
Effective altruism (or EA, as proponents refer to it) is more than a belief, though. It's a movement, and like any movement, it has begun to develop a culture, and a set of powerful stakeholders, and a certain range of worrying pathologies.
At the moment, EA is very white, very male, and dominated by tech industry workers. And it is increasingly obsessed with ideas and data that reflect the class position and interests of the movement's members rather than a desire to help actual people.
So much for effective altruism!
All human social groups are ultimately self-serving in this way, every damn one of them. In Darwinian terms, human ultra-sociality increases reproductive fitness. As power and status accrue to the group, so does the overall fitness (social success) of its members. Non-selfish ("pure") altruism is theoretical; it is never reliably observed in the wild, and you certainly shouldn't go looking for it in Silicon Valley.
In the beginning, EA was mostly about fighting global poverty. Now it's becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse. At the risk of overgeneralizing...
No, no, please do overgeneralize, Dylan.
... the computer science majors have convinced each other that the best way to save the world is to do computer science research.
Compared to that, multiple attendees said, global poverty is a "rounding error."
Those of you who understood my Flatland essays will immediately understand how insightful this observation is.
Global poverty is a rounding error. Where did that nonsense come from? In Flatland terms, we see right away that actually doing something about global poverty must be de-emphasized. Fighting global poverty must be seen as inferior to, or less weighty than, doing computer science research, which of course directly benefits those in these Silicon Valley social groups.
Unfortunately, despite his excellent start, Dylan does not have the cognitive wherewithal to understand the unconscious processes which motivate this "rounding error" bullshit. As a result he takes a painful journey through the various post-hoc rationalizations which purport to demonstrate that, compared to doing computer science research, battling global poverty—actually doing effective altruism—is of no importance whatsoever.
Thus we get total nonsense like this.
EA Global was dominated by talk of existential risks, or X-risks. The idea is that human extinction is far, far worse than anything that could happen to real, living humans today.
To hear effective altruists explain it, it comes down to simple math. About 108 billion people have lived to date, but if humanity lasts another 50 million years, and current trends hold, the total number of humans who will ever live is more like 3 quadrillion.
Humans living during or before 2015 would thus make up only 0.0036 percent of all humans ever...
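For what it's worth, Dylan's percentage does check out. Here is a quick back-of-the-envelope sketch in Python, taking the quoted figures (108 billion past humans, 3 quadrillion humans ever) at face value:

    # Sanity check of the quoted "0.0036 percent" figure.
    humans_to_date = 108e9  # people who have lived so far (as quoted)
    humans_ever = 3e15      # total humans if we last another 50 million years (as quoted)
    share = humans_to_date / humans_ever * 100
    print(f"{share:.4f} percent")  # prints "0.0036 percent"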
50 million years! Are you laughing yet? Even if you are, it gets better. Don't discount the sheer entertainment value in what you're about to read. Dylan explains the "rounding error" below.
The numbers get even bigger when you consider — as X-risk advocates are wont to do — the possibility of interstellar travel.
Nick Bostrom — the Oxford philosopher who popularized the concept of existential risk — estimates that about 10^54 human life-years (or 10^52 lives of 100 years each) could be in our future if we both master travel between solar systems and figure out how to emulate human brains in computers.
Even if we give this 10^54 estimate "a mere 1% chance of being correct," Bostrom writes, "we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives."
Put another way: The number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.00000000000000001 percent can be expected to save 100 billion more lives than, say, preventing the genocide of 1 billion people.
That argues, in the judgment of Bostrom and others, for prioritizing efforts to prevent human extinction above other endeavors. This is what X-risk obsessives mean when they claim ending world poverty would be a "rounding error."
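The entire "rounding error" claim rests on a single line of expected-value arithmetic. Here is a minimal sketch of it, using the figures as quoted; note that the quoted prose rounds loosely, so the bottom line shifts by an order of magnitude or two depending on which sentence you take literally:

    # The X-risk expected-value argument, with the quoted inputs.
    future_lives = 1e52     # Bostrom's estimate of possible future lives
    p_correct = 0.01        # "a mere 1% chance of being correct"
    risk_reduction = 1e-19  # 0.00000000000000001 percent, expressed as a fraction
    expected_lives = future_lives * p_correct * risk_reduction
    print(f"{expected_lives:.0e} expected lives saved")  # ~1e+31
    # Preventing the genocide of 1 billion people saves 1e9 lives, so any
    # positive result at these magnitudes swamps it. That is the whole trick.
    print(expected_lives / 1e9)

Multiply any astronomically large payoff by any non-zero probability and the product still crushes every real-world number. That's the machine.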
And how will efforts to prevent human extinction be prioritized? Why, by doing computer science research of course!
There are a number of potential candidates for most threatening X-risk. Personally I worry most about global pandemics, both because things like the Black Death and the Spanish flu have caused massive death before, and because globalization and the dawn of synthetic biology have made diseases both easier to spread and easier to tweak (intentionally or not) for maximum lethality.
But I'm in the minority on that. The only X-risk basically anyone wanted to talk about at the conference was artificial intelligence.
It doesn't seem to have occurred to those self-centered techno-optimists in Mountain View that humans are destroying the Earth's biosphere right now—in the blink of an eye on the geological timescale—and taking themselves down in the process, but that's not my main point today.
I want you to consider this: did you notice the super-extraordinary lengths humans will sometimes go to in order to put themselves at the center of the universe? 50 million years! 3 quadrillion future human lives! 10^54 future human life-years! All of these absurd fantasy numbers were cited to justify doing more artificial intelligence research.
You can't make this shit up. I've said it before, but it bears repeating. And now, it seems I may have underestimated Dylan Matthews. Against all odds, after a laborious and unnecessary treatment of all this self-serving Silicon Valley bullshit, he attempts a furious comeback. First, he voices his doubts.
What was most concerning was the vehemence with which AI worriers asserted the cause's priority over other cause areas. For one thing, we have such profound uncertainty about AI — whether general intelligence is even possible, whether intelligence is really all a computer needs to take over society, whether artificial intelligence will have an independent will and agency the way humans do or whether it'll just remain a tool, what it would mean to develop a "friendly" versus "malevolent" AI — that it's hard to think of ways to tackle this problem today other than doing more AI research, which itself might increase the likelihood of the very apocalypse this camp frets over.
Do the X-risk people have a response to Dylan's uncertainty? Of course they do!
The common response I got to this was, "Yes, sure, but even if there's a very, very, very small likelihood of us decreasing AI risk, that still trumps global poverty, because infinitesimally increasing the odds that 10^52 people in the future exist saves way more lives than poverty reduction ever could."
Here's where Dylan shines.
The problem is that you could use this logic to defend just about anything.
Exactly! That is, all roads lead to the extinction threat posed by conscious machines. That's the idea here — one size fits all. It's a blanket rationalization. So convenient for self-serving purposes! That's the human unconscious working overtime.
Imagine that a wizard showed up and said, "Humans are about to go extinct unless you give me $10 to cast a magical spell."
Even if you only think there's a, say, 0.00000000000000001 percent chance that he's right, you should still, under this reasoning, give him the $10, because the expected value is that you're saving 10^32 lives.
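The wizard's pitch is the same one-liner with a $10 price tag attached. A sketch with the quoted figures (taken literally they give about 10^33 rather than the article's 10^32, but at these magnitudes the discrepancy is itself a rounding error):

    # The wizard's expected-value pitch.
    p_wizard_right = 1e-19  # 0.00000000000000001 percent, as a fraction
    future_lives = 1e52     # the same Bostrom figure as before
    expected_lives = p_wizard_right * future_lives
    print(f"{expected_lives:.0e} lives for $10")            # ~1e+33
    print(f"${10 / expected_lives:.0e} per expected life")  # effectively free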
Well done, Dylan! For humans, there are no limits whatsoever to rationalizing or justifying unconscious and often selfish motives. There is self-serving "motivated reasoning" (i.e., bullshit) everywhere you look (see the third Flatland essay).
Also note that there is no difference in kind between Nick Bostrom's crazy justifications for doing more AI research and (for example) Ben Bernanke's self-serving conclusion that quantitative easing did not increase economic inequality in the United States. There is only a difference in what we might call the superficial plausibility of the bullshit. Regardless of plausibility, we see the same kind of motivated bullshit in both cases.
As bad as things seem up to this point, there were more examples of this kind of "reasoning" at the Effective Altruism conference. There are always more, because we're talking about Human Nature here. Dylan tells us about them.
To be fair, the AI folks weren't the only game in town. Another group emphasized "meta-charity," or giving to and working for effective altruist groups.
The idea is that more good can be done if effective altruists try to expand the movement and get more people on board than if they focus on first-order projects like fighting poverty.
Here we are supposed to believe that empowering the effective altruist movement is far more important than actually fighting global poverty. That more "good" will be done if these "altruists" have more influence. And who benefits by further empowering the Effective Altruism movement? I don't have to spell it out, right?
Like I said, you can't make this stuff up. Dylan responds.
This is obviously true to an extent. There's a reason that charities buy ads. But ultimately you have to stop being meta. As Jeff Kaufman — a developer in Cambridge who's famous among effective altruists for, along with his wife Julia Wise, donating half their household's income to effective charities — argued in a talk about why global poverty should be a major focus: if you take meta-charity too far, you get a movement that's really good at expanding itself but not necessarily good at actually helping people.
I can only hope you're laughing at this point. I quoted this paragraph because we now know of at least one person (Jeff Kaufman) who actually wants to do effective altruism. He's one of those rare exceptions who prove the rule of Human Nature, the rule exemplified by people who think making their social group more powerful and influential is more important than actually helping other people.
There is more good stuff in Dylan's article, so I urge you to go to Vox and read it.
I think I write these posts just to keep in practice, because Dylan winds up his EA Global report with ... you guessed it!
There's Hope
I don't mean to be unduly negative. EA Global was also full of people doing innovative projects that really do help people — and not just in global poverty either.
Check this out.
Nick Cooney, the director of education for Mercy for Animals, argued convincingly that corporate campaigns for better treatment of farm animals could be an effective intervention. One conducted by the Humane League pushed food services companies — the firms that supply cafeterias, food courts, and the like — to commit to never using eggs from chickens confined to brutal battery cages. That resulted in corporate pledges sparing 5 million animals a year, and when the cost of the campaign was tallied up, it cost less than 2 cents per animal in the first year alone.
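For the record, the implied price tag backs straight out of those quoted figures:

    # Implied first-year cost of the Humane League campaign, from the quoted figures.
    animals_spared = 5e6    # corporate pledges sparing 5 million animals a year
    cost_per_animal = 0.02  # "less than 2 cents per animal", so an upper bound
    print(f"under ${animals_spared * cost_per_animal:,.0f}")  # under $100,000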
Wait a minute. I thought EA Global was "full of people doing innovative projects that really do help people." Yo, Dylan! — you're talking about chickens here.
... This [helping chickens] is exactly the sort of thing effective altruists should be looking at... If effective altruism does a lot more of that, it can transform philanthropy and provide a revolutionary model for rigorous, empirically minded advocacy. But if it gets too impressed with its own cleverness, the future is far bleaker.
Dylan, my boy, you have no idea how bleak the future really is.
Wouldn't the term 'effective altruism' be an oxymoron? Altruism is ostensibly about giving selflessly, with no concern for reward. Trying to maximize the effect of one's altruism seems to work against that in principle: the giver wants the maximum benefit from the giving, in effect wanting it to feel (and be) fully rewarding. Truly altruistic acts wouldn't care about the end product at all. They'd be given freely with no such concern.
So I looked up Nick Cooney, the guy mentioned at the end, because my first thought was that he was a marketer, and what better way to rationalize marketing than to use it for charity? He's not, but it does turn out that he has a book on 'effective altruism' that is for sale:
http://www.amazon.com/How-Be-Great-Doing-Good/dp/1119041716/ref=asap_bc?ie=UTF8
He's spent his entire career earning a living off the proceeds of charitable giving, too, so pure altruism with him is a bit iffy. It was strange that in the Vox article he said he "isn't an animal person," given that all of his charities have been about animal suffering in the meat industry. He sounds like he's rationalizing his uber-altruism. He has, after all, used multi-million-dollar ad campaigns to help animals be less uncomfortable before they are slaughtered and consumed.
Interesting stuff.
Posted by: Jim | 08/12/2015 at 09:40 PM