This is Part I of a three-part essay summarizing my work on DOTE. The second part will be published in the 2nd half of September. The third part will appear sometime in October. This essay is closed to comments. I know that's not the "spirit" of the internet, but that's my preference.
My e-mail is included at the end if you want to contact me — Dave
How Goes Life Among the Humans?
Imagine, if you can, a wise, compassionate, and reasonable species quite unlike the one you belong to. This other species does not wage war or destroy its planetary environment willy-nilly. This species does not produce most of its own suffering. The members of this species are not deluded—they don't believe, as some humans apparently do, that if only everybody quit smoking, handguns were banned, and gays could get married anytime they wanted, all their big problems would be solved.
The list of differences between this imagined species and us humans would be very, very long.
Now think about your own species, which calls itself Homo sapiens. Kurt Vonnegut did. And here's what he said in the first chapter of his last novel, Timequake. He was 74 years old when he wrote this, in 1996.
I say in speeches that a plausible mission of artists is to make people appreciate being alive at least a little bit. I am then asked if I know of any artists who pulled that off. I reply, "The Beatles did."
It appears to me that the most highly evolved Earthling creatures find being alive embarrassing or much worse. Never mind cases of extreme discomfort, such as idealists being crucified. Two important women in my life, my mother and my only sister, Alice, or Allie, in Heaven now, hated life and said so. Allie would cry out, "I give up! I give up!"
Kurt is in Heaven now, too.
The funniest American of his time, Mark Twain, found life for himself and everybody else so stressful when he was in his seventies, like me, that he wrote as follows: "I have never wanted any released friend of mine restored to life since I reached manhood." That is in an essay on the sudden death of his daughter Jean a few days earlier. Among those he wouldn't have resurrected were Jean, and another daughter Susy, and his beloved wife, and his best friend, Henry Rogers.
Twain didn't live to see World War I, but he still felt that way.
... The African-American jazz pianist Fats Waller had a sentence he used to shout when his playing was absolutely brilliant and hilarious. This was it: "Somebody shoot me while I'm happy!"
That there are such devices as firearms, as easy to operate as cigarette lighters, and as cheap as toasters, capable at anybody's whim of killing [my] Father or Fats or Abraham Lincoln or John Lennon or Martin Luther King, Jr., or a woman pushing a baby carriage, should be proof enough for anybody that, to quote the old science fiction writer Kilgore Trout, "being alive is a crock of shit."
Yes, Kurt, it should be proof enough, but it is not. Like I said—deluded. Over time, that became the problem I found myself grappling with on Decline of The Empire, which I no longer publish.
I gave up! I gave up!
Conspicuous By Its Absence
Ever since humans discovered the scientific method, nearly 500 years ago now, they have formulated theories about nearly everything. There are cosmological theories, theories of physics, and theories of biology and evolution. In the social realm, there are many, many theories of political economy. You name it, and there are competing theories about it. All the important stuff has been covered—almost.
Conspicuous by its absence is a comprehensive theory, or competing theories, of how the human animal functions. There are some old philosophical debates which reside, and deservedly so, in history's dustbin. And that's all there is. This lacuna is not an accident; it is not an oversight. You might think a species which creates so much of its own suffering would be eager to figure out why that is so, and try to fix the problem, but, tragically, such is not the case.
Indeed, there is thinly disguised hostility toward such theories, or distrust of them, on those rare occasions when humans think about themselves at all. A typical example will suffice—will have to suffice because this is an essay, not a book—to illustrate this point. I will use Ian Welsh's Human Nature for Ideology, published on May 7, 2014. The essay's beginning shows promise.
All ideologies, including all economic ideologies like the modern discipline of economics, are theories of human nature in drag. If you believe that humans are innately selfish and greedy, for example, you will believe that monetary incentives are the best way to allocate resources and permission to do things in an economy. If you want more of something, you’ll arrange for people who do it to have more money.
If you believe that greed leads to the best outcomes: that the invisible hand takes selfishness and turns it into public good, then you will argue that most of what people do because of greed is good, and should not be disallowed, but, indeed, encouraged.
To a remarkable extent, this is how we run our economic affairs, and it is not an ideology that most of humanity, for most of history, would have agreed with: even if they thought that humans were greedy and selfish, they would have thought that greed and selfishness should be restrained, not rewarded...
Fair enough, and well-said. But now, predictably, things get murky. Consequently, we don't know what to think.
Human nature is tricky to discuss because the specifics of human nature are remarkably twisty. All humans don’t want almost anything: to live, to procreate, to be rich, to be admired, even to be safe. Whatever you think all humans want, all humans don’t.
You can fall back on “the vast majority of humans”, and use the standard trick of economics “as if”—humans aren’t all greedy, but you can act as if they are and your models will work.
But they won’t. Humans aren’t rational, they aren’t utility seekers except in the most metaphysical of terms (because nobody can give a definition of utility which applies to everyone except “whatever people do/revealed preferences”, which isn’t a definition.)
Welsh brings up an important objection to Human Nature theories which must be addressed: all universal statements about humans fail. The proposition 'all humans are greedy' is false. Some humans are not greedy. Does that mean we are never entitled to make universal claims about humans? Not at all. Welsh refers to "the vast majority of humans," a strategy which he regards as a cop-out. But it is not a cop-out. This is best illustrated by example.
There are roughly 7.2 billion humans on Earth, and, roughly speaking, about 10 million of them are painfully aware that Homo sapiens is destroying the biosphere, slowly on human time scales, but in no time at all on the geological time scale. (10 million is a very generous estimate.) Some of those exceptional people, a goodly portion of whom are working scientists, are actively opposing the ongoing destruction, though many are not.
Rounding up, those 10 million souls represent approximately 0.14% of the entire human population. The other 99.86% are either actively destroying the biosphere, or indifferent to that lamentable trend (i.e., they are merely current or would-be "consumers" who are thus acquiescing in and contributing to the trend indirectly).
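The arithmetic behind these figures is simple enough to check. A minimal sketch (both inputs are this essay's own estimates, not measured data):

```python
# Rough arithmetic behind the "0.14%" figure above.
# Both numbers are the essay's estimates, not measured data.
world_population = 7.2e9   # humans on Earth (circa this writing)
aware_minority = 10e6      # generous estimate of those "painfully aware"

share = aware_minority / world_population
print(f"{share:.2%} aware, {1 - share:.2%} not")  # -> 0.14% aware, 99.86% not
```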
Are we therefore barred from making general statements about how humans behave with respect to the natural world, even if, strictly speaking, such statements are not universal? Surely not. There is still a broad, non-random, statistically significant pattern to be observed. The New Yorker's Elizabeth Kolbert has most recently studied these trends through the lens of human-caused plant and animal extinctions.
Indeed, somebody had better start postulating and considering such general statements before various destructive trends reach their tragic conclusions. It seemed to Kurt Vonnegut that "the most highly evolved Earthling creatures find being alive embarrassing or much worse." This is a case in point.
Having bound his own hands, Welsh subverts the very subject he is discussing (emphasis in the text is Welsh's).
Humans have a biology: we have bodies that are much alike, brains that are much alike, and if we wish to continue living, some needs that are much alike (food, water and internal homeostasis.)
But humans are less defined by their biology than any other animal I am aware of: we have culture, and our culture adapts and changes far faster than our biology does.
So, if you’re creating an ideology, you’ve got a problem: humans are so plastic that anything you say about them will be wrong for some of them.
The solution, first, is to make this part of your definition of human nature.
Most humans are malleable. Change the circumstances people live in; change the way they are raised; change their education; change their technology; change the means of production and what people believe and how they act will change. We become what we do and what we believe: we interpret everyday activity through a lens of belief, language and ideology.
Humans are neither good nor bad; ethical nor unethical; moral nor immoral. They are, instead easily led. Peer groups and authority figures can get humans to do almost anything: rape, mass-murder, torture. Feed the hungry, heal the wounded; work together to build great projects no small group could even conceive of.
With this "malleable" move, Welsh defines human nature by declaring, in effect, that it does not exist! Humans are neither good nor bad, neither moral nor immoral—they are easily led. The mind is a blank slate upon which arbitrary rules can be written. If human nature does exist, it does so in only the most trivial senses.
If people are so easily led, why haven't they been easily led to create an equitable and egalitarian large complex society? There are no historical examples of such a society, but there is a plethora of inequitable, politically repressive societies to choose among.
Nor has any human society with the means and opportunity to expand its population or economy ever consciously and willingly shunned further growth. No large, complex human society has said "enough is enough," and declared itself satisfied with its current population or GDP levels. As the story goes, since there is invariably a significantly skewed distribution of the income and wealth, societies must always opt for more growth to ameliorate those disparities. Alternatively, and more insightfully, correcting current wealth disparities is always used to rationalize the need for further growth.
If more growth is always required to correct wealth disparities, and more growth inevitably leads to further degradation of the Earth's biosphere, a clear contradiction arises. Consider this broad opening statement from an Institute for Policy Studies webinar called A Deeper Look at the Limits to Growth: Looking Beyond GDP Towards a Post-Growth Society. Calls for a "post-growth" society conflict with the aspirations of those doing without.
How do we move beyond the notion that green economists are tone-deaf to equity issues? How do we move beyond the misguided aspirations of many groups excluded from economic prosperity to grow the pie so they can have a larger piece of the pie? What is the green economist message to traditionally economically excluded constituencies?
Is there a way to “redefine growth” that doesn’t politically concede limits to growth? (After all, conventional wisdom say no politician will win on a degrowth program). Is there a common framework that can unify both of these movements that address both of these group’s deep systemic concerns?
In short, how do we have our cake and eat it too? This human-created dilemma will not, nay, can not, be resolved. Striving for more growth is never negotiable, for no politician "will win on a degrowth program" here in the United States, in the European Union, in Brazil, in China, or anywhere else.
At a more fundamental level, as the human population grows, the ranks of those left behind swell. An estimated 22% of the world's population currently live in poverty (1.6 billion people). And when growth falters in the developed nations, as it has in recent times, the number of those without jobs or prospects increases. In either case, there are always calls to stimulate growth to correct economic disparities. Consequently, there is never a good time to create a "post-growth" society because it is impossible to stabilize an ever-shifting target. And thus the rationalization supporting the need for more growth always seems justified.
In short, growth and poverty are constants in human affairs. Paraphrasing Matthew's Gospel, "the poor are always with us." And once the means and opportunity (energy, science, markets) to achieve growth were established, that too became a constant around which human life became organized.
We can only conclude, contrary to Welsh's blank slate, that the "vast majority" of humans can not be easily led to create a sustainable, equitable society which exists in harmony with the natural world. In fact, those humans can not even envision such a thing, let alone be led to desire such an arrangement. Yes, humans are malleable. Yes, humans are easily led, but they can only be encouraged (or forced) to move in those directions they are able to go. That is the most parsimonious explanation of the phenomena described here. (I will return to human social malleability in Part III of this essay.)
Otherwise, we must imagine (via the "blank slate") new ways of being human which have never been observed. We might speculate that "exceptional" people—those who are not in "the vast majority"—are, in part, a product of simple genetic variation in the total human population. However, at the level of populations, such speculation is not required to buttress the overwhelming statistical evidence supporting the notion that there are real phenomena demanding an explanation.
Jesus of Nazareth said "Blessed are they that mourn," and "Blessed are the meek," and so on. I myself should have realized the jig was up when, nearly 20 years ago, I became aware that humans (genus Homo) share more than 98% of their DNA with chimpanzees (genus Pan). Jesus didn't know that.
Welsh's unsatisfactory conclusions accurately reflect humans' default view of themselves. Let's look at that.
Cui Bono?
This section's title is a Latin phrase used in the law meaning "to whose benefit?" Let us ask who benefits from what I have called "the argument from ignorance" with respect to Human Nature. It should now come as no surprise that the answer is: Nearly everybody! All the time!
If one assumes that Human Nature is a complete mystery, or denies that it exists through the back door as Ian Welsh does—the "blank slate"—then humans everywhere are able to carry on as though wisdom (self-knowledge) does not exist. They are thus able to carry on as though nothing happened. And why? Because nothing did!
When wisdom is dismissed as fiction, "business as usual" can continue undisturbed. Thus people are free to divide up the populace into good guys and bad guys (e.g., "liberals" and "conservatives"). They are free to pursue without constraint their characteristic and inevitably pointless political squabbling—their power or status struggles—and thus they are free to lead (or force) people to go where they want them to go. They are free to rationalize inequitable socioeconomic conditions as they always have. They are free to continue destroying the Earth's biosphere. And so on.
We've seen that all "1st-order" statements like (1) seem to fail (are false).
(1) All humans are greedy.
Welsh thus asserts the "2nd-order" (metalevel) statement (2), which is also false because humans are easily led, but only in some directions and not others.
(2) Humans are endlessly malleable.
Still, there are some 2nd-order statements which are true (empirically verifiable).
(3) Humans always assume and act as if the mind is a "blank slate."
Observation (3) means, in effect, that humans act as though anything is possible for them. All socioeconomic and sociopolitical belief systems and behaviors are based on this assumption. Paraphrasing Welsh, if only we could "change the circumstances people live in, the way they are raised, their education and technology, etc., then what people believe and how they act will change. We become what we do and what we believe: we interpret everyday activity through a lens of belief, language and ideology."
In the "nature versus nurture" debate, Welsh, conveniently for him or anyone else with an agenda for social change, comes down hard on the nurture side. Following (3), humans always assume that the human-made environment (upbringing and culture) determines who they are and what they do. In short, the default view assumes that humans are making it up as they go along.
But what if the "blank slate" assumption is deeply, and tragically, mistaken?
Whistling In The Dark
There is another observationally verifiable 2nd-order statement closely related to (3).
(4) Humans always believe and act as if all their beliefs and decisions are their own.
Statement (4) captures the observation that people always believe and act as if their subjective sense of self, a.k.a. the conscious Ego, is running the show. What (4) shares in common with (3) is the de facto assumption that the unconscious mind does not exist. Indeed, insofar as "unconscious" means what it says, it is hard to see how humans could act any other way. But the unconscious mind does exist, a conclusion which can be confirmed by indirect inferences (described in more detail below) and many direct observations made in psychology and the neurosciences.
In human decision-making, a number of experiments have demonstrated that unconscious brain activation invariably precedes the moment when the conscious self (Ego) thinks it has "made" a decision. Neurologist V.S. Ramachandran explains what's going on:
You think you are willing the brain to do something, but it's your brain which is willing you to will. It's thinking ahead of you, and your so-called thinking is a post-hoc rationalization.
So you don't have any free will — that was the implication the philosophers came up with.
If free will is largely an illusion, if much of human cognition consists of post-hoc rationalizations, we bump up against the idea of hard limits on human behavior, an idea which makes humans very, very uncomfortable, so uncomfortable, in fact, that nearly all such suggestions are immediately rejected, when they are considered at all.
Rationalizing Social Elites
Either humans have "free will" and thus always make self-determined choices in important matters like economic or environmental policy, or they don't.
Here's a good enough definition of free will: the ability to act at one's own discretion, "without the constraint of necessity or fate."
If the mind is a "blank slate" upon which anything can be imprinted—humans are endlessly malleable—problems arise for the view that humans can "act without the constraint of necessity or fate."
If we assume the mind is predominantly a product of nurture alone, including cultural preferences or norms, education, and upbringing, then social conditioning is largely determining the "choices" people make.
And thus we get (1).

(1) Human "choices" are predominantly the product of social conditioning.
And further difficulties arise from (1). Social conditioning implies that someone is doing the conditioning, and someone is being conditioned. Are we supposed to assume that the former have free will and the latter do not?
Of course not!—that supposition is self-evidently absurd (a contradiction). If human choices are predominantly the product of social conditioning, so are the choices of those who do the conditioning.
Nonetheless, that absurdity appears to be precisely the assumption those with social agendas make when they discuss which values should be inculcated and which should be deprecated. And thus we get (2).

(2) Those who do the conditioning (society's elites) make the important "choices" for everyone else.
The simple fact is that all large, complex societies have elites who make all the important decisions, "choices" which benefit them to the detriment of everyone else, a reality which becomes obscured in so-called "democratic" societies.
The inevitable existence of self-serving, powerful elites constitutes the observed social pattern requiring an explanation. The "blank slate" merely provides a post-hoc rationalization justifying the existence of these elites, those who serve them, or those who seek to join them.
And that is of course what we see historically and in the world today—social elites making self-interested "choices" for the rest of us. The history of social transformation (sometimes called "reform") demonstrates again and again the social disasters our human "elites" have wrought. Consider V.I. Lenin, Robespierre, or Alan Greenspan, to pick only three examples among the countless others history offers.
It is highly suspicious of course that elites, or those who serve them, always find a way to rationalize not only their own entitlement, but also why the rest of us can't possibly get along without them.
The views of those running things are and have always been self-interested delusions. To borrow from Mark Twain, these beneficiaries of the status quo have corn-pone opinions — "you tell me whar a man gits his corn pone, en I'll tell you what his 'pinions is."
Those making up society's elite, or those seeking higher social status, or those with an agenda for social change, are no wiser or better than anybody else. However, they are much worse in one crucial respect—they believe they can make wise "choices" for the rest of us.
I previously noted that the desire for economic growth is never negotiable in any human society where the means and opportunity for achieving it exist. I also noted that poverty and inequitable distribution of the wealth—these related conditions are always present in large, complex human societies—are often used to justify the need for further growth. The observation that "free will" is largely an illusion strongly suggests that these justifications are nothing more than post-hoc rationalizations.
If this were not so, we would expect to observe a society which shuns growth and opts instead for a more equal distribution of its present wealth. But we see no such thing. If one looks for them, post-hoc rationalizations turn up everywhere. Economists always rationalize the need for an unequal distribution of the income and wealth because such an arrangement is believed to create the incentives which drive increased wealth. But, as we've just seen, an unequal distribution of the income and wealth itself is used to rationalize the need for more growth!
Rationalizing Growth and Inequality
Human reasoning about growth and inequality is circular. Let's lay out the "argument" because the circularity is not immediately apparent.

(1) Substantial inequality in the distribution of the income and wealth always exists ("the poor are always with us").

(2) An unequal distribution of the income and wealth creates the incentives which drive economic growth.

(3) Economic growth reduces inequality (everyone gets a bigger piece of a growing pie).

(4) Therefore, to reduce inequality, we must strive for more growth.

Presented in this stripped down way, I hope the circularity is obvious.
The confused nature of human thinking on inequality and growth reinforces the conjecture that we are dealing with post-hoc rationalizations of unconscious processes. Human thinking about growth and inequality only appears to be rational within the human frame of reference. Viewed from "outside" this frame of reference, human thinking on these matters is incoherent.
In the "mature" (OECD) economies, inequality (X) has been increasing for 35 years now, despite continuing economic growth (Y). This trend casts considerable doubt on assumption (3), as illustrated in the graph below.
There is further confusion. Economic growth is paramount in all cases, but growth in the mature economies has slowed in recent decades while inequality has increased. It is therefore common to see statements like (4), i.e., calls for more growth as the remedy for inequality.
Although (4) is "correct" if you are operating within all these assumptions, (4) and (2) directly contradict each other. Furthermore, if (3) does not work—if economic growth is not reducing inequality—what is being achieved in striving for more growth, as in (4)?
I suppose one could argue (and economists do) that it's a matter of degree with respect to inequality—what is the "optimal" level of inequality?—but substantial inequality (1) always exists, so it appears we are dealing with another rationalization. If inequality is morally repugnant (though for a large majority of humans, it is simply accepted), the simplest, most straightforward solution is to start reducing inequality irrespective of growth.
But in the general case, humans are loath to make sacrifices to help others when there is no benefit to them—unselfish altruism is vanishingly rare—so reducing inequality is always linked to the need for more growth. Therefore, as the rationalization goes, the "wealth pie" must always be growing, regardless of how the wealth is distributed. Each of the sacred tenets (2) through (4) bolsters the apparently unshakable human conviction that 'Growth Is Good'.
Confusion abounds here, so I have hypothesized that we are dealing with post-hoc rationalizations: a hard-wired tendency toward social hierarchy in large, complex human societies, following from (1) and rationalized by (2); and animal growth instincts, following from the sanctity of growth generally, as reflected in (4), and rationalized by (3).
The supposed endpoint of human "reasoning" on these issues is a world of 9 or 10 billion people, all of whom have an "adequate" standard of living. As the story goes, it would not matter in this future utopia if wealth and income were inequitably distributed because everyone would be reasonably well-off.
But the poor are always with us—(1) is always true—and therefore this circular argument has no endpoint.
Behavioral scientists are teasing out various aspects of the unconscious. For example, there is now a broad consensus among them that "liberals" and "conservatives" differ in fundamental ways [emphasis added by the author Chris Mooney].
A large body of political scientists and political psychologists now concur that liberals and conservatives disagree about politics in part because they are different people at the level of personality, psychology, and even traits like physiology and genetics.
That's a big deal. It challenges everything that we thought we knew about politics—upending the idea that we get our beliefs solely from our upbringing, from our friends and families, from our personal economic interests, and calling into question the notion that in politics, we can really change (most of us, anyway).
The occasion of this revelation is a paper by John Hibbing of the University of Nebraska and his colleagues, arguing that political conservatives have a "negativity bias," meaning that they are physiologically more attuned to negative (threatening, disgusting) stimuli in their environments. (The paper can be read for free here.)
... In other words, the conservative ideology, and especially one of its major facets—centered on a strong military, tough law enforcement, resistance to immigration, widespread availability of guns—would seem well tailored for an underlying, threat-oriented biology.
The authors go on to speculate that this ultimately reflects an evolutionary imperative. "One possibility," they write, "is that a strong negativity bias was extremely useful in the Pleistocene," when it would have been super-helpful in preventing you from getting killed.
Commenting on this result, social psychologist John Jost, who published a synthesis of such research in 2003, sounded the death knell for the "blank slate" (emphasis added by Chris Mooney).
There is by now evidence from a variety of laboratories around the world using a variety of methodological techniques leading to the virtually inescapable conclusion that the cognitive-motivational styles of leftists and rightists are quite different. This research consistently finds that conservatism is positively associated with heightened epistemic concerns for order, structure, closure, certainty, consistency, simplicity, and familiarity, as well as existential concerns such as perceptions of danger, sensitivity to threat, and death anxiety.
Research on "cognitive-motivational styles" has much in common with studies on innate biases in human cognition. The exchange below is telling in this regard.
Bill Moyers — There was a Gallup poll in this country a few weeks ago that said despite rising temperatures and all of this strange weather we've been having, the percentage of Americans who care a great deal about global warming has been dropping, from 41 percent six years ago to 34 percent today.
What is it about human nature that wants to believe the worst can't happen?
David Suzuki — I don't know. I don't know...
Moyers asks precisely the right question: What is it about human nature that wants to believe the worst can't happen? If you doubt that (the vast majority of) humans view the world through rose-colored glasses—if you doubt that such an unconscious cognitive bias exists—consider the IPCC climate scenarios. The IPCC is an international (cross-cultural) organization.
In the year 2100, under the Business As Usual scenario, the global economy is still growing, despite the fact that the Earth's average surface temperature is likely to be 3-4°C higher than it was anytime during the period in which humans evolved. One would think it needless to say that industrial civilization as we know it could not possibly exist under those conditions. The IPCC's only concession to reality was to assume that the global economy will be expanding at a slower rate than it otherwise would be if future action is taken to mitigate global warming.
In the face of such delusional optimism, and as we see these kinds of absurd expectations pile up in various aspects of human life, it becomes well-nigh impossible to avoid the conclusion that whistling in the dark is characteristic human behavior. This useful idiom means "being confident that something good will happen when it is not at all likely."
If humans are typically whistling in the dark, only a strong theory of human nature can explain what's going on. Nurture alone can not possibly explain it. Researcher Tali Sharot has discovered that the brain is "hardwired for hope" (her phrase). Here is the abstract of Sharot's finding (Nature Neuroscience, 14, 1475–1479, 2011).
How unrealistic optimism is maintained in the face of reality
Unrealistic optimism is a pervasive human trait that influences domains ranging from personal relationships to politics and finance.
How people maintain unrealistic optimism, despite frequently encountering information that challenges those biased beliefs, is unknown. We examined this question and found a marked asymmetry in belief updating. Participants updated their beliefs more in response to information that was better than expected than to information that was worse. This selectivity was mediated by a relative failure to code for errors that should reduce optimism.
Distinct regions of the prefrontal cortex tracked estimation errors when those called for positive update, both in individuals who scored high and low on trait optimism. However, highly optimistic individuals exhibited reduced tracking of estimation errors that called for negative update in right inferior prefrontal gyrus. These findings indicate that optimism is tied to a selective update failure and diminished neural coding of undesirable information regarding the future.
Dr Sharot adds: “Our study suggests that we pick and choose the information that we listen to. The more optimistic we are, the less likely we are to be influenced by negative information about the future. This can have benefits for our mental health, but there are obvious downsides. Many experts believe the financial crisis in 2008 was precipitated by analysts overestimating the performance of their assets even in the face of clear evidence to the contrary.”
It is revealing that optimism is associated with specific regions in the prefrontal cortex—Sharot found "selective update failure and diminished neural coding of undesirable information regarding the future."
Yale researcher Dan Kahan, working independently of Sharot, has found much the same thing. I discussed those findings in Talking About Global Warming Makes People's Heads Explode! This quote is from the Marketplace story Climate Change — how to talk about bad news, which that post was based on.
... [those concerned about global warming] have learned a lot about communicating climate change. No. 1, it’s harder than anybody thought. After years of dire warnings, a little over half of Americans worry about climate change “only a little,” if at all, according to a Gallup poll.
“At first the attitude was, the truth speaks for itself,” says Dan Kahan, a professor of law and psychology at Yale Law School and a member of the Cultural Cognition Project. “Show them the valid science and the people will understand. That’s clearly wrong.”
... The real challenge, however, may be to talk about climate change in ways that don’t push people’s cultural and political buttons. Dan Kahan’s research shows that the way people view climate change is closely tied to their values.
People “aggressively filter” information that doesn’t conform to their worldview.
“And remarkably the more proficient somebody is at making sense of empirical data," he says, "the more pronounced this tendency is going to be.”
There is a clear link between Kahan's and Sharot's research with respect to the filtering of negative information. Kahan refers to "information which doesn't conform to [a person's] worldview," but in my "exploding heads" post I pointed out that "anything which contradicts a person's worldview will be construed as negative information by that person."
Before I offer a generalized rule (hypothesis) about it, we need to look at an important qualifier which comes to light in Ezra Klein's How politics makes us stupid, in which he discusses Kahan's work in some detail. Politics makes us "stupid" because giving people more information about some contentious issue—Kahan calls it "the science comprehension hypothesis" with respect to climate—doesn't change anybody's mind. The great 20th century economist John Kenneth Galbraith summed it up beautifully.
“Faced with the choice between changing one's mind and proving that there is no need to do so, almost everyone gets busy on the proof.”
Nevertheless, Kahan points out that more information (evidence) does matter in most cases.
Kahan is quick to note that, most of the time, people are perfectly capable of being convinced by the best evidence. There’s a lot of disagreement about climate change and gun control, for instance, but almost none over whether antibiotics work, or whether the H1N1 flu is a problem, or whether heavy drinking impairs people’s ability to drive. Rather, our reasoning becomes rationalizing when we’re dealing with questions where the answers could threaten our tribe — or at least our social standing in our tribe. And in those cases, Kahan says, we’re being perfectly sensible when we fool ourselves.
I will have more to say about this in a subsequent section, but, for now, the take-home message is that "reasoning becomes rationalizing" in the face of existential threats. Kahan is talking about threats to social groups, or threats to our identity or status within these "tribes," but such threats exist generally. Clearly, anthropogenic climate change is one such threat, among many others. And thus "reason" breaks down in these cases as well.
It appears then that the most general rule (hypothesis) describing human information processing in the cases we're interested in goes like this:
(5) Humans always filter really Bad News (information posing or implying existential threats) unless that threat is on their doorstep.
We might refer to (5) as the "Bad News" rule. There is an important extension of (5) which follows from Sharot's observation that people are always optimistic about the future.
(5)* Humans are therefore always optimistic about their own future, the future of the social groups they belong to, including their families, the future of our species, etc. However, the future is unknown, and may go badly, which poses a potential existential threat. For example, this could be the day the Grim Reaper pays you a visit. Continued optimism requires people to filter future threats.
Kahan states that people "aggressively" filter threatening information, as in political disputes, but they also passively filter such information through the traditional defense mechanisms defined by psychoanalytic theory, including dissociation, rationalization, compartmentalization, outright denial, etc.
People can thus filter threatening information in myriad different ways: they simply don't hear it, they deny or reject it outright, they acknowledge it but deny the bad news is actually bad, etc. The most common way to filter bad news is to avoid it altogether—ignorance is bliss. This has been dubbed the "ostrich effect."
In short, there are all sorts of unconscious strategies for running away from reality. We will encounter others not mentioned here as we go along.
There are various flavors of optimism bias. Technological optimism is common, though that tendency might be generalized as "human ingenuity" bias. Both are prevalent among economists, and both are demonstrations of secular faith, for the ideas and technology which will fix humankind's mounting problems in the 21st century do not presently exist.
Bill Moyers asks "what is it about human nature that wants to believe the worst can't happen?" I grappled with Moyers' question again and again on DOTE. Although we are no wiser after the Suzuki interview than we were before it, asking the right question is half the battle. Assuming that humans are endlessly malleable goes nowhere at all.
How Deep Does the Rabbit Hole Go?
somebody's making a mistake somewhere
or is it everybody?
and if everybody's making a mistake
is it really a mistake?
— the dirty poet
We have now journeyed a long way from a "blank slate" view of human cognition and motivation. But how deep does the rabbit hole go? Reasoning further, we can't help but notice another 2nd-order (metalevel) truth—
(6) The default "blank slate" view is itself a form of delusional optimism in so far as that view implies that there are no hard limits on human thought and action.
If human behavior is heavily constrained by unconscious processes—if the "blank slate" is a form of delusional optimism—this tragic reality would not only raise hard questions as to how we humans got into these large 21st century predicaments in the first place, but would also set hard limits on our species' capacity to respond to its self-created problems.
For humans, that would be the worst news of all. Human aspirations would be revealed as hopeful fantasies. The optimistic stories they tell each other would collapse like a house of cards. I shall return to this important point in the last section of this essay.
It therefore comes as no surprise that scientists and knowledgeable laymen who have discovered or had intimations of hard limits on human behavior are extremely reluctant to follow the implications of these realizations to their logical conclusions. Dan Kahan, who was quoted above, can not accept his own findings. The reporter is Ezra Klein, whose story was linked above.
To spend much time with Kahan’s research is to stare into a kind of intellectual abyss. If the work of gathering evidence and reasoning through thorny, polarizing political questions is actually the process by which we trick ourselves into finding the answers we want, then what’s the right way to search for answers? How can we know the answers we come up with, no matter how well-intentioned, aren’t just more motivated cognition? How can we know the experts we’re relying on haven’t subtly biased their answers, too? How can I know that this article isn’t a form of identity protection? Kahan’s research tells us we can’t trust our own reason. How do we reason our way out of that?
Those are the hard questions Kahan's research poses. Klein is looking for answers—aren't we all?—so he talks to Kahan himself. It is instructive to work through this example.
I expected a conversation with an intellectual nihilist. But Kahan doesn’t sound like a creature of the abyss. He sounds like, well, what he is: a Harvard-educated lawyer who clerked for Thurgood Marshall on the Supreme Court and now teaches at Yale Law School. He sounds like a guy who has lived his adult life excelling in institutions dedicated to the idea that men and women of learning can solve society’s hardest problems and raise its next generation of leaders.
And when we spoke, he seemed uncomfortable with his findings. Unlike many academics who want to emphasize the import of their work, he seemed to want to play it down.
Klein has stumbled upon a profound point—when important, potentially threatening issues are at stake, Kahan's research strongly implies that politics is pointless, nobody changes their mind based on evidence, and people filter any information which threatens their worldview. Shouldn't he be suicidal?
But Kahan is unperturbed. As a highly successful, high-status member of our species teaching at the prestigious Yale Law School, a man who once clerked for Thurgood Marshall, the natural move is to "play down" his own results so as to not rock the boat. Let the spin begin.
"We fixate on the cases where things aren’t working," he says. "The consequences can be dramatic, so it makes sense we pay attention to them. But they’re the exception. Many more things just work. They work so well that they’re almost not noticeable. What I’m trying to understand is really a pathology. I want to identify the dynamics that lead to these nonproductive debates." In fact, Kahan wants to go further than that. "The point of doing studies like this is to show how to fix the problem."
How to fix the problem? Apparently the problem is literally inside people's heads.
Consider the human papillomavirus vaccine, he says. That’s become a major cultural battle in recent years with many parents insisting that the government has no right to mandate a vaccine that makes it easier for teenagers to have sex. Kahan compares the HPV debacle to the relatively smooth rollout of the hepatitis B vaccine.
"What about the hepatitis B vaccine?" he asks. "That’s also a sexually transmitted disease. It also causes cancer. It was proposed by the Centers for Disease Control as a mandatory vaccine. And during the years in which we were fighting over HPV the hepatitis B vaccine uptake was over 90 percent. So why did HPV become what it became?"
Kahan’s answer is that the science community has a crappy communications team. Actually, scratch that: Kahan doesn’t think they have any communications team at all. "We don’t have an organized science-intelligence communication brain in our society," he says. "We only have a brainstem. We don’t have people watching for controversies over things like vaccines and responding to them."
Kahan concludes, contrary to everything his research demonstrates, that if only scientists were better communicators, problems with getting people to accept new vaccines would disappear. Note also that there is no requirement that humans find every vaccine threatening; we are interested in the cases where they do.
Kahan cites problems with distributing the human papillomavirus vaccine as the exception, not the rule. Unfortunately, his delusional optimism in this case doesn't hold up. I will quote the New Yorker's I Don't Want to Be Right, formerly titled Why Do People Persist in Believing Things That Just Aren't True?
Last month, Brendan Nyhan, a professor of political science at Dartmouth, published the results of a study that he and a team of pediatricians and political scientists had been working on for three years. They had followed a group of almost two thousand parents, all of whom had at least one child under the age of seventeen, to test a simple relationship: Could various pro-vaccination campaigns change parental attitudes toward vaccines? Each household received one of four messages: a leaflet from the Centers for Disease Control and Prevention stating that there had been no evidence linking the measles, mumps, and rubella (M.M.R.) vaccine and autism; a leaflet from the Vaccine Information Statement on the dangers of the diseases that the M.M.R. vaccine prevents; photographs of children who had suffered from the diseases; and a dramatic story from the Centers for Disease Control and Prevention about an infant who almost died of measles. A control group did not receive any information at all. The goal was to test whether facts, science, emotions, or stories could make people change their minds.
The result was dramatic: a whole lot of nothing. None of the interventions worked. The first leaflet—focused on a lack of evidence connecting vaccines and autism—seemed to reduce misperceptions about the link, but it did nothing to affect intentions to vaccinate. It even decreased intent among parents who held the most negative attitudes toward vaccines, a phenomenon known as the backfire effect.
The other two interventions fared even worse: the images of sick children increased the belief that vaccines cause autism, while the dramatic narrative somehow managed to increase beliefs about the dangers of vaccines.
What do Nyhan and his colleagues think about their results?
“It’s depressing,” Nyhan said. “We were definitely depressed,” he repeated, after a pause.
Damn right Nyhan is depressed! And why? Because the discovery of hard limits on human cognition and motivation is the worst news of all.
Maria Konnikova's New Yorker story discusses a number of different studies (not including Kahan's) which all point to the same conclusion: when people interpret a vaccine as an existential threat to themselves or their children, they resist taking it. Konnikova attempts to generalize this insight with a larger example.
One thing [Nyhan] learned early on is that not all errors are created equal. Not all false information goes on to become a false belief—that is, a more lasting state of incorrect knowledge—and not all false beliefs are difficult to correct.
Take astronomy. If someone asked you to explain the relationship between the Earth and the Sun, you might say something wrong: perhaps that the Sun rotates around the Earth, rising in the east and setting in the west. A friend who understands astronomy may correct you. It’s no big deal; you simply change your belief.
But imagine living in the time of Galileo, when understandings of the Earth-Sun relationship were completely different, and when that view was tied closely to ideas of the nature of the world, the self, and religion.
What would happen if Galileo tried to correct your belief? The process isn’t nearly as simple. The crucial difference between then and now, of course, is the importance of the misperception. When there’s no immediate threat to our understanding of the world, we change our beliefs. It’s when that change contradicts something we’ve long held as important that problems occur.
This example beautifully illustrates the "Bad News" Rule (5), the application of which is highly generalized. Notice again that filtered information needn't involve potential physical threats. More commonly, socially threatening information is filtered.
The contrast between Dan Kahan and Brendan Nyhan is stark—the former can not live with his results, but Nyhan, who can, is depressed about them. Doesn't that indicate that it is indeed possible for people to accept bad news?
Unfortunately, Nyhan is the very rare exception to the "Bad News" rule. Such exceptions merely prove the rule. All those who find vaccines existentially threatening, and Dan Kahan himself, who apparently sees in his results a threat to the human enterprise and his place in it, illustrate the rule.
Nyhan is depressed because it seems that no one benefits from a recognition of hard limits on human behavior (see the section Cui Bono? above).
Thus examples of those who follow the filtering rule (5) are easy to come by. In a study of status quo bias, social psychologist John Jost, quoted above, can not believe "that the existing evidence is sufficient to warrant accepting the notion that hierarchy and inequality are genetically mandated at either the individual or species level, as argued by [others]." And yet, as noted above, there are no historical or current examples of large, complex human societies which do not exhibit high degrees of hierarchy and inequality—none, zero, nada, zip.
New York Times environmental reporter Andy Revkin, who should know better, does an extraordinary Doctor Pangloss imitation [image above, left] in concluding that humans are perfect* just as they are. There's an asterisk, or qualification, which Revkin does not specify. Revkin is looking for paths to a "good" anthropocene.
Lo and behold, Revkin, a typical optimist, a man hard-wired for hope, a high-status, "successful" human being, a non-boat-rocker, a gatekeeper at the New York Times, a rejecter of very bad news, finds the good paths he is seeking!
We have seen that the worst news of the 21st century is the increasingly self-evident supposition that there are hard limits on human behavior. It is perfectly understandable that humans find it totally unacceptable that, in the general case, they are not exercising Free Will and making conscious choices.
And now, as poker players say, it is time to up the ante.
Adventures In Flatland
Humans are destroying the biosphere through their relentless expansion on this planet. Humans don't seem able to prevent themselves from doing so. This quote and graph are taken from Defaunation in the Anthropocene, which was published in a special section of the July, 2014 issue of the journal Science.
The long-established major proximate drivers of wildlife population decline and extinction in terrestrial ecosystems—namely, overexploitation, habitat destruction, and impacts from invasive species—remain pervasive. None of these major drivers have been effectively mitigated at the global scale. Rather, all show increasing trajectories in recent decades. Moreover, several newer threats have recently emerged, most notably anthropogenic climate disruption, which will likely soon compete with habitat loss as the most important driver of defaunation...
The graphic shows that there is a "selective impact on animals with larger body sizes." It is as though these big species are either easily visible to humans, or threatening to them. Otherwise, as in the oceans, it seems that the natural world upon which humans utterly depend is invisible to them.
Only a strong theory of Human Nature could possibly explain such destructive (and self-destructive) behavior on such an enormous scale. Without such a theory, humans are merely whistling in the dark. As I noted before, a strong theory of Human Nature is conspicuous by its absence. To explain apparent limits on human thought and behavior, I hypothesized that unconscious processes are far more important than is generally thought. For example, it appears that the typical "blank slate" view itself derives from unconscious bias (rule (6) above).
I call the tendency to operate in the dark Flatland. This hypothesis was designed to get outside the normal human frame of reference, to add a "third dimension" to our understanding (hence the name).
We are dealing with Flatland when it is reasonable to infer that unconscious processes are driving some highly generalized behavior. What does "reasonable" mean in this context?
(7) It is reasonable to hypothesize that unconscious processes determine or influence some behavior if the behavior in question (1) is nearly universal, and (2) delusional to some significant extent (i.e., there is good reason to believe that the behavior departs from reality in a serious way, or departs from what rationality alone would dictate).
Consider economic growth and social inequality (review Rationalizing Growth and Inequality above). There is something suspect about the constant desire for more growth. Despite the grave consequences of the continuing human expansion on this planet, which our best science documents, the desire for more growth remains undiminished and the inherent goodness of growth remains unquestioned by all but a very tiny minority of the human population.
The circular or contradictory rationalizations which are used to justify more growth make us suspect that something deeper is going on. There is something fishy about the fact that growth is supposed to reduce inequality, but it seems that no amount of growth suffices to significantly reduce it (grotesque inequality and the poor are always with us).
Although humans everywhere assure us that (or act as if) they are making rational decisions about growth and inequality, we sense that there is an unseen elephant in the room which is unaccounted for. (In the future, that may be a hypothetical elephant, or a zoo elephant, because elephants will be extinct in the wild.)
In short, those post-hoc rationalizations documented earlier strongly suggest that the missing "elephant" lies in the human unconscious. Bear in mind that Flatland is a hypothesis about the limits of self-awareness. It has nothing whatsoever to say about what humans call "intelligence" (or lack thereof). To say that such and such a behavior is "stupid" is often a Flatland (unenlightening) characterization of a behavior whose roots probably lie in unconscious biases or motivations.
How do we recognize Flatland when we see it? The foregoing suggests this rule of thumb:
(8) We can recognize Flatland by looking for and identifying what is missing (omitted, left out, avoided) in human behavior, including belief systems, speech, writing, etc.
In short, we are looking for what lies buried in the unconscious, and is therefore left out of the behavior in question. In a published article or presentation, what wasn't discussed can tell us more than what was discussed, and often does. Alternatively, what is most important about a political policy is what is missing from it, not what it promises to deliver (and see here or here).
This may strike you as a very strange rule because what is missing or left out is invisible to us. How can we see things which aren't there? It is very difficult to identify what is missing, what remains hidden in the "shadow" (outside of awareness), to use Carl Jung's term. A few detailed examples will help. Consider the Wall Street Journal story Lobster Dishes With Mass Appeal (August 1, 2014).
Lobster might be the ultimate totem of the seaside experience. Though it looms large in the summer vacationer's imagination, it has traditionally been pigeonholed into a tediously narrow range of preparations. This is a shame, because lobster has so much to recommend it. It's sustainable, for one, in an ocean full of creatures being fished toward extinction. It's lean. It has also, in recent years, become a bargain.
The cost of meats, fish, poultry and eggs has risen, overall, by almost 8% in the past year, according to the Bureau of Labor Statistics, but lobster is getting more affordable. Thanks to a glut of so-called soft-shell lobsters—the delicate specimens in new shells caught off the coast of Maine in the summer months—the past three seasons have delivered deals for anyone buying close to the source. Consumers at the seaside this summer are finding local prices as low as $5 a pound, as much as 50% below where they were a decade ago.
... Thanks in part to easing prices, chefs have begun to reconsider the crustacean's potential, branching out from well-worn luxury presentations. Think lobster BLTs and lobster mac and cheese. Some of the most inspiring and inspired of these treatments draw on lobster's traditional uses in far-flung global cuisines.
Lobster has become a bargain! And better yet, this tasty crustacean is "sustainable ... in an ocean full of creatures being fished toward extinction." Yes, many creatures are being fished to extinction, but according to the Wall Street Journal it seems that "all is for the best," as Doctor Pangloss says in Voltaire's Candide, in this "best of all possible worlds." Do we not still have lobster to eat?
In a post called Fishing Down The Foodchain, Booming Shellfish Populations, which I wrote in 2013, we find out what the Wall Street Journal failed to mention. I cited a study called The unintended consequences of simplifying the sea: making the case for complexity, which appeared in the journal Fish and Fisheries in May, 2013. Here is the abstract, with some annotations to make its message clear.
Many over-exploited marine ecosystems worldwide have lost their natural populations of large predatory finfish and have become dominated by crustaceans [lobsters] and other invertebrates.
Controversially, some of these simplified ecosystems [the Gulf of Maine] have gone on to support highly successful invertebrate fisheries capable of generating more economic value than the fisheries they replaced. Such systems have been compared with those created by modern agriculture on land, in that existing ecosystems have been converted into those that maximize the production of target species.
Here, we draw on a number of concepts and case-studies to argue that this is highly risky. In many cases, the loss of large finfish has triggered dramatic ecosystem shifts to states that are both ecologically and economically undesirable, and difficult and expensive to reverse. In addition, we find that those stocks left remaining are unusually prone to collapse from disease, invasion, eutrophication and climate change.
We therefore conclude that the transition from multispecies fisheries to simplified invertebrate fisheries is causing a global decline in biodiversity and is threatening global food security, rather than promoting it.
The Gulf of Maine, which is the source of the lobster glut the Wall Street Journal celebrates, has become a lucrative lobster monoculture because—
... the gulf evolved from a marine system "dominated by large predatory fish," primarily cod, into one in which such species were almost completely absent.
"Big fish are ecologically extinct," marine biologist Robert Steneck said.
The world below the surface of the Gulf of Maine "is now (one) of abundant small fish. It is an ecosystem that has fundamentally changed," with "an unbelievably high density of lobster."
Last summer, a shell disease, caused by bacteria that invade through pores in the outermost layer of the shell, ravaged the lobster fishery in Rhode Island, said Steneck, and that experience "should be a wake-up call" for all Atlantic coastal fisheries.
In short, the increasingly popular lobster craze stems from an ecological disaster in the Gulf of Maine. See my original post for more details. This lobster example is instructive because —
- there is delusional positive spin (lobster is plentiful and great to eat!)
- we can identify what was omitted (the lobster glut stems from an ecological disaster)
This is only one example, but after one has seen thousands of examples just like it in a variety of subject areas, it becomes possible to detect a non-random, strong signal in what superficially appears to be a very noisy human sample. If one is careful to avoid what is called confirmation bias, an underlying pattern emerges from the superficial chaos of human behavior.
The criteria we set in (7) and (8) are satisfied, but what if I had never tracked overfishing and its consequences on DOTE? Suppose I didn't know anything about the subject. Clearly it would have been impossible for me to identify what was missing from the Wall Street Journal story.
And thus we are forced to conclude that you've got to be very well-informed about the subject in question (e.g., overfishing and its consequences, or inequality and economics) in order to truly see what's going on. One person acting alone can only see the tip of the Flatland iceberg.
|Andy Revkin's Delusions — An Adventure in Flatland|
Andy Revkin is an environmental journalist and science generalist who writes the dotearth blog at the New York Times. Revkin is thus uniquely positioned to make a realistic assessment of the enormous environmental problems humankind faces in the 21st century, and to explain those problems to a large, influential audience. This is not to say that explaining those risks will have any lasting or significant effect—review the work of Dan Kahan and others as described above.
Revkin begins his "good" anthropocene talk [video above] by comparing human expansion on this planet to bacteria multiplying exponentially in a petri dish. He further observes that science is telling us that there are hard limits to our expansion (the dish has an edge). Clearly there are grave risks going forward with respect to the so-called "carrying capacity" of the planet [graph below].
Early in his talk, Revkin says:
Revkin then spends the next 47 minutes spinning an optimistic fairy tale which Clive Hamilton called (paraphrasing) "an unscientific fantasy world of [Revkin's] own construction." Following rule (8) in the text, we need to look for what is missing (left out) in this "good" anthropocene talk.
Revkin quite obviously left out a realistic (or any) assessment of the risks humankind faces going forward. He paid lip-service to those risks in the first 3 minutes of his talk, and then proceeded to act as though those risks do not exist. We are entitled to hypothesize from this conspicuous omission that we are dealing with optimism bias and existential threat filtering (rule (5) above).
Assuming that is so, might Revkin be aware of his hard-wired optimism and threat filtering? Obviously not, for those biases reside in the unconscious mind. Therefore, when he says "I kinda choose" to say "Wow!" instead of "Oh, My God!" regarding the anthropocene, we are entitled to hypothesize that Revkin did not make a conscious choice. Revkin's unconscious mind made a choice, and he provided a 47-minute post-hoc rationalization of his "decision" to view an ongoing ecological catastrophe through the lens of obligatory hope and wishful thinking.
Revkin's recent public behavior serves to reinforce our reasonable inferences. It seems that Revkin has looked into findings in the psychological and brain sciences lately. In his anthropocene talk, he mentions that he is familiar with Dan Kahan's work, and cites Yale's culturalcognition.net. In the video, Revkin calls the human dimensions of various reactions to the climate problem "scary," and then continues on as if those human dimensions do not exist.
He picks up the social theme again in his concluding remarks at a 4-day conference on sustainability held at the Vatican earlier this year. There is a video available, but Revkin rambles as he addresses the group, so I will quote Revkin's notes for the talk, which he published in Can a Pope Help Sustain Humanity and Ecology? (May 6, 2014).
What is this? Psychological studies have "revealed deeply ingrained human traits" which seem to set boundaries on what humankind can do in the face of big existential threats like anthropogenic climate change, but Revkin immediately rejects (filters) this insight only a few lines later, saying "scientific knowledge reveals options [and] values determine choices." Revkin continues in the same vein.
This is our old friend the "blank slate" again in a new context. The "blank slate" is naturally a refuge for those hard-wired for hope.
If Revkin had been aware of the glaring inconsistency in his remarks, one hopes he would have changed them, which itself would have presented him with some very big problems. For example, he might have been forced to ask himself questions like—
Asking these kinds of hard questions quickly becomes a slippery slope to Hell if you're in the communications business as Revkin is, and you're a high-status, successful human being with children, as Revkin also is.
And now Revkin's feet leave the ground altogether. We get sentimental mush like this:
Navigating these [hard] questions can leave one feeling sapped and paralyzed.
If the "blank slate" seems like a safe refuge for those who can not deal with reality, this kind of spiritual fantasy—we are building a noosphere? a planetary mind? we are looking for a miracle of love and unselfishness?—is the last refuge for those whose flight from reality is nearly complete.
Bear in mind that Andy Revkin is one of the "good guys"—those fighting the good fight against destruction of the planet's biosphere. The vast majority of humans don't give a damn about the biosphere and never will.
Revkin knows about hard limits on human behavior, but he can not accept them because he himself is subject to those limits. Earlier in his remarks, Revkin hits the nail on the head.
Humanity is in "a race between potency and awareness"—precisely. Unfortunately, if Revkin is fooling himself, he can not level with the rest of us. If there is a way out of the mess humanity has made, greater awareness is the indispensable prerequisite for finding it. Such awareness begins with an unqualified acknowledgement of hard limits on human behavior. For example, it becomes possible to question the primacy of economic growth in human affairs only after one becomes aware of an inordinate, irrational devotion to it.
In his 1734 Essay on Man, Alexander Pope wrote—
Pope was on to something. We need to stop hoping for "miracles of love and unselfishness."
Otherwise, it will simply be "business as usual" for Homo sapiens, a species whose best and brightest continue to delude themselves and each other.
Once you train yourself to recognize Flatland, you will see it everywhere you look. For example, as I was writing this text—this really just happened—I wanted to give you a good link to "carrying capacity" in the Revkin textbox above. I found one, and used it. As I looked that Rio+20 study over, I noticed that it presented a survey of estimates of the Earth's human carrying capacity. Aside from the graph I took from it above, it also included this image.
The caption reads "estimates of Earth's carrying capacity vary dramatically," but when we apportion those estimates, we find that 52 estimates (80%) say we will exceed the planet's carrying capacity in the future, and only 13 estimates (20%) say we have already exceeded it, i.e., we have overshot the Earth's ability to support us all. This distribution of estimates is shocking, given that those who bother to estimate the planet's carrying capacity are precisely the people who worry about overshoot in the first place. Thus we are dealing with a very small self-selected group.
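The apportionment above is simple arithmetic. As a sanity check, here is the split worked out explicitly (the counts 52 and 13 are the ones read off the Rio+20 survey graphic discussed above):

```python
# Survey of published estimates of Earth's human carrying capacity,
# counts taken from the Rio+20 study's summary graphic discussed above.
future_overshoot = 52   # estimates placing overshoot in the future
already_overshot = 13   # estimates saying overshoot has already occurred
total = future_overshoot + already_overshot  # 65 estimates in all

print(f"{future_overshoot / total:.0%} say overshoot lies ahead")     # → 80%
print(f"{already_overshot / total:.0%} say overshoot has happened")   # → 20%
```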
Once again, we are entitled to hypothesize that optimism bias and threat filtering are at work in this example. Therefore, we can not be sure that we are looking at a realistic assessment of risk in these estimates. We can guess that a realistic assessment of the Earth's carrying capacity is missing in the large majority of them, though we would have to look at each one individually to reach that conclusion. That said, you will find that pushing really bad news off into the future is a common avoidance strategy. This is yet another form of existential threat filtering (the "Bad News" Rule above).
That concludes our too-brief introduction to Flatland. To explain the observations discussed to this point, I came up with the conceptual model below. Some of the text is from the original post.
This overly simple model attempts to capture the idea that hard-wired, unconscious motivations or cognitive processes (including various biases) drive much of human behavior. God only knows how the interplay between the unconscious and awareness actually works, or where consciousness itself comes from. Neuroscientists don't know, and neither does anyone else. That said, the key observation is that information and control flow only one way—from the unconscious into awareness, and not the other way (text in lower left above).
Thus we hypothesize that there is a fundamental integration problem regarding awareness and the unconscious. If this Flatland "architecture" of the mind is conceptually correct, it implies that consciousness (subjective awareness, the Ego, the sense of self) arose as an evolutionarily useful byproduct of bigger, better connected brains.
If that reasonable conjecture is correct, we would expect to observe the basic illusion that awareness is running the show, whereas it is actually the poor dependent in a master/slave relationship. That is the fundamental human flaw.
Consequently, humans do not have the capacity to come to grips with and respond appropriately to situations which fly in the face of unconscious motivations and biases. Humans are simply ill-equipped to deal with those kinds of realities. We might state this as rule (9):
(9) Humans typically delude themselves and each other if there is a conflict with unconscious motivations or biases*. This is especially true if something important is at stake (existential threat filtering, as discussed previously).
* Note that I wrote "if" (specifying a sufficient condition) and not "if and only if" (specifying both sufficient and necessary conditions). In other words, humans also delude themselves and each other when apparently nothing is at stake. That's just the way they are.
Looking at the bigger picture in the 21st century, we can surmise that the oddly large Pleistocene ("stone age") brain was not "designed" by evolution to handle big existential threats like global warming, fossil fuels depletion, the destruction of marine ecosystems and the Sixth Extinction. In fact, it is difficult to get the vast majority of humans to acknowledge that these are indeed existential threats.
In the daily hubbub of current “crises” facing humanity, we forget about the many generations we hope are yet to come. Not those who will live 200 years from now, but 1,000 or 10,000 years from now. I use the word “hope” because we face risks, called existential risks, that threaten to wipe out humanity. These risks are not just for big disasters, but for the disasters that could end history...
I have selected what I consider the five biggest threats to humanity’s existence. But there are caveats that must be kept in mind, for this list is not final... [and they are, sans details]
- Nuclear War
- Bioengineered Pandemic
- Superintelligence (AI, machine intelligence)
- Nanotechnology
- Unknown unknowns (things we haven't thought of yet)
You might wonder why climate change or meteor impacts have been left off this list.
Climate change, no matter how scary, is unlikely to make the entire planet uninhabitable (but it could compound other threats if our defenses to it break down). Meteors could certainly wipe us out, but we would have to be very unlucky.
The average mammalian species survives for about a million years. Hence, the background natural extinction rate is roughly one in a million per year. This is much lower than the nuclear-war risk, which after 70 years is still the biggest threat to our continued existence.
Deconstructing Flatland distortions and omissions is always much harder than simply asserting them, so please bear with me. Surely we can all agree Homo sapiens is not the "average" mammalian species, so I will skip the proof.
Sandberg cites the background extinction rate, which in this context is meant to apply to humans. Apparently, we are being asked to believe that future advances in nanotechnology present a greater risk to our existence than three simultaneous human-caused environmental catastrophes—global warming, ocean acidification, and the Sixth Extinction. Each crisis is happening in less than the blink of an eye on the geological timescale. Also bear in mind that the worst effects of these crises still lie ahead of us.
The actual background and current rates for mammals were calculated in Has the Earth's sixth mass extinction already arrived? (Nature, Vol. 471, Issue 7336, March 3, 2011). Rates are expressed in extinctions per million species-years (E/MSY). The fossil record reveals that the background rate is approximately (≈) 1.8 E/MSY for mammals. What is the current rate?
The maximum observed rates since a thousand years ago (E/MSY ≈ 24 in 1,000-year bins to E/MSY ≈ 693 in 1-year bins) are clearly far above the average fossil rate, and even above those of the widely recognized late-Pleistocene megafaunal diversity crash (maximum E/MSY ≈ 9).
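To make the gap concrete, here is a quick back-of-the-envelope comparison of those published rates. The E/MSY figures are the ones quoted above from the Nature paper; the code is just illustrative arithmetic. Note that Sandberg's "one in a million per year" corresponds to 1 E/MSY—below even the fossil background rate for mammals.

```python
# Extinction rates in extinctions per million species-years (E/MSY),
# as quoted above from "Has the Earth's sixth mass extinction already
# arrived?" (Nature 471, 2011).
background = 1.8          # fossil background rate for mammals
megafaunal_crash = 9.0    # late-Pleistocene megafaunal crash (max observed)
current_1000yr = 24.0     # max observed rate, 1,000-year bins
current_1yr = 693.0       # max observed rate, 1-year bins

for label, rate in [("megafaunal crash", megafaunal_crash),
                    ("current, 1,000-yr bins", current_1000yr),
                    ("current, 1-yr bins", current_1yr)]:
    print(f"{label}: {rate / background:.0f}x background")
# → prints 5x, 13x, and 385x background, respectively
```

Even the most conservative binning puts the current rate an order of magnitude above background, which is the understatement at issue in what follows.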
Clearly Sandberg has understated the rate at which mammals are going extinct, which allows him to also understate the risk to the Earth's biosphere and, eventually, to humanity itself. Sandberg asserts that global warming is not an existential threat because it is unlikely to make the entire planet uninhabitable! That sets the bar pretty high. Still, this bizarre rationalization—we've seen plenty of those previously—is not the most interesting thing in this example.
What's interesting from a Flatland point of view is that the four "biggest threats" to humanity all arise from human technological cleverness. And if you look at the picture of the Flatland brain I drew above, you will see something I call the technological instinct in the unconscious. That conjecture derives from the observation that humans, presented with some intractable, self-created problem, always seem compelled to apply technological fixes when behavioral changes are clearly called for. In such cases, we hypothesize that unconscious processes proscribe or inhibit the required changes.
Thus we can surmise that Sandberg's strangely constrained view of what constitutes an existential threat derives from an overdose of instinctual technophilia. More colloquially, Sandberg's views amount to this: "We humans are really technologically clever, so clever, in fact, that sometimes we're too clever for our own good!"
The probability of machine "superintelligence" arising at all, let alone doing us in, is indistinguishable from zero, but Sandberg places it among his four biggest existential threats while leaving ongoing catastrophes like the Sixth Extinction and human alteration of the Carbon Cycle off the list. Like nuclear fusion, super-smart machines are decades away, and always will be.
Such a radical departure from reality requires an explanation, and Flatland provides one. Anders Sandberg can not help but be optimistic about the future potency of machine intelligence or nanotechnology even when he is talking about the end of the world.
Homo sapiens arose 200,000 years ago. Never before have humans been confronted with such large problems on a planetary scale. These self-created problems get worse every day, and humans have not demonstrated that they have the capacity to respond to them in a meaningful way.
Such failure also demands an explanation. Flatland is my attempt to explain those failures.
Barriers To Self-Knowledge
In overpopulating the unconscious, I undoubtedly made it overly complex. We have seen how optimism bias and subtle defense mechanisms—existential threat filtering in its many guises—work together, but these unconscious processes, along with conceptually similar phenomena like self-serving bias, the Pollyanna hypothesis and Dan Ariely's self-concept maintenance, may all be manifestations of some unidentified cognitive scheme which causes people to maintain a positive outlook and filter generally negative or self-negating information.
It is well-established that a generally positive attitude, whether it applies to one's own future, the future of the social groups one belongs to, or the future of the species, is adaptive (health-promoting), even when such an attitude flies in the face of reality. This text is from Maria Konnikova's Don't Worry, Be Happy, which I quoted in a previous section.
By 1992, Alloy and Abramson had replicated their findings in numerous contexts and could take the logic further. Not only were depressed individuals more realistic in their judgments, they argued, but the very illusion of being in control held by those who weren’t depressed was likely protecting them from depression in the first place. In other words, the rose-colored glow, no matter how unwarranted, helped people to maintain a healthier mental state. Depression bred objectivity. A lack of objectivity led to a healthier, more adaptive, and more resilient mind-set. In a 2004 meta-analysis, Abramson and colleagues at the University of Wisconsin at Madison confirmed that the positivity bias held firm both internationally and in large non-student samples. The over-all effect, they concluded, “may represent one of the largest effect sizes demonstrated in psychological research on cognition to date.”
Depression breeds objectivity! There is even a technical term for the tiny minority of humans who lack this generalized optimism—they are called depressive realists. There is a feedback loop at work here, for it is also true that objectivity breeds depression.
Despite results like those cited just above, a comprehensive theory of human nature is conspicuous by its absence. So, where's that theory of human nature?
There are lots of bits and pieces of such a theory lying around waiting for someone to put them all together. Current research in the neurological and social sciences resembles the work of a botanist walking in a forest identifying this or that kind of tree. What is missing is an investigation of the human forest itself, taken as a whole. Naturally, much of this work takes place in Flatland. This is cognitive scientist Gary Marcus talking about studies in evolutionary psychology (2nd link, this paragraph).
In my own work, I have thought deeply about what [Daniel] Kahneman's work might mean for evolutionary psychology. The default assumption of evolutionary psychology is one of optimality: give evolution enough time, and eventually it will alight on a beautiful, elegant solution, like the retina, sensitive to a single photon of light.
The reality of evolution is that it is a blind process, with no guarantee of alighting on optimality. Too much of evolutionary psychology, in my view, dwells on systems in which the mind is apparently optimal; the real challenge ought to be in understanding how those apparently-optimal systems live alongside other systems that sometimes confoundingly seem to do the wrong thing. Until evolutionary psychology can explain anchoring, availability, and future discounting as well as it can explain mate selection and reciprocal altruism, it will be only half a science.
The ultimate goal of human psychology must be to characterize both what we do well, and what we do poorly, and how we balance the two. Kahneman and Tversky's work is, without question, the best place to start.
Evolutionary psychology is "only half a science," and we can guess which half that is. The "default assumption" of those researchers is "optimality" in human cognition. After all, aren't we humans pretty much perfect just the way we are? (paraphrasing Andy Revkin).
Work on relatively minor unconscious biases which distort personal decision-making (anchoring, etc.) is of course only distantly related to the fact that Homo sapiens is destroying the Earth's biosphere. Nor would the scientists in question claim that their work deals with such questions. And that's the problem we now face in a nutshell.
There is a near-total disconnect between scientific investigations of the mind and the big existential threats facing humankind in the 21st century. It's not as if we humans have another 1000 years to figure out how the mind works. This is called "fiddling as Rome burns," and humans are very, very good at it.
With results like Tali Sharot's identification of optimism bias, it is easy to put 2 and 2 together, but in broader, murkier areas like the unquestioned, unquestionable and insatiable desire for growth in population or consumption, neither neuroscience nor psychology offers any guidance. When a person travels down that road, he walks alone. Do we know why that is?
Recall rule (9) above, repeated here for your convenience.
(9) Humans typically delude themselves and each other if there is a conflict with unconscious motivations or biases. This is especially true if something important is at stake (existential threat filtering, as discussed previously).
Clearly something important is at stake in a theory of human nature because, as we have seen previously, such a theory gives us very bad news—there are hard limits on human behavior. Moreover, we know that humans filter very bad news. And so we arrive at (10).
(10) Human nature generally precludes the attainment and application of true and consequential self-knowledge in so far as the Flatland mind does not have the capacity to understand itself.
Unconscious motivations and biases are the source of human self-delusion. We've seen that in our consideration of existential risk. Let's assert that as statement (A).
If you ask humans whether they assess risks accurately (e.g., climate change risk), they will tell you of course they do, despite the fact that human assessments of global warming risk run the gamut from "there is no risk" to "runaway warming will make the Earth as hot as Venus." If humans could judge risk correctly, we would expect to see convergence on a well-delineated set of reasonable assessments, each of which would have some associated uncertainty depending on what humans do in the future. I will discuss climate change risks in Part II.
Now consider (B).
We've got to consider the human source of these assessments. (B) is true because (A) implies (B), i.e., if humans did not filter existential threats, they would be better able to assess risk.
As we've seen, the fact that (A) and therefore (B) are true is itself very bad news, i.e., the fact that humans are self-deluding means they are "whistling in the dark" as discussed above. Watch the Barbara Ehrenreich video at the end of this essay—"delusion is always dangerous."
What made the Dan Kahan and Andy Revkin cases interesting was that both men were aware of (B), but could not accept that conclusion because (B) itself is really bad news, not least for the optimistic worldviews of Revkin and Kahan themselves.
Therefore, instinctual optimism/positivity (A) makes it impossible for humans to accept or fully acknowledge (B). And now we are inexorably led to the grim conclusion (C).
And if (B) is not correctable, neither is (A). Conclusion (C) is the most dangerous existential threat humans face.
If one comprehends and fully accepts (B), i.e., the usual filtering does not apply, as in the Brendan Nyhan example discussed above, it is only a short leap to the conclusion (C). You will recall that Nyhan became depressed when his research revealed (B).
In short, "instinctual optimism" means what it says. Humans generally can not confront and acknowledge human shortcomings. Those limits reside in the unconscious, and always will.
The argument above is thus an informal "proof" of (10).
The implications of (10) are very grim indeed. To cite some examples used in this essay, (10) means that Andy Revkin will never become aware of his instinctual optimism, which is rooted in a delusional "blank slate" view of the human condition. (10) means that Dan Kahan will never accept his own results. (10) means that Anders Sandberg will never accept the fact that various anthropogenic environmental catastrophes constitute an existential risk to humanity.
Flatland provides a crude theory of human nature, but even if there were a well-supported, broad theory of human nature which attempted (for example) to delineate growth instincts in the human animal, a theory in and of itself changes nothing. I know this from direct experience.
The "attainment of self-knowledge" might be reframed as "achieving greater self-awareness" in (10). Bear in mind that we are trying to explain things like the absence of a species-wide debate on the desirability of further growth in populations or consumption. Thus we find nothing about the unquestioned desire for growth in the social sciences, or we get Flatland (politicized, pointless) debates about global warming as the global economy expands.
The lack of realism is very discouraging. Harvard cognitive scientist Steven Pinker wrote a 500-page tome called The Blank Slate — The Modern Denial of Human Nature. This sounds promising until you actually look through the book and realize that Pinker ignored the link between human nature and the huge environmental problems attending human expansion on this planet. It is easy to see why: Pinker's only remarks on "Malthusian prophecies" reveal him to be a garden-variety optimist (pp. 236-239). He heartily endorses the views of economist Paul Romer, who theorizes in the usual way that human ingenuity—endless human cleverness, especially ideas leading to new technologies—will solve any resource or environmental problems which may turn up.
Pinker believes that human intelligence (cleverness) is effectively infinite.
"Our Nature is an illimitable space through which the intelligence moves without coming to an end," wrote the poet Wallace Stevens in 1951. The limitless of intelligence comes from the power of a combinatorial system. Just as a few notes can combine into any melody and a few characters can combine into any printed text, a few ideas—PERSON, PLACE, THING, CAUSE, CHANGE, MOVE, AND, OR, NOT—can combine into an illimitable space of thoughts.
The ability to conceive an unlimited number of new combinations of ideas is the powerhouse of human intelligence and a key to our success as a species. Tens of thousands of years ago our ancestors conceived new sequences of actions...
The combinatorial power of the human mind can help explain a paradox about the place of our species on the planet. Two hundred years ago the economist Thomas Malthus called attention to...
The immediate problem with Malthusian prophesies is that they underestimate the effects of technological change in increasing the resources that support a comfortable life... Many people are reluctant to grant technology this seemingly miraculous role... Technology may have bought us a temporary reprieve but it is not a source of inexhaustible magic. Optimism would seem to require a faith that the circle can be squared.
But recently the economist Paul Romer has invoked the combinatorial nature of cognitive information processing to show how the circle might be squared after all. He begins by pointing out that human existence is limited by ideas, not by stuff...
It is embarrassing to read this Flatland nonsense—the adolescent simplicity, the radical departure from reality, the mindless cheerleading for the Homo sapiens team. You will recall Kurt Vonnegut's take on this kind of thing: "it appears to me that the most highly evolved Earthling creatures find being alive embarrassing or much worse."
Reading Pinker's breathless endorsement of endless human cleverness, one wonders why anybody would waste their time worrying about the human-caused Sixth Extinction, human-caused climate change, or human-caused destruction of marine ecosystems. Why did Elizabeth Kolbert even bother to write that book?
Pinker is merely a run-of-the-mill optimist of course, but considering that this mindless drivel comes from a cognitive scientist who wrote a voluminous book about human nature, and the modern denial of that nature, we might have hoped for more. The "barriers to self-knowledge" I've discussed in this essay are real. The rabbit hole goes very, very deep. You can toss Steven Pinker into the hopeless bin (rule (10)).
I was limited to a few examples in this essay, but examples like these could be enumerated ad nauseam. And that's what I did on DOTE over the course of four and a half years, for example here. Looking back, I repeated myself a lot, to an embarrassing extent, actually, but that is a result of publishing a daily blog.
Global warming is really Bad News. Ocean acidification is really Bad News. The Sixth Extinction is really Bad News. Etc. But the hypotheses I've laid out above ((9) and (10) above) are far and away the worst news humanity has ever received (or not received, as it turns out).
The Flatland model is falsifiable. Humans could start responding in new and positive ways to their self-created problems, both large and small. If they did so, Flatland would not be an accurate sketch of human nature. I would love to be proved wrong.
But I don't see that happening. Flatland implies that humans are limited and therefore predictable. Most everything I see bears that out (there are vanishingly rare exceptions of course). Quoting Walter Munk and Andy Revkin, I'm not going to hold my breath waiting for a "miracle of love and unselfishness."
"Delusion is always dangerous" — Barbara Ehrenreich
This ends Part I of Adventures In Flatland.
Decline of the Empire
August 31, 2014