I will quote from Jill Lepore's "After the Fact" (The New Yorker, March 21, 2016 issue). She's talking about the views of Michael Lynch, who is introduced in this quote.
Most of what is written about truth is the work of philosophers, who explain their ideas by telling little stories about experiments they conduct in their heads, like the time Descartes tried to convince himself that he didn’t exist, and found that he couldn’t, thereby proving that he did.
Michael P. Lynch is a philosopher of truth. His fascinating new book, “The Internet of Us: Knowing More and Understanding Less in the Age of Big Data,” begins with a thought experiment: “Imagine a society where smartphones are miniaturized and hooked directly into a person’s brain.” As thought experiments go, this one isn’t much of a stretch. (“Eventually, you’ll have an implant,” Google’s Larry Page has promised, “where if you think about a fact it will just tell you the answer.”)
Now imagine that, after living with these implants for generations, people grow to rely on them, to know what they know and forget how people used to learn—by observation, inquiry, and reason. Then picture this: overnight, an environmental disaster destroys so much of the planet’s electronic-communications grid that everyone’s implant crashes.
It would be, Lynch says, as if the whole world had suddenly gone blind. There would be no immediate basis on which to establish the truth of a fact. No one would really know anything anymore, because no one would know how to know.
I Google, therefore I am not.
Lynch thinks we are frighteningly close to this point: blind to proof, no longer able to know. After all, we’re already no longer able to agree about how to know. (See: climate change, above.) Lynch isn’t terribly interested in how we got here. He begins at the arrival gate. But altering the flight plan would seem to require going back to the gate of departure.
My Flatland model is the gate of departure.
How do we know what we know? Lynch is saying that people "know" things based on what Google tells them. Without Google to tell them things, they would know nothing. But that is simplistic nonsense.
The Flatland model as laid out in the three essays (and an unfinished fourth) yields a deceptively simple answer to the knowing question. The abstract model of how the brain "works" pictured below is included in that unpublished fourth essay. In that essay I called the brain a "reducing valve," a term the following text introduced.
In “The Doors of Perception,” Aldous Huxley concluded from his psychedelic experience that the conscious mind is less a window on reality than a furious editor of it.
The mind is a “reducing valve,” he wrote, eliminating far more reality than it admits to our conscious awareness, lest we be overwhelmed. “What comes out at the other end is a measly trickle of the kind of consciousness which will help us to stay alive.” Psychedelics open the valve wide, removing the filter that hides much of reality, as well as dimensions of our own minds, from ordinary consciousness...
Here's a brief explanation of what the pictured model says—
Confirmation bias filters what we attend to. Roughly speaking, humans selectively attend to information which confirms what they already "know" (top, left).
Information which 1) contradicts what people "know"; and 2) is threatening in some way is filtered such that it never gets into memory, or becomes distorted in memory (top, center).
Compromised memory, along with the entire unconscious and innate suite of Flatland biases, instincts, defenses, filters, etc., serves as input to a "Gatekeeper" (akin to Michael Gazzaniga's interpreter), which reduces that information down to what I call a congruent output (lower left).
Necessarily, a congruent output 1) satisfies (is in agreement or harmony with) all of the constraints Flatland imposes; and 2) is in harmony with what resides in (already compromised) associated memory (middle).
Congruent outputs appear after the fact (post hoc) in consciousness (lowest left and right). Consciousness itself is epiphenomenal, meaning that there is no conscious "mind" separate from the physical brain. Consciousness, however it is implemented in the brain, merely reinforces the congruent outputs the Gatekeeper gives it. Thus you are almost always absolutely sure about what you "think" you know.
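The pipeline just described can be caricatured in a few lines of code. This is a deliberately crude sketch, not an implementation of anything in the brain: the function names (`attend`, `store`, `gatekeeper`) and the example claims are my own illustrative inventions. It shows only the information flow: confirmation bias filters attention, threatening contradictions never reach memory, and the Gatekeeper emits only an output congruent with what memory already holds.

```python
# Toy sketch of the Flatland pipeline. All names and data here are
# illustrative inventions, not part of any published model.

def attend(beliefs, incoming):
    """Confirmation bias: selectively attend to confirming claims."""
    return {claim for claim in incoming if claim in beliefs}

def store(memory, beliefs, incoming, threatening):
    """Claims that both contradict beliefs and threaten never reach memory."""
    safe = {c for c in incoming if not (c not in beliefs and c in threatening)}
    return memory | safe

def gatekeeper(memory, candidates):
    """Reduce candidates to a congruent output: one that agrees with memory."""
    congruent = sorted(c for c in candidates if c in memory)
    return congruent[0] if congruent else None

beliefs = {"the climate is fine"}
memory = set(beliefs)
incoming = ["the climate is fine", "February 2016 broke temperature records"]
threatening = {"February 2016 broke temperature records"}

memory = store(memory, beliefs, incoming, threatening)
print(gatekeeper(memory, incoming))  # prints: the climate is fine
```

The point of the sketch is that the record-breaking claim is dropped before it ever becomes a candidate output; the "answer" consciousness receives was constrained in advance by what was already believed.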
Oversimplifying, "congruent outputs" are what the "external self" knows. That's the answer to the title question. As the diagram indicates, we are always dealing with an external self when we deal with other persons—that person can be at various times the bullshitter, the actor, the publicist, the defendant, the prosecutor, etc., just as other people deal with your external self.
Humans don't arbitrarily download information from the internet. The vast majority of them don't use the Google portal to learn things. That's very rare, at least among non-scientists. Humans establish a social identity on the internet which pretty much assures that any information they get through their self-selected ("chosen") portals will be congruent with their internal self as shown above.
Those who live and breathe politics go to political websites. Progressives go to progressive websites. Doomers go to doomer websites. Environmentalists go to environmentalist websites. And so on. Social apps like Facebook and Twitter, insofar as they are generic, cater to all human social groups and facilitate the formation of social subgroups (likes, dislikes, etc.). Very often, a person's self-selected portal is simple texting, which is purely social and personal.
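Portal self-selection can be sketched the same way. The toy below, with invented portal names and a made-up trait-overlap score, shows only the mechanism being claimed: each person picks the feed that best matches their existing identity, so whatever arrives through it is congruent by construction.

```python
# Toy illustration of portal self-selection. Portal names, identity
# traits, and the overlap score are all invented for illustration.

portals = {
    "progressive-site": {"politics": "left"},
    "doomer-site": {"outlook": "collapse"},
    "environmentalist-site": {"issue": "environment"},
}

def choose_portal(identity):
    """Pick the portal sharing the most traits with a person's identity."""
    def overlap(tags):
        return sum(1 for k, v in tags.items() if identity.get(k) == v)
    return max(portals, key=lambda name: overlap(portals[name]))

print(choose_portal({"politics": "left", "issue": "housing"}))
# prints: progressive-site
```

Note what never happens in this sketch: no portal is ever chosen *because* it would contradict the chooser. Selection runs on congruence, which is the claim in the paragraph above.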
Thus the subdivided structure of the internet implements the Flatland model pictured above, for the most important Flatland instincts are by far our social instincts, as I wrote about in the third essay. Although I have left a great deal out of this summary, you are now in a position to evaluate this text describing "philosopher of truth" Michael Lynch's views—
Then came the Internet. The era of the fact is coming to an end: the place once held by “facts” is being taken over by “data”...
This is making for more epistemological mayhem, not least because the collection and weighing of facts require investigation, discernment, and judgment, while the collection and analysis of data are outsourced to machines. “Most knowing now is Google-knowing—knowledge acquired online,” Lynch writes in “The Internet of Us” (his title is a riff on the ballyhooed and bewildering “Internet of Things”).
Google-knowing? The Flatland model exposes this as specious nonsense.
We now only rarely discover facts, Lynch observes; instead, we download them. Of course, we also upload them: with each click and keystroke, we hack off tiny bits of ourselves and glom them on to a data Leviathan.
That "data Leviathan" humans create (downloading, uploading) mostly reflects Flatland instincts at work (social instincts, confirmation bias, "bad news" filtering, etc.). On the other hand, when NASA and others tell you that February 2016 demolished all previous single-month temperature anomalies, you can believe it.
“The Internet didn’t create this problem, but it is exaggerating it,” Lynch writes, and it’s an important and understated point. Blaming the Internet is shooting fish in a barrel—a barrel that is floating in the sea of history.
It’s not that you don’t hit a fish; it’s that the issue is the ocean. No matter the bigness of the data, the vastness of the Web, the free-ness of speech, nothing could be less well settled in the twenty-first century than whether people know what they know from faith or from facts, or whether anything, in the end, can really be said to be fully proved.
Faith? Facts? Can anything be fully proved? Well, obviously religious instincts are part of the answer, but most of this is silly nonsense, mere Flatland confusion (lack of self-knowledge).
As far as I'm concerned, to a good enough first-order approximation, the Flatland model above explains where human "knowledge" comes from, and it ain't a pretty sight. A second-order approximation might discover what is actually implemented in the physical brain, thus augmenting or even supplanting the Flatland model. I am confident that most (if not all) of the fundamental unconscious processes the Flatland model captures would be retained in or follow from a model which detailed how unconscious processes are implemented.
Obviously I will have more to say about this in the future. If you have questions about this model, ask them in the comments and I will do my best to answer them.