Extending the mind, retracting intelligence and augmenting reality
Those who support this second claim tend (rightly so, to a certain degree) to subordinate the potential of a centre-less distribution of knowledge to the limits imposed by the capitalist, post-industrial structure. As Nicholas Carr argued a couple of years ago:
Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it [Google] carries out thousands of experiments a day…and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.
Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction.
The argument, then, is that the supposed democratization of information retrieval is actually a new way of channelling consumers into the arms of the appropriate service or goods provider, where the ‘free’ bit only goes so far, beyond which the consumer will have to pay for what he or she wants. So far so good (and this would deserve another post on IT and open access). But the thesis goes further than this: it claims that this flattening of barriers between information will produce a flattening of intelligence itself, since ‘Google is doing the work of the mind’. Indeed, quoting an essay by Richard Foreman, Carr concludes that
As we are drained of our “inner repertory of dense cultural inheritance,” Foreman concluded, we risk turning into “‘pancake people’—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.”
That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.
This thesis seems to be somewhat supported by a study from University College London, Information Behaviour of the Researcher of the Future, where the researchers argue that:
CIBER deep log studies show that, from undergraduates to professors, people exhibit a strong tendency towards shallow, horizontal, ‘flicking’ behaviour in digital libraries. Power browsing and viewing appear to be the norm for all. The popularity of abstracts among older researchers rather gives the game away. Society is dumbing down.
Is the Internet, together with all sorts of technological prosthetic apparatuses (from an external HD to an iPhone), turning our society into an idiocracy? And what is the difference between intelligence and artificial intelligence? In what way is the latter ‘flatter’?
Some people (here Carl Zimmer) disagree with these predictions of ‘dumbing down’:
The extended mind theory doesn’t just change the way we think about the mind. It also changes how we judge what’s good and bad about today’s mind-altering technologies. There’s nothing unnatural about relying on the Internet—Google and all—for information. After all, we are constantly consulting the world around us like a kind of visual Wikipedia. Nor is there anything bad about our brains’ being altered by these new technologies, any more than there is something bad about a monkey’s brain changing as it learns how to play with a rake.
There’s no point in trying to hack apart the connections between the inside and the outside of the mind. Instead we ought to focus on managing and improving those connections. For instance, we need more powerful ways to filter the information we get online, so that we don’t get a mass case of distractibility.
I don’t want to disguise my sympathy for technological enhancements and augmented reality, but I do think that one problem should be highlighted, one which both Carr and Zimmer mention: the problem of distractibility. Is ‘superficial reading’ a necessary byproduct of the information overload the Internet exposes us to? And if so, does superficial reading equate to a diminishing of ‘concentrated thought’? In other words, does power browsing make us dumb, while reading a good old-fashioned book from cover to cover makes us smarter? My answer is no.
The assumption behind such a claim is that ‘intelligence’ works linearly, algorithmically, and therefore that our interfacing with structured content (such as a whole book) will produce the maximal assimilation of knowledge and will activate the maximal exercise of cognitive abilities. This is just not the case. A Heideggerian approach to intelligence, for example (and here of course I think of the work of Hubert Dreyfus), dispels such an ideal. I believe that Zimmer, when claiming that ‘we are constantly consulting the world around us like a kind of visual Wikipedia’, is arguing for precisely this, if we replace ‘consulting’ with ‘coping with’. Our interaction with the world, being in the world, never comes through as an organized structure of information: we move through it haphazardly, and referential totalities are not pre-packaged but organized through the act of living in them (as an imperfect example, consider the difference between Google, whose results are conditioned by, but do not replace, human contextualizing preferences, so that Google does not do the work of the mind, and Wolfram Alpha, designed to give ‘systematic knowledge’). We are already in a world, and in our dealings with it we hyperlink this or that facet of it.
[Incidentally, Dreyfus, pushing this argument against the 'extended mind', argued that
for a Heideggerian, all forms of cognitivist externalism presuppose a more basic existential externalism where even to speak of “externalism” is misleading since such talk presupposes a contrast with the internal. Compared to this genuinely Heideggerian view, extended-mind externalism is contrived, trivial, and irrelevant.
The fault of 'extended minders' is one of keeping a (Cartesian) distinction between world and mind, ignoring the basic Heideggerian presupposition of the 'always already'].
If artificial intelligence is ‘flat’, it is because it (still?) cannot achieve this contextualizing power. But the idea that through the use of technology we ‘flatten’ our own intelligence simply makes no sense. The fact that our attention is spread over an increasingly larger field of information providers does not mean that we lose contextualizing power. The reason we are more prone to ‘distraction’ is that it is harder to holistically process a large amount of information, yet this remains the only way we can process it, because that is the kind of relationship we develop with the world. [James Webster (in here) has defined the 'hyperlink world' of blogs and commercial websites as a 'marketplace of attention': in an information-overloaded world the real commodity becomes attention, how much time we can dedicate to receiving information].
In order to keep up with the flow of information, mind-extensions grant us large and flexible ways to store and filter new information. Technology provides help in three forms:
- Technological providers of information (mainly, any device which allows for remote access to the Internet)
- Technological storage for information (hard drives, which will eventually disappear in favour of more flexible cloud-uploading)
- Linear computation
[Often, of course, a single device can perform all of the above roles: I retrieve the address of the restaurant I want to go to tomorrow and save it in my iPhone's memory, and once I am there I can split the bill if I am sharing it with friends].
This means that my capabilities have been expanded, surely, but in a limited field. I think that the usual example of the calculator (faster and better than any mathematician) is a trivial one, for it refers to the specific field of linear computation, which can certainly be more efficient in a machine than in a human mind, but which is hardly ‘intelligence’.
But the fact that I am not ‘more intelligent’ thanks to my technological devices does not mean that I am therefore dumber.
Here I am, first, agreeing with Chalmers and Clark when they claim that ‘once the hegemony of skin and skull is usurped, we may be able to see ourselves more truly as creatures of the world’, and I do think that external objects play a role in aiding cognition; second, agreeing with Dreyfus when he claims that the previous point is obvious if we understand ourselves as being ‘always already’ in a referential totality; and third, claiming that this does not mean that technology makes us dumber, because the kind of service it provides (information retrieval, storage and linear computation) is not the mark of ‘intelligence’ (this is also the reason why I believe that most IQ tests are, at best, testing certain mental abilities of the subject, but not intelligence), and hence it cannot make us ‘lazy’, because it does not replace our own contextualizing abilities.
The best example of this is augmented reality. If I am wearing AR goggles, technology provides a richer picture of reality, where the hyperlink structure is projected onto the real world. And the reason why ‘it works’ (well, the technology is not quite ready yet) is that we already work that way; the technology simply gives us an enhancement.
In this mind-centred discussion I am leaving aside an ontological problem: imagine a perfect AR technology which allows us (through a retinal implant) to seamlessly integrate virtual objects into our vision of reality. In this case it would not only be a matter of additional information about the world, but an actual enrichment of the objects that populate the world itself. What is the ontological status of these virtual objects ‘projected’ onto the real world? Are they distinct from it? Do they have causal interactions with real objects? Of what kind?