By Rob Clowes
Is it possible that human nature, the human intellect, emotions, and feelings are completely independent of our technologies; that we are essentially ahistorical beings with one constant human nature that has remained the same throughout history, or even pre-history? Evolutionary psychologists, those who believe human nature was fixed on the Pleistocene savannah, sometimes talk this way. I think this is demonstrably wrong.
Consider an ancient technology: cooking. When ancient hunter-gatherers discovered that cooking their meat over a fire would not only make it tastier but also easier to digest, it had a number of knock-on effects. It made parts of an animal carcass that were previously inedible edible, and it made it possible to preserve food for longer. But this was not only a culinary success; it was a cognitive one. Why? Because the amount of time our ancestors had to spend finding, hunting, and butchering their food, and thus maintaining life, could be reduced, freeing time to invest in other activities, such as thinking. Thanks to cooking, prehistoric humans gained new time in which to plan, to consider, and to invent other, even more liberating technologies, or even to fritter away drawing bison and mammoths on the walls of caves.
Another ancient cognitive technology, writing, was first invented perhaps five to six thousand years ago in the Fertile Crescent, probably developing from ‘counting tokens’, which are thought to have been used to keep track of agricultural stores from as early as 7000 BC. Over the following millennia, the widespread (and unforeseen) uses of writing allowed some societies to develop not just entirely new forms of civilization and culture but new modes of cognition. Writing is a cognitive technology if anything is. Writing (and reading) allowed us to develop new and latent cognitive abilities as thought, especially in Ancient Greece, moved from something of the moment to something that could be recorded and brought to mind later. Thanks to writing, it became possible to stabilize thoughts, to share them more effectively, and above all to criticize them, both our own and those of others. Writing also fostered the ability to produce and follow long chains of argument that could be criticized and expanded iteratively. All this stretched the timescale of thought from seconds and minutes, to weeks and years, to eternity (it is possible to read Plato’s dialogues today, or even listen to them on your iPod, and thus commune with the ancients). Writing allowed us to redirect and reinvent our cognitive abilities ever more effectively. Many think that a particular inwardness of mind we take for granted today, and certain forms of imaginative projection, were only really made possible by writing.
So cognitive technologies, of which I have given two examples, have freed up tremendous cognitive power, both by giving us the time and leisure to turn our minds away from the brute exigencies of life toward more liberating things, and by providing tools to extend, restructure, and amplify certain modes of thinking. Writing especially, by offering human beings a new facility to shape and mold our cognitive abilities, changed the nature of human beings. We became, in a novel sense, self-creators.
But this raises a question: if technology can change the way we think for the better, might it not similarly work for ill? According to Nicholas Carr, author of “The Shallows: How the Internet is Changing the Way We Think, Read and Remember” (2010), the internet is such a technology. It may appear to be a technology for finding the information we need and, increasingly, for connecting with others, but it actually functions as an engine of distraction, geared up not to help us find what we need to know or maintain a train of thought but to distract, dissipate, and frustrate us. Moreover, by encouraging distraction, it undermines our ability to engage deeply with knowledge and encourages us to be shallower, less fully developed human beings. Let us look at some of his examples.
Take hypertext: research appears to show that if you read the very same text off a screen using hypertext rather than from a book, you will read more slowly, forget more, and come away with a poorer sense of the overall meaning of whatever you were reading. Indeed, reading through a web browser, you are unlikely to finish a whole article at all, jumping off instead to do something else, such as checking email or playing a game.
The superficial and distracted style of reading this engenders is only amplified by the modern tabbed browser, which encourages us to open many windows to track whatever we think we might be interested in. Opening tabs may feel like a way of keeping track of avenues to follow up later, yet research suggests most of us never read most of the tabs we open, or read no more than a few lines from each. Indeed, we often forget what we were looking for in the first place.
Or take what is now a fundamental technology of the internet: Google Search. Google tends to trump any other research technology we might use to locate information because it is just so useful. And yet it has a number of structural failings. It privileges some sources over others, and often not the most reliable ones; it leaves out of its results information we might well be interested in; and it is fairly easy for the unscrupulous to bias. Some of the prominent results it produces are merely sponsored content that leads the unwary to whatever distractions advertisers have paid for. Moreover, Google increasingly uses profiling information to guess what its users might want to see based on their search history, thereby creating an individualized, pre-filtered bubble around them. Most users are unaware of this ‘service’, and it may eventually have the unfortunate effect of filtering out relevant information simply because it does not accord with a user’s history up until now (see Pariser, 2011).
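To make the mechanism concrete, here is a minimal sketch in Python of how history-based personalization can narrow what a user sees. It is a toy model only: the corpus, topic labels, and scoring rule are all invented for illustration and bear no relation to Google’s actual ranking systems.

```python
# Toy model of history-based personalization (a "filter bubble").
# Everything here is hypothetical; it illustrates the feedback
# mechanism, not any real search engine's algorithm.

from collections import Counter

# A tiny corpus: each candidate result is tagged with a topic.
RESULTS = [
    ("Climate report summary", "science"),
    ("Celebrity gossip roundup", "entertainment"),
    ("New physics preprint", "science"),
    ("Sports scores tonight", "sport"),
    ("Election analysis", "politics"),
]

def personalized_ranking(results, click_history):
    """Rank results by how often the user has clicked that topic before."""
    topic_counts = Counter(topic for _, topic in click_history)
    # Results from frequently-clicked topics float to the top;
    # unfamiliar topics sink, regardless of their intrinsic relevance.
    return sorted(results, key=lambda r: topic_counts[r[1]], reverse=True)

# A user who has mostly clicked entertainment links in the past...
history = [("old link", "entertainment")] * 5 + [("old link", "sport")]

for title, topic in personalized_ranking(RESULTS, history):
    print(f"{topic:>13}: {title}")
# Entertainment now outranks science and politics for this user,
# even though nothing about the documents themselves has changed.
```

Run repeatedly, with each session’s clicks fed back into the history, a ranking of this kind tends to converge on whatever the user already favors; that feedback loop is the bubble Pariser describes.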
Carr claims these are essential problems because Google’s basic financial model works by distracting us from whatever we thought we were looking for toward whatever Google’s advertisers are willing to pay for. Google Search, Carr claims, has this tendency toward distraction written into its DNA; it is inherently an engine of distraction. In short, Carr represents the internet as a dissipater of knowledge, ultimately poised to undermine the autonomy of our minds.
I think you can argue with all of Carr’s claims, though not necessarily on the grounds that the research is wrong or that he mischaracterizes current internet technologies. Rather, he tends to miss much of the context in which the trends he points to have evolved in the world beyond the screen. On a technical level, it now appears that hypertext is not best used simply to translate an existing text: it is often distracting when one is trying to read an article in depth, and much less good at facilitating understanding and recall than the hype once suggested (Rouet, 1996). This is one reason many millions of readers now choose to print off articles or, increasingly, use e-paper devices like the Kindle to read offline. Nevertheless, hypertext is still a brilliant way to connect articles together, and working out how to do this without distracting the reader seems to be a job for designers.
Search technology as currently constituted does build in all manner of biases, but it is only the problem it is because so few people understand how it works, and many seem to believe it is far more reliable and comprehensive than it really is. It is mainly the idea that it is infallible, and the only source, that makes us vulnerable to it. Attempts to teach students more systematically how these resources work might counteract some of the worst trends (although see Bartlett & Miller, 2011, on how this is perhaps not happening in schools just now). Most readers are, of course, more skeptical than they are given credit for, here as elsewhere.
What Carr elides is that we are really talking about the particular form the technology takes now, and that form is in motion. Reading through a web browser on a screen will most likely be shallower and more distracted than reading from a traditional book. But the internet adds tremendous speed to the research process, particularly to finding things to read. Shallow browsing, if that were all the reading we did, would certainly be intellectually incapacitating, but this misses what many of us actually do with the internet: we use it to find things to read that we then use in other ways later. Moreover, it is surely significant that these technologies, from iPhones and Android devices to Facebook to Google Search, are all highly customizable and open to different patterns of use. The market for apps, for example, makes mobile devices customizable in ways no previous technology has been. Internet technology in particular is open to us because we keep remaking it to do the things we need, rather than what it was necessarily designed for. We remake it, or can insist others do so, and there is always the potential for software designers to step in and reshape the technology into something we might find even more useful. One does not have to be a wild optimist to think we may eventually overcome some of the difficulties to which Carr draws our attention. It is difficult to see why they should be regarded as essential problems.
And then there is the flipside of the internet’s capacity to distract: the amount of cognitive time it frees up for collaboration, which Clay Shirky calls the cognitive surplus (2010). One of the most interesting aspects of a technology like Wikipedia is that it is built from tiny fragments of time which the technology allows to be composed into something which, its many flaws acknowledged, is free and fundamentally useful.
Really, there is nothing essential about search, browser technology, the mobile internet, or even the commercial funding of the internet that undermines the way we think, our sense of self, or the sorts of beings we are. The technology may currently be embedded in a commercial culture, and indeed in a culture of knowledge, that is detrimental to the development of deep thinking; but even if this is the case, there are so many trends in our societies that count against the development of knowledge for its own sake that we can hardly be surprised if these are reflected, and perhaps amplified, by certain elements of internet technology. These are all things that individuals, designers, programmers, or even (close to my own heart) philosophers could and should address. We can argue with the current shape of technology and propose how it might be better. But there is seldom much engagement in this direction. More common are dour warnings about our impotence in the face of new technology; that it is the agent and we the passive recipients.
This is the real problem, I think: the idea that it is technology itself that makes us smarter or dumber has come to be seen as a conventional and unremarkable truth. It is summed up in a word that is almost unavoidable when we talk about the way we use technology today: impact. We say the technology impacts us. Almost any article you read on the use or effects of the internet (or of other technologies, for that matter) uses this metaphor. But technology does not impact us, at least not unless we take a very passive stance toward it.
Technologies do not make us dumber or smarter; we can choose to be smarter by making the best of what technology has to offer, and by thinking much harder about what we want it to do for us. But we need to start thinking not just about technology but also about the sort of intellectual and moral beings we want to be, since this will guide our creation of technologies. In striving to humanize the internet in this way, I think there is every chance we can become ‘smarter’, but that depends on us. No technology will do the job for us.
This is an edited version of a talk given to the Brighton Salon as part of a Battle of Ideas Satellite debate on the 2nd of November, 2011. Rob Clowes is currently working on the completion of his book: Being Human after Facebook.
References
- Bartlett, J., & Miller, C. (2011). Truth, Lies and the Internet: A Report into Young People’s Digital Fluency. London: Demos.
- Carr, N. (2010). The Shallows: How the Internet is Changing the Way We Think, Read and Remember. London: Atlantic Books.
- Pariser, E. (2011). The Filter Bubble: What the Internet is Hiding from You. London: Penguin.
- Rouet, J. F. (1996). Hypertext and Cognition. Mahwah, NJ: Lawrence Erlbaum Associates.
- Shirky, C. (2010). Cognitive Surplus: Creativity and Generosity in a Connected Age. London: Allen Lane, Penguin.
————–
This article is published under a Creative Commons licence, with a few edits: https://creativecommons.org/licenses/by/1.0/