I hadn’t heard of this idea until this Atlantic video from James Hamblin:
Tom Chatfield’s short essay, “The attention economy,” raises an interesting question: why do we think of attention as a resource?
For all the sophistication of a world in which most of our waking hours are spent consuming or interacting with media, we have scarcely advanced in our understanding of what attention means. What are we actually talking about when we base both business and mental models on a ‘resource’ that, to all intents and purposes, is fabricated from scratch every time a new way of measuring it comes along?
For the ancients, Chatfield notes, attention wasn’t a resource; it was a relationship.
For the ancient Greeks and Romans, this wooing [i.e., getting other’s attention] was a sufficiently fine art in itself to be the central focus of education. As the manual on classical rhetoric Rhetorica ad Herennium put it 2,100 years ago: ‘We wish to have our hearer receptive, well-disposed, and attentive (docilem, benivolum, attentum).’ To be civilised was to speak persuasively about the things that mattered: law and custom, loyalty and justice.
In this understanding, there is no such thing as “attention” as something that exists outside a relationship. It’s not like energy, or a pint of blood: it only exists between the person giving their attention, and the person trying to hold it. Indeed, Chatfield points out,
In Latin, the verb attendere — from which our word ‘attention’ derives — literally means to stretch towards. A compound of ad (‘towards’) and tendere (‘to stretch’), it invokes an archetypal image: one person bending towards another in order to attend to them, both physically and mentally.
I think there’s still some value in the attention-as-resource model, if only because we can demonstrate that humans have only a certain amount of attention they can “pay” in a day; in that respect, it’s like self-discipline or decision-making. But the notion that it can be treated as essentially interchangeable with coal or wind does bear some rethinking.
One of the things you always, and I mean always, hear about Internet of Things and smart home devices is that they “just work.” They’re all like these magic autonomous robots that’ll connect themselves to your wifi, then go do their thing, yet also be totally unobtrusive and intuitive (whatever those two words mean). Sounds cool, right?
Of course, the reality is very different, as this essay from IoS explains. The light went on sometime around the point when the author’s Internet-enabled thermostat stopped working whenever the wifi connection was lost (and “The only way to control the gadget is via the app, so when it breaks you’re really screwed”), and it came time to update their Philips Hue light bulbs: “When the first firmware update rolled around, it was exciting, until I spent an hour trying to update lightbulbs. Nobody warned me that being an adult would mean wasting my waking hours updating Linux on a set of lightbulbs, rebooting them until they’d take the latest firmware. The future is great.”
In other words, things work great until they don’t, at which point all the wheels come off. Further, as we’ve learned recently, connected devices are “connected” to the fates of their companies, in a way that “dumb” devices are not. If the company that made your hammer or pants goes belly-up, that doesn’t affect your ability to pound nails or cover up your naughty bits. But that’s not the case with smart home devices.
A one-time purchase of a smart device isn’t a sustainable plan for companies that need to run servers to support those devices. Not only are you buying into a smart device that might not turn out to be as smart as you thought, it’s possible it’ll just stop working in two years or so when the company goes under or gets acquired.
The Internet of Things right now is a mess. It’s being built by scrappy startups with delusions of grandeur, but no backup plan for when connectivity fails, or consideration for if their business models reach out more than a year or two — leaving you and me at risk.
Just another indicator of how technologies of the future could turn out to be really distracting.
I’m deep in revisions of the next book and am not taking the time to write at length about anything else, but I wanted to flag this Vincent Horn piece on virtual reality and Buddhism.
Buddhist contemplative traditions have, for millennia, carefully led us in the process of deconstructing our normal sense of identity and replacing it with one that’s both fluid and responsive. Our challenge is to build that wisdom into our next generations of contemplative technology.
We often regard a failure of focus as a failure of will, or a moral failure. But there’s also a physical and physiological foundation to our capacity to focus on a problem, or remember a number. And there’s an interesting study that suggests that our tendency to wander off-topic isn’t so much a function of willpower, or our mental inadequacies, as it is a reflection of our natural capacity for what scientists call “habituation.”
Habituation is the phenomenon where you stop noticing regular things in your environment: the rain on the roof, the ticking of a clock, the objects in your field of vision. We think our vision encompasses a nearly-hemispherical area in front of us, but in fact our eyes are only focused on a small part of that world at any given time, and we stop keeping track of things that aren’t moving. Our brains are good at creating a sense that we’re continuously observing the world, though that illusion is not perfect— if we’re concentrating hard while reading, for example, we can be surprised by the “sudden” appearance of a bird on the windowsill or a person in the room.
A couple years ago, University of Illinois psychology professor Alejandro Lleras wondered, what if focus is subject to the same rules that govern sensory habituation? What if our minds naturally tend to wander off things we think are repetitive? As he explained in 2012,
For 40 or 50 years, most papers published on the vigilance decrement treated attention as a limited resource that would get used up over time, and I believe that to be wrong. You start performing poorly on a task because you’ve stopped paying attention to it. But you are always paying attention to something. Attention is not the problem.
That insight that attention isn’t something that waxes and wanes, but instead is something that’s always directed somewhere, led him to draw a parallel between the attention we give to a task, and the fact that we tend to “edit out” stationary objects in our environment:
Constant stimulation is registered by our brains as unimportant, to the point that the brain erases it from our awareness. So I thought, well, if there’s some kind of analogy about the ways the brain fundamentally processes information, things that are true for sensations ought to be true for thoughts. If sustained attention to a sensation makes that sensation vanish from our awareness, sustained attention to a thought should also lead to that thought’s disappearance from our mind!
He and his colleague Atsunori Ariga, then a postdoc at the University of Illinois, constructed a simple test. Four groups of students were given slightly different tasks: all performed a long visual vigilance task (watching for particular lines on a screen), but some were also given digits to hold in memory. One group was quizzed on those digits only at the end, while another was asked to recall them sporadically during the vigilance task itself.
To be clear, the purpose of the experiment wasn’t to test whether people could remember the numbers; it was testing whether having this other brief task helped people pay attention to the lines— that is, their performance on the vigilance test.
What they found was that the group asked to recall the digits during the task performed pretty consistently, but everybody else got worse over time.
So does this mean that multitasking is actually good? Does texting while driving make you a better driver?
As they put it, “heightened levels of vigilance can be maintained over prolonged periods of time with the use of brief, relatively rare and actively controlled disengagements from the vigilance task.” But they’re testing how well you do on a very simple task. If you’re working on an assembly line, and literally the only thing you do is make sure that three bolts are properly tightened, then this kind of break is essential. But if you’re doing something complex, then introducing a second task isn’t going to improve your performance. Indeed, the opposite is a lot more likely.
The challenge is to find a brief respite that is different, but doesn’t threaten to take too much time. This is why a “quick” email check is problematic: checking your email is rarely quick, because there’s almost always something that you feel needs an immediate reply, or leads to something else.
But you can imagine that automobile auto-pilots could be really useful here: if they were designed to let you take 30 seconds every 10 minutes or so to refocus your eyes, blink, and maybe run through some mental exercise— a couple Trivial Pursuit questions, for example— that could recharge your ability to stay focused on the road.
Here’s the abstract:
We newly propose that the vigilance decrement occurs because the cognitive control system fails to maintain active the goal of the vigilance task over prolonged periods of time (goal habituation). Further, we hypothesized that momentarily deactivating this goal (via a switch in tasks) would prevent the activation level of the vigilance goal from ever habituating. We asked observers to perform a visual vigilance task while maintaining digits in-memory. When observers retrieved the digits at the end of the vigilance task, their vigilance performance steeply declined over time. However, when observers were asked to sporadically recollect the digits during the vigilance task, the vigilance decrement was averted. Our results present a direct challenge to the pervasive view that vigilance decrements are due to a depletion of attentional resources and provide a tractable mechanism to prevent this insidious phenomenon in everyday life.
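Just to make the abstract’s logic concrete, here’s a toy sketch of the goal-habituation account in Python. Everything here is my own illustration, not the authors’ model: the decay rule, the parameters, and the function name are all made-up assumptions. The only idea taken from the paper is that goal activation fades under sustained engagement, and that a brief task switch restores it.

```python
# Toy model of goal habituation (illustrative only; decay rule and
# parameters are assumptions, not from Ariga & Lleras's paper).
# Goal activation decays with sustained engagement on one task; a
# brief switch (e.g. recalling memorized digits) restores it.

def goal_activation(minutes, switch_every=None, decay=0.05):
    """Return the goal's activation level after each minute of a
    vigilance task.

    switch_every: if set, a brief unrelated task occurs every N
    minutes and resets activation to its starting level.
    """
    activation = 1.0
    trace = []
    for t in range(1, minutes + 1):
        if switch_every and t % switch_every == 0:
            # The switch deactivates and reactivates the goal.
            activation = 1.0
        else:
            # Sustained engagement lets the goal habituate.
            activation *= (1 - decay)
        trace.append(activation)
    return trace

no_switch = goal_activation(40)                    # steady decline
with_switch = goal_activation(40, switch_every=10) # stays high
```

Under these made-up numbers, forty uninterrupted minutes leave the goal at a fraction of its starting activation, while a brief switch every ten minutes keeps it from ever dropping far, which is the shape of the result the abstract reports.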
Georgia State University researcher Susan Snyder is studying the impact of Internet addiction (or PIU, Problematic Internet Use, described as >25 hours/week of non-school or -work use) on family ties. A new article finds that
College students who are addicted to the Internet report positive and negative effects on their family relationships….
On the plus side, these students reported their time on the Internet often improved family connectedness when they and their family were apart. However, their excessive Internet use led to increased family conflict and disconnectedness when family members were all together. And most students with PIU felt their families also overused the Internet, with parents not setting enough limits for either parent or sibling Internet use.
I’m sure there’s more to it, but until I read more, I’ll have to file this under what my mentor Riki Kuklick described as “the power of the social sciences” studies— things like detailed statistical studies of tax records that showed that— TAA-DAA!!!— incomes rose during the Industrial Revolution.
Part of me is also uneasy about treating work and school use as unproblematic by definition, but I’m not sure why.
One of the chapters of the book that I most enjoyed writing looks at digital Sabbaths, and how to make them work. So I perked up at Quartz writer Deena Shanker’s piece about her experience disconnecting during the Sabbath.
The rules have taken time to define, but here is where they stand: No phone and no computer. Television is allowed, but no streaming because it would require a phone or computer. No spending money (except occasionally coffee) and no transportation other than my own two feet. The underlying point is simple—no working.
This is a common start: unless you follow some strict set of rules that someone else has already laid down, or you’re doing it as part of a bigger movement, you need to figure out just what it is that you’re getting away from. For most people, it’s behavior more than specific devices. For me, television would be okay, but streaming video would be out, mainly because I can spend a ridiculous amount of time just browsing categories rather than actually watching something.
Here is what the best 25 hours of my week looks like: I leave the office early on Fridays, stop at the supermarket to pick up what I need to make Shabbos dinner, and head home. I turn off my phone at the appointed time and simply enjoy the act of cooking, instead of checking my email or answering texts about what time people should come over. My roommate cleans up and sets a beautiful Shabbos table. Friends come over at a prearranged time, or maybe a little late or a little early, it doesn’t really matter. They bring wine and dessert and funny stories and each week, I let one decorate the hummus with olive oil, paprika and za’atar. We eat a big meal, as much as we want. Nobody takes their phone out at the table—not because there’s a rule against it but because there’s no reason to….
Walking without a phone, of course, also means paying real attention to my surroundings. Whether it’s the person I’m walking with or those passing me by, actually listening and looking for prolonged periods of time brings back waves of nostalgia for a simpler time while simultaneously feeling entirely new. (I noticed, for example, in the unseasonably warm early fall that see-through clothing is apparently very in right now. Everyone in Williamsburg is wearing see-through clothing!)
Read the whole piece (and Shanker’s earlier piece about deciding to start observing Shabbos again), when you’re not on your own sabbath.
Writer Joe Fassler has a piece in The Atlantic on “How Fiction Can Survive in a Distracted World.” It’s mainly a conversation with author Kevin Barry, and it makes the case that “novelists shouldn’t even try to compete for people’s eyes,” which means competing with screens and everything that’s on them. Rather, “they should go for their ears instead…. Barry argued that the human voice still has the power to mesmerize us the way screens seem to, and that modern fiction should be heard and not seen.”
Barry argues that “one thing can still arrest us, slow us down, and stop us in our tracks: the human voice.”
I think this explains the explosion in podcasts and radio narratives. The human voice still holds our attention, allowing us to tune in to a narrative in a way we find increasingly difficult on the page.
Readers and listeners increasingly want their stories to come at them directly in the form of a human voice. While everybody says that book sales are dropping, there’s an explosion in literary events, book festivals, spoken word events. People want to listen, and they want to hear stories.
Barry uses Dylan Thomas’ Under Milk Wood to illustrate the kind of approach he’s advocating. I won’t reproduce it all here, or try to summarize it; it’s long, and deserves to be read. But I’ll highlight this bit:
I love the refrain, “listen,” which repeats all the way through the work:
Listen. It is night moving in the streets …
Listen. It is night in the chill, squat chapel, hymning in bonnet and brooch and bombazine black …
Time passes. Listen. Time passes.
With this injunction to listen, Thomas is saying stop, stop, stop. He’s slowing us down so that we can enter this world.
This is striking because stopping is exactly what we instinctively do when we’re listening carefully to something. If you watch people talking on their phones while walking, you’ll often see them slow down or pause when they’re paying really close attention to the conversation. I’m one of those people who usually will pace around when talking, but I find when I really have to listen to someone, I stand still.
When we’re out on a walk and we want to listen for something— a bird, or something in the bushes— what do we naturally do? We stop. We still the self-generated noise that usually surrounds us, so we can better hear what’s going on outside ourselves. So this injunction to stop, stop, stop isn’t one that we only treat as a metaphor; in our daily lives, there’s an embodied aspect to concentration and listening as well. Listening requires slowing down, or being still.
Michael Schulson in Aeon writes about designing devices for addiction:
[S]hould individuals be blamed for having poor self-control? To a point, yes. Personal responsibility matters. But it’s important to realise that many websites and other digital tools have been engineered specifically to elicit compulsive behaviour.
A handful of corporations determine the basic shape of the web that most of us use every day. Many of those companies make money by capturing users’ attention, and turning it into pageviews and clicks. They’ve staked their futures on methods to cultivate habits in users, in order to win as much of that attention as possible. Successful companies build specialised teams and collect reams of personalised data, all intended to hook users on their products.
‘Much as a user might need to exercise willpower, responsibility and self-control, and that’s great, we also have to acknowledge the other side of the street,’ said Tristan Harris, an ethical design proponent who works at Google. (He spoke outside his role at the search giant.) Major tech companies, Harris told me, ‘have 100 of the smartest statisticians and computer scientists, who went to top schools, whose job it is to break your willpower.’
I met Harris not long ago, and it seems to me that we’re reaching a turning point in the way we talk about the addictive quality of devices and social media: it’s no longer sufficient to invoke dopamine and intermittent rewards, then shrug and assume either that these are inherent, unavoidable features of our technologies, or that they’re addictive because of flaws in our human programming, rather than effects that designers work hard to create. Behind every claim that some technology or technological feature is inevitable is someone working hard to make money off that feature, while also convincing you that it just happened, and there’s nothing to be done about it.