Deliberate Rest

A blog about getting more done by working less

Tag: automation

Signals, complexity, and knowledge vs data in baseball

Okay, first a caveat: I go to a few Oakland A’s games every season, but I’m not a baseball fanatic; I married into the game. My wife agreed to move with me to Chicago 20 years ago when I found an apartment within walking distance of Wrigley Field. Proposing to her also helped, but I’m not sure which was the more important factor.

So I’m not a baseball expert by any means, but I still found this article about how “The Rockies Believe They Have an Unbreakable Code,” and more generally how signaling is turning into a point of contention over who makes decisions about pitches, pretty interesting.

Last Sunday, the Washington Nationals broadcast noticed an unusual card sheathed in clear plastic on a wristband that was adorning the left arm of Rockies catcher Chris Iannetta…. The wrist card wasn’t just something the Rockies implemented Sunday. Iannetta has worn the card since the Rockies’ season began in Arizona….

While other teams might have already done something similar, this wristband — or, at least the scope of it — seemed to be unusual in nature. And with the amount of information available in today’s game, when it’s possible to know every batter’s performance against every pitch type in every count, I have wondered if teams would try and get more information to catchers and players on the field.

Mariners’ outfielders are carrying cards in their back pockets to help with positioning this season. In college baseball, the coaches — lacking the trust in their young battery — often call the vast majority of pitches, which can slow things down.

So what’s on the card?

Iannetta said the information is mostly related to controlling the running game, and he explained some of the mechanics of the process. “It’s just a random three-digit number that corresponds to a sign and then we have 10 different cards with random numbers,” Iannetta said. “As soon as they [the MASN broadcast] zoomed in… we heard about it and switched cards immediately. We switched to a different card with a whole new set of numbers. There’s no way to memorize it. There’s a random-number generator spitting out a corresponding number [for the cards], and the coaches have the same cards.”

So the signals are no longer part of a language that each team possesses, or that evolves between specific pitchers and catchers; they’re now more like coded military signals.
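To make the mechanics concrete, here is a minimal sketch, in Python, of how a card system like the one Iannetta describes might work. The sign names, card size, and number of aliases per sign are my own assumptions for illustration; the sourced details are the random three-digit codes, the deck of ten interchangeable cards, and the random-number generator behind them.

import random

# Hypothetical sign vocabulary; a real card would cover pickoffs, pitchouts, etc.
SIGNS = ["fastball", "curveball", "slider", "changeup", "pitchout", "pickoff"]

rng = random.SystemRandom()  # the random-number generator doing the real work

def make_card(aliases_per_sign=5):
    """Build one card: several distinct three-digit codes for each sign."""
    pool = rng.sample(range(1000), len(SIGNS) * aliases_per_sign)
    return {
        sign: [f"{n:03d}" for n in pool[i * aliases_per_sign:(i + 1) * aliases_per_sign]]
        for i, sign in enumerate(SIGNS)
    }

# Coaches and catcher hold identical decks; only they know which card is live.
deck = [make_card() for _ in range(10)]

def call_sign(card, sign):
    """Dugout flashes any one of the codes that map to the intended sign."""
    return rng.choice(card[sign])

def decode(card, code):
    """Catcher looks the flashed number up on the live card."""
    return next((sign for sign, codes in card.items() if code in codes), None)

live = deck[3]                         # both sides agree on which card is live
code = call_sign(live, "pitchout")
print(code, "->", decode(live, code))  # e.g. 417 -> pitchout

The defense here lives entirely in the cards, not in any pattern a rival could learn: every number is random and every card is disposable, so a broadcast camera zooming in costs the team exactly one card, which is why Iannetta could simply switch.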

What’s also interesting about this, though, is that it foreshadows a change in who controls the calls: the catcher or the coaches.

In college, the coaches already do so, mainly to keep the game moving. But in major league baseball, there’s a potential competition between two very different kinds of expertise.

On one hand, the catcher possesses a lot of on-the-ground knowledge. They know how confident each hitter is when they come to the plate, can observe how forcefully they’re hitting, whether they’re in the zone or distracted, etc. They also know their pitcher’s performance. “On any given night,” the article says, “the pitcher is going to have a different feel, a different comfort level, with certain pitches that will be unique to that contest.” Put that together, and a good catcher can really shape the play.

On the other hand, coaches now have a staggering amount of data about players, and there’s a temptation to use that to shape the game by allowing coaches to call more of the pitches.

But as the signaling system gets more complex and easier to change up, it creates an opening either to deliver more information to catchers, or to displace catchers’ knowledge entirely and rely on big data.

What’s important here is that it’s not that one form of knowledge or expertise is necessarily superior to another; they’re probably best seen as incommensurate, as forms of knowledge so different that it’s like comparing apples and manta rays. But you can imagine coaches deciding that they want more control over the game, or thinking that big data will let them use less experienced catchers, or simply being super-impressed by the statistics and all the cool things you can do with them. In other words, adopting what looks like a data-intensive and rational system for reasons that actually aren’t that rational.

“Exploitation is encoded into the systems we are building”

Writer and artist James Bridle has a long, but rather amazing and disturbing, piece arguing that “Something is wrong on the internet.” Specifically, he’s talking about how kids’ videos on YouTube have turned super-strange and super-dark, thanks to the weird profitability of kids’ videos, their low production standards, efforts to hit the right SEO and keyword notes, etc. The result, he says, is that

Automated reward systems like YouTube algorithms necessitate exploitation in the same way that capitalism necessitates exploitation, and if you’re someone who bristles at the second half of that equation then maybe this should be what convinces you of its truth. Exploitation is encoded into the systems we are building, making it harder to see, harder to think and explain, harder to counter and defend against. Not in a future of AI overlords and robots in the factories, but right here, now, on your screen, in your living room and in your pocket.
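The mechanism shows up in the video titles themselves, which read like keyword salad: search terms recombined to catch the recommendation algorithm. Here is a toy sketch of that kind of combinatorial generation; the word pools are my own invention for illustration, not drawn from any actual channel.

import itertools
import random

# Illustrative keyword pools; real operations mine trending search terms.
CHARACTERS = ["Peppa Pig", "Spiderman", "Elsa", "Minions"]
HOOKS = ["Finger Family", "Surprise Eggs", "Learn Colors", "Wrong Heads"]
QUALIFIERS = ["for Kids", "Funny", "Compilation", "Nursery Rhymes"]

def keyword_salad_titles(n):
    """Recombine small word pools into arbitrarily many video titles.

    The point is the economics: near-zero marginal effort per video,
    with the algorithm, not a person, deciding what children see.
    """
    combos = list(itertools.product(CHARACTERS, HOOKS, QUALIFIERS))
    random.shuffle(combos)
    return [" ".join(combo) for combo in combos[:n]]

for title in keyword_salad_titles(5):
    print(title)

Three pools of four words already yield 64 titles; scale the pools up and one template becomes an effectively endless channel.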

I’ve been thinking a lot recently about the future of automation and work, and whether it’s possible to avoid the kinds of race-to-the-bottom, exterminate-the-worker imperatives that seem to be implicit in so many automation projects today, so this is a bracing argument.

The piece goes on, after walking through a number of examples of videos that are literally nightmarish:

To expose children to this content is abuse. We’re not talking about the debatable but undoubtedly real effects of film or videogame violence on teenagers, or the effects of pornography or extreme images on young minds, which were alluded to in my opening description of my own teenage internet use. Those are important debates, but they’re not what is being discussed here. What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.

This, I think, is my point: The system is complicit in the abuse.

And right now, right here, YouTube and Google are complicit in that system. The architecture they have built to extract the maximum revenue from online video is being hacked by persons unknown to abuse children, perhaps not even deliberately, but at a massive scale. I believe they have an absolute responsibility to deal with this, just as they have a responsibility to deal with the radicalisation of (mostly) young (mostly) men via extremist videos — of any political persuasion. They have so far showed absolutely no inclination to do this, which is in itself despicable. However, a huge part of my troubled response to this issue is that I have no idea how they can respond without shutting down the service itself, and most systems which resemble it. We have built a world which operates at scale, where human oversight is simply impossible, and no manner of inhuman oversight will counter most of the examples I’ve used in this essay.

I spent a little time looking at some of these videos, and they are beyond weird. They combine Second Life-level clunky animation; the kinds of repetition that adults find irritating and toddlers love; that distinctive kids’ music; and extremely strange cuts and changes of scene. About four minutes into one of the videos, the scene shifted from a totally anodyne house to a graveyard in which familiar toys sing a song about how sugar is bad, only they have flayed zombie heads; it was exactly the kind of thing that your mind would cook up as a nightmare.

Automation, leisure, and the problem of avoiding “overwork for some and starvation for others”

In her essay on the meaning of leisure, Washington Post editor Christine Emba notes that Uber recently announced that it would debut self-driving cars in Pittsburgh later this fall. This, she argues, marks another step toward a more-automated world, and underlines our need to think more clearly about the problem of leisure. As automation reduces the number of hours we need to work, we’ll need to be wise about how we spend that time.

But there are very different ways automation could affect leisure, and we can’t talk about “automation” without talking about who controls and benefits from automation. Let’s use the Uber situation to imagine two very different scenarios.

A tiny bit of background: What Uber is doing is retrofitting a bunch of Volvo SUVs with sensors, cameras, lidar, and computers that will drive the car (though a human will still be in the front seat as backup). So they’re not making self-driving cars; they’re making existing cars self-driving.

This is a crucial distinction, because it means this technology could be deployed in two very different ways.

In one scenario, Uber keeps the technology in-house and operates its own fleet of self-driving cars. In Pittsburgh, the cars prove a success (and if you can imagine any company making the argument that too few people were run over to stop deployment, it’s Uber), and the technology spreads to other cities. A year from now, Travis proudly stands up at the annual meeting and announces that 50,000 people who used to be contractors for Uber are now back on the streets. They were shock troops in the greatest high-tech disruption of a service industry in modern history, and now we’ve been able to cast them on the ash-heap of history. Suckers!

In the other scenario, though, Uber sells the self-driving car kits to anyone who wants to drive for Uber. (Once again, the number of kittens and grandmas who get run over during the trials is considered to be within acceptable parameters.) A year from now, Travis proudly stands up at the annual meeting and announces that 50,000 people who had been driving for Uber are now in the robotics business: they buy cars, outfit them with self-driving car kits, and lease them back to Uber. These people have gone from being mere drivers to being managers, small businessmen, entrepreneurs, etc.; they continue to work with Uber to push the boundaries of innovation blah blah blah; and Uber benefits from all this technology without having to buy a single damn car. It’s as if Henry Ford’s autoworkers had built their own factory lines, using their money rather than Ford’s.

See how they’re different? In one sense the outcome is the same: the workers drive fewer hours. But there’s a dramatic difference between being thrown out of work by the technology, and being in the robot business.

In his “In Praise of Idleness,” Bertrand Russell wrote about how automation was being unevenly applied, and how capitalists preferred to create “overwork for some and starvation for others” rather than a world in which we all worked fewer hours and let the machines take care of the rest. This is a problem we face again; only now, with these technologies, we have a better shot at spreading ownership around in ways that enhance rather than degrade the living standards and livelihoods of workers.
