Start Where the Pain Is: Notes on Topic-Selection in Technology Studies
Here's the irony I will explore in this post: When we watch presentations in Science and Technology Studies, we often see individuals signaling, even peacocking, their politics, which are stereotypically left-leaning and claim to be "liberatory." Yet when these individuals begin talking about their actual research, it turns out they are focused on hyped, "emerging" technologies, which may not even be causing actual harms today, only speculative future ones, rather than attending to the pains ordinary human beings are suffering right now. Increasingly I've come to believe that this chasm arises first and foremost because we choose research projects by starting with technologies when instead we should start with pain.
Maybe we don't spend enough time talking about the intellectual, moral, and political dimensions of how we choose research topics. It's not an easy thing to generalize about, actually. Often enough, we might not naturally think a research topic is attractive until we hear someone else announce it, frame it in a fascinating way, and draw all kinds of juicy connections between it and other enthralling issues, and then we think, "Goddamn, that is HAWT." Inducing this reaction is one aspect of the craft of research.
Maybe I will write about this more down the road, but I generally believe that the best research projects come not from focusing on contemporary relevance but from deep engagement with existing scholarly literatures. It involves finding something interesting to study and say within conversations that have been happening for years. The topic-selection is driven by what makes sense within a field rather than what is immediately relevant in popular culture, not least because the winds of popularity shift so swiftly.
But many people want to focus on topics related to pressing social and political problems. And believe me, I get that. My dissertation and first book, Moving Violations, came out of working with an interdisciplinary research group on climate change policy. My second book, The Innovation Delusion, co-authored with Andy Russell, examined how obsessions with innovation-speak lead people both to neglect doing maintenance work and to look down on and often poorly compensate Maintainers, the people who do the maintenance, repair, and care work that keeps our world going. And my current book project, A Good History of Shit Jobs, explores why ~40% of American households, including ~30% of working households, have trouble making ends meet. So far, all of my work has been driven by concerns for contemporary problems (although I will also say that my next planned historical project, while it deals with heavy topics like dispossession of indigenous lands and slavery, is largely irrelevant when it comes to today's concerns, and that . . . makes my heart sing).
But if we want to focus on problems that are causing human affliction, how should we choose our research topics?
I started thinking about this question in a deeper way recently when two things happened simultaneously:
First, I started becoming skeptical of what I call the Anti-Big Tech Mutual Admiration Society (ABTMAS), a loose confederation of "critics" who like to hang out on social media and complain about how Silicon Valley-style technology companies and their leaders, like Elon Musk and Peter Thiel, are bad. I increasingly came to feel that some of these ABTMAS folks used a lot of hyperbole, were willing to make arguments without strong evidence, and failed to contextualize and assess how much the digital topics they were obsessed with are causing actual human suffering right now, as opposed to other, more mundane factors, which I'll say more about below. Some members of the ABTMAS crew certainly have their points, and I am more than open to the idea that "tech" firms should be regulated, though as a historian of regulation, I often find calls for regulation to be incredibly vague and lacking structural specificity. It also increasingly seemed to me that ABTMAS characters failed to call each other out even when members of their clique did demonstrably shoddy work and even made erroneous claims in public. The gang had confused a false and cheap sense of solidarity with pursuit of truth, goodness, and what have you.
Second, I began work on A Good History of Shit Jobs and spending increasing amounts of my time thinking about poverty and economic hardship in places like Flint, Michigan; rural West Virginia; my hometown of Joliet, Illinois; homeless camps throughout the American West, especially in California, with Los Angeles being a clarion example; the South Side of Chicago; and a blown-out former textile and furniture manufacturing town, Pulaski, Virginia, which lies 25 miles down the road from my current home in Appalachia. I began to ask myself and the spirits who would listen, How much of the human agony in these places is actually caused by, like, "Big Tech"? I mean . . . let's be real. (An old joke in my friend group: Yes, it's true that Elon Musk has talked a bunch of bros into juicing his overvalued companies. Do you know who else he has suckered into paying attention to him? You.)
It's as if my eyeballs were going back and forth between what academics were talking about on social media and what I was seeing in places where people face real hardship, and what they registered was a gap between what many scholars attend to and the material conditions causing most human pain in the world. Not a small gap, a giant one. Just bananas.
This experience built upon something that I had been thinking and writing about for a long time: how researchers in Science and Technology Studies tend to focus on hyped, "emerging" technologies. Too often, work in this vein has wasted everyone's time - the writers', the readers', and anyone else's who can be imagined. If you've read stuff I've written on this topic before, you know that writings on the Ethical, Legal, and Social Implications (ELSI) of nanotechnology are key examples for me. Literal vacuums of social meaning and value - they suck it from life.
For several years, I would read conference programs for the Society for the Social Study of Science (4S), the primary STS association, to drive the sharp point deeper into my heart: If a technology was named in a paper title, it was almost always a hyped, "emerging" technology rather than an existing one, even though most human torment obviously comes from the latter. I concluded eventually that I needed to knock this program-reading behavior off for my own psychological health. Hype was built right into the heart of the field, and there was simply no hope that would ever change. The best hope for sanity was to let hope die. Just let it go. It's fine. I promise.
As I've written about in several places, including "You're Doing It Wrong," scholars end up chasing hype-y technologies with a lot of social fad energy around them for several reasons, including, probably most of all, a) the scholars fall prey to the fad themselves and aren't even "critical" enough to resist that kind of pull and b) lots of incentives, including piles of money in granting agencies and internal shifts within universities, push them in that direction. The end result is that a lot of research begins by starting with technologies, often "emerging" technologies that barely even exist, and then claiming the project is going to focus on the potential social/moral/political issues around these promissory notes. Criti-hype, or criticism that takes boosters' claims that new technologies are powerful and going to revolutionize the world at face value and then flips those claims over to tell a dystopian version of the same story, is endemic in such arenas.
But there is an obvious alternative to starting with technologies, which is to start with pain and examine the material factors that are causing it. Before I go on, maybe I should make a few points explicitly, though they are the kind of things that are almost embarrassing to put into words. There is a long tradition in the humanities and social sciences of focusing on what are often called social problems, but we could also just describe as hardships in human life. Almost always in our societies, these problems are marked by and arise from various inequities, inequalities, and injustices, historical and present. In future posts, I will talk a lot more about what materialism is and why it should undergird technology studies, but for now, I'll just say that, if we are materialists (and we should be), then we will assume that, if we go to a site of human affliction and examine the pain there, we will find material causes of that pain. These material causes will include things like lack of access to safe housing, clean water, clean air, healthy food, energy, medicine, waste management (e.g. avoiding exposure to human shit and piss), accessible transportation, leisure, headspace, and so on as well as a whole host of factors that induce stress, which leads to all kinds of terrible physical and mental health outcomes. If we start where the pain is and then do materialist analyses, we will better understand the actual agonies in the world around us.
Mercifully, the approach I'm arguing for here is not some fantastical view from nowhere. There are works in technology studies that start from problems in ordinary life and go from there rather than fixating on hyped objects. And THANK GOD for that because otherwise I would be even more depressed about the state of technology studies than I already am.
It's tough to choose examples of the good shit, but I will focus on three that I think are laudable. You will be hearing more about them here and on the podcast in coming months.
- Raquel Velho's Hacking the Underground: Disability, Infrastructure, and London's Public Transport System provides a fascinating ethnographic investigation of how disabled people navigate a transportation system that is far from accessible. Velho finds disabled passengers constantly hacking and finding workarounds, including lots of fix-y maintenance tasks, to get from one place to another. While these workarounds involve obvious creativity, they are also the products of an unequal system and of the failure to enact a more thoroughgoing and radically transformative redesign of public transportation systems in the name of accessibility.
- One of my favorite books in recent years is Julia Ticona's Left to Our Own Devices: Coping with Insecure Work in a Digital Age, which examines how precarious workers "use their digital technologies to navigate insecure and flexible labor markets." (You can hear my podcast interview with Ticona here.) Now, I should be clear that technically Ticona started by following technology, but it's how she did it and where she goes from there that makes all the difference: She literally recruited subjects by hanging out at phone stores and other places, like gas stations and convenience stores, where working-class people buy phones, and then she interviewed them about how those devices fit into their everyday work lives. Appropriately, Ticona got onto this project partly by avoiding a highly-hyped topic of the moment, namely "sharing economy" apps, about which so much has been written in recent years, much of it, in my view, of dubious quality and unlikely to stand the test of time. And Ticona discovers a kind of irony by examining how working-class people use smartphones: While so much attention in recent years has focused on the costs and harms that arise from the "digital divide" - that is, lack of access to Internet-connected digital technologies - there are also real harms that arise from being included in a system that is fundamentally unequal and designed to profit from being so. In addition to being wonderfully and clearly written - I literally teach writing using this book as an example - it shows us so much about how digital technology use fits within working-class struggles and tribulations.
- Finally, I've just recently begun learning about my Virginia Tech colleague Theo Lim's exciting community-based planning work on addressing urban heat issues, a deadly problem that only worsens with climate change. It's kind of hard to get more ordinary and grounded than the amplified heat urban communities face during extreme weather events, and if you watch or listen to Lim's presentation to my department's seminar, you will learn that even Lim's focus on heat arose from his work with community members to find what they identified as a problem. Moreover, true to basic materialist insights, the places facing the most intense heat in Roanoke, Virginia, where Lim works, are traditionally Black neighborhoods created through redlining. In other words, redlining maps and heat maps . . . map to each other in deep ways. This is research that is insightful, beautiful, and engaging because of what it shows us.
In future posts, I will draw these works and others together to put forward a picture of materialist technology studies and how to do them. For now, I will end by saying this: It's really sad what has happened with the words "critical," "criticism," and their relatives in academic circles in recent years. At its worst, it can seem like "critical" means, "I like to sit in my bedroom and post lefty takes online." Now, I could go into a genealogy of the word "critical" and how it runs from Kant through Marx into various traditions that pierce our moment, and maybe someday I should, but I believe a common denominator of all of them is that to do the kind of work we need to do, we have to think. If we claim that human suffering is one of the things that we care most about in life, we actually have to fucking study actual human fucking suffering. I mean, DUH, folks. Right?
The things we have to say these days.