Against Narcissistic-Sociopathic Technology Studies, or Why Do People *USE* Technologies?
Why and how do people USE technologies?
This question should be at the center of any thinking about human life with technology. The ways in which people use technologies, after all, determine the social, economic, and other effects those technologies have. (Someone can invent something that then sits on a shelf and has no effects at all.)
As I attempt to make sense of the consequences of Generative AI, social media, and other contemporary technologies, I've too often felt that people in or adjacent to the fields I work in seem uninterested in how ordinary people who are not themselves are adopting technologies into their lives and why they are choosing to do so.
Several kinds of experiences lead me to feel this way. For example, I once saw a critic of Generative AI whom I otherwise find pretty astute say that there is nothing at all to the technology, that "the emperor has no clothes," that "there is no there there." Now, maybe she was using hyperbole, and I certainly believe that there's a whole lot less to this basket of technologies than boosters assert - a WHOLE lot! - but, as I'll flesh out more below, saying "the emperor has no clothes" when it comes to Generative AI is crazed. It undercuts our credibility. We know there is some kind of there there because, if we listen, other people who are not ourselves and who have no interest in hawking the technology tell us they are using it, and, moreover, we can observe them doing so in mundane settings of daily life.
Or take philosopher of technology Evan Selinger's interview in the Boston Globe with critic Nicholas Carr:
Selinger: If social media is so bad, why does everyone keep returning for more?
Carr: Human beings often desire things that make them unhappy or make them feel dissatisfied. Psychologists call this mis-wanting. We might be happier if we went out and took a quiet walk, but instead we sit scrolling through feeds filled with trivial or abrasive messages. We get hooked on the stimulation. Companies like Meta, X, and TikTok have been very adept at exploiting this weakness in us.
DAMN! Is THAT all you have to say about why people use social media?!
There is something off here. What I will argue below is that too often people are confusing their roles as critics of things they think are bad with the work of increasing our and others' understanding of what is happening in the world.
I will make this argument in three moves: First, I will argue that we should resist the temptation to do what I call Narcissistic-Sociopathic Technology Studies. Second, I will briefly say that we already have studies of why and how people use social media that are far deeper than Carr's analysis. I argue this to point out that this problem goes way beyond AI, which has all kinds of false senses of uniqueness about it right now. But then, third, I will describe how I hanker for grounded studies of how ordinary folks are adopting Generative AI, and I will say that the degree to which people are doing so has surprised me and upended my own initial skeptical take on these tools.
Against Narcissistic-Sociopathic Technology Studies
I have been using the (typical of me) half-jokey phrase Narcissistic-Sociopathic Technology Studies to describe thinking about technology where people make one or both of two errors: (1) confusing their judgment of the world for the world itself, especially what others are thinking and doing in it, and (2) substituting their assumptions about what others are up to for actually observing and talking with them. Both errors are rooted in a lack of genuine curiosity about and compassion for others.
Now, I feel no need to argue that anyone thinking about technology today is a pure representative of Narcissistic-Sociopathic Technology Studies. Rather, I want to hold it up as a kind of nonideal type that we should hope to avoid.
How have we ended up in a place where this concept is useful, where people in technology studies are not curious about what others around them are up to? That is a complex story with many factors at play, I think. I hope to explore them more in future posts. Briefly, I will say that I believe there are at least two important historical traditions leading to this outcome: The first is what I describe as Cultural Pessimist Technology Criticism (CPTC), a line of thinking I track back to late 19th century Germany and, ultimately, romanticism, which has blamed technology and industry for sociocultural decline. Paradigmatic CPTC thinkers include Jacques Ellul, Langdon Winner, Neil Postman, and . . . Nicholas Carr. Often people play out the tropes of this tradition without even knowing it.
The second tradition is the social formation that Nirit Weiss-Blatt and others have called the Techlash, in which it became popular to express negative emotions and judgments about digital technology firms. A lot of statements made in the Techlash mode are less thinking and more the expression of ideology driven by the speaker's position in social space. (And lest you think I'm finger-wagging in a condescending way here, part of my thinking about Techlash ideology is driven by my self-analysis and self-critique of parts of my own co-authored book, The Innovation Delusion.)
The coming together of these two trajectories, and others, has led to some mournful consequences. A big one is that a lot of thinking about technology today is marked by a simplistic and, at times, downright bizarre form of Manicheanism - a black-and-white view that splits the world into opposing forces of good and evil. One would hope that humanistic and social scientific inquirers would be immune to such a pitfall, given that irony, nuance, and self-challenge are central pillars of those thinking traditions, but alas . . . There are times I check in on Bluesky, that great bastion of liberal anxiety, see people talking about Generative AI, and wonder to myself, "Hold on, are you talking about software that spits out text, images, and code, or are you talking about literal demons?!?!" The zero-sum worldview of Manicheanism, which I suspect has been encouraged by simplified, binary-like forms of social media communication, has a number of negative effects. One of the more important is that it leads us to be incurious about others, who are obviously bad and worthy not of attention but of scorn.
But even if we find ourselves being seduced by a Manichean worldview, we can avoid falling into the specific thinking errors of Narcissistic-Sociopathic Technology Studies by taking the interpretive methods of humanistic and social scientific inquiry seriously, including a loving embrace of social multi-perspectivalism, and by doing the work.
The two thinking errors I have in mind are these:
Narcissism
Sometimes people wrap themselves so tightly around their judgments of the world that they are unable to see that others have a different take on things or to be curious about those other perspectives. Here I am mostly focusing on negative judgments because that's what I see most in my social world, but this happens equally with hype-mongers and boosters. If other people enter the frame, it is as toys or playthings that the narcissistic person uses to tell stories about what is wrong with the world. I sometimes see this in how people talk about students using AI. The students become symptoms of a sick system rather than pragmatic agents trying, like all of us, to get through the world - and the critic never becomes curious about how those AI-using students see things.
We do not have to give up our moral beliefs and principles to go beyond narcissism, and we can advocate for our views as we will. But to increase understanding through humanistic and social scientific inquiry, we do have to embrace a kind of irony that sees our viewpoint as merely one amongst many. We have to become more curious about multi-perspectival social reality - the other people in it and the plurality of values and ends they pursue - than we are about ourselves (and the endless, cycle-like rehashing of our own thoughts that any meditator comes to see with great clarity).
Sociopathic Social Science
Why do we need a second term here? What I am calling sociopathic social science is related to but distinct from what I just described as analytical narcissism. I take the idea from John Levi Martin's The Explanation of Social Action. Martin describes how Sigmund Freud and other early social scientists created authoritarian images of analysis in which the analyst's judgment was seen as superior to and more trustworthy than the accounts of the person being analyzed. As Martin puts it, Freudianism contained "a sociopathic epistemology that allows analysts to say that one thing is 'really' something else." (112)
This epistemology is deadly when combined with the snob's habit of looking down on the actions of those dubbed impure, a view that's all too common in technology criticism. With narcissism, we can only see the world through our own perspective; with sociopathy, we come to believe that we know what is going on in other people's minds better than they do without asking or otherwise studying them.
From this perspective, we can see how outrageously condescending Carr's response to Selinger is above. Why do people use social media? Because they are deluded and don't know what is good for them. This is hardly serious inquiry into our technological world.
There are many things to criticize about the technological arrangements of societies and how they impact people, and, at this point, we have long traditions of such critique. Thinking critically, in the sense of critical theory, is essential, and all of my work has that dimension. But, when it comes to technology studies and much else, I am primarily interested in empirical approaches aimed, first and foremost, at understanding and explanation. Because human activity with technologies largely lies beyond our direct perception - there are people all over the world doing things that we don't know about - we must turn to forms of inquiry that aim at taking us beyond our own experiences. Put simply, understanding any social phenomenon, maybe especially human uses of technologies, requires you to work at, to put it colloquially, getting over yourself and getting tuned into others. There simply is no other way to understand how people use the technologies around us. We have to be curious about them and to do hard forms of self-work to ensure we are doing our best to understand them. Such inquiry, which involves going beyond or suspending our own judgments, is a form of care.
Social Media
All I want to say here is that we have way, way, way better work on the social dimensions of social media than Nicholas Carr's, and given this reality, it is hard to see why people would take him seriously. First off, it is hard to overstate the degree to which social media use, including what some people call political hobbyism or slacktivism, fills a part of human life that would once have been called entertainment, a point Robert Gordon nicely makes in his book, The Rise and Fall of American Growth. Indeed, one of the more amazing things about many contemporary societies is that peak hours for social media platforms are during the traditional working day, proof positive that many people have a different relationship to time and effort than did their agricultural and industrial forebears. Now, for sure, there is a kind of Protestant stream in our culture that looks down on and sneers at mere entertainment as debased, but, at least sociologically, we wouldn't want to think from that perspective, would we?
When we go beyond the basic fact of entertainment, however, we enter rich, ever-growing literatures on how and why people use social media. To my mind, the classic and paradigmatic text here is danah boyd's It's Complicated: The Social Lives of Networked Teens. First off, boyd's book is a methodological masterclass. She hung out with teens and, from that work, is able to show that young people use networked technologies as a form of sociality to connect with peers and to construct identities. She listened to and sought to understand the teens themselves rather than what adults said about them.
boyd's attention to sociality and identity construction has held up over time. To give one example, when faculty members asked Andrew Rosenthal, a video game studies scholar and doctoral student in my department, what video games are about today, he said something like, "Oh, hanging out with friends" - which, given how my peers, students, and children use many video games socially, is exactly right.
To give another example, in September we'll release the Peoples & Things episode featuring my interview with Ashleigh Greene Wade, Assistant Professor of Media Studies at the University of Virginia, about her fascinating book, Black Girl Autopoetics: Agency in Everyday Digital Practice. Like boyd, Wade hung out with young people, specifically Black girls, to understand how they use digital platforms, and in the process she uncovers acts of creativity and self-fashioning - or as the book description puts it, "self-making [that] creatively reinvents cultural products, spaces, and discourse in digital space."
Two other ongoing projects represent, to my mind, the best kinds of social scientific examinations of digital media use: First, Lana Swartz, Associate Professor of Media Studies at the University of Virginia, has recently started an empirical research program examining how young people use social and other digital media to learn about investing and other financial decision-making. (I guess I'm in promissory-note mode - you can hear my interview with Lana about this and other projects in an episode that will drop in December.) You can think of memestocks, r/wallstreetbets, and the like, but, IIRC, Lana is hoping to sample young people and learn about their financial lives in ways that go beyond mere niche subcultural participation, though that will likely be captured in the study too.
Second, as part of a much larger research project that attempts to go beyond how digital technologies are adopted in businesses by thinking about homes, schools, and churches, Erica Robles-Anderson has been examining how conservatives are using digital platforms, including social media, to create alternative venues for humanities education and activity that fall largely outside of traditional universities. Such offerings include Hillsdale College's free online courses, various homeschool initiatives, Zena Hitz's Catherine Project, and other, largely decentralized efforts. But these activities have had a number of real-world impacts, including, for example, the creation of alternative standardized tests focused on "classical learning."
My point is that we have lots of existing and ongoing research on social media with excellent empirical foundations, research that goes WAY beyond asserting, as Carr does, that people who use social media are deluded. To be clear, criticizing the deleterious effects of social media use, which seem real enough, is fine and welcome, but criticism is only compelling if it starts from a realistic picture of the way the world is. And to get such a picture - particularly when it comes to diffuse forms of technology use that go far beyond any one of our experiences - you actually have to take other human beings seriously. Even more, you have to work to understand them.
Generative AI
My first experience with the current wave of Generative AI came about a week after ChatGPT was released in late 2022. My buddy and frequent jousting partner, Zach Pirtle, a philosopher of engineering and Engineer and Program Executive at NASA, sent me a script that was purportedly a debate between me and Jacques Ellul, one of the great Negative Nancys of technology studies. Here's the thing: The output sucked. Neither Jacques nor I sounded anything like ourselves. The text was insipid and lacking in any insight or humor. It blew.
For this and other reasons, my own initial reaction to ChatGPT was a yawn. I have played with Generative AI here and there. (Honestly, the place I use it most is in the weekly Dungeons & Dragons game some friends and I have played for years.) I am not opposed to using Generative AI in my work, but neither am I drawn to it. Indeed, going beyond my own experiences, many of my friends and colleagues and colleagues of colleagues have reacted to the technology with a giant meh, even after experimenting with it extensively.
Moreover, I am confident that we are in a significant technology bubble at the moment. The stocks of companies related to GenAI are overvalued. I am not surprised by a report out of MIT yesterday claiming that 95% of Generative AI pilots at companies are failing (though, at least in interviews, the authors emphasized that the tools are useful at an individual level, just, so far, less so at team and enterprise levels). Executives and managers all over the world are high on their own supply when it comes to this technology and doing stupid, ill-advised things left and right. OpenAI, Microsoft, Meta, and other firms are blowing money like mad expanding digital infrastructure with no clear vision in sight for how they will make these efforts profitable, and business news outlets, like the Wall Street Journal, constantly remind these firms that investors are wondering how any of this will pay off. (I think this video interview with Jeffrey Funk on the current state of the AI bubble is pretty good; I also found the balanced perspective in this Substack post by Derek Thompson admirable.)
But here's the deal: Given my kind of negative setup - in terms of both personal experiences and my perspective as a scholar of hype and bubbles - I have been routinely surprised by how many people in my life find Generative AI useful and are using it on a daily basis. This is truly one of the great beauties in life - to be surprised by what shows up, and to recognize and honor those developments that defy our expectations. We should learn to cherish those moments where we discover we are wrong. It is even good advice to learn to turn relishing your wrongness into a kind of kink.
I am grateful for the "stochastic parrots" paper and similar works that take a first-principles approach to the limits of GenAI and examine how far this technology is from human reasoning. Among other things, these first-principles approaches suggest that the very idea that GenAI firms are getting close to "AGI" is . . . well, it's LOLOLOLOLOLOL. BUT the risk of such an approach is that, when it comes to the sociology and economics of technology, it can underestimate how these tools can be imperfect, full of hallucinations and whatnot, and still be useful in any number of settings.
I think it is still too early for there to be strong sociological studies of GenAI adoption. Moreover, following Paul David's great paper, "The Dynamo and the Computer," we can be assured that it will take years for use of the technology to settle down and become clear. Still, I think many "critics" are overlooking everyday adoption of this technology.
For the purposes of discussion below, I am going to mostly focus on examples of people using GenAI to generate text for white collar work, setting aside use cases where people generate images or code, though it very well may have bigger impacts in these latter areas. I am also mostly going to set aside the whole hairy issue of students using GenAI to do schoolwork, though I am happy to talk about my views on that topic if people are interested. (I am a proud member of the American Historical Association committee that wrote the organization's recently released "Guiding Principles for Artificial Intelligence in History Education.")
My friend who is the most active user of Generative AI that I know is a technical documentation lead for a software company. He routinely uses GenAI tools throughout his entire workday. This dude is a hardcore realist. He will tell you that GenAI is currently a giant bubble, and that OpenAI may very well become the next Netscape. He also routinely says things like, "GenAI is a boilerplate machine. It just so happens that I write boilerplate all day." This is important. Something "tech critics" - who often come out of the humanities and, for all their kvetching, do not spend the entirety of their working lives doing paperwork - miss is how much our bureaucratic culture runs on highly standardized and boring documents, the kind of thing my friends tell me that GenAI, with supervision obviously, is pretty good at.
Similarly, one of my loved ones told me that if she were giving an Oscars-style acceptance speech for the job she recently landed, it would begin, "I'd like to thank ChatGPT for all it did for me . . . " Because she used it to generate the first drafts of nearly all of her application materials, which she would then revise. She also used it to create mock interview questions, which she came up with responses for. She reports that the first question ChatGPT asked her was the first question she was asked in her actual interview. It helped her feel prepared. In her job, she now uses ChatGPT to write social media posts, website copy, and emails, something she long found psychologically difficult and anxiety-producing. She finds the tool subjectively useful. But fascinatingly - and here's the bad news for OpenAI - when I ask if she would be willing to pay $20 a month for access to ChatGPT, she says, "Not unless it became a LOT more useful." This response raises real questions for our current AI bubble: many people may find these tools handy for certain tasks yet still not be willing to pay enough to make them financially sustainable.
One final example: In my experience, one of the places where GenAI is being adopted the fastest is K12 education, including by my siblings and the many K12 teachers I am lucky to know. This adoption makes sense. We know that teachers are stressed and overwhelmed. To leave no doubt, I would prefer to live in a world where teachers were better paid and less stressed. But given present reality, it is no surprise that teachers at least try adopting a tool that they perceive will make their work more efficient.
As one of the Lego robotics coaches at Margaret Beeks Elementary School here in Blacksburg, Virginia, I am very lucky to work with Ms. Elizabeth Larson, an award-winning STEM teacher and doctoral student in education. Larson uses ChatGPT for both her graduate studies and her teaching. For her studies, she reports using it for smoothing outlines, producing bibliographies, finding sources, and revising prose, the last of which is often mentioned by students in our graduate program. (Critics may point out that GenAI introduces errors when doing these things, which is totally true. But then people revise. As one of the grad students in my department once put it, "It turns out that when I use ChatGPT, I also use my mind.") For her work as a teacher, she most often uses the tool, as she puts it, "to help with grant writing and other things that are very methodical and tedious. I will just load all of my information into it and say this is the grant that I'm applying for, this is the project details, can you write a cohesive statement?" Which, again, she then revises. She also uses GenAI to produce a variety of images for her STEM classes. She uses these tools, but she's no slouch. Indeed, Ms. Larson was teacher of the year last year at Margaret Beeks.
People will tell you things if you ask them. Often, you'll find out they are different than you are. Scholars from many traditions have justly and truly reminded and re-reminded us that marginalized subjects - women, people of color, the disabled, the queer, the colonized, and so on - are the least likely to get listened to, but the amazing thing about Narcissistic-Sociopathic Technology Studies is that it leads us to listen to no one at all who isn't already like ourselves.
As I said, as a mostly non-user of GenAI, I have been repeatedly surprised by the people in my life who tell me they find the thing useful. There's a lot for those of us in technology studies to learn here, and I really look forward to the empirical studies of GenAI adoption that will inevitably come over the horizon.
Now, personally, I have, to date, not found the various ethical arguments against GenAI use very compelling. I don't use it because . . . well, I don't find it very useful for what I do. But it is essential to point out that you can hold any ethical principle you want and still do the interpretive work of trying to understand other people who are not yourself. Indeed, this kind of work is the very foundation of humanistic and social scientific interpretation.
Why do we need to remind ourselves of this task at this point in time? Have we forgotten it? As I said above, I have thoughts, but I will leave it there for now.