Bubbles Are Shitty Information Environments, and All We Can Do Is Wait
By Lee Vinsel
On LinkedIn, Rajesh Veeraraghavan bemoaned our current information environment around Generative AI and tagged me as someone who might have thoughts:
Maybe this is me, but this is what I constantly hear... LLMs are hype. LLMs are taking jobs. LLMs are causing harms. LLMs are not conscious. LLMs are normal tools. LLMs are wonderful tools. LLMs and ethics? Nah. LLMs and justice? Nah. LLMs and capitalism? And then people commenting on all this: Why are people hyping LLMs? Or why are people criticizing LLM use? Will LLMs destroy jobs? Are companies hiding behind LLMs and laying off people? The discourse is all about claims and counterclaims, while many people affected by LLMs are outside this discourse.
But we need more empirical work, not just elite discourse... Otherwise, we talk about harm and call that resistance?
Here's where I'm at with this stuff:
Goldfarb and Kirsch's _Bubbles and Crashes: The Boom and Bust of Technological Innovation_ is my lodestone for thinking about technology bubbles. We are in a bubble with GenAI in two senses - a narrow financial one and a broader cultural one. Financially, the amount of money being poured into this technology, and the stock valuations related to it, almost certainly outpaces reasonable short- to medium-term revenue expectations. By a cultural bubble, I mean that there is a ton of social energy around the technology because, as Goldfarb and Kirsch teach us, people have bought into dramatic narratives, both positive and negative, about what the technology will bring.
As I explained in my MIT Sloan Management Review piece, bubbles are *TERRIBLE* information environments. Almost everything we are hearing - again, both positive and negative - is a kind of off-gassing, no doubt exacerbated by social media's incentives and cultures of "takes." Nearly all of it involves some kind of forecasting on the basis of personal experience, hearsay, whatever numbers come around, and, well, vibes. As Derek Thompson just nicely put it, Nobody Knows Anything, especially, I would emphasize, when it comes to how people will adopt this technology over the next decade or more.
A great example of this is how, last year, the factoid went around that AI capital expenditures accounted for half of US GDP growth. I repeated this idea because I thought the data was good! It now looks like this factoid was bunk. We just cannot rely on what we are hearing.
In lieu of good evidence and strong thinking, what we get from people is *IDEOLOGY* - riffing driven by where the speaker fits in social space. My thinking about reactions to new technologies is more and more influenced by John Levi Martin's picture of ideology, which can be summarized as "sides + self-concept = opinions": your opinions are a result of your social networks and how you think about yourself.
Now, obviously, lots of people, including AI business leaders, investors, certain kinds of coders, social media influencers of the "12 Great GenAI Hacks" variety, and so on, are socially positioned to give us positive hype. We see this all over the place.
But there are also folks who are socially positioned so that they reliably give us negative, dystopian pictures, which often masquerade as thinking (often called "critical" thinking) but, again, is really just ideological off-gassing; it is UN-thinking. Something I'm writing about is that, for historical reasons, humanities and humanistic social science scholars often fall in this camp.
This negative ideology comes in a few varieties. First, we have criti-hype: visions in which, for example, corporate AI use leads to mass layoffs and immiseration. Interestingly, in this round of AI hype, boosters of the technology, like Sam Altman, Elon Musk, and others, have also weaponized criti-hype for strategic ends.
The other variety of negative ideology we see a lot of today is what we might call anti-hype: people who claim AI is nothing or that it's a "scam." I sometimes get lumped in with these folks because I have been writing about and criticizing hype for years, including in pieces I wrote with Jeff Funk. But, honestly, these are the folks who are disappointing me the most these days. As I've argued elsewhere, they are ignoring all the people, in very different industries and professions, who are talking about the various ways they find GenAI tools useful in daily life. (I also frankly don't buy the ethical arguments against GenAI use, which I find unconvincing on several levels; folks' tone of moral self-righteousness around it certainly doesn't help. But that is a topic for another time.)
The anti-hype crowd also tends to cherry-pick data that supports what they already think. A good example is all the attention they lavished on an MIT report that found 95% of GenAI pilot projects were failing, claiming that this showed the technology is empty and useless. But that's not what such findings show at all. As former IBM executive and eminent computer historian James Cortada pointed out on LinkedIn, loads of failure is normal for new technologies, especially bubbly ones with lots of social energy around them. Trial and error, with lots of the latter, is TYPICAL.
A lot of the anti-hype we are seeing is just stupid, and it is driven by the fact that even though its speakers claim to be "experts" on topics of technology, they actually know very little about the history, sociology, and economics of technical systems. They are ignorant af.
So we are in a bad place, but sadly . . . THE ONLY ANSWER IS TIME.
Cortada's point about normal failure fits with Paul David's classic argument in his famous 1990 article, "The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox," that it typically takes decades for new technologies to diffuse through society, settle down, and become a stable part of everyday practices. Rajesh rightly asks for more empirical work, which I have done, too. But this, too, is going to take *TIME*. What we most need are studies, including surveys, interviews, and ethnographies, of how people are adopting these tools in organizational settings. We already have some of those on the way. Until then, like Rajesh, I think the best empirical study I've seen so far is Gabriel Alcaras and Donato Ricci's paper, "Configuration Work: Four Consequences of LLMs-in-Use."
For the sake of everyone and everything, including my sanity, it will be great when the air starts to go out of our current bubble. That, too, will improve the quality of our information environment. Until then - when it comes to knowing what's going on - we can just twiddle our thumbs and wait.