Silicon Valley’s Very Online Ideologues Are in Model Collapse
Like an AI trained on its own output, they’re growing increasingly divorced from reality, and are reinforcing their own worst habits of thought.
The ideologues of Silicon Valley are in model collapse.
To train an AI model, you need to give it a ton of data, and the quality of output from the model depends upon whether that data is any good. A risk AI models face, especially as AI-generated output makes up a larger share of what’s published online, is “model collapse”: the rapid degradation that results from AI models being trained on the output of AI models. Essentially, the AI is primarily talking to, and learning from, itself, and this creates a self-reinforcing cascade of bad thinking.
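To see the mechanism in miniature, consider a toy simulation: fit a simple statistical model to some data, then generate the next round of “data” from that model and fit again. Because each generation learns from the previous generation’s approximation rather than from the original source, estimation errors accumulate instead of being corrected. The sketch below is a deliberately simplified illustration of that feedback loop (a Gaussian standing in for a full model), not a description of how any production system is actually trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(1, 21):
    # "Train" a model on the current data: here, just fit a Gaussian.
    mu, sigma = data.mean(), data.std()
    print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation trains only on the previous model's own output,
    # so its errors compound rather than being corrected by fresh data.
    data = rng.normal(loc=mu, scale=sigma, size=100)
```

Run it and the fitted parameters wander further from the original distribution with each generation; nothing in the loop ever pulls the model back toward the real data.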
We’ve been watching something similar happen, in real time, with the Elon Musks, Marc Andreessens, Peter Thiels, and other chronically online Silicon Valley representatives of far-right ideology. It’s not just that they have bad values that are leading to bad politics. They also seem to be talking themselves into believing nonsense at an increasing rate. The world they seem to believe exists, and which they’re reacting and warning against, bears less and less resemblance to the actual world, and instead represents an imagined lore they’ve gotten themselves lost in.
This is happening because they’re talking among themselves, and have constructed an ideology that has convinced them those outside their bubble aren’t worth listening to, and that any criticisms of the ideas internal to their bubble are just confirmation of their ideology, not meaningful challenges to it. They’ve convinced themselves they are the only innovative and non-conformist thinkers, even though, like an AI trained on AI slop, their ideological inputs are increasingly uniform and grounded in bad data and worse ideas.
Model collapse happens because structural features of the training process, intentional or unintentional, mean that AI-generated content is included, at an increasing frequency, in the training data. The AI “learns” from sources that don’t correct its mistakes and misconceptions. Structural features of a similar sort are playing out in the far-right corners of Silicon Valley.
First, there’s what I’ve referred to in the past as the “Quillette Effect.” Because we believe our own ideas are correct (or else we wouldn’t believe them), we tend to think that people who share our ideas are correct, as well. Thus, when someone who shares our ideas tells us about new ideas we’re not familiar with, we tend to think their presentation of those ideas is probably accurate. Quillette is a website that has often published articles explaining ideas on the left to its predominantly right-wing audience. If you’re part of that community, and share the generally right-wing perspective of Quillette authors, but don’t know much about the left-originating ideas they discuss (critical race theory, postmodernism, etc.), you’ll likely find their explainers persuasive, not just in terms of being a reasonably accurate presentation of those ideas, but also in their conclusion that those ideas lack merit. But if you do know something about those ideas, you’ll find that Quillette presents them poorly and inaccurately. In other words, the “Quillette Effect” is an example of an ideological community tricking itself into believing it has learned about ideas outside of its tribe, when in fact it’s just flattering and reinforcing ideas internal to its tribe. And Quillette is far from alone in this. Bari Weiss’s Free Press, quite popular in online right-wing circles, plays the same game.
Second, there’s the structural issue of wealth dependency. When you’re as rich as Musk, Andreessen, or Thiel, a great many of the people you interact with are either of your immediate social class, or are dependent upon you financially. Your immediate social class, especially the people you interact with socially, are likely to share your ideological priors, and so not challenge you at anything like a deep level. And people who are financially dependent on you are likely to reflect your ideas back to you, rather than challenging them, because they don’t want to lose your support—or they are hoping to gain it. Thus your ongoing training inputs will reflect your own ideological outputs. (The recent story of the Trump campaign buying pro-Trump ads on cable stations near Mar-a-Lago so Trump will see positive messages about himself—even though this is wasted money from a campaign strategy standpoint—is an example of this dynamic.)
Third, the structure of social media not only means that very online people tend to be flooded with ideologically confirming views, but also that, when they do encounter contrary positions, it’s in a way that makes those positions easier to write off as unserious and fringe. The nature of a social media feed tricks us into thinking our ideological community is much more representative of the broader conversation than it really is.
""For someone like Elon Musk—a guy who spends so much time on Twitter that it seemingly represents the bulk of his engagement with people outside his immediate circles—the odd little far-right world of his Twitter feed comes to feel like the whole world. Terminally online, heavy social media users don’t realize how much nonsense they take to be fact because that nonsense, to them, looks like majority opinion, disputed only by a discredited (by their community’s imagined consensus) and unserious minority."
(That passage is from a longer essay I wrote digging into how this works, and how this cognitive illusion damages our politics.) Further, because so much of the online right is concentrated on Twitter, people who are active on Twitter come to view the ideas internal to the online right as closer to the mainstream than they in fact are, and so get dragged to the right, often unintentionally. This means that the “training data” of very online ideologues grows increasingly uniform, consisting largely of restatements of very online right-wing perspectives, while data outside that perspective is treated with growing suspicion because it is mistakenly believed to be fringe, and so not worth taking seriously.
The result of these three features is an insular intellectual community, talking increasingly only to itself, and increasingly cut off from the kinds of conversations that would correct its excesses, or, at the very least, give it a more accurate perspective on what the world outside its bubble looks like. Hence their surprise, for example, that the nomination of JD Vance led not to a widespread and enthusiastic embrace of neo-reactionary philosophy, but instead to an entire, and apparently quite successful, Democratic campaign built around “those guys are weird.”
The problem with model collapse is that, once it goes too far, it’s difficult to correct. The solution to model collapse is to train on better data. But accomplishing that, and undoing the rapidly radicalizing right-wing ideology of these titans of the Valley, means undoing the structural causes of that self-referential and self-reinforcing cascade. And that’s no easy task.