This week’s topic is a bit of a detour in that it’s ostensibly about a rather narrow subject, which is the platform I’m writing this on: Substack. But the things I want to unpack are in fact connected to the broader theme of Dear Partisan, which is the importance of diverse viewpoints argued in good faith. In particular, it has to do with the issues of free speech, content moderation, algorithmic promotion of extreme content, and media business models. All of which are things that make Substack rather unique in what I think are largely good ways.
The proximate cause of this post is a pair of recent Substack-related events: the launch of a micro-blogging product called Notes, and a subsequent podcast interview with Substack CEO Chris Best that created some controversy in the tech media because of his answers (or non-answers) to questions about content moderation.
A great deal of the buzz around Notes stems from its similarity to Twitter, and the perception that this marks a fundamental shift in product strategy from a writer-focused newsletter service to a consumer-focused “social media platform.” And because of this similarity, many commentators are projecting the problems and challenges of social media onto Substack Notes, and demanding answers to the kinds of thorny content moderation challenges that have plagued Facebook, Twitter, etc.
But I’m not sure the comparison is quite so straightforward. Yes, Notes is a microblogging feature that allows you to follow other Substack users, who may or may not write long-form articles on the platform, and who can share all kinds of content. You can react to a note with a “heart,” comment on it, or “restack” it to share it with your subscribers. So far, so similar. But generally speaking, it’s not the mere hosting of offensive content that gets social media platforms into trouble. It’s the promotion. The “algorithm.”
Any platform like this which ends up with more content than a normal user can reasonably keep up with is going to need some automated decision-making software (algorithm) to decide how to prioritize that content, so users don’t have to dig to find something interesting. Simply showing everything in reverse-chronological order doesn’t scale well and unfairly advantages the most prolific users. So, ultimately, the platform needs to decide what they want to prioritize. And this is where I think Substack’s insistence on generating revenue through subscriptions, rather than ads, is key.
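To make this concrete, here is a deliberately toy sketch - entirely hypothetical, not Substack’s or any platform’s actual code - contrasting a reverse-chronological feed with an engagement-weighted one. Every weight in it (comments counting double, hourly decay) is invented purely for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    likes: int = 0
    comments: int = 0

def rank_chronological(posts):
    """Newest first: simple, but it drowns readers and rewards sheer volume."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def rank_by_engagement(posts, now):
    """Engagement-weighted: surfaces 'hot' posts regardless of who wrote them.
    The weights here (a comment worth two likes, decay by age) are invented."""
    def score(p):
        age_hours = (now - p.posted_at).total_seconds() / 3600
        return (p.likes + 2 * p.comments) / (1 + age_hours)
    return sorted(posts, key=score, reverse=True)

# A day's worth of pile-on outranks a fresh, quiet post under engagement ranking:
now = datetime(2023, 5, 1, 12, 0)
feed = [
    Post("a", "old pile-on", now - timedelta(hours=10), likes=100, comments=50),
    Post("b", "fresh note", now - timedelta(hours=1), likes=1),
]
print(rank_chronological(feed)[0].text)       # fresh note
print(rank_by_engagement(feed, now)[0].text)  # old pile-on
```

Even this toy version shows the dynamic: whatever provokes reactions rises to the top, regardless of why people are reacting.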
In a for-profit business, the business model is ultimately what drives the incentives. Ad-based businesses thrive on the ability to show you lots of ads (“impressions”) for products you might actually want, which they determine by gathering a lot of data about your interests and preferences as revealed by your activity on the site. In order to show lots of ads, they need to keep you on the platform and scrolling. So, they refine their algorithm to optimize for this metric. The Washington Post had a good article about this process at Facebook, including the fact that for years they resisted reducing the weight of the “angry” emoji even though they knew it was associated with toxic content. The article doesn’t state this, but I presume this stemmed from a fear that it would reduce engagement, and therefore, ad impressions.
It’s no wonder that an incentive structure that encourages sharing of content that triggers quick responses - including negative ones - could result in some unintended consequences. And by controlling the algorithm that decides what content to promote to users, the platform owner becomes somewhat culpable for that content. If someone posts a racist screed on the internet that isn’t promoted any higher than the sea of other content, it’s not likely to travel far. But if an algorithm sees that post getting engagement, it might amplify it and help it spread. Hence, the need for moderators.
In such an environment, moderators are effectively hired staff whose job is to fight the very algorithm that fuels their company’s business. They’re needed because the company doesn’t want to ultimately get rid of the incentive structure that often results in the unintentional promotion of offensive content, but they do want to show that they take offensive content seriously by trying to remove it. But this is an expensive game of whack-a-mole that wouldn’t be so necessary if their algorithm wasn’t in the habit of boosting such material. This is where Substack may have an opportunity to operate differently by virtue of their subscription-based business model.
In that interview I mentioned near the top, Nilay Patel gave Best a hard time for not being willing to say that they’d remove a particular racist comment from their platform. Best could have answered this better - his rote “I don’t engage in such questions about moderation” was certainly unsatisfying - but I think some of his answers before and after this exchange explained why he considers the question somewhat beside the point.
As stated earlier, the toxicity of a platform is not primarily a function of what content it allows, but what it encourages and promotes. Some platforms actively market to people who’ve been banned elsewhere, as the various right-wing Twitter clones launched in recent years have done, with predictable results. But Substack isn’t marketing itself as the place to go if you’ve been censored elsewhere. That may be why some writers go there, but the words “free speech,” “cancel,” and “censor” don’t even appear on their homepage (as of this writing). They’re trying to attract a diverse array of writers by helping them serve and grow loyal audiences willing to pay them.
Because they make money when writers get paid for long-form articles, they have different incentives for deciding what content to boost. If they attract a ton of Notes-only Twitter migrants, and resist the temptation to put ads on Notes, their business incentive will be to downrank these non-revenue-generating users’ posts (unless they’re sharing Substack articles). Substack wants you to find writers you like enough to pay money to read and support. They don’t make money when you spend all day scrolling Notes and heart-ing random shit-throwers’ provocative hot takes. This incentive structure emphasizes positive relationships, and profits when readers actually show support with their wallet. Perhaps this is why you can only “heart” things on Notes - it’s not a product limitation, but a conscious decision about what types of interactions they want to encourage.
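As a thought experiment - not anything Substack has published - the same kind of ranking function could instead weight signals aligned with a subscription business: whether the author publishes long-form work, and whether they pay to support writers, with only a mild recency preference and no engagement term at all. Every weight below is made up for illustration:

```python
from datetime import datetime

def rank_subscription_aligned(posts, now):
    """posts: dicts with 'text', 'posted_at', 'author_is_writer', and
    'author_is_paying'. A hypothetical ranking that boosts long-form
    writers and paying readers; the multipliers are invented here."""
    def score(p):
        age_hours = (now - p["posted_at"]).total_seconds() / 3600
        base = 1.0 / (1 + age_hours)   # mild recency preference
        if p["author_is_writer"]:
            base *= 3.0                # favor people who publish long-form work
        if p["author_is_paying"]:
            base *= 1.5                # favor readers who support writers
        return base
    return sorted(posts, key=score, reverse=True)

now = datetime(2023, 5, 1, 12, 0)
feed = [
    {"text": "drive-by hot take", "posted_at": now,
     "author_is_writer": False, "author_is_paying": False},
    {"text": "essay excerpt", "posted_at": now,
     "author_is_writer": True, "author_is_paying": True},
]
print(rank_subscription_aligned(feed, now)[0]["text"])  # essay excerpt
```

Notice there is no term rewarding raw reactions at all. The point is only that the objective function, not the moderation team, is where a feed’s culture gets set.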
As long as Substack holds their ground by maintaining a business model that does not depend on engagement, impressions, clicks and time spent scrolling the Notes feed, they can build their algorithms to establish a different type of experience and culture which avoids becoming a cesspool of misinformation and abuse. And this I think is the key difference that makes Substack’s controversial “light touch” (and low-cost) approach to content moderation potentially viable. As long as they don’t start boosting hateful content beyond its pre-existing niche audiences, they can be a “free speech” platform without increasing the harm or reach of that content to the detriment of society.
And free speech is a worthy goal. Patel conceded that protecting speech from government censorship is important. But influential communication platforms aren’t a totally different situation - they too are run by imperfect people with considerable power over public discourse. Which is why moderation is a difficult area: once you get into that business, it can become hard to know where to draw the line. That’s not to say there isn’t a line to draw - and Substack does draw one - but the greyer and more subjective it becomes, the greater the scope for bias and mistakes to creep in. Best said it well in that interview:
I think the place that we maybe differ is you’re coming at this from a point where you think that because something is bad, […] that therefore censorship of it is the most effective tool to prevent that. And I think we’ve run, in my estimation over the past five years, however long it’s been, a grand experiment in the idea that pervasive censorship successfully combats ideas that the owners of the platforms don’t like. And my read is that that hasn’t actually worked. That hasn’t been a success. It hasn’t caused those ideas not to exist. It hasn’t built trust. It hasn’t ended polarization. It hasn’t done any of those things. And I don’t think that taking the approach that the legacy platforms have taken and expecting it to have different outcomes is obviously the right answer…
Personally, I hope that Substack doesn’t aspire for Notes to be the next Twitter. It’s not clear why they’d want to be, after all, given Twitter’s controversies and financial difficulties. And I think if that’s not their goal, and Notes remains a feature meant to augment their core product of long-form publications, then it can operate differently than we’re used to social media operating. It might give greater weight to posts from a variety of writers who produce long-form content, since that’s the purpose of the platform. And it might also weight posts from people who pay for content, since that’s a strong signal that they are serious about supporting writers, which is also the purpose of the platform. In fact, they’ve already introduced a feature that allows Notes users to restrict comments to paying subscribers, which on a platform like Twitter would be against its business interests, but on Substack it’s a way to encourage civil discourse and incentivize subscriptions.
These are profit-motivated design decisions which are aligned with the values of the company and the culture they’re trying to promote. And this gives me hope that they may in fact be building a new kind of “social media platform” - one where thoughtful, respectful interactions thrive without the need for aggressive moderation. This may prove to have been naive, but I think it’s a worthwhile experiment and I wish them well.