- cross-posted to:
- technology@beehaw.org
More than 200 Substack authors asked the platform to explain why it’s “platforming and monetizing Nazis,” and now they have an answer straight from co-founder Hamish McKenzie:
I just want to make it clear that we don’t like Nazis either—we wish no-one held those views. But some people do hold those and other extreme views. Given that, we don’t think that censorship (including through demonetizing publications) makes the problem go away—in fact, it makes it worse.
While McKenzie offers no evidence to back these ideas, this tracks with the company’s previous stance on taking a hands-off approach to moderation. In April, Substack CEO Chris Best appeared on the Decoder podcast and refused to answer moderation questions. “We’re not going to get into specific ‘would you or won’t you’ content moderation questions” over the issue of overt racism being published on the platform, Best said. McKenzie followed up later with a similar statement to the one today, saying “we don’t like or condone bigotry in any form.”
Thank you for the detailed response.
Substack can host nazis given the legal framework in the US. But why shouldn’t I speak up about their platforming of evil? Substack can do what they want, and I can tell them to fuck off. I can tell people who do business with them that I don’t approve, and I’m not going to do business with them while they’re engaged with this nazi-loving platform. That’s just regular old freedom of speech and association.
Their speech is not more important than mine. There is no obligation for me to sit in silence when someone else is saying horrible things.
It feels like you’re arguing for free speech for the platform, but restricted speech for the audience. The platform is free to pick who can post there, but you don’t want the audience to speak back.
You’re conflating laws and government with private stuff. The bulk of this conversation is about what private organizations can do to moderate their platforms. Legality is only tangentially related. (Also, it doesn’t necessarily follow that banning nazi uniforms would ban BLM t-shirts. Germany has some heavy bans on nazi imagery and, to my knowledge, has not slid enthusiastically down that slope.)
A web forum I used to frequent banned pro-Trump and pro-ICE posts. The world didn’t end. They didn’t ban BLM. It helps that it was a forum run by people, not an inscrutable god-machine or a malicious genie.
I’m also not sure I understood your answer to my question. Is there a line other than “technically legal” that you don’t want crossed? Is the law actually a good arbiter?
I don’t think they’ve actually been trying very hard. They make a lot of money by not doing much. Google’s also internally incompetent (see: their many, many canceled projects), Facebook is evil (see: that time they tried to make people sad just to see if they could), and Twitter has always had a child’s understanding of free speech.
A related problem here is probably the consolidation of platforms. Twitter and Facebook are so big that banning someone from them is a bigger deal than it probably should be. But that person is free to move to a more permissive platform if their content is getting them kicked out of popular places. We’re not talking about a nationwide content ban backed by government force.
I’m not sure what to do about coordinated disinformation. Platforms banning or refusing to host some of it is probably one part of the remedy, though.