I'm curious whether the original study your colleague mentioned, the one suggesting that people couldn't remember if what they read was in the comments section or in the article itself, took into account whether the comments people remembered were actually factual. For the sake of argument, let's assume 10% of the comments are factual. Do people remember the factual ones, or the ones that are just trolls?
Isn't part of the reason for the pendulum-swing of social media company policies on COVID that they over-moderated in some cases, incorrectly suppressing legitimate discussion of disputed issues as misinformation?
I'm thinking here particularly of the lab-leak hypothesis, which IIRC Facebook, at least, suppressed posts about for some time in 2020-21. Eventually they turned around on it, because the evidence that the hypothesis _might_ be true (and "might" is still the right word!) became too strong, and because principled, knowledgeable people like Zeynep Tufekci called them out on it. But the damage was done: they had handed a talking point to everyone inclined to say "what ELSE doesn't Big Media want you to know, hmmm?" or the like.
This is certainly the argument that Clegg and the rest are making now: that they overdid it and wound up over-enforcing. I can understand that argument in the abstract, but it reads to me as a lopsided excuse for pulling back from a function no one at the top of these companies has ever wanted to have, which is responsibility for the accuracy of what gets posted. I'd love to have more Zeynep Tufekcis in the world, on every issue. But I don't think that instance is why the pendulum is swinging (or reason enough to swing it). In my experience, the leaders of these companies, and their policy departments, never do anything for a single reason. Their reasons are always a Venn diagram of strategic incentives, and in this case the overlap creates an intellectual argument for doing less, which is easier, cheaper, and more politically popular with the incoming administration/House/Senate. Many reasons, and none of them just (or at all) a matter of ethics or social concern.
When I speak to trust and safety people at these companies, they say they're so overwhelmed by the sheer volume of out-and-out dangerous, exploitative, and illegal content that they can barely get their arms around the people who abuse their platforms for a living. The casual misinformation artists are way beyond their ability to handle. But that's not an excuse for companies to throw up their hands. I think they ought to be able to handle what happens on their platforms. That was my training as an editor — you don't publish things you can't stand behind, or that haven't been vetted as in some way contributing to the national discussion. The philosophical premise of social media as an industry has been that those standards should be abandoned.
I think where I wind up differing from a lot of people who cover tech for a living is that I don't find that the scale and success of these companies are at all relevant to whether they ought to be allowed to do what they do. And when I say that out loud, as you can imagine, the exec gets up to leave, and the evening is over.
(May I also say here that I so appreciate your thoughtful feedback! It's incredibly rewarding to have you reading what I post!)
Thanks for posting all this stuff! It's a conversation we need to have, never more so than now.
Agreed that social media execs have plenty of less-than-honorable reasons for not wanting to be editorial arbiters of truth. I have to say, though, it's still not clear to me what successfully doing that arbitration would look like, and I'm pretty sure it would *not* look much like your editorial function at Popular Science (where I completely agree you were right to turn off comments, fwiw). These platforms are a weird new animal whose nature we're all still figuring out. They're not really like a postal service, nor a bookstore, nor a conventionally edited magazine or news website... so nobody's old metaphors really map well.
As to the fate of most Substack authors: *The Winner-Take-All Society* by Cook and Frank.