Matthew Feeney & Will Duffield join the show to talk about the impact the pandemic has had on the techlash.
Shownotes:
There are some things that even a pandemic cannot stop. One of those things is political pressure to “do something” about Big Tech. Paul checks in with Matthew Feeney and Will Duffield to get an update on the state of the techlash. Furthermore, this year many of the major social media platforms have ramped up their fact-checking operations in an attempt to combat disinformation about the pandemic and partisan politics, but it is possible that they have opened a Pandora’s box of unintended consequences by doing so.
Transcript
[music]
00:03 Paul Matzko: Welcome back to Building Tomorrow, a show about tech innovation and the future. We have a bit of a throwback today. It’s been a while since we’ve done a Building Tomorrow roundtable, so we have some blasts from the past: Will Duffield, Policy Analyst at Cato’s Center for Representative Government, and Matthew Feeney, Director of Emerging Technology at the Cato Institute. Welcome to the show, Will; welcome to the show, Matthew.
00:26 Will Duffield: Great to be back.
00:27 Matthew Feeney: Glad to be back.
00:28 Paul Matzko: Alright, so a lot has been going on in the tech sector this year, and it’s been a bit of a mixed bag. Obviously, everyone’s been focused on the pandemic, on everything from what research teams and the big pharmaceutical companies are pouring resources into, to lots of delays, I think, in tech research on a lot of fronts as attention turns and capital shifts in the market. So I wanted to check in and see where we’re at in the tech sector. In particular, let’s start by talking about the techlash. I had Alec Stapp come on the show a few months ago, early on during the pandemic, and I asked him, “Is the techlash over or not?” In the early stage of the pandemic, public opinion about big tech, about the tech sector, had actually been goosed by the pandemic as it tied together people who were in isolation during the COVID crisis. Does it look the same way a few months out? Or are we back into the techlash conversation? What do you think, Will?
01:40 Will Duffield: Unfortunately, I think that the “techlash is over” is over.
01:45 Paul Matzko: That’s good. [chuckle]
01:47 Will Duffield: As the pandemic has spun on, we’ve really seen two phenomena that I think have cut short that nice Goldilocks period we were enjoying in the spring and early summer. First, as platform moderation really stepped up to combat disinformation, you’ve seen a reset of expectations. Those who hoped to see platforms do more to combat extremism, nasty sentiment, that sort of thing, now have this coronavirus misinformation model they can point to and say, “Well, why can’t you come down just as hard on whatever it is that’s upset us, from antiquities trading to terrorism?” So that has unfortunately provided a model, a desirable model for some, of what moderation could look like elsewhere. As well, you’ve seen these platforms, Twitter, Facebook, YouTube, increasingly used, as the lockdowns have continued and the pandemic has churned on, to express complaints about how this has been going, about the effect of lockdowns. And as platforms have attempted to manage disinformation in light of these complaints, which often weave legitimate concerns about the effect of lockdowns together with conspiracy about why they’ve been imposed, you’ve seen a lot of concern on the right about how that impacts political speech. It has, again, energized those who want to see platforms do more, because they’ve seen this upswing in radical speech.
03:38 Paul Matzko: I suppose there’s a Pandora’s box effect here, which is that if they can do it in this one area, if they can combat lies and disinformation, etcetera, as they’ve… I think there have been mixed results with how effective that anti-COVID disinformation campaign has been, but now every interest group has its own fish to fry. It says, “Well, if you can do it there, why can’t you combat X?” That makes a lot of sense. If you are one of these big social media platforms, if you’re Facebook or Twitter or whoever, what’s your response to those calls for even more action? How do you think they’re going to field this pressure?
04:22 Will Duffield: I think you have to parse the pandemic out as something truly novel. You have to look at, perhaps, where states have been able to step back from a state of exception in the past without sacrificing liberties going forward. And that’s always been difficult; there aren’t too many success stories that way. But the posture adopted by platforms in the wake of this pandemic has really been akin to that of states in a time of war. Unfortunately, because it has churned on for so long, disinformation concerns, pandemic-specific disinformation concerns, have become ever more entangled with broader politics. And therefore it becomes harder to make the case for separateness, for the uniqueness of the moderation that that disinformation demanded. Unfortunately, I think we’re almost past that point.
05:27 Paul Matzko: I have wondered, maybe we can get back to brass tacks a bit here. I do think the optimists in the techlash conversation, people like Caleb Watney and Alec Stapp, are right that the broad public doesn’t have strong negative feelings towards big tech. Routinely, Amazon is at the top of the corporate goodwill charts; people love Prime and getting their packages without having to go out. And while some individual companies suffer by those metrics, like Facebook, which has had a growing popularity problem over the last couple of years, in general there’s not some broad public opinion behind taking on big tech, or behind government regulators taking on big tech. Which raises the question, and I frankly don’t have a single good answer for this: where is the techlash coming from? If it’s not some groundswell of public opinion that’s driving this issue, why the techlash?
06:29 Matthew Feeney: I think there’s an interesting political answer to that question. And it’s not interesting just because I’m saying it. What I mean is, once you put the current tech environment into a broader political context, more of this makes sense: I think we’re in the middle of some kind of political realignment where rather elite politicians on both sides of the aisle see populism as a worthwhile avenue. And especially today, not so much in content moderation but in antitrust, I think you do see there being some agreement. Some of my optimism about tech policy, for a while, was motivated by the fact that, at least in content moderation, the left and the right couldn’t seem to make up their minds about what the issue was. And it seemed unlikely that you could get some sort of bipartisan content moderation legislation through Congress, First Amendment problems notwithstanding. With antitrust, however, you do have concerns from the left and the right that I think lend themselves much more readily to some kind of bipartisan push, which is somewhat concerning.
07:41 Matthew Feeney: And I don’t think it’s a surprise that most Americans aren’t thinking much about the nitty-gritty of antitrust policy. We should also remember that it’s not mutually exclusive to say that you like Amazon Prime but also think that they might be too big, right? Probably quite a decent chunk of America enjoys and uses YouTube, Google, and Amazon while also having their own concerns. And regardless of who wins the election next month, I think we will continue to see discussions about not only content moderation but also antitrust.
08:20 Paul Matzko: Well, this gets to Will’s… I think you were going here, Will, with the technocratic elites? I mean, the median American isn’t concerned with the ins and outs of antitrust policy, but there are voices in the kind of DC technocratic circle which are.
08:36 Will Duffield: Yeah, I was thinking more about the partisan cross-currents of this conversation. On one hand, you have both major party candidates for president talking about modifying 230 somehow on day one. At the same time, they’re talking about doing it for completely countervailing reasons, and each side’s partisans would look askance at, and find unacceptable, the solutions proposed by the other side. So that lends it a certain amount of stability, even as you hear these sort of continual complaints from both sides.
09:22 Paul Matzko: Even if there’s not some broad general constituency that’s politically motivated, that gets to the polls out of a desire to take big tech down a notch, there’s clearly some sort of advantage there. I mean, there are…
09:37 Will Duffield: And no punishment.
09:38 Paul Matzko: I think we hear of…
09:38 Will Duffield: There’s no cost, seemingly. In the face of widespread support for Amazon, you nevertheless have both major party candidates calling for decisive government action to harm the firm.
09:52 Paul Matzko: And you have freshman senators like Josh Hawley, freshman congresspeople, who are raising their national profiles by positioning themselves as defenders of the public, of the commonweal, against big tech. So it clearly works politically, even in the absence of… That’s a good point. I also wonder to what extent there’s some anticompetitive rent-seeking going on here. I have in mind… I think the Supreme Court is currently considering the Oracle v. Google case, which is more adjacent to this topic. But to what extent is this companies jostling, right? The interests of Apple and Google are not aligned when it comes to social media, when it comes to content moderation. And so now we’re starting to see this breakout of internet companies that might have once presented more of a united front but now are going to approach the issue of potential antitrust action or Section 230 reform with different interests; their interests no longer align the way we might once have assumed. What do you think of that?
11:07 Matthew Feeney: Well, it has been interesting to watch some of the hearings this year on this issue. There was a hearing earlier this year that included the CEOs of Facebook, Amazon, Apple, and Google. And it’s interesting to hear people describing these companies as monopolies despite the fact that they do, in some sectors, compete with each other. One of the overriding concerns here is something libertarians have talked about for a while, which is regulatory capture, right? These companies will oppose regulations and significant reforms up until the point they view them as inevitable, and then they’ll probably have an influence on what the regulations look like after they pass. And that is a serious concern, I think, because something I’ve noticed is that not enough Section 230 reformers or antitrust people seem to think that this is a problem, that these companies respond to incentives in important ways. And that’s concerning to me. Even if I don’t think that legislation is likely, especially on Section 230, nonetheless I think the issue is still politically useful.
12:20 Matthew Feeney: So if you asked someone on the left, “Did you know that the American right views Facebook and YouTube as giant censors?” they would just laugh at you, thinking of how successful many conservatives have been online. But that doesn’t really matter to conservatives speaking to conservatives. And meanwhile, if you’re on the left, you have your own category of complaints. So although these claims about content moderation and censorship might not be well placed or empirically grounded, they’re not going to stop being politically useful for the Josh Hawleys of the world anytime soon. And that is a concern.
13:00 Will Duffield: And at scale, it just becomes very difficult to dispose of these claims, because we can point to countless conservatives who are doing well on social media, and yet someone can turn around and point to a host of people who’ve been banned, who aren’t as visible, who’ve suffered harms career-wise. And without a way to compare those universes of anecdotes, you don’t end up with a productive conversation; you can’t prove much of anything.
13:31 Paul Matzko: That’s a good observation, which is that we’re downstream from these big divides in American society, one of which is that we consume our media in very different ecosystems. Folks on the right have relatively little media consumption overlap with people on the left. They often read different papers, visit different websites, listen to different voices online and offline. If you’re sitting at 10,000 feet, if you’re a tech policy wonk like we are, you hear both conversations simultaneously and you say, “Hey, wait a second, these are dissonant. They clash. They don’t work together.” But you wouldn’t know that in your media silo on the right or the left.
14:22 Will Duffield: Well, and so unfortunately, when these companies make content moderation decisions, they end up having to adopt one of those differing contexts over another. When you, as Facebook, deem Kyle Rittenhouse a mass shooter, you’re choosing to accept that context for his actions rather than the one preferred by his supporters. And that will always land you in political hot water, because these are binary decisions between different sets of facts.
14:54 Matthew Feeney: And if you’re one of these institutions, you’re not even helped if you try to kick the decision to another institution. For example, in the ongoing pandemic, many social media companies said, “Well, look, we’re not qualified to make these decisions about what’s misinformation about COVID, so we’ll use another institution like the CDC as a proxy.” But of course, and Will’s written about this before, that runs into its own issues, and it’s not reserved just to medical situations. With something like hate speech, you could defer to the Southern Poverty Law Center, which itself has its own issues. What will be interesting long-term out of the pandemic, I think, is what these platforms learn about their legitimacy crisis, which is that there are just some issues where it’s not good enough for them to make their own decisions, but at the same time it doesn’t seem good enough to rely on other authorities.
15:56 Matthew Feeney: And this is a massive issue at scale. If you consider something like YouTube, where hundreds of hours of content are uploaded every minute, inevitably there will be false positives and false negatives in a content moderation system. So that’s more a PR issue for social media, but the fact is that politically this will be continuous, and we shouldn’t expect it to wrap up any time soon. The answer, of course, if you’re a libertarian, is, “Look, private firms will be free to screw up, and competitors can emerge.” This year has been interesting because you’ve had the emergence of supposedly conservative social media platforms like Parler. And that’s the kind of market, I think, we want, but the worry is that these firms get pressured politically from both sides.
16:44 Paul Matzko: That’s a good point. What we’re basically describing here is an oracle problem, which we usually talk about in the context of cryptocurrency and blockchain: what’s the third-party independent entity that’s gonna rule yay or nay, that this is accurate or not? But oracle problems exist off the chain, too. This is an oracle problem. Facebook doesn’t wanna make this decision about what counts as hate speech or what counts as disinformation, so let’s find an oracle that’s supposed to be independent and thus confer legitimacy on that decision. And they’re having a hard time finding a perfect oracle, because no perfect oracle exists and…
17:27 Will Duffield: Well, any potential oracle is political, and I think to some extent they have to really think about the process of building legitimate expertise in-house, so that they can say, “Our long-standing team, whose work you have respected for a long time, has made this determination,” rather than the Southern Poverty Law Center or one of our stable of fact checkers. And you see the endless debates just about that, about the inclusion of the Daily Caller News Foundation as the sole right-wing fact checker. On one hand, you’ll hear folks who say they don’t deserve to be a fact checker. On the other hand, you’ll hear people who say, “We need three more like them so that we’ll have a balance between right and left fact-checking within the program.” And I think that gets you right to it: once you throw it back to that fact-checking suite of experts, well, then who they are matters.
18:27 Paul Matzko: Yeah, again, it’s a mess; it’s, well, a Pandora’s box, to use the metaphor. And I think it also means we should expect that this is not a temporary aberration. This heated debate over what counts as disinformation and what doesn’t, who the proper authority should be, whether it’s legitimate or not, I think we should expect that conversation to accelerate over the next couple of years. Do you have any feeling for that?
18:54 Matthew Feeney: Well, yeah. We just don’t know what’s gonna happen in the wake of the upcoming presidential election, which I think will just add more fuel to the fire of content moderation. There’s going to be a lot of discussion about what kind of content, especially explicitly political content, is legitimate online. We’re recording this in early October, and there’s already plenty of commentary out there equating content moderation with election interference, which is an astonishing, and very interesting, I would say, move. You’re not just portraying content moderation as ideologically motivated; the attempt really is to say that these private firms, mostly based in California, are a threat to civic institutions themselves, namely that if these companies are allowed to run however they want to run, they threaten democracy.
19:52 Matthew Feeney: And that’s quite a claim, and I don’t blame people for getting upset when they hear something like that. Look, I think 2020 has been defined by this once-a-century pandemic, and that’s very important, and the content moderation issues there are important to analyze. But this attitude that content moderation is election interference will continue if the President is not re-elected, because there will be a significant number of people, including some on Capitol Hill and in the press, who will insist on portraying content moderation as interference in an election, which is quite a serious charge.
20:30 Will Duffield: Well, in a way it’s really an echo of the claim we heard in the wake of the 2016 election: that platforms’ failure to moderate, specifically, foreign disinformation, but also a lot of domestic alt-right speech in favor of the President, constituted, if not interference per se, then an allowance of interference, an irresponsible decision to let people interfere. So I think it’s kind of interesting, as platform moderation has evolved, perhaps become more restrictive or more aware of foreign interference in particular, that that claim has made its way all the way around the political axis.
21:17 Paul Matzko: Well, and there has been… I just saw the other day a report out of the UK, put out by an official government agency, saying that they saw no evidence that foreign interference on social media played a significant role in Brexit. ‘Cause that was another topic…
21:34 Will Duffield: Oh, that’s Cambridge Analytica. They were speaking specifically about Cambridge Analytica and Brexit. And it was always kind of… Well, I got a pitch from them back… Right after Brexit that fall, but before the American election, I was working for a cannabis policy group at the time, and even then it just felt like smoke and mirrors. There was very little insight into how their data system actually worked, and all of the emphasis was on the gimmicks, on the soccer betting lotteries they used to get people’s information, but again, nothing interesting about what they were doing with it. So it’s nice to see that confirmation, but it’s also unfortunate that now, four years later, that’s a separate narrative out there in the world, and this report isn’t going to… You know, Carole Cadwalladr isn’t changing her mind any time soon.
22:29 Paul Matzko: Right. Yeah, it’s not gonna move the needle here. It is also interesting to me, I think it’s another reminder, that this is a minefield that’s going to continue and arguably worsen, because different platforms made different decisions. Twitter dropped all political advertising, right? Facebook went with more of a middle route, where they will continue to take paid campaign advertisements. All of those decisions have been unpopular or controversial in some corners. There is no response from the big social media platforms that makes everyone happy.
23:04 Paul Matzko: So I think we’ll revisit this every four years; so far it’s been peaking at presidential elections, but it looks like it’s gonna be something that’s with us for the near future. In terms of shifting, moving the needle on public opinion: once a narrative gets sunk down into the public consciousness, it’s really hard to change, and no official report is gonna move the needle on that kind of thing. I have seen, and this is anecdotal, a lot of chatter in my own social media streams among complete non-policy people, just ordinary folks in my Facebook, old college acquaintances, that kind of thing. Lots of chatter about the Netflix documentary, The Social Dilemma. And it’s annoying chatter, because I’m not a fan of The Social Dilemma’s argument, but it’s been striking to me; it’s probably the single thing from our tech world that I’ve seen normies interact with this year in terms of tech policy conversations.
24:10 Will Duffield: Well, that sucks [chuckle]
24:12 Paul Matzko: Yeah, that’s what I’m saying. That’s what’s going on below the top-level policy conversation…
24:18 Will Duffield: That’s really dispiriting. Yeah, I’ve watched The Social Dilemma; it’s Tristan Harris and a host of other sort of former tech employees turned tech dissenters. But the documentary really makes our problems with social media out to be problems with algorithms, and therefore misses most of the picture, in my mind.
24:48 Paul Matzko: Can you recapitulate their argument for us?
24:51 Will Duffield: Well, they focus throughout the documentary on how social media’s algorithmic provision of content, be it organic content or advertising, manipulates us and causes us to adopt new beliefs or become more radical. But the documentary ignores non-algorithmic social media almost entirely. There’s no mention of WhatsApp and the kind of lynch mobs based on rumors that you see there, and that’s not an algorithmic product; that’s purely messages from other people who you follow or that are forwarded to you by your contacts.
25:36 Paul Matzko: That’s a big problem in India, right?
25:38 Will Duffield: Yeah. And there’s very little evidence that these algorithms are radicalising, that it’s a supply-side phenomenon rather than a demand-side one. If, as they put it in the documentary, the tools that are being created today are starting to erode how society works, it’s because these tools allow us to communicate with one another in real time at scale in a way we’ve never been able to do before, not because the algorithms are tilting us towards things. The internet may have revealed a certain amount of institutional failure and created a terrain in which legacy institutions have struggled to compete or adapt, but it’s not a simplistic matter of YouTube recommendations turning your child into a Nazi. And the documentary really seems to push that, and to use fear to do so, which is kind of ironic in a documentary that purports to explain how social media sells us fear.
26:50 Paul Matzko: It does seem to fixate on that arc: you search for a conspiracy video, the algorithm figures out that you like conspiracy videos, so it feeds you more, and you go down this rabbit hole, and before you know it, you’re part of QAnon. But I’m also struck by what they don’t touch on, which is that… Yes, sure, the algorithm can play a role in the radicalisation of folks, but it serves up content that folks who are predisposed towards that way of thinking are already looking for.
27:25 Will Duffield: Yeah, but it’s persuasion based on that predisposition. You’re worried about your children and vaccines, you’re a kind of vegan natural mother, and you get to QAnon that way, because they speak to your concerns about chemicals in your food and vaccines harming your children.
27:45 Paul Matzko: Yeah, but it can also play the flip role, which is that it can deradicalise. I mean, it is giving you the information you look for better than a blind search would, and that can be good or bad; it can radicalise or deradicalise. The whole term radicalisation is problematic as a concept, because we mean something particular by it, which is that it comes close to brainwashing; it removes agency.
28:11 Will Duffield: Well, and there’s a normative point, that radical beliefs are bad. And often they are, but it’s kind of loaded in there.
28:18 Paul Matzko: Yeah, it is loaded in there. Okay, so I see what you’re saying too about the dramatization; it’s dabbling in the very kind of fearmongering it complains about. What’s the response been? I don’t know if you’ve picked up on the tech-wonk chatter about The Social Dilemma. It feels like it’s been a lot more critical than the kind of ordinary consumer viewpoint.
28:49 Matthew Feeney: Yeah, I’m sorry that… I don’t know how much I have to offer, ’cause I haven’t seen the documentary [chuckle]
28:54 Paul Matzko: Oh okay [chuckle]
28:54 Matthew Feeney: Maybe in large part because I saw a lot of the commentary on it, and it seems pretty…
[laughter]
29:00 Matthew Feeney: I also saw that Shoshana Zuboff, who wrote the best-selling book The Age of Surveillance Capitalism, seemed to feature quite prominently. I’ve been trying to write a review of that book, but I make it about 10 pages in and I have 50 pages’ worth of notes, and it’s become a bit of a Sisyphean struggle to get the thing done. What I will say is, it seems like a documentary I would have a lot of issues with. But I think the commentary is honestly revealing nonetheless. The fact that Netflix viewed this as the kind of content to really push, and that it’s being watched by a lot of people, is telling. It goes back to the paradox we talked about earlier: people seem to use Amazon Prime a lot, but there seems to be some discontent about it as well. People seem very keen to use a lot of these social media platforms, but apparently under the hood a lot of people also have reservations. But we should, I think, remember, and this might be difficult for some people to accept, that the ideal amount of radicalisation on the internet, with platforms that big, is not going to be zero.
30:11 Matthew Feeney: Now, what it actually means to be radical, and what these radical views entail, is important to define, because not all conspiracy theories are created equal, right? If you believe, for example, that governments around the world are conspiring to make you believe that the earth is round when in fact it’s flat, and someone says this to you at a dinner party or a happy hour, you view them as a nonsensical person, but relatively harmless. QAnon, meanwhile, has led to actual crimes, right? And people have suffered actual harm. So that’s, you know…
30:48 Will Duffield: Someone tried to run a train into the hospital ship in Los Angeles.
30:52 Matthew Feeney: [laughter] It’s great.
30:54 Will Duffield: And he drove a train at it.
[laughter]
30:55 Matthew Feeney: Well, right. And so there’s that. I believe there have been reports of kidnappings and other things, right? So all of that is very disturbing to hear about, of course, but again, it goes back to the problem at scale: you have not just the content that is directly associated with this sort of thing, but then what about reporting on that content? Or what about people who are within the movement and critiquing it internally? All of that is rather difficult to sort out.
31:27 Paul Matzko: It’s a good point, and it’s also a reminder that we’ve lowered the costs of organization, because we’re talking about social movement organization; QAnon is a social movement, a pseudo-religious social movement. Well, so too is BLM, right? All of these contemporary social movements have had the cost of access, the cost of organizing, lowered by new social media platforms. And you can’t have only the good and not get the bad. It’s the kind of price we pay: the freedom to organize for causes we agree with also provides the freedom for people to organize for causes we don’t agree with, and I don’t think we can pick and choose. We can ask the platforms to do a better job of deciding; it’d be potentially fine for a platform to say, “Hey, we’re fine with BLM content, we’re not fine with QAnon content,” and then they can be rewarded or punished by their consumers for taking that editorial stance, right? But the same system, the same web, makes both possible; you really can’t tweak the entire web without hurting both, if you will.
32:42 Matthew Feeney: I would just mention that I think portraying a lot of the biggest players in this space as negligent is a little unfair. Look, I have my criticisms of certain content moderation decisions, for sure, but YouTube, which is owned by Google, has taken advantage of Jigsaw, this project within Alphabet, which seeks to identify people who are going down these rabbit holes of radicalisation. It was initially designed for Islamic extremism but has more recently been used on white supremacist content. So I think we have to be careful in these discussions to be accurate in describing what these companies are actually doing, ‘cause I think describing them as completely negligent is unfair.
33:32 Will Duffield: I found that dynamic you were describing, tools used to combat Islamic radicalization now turned on domestic ethno-nationalist extremists, very interesting, because it’s such a mirror of what we’ve seen in the broader war on terror, in that whatever security mechanisms are initially developed, deployed, and seen as acceptable in a foreign context are eventually seen as at least useful in the domestic one and are brought home. And seeing that plot broken up yesterday, kind of bumbling fools attempting to kidnap Governor Whitmer of Michigan, it echoes many of the plots you saw from kind of nihilistic young Muslims looking for meaning through violence, egged on and followed in some ways by law enforcement back in the early aughts.
34:35 Paul Matzko: This is something that Evelyn Douek, she presented a paper at the Knight First Amendment Institute’s conference at Columbia University last fall, where she talked about the tools that big tech companies have developed in cooperation with national governments in New Zealand and Australia, and how their focus is turning from Islamic extremism to domestic terrorism. So she’s someone to look up on Twitter if you’re interested in this topic.
35:06 Paul Matzko: The other thing I’m reminded of is that this is a very old question: institutions that are created for the suppression of one form of social movement activism, or of speech, will often be swung round to bear on very different groups. So it doesn’t have to just be an online thing. In US history, take the House Un-American Activities Committee, HUAC, which most people remember as an anti-communist weapon used in the 1950s to go after suspected, but mostly not, communist spies infiltrating the federal government. Well, it was actually instituted in the 1930s. Its predecessor was the Dies Committee, and it went after fascists, Silver Shirts, and American far-right rather than far-left figures, and then it got swung round to bear on the opposite side of the political spectrum.
36:08 Paul Matzko: So these tools have a way of changing targets, sometimes in a radical way, and then they have unintended consequences and, I guess, collateral damage. Because it can be hard to tell, especially at scale… To use the historical metaphor: is someone simply sympathetic to communist beliefs on their personal time, or are they spying for the Soviet Union? Those are two very different things that were very hard to tell apart from the outside in the 1950s. To put that in a contemporary context: is someone advancing white supremacy, or are they just being, I don’t know, a conservative who is supportive of the Kenosha sheriff? You know what I’m saying? Whatever the specific example is, at scale it’s really hard to tell the difference, and so when the tool gets swung round to bear on a new target, it catches folks who aren’t necessarily problematic, if you will.
37:16 Will Duffield: At scale, it’s impossible for moderators to understand the ins and outs and norms of every subculture; there are too many of them. Just looking at English-language political subcultures: you can’t hire an expert for every single one of them, and they come and go so frequently. How would you manage that?
37:37 Paul Matzko: Let alone if it’s Thailand and you’re dealing with the ins and outs of ethnic tribal struggles, etcetera, right?
37:43 Will Duffield: Yeah, and the challenge is coming up with universal rules that work despite that lack of knowledge, which is really hard.
37:53 Paul Matzko: Speaking of things that are really hard: Matthew, why don’t you update us on where we’re at with the Section 230 shenanigans? We’ve mentioned it a few times here, but I think we’re hitting a bit of a fever pitch right now. For those who follow the President’s Twitter account, he said it in all caps: we should repeal Section 230. You don’t get more clear than that. So, yeah, where are we at with Section 230 right now?
38:21 Matthew Feeney: Well, yeah, this could have been a podcast unto itself, so I’ll try to summarize [chuckle] a year of Section 230. Look, unless you’ve been hiding under a rock for the last year, you already know a little bit about Section 230, but basically it’s the law that shields websites from liability for the vast majority of content posted by their users. Now, there has been a plethora of suggested amendments from both sides of the political aisle. Earlier this year, the President signed an executive order directing the NTIA to ask the FCC to look into this, and it’s just been a bit of a mess, mostly because the politics of this seem to be very disjointed.
39:14 Matthew Feeney: Getting Democrats and Republicans to agree on Section 230 reform is difficult, but this is actually a rare point of agreement between Joe Biden and Donald Trump, albeit for different reasons, obviously. At the moment, I would say that some kind of Section 230 amendment is unlikely, only because I can’t see Republicans and Democrats in the Senate and the House coming together to agree on what should be done. But the problem is that it continues to be good political fuel, so I imagine we’ll keep seeing the regular complaints about big tech tied in with Section 230; at the moment, though, it seems like it won’t be going anywhere. The one exception I should mention is an attempt by a bipartisan group of senators, led by Senator Graham, to condition Section 230 protections on certain activities aimed at targeting child sexual abuse material. This was introduced as the so-called EARN IT Act. It hasn’t passed yet, but it’s worth remembering that the most recent amendment to Section 230 was tied to attempts to tackle human trafficking. I think if you can hook a Section 230 amendment to something like combating this awful material, then it has a better chance. But given where we are politically, I don’t see that happening before a new Congress is sworn in. That’s my take on the lay of the land at the moment.
41:00 Will Duffield: I do find some of the quite recent, really unlikely to pass, sort of extreme proposed modifications interesting in the worldview they reveal. I’m thinking particularly about the recent Don’t Push My Buttons Act, from Senator Kennedy on the Senate side and Gosar and Gabbard in the House, which essentially prohibits, by removing 230 protections, any automated delivery of content, any push or provision of content to a user that the user does not explicitly select for themselves, click on. And it feels like the Butlerian Jihad, or that “Don’t speak to me, computer” meme, as legislation. Obviously, something like that is unworkable and simply could not pass. But it bespeaks a real concern about the extent to which computers and algorithms are understood, rightly or wrongly, to be making decisions for us today, about what we see, about who we communicate with, and it takes as blunt a hammer as you could possibly imagine to that. It is novel in its breadth and seeming ideological bent.
42:45 Paul Matzko: Yeah. We had Max Sklar come on to talk about bots, ‘cause he’s a bot programmer, and to talk about artificial intelligence. And it was a reminder that if you say “bot” to an ordinary internet consumer, it has a negative connotation for them. They think of Russian election interference; they have negative connotations. And yet bots are everywhere already, we’re gonna have more of them, and they’re not all bad. So part of the problem is that as we create these new mechanisms on the internet, they’re below the notice of ordinary consumers, and people miss the extent to which their experience of the internet relies on these largely invisible phenomena, whether it’s bots or algorithms. The only time they ever rise to the level of consciousness is in a negative case: this is a bad thing that something needs to be done about.
43:47 Will Duffield: The language we use to describe them matters. “Bot” sounds like an alien agency, as though it’s making decisions of its own in a non-human fashion. “Tweet scheduler,” however, reminds you that there’s a human in the loop, and yet any kind of tweet scheduler is going to count as a bot by most definitions of the term.
44:16 Paul Matzko: Yeah. And I agree with you, Matthew, that it’s unlikely we’ll get any kind of serious Section 230 reform in the near future, but I do think what’s disconcerting to me… We’ve been talking about Section 230 on Building Tomorrow since some of the earliest episodes, and back then Section 230 was notable because of how obscure it was. I don’t think the ordinary internet consumer had ever even heard of this law that shaped the modern internet. Now more and more people have, whether because of things like The Social Dilemma or because of all the tweeting by conservative politicians, or by Joe Biden, whoever. It’s now creeping into the public consciousness and being associated with negative things, and so the Overton window has shifted on Section 230 reform.
45:11 Paul Matzko: It used to be that only real outliers like Josh Hawley would propose a government agency that would condition the Section 230 liability shield on non-discrimination. But now more and more Congress people are getting behind some sort of government agency or government role in Section 230. So what I’m trying to say is that the Overton window, for politicians and policy wonks and ordinary consumers alike, has started to shift over the last year or two, and it feels like we’re laying the bedrock for a potential future reform of Section 230. We’re laying the public opinion groundwork for that.
45:53 Matthew Feeney: Yeah, maybe so. I guess watch this space.
45:55 Paul Matzko: That’s right.
45:58 Matthew Feeney: We can revisit in a future episode. My only comment, I suppose, and I’ll say it with some comfort, is that neither side is gonna get what they want out of this. Remember, Trump and Biden agree on getting rid of 230, but Trump because he views these social media companies as taking part in an anti-conservative crusade, and Joe Biden and his allies because they view these companies as irresponsible, allowing too many crazy right-wingers to run rampant on the platform and also allowing for too much election interference. Show me a bill that addresses both of these concerns that could pass the House and the Senate. Like I said earlier, once you hook it to something we can all agree is awful, like human trafficking or child sexual abuse imagery, then maybe we’re getting somewhere. But I do think lawmakers need to consider the actual implications of substantially reforming a law like this, because the risks are significant.
47:04 Paul Matzko: Need to, but will they? That’s why I asked the question. Well, gentlemen, thank you so much for your time, appreciate you coming on the show. And for our listeners, until next time, be well.
[music]
47:24 Paul Matzko: This episode of Building Tomorrow was produced by Landry Ayres for libertarianism.org. If you’d like to learn more about libertarianism, check out our online encyclopedia or subscribe to one of our half dozen podcasts.