E406

Neil Chilson joins the podcast to dissect all of the backlash against Big Tech.

Hosts
Trevor Burrus
Research Fellow, Constitutional Studies
Aaron Ross Powell
Director and Editor
Guests

Neil Chilson is a senior research fellow for technology and innovation at the Charles Koch Institute, where he spearheads the Institute's efforts to foster an environment that encourages innovation and the individual and societal progress it makes possible.

Show Notes:

As governments and tech platforms seek to address the concerns driving the “techlash,” Neil talks about the lessons that provide guidance on how to avoid the worst pitfalls that could adversely affect efforts to improve the human condition online.

What is “legibility”? What concerns drive the “techlash” and what should platforms and governments do to address them?

Transcript

[music]

0:00:07.2 Aaron Ross Powell: Welcome to Free Thoughts. I’m Aaron Powell.

0:00:09.4 Trevor Burrus: And I’m Trevor Burrus.

0:00:11.0 Aaron Ross Powell: Our guest today is Neil Chilson. He's a senior research fellow for technology and innovation at the Charles Koch Institute, and prior to joining CKI, he was the Federal Trade Commission's chief technologist. Welcome to the show, Neil.

0:00:23.5 Neil Chilson: Great to be here.

0:00:25.5 Aaron Ross Powell: Why are we suddenly seeing so much of a backlash against big tech?

0:00:29.2 Neil Chilson: Well, I think there's a bunch of different reasons, but I think the most fundamental one is that internet technologies have really increased the legibility of the world around us. They've increased the ability to see what people are doing in more and more aspects of their lives, and this is a very powerful tool, but it also raises a lot of concerns because it changes a lot of things.

0:00:56.1 Trevor Burrus: You used the word legibility, which is not your coinage, so why did you use that word?

0:01:00.7 Neil Chilson: Yeah, so legibility is a word whose usage I think I borrowed from James C. Scott, who wrote a book in 1998 called Seeing Like a State, which is a case study of grand government plans to change and improve society and how they often went wrong. And he does a lot of historical case studies, some of which are sort of small in scale and effect, and some of which are enormous and had hugely tragic consequences. His term legibility, as he uses it, is about the need for a government that's going to embark on ambitious plans to understand the territory that it's regulating, so it has a purpose for doing something, and it needs to gather data about that purpose. And legibility is the word that James C. Scott uses to describe how, when government increases the information that it collects, it makes that system more legible. There are lots of examples I could get into, but I would say one of the main things he emphasizes is that legibility is directed towards a single purpose, and so it tends to ignore every other type of information in the system that isn't dedicated to that purpose.

0:02:31.7 Neil Chilson: So if you're trying to do a census, and this is a historic problem, you're trying to do a country-wide census and people don't have last names, you might have to force people to choose a last name. That imposes some legibility and ignores many other things, and it also wasn't necessary in a small town to have a last name 'cause people knew who you were. So that's just one small example of legibility and what James C. Scott means by it.

0:03:02.2 Trevor Burrus: One of my favorite examples he gives is about local knowledge. So if you grew up in Jonesville, and the next town over is Smithberg, the road that goes from Jonesville to Smithberg is called The Smithberg Road in Jonesville. And then in Smithberg it's called The Jonesville Road, which does not suit the purposes of a state that has, as you pointed out, some goal in mind to make it County Road 151 or something, which probably has to do with delineating property and then figuring out how to tax it. But there are a lot of examples of this where certain things used to be local: you used to measure rope by the distance between your thumb and your elbow, there were debates over what a bushel was, and all of that was functional on a local level, but states have different purposes in mind, correct?

0:03:53.3 Neil Chilson: Yeah, I love that example. And the key thing about that is that if you're trying to make a map that's generally usable to people outside of those towns, you have to pick one of those names, or maybe a different name. But if you're in those towns and you know the road, it's actually more useful to have the simple name about the destination of that road. So you can see that legibility can be very subjective: that label was completely understandable to the people who lived in those towns, but didn't serve the purpose that would be needed by a general mapmaker.

0:04:30.3 Aaron Ross Powell: It seems like there are maybe two ways to think about making legible, and I wonder if both apply or if one plays a larger role in this. The first is what we just described: you have a piece of information that exists about this road, namely what it's called, and it's public knowledge, but it's not very clear because, in this case, the road has two different names, and so you pick a third name. There isn't really more information available to the government; the information that was there is just less confusing. And this would be kind of like making legible in the sense that if I pick up a book that's in Russian and I can't read Russian, I learn Russian and read it. But it also seems like there's potentially making us legible in the way of accessing more information about us that wasn't available or wasn't widely available, and this would be like learning more about me, not just understanding what I already put out there. Is there a distinction there? And do those play a role?

0:05:35.2 Neil Chilson: I do think there is a distinction. You could split this up lots of different ways, but one of the examples I use in my paper is the microscope, which opened up a whole world of information that people had never seen before. Even though physically that information was there, it was available, the signals of light coming off of microscopic cells existed prior to the microscope, the microscope allowed us to see them, and I might call that discovered legibility. And then the other type, the type that has to do with roads, would be imposed legibility: you're putting a label on something, or you're simplifying something into a box, you're categorizing it, in order to understand it better for a particular purpose. I would call that imposed legibility, and I think both of them are pretty related and they can overlap. Scott's book emphasizes imposed legibility because he's talking about what governments do when they're trying to pursue different purposes, but his book has a flavor of describing discovered legibility as well.

0:07:00.4 Neil Chilson: Part of what he calls high modernism, which is this relatively modern idea that government should do much more than just collect taxes, protect from invaders, and stop rebellions, which is what government did for most of history, this idea that government's job is to make society better, led in the late 1800s and early 1900s to high modernism, and that was a direct result of the great successes in science. People took the scientific model of discovered legibility and said, hey, maybe we can do this for society as well, maybe we can understand the underlying physics of the world and design solutions that will work in a sort of scientific way. Now, Scott is pretty critical of this approach, as am I, and he points out that it's not a scientific approach, it just uses the language of science; that it is in many ways a very strong belief that we need to remake the world and that we can, that we have the capability to understand enough about these complex systems to redesign them from scratch and make them better. And as Scott points out over and over, that had really terrible consequences.

0:08:18.4 Trevor Burrus: Now, your paper is called Seeing Platforms Like a State, in terms of internet platforms, and it struck me as I was re-reading part of Seeing Like a State this morning: in the very beginning, he writes that the main reason he went toward this way of thinking was studying the way that governments all over the world and throughout human history have had a huge problem with itinerant people, gypsies, for example, people who just don't settle down or keep moving, and it just sort of frustrates them, and that's how he got to thinking about this. And if you're applying this to platforms broadly construed, there seems to be an analogy there of what's frustrating people on platforms, like what's making them… The state sees people who are not legible in a proper way, maybe they're anonymous or something like this. So, is there something that struck you about platforms that first made you start thinking to apply James C. Scott's work to this?

0:09:16.3 Neil Chilson: Well, I think a lot of it grew out of my work on privacy, and there, what the internet has done is made the disparity between those two things so much greater. What I mean is, on the internet, you can be extremely legible, or at least your activities are extremely legible by design to the other computers that you're interacting with. So all the bits that are flowing out of my computer to your computers right now, your computers can see those, and there's no new discovery that needs to happen. So if you impose anonymous communications on top of that very legible system, the disparity is really distinct. In the normal world, we have lots of places where you're not really anonymous, but you sort of are, like walking around a public square where people don't know who you are. And so you're sort of anonymous, but if for some reason people need to know who you are, if you're walking through the middle of the street or you get hit by a car or something and people need to figure out who you are, they can.

0:10:26.9 Neil Chilson: And so that blend is really different in the real world, or in the offline world, than it is in the online world, and I think that's what makes people uncomfortable. You can have situations in which people didn't realize how legible their communications were, and then on the flip side, you can have people who are extremely anonymous in a way that's not really achievable in the offline world, and that much bigger range, I think, makes people really uncomfortable because the analogs to real life are not that strong. We don't know how to do that. We've been dealing with privacy in the offline world forever; we wear clothes for a very particular reason that has to do with covering up parts of us, and that's much harder to do online in many ways, and yet when you do it, you can cover yourself up completely. So I just think we don't have good mental models for that yet, and society hasn't really totally grasped the full implications of that, and I think that's driving a lot of the concern.

0:11:28.1 Aaron Ross Powell: The government obviously has a clear incentive to make its citizens and non-citizens legible, because it wants to be able to tax us, it wants to be able to regulate, and so on. What's the incentive of the platforms to try to make us more legible than just whatever we choose to post on them?

0:11:49.9 Neil Chilson: Well, there's a bunch of different ones. Part of it is, like I said, legibility is built into the underlying technology, so the packets are there. You can build applications on top of it that might hide some of the information, like the location of the packets, or where they came from, or the identity, but it's an extremely legible environment, so there doesn't even need to be a motivation to make it more legible. Now, what the platforms are using this for varies quite a lot. Often they're trying to serve content that is interesting and engaging to their users, and to do that they're paying attention to what you liked and engaged with in the past, so that's a big one. Advertising is another: part of the value of many of these platforms is that they connect consumers with products that nominally the consumers would be interested in, and to do that, you need to know something about what the consumers are interested in, so they have that incentive as well. Those are probably two of the strongest motivating ones for the types of social media networks that we're talking about.

0:12:56.1 Neil Chilson: There might be lots of other reasons you might wanna do that. If you're trying to prevent spam or hacking, or if you're just trying to have a more secure environment, you might wanna understand and observe patterns online; if you're just trying to design a system that's effective, that doesn't buffer too much, that serves people efficiently, you might wanna understand how information is flowing. So there are lots of reasons to try to understand it, but for the big platforms, I think the uses that most concern people are the ones built around collection of people's information specifically.

0:13:28.9 Trevor Burrus: So, if you're making the positive point that James C. Scott's work has some interesting applications to internet platforms and this ongoing debate, what is the normative point? What do we get normatively out of it, if we see platforms like a state?

0:13:46.2 Neil Chilson: Well, I think there are at least two lessons. The first is that Scott talks about the problems with imposing legibility, or imposing visibility, on complex systems, and it's pretty clear that because they involve the interactions of millions and millions, billions, of people, these platforms are complex systems. That has lessons both for governments who want to regulate these platforms, but I think it also has some pretty strong lessons for the companies themselves, who are trying to figure out how to manage the platforms and serve customers while also complying with local laws and dealing with political pressures and all of this. And so for the big normative takeaways, James C. Scott has four lessons about legibility. He characterizes them as essentially the four conditions under which things can go really horribly, but you can negate those statements and come up with four tactics you might use to keep things from going horribly when you're trying to do something. The first is to reduce simplistic legibility: try not to over-simplify things, or don't use legibility-imposing mechanisms. The second is to temper these big grand schemes: make incremental changes rather than trying to erase all of history and redo something.

0:15:21.5 Neil Chilson: The third, I think, is to reduce the power of the central authority to impose legibility, and the fourth is to empower the citizens, or the participants in the system, to push back, to have feedback mechanisms of some kind. And I think all of those are tools that the platforms, as well as the governments who are regulating them, could look to, methods that the companies and the governments could use to reduce some of the harms that might come from imposing legibility in an online space.

0:16:00.9 Aaron Ross Powell: As I was reading your paper, I was struck by… You talk a fair amount about how people are worried or angry or fearful about increasing legibility when it comes to online platforms, and by that we mean mostly Facebook, Twitter, Google, and so on. But obviously, this metaphor began with an assessment of the state, and the state has strong interests in making us legible, and it's odd that we have this seemingly bipartisan consensus about the problems of legibility when it comes to Facebook learning a ton about me in order to show me not necessarily more relevant, but more engaging posts, and to also put ads in front of me. Yet you don't see the same sort of uproar, except in weird civil libertarian corners, about the government doing precisely the same thing, and in fact doing it with a lot of the same technologies that these guys are using. And it strikes me as obvious that the government making us legible is far more of a threat and creates far more harms than Amazon or Google making me legible.

0:17:15.9 Neil Chilson: Yeah, I think that's right. Obviously, with governments, there are the penalties that they can impose on you and the constraints that are on them, and that's even setting aside the US, where we do have some constitutional constraints on what government can do to individuals; if you go someplace else that's more authoritarian, I think the starkness of the divide is clear between Facebook showing me an ad I don't like, or even the misinformation concerns and the privacy concerns, compared to a government using a platform or information collection techniques in order to censor political speech or to suppress dissident ideas. The threats there are on a much bigger scale. And one of the concerns people often raise is, if we make it easy or even desirable for technological platforms to implement political pressures, or to bend to political pressures in the US, and they're facing that from both the right and the left here, what are we telling other countries, whose governments may not have the same concerns with individual liberty that we have, about what their governments should force platforms to do? We may be giving them a justification: if we in the US are saying, hey, government should force these platforms to do x, y, and z, you can just see the huge danger of putting these types of surveillance tools in the back pockets of authoritarian regimes. And I really worry about that.

0:19:00.6 Aaron Ross Powell: It's interesting that we're recording this a day after the breaking news about the NSO Group, an Israeli, basically, malware company that sells software that you can get onto people's phones and then monitor everything that they're doing on them. And these dangers we talk about, it was what, 50,000 phone numbers that were released of who we think were targets of this, and it's a lot of journalists and activists in other countries who in many cases were assassinated within days of having the software installed on their phones. And that seems like another reason to potentially worry about these platforms making us legible: we don't necessarily voluntarily set out to give all of the information about ourselves to the government, like we'll fill out the bare minimum of DMV forms, but we're not excited to. But we happily give an extraordinary amount of information to Google and Facebook, or we put it all on our phones, which run operating systems from Google or Apple, and if that stuff becomes legible, either because Apple gets subpoenaed or because the NSO Group puts malware on there, then that's a huge trove of new information that the state would have had a harder time talking us into giving away.

0:20:20.3 Neil Chilson: Yeah. I think technology has been trending in that direction for a long time at this point. The ability of law enforcement to require cell phone companies to give records about where somebody has been, that's a gold mine for law enforcement, and in a way that has shifted the balance quite far towards tracking. And it's why, when I hear people advocating that the government needs more access to encrypted protocols, etcetera, I feel like they don't acknowledge the quite significant shift that has given government surveillance more power over time. I do think it's really important to recognize the power of these tools, in a commercial setting and in a communication setting, for human prosperity; I do think they have done a lot. But the power of them is part of the reason why how government uses them needs to come under very particular scrutiny. The constraints that are on Google or Facebook or the hundreds of thousands of other companies that are online collecting information about what their users do are the types of constraints that are on market actors.

0:21:46.7 Neil Chilson: Not only are there the local laws already, but you can just see the public campaigns against some of these various uses, the feedback that happens when something goes wrong, and often that feedback gains support on the very platforms that people are criticizing. So they have constraints in a way that a law enforcement agency, which may never even make public the information that it's using, does not. And so, and we don't have to get into Fourth Amendment law at all, but something like the third-party doctrine, which lets the government get easier access to information that I've shared with a third party, just seems extremely obsolete at this point, and has really shifted the balance towards government uses of this data, almost subsidizing it in a sense, in a way that I think is really concerning for citizen privacy.

0:22:46.2 Trevor Burrus: So back to Scott's book. He has a lot of examples, some of which he focuses on more than others, of what happens when the government comes in to pursue legibility, and what fails when that happens, and you mentioned some of the normative points that he makes. As I was reading your essay, I was trying to make some sort of analogy, like, is Reddit analogous to a town or something? So is the analogy here that there's some town with a bunch of informal but traditional property rights, and some government comes in and wants to tax the property, and therefore needs to know where it is, so it starts imposing rules onto the people in order to figure out what's going on for its purposes, and, as we said, it's directed toward a singular purpose usually, and all these problems develop? And is that kind of what you're talking about if we think of Reddit, that Reddit is like a town, or maybe a subreddit, or maybe Reddit's a bad example and there's something you would prefer, and that if the government came in, or even Reddit came in, and sort of tried to mess with things, this is where we should be thinking about this analogy in these kinds of hypothetical situations? Is that accurate?

0:23:54.7 Neil Chilson: Yeah, that's exactly right. I think Reddit is a really interesting example, and I'll just do a sort of compare and contrast between something like Reddit and Facebook as far as how they deal with some of these challenges. So Reddit in some ways I would consider to be like a lot of little small towns, right? Subreddits have communities where they set their own moderation rules, and you have local consensus-building mechanisms about what the group is for and what kind of conversations can go on in there. And I know we focus a lot on misinformation and other things, but it doesn't even have to be issues like that; it could be just people in a knitting subreddit saying, "Hey, we talk about knitting here, we don't talk about other things," and that makes a group more useful to people who are interested in knitting, in a way that if every group were a free-for-all, it wouldn't be nearly as useful. Those types of social norms can pop up, and I think what's pretty clear if you spend any time exploring across the different subreddits, and I think Reddit has grappled with this in a way that is much more nuanced in some ways than some of the bigger platforms, is that one-size-fits-all rules just don't work.

0:25:15.5 Neil Chilson: The communities are so radically different. Something that would be extremely offensive in one context to one group might be understood in a completely different frame, or might be more or less relevant than it would be in another group. And so I think Reddit is a model of a decentralized governance platform, a governance system, that tries to deal with these problems in a way that the general feed on Facebook or the general Twitter feed does not. That's a community of everybody all at once, and Twitter is trying to impose one set of rules across that entire community, for example. Something like Facebook groups, obviously, is a mixed version of those two things, but they're not quite as nuanced, I think, or there aren't quite as many different types of regulatory approaches; the tools that Facebook provides moderators are just more limited than they are on Reddit, for example. And so I think some of the biggest platforms could learn a lot from trying to push decision-making down further to the communities, and I know some of them are thinking about that in ways that are even more aggressive than what Reddit might do.

0:26:41.0 Neil Chilson: So, I know Mark Zuckerberg had talked, for example, about shifting a lot of the business model to encrypted group communications, things like Facebook Messenger or some of the WeChat-type models, where you have groups that are not visible to the outside world, and in many cases not even visible to the platforms, because they're encrypted directly between the users. That's one of the examples in the paper of a sort of imposed illegibility that might relieve the companies of the responsibility, or I should say the political pressure, to try to do something about what people are communicating online. They can say, "Hey, we just can't access this, so quit asking us for it." That might be one way they look to do that. But I think they're running into lots of different barriers there as well.

0:27:40.3 Trevor Burrus: Yeah, but is that gonna get what people want out of Facebook? That kinda seems to be a meta-point to your argument, where you have to look at what the goal is, because some people want Facebook to crack down on misinformation and make sure that it's playing the political game in some sort of neutral arbiter kind of way, and some people don't want that at all. So does that mean that each of those different opinions would lead to a different type of legibility, or maybe illegibility, depending on what they're trying to do? And Reddit has a completely different goal, not even related to any of these other goals. So we have to look at the goal to understand the legibility, and then the problems are gonna arise from the different goals that are being imposed or discussed for these platforms.

0:28:26.6 Neil Chilson: Yeah, I think that's absolutely right. You can see this, I think, in the fights over content moderation and what everybody wants. People often say, "Well, just go build your own platform," but people don't just want a platform that has their ideas on it, they want a platform that has everybody on it, so they can have the biggest audience possible. And so those two goals are in direct conflict: if you're gonna have a broad range of ideas on a platform, you're gonna have to have a broad range of people, and if you want a broad audience, you're very likely to be bringing in a broad range of ideas. And so I think those two purposes are in tension, where people are trying to get the benefits of having an enormous platform for them to speak on, but also want to impose some controls on what other people can say on it. These are fights that are really old in media technology, actually, that go way back before social media, all the way back to the printing press. When you have new ways that people can communicate, it is very disruptive, and politicians and people who want to shape society have long realized that being able to influence those conduits for thought and communication is a key way to control society, or to affect society at least.

0:29:53.9 Neil Chilson: And James C. Scott doesn't talk a ton about those types of media controls in there; many of the impositions that he describes are actually much more about physical architecture, things like purpose-built towns like Brasília, and very dramatic for that very reason, but I think many of the same principles apply to trying to control information.

0:30:18.8 Aaron Ross Powell: Our old colleague, Adam Bates, has remarked on that too. I don't think it's just that they want a big audience; the trolls want to be where the non-trolls are. So this shows up every time a kind of Trumpist social media thing starts up: it peters out because the Trumpists don't wanna be where only Trumpists are, even if it's a huge audience, they wanna be where the Libs are so that they can upset the Libs with what they're saying. And I think that's a really strong dynamic, it's the interaction too. I wonder if there's a… The internet began as basically a series of open protocols. So it's TCP/IP that the information is going over, but then things were built on top of that, whether it's an email protocol or whatever; these are just open protocols, and then usage popped up in the form of these small towns, where you had your Usenet provider, and they gave you access to certain Usenet groups and might have certain rules and so on, and there was a lot of ad hoc common law rule formation in these places. And then we moved… AOL didn't win, but the AOL model seems to have won out of…

0:31:46.8 Aaron Ross Powell: Now the internet is a handful of very large platforms that all of the data flows through, and we use them, and that mirrors this argument about federalism versus centralization in government, with the libertarians pushing for getting back to less centralization, more on the ground, whether it's hierarchy in information or local knowledge, lawmaking, and so on. Is that the path, then, that we should potentially be looking to take, kind of turning the internet back into just a series of protocols that people can interact on as they see fit?

0:32:29.2 Neil Chilson: So the internet has moved through sort of waves in this. As you pointed out, AOL was very much a walled garden, but even before AOL, CompuServe and Prodigy were walled gardens that brought lots of people online, because they were safe, they were limited, they were easy to use; there was an incentive in that. Before that, using a Usenet group was not super intuitive, especially setting one up, and it was mostly used by pretty nerdy people, who tended to be the main people who were online, and that's true for a bunch of other early communications technologies. Things like CompuServe and Prodigy simplified that and brought it to a mass audience. AOL did that, but then the web sort of blew that up. Websites, anybody could put a website up, and that decentralized this a lot, and we still have that parallel web architecture where anybody can really set up a website for almost zero cost today. But then we have these big gateways, as you said, which are extremely useful because they have these network effects: lots of people are on them, you can get an audience very quickly. And so, like I said, those things come in waves, and I think what we've seen is that we'll continue to see that; for certain types of uses, these platforms will be very popular.

0:33:54.1 Neil Chilson: I think what we're seeing is the challenges that they face when they become extremely popular and they become influential, and I don't mean the platforms themselves, but people are able to influence ideas and influence dialogue through them. So there have been proposals to move how these platforms select what content shows up in your feed, for example, to a protocol approach rather than a single algorithm that the company provides. And when I say single algorithm, that's dramatically over-simplifying it; you can't really think of machine learning as a single algorithm, in many ways it's a bunch of custom algorithms for lots of different people. But a protocol approach, I think Mike Masnick has written on this, and I know Jack Dorsey has talked about it, and I think even set up a project to do something like that with Twitter. I don't know where that's gone or how far it's been pursued, but people are looking at that for the very limited purpose of the Facebook feed or the Twitter feed, for example. I don't think that solves all the problems; it certainly doesn't change the misinformation concerns, I don't think, really. All it would do is sort of hide, it might let people pick the bubbles that they want much more effectively, which I think could be a plus or a minus, right?

0:35:23.5 Neil Chilson: It could make it a more comfortable environment for people, but sometimes being uncomfortable is a big part of the value of engaging with other people. So I do think there are trade-offs there, and I expect that there will be innovation in that direction; you're already seeing platforms that are trying to do something like this. And I read an article about a gentleman who was trying to build a protocol stack that is essentially blockchain-based to do something similar, where you could move your social network, so all my connections on Facebook, I might be able to move them from one place to another, or have them stored on the blockchain. I think it raises a bunch of concerns, again, there's a trade-off there, but there is a lot of experimenting in that space, and I think we can expect to continue to see that, and that's a good thing. And I think we could see the wave move away from more centralized platforms and back towards decentralized use cases, as we've seen in the past in internet history with new technologies.

0:36:33.8 Trevor Burrus: Facebook unveiled the Oversight Board last year, and I'm still trying to figure out precisely what this thing might be doing. Our colleague, John Samples, is on that board. And you also write in the paper about case-by-case versus rule-based approaches to adjudication, and you seem to prefer case-by-case in some ways. So first, can you explain that distinction and the virtues of each? And is the Facebook Oversight Board, do you think, trying for a case-by-case kind of system rather than a rule-based system?

0:37:07.9 Neil Chilson: Yeah, so the distinction that I draw between case-by-case and rules is largely based on my experience having spent a lot of time doing telecommunications law with the FCC, and then having spent time at the FTC, and these are really different organizations. The FTC is an enforcement agency that looks at specific facts and applies very general principles, of unfairness, unfair methods of competition, and deception, to those facts to decide whether or not a company has broken the law, whereas the FCC writes industry-wide rules that govern how people will act going forward. And I've written about this in some other places, including a book I have coming out in September that includes some of these ideas from this paper. I think the big difference is the knowledge problem, the Hayekian knowledge problem, between those two systems, especially for complex systems: gathering enough information to let you anticipate a one-size-fits-all answer for a whole industry is very challenging, whereas if you have general principles, such as don't be deceptive, and you work those out through common-law-like, case-by-case approaches, the knowledge problem is reduced quite a lot, because you're looking at a very specific set of facts in front of you, and you're also looking at what the specific harms are, rather than trying to hypothesize ahead of time what harms your regulatory regime might be trying to stop.

0:38:48.8 Neil Chilson: That's not to say that rules aren't important, I think they are, but even when we have broad-based rules that we pass, they're often derived from best practices that industry or individuals have learned about in the past. And so in the privacy space, I think notice and consent is one of the best practices that's been around for a long time and is also often embodied in quite a lot of different laws. So I think the trade-offs are there. The one thing that's nice about prescriptive legislation is you can give some certainty, at least in the short run, to businesses or to consumers about what's going to happen in the space. It doesn't work so well in fast-moving technological spaces, because you can write one set of rules and then the technology changes, and it can be very confusing to figure out how those rules apply to the new technology. Also, you can try to anticipate harms ahead of time, before they happen. One of the big criticisms of case-by-case is that you have to wait till something bad happens and then you have to bring a case.

0:40:11.0 Neil Chilson: That can be overstated a little bit, because obviously precedent does have a constraining effect on how companies and individuals act in the future. Once a court says, "Hey, this type of practice is out of bounds," then presumably the liability risk goes up for a company if they don't pay attention to that. So that's what I mean by the case-by-case versus rule-based approach. The Facebook Oversight Board is, in some ways, doing that. Now, again, it's not a government entity, and Facebook is doing this voluntarily. The Oversight Board is separate from Facebook, but Facebook is saying, "Hey, we're gonna submit things, here are our promises about how we're gonna treat this organization and what we're gonna do with the information they give us," but ultimately, the Facebook Oversight Board can't force Facebook to do anything. What they are doing, I think, is digging in deep into various specific situations as a way to educate both Facebook and the broader community about what the challenges are in any one particular decision, and as a way for Facebook to learn how it might do things better in the future. So it does look a little bit like case-by-case, even though it's not precedent-setting in the same way that a court's case-by-case decision might be.

0:41:44.8 Aaron Ross Powell: Facebook has well over a billion users, and I think that's active users, people who are logging in daily or on a regular basis, which is a lot more people than exist in the United States or Europe or wherever else; it's international, and that international-ness is one of the huge benefits of these platforms. On Twitter, I can talk to people in India or Albania or wherever. But that would seem to cause problems for both legibility and either case-by-case or rule-making, in that, on the legibility side, what these platforms are trying to govern is interactions between people, and there are legal implications to that, but there are also social implications, because their business is getting people to use the site and want to use it and want to come back. And so it has to be socially satisfying to them, but that kind of stuff varies tremendously across countries and cultures, and so what is acceptable in the United States is not acceptable in Saudi Arabia, from both a legal standpoint and a social morality standpoint, just mores, I suppose. And so even if you learn what those things are, it doesn't necessarily help you solve this meta-problem.

0:43:18.7 Aaron Ross Powell: And then on the case-by-case side, it seems like a billion users posting who knows how many billion posts every day means that case-by-case is simply impossible on all cases; you have to pick and choose, and the Facebook board took months to make a decision on a single case in the Trump instance. So it doesn't scale to the size of the platforms, and you can have multiple competing interests, in the form of India saying, "You can't criticize the regime," while in the US that's like our God-given duty. And so how do you deal with… I mean, it's one thing when a state is trying to regulate the people within its geographical territory and it can just do this top-down, but top-down doesn't seem to make sense on a global level.

0:44:09.5 Neil Chilson: Yeah, it really doesn't. This is a problem and an opportunity, I would say. Obviously, you have competing ideas about… Like you pointed out, India doesn't want people to criticize the regime, but maybe a more classic example is Germany, which bans posting of Nazi memorabilia, even to sell or to collect or often even to criticize. Those types of restrictions, I think, can be hard to apply across many different communities, and that's why a decentralized approach does make a ton of sense. One of the big problems for these platforms is that they can have hundreds of thousands of users in a place where they don't even have a single content moderator who knows the language, right? Or even if they know the language, they don't know all of the social context to know when something is satire and when something isn't. And the more you try to centralize that… Facebook has something like 30,000 content moderators, and that's clearly, to your second point, not even close to enough to scale: you're always gonna have tons of content that goes under-enforced, as judged by their own community rules, and you're always gonna have tons of mistakes that are made just 'cause the volume is so high.

0:45:38.7 Neil Chilson: Part of that, I think, is one of the lessons from Seeing Like a State: this is just a situation where the only scalable response is something that is highly decentralized, and social norms are things like that. Now, I said there was an opportunity here as well. One of the opportunities is that platforms like Facebook that have a global reach could help export some of our pluralistic norms to other countries; there is the opportunity to do that, and I think that, largely, when you see these platforms pushing back against censorship from governments in other regimes, they're trying to do something like that. I don't think they get very much credit for that here in the US, and in fact, I think a lot of the attacks that have been leveled at some of these companies help those totalitarian regimes point to examples of how the US isn't living up to its own standards in some of these areas. So I do think there is an opportunity to build more tolerance and pluralism online. I don't think it's one that the companies have pursued very thoroughly, and it's one that I think is challenging, but I think the only answers are gonna be decentralized approaches to this.

0:47:12.7 Neil Chilson: I mean, I think that’s what’s happening now. We just don’t think of it that way, because it’s not Facebook making the decisions. A good chunk of the censorship on Twitter is like when people block somebody else, or you just stop engaging with somebody, you don’t follow them. These are the types of billions and billions of decisions that are being made every day that are changing the media environment online for each individual and overall, but they’re not centralized, and I think we only notice the ones that are centralized and those are the ones that raise the biggest problems as you pointed out.

[music]

0:47:58.7 Aaron Ross Powell: Thank you for listening. If you enjoy Free Thoughts, make sure to rate and review us in Apple Podcasts or in your favorite podcast app. Free Thoughts is produced by Landry Ayres. If you'd like to learn more about libertarianism, visit us on the web at www.libertarianism.org.