E262 -

Free Thoughts meets Building Tomorrow with special guests Matthew Feeney & Paul Matzko as we discuss whether or not to fear emerging tech.

Hosts
Trevor Burrus
Research Fellow, Constitutional Studies
Aaron Ross Powell
Director and Editor
Guests
Paul Matzko
Tech & Innovation Editor

Paul Matzko is a research fellow at the Cato Institute and former Technology and Innovation Editor at Libertarianism.org. He has a PhD in History from Pennsylvania State University and recently published a book with Oxford University Press titled The Radio Right: How a Band of Broadcasters Took on the Federal Government and Built the Modern Conservative Movement.

Matthew Feeney
Head of Technology and Innovation, Centre for Policy Studies

Matthew Feeney is head of technology and innovation at the Centre for Policy Studies. He was previously the director of Cato's Project on Emerging Technologies, where he worked on issues concerning the intersection of new technologies and civil liberties, and before that, he was assistant editor of Reason.com. Matthew is a dual British/American citizen and received both his BA and MA in philosophy from the University of Reading in England.

As with economic policy, it can be hard to judge the relative freedom of tech policy as a whole. Depending on the technology in question, the United States is still a massive hub of innovation. That is not to say there are no current regulations that may inhibit innovation in certain emerging tech sectors. Naturally, with new technology comes fear of the unknown, and we have to make sure we do not succumb to those fears. Giving in to them could limit our ability to develop these technologies to their fullest extent.

How do we address the federalism question when it comes to tech policy? When it comes to emerging tech, are we forced to imagine threats? Should we be concerned about the level of pervasive private surveillance? What threat do Amazon, Google, and Facebook pose since they centralize our data?

Transcript

[00:00:07] Aaron Ross Powell: Welcome to Free Thoughts, I’m Aaron Powell.

[00:00:09] Paul Matzko: I’m Paul Matzko, filling in for Trevor Burrus. I am the host of Libertarianism.org’s new podcast, Building Tomorrow.

[00:00:18] Aaron Ross Powell: Joining us today is Matthew Feeney. He is director of the Cato Institute’s new project on emerging technology. Welcome back to Free Thoughts, Matthew.

[00:00:26] Matthew Feeney: Thank you for having me.

[00:00:28] Aaron Ross Powell: What is the project on emerging technology?

[00:00:31] Matthew Feeney: Yes, the project on emerging technologies is Cato’s relatively new endeavor. I’m trying to count now, I think it began a couple of months ago, June or July, I should probably know that, but it’s relatively new. I am running it, it’s a project of one at the moment, but the goal of the project is to highlight the difficult policy areas that are raised by what we’re calling emerging technology. This is always a difficult thing to define, and of course, emerging tech is not just changing technologies, but new things arriving on the scene. What I’ve done is to try and highlight a couple of issues where I think Cato has a unique capability to highlight interesting libertarian policies associated with new tech. Some of the policy areas that we are focusing on include things like artificial intelligence, driverless cars, drones, data and privacy issues, and others. There are a lot of tech issues that have been around for a while, so I don’t think net neutrality is going anywhere any time soon, nor are the numerous antitrust issues associated with big tech companies. We’ve certainly, at Cato, had people write about these issues before, but this new project is confining itself to five specific areas, though I’m sure that as the project grows and develops, the list of issues we’ll be tackling will grow.

[00:02:06] Paul Matzko: How did you choose those five in particular?

[00:02:09] Matthew Feeney: Yes, the five were areas where I thought Cato didn’t have enough people writing about them, and also areas where I think libertarians have something new and interesting to contribute. For example, in the first couple of years at Cato, I did write about the sharing economy, and I also wrote a little bit about drones, body cameras, new tech issues, but my work on drones, for example, was just on law enforcement use of drones. Specifically, the concerns associated with drone surveillance, but I wasn’t writing at all really on the commercial use of drones. The exciting world of taco delivery drones and building inspection drones, and that’s a whole different policy area, really, compared to drone surveillance. That was an area where I thought we should really have someone who can direct a project that will commission work on those kinds of issues. Artificial intelligence is something that I think is very exciting but poses difficult questions to libertarians. Libertarian commentary on that space has not been nearly as robust and as loud as it could be. That’s another reason why I picked that, but basically the five, I think, fulfill the criteria of being focused on new and emerging tech, that libertarians have something interesting to say about, and that Cato is in a good position to tackle.

[00:03:44] Aaron Ross Powell: Can you give us an example of what you mean when you say libertarians have something new and interesting to talk about? Because a lot of tech policy in the past has taken the form of regulatory policy, and it’s: should this thing be regulated or not, typically, and then what form should it be regulated in. That tends to break down along the standard line, where you have the people who are opposed to regulation and you have people who are generally pro-regulation. What’s uniquely libertarian in the way that you’re approaching technology issues?

[00:04:19] Matthew Feeney: Yes, I don’t think that the way that we’re approaching the project is much different to how a lot of us here in the building approach our other policy areas. For me, it’s to tackle the issues raised by this tech by embracing a presumption of freedom and trying to minimize coercion. Number one, on the presumption of freedom, that we should act in a way that allows for innovation and entrepreneurship and make sure that people working in this space are in a position where they’re asking for forgiveness more often than they’re asking for permission. As far as minimizing coercion, this goes back to some of the work I discussed earlier. When we’re talking about data privacy and drones, we should be wary of some of the government use of the technology, making sure that exciting new technologies like drones can be used for really cool stuff like deliveries and other private applications, while also trying to make sure that the scary aspects of it, like surveillance, are being put under lock and key as much as possible. Something like artificial intelligence might be another good example: we want to make sure that people working in the space are free to innovate and to explore new ideas, but when it comes to government use of it, especially autonomous weapons and automated surveillance, we want to ensure that there are privacy protections and that those threats are kept in check.

[00:05:52] Paul Matzko: Emerging tech by its nature is still yet to come; it’s the “already, not yet.” It’s kind of here, but it’s still in prototype or developmental form. A lot of the potential benefits, as well as potential risks, are still in the future. As you’re trying to decide what should be regulated and what shouldn’t be regulated, or in what ways it should or should not be regulated, what’s your rule of thumb for trying to decide on something that hasn’t actually happened yet?

[00:06:24] Matthew Feeney: I suppose the libertarian response to this is comparatively straightforward, right? That we should proceed with caution when dealing with imaginary threats. Let’s think of a good example. Maybe only because I work on it in my own research, but I think it’s fair to say that in the coming decades we will see more and more government use of unmanned aerial surveillance tools. I think that’s a fair assumption. I also think it’s fair to say that that technology will improve as much as it proliferates, and so I wrote a paper saying, “Look, in preparation for this world, we should have the following policies in place.” What I’m very hesitant to do, and not that it should never be done, but we should be hesitant, I think, to develop new rules because of a new thing coming on to the block. Drones, for example, raise interesting privacy concerns, but it’s not clear that they’re necessarily unique in the way that a lot of people think they are. We don’t like the fact that drones could be used by people to snoop on us in our bedrooms, or to fly over our barbecues, and we don’t like that police could use them to do surveillance. We already have laws, peeping Tom laws, and we have a tort system that can handle an awful lot of these complaints. While the Supreme Court precedent on things like drone surveillance is not particularly satisfying, it is the case that states can and have gone above and beyond what the Supreme Court requires. Going forward, I think we should be hesitant to think, well, we need a driverless car policy that we’re going to write down, or we need a drone policy. We should think about the kinds of threats that come from these fields but resist the temptation to write a lot of regulation in anticipation of the proliferation of the technology.

[00:08:33] Aaron Ross Powell: Isn’t the problem that because these are emerging technologies, they’re not technologies that we, either as citizens or just ordinary people in our lives, or as lawmakers or legislators or regulators, have any experience with? We haven’t used them, we haven’t seen how they shake out, and so that notion of saying, “Well, we shouldn’t just imagine threats,” isn’t that what we’re kind of forced to do? One of the things that distinguishes emerging technologies now from emerging technologies in the past is the pace at which they can become all-pervasive, the pace at which they can spread. Either they are network technologies, so, in a matter of years, suddenly everyone is on Facebook, whereas the printing press took a lot longer to get books into everyone’s hands. Don’t we have to be anticipating threats, because with a lot of this stuff, if we don’t, and we don’t protect ourselves now, it might be too late?

[00:09:36] Matthew Feeney: Well, too late for what? This is the question. I think history has enough examples of people exaggerating threats that we can learn from. One of my favorite examples of this is the British 1865 Locomotive Act, which required that a vehicle not pulled by an animal, so a steam-powered locomotive, if it was on a road and towing something, had to have a man 60 yards ahead of it with a red flag, right? Because people were anticipating certain threats, that these new technologies are going to cause accidents. And so what we need is, it’s obvious, right? We need a man running ahead of these things with a red flag to alert people that this very dangerous thing is coming. I don’t know if that’s the right kind of approach to dealing with emerging technology issues, right? We can anticipate that with the emergence of the locomotive, there will be occasional accidents and some people will get hurt. The early years of flight, for example, are just full of people killing themselves in these new flying machines. And, it sounds a little cold-hearted to say, but the price of innovation for something like that is that mistakes get made and people might get hurt. That’s difficult, especially in today’s world where news travels so quickly, that the moment someone gets hit by a driverless car or a drone lands on someone’s head, everyone’s going to hear about it. I think people are thirsty for news, for bad news, unfortunately, and that’s something we’re always going to be fighting against.

[00:11:17] Paul Matzko: I actually would go on record right now saying I’m in favor of a law requiring that Elon Musk wave a red flag 60 feet in front of every driverless vehicle.

[00:11:26] Matthew Feeney: I guess he has more time on his hands these days. [laughter]

[00:11:30] Paul Matzko: So, I hear you talking about essentially assumption of risk, that when it comes to tech, we have a long history of people overrating or exaggerating fears of the downsides of tech and having a harder time imagining the beneficial applications. And so a light-touch regulatory policy wedded with a general cultural sense of, “Hey, if you want to experiment with this, as long as you limit the externalities that damage other people, go for it.” Is that kind of the attitude you bring to stuff like unmanned vehicles and the like?

[00:12:05] Matthew Feeney: I think that the barrier for government intervention in this space should be difficult to overcome, right? A very high risk of death or serious injury is basically where I would set it. You can maybe argue for some kind of regulation. And again, we’re sitting in the Cato Institute, right? Our approach to regulation here isn’t unique to emerging technology. I think libertarians across the board have a light-touch approach, and I feel like you can have that approach while accepting that there are risks. The problem, of course, is that with a lot of this stuff, an argument can be made that innovators and entrepreneurs might be hesitant to start doing a lot of this work if they feel like they might get in trouble, or they want to wait until there is a safe regulatory space. Amazon decided to test delivery drones in England because they knew that the FAA had not cleared delivery drone testing here. I can understand why Amazon didn’t say, “Yes, well, screw it. We’ll do it anyway.” I think if you want to be a respected private business, you don’t want to get in trouble with the Feds. I get that, but I think that’s an unfortunate feature of FAA regulation. The FAA’s approach should be, “You had better be careful, because you will be in a position of asking for forgiveness.” I still think that’s a better position than people in the drone space asking for permission.

[00:13:44] Aaron Ross Powell: Going back to the question I asked before, with emerging technology and with, to quote Donald Rumsfeld, the “unknown unknowns” at play here, do we want people to be extra special careful in a lot of these areas? Because you’re going to have situations where the story often told, the narrative, is that all of a sudden a handful of people in Palo Alto, while no one was watching, broke American democracy with social media, right? Or a situation where everyone’s kind of out there innovating and then suddenly we have a rogue AI and we can’t do much about it. Or gene splicing, CRISPR, people making stuff in their garages, and then we have a pandemic. Does that threat of regulation, or that asking for permission, help at least to mitigate against those kinds of sudden catastrophes?

[00:14:51] Matthew Feeney: Well, I think you are highlighting something interesting. First I’ll say hindsight’s always 20/20, right? It’s easy to look back and be like, wow, if we had X regulation, Y would never have happened. But it’s easy for people to come up with scenarios. The difficult job is thinking of regulation that would prevent that scenario from ever taking place while also not hurting innovation. Rampant AI. Okay, so this is something anyone who’s watched a science fiction film worries about, but what’s the fix to that? Do we write a law saying no one shall build AI that will run amok on servers and take over? Isolating a threat is not the same thing as coming up with a good regulation for that threat. Social media companies ruined American democracy. This is sometimes said by people, but what’s the regulatory fix that would have stopped a lot of the bots and the trolls that got everyone concerned in the wake of the election? That’s a much harder question, it seems to me. It’s easy to get outraged and to get worried about possible threats, but coming up with solutions is much, much harder. I think we should also keep in mind how likely the threat is. It would be a shame if developments in AI were seriously hampered because a couple of lawmakers watched too many science fiction films and got really, really worried about the Terminator scenario. [laughs]

[00:16:28] Aaron Ross Powell: Well, how big of a problem is that? Specifically, that this is an area where lawmakers, I mean, here at the Cato Institute we often lament how little lawmakers seem to know about the subjects they plan to regulate. And in fact, we have named our auditorium the Hayek Auditorium; Hayek famously offered a theory for why lawmakers could never know enough about the stuff they wanted to regulate to regulate it well. But this seems to be an area where lawmakers are particularly ignorant. It’s often cringe-inducing to watch congressional testimony, because these lawmakers have levels of understanding of the Internet, of networks, of technology that are substantially worse than the typical middle schooler’s. How do we deal with that kind of problem? We’ve got a situation where there’s this tech, and the urge is always to pass a law whenever there’s a threat or potential threat, and lawmakers do that because they want to do it, because their constituents demand they pass a law. But this is an area where, almost by definition, you can’t know much about it.

[00:17:46] Matthew Feeney: Yes, I defy anyone under the age of 30 to watch anything like Zuckerberg’s testimony [laughs] on the Hill and not have their head in their palms by the end of it. It is very worrying that many of the lawmakers on the Hill don’t seem to know much about this. That makes sense, because a lot of the people who would be qualified to be on staff in these offices, to actually give advice and to explain to members of Congress how this stuff works, could be paid much, much, much better doing almost anything else in the tech industry, and that’s a serious worry. There’s also this worrying inclination among some lawmakers to urge technology companies to, and I quote, this isn’t a phrase original to me, “nerd harder,” right? Whenever there’s a problem like end-to-end encryption: well, we don’t like the fact that some terrorists can communicate using WhatsApp or Signal, but there must be a fix. How can you not fix this? And there’s a frustration. Where we’re sitting, I think that we should maybe spend more time focusing on the benefits of this technology, not focusing on potential costs. Driverless cars will kill some people. They just will. That’s of course regrettable, but we should think about the lives that they could save. The vast majority of auto fatalities in the United States are directly attributable to human error. From that perspective, driverless cars that are better than human drivers, but not perfect, will save thousands and thousands of lives a year. Once Congress eventually gets happy with the proliferation of driverless cars, we should expect that for the next couple of years there will be headlines of driverless cars killing people. That’s to be expected, and there will be a big cultural shift. Emphasizing the benefits rather than the costs, I think, is worthwhile. But of course that’s easy for me to say, because I won’t be the one sponsoring the bill that allows these things to run rampant, and then who are they going to wag the finger at when the bad things do happen? Like I alluded to earlier, good news rarely makes headlines, and it’s also slow moving, right? It will take a long time for the benefits of driverless cars to be realized in the data, but the accidents and the deaths will be reported instantly.

[00:20:16] Paul Matzko: What I hear from you, Matthew, is a sense that our cost accounting, our cost-benefit analysis, is flawed, right? It’s a seen-versus-unseen situation. It’s easier for us to imagine apocalyptic, worst-case scenarios and to discount the possible benefits. Take pharmaceutical regulation: the FDA has a notoriously stringent safety requirement that doesn’t really account for the fact that not approving a life-saving drug costs thousands, even millions, of lives. That doesn’t play a role. They just are asking whether or not the drug itself will harm lives. In that sense, the accounting ledger is flawed when it comes to emerging technology. I’m also interested in hearing you talk about ways in which regulators themselves, by regulating too quickly, can actually create a self-fulfilling prophecy when it comes to the downsides of that technology.

[00:21:29] Matthew Feeney: A good example of that would be what-- I just want to make sure I understand the question. I suppose you can imagine a situation where the FAA says, “We haven’t had as many drone accidents as other countries because we haven’t let drones fly,” which is probably an accurate statement. We need to keep in mind that while that’s true, and the FAA is tasked with safety, they need to make sure things are safe, we need to also take into account what we’re losing. I think when you ground drones, you incur a cost. Namely, you are not having as innovative and as exciting an economy as you could have. Yes, a federal safety agency can stand up and say bad things aren’t happening because we’re just not letting people experiment, but it’s not a particularly useful thing to say, it seems to me, and it’s also not helpful, because no one who’s rational is denying that emerging technologies will come at a price. We’re just saying that in the long run, the benefits outweigh the costs.

[00:22:36] Aaron Ross Powell: Given that, and given that bad regulation or overburdensome regulation can not just slow down the pace of progress, but can cost lives and can certainly reduce wealth and economic growth, when is it appropriate, and we’ve seen this happen a fair amount in the emerging tech space, or is it ever appropriate, to intentionally circumvent regulations?

[00:23:03] Matthew Feeney: [laughs] The part where Aaron asks me, “When’s it okay to break the law?” I would like to point out that I think there are a lot of people who do this by accident, right? I don’t know the number, but I imagine there are many people who got drones for Christmas, or birthdays, and flew them without adhering 100% to FAA regulation. I can say that with almost certainty. The response from the FAA, I think, should not be to bring the hammer down. Now, is it acceptable? I don’t know. Sorry, go ahead.

[00:23:52] Aaron Ross Powell: The classic example being Uber, which has arguably changed the world, frequently in a positive way. Granted, they have their problems as a company, but a lot of that came with them basically ignoring local regulations.

[00:24:12] Matthew Feeney: Okay. In that case, I would argue that, at least in some of the jurisdictions, Uber could have made the argument that, “We looked at the taxi regulations, and we decided that we didn’t fit the definition of a taxi, so off we went.” That’s a much easier argument, it seems to me, than a drone operator saying that they’re not an aircraft under FAA definitions. Uber, I think, was doing something very interesting, which was providing obvious competition to an incumbent industry without actually being a very different thing. To customers, I think Uber and taxis seemed very similar, but behind the scenes they’re very different kinds of businesses, and it’s a very different kind of technology. I take your point, and of course, Uber’s opponents would oftentimes portray Uber as a lawless invader. I think at least in some jurisdictions, Uber could make the argument that, actually, no, we just feel like we didn’t fit into that regulatory definition. Uber did fit, at least when it began, into a very awkward regulatory gray area. In a situation where you’ve taken a look at existing regulations and you think that you don’t actually run afoul of any of them, I don’t see why people shouldn’t feel free to get into an area and innovate. Airbnb might be another example: “Okay, well, I took a look at local laws and I figured that I wasn’t a hotel.” That seems to be a reasonable thing for people to assume. I won’t say this is without risk. I wouldn’t advise anyone in a private company to deliberately break the law and hope that you have good lawyers on hand. I don’t know if that’s the best approach, because local lawmakers don’t like that kind of confrontation, for sure.

[00:26:21] Paul Matzko: I suppose some of that question comes down to one’s own ethic, right? Most people would imagine an ethical obligation to break the law when there is some clear cost to life that comes from following the law. Civil disobedience writ large. No one-- Well, some people did hold them responsible, but when Martin Luther King Jr. or another civil rights activist blocked the highway for a march in Selma or Birmingham or whatnot, right? The idea is that it’s okay to circumvent laws when there’s a clear ethical obligation to do so, that the law is less important than ethical systems. That gets complicated really quickly depending on-

[00:27:14] Matthew Feeney: I will mention here, though, that Charles Murray, I haven’t read the book, but I think that in one of his most recent books, Charles Murray advocated for a law firm that specializes in protecting entrepreneurs like this, to basically encourage people to go out into the wilderness. Adam Thierer from the Mercatus Center, who wrote an excellent book called Permissionless Innovation, categorizes technologies as born free and born captive. Some are born captive into regulatory regimes and others are born free, truly new and innovative, where regulators haven’t caught up yet. If you’re born free, as Adam might call them, I think you’d better be ready for certain fights. Charles Murray’s recommendation was, yes, we should just basically have a law firm that specializes in helping entrepreneurs with these fights. From the regulators’ point of view, I think they should perhaps just choose their fights more carefully and not scare away people, but that’s not going to happen anytime soon. The costs that we’ve been talking about, like deaths and injuries, are, I think, easier to discuss, but the problem with a lot of technology or emerging technology discussions is that you have these more difficult-to-pin-down complaints about the impact on society: “What’s it doing to our children? Isn’t this making us more isolated? Think about the citizenry.” All that stuff is-

[00:28:52] Paul Matzko: Thank you Tipper Gore.

[00:28:53] Matthew Feeney: It’s interesting because this isn’t a new complaint, right? But nonetheless, it remains sticky. I wanted to briefly read out a quote I found from 1992. Neil Postman wrote a book called Technopoly: The Surrender of Culture to Technology. He was on C-SPAN in 1992. He had previously complained about television, right? He was on and he said, “When I started to think about that issue, television, I realized that you don’t get an accurate handle on what we Americans were all about by focusing on one medium, that you had to see TV as part of a kind of a system of techniques and technologies that are giving the shape to our culture.” For instance, if one wants to think about what has happened to public life in America, one has to think, of course, first about television, but also about CDs, and also about faxes and telephones and all of the machinery that takes people out of public arenas and puts them fixed in their home, so that we have a privatization of American life. This is a really interesting kind of complaint, but he goes on to describe a future that we’re kind of in now, where people say, with some considerable enthusiasm, that in the future, putting television, computers, and the telephone together, people will be able to shop at home, vote at home, express political preferences in many ways at home, so that they never have to go out in the street at all, and never have to meet their fellow citizens in any context, because we have this ensemble of technologies that keep us private, away from citizens. I hear complaints like this quite regularly. I mean, that’s from 1992. There is still this very persistent worry that emerging tech will make us bad citizens, make us isolated. AI is exciting, but will our children say please and thank you to the robots? Will the robots become our friends or our sex partners? Isn’t all this stuff making us isolated? This isn’t a new concern; it’s frustrating, and it’s not going away.

[00:31:05] Aaron Ross Powell: We have been talking largely about policy making, policy makers, regulators, people who are in the policy world, but how much of that is really just downstream of culture, such that when we’re dealing with these issues of emerging technology, the place where the real action is happening is in the cultural acceptance of it? To some extent, is focusing strictly on the policy missing where much of the influence is or will be?

[00:31:41] Matthew Feeney: I certainly do think that it’s important to communicate to the public about this because, like you mentioned, some of these policy concerns are downstream from the public. In preparation for the podcast, I was finding 1859 editorials in the New York Times complaining about the telegraph, and a 1913 New York Times article complaining about the telephone and how it encouraged bad manners. All this stuff isn’t new. I think when we’re sitting in a think tank, we should be ready to communicate with the public in addition to regulators and lawmakers. If we have an optimistic, forward-thinking public, then you hope that that will translate somehow to lawmakers, but lawmakers are made up of human beings, and the public are human beings, and they have a pessimism bias. I think, though, when you focus again on benefits: maybe more parents would be happy if driverless cars could take the kids to baseball practice, and it would be better for people if their elderly parents had appliances in their homes that could monitor if they’ve fallen down or have had a medical emergency. It would be good if we were able to travel more safely, to have our homes know more about us. It would be nice to come home and to have the home set at the right temperature and playing the right kind of music. Making sure that people realize the benefits of a lot of this stuff is, I certainly think, part of the mission. Lawmakers are not my only audience, that’s for sure.

[00:33:30] Aaron Ross Powell: All of that, the home that knows a lot about you, all these things, it can predict stuff about you, it keeps track of things about you, there’s a lot of data there. There’s a lot of data gathering, a lot of it depends on devices that can surveil us in one way or another. We as libertarians, we as Cato Institute scholars, we spend a lot of time talking about the problems of government having access to data and governments surveillance programs, but should we be concerned about the level of pervasive private surveillance that that rosy future you just sketched out demands?

[00:34:13] Matthew Feeney: I think we should be worried. You can listen to and read a lot of Cato material on the concerns that we have about government access to data, and I certainly don’t want to sound blasé about that. My primary worry is the government, mostly because, as creepy as a lot of this might be when it comes to Amazon and Google, Amazon and Google can’t arrest me or put me in a cage. I think that is a big difference. People might be a little creeped out by the shopping algorithms, they might be a little freaked out by the fact that these companies do know a lot about us, but I want the heavy lifting there to be on government access to that data. When you buy a lot of these appliances, you assume that they will be collecting information about you, but I’m not as worried about Amazon as I am the government, for the reasons I just outlined. I don’t think Amazon has an interest in creeping out its customers too much.

[00:35:12] Aaron Ross Powell: Should we be worried, though, about companies like Amazon, gathering all this data, centralizing all this data, and then that data suddenly becoming either through the passage of legislation or through subpoenas, warrants or through government hacking accessible to the government?

[00:35:30] Matthew Feeney: Yes. There’s a degree of trust you have in these big companies. They need to do a good job at being custodians of data. I don’t want to speak to the-- I don’t know a lot about Amazon’s actual security, I’m just using them as an example. They have a very strong profit-seeking incentive to make sure that their customers’ privacy is not violated. There’s not much that they can do when the government comes to them with a valid court order. They are put in a tough spot there. Again, that’s why I think that’s where we should have the focus. We shouldn’t be in any doubt that a lot of these companies have a huge amount of information on us. I think it was my colleague, Julian, who once said that, “If Google were a state, it would be a pretty powerful police state, given the amount of information it has.” My apologies to Julian if I’m butchering your quote. The point being that they do gather a huge amount of information on us, and people, even like me, do incur a cost when you use ProtonMail instead of Gmail, or you use DuckDuckGo instead of Google for web searches. That cost is that Google now knows a little less about you and can’t provide you with the degree of service that most people have. That’s fine by me. That’s still a choice. Google’s not a monopoly when it comes to this stuff. People value their privacy subjectively, and maybe I value it slightly higher than the average person. I have no problem with people using Google products to make their lives better. I do worry about government access to that data to conduct investigations.

[00:37:17] Paul Matzko: It feels like forever ago now, but it was only a few years ago that there was a buzz about Mark Zuckerberg running for president. [laughs] That blend of a major tech company with the power of the state, well, it seems unlikely now, but it’s not outside the realm of possibility, even if it’s not as literal as the head of one being the head of the other. To go back to something you mentioned before, Matthew, you teased a bit about how in Great Britain, I think it was, regulatory policy towards unmanned aerial vehicles was more favorable, so it pushed Amazon to conduct tests overseas. To broaden that out, how would you say, on net, the international regulatory landscape compares to the United States? Where does the US rank when it comes to relative freedom and regulation of emerging technology?

[00:38:24] Matthew Feeney: I think it’s difficult to say for the following reason: saying technology policy is a bit like saying economic policy. It’s a huge range of things. Let’s think of the plus side first. The United States is still a global leader when it comes to tech innovation. This country is home to some of the best known, largest, and most interesting tech companies. GlobalData recently produced a list of the 25 most valuable tech companies in the world: 15 are in North America, 7 in the Asia Pacific, and only 3 are in Europe. That, I think, is not an accident. Europe is, as you alluded to, slightly ahead of the United States, I would say, when it comes to drone policy, but they slapped Google with a huge fine, I think it was $5 billion, on antitrust. It depends on the technology you’re talking about, but they’re certainly ahead when it comes to, I would say, drone policy. But when you’re leveling fines of billions of dollars on Google, it’s not a great look. You have to examine the technology-specific policy; I wouldn’t want to make a big generalization. I would say, though, that there’s probably a reason that the United States is still today a massive hub and innovator when it comes to technology.

[00:39:57] Aaron Ross Powell: Does competition work in that area? Do you see evidence that countries look over at other countries that have better tech policies, and so are getting bigger and better companies and more innovative products, and say, “Well, it’s probably good for me to loosen things up a bit too”?

[00:40:19] Matthew Feeney: I don’t know. I’d have to look at data. I think the problem for a lot of these countries is that Silicon Valley is still a massive talent suck. That’s a gut assumption; I’d have to look at data on that. Competition, of course, is an interesting point when you’re talking about big companies like Google, Apple, Amazon, and Facebook, because a lot of these companies are big enough that they can buy interesting smaller companies. What would be a good example? YouTube, Instagram, WhatsApp, these are all companies that were bought by much bigger companies. That’s not necessarily a bad thing, and it’s not necessarily something that we should complain about, but for the foreseeable future, I imagine that Amazon, Google, Facebook, and Apple are going to be on the lookout for interesting new companies to buy. One, because they view them as competition down the road, but two, they also feel that they can do interesting things with those companies. That’s not a bad thing necessarily. If you are building something that competes with Amazon and you’re presented with a life-changing amount of money, there will be some people who’d say, “No, thanks. I’ll keep plugging away at what I’m doing.” I believe it’s the case, I’m not a historian when it comes to Facebook, but I believe Facebook faced a buyout option at a certain point, right? Didn’t someone want to buy Facebook? I could be making that up. My point is that there are very large, successful companies today that said no to buyout offers.

[00:41:59] Paul Matzko: Netflix is a famous example. Blockbuster had the offer on the table for some minuscule fraction of what Netflix is valued at.

[00:42:07] Matthew Feeney: Right. Keep in mind that this competition question is something we’re going to hear more of as long as Trump is the president, because of the perceived anti-conservative bias in Silicon Valley that people think is actually affecting the product. I think it’s fair to perceive that most people who work in these big tech companies are probably to the left of the average American, I think that’s fair to say. I’m not convinced that that personal bias among employees has had a direct impact on the product. We’re in a situation where self-professed conservatives are now saying, “Well, they’re too big and we should talk about antitrust.” When we’re thinking about the big four of Google, Amazon, Facebook, and Apple, I’m not convinced that these companies are monopolies in the true sense, and I think it would be a mistake to bring antitrust action against them.

[00:43:06] Paul Matzko: The example of regulatory competition that comes to my mind is from a TechCrunch Disrupt down in San Francisco, where there were a number of panels on the idea that when full self-driving cars, level five, no steering wheel, get rolled out, they’ll be rolled out in China before they get rolled out in the rest of the world. That will be because, according to a number of speakers, the central government in China has just established by fiat, “We are going to be open to autonomous vehicle technology.” Actually, the dollar value of investment in AV technology in China just over the past year has matched the rest of the world combined. You’re seeing that shift take place because in China, the central party can cut through local and state-level competition. What that brings to mind for me, though, is a question for you, Matthew, about how emerging tech should be regulated by local and state authorities versus federal authorities. The question of federalism in emerging tech policy, how do you approach that as someone analyzing emerging tech?

[00:44:22] Matthew Feeney: I’m very interested in a lot of the local regulations that handle industries like ridesharing and other things you see in the sharing economy. When it comes to a lot of the technologies we’ve just discussed, there are very powerful federal regulators: the FAA, the FCC, and, with bioengineering and all that, the FDA. I’m in a position where I am mostly focused on federal regulations, but I’m certainly keeping an eye on what’s happening at the local level. As we discussed earlier, state and local governments can take it upon themselves to address some of the concerns we’ve discussed, especially when it comes to drone surveillance, which was an example I used. There are state and local governments that have been comparatively welcoming to the sharing economy, that have decided, “No, we’re going to be a home of innovation and entrepreneurship, and that’s what we want.” I think it’s fair to say that for some of the big issues we’ve been discussing today, driverless cars, and drones, and things like that, it’s ultimately probably going to take some federal leadership to get the kind of regulatory playing field we want implemented. [music]

[00:45:50] Aaron Ross Powell: Thanks for listening. Free Thoughts is produced by Tess Terrible. If you enjoyed today’s show, please rate and review us on iTunes. If you’d like to learn more about libertarianism, find us on the web at www.libertarianism.org. [00:46:07]