E12 -

Artificial intelligence is here and changing our daily lives, but should we be concerned about the prospect of a hostile and hyper-intelligent AI?

Hosts
Paul Matzko
Tech & Innovation Editor
Guests
Aaron Ross Powell
Director and Editor

Aaron Ross Powell was the director and editor of Libertarianism.org, a project of the Cato Institute.

WARNING: Listening to this episode may result in a future hyper-intelligent artificial intelligence deciding to kill you. Or so says a thought experiment popular among artificial intelligence researchers called Roko’s Basilisk.

But if you are willing to assume that risk, listen to our show as we discuss the present and the future of artificial intelligence with special guest host Caleb Watney. AIs are right now being used for criminal justice reform, targeted genetic medicine, and other aspects of daily life, but researchers anticipate a point not that far in the future when artificial intelligence surpasses human intelligence. That moment, sometimes referred to as the Singularity, promises both deep potential and potential peril.

What is Artificial Intelligence? What is “machine learning”? How does pattern recognition help our everyday lives? How do you get an AI to be good at something that, by definition, we can’t train it on? Are there concerns we should have about the government utilizing Artificial Intelligence?

Further Reading:

Transcript

[music]

00:04 Paul Matzko, Host: Welcome to Building Tomorrow, a show exploring the ways that tech innovation and entrepreneurship are creating a freer, wealthier, and more peaceful world. As always, I’m your host, Paul Matzko. And with me in the studio today are Aaron Powell, the Director of Libertarianism.org; and our special guest, Caleb Watney, a tech policy fellow at the R Street Institute. Welcome to the show, Caleb.

00:25 Caleb Watney: Thanks for having me on, Paul.

00:26 Paul Matzko, Host: Oh, it’s a pleasure. Now today, we have Caleb on to talk with us about artificial intelligence, the tech that’s behind the global overlord Skynet, responsible for sending Arnold Schwarzenegger back in time to serve as a meta-joke on Californians, and as a cautionary tale about 1980s pop culture. We actually will talk about AI researchers’ concerns about hostile AIs in a bit. But first, let’s start by talking about the ongoing implementation of less, shall we say, apocalypse-prone AI, and the ways it’s benefiting human society right now. So Caleb, why don’t you kick us off? What is AI? How is it different from or similar to other phrases that are commonly used in the field, like machine learning or algorithmic learning, or algorithms? What is that?

01:14 Caleb Watney: Yeah. I think if you get 10 AI researchers in a room and ask them what the definition of AI is, you’re gonna get nine or 10 different answers. It’s a famously divisive question. Personally, I tend to take a pretty broad definition of it. I think it’s helpful as a category for the really broad range of things that are automating aspects of human decision-making. And so I think that would incorporate most forms of software, that’s like a really low form of artificial intelligence, and go all the way up to really complicated machine learning algorithms. And so some other terms like “machine learning” are then subsets of AI that get you further up the scale.

01:55 Caleb Watney: I think if you think of intelligence as an agent’s ability to apply complex or varied sorts of solutions to a problem it’s trying to solve, then the greater its ability to change the kinds of methods it uses, the more intelligent the agent is. And so thinking of intelligence as this scale, rather than a binary where you are artificially intelligent or you’re not, tends to be a more helpful framework for me at least.

02:20 Aaron Ross Powell: What’s machine learning?

02:23 Caleb Watney: Machine learning is the ability to train machines to pick up patterns in the data themselves. So if you give it a search function as an algorithm and a large supply of data, and it starts to learn from the data, recognize patterns itself, and pull those out, that’s, in a very broad sense, what machine learning is all about.

02:48 Aaron Ross Powell: So it’s like my spam filter.

02:50 Caleb Watney: Yeah, your spam filter is a great example of machine learning ’cause you will frequently give it hints and lessons, and that’s part of how it learns to get better at tagging things as spam or not, is you’re saying, “Hey, this is an example of spam, this is not an example of spam.” It tries to put all those examples together, see what commonalities they have, what differentiates spam in your mind, and of course, what other humans think. And that helps slowly update its priors about what it’s gonna categorize as spam or not.
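For readers who want to see the shape of that Bayesian updating, here is a minimal sketch: a toy naive Bayes filter that counts words in labeled messages and scores new mail. The class name, the toy training messages, and the Laplace-smoothed word model are illustrative assumptions, not the implementation of any real spam filter.

```python
# A minimal naive Bayes spam filter in the spirit of the discussion above.
# Everything here (toy training data, word-count model, Laplace smoothing)
# is a simplified illustration, not any real filter's implementation.
from collections import Counter
import math

class NaiveBayesSpamFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.label_counts = Counter()

    def train(self, text, label):
        # "Hey, this is an example of spam / this is not an example of spam."
        self.label_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            # Prior: how often we've seen this label so far.
            log_prob = math.log(self.label_counts[label] / sum(self.label_counts.values()))
            total = sum(self.word_counts[label].values())
            for word in text.lower().split():
                # Likelihood with Laplace smoothing so unseen words don't zero things out.
                log_prob += math.log((self.word_counts[label][word] + 1) / (total + vocab))
            scores[label] = log_prob
        return max(scores, key=scores.get)

spam_filter = NaiveBayesSpamFilter()
spam_filter.train("win a free fortune from a prince", "spam")
spam_filter.train("meeting notes attached for review", "ham")
print(spam_filter.predict("claim your free fortune now"))  # expected: "spam"
```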

03:17 Paul Matzko, Host: This is why, Aaron, you haven’t lost a fortune to a Nigerian businessman since the mid-’90s, your spam filter helping you out there. What it reminds me of too is, I like that non-binary approach. It’s like robots. For those who grew up in the 1950s or ’60s, you said robot, they thought of something from ’50s sci-fi television, like the… What’s the one with the alien, they land in Central Park, and the alien comes out?

03:47 Aaron Ross Powell: The Day the Earth Stood Still.

03:49 Paul Matzko, Host: The Day the Earth Stood Still, right. Classic sci-fi. That’s a robot. A fully autonomous, usually anthropoid metal man. Well, but robots are all around us. They build our cars. There’s a gradation of robotics…

04:07 Caleb Watney: Yeah, exactly.

04:07 Paul Matzko, Host: And the same thing being true for artificial intelligence is, I think, a useful starting point. So now you mentioned pattern recognition. The first thing that came to mind was that episode of Silicon Valley, where it was “hotdog or not a hotdog.” [chuckle] Remember that? Pretty good. And you can actually download an app. As I understand it, someone wrote a program, and you can actually make sure something is not a hotdog before you bite into it, or I guess you wanna make sure it is a hotdog.

04:35 Paul Matzko, Host: So pattern recognition, pretty good, though there is that conundrum. One of our columnists, Kate Sills, noted in the piece on smart contracts that there’s an issue with this. There’s a meme going around about recognizing the difference between a dog and a muffin. And even some of our smartest algorithms can’t pick up the difference. The patterns just aren’t dissimilar enough; they can’t tell a Chihuahua from a bran muffin. So there’s still even… They’re doing better than we would have thought even just a couple years ago, but there’s still an issue there.

05:09 Aaron Ross Powell: On this pattern recognition, so the value of recognizing a dog from a muffin is not because the AI can now do something that we can’t, it’s that it can do it at scale. It can find… So Google Image Search can find us pictures of dogs, and know not to give us pictures of muffins, and all of that. But it seems like a lot of the more valuable use of machine learning, or of AI becoming good at pattern recognition, is to find stuff that we weren’t able to find, to identify patterns that we couldn’t identify by looking at lots of data, or to try to use AI to figure out what the potential causes of certain health ailments are. So we’re using it for research, as opposed to automation. How do you do that though?

05:56 Aaron Ross Powell: Because as you said like with the spam filter, the way my spam filter works is… And I used to have… In the ’90s, I had one called… God, I can’t even remember, spam… POPFile, I think it was. That you installed on your own computer, and routed your mail through it. And it just used a Bayesian filter. And so you just trained it over time, and it got remarkably good. But that required me training it. So how do you get an AI to be good at something that, by definition, we can’t train it on?

06:26 Caleb Watney: Yeah, so there’s a bunch of different techniques that machine learning researchers will use to try to improve the functionality. When humans are directly involved in telling the algorithm, “This is a good thing to do, this is not a good thing,” that’s usually called “supervised machine learning”. And unsupervised is when you’re trying to give it some more automated functions. So as an example, OpenAI has a number of different programs that will try to learn various games, and so they have one that learned how to play chess. And counter to how previous algorithms have learned to play chess, which is usually from watching a whole bunch of human games, having humans program into it, “These are the kinds of strategies you should be looking for,” rather, it just had the algorithm play another version of itself for billions of hours, and that slowly taught it to become better.

07:17 Caleb Watney: And obviously, it could learn slowly as it played billions of games, that these kinds of moves increase the probability of winning, these ones don’t. And just kind of through that brute force, it was able to come up with patterns and strategies and techniques that humans hadn’t even thought of. And so today, in chess tournaments, one way that you might be able to find out if a human is cheating and using a computer to help them, is if their move seems too original. ‘Cause if their move is too original, then it’s unlikely that a human would have discovered that already.
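As a toy illustration of that self-play idea, here is a sketch in which two copies of the same agent play a simple take-away game (take 1 to 3 stones; whoever takes the last stone wins) and reinforce whichever moves ended up on the winning side. The game, the tabular values, and the update rule are simplifying assumptions for illustration; the chess systems described above rely on far more sophisticated search and neural networks.

```python
# Toy self-play: the agent learns a simple Nim-like game purely from
# win/loss outcomes against a copy of itself, with no human games involved.
# This is an illustrative sketch, not how any production engine is trained.
import random
from collections import defaultdict

N_STONES = 10
value = defaultdict(float)          # value[(stones_left, action)] -> learned score
EPSILON, LEARNING_RATE = 0.2, 0.1

def choose_action(stones, explore=True):
    actions = [a for a in (1, 2, 3) if a <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(actions)            # occasionally try something new
    return max(actions, key=lambda a: value[(stones, a)])

def play_one_game():
    stones, history, player = N_STONES, {0: [], 1: []}, 0
    while stones > 0:
        action = choose_action(stones)
        history[player].append((stones, action))
        stones -= action
        if stones == 0:
            winner = player                      # took the last stone
        player = 1 - player
    return winner, history

for _ in range(20000):                           # "billions of games," in miniature
    winner, history = play_one_game()
    for player, moves in history.items():
        reward = 1.0 if player == winner else -1.0
        for state_action in moves:
            value[state_action] += LEARNING_RATE * (reward - value[state_action])

# With enough self-play the agent should discover the winning pattern
# (leave a multiple of 4 stones) without ever seeing a human game.
print(choose_action(N_STONES, explore=False))    # optimal first move here is 2
```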

07:49 Paul Matzko, Host: That’s really smart, actually. Well, and it’s a reminder, you’re looking for deviation from a norm, which is what pattern matching, and the ability to go through that many calculations a second, allow you to do. Some of the applications are really quite exciting. We did an episode on DNA databases, things like Ancestry.com, 23andMe. And when we were discussing that, 23andMe had just signed a deal with GlaxoSmithKline, that’s a major pharmaceutical company, and their goal… They’re years away from this, but their goal is to use AI to look at genetic markers. For each person, there are so many different genetic variants on your basic chromosomes that for a real doctor, a specialist, to actually parse through all of that would just be impossible. It’s just too much volume. But it can be done.

08:46 Paul Matzko, Host: So if you can train the AI to look through someone’s entire genetic code, and look for these patterns, and maybe look for patterns that even people haven’t picked up on yet, and then take a basic prescription and tweak it for that individual person, you can make the drug more effective potentially than the generic variation. You can make it have fewer side effects, a lower rate of adverse reactions. There’s some really cool, exciting stuff just using that basic pattern recognition, and the ability to just absorb, vacuum up data like that.

09:20 Caleb Watney: Yeah, I think one more thing we’re seeing is there are a whole bunch of potentially interesting applications that we don’t have access to right now because the search costs for sorting through so much information are just too high. And so trying to run individual clinical trials to see how this specific drug interacts with these 15 different types of genomes or whatever, that’s just infeasible. You can’t run clinical trials on that many things. But if you can sort of model what that would look like ahead of time on a computer, and then you can run billions of simulations beforehand, you can find which possible solutions are gonna be the most promising, and then run human clinical trials on those. And so it’s really, I think, just expanding out the production frontier of what our search costs are, how expensive it is to search for new, really information-intensive solutions.
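Schematically, that "lower the search cost" point looks something like the sketch below: score a huge space of candidate drug-and-genotype combinations cheaply in simulation, then carry only the most promising handful forward to expensive human trials. The variant and genotype names and the scoring function are placeholders, not a real pharmacological model.

```python
# Cheap in-silico screening followed by a small number of expensive trials.
# The scoring function is a random stand-in for a trained predictive model.
import random

random.seed(0)
drug_variants = [f"variant_{i}" for i in range(200)]
genetic_profiles = [f"genotype_{j}" for j in range(500)]

def simulated_efficacy(drug, genotype):
    # Placeholder for a learned model's prediction; cheap to evaluate at scale.
    return random.random()

# Exhaustively score all 100,000 combinations in software...
scored = [(simulated_efficacy(d, g), d, g)
          for d in drug_variants
          for g in genetic_profiles]

# ...but only take the top few candidates forward to human clinical trials.
for score, drug, genotype in sorted(scored, reverse=True)[:5]:
    print(f"schedule trial: {drug} x {genotype} (predicted efficacy {score:.3f})")
```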

10:13 Paul Matzko, Host: That’s really cool.

10:15 Aaron Ross Powell: You’ve written about… So there’s a pattern recognition side of AI, but there’s also the decision process automation side of it, where it’s like, “Make a decision for me, so I don’t have to go through that process. Or you can do it faster, maybe you can do it more accurately.” And you wrote a while back about one application that seems incredibly counterintuitive, which was kind of interesting: criminal justice. So can you tell us a bit about how we might use AI in criminal justice?

10:42 Caleb Watney: Sure. And to take a step back on what’s the broader point here, yeah, AI tends to be very useful for pattern recognition, and then also in automating or helping us to recreate a more rigorous model for how we think through decisions ourselves. So the specific example in criminal justice is in pretrial detention decisions, where essentially a judge has to look at… Before the trial, they have to decide, is this defendant going to post bail? Are they going to just be released without bail? Are there gonna be various levels of community surveillance? Or if we think that they are a very big risk of either running away before their trial, or committing another crime before their trial, we can keep them in pretrial detention.

11:26 Caleb Watney: And as a portion of the overall incarcerated population, jail has been a large share, especially of the growth in the last 20, 30 years. And constitutionally, these are people that are still innocent, ’cause they’re innocent until proven guilty. And so it’s really just fundamentally a risk prediction of what’s the likelihood of them leaving town before their trial, or committing another crime. And it seems like we have pretty good indication that judges are very bad at making those kinds of predictions. They’ll systematically underrate the risk of the very high-risk defendants, and they’ll systematically overrate the risk of very low-risk defendants. And so by just having more accurate predictions about what that likelihood is, you can get, I think, simultaneously lower crime rates and lower jail populations.
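For readers curious what such a risk tool looks like structurally, here is a bare-bones sketch: a logistic model that combines a few defendant features into a probability, which a release policy can then map to recommendations. The features, weights, and thresholds are invented for illustration; real pretrial risk assessment instruments are built and validated on historical case data.

```python
# A structural sketch of a pretrial risk score: a few features combined into
# a probability of failure to appear. All features, weights, and cutoffs are
# hypothetical and for illustration only.
import math

def risk_score(features, weights, bias):
    # Logistic model: probability = sigmoid(bias + sum(weight_i * feature_i))
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

weights = {                          # hypothetical, illustrative weights
    "prior_failures_to_appear": 0.9,
    "pending_charges": 0.6,
    "age_under_23": 0.4,
}
bias = -2.0

defendant = {"prior_failures_to_appear": 1, "pending_charges": 0, "age_under_23": 1}
p = risk_score(defendant, weights, bias)
print(f"predicted failure-to-appear probability: {p:.2f}")

# A release policy can then map the score to a recommendation the judge sees.
if p < 0.3:
    print("recommendation: release on recognizance")
elif p < 0.6:
    print("recommendation: release with community supervision")
else:
    print("recommendation: detention hearing")
```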

12:14 Paul Matzko, Host: If we’re gonna have Minority Report, it might as well be the effective Minority Report. [chuckle]

12:19 Aaron Ross Powell: So last weekend, I watched this before I knew we were having this conversation, I watched an episode of RiffTrax on Amazon Prime. RiffTrax is just the new version of Mystery Science Theater, and the movie was called Cyber Tracker. It was a vehicle for Don “The Dragon” Wilson, who was a short-lived martial arts star. It was terrible, but the whole premise was, there’s this company that is replacing judges with AI, and there are then evil senators in cahoots with them, and then lots of kicking. But everyone was up in arms about this because this is like, you’re taking away our humanity. And that seems like a real… That’s a concern, not just with this, but with a lot of the AI stuff that we’ve talked about.

13:10 Aaron Ross Powell: It shows up on autonomous vehicles too, that we seem to be perfectly happy with getting plowed into by drunk drivers all the time, but if we’re gonna get plowed into a whole lot less by a computer, just because it’s a computer, that’s way worse than the mindless humans doing it. So how do we… Is that inevitable? Is there something we can do to get around that? And how much do you think that limits really effective and positive change in both the near and long term?

13:42 Caleb Watney: Yeah. I think it’s worth differentiating between situations where it may be more likely that the computer completely replaces the human, which seems more likely in autonomous vehicles and drivers, versus times when the AI can partner with humans and improve human decision-making, which seems more likely in the case of judges. So we’re not recommending or advocating here that we remove all human judges from the courtroom, and we just let algorithms run everything. It’s about trying to give them more accurate baselines of risk. And so judges are implicitly making these decisions already, subconsciously. They’re looking at the defendant, they’re looking at their rap sheet, what they’ve been accused of, what their background is, whether they’ve skipped out before. And they’re implicitly making a risk calculation already.

14:25 Caleb Watney: And so human decision-making though is incredibly volatile. We’re subject to all sorts of biases. There’s really good evidence that when judges’ undergraduate football team loses that weekend, for that entire week, they’re gonna give harsher punishments, or be more likely to incarcerate, rather than let the defendant go. And in many ways, algorithms allow us to systematize human decision-making. When you write down the process by which you’re making a decision, it allows you to examine it externally, check it for bias in a way that you can’t when it’s all just happening in your head. And so again, there are lots of aspects of human decision-making that you can’t capture in that process, but insofar as that can work in addition to the more subjective parts of criminal justice reasoning, I think it can be a tool that can improve outcomes.

15:14 Paul Matzko, Host: Yeah, especially if you have a situation where… People keep track of particularly onerous and infamous judges who will come up with inventive penalties, right? You have to wear a sign, stand on the corner, and be publicly shamed. And this is the judge that always shames people. Or the kind of idiosyncrasies of judges which are impervious to inspection, like you mentioned. You can’t look inside the judge’s mind and figure out exactly why they’re making the decisions they’re making. You can pick up on patterns. And some of those patterns are real disturbing. Football teams. Racial and gender prejudices, the subconscious biases, will pop up frequently as well.

16:00 Paul Matzko, Host: Essentially, a black and a white defendant, or whatever, accused will receive different bail risk assessments, despite having a very similar profile otherwise. But that process can’t be… You can’t crack that. The human brain is a black box. Whereas an algorithm, it’s designed by people. We come up with that algorithm, at least in theory. And I know this gets into a question of proprietary algorithms and all that, but in theory, that can be examined, and it can be tweaked, it can be changed, it can be debated, it can be discussed in the public sphere.

16:36 Aaron Ross Powell: I wonder how much of a check that would be though. So we just saw the President of the United States tweet out that he’s mad at Google’s algorithms for privileging content that is linked to by a lot of people, over content that is not. So he doesn’t… Google doesn’t publish its algorithms, but we all… It’s pretty easy to read up on the basics, the gist of how they work. So an AI, an algorithm that’s surfacing news stories, we know how it works, and yet you’ve got a huge portion of the country up in arms because they don’t know how it works, and refuse to believe the people who have told them how it works. And so it’s almost like we just kind of… I feel like we might have a tendency to take all those biases we think the AI is gonna help us out with, read them back into the AI’s behavior, and then get mad at it.

17:28 Caleb Watney: Yeah. No, I think that’s a serious problem. And that may be a reason for some amount of humility about how fast some of this change is gonna happen, ’cause there is a cultural change that needs to happen, where I’m much more okay with the idea of not driving myself around, and having an autonomous vehicle drive me around because I’ve grown up on Uber and Lyft. And it’s really not that different.

17:51 Aaron Ross Powell: That’s pretty condescending towards the people who drive for Uber and Lyft.

[chuckle]

17:54 Caleb Watney: No, no, no, I’m just saying that the sense of freedom attached to personal car ownership doesn’t mean nearly as much to me as it does to my dad, or some of my older colleagues. And I think in the same way, you’re gonna see a cultural shift, where what freedom means becomes less about owning the car, and more about, can you get where you want, when you want, how you want? That positive capacity becomes the aspect of freedom. And I would imagine that you’re gonna see similar cultural changes around how okay we are with algorithms making decisions, and whether we assign them similar levels of culpability as we do for humans, or more.

18:34 Paul Matzko, Host: Well, it’s like how we already… Most low-cost index fund providers have robo-advisors. The ordinary, middle-class American person that has a 401k is trusting an algorithm with their life savings. And that was something that would have been kind of unimaginable 10, 20 years ago. And now, it’s just ordinary. So evolving cultural norms and expectations, I think. But there’s always gonna be a lag there, and in that lag, things can get messy. You get a lot of distrust and trouble. Before we move on, real quick, so we were moving towards bail flight risk assessment and whatnot, and maybe a quick word. What has the impact been? I think my home state… I live in New Jersey. So I think New Jersey was leading the way in the algorithmic bail system. What have been the effects?

19:23 Caleb Watney: Yeah. So New Jersey took pretty broad, sweeping action. At the beginning of 2017, they passed a bill that completely got rid of cash bail, and instead replaced it with a risk assessment algorithm. And so now judges have the option to assign various levels of community surveillance, from, you know, someone checking up on you once a week, up to ankle monitoring, which would be the highest level of community surveillance. And overall, this has led to, I think, something like a 25% to 28% decrease in the jail population since the legislation has gone into effect. And it’s somewhat difficult to tease out how much of that is getting rid of cash bail and adding more levels of community surveillance, versus more accurate risk prediction by the risk assessment algorithm. But I think in some ways, they almost go together.

20:13 Caleb Watney: I think risk assessment algorithms politically enable certain types of criminal justice reform that wouldn’t have been possible before. Because if you just told a population, we’re just gonna get rid of bail, and we’re instead just gonna trust the judges to give community surveillance instead, I think a lot of people would have freaked out about that, and not been willing to vote for it. But if you give them a sense that we’re replacing cash bail with something that’s still actively going to try to assess risk in a somewhat objective manner, I think that enables new political possibilities that weren’t available before.

20:44 Paul Matzko, Host: So oddly enough, it’s almost like we’re using it as a political tool. We’re taking advantage of people’s blind trust in a technology that, because it’s new to most people, is indistinguishable from magic. And in a response article for Cato Unbound, you were talking about a “fairy dust” view of artificial intelligence. And so in a funny way, you were basically saying, “Hey, it’ll be okay if we get rid of bail because we’ll sprinkle some magic fairy dust algorithm on the assessment process, so you can trust that it’ll work out.” Isn’t that a problem though, the fairy dust approach?

21:18 Caleb Watney: I think there are different ways of selling it. I think the correct way to approach it is as a hammer. It’s a tool that you use for a very specific purpose, and in this case, we have pretty good evidence that humans are really bad at assessing risk. Algorithms seem to do a better job in the simulations that we have. So I think it makes sense. It may also subconsciously be working on people’s trust in technology, but I think just as much a factor on the other end is that people are scared of new technology and don’t wanna go with something new. So that may cancel out, to a certain extent.

21:49 Paul Matzko, Host: That’s a good point. Okay, so we made a nod towards the problem of proprietary algorithms for cash bail and flight risk systems. But in other avenues: State governments, city governments, the federal government are increasingly rolling out artificial intelligence for national security and surveillance purposes, for facial recognition tracking, and all kinds of applications. What are some of the concerns that we should have, as people who love freedom and liberty, about the state’s use of artificial intelligence and its contracts with private providers, and what kind of thing should we be wary of in that regard?

22:32 Caleb Watney: For sure. So I think a mistake I’ve seen some people fall into is to assume that whenever the government is purchasing or using algorithms, we should be holding it to the same standards that we’re holding private companies to. And I think that’s a mistake for two main reasons. One, usually the level of harm that the state is able to do if it messes up is much higher than what a private company can do. Obviously, private companies do not have the ability to send you to jail if they choose to. They don’t have drones that have missiles on them.

23:05 Paul Matzko, Host: Yet. Yet.

23:07 Caleb Watney: Yet. [chuckle] If Tinder makes a mistake, you end up going on a bad date. That sucks, but it’s not the end of the world. The level of harm associated with government uses of technology tends to be much higher. And two: There are different sorts of feedback loops. So private companies are usually in competition with other companies. Again, going back to Tinder, if they set you up on bad dates consistently, and there’s an alternative that doesn’t set you up on bad dates, you’re able to switch to that. And the knowledge that there’s competition inspires Tinder to be really careful about the way that their algorithm works, to constantly check it for bias, or for data error, and to improve it over time.

23:44 Caleb Watney: The government though, if I’m a defendant, I don’t get to choose which jurisdiction I wanna be tried in based on how much I like their risk assessment algorithm. And so there’s not the same sort of feedback loop for improvement, or for transparency, or for accountability. And so I think you can totally justify stronger standards; if you wanna call them regulations, you can. But I think it’s really just the government using their contract power. They are having procurement contracts with these private companies. And in the same way that Google, if they’re buying from a third-party vendor, is allowed to put whatever stipulations they want in their contract to ensure that it meets the standards they have, I think the government should be very willing to use the incredible power they have in procurement contracts to make sure that they have full access to check on the data, and to make it accountable to the public.

24:32 Paul Matzko, Host: My brain turned off. I heard the word “regulation,” that’s a dirty word around here.

24:36 Caleb Watney: I know, I know.

24:37 Paul Matzko, Host: You heard it here first: Caleb was in favor of regulations. That’s… No, that’s a good… I think there are a lot of legitimate cautions there about the state’s use of AI, and it would be a mistake to only focus on the positive potential applications where we see them and not strike a cautionary note. So there are some more, some will say fantastical, concerns about superintelligent AIs with hostile intent towards humankind. So why don’t we dig into two related concepts. The first one here, the singularity, develops first. And then someone proposed an interesting thought experiment called Roko’s Basilisk, and we’ll talk about that second. So the singularity, it’s built on the presumption that artificial intelligence will increase in intelligence at the same kind of exponential rate as other areas of tech, most famously Moore’s law about semiconductors, the number of transistors you can fit in a square inch, or a square millimeter.

25:46 Paul Matzko, Host: Now we’re talking about molecular-level transistors, so that curve means that semiconductor chips will become more and more transistor-dense, and do so at an essentially exponential rate. The assumption is that the same thing will be true of artificial intelligence, which means that at some point, not only will artificial intelligences be indistinguishable from human intelligences, they will surpass us. And when that day comes, as they become smarter and smarter and smarter than us, more capable of out-innovating us… So there’s an optimistic use case for this, which is the idea that we’ll have a machine learning-induced human utopia, where machines will do for us better than what we can do for ourselves. There’ll be the end of pain and suffering. We won’t die; we’ll upload our brains to the cloud. We’ll have…

26:38 Aaron Ross Powell: We’ll have true communism.

[chuckle]

26:39 Paul Matzko, Host: We’ll have true communism. ‘Cause clearly, that is the only true utopia. So there’s the optimistic use case here. And it is worth noting, this does kind of come out of golden age sci-fi in the ’40s, ’50s. It’s not an accident that Turing, Alan Turing, came up with the famous Turing test: one of the markers of the advancement of artificial intelligence will be if you can’t distinguish a computer from a person in a conversation, which we still haven’t actually passed. We can kind of fudge the test, and we’re getting there, but we’re not there yet. But it comes out of the post-World War II era: a bunch of mathematicians and sci-fi geeks come up with, “Hey, this could happen.” They all expected it to happen in their lifetime, and obviously, the pace has been slower than what was expected. But again, there’s this belief in a superintelligent AI that will surpass human intelligence. This leads us to Roko’s Basilisk. Now in this regard, I think, Aaron, you brought this to my attention.

27:47 Aaron Ross Powell: Yeah.

27:48 Paul Matzko, Host: What’s your… How did you come across Roko’s Basilisk? ‘Cause it’s been around for a little while.

27:52 Aaron Ross Powell: I have no idea, probably on Twitter or some blog or… It was something that everyone was talking about for a while. And very briefly, it’s simply the idea that a superintelligent AI can turn against us, likely will turn against us, in all sorts of ways. So if you task it with making the world a better place, do everything you can to make the world peaceful, well, the least peaceful thing on the planet is us pesky humans. And so at some point, it starts getting upset with people who are going against its particular set of rules, or people who are interfering with it advancing these goals. And so that gets us to the idea that people who are not sufficiently positive about AI would be considered threats to this AI’s accomplishment of its mission, because if society isn’t all keen on AIs, that’s gonna slow things down, slow down advancement, whatever else.

29:00 Aaron Ross Powell: And so the AI might start picking off, or otherwise punishing, those people who have said nasty things about the possibilities of AI in the past. And these AIs will have access to all of the information, because if we’re griping about AI, we’re griping about AI on Twitter, and that’s there forever. And so the AI will have access to that. And so then it will start going back and saying, “Look, if you’ve been grouchy, Caleb, if you’ve been saying nasty things about AI in the past, there’s a better than average chance you still harbor some of those ill feelings, and so we might as well pick you off.” And so the kind of outcome of this thought experiment is we all better just say nothing but nice things about AI starting now.

29:41 Caleb Watney: Yeah, so there’s a whole range of possible negative consequences that come about from superintelligence. It could be as simple as it feeling totally neutral towards humans, but we give it a poorly defined goal; the famous thought experiment is the paperclip maximizer. If we just tell an AI to maximize the number of paper clips, with no specified end goal or any sort of conditions on that, eventually it will just slowly consume all matter in the universe, including all of us, and turn us into paper clips. And that’s one category of possible harms from superintelligence: poorly defined goal systems. Roko’s Basilisk is usually specifically about a malevolent AI which might come about. Aaron, as you alluded to, it can search back in the history of various Twitter accounts, podcasts, whatever. And if it feels that you were insufficiently devoted to bringing it about faster, it would then go back and either kill you, or if you’re not alive, re-simulate your mind, and infinitely torture you in some computer simulation.

30:49 Caleb Watney: And so then, for the purposes of avoiding this horrible fate, we should all be focused on and dedicating ourselves to helping bring about this malevolent AI, so that when it exists, it doesn’t torture us infinitely. And yeah, I think there are a lot of potential problems with it, but it’s kind of a fun and interesting thought experiment.

31:07 Paul Matzko, Host: I do often wonder with these conversations about the effect of… So there’s kind of a selection effect when it comes to people who do AI research. I’m not the first to observe that maybe the population of Silicon Valley is not representative of humankind as a whole, that it lacks diversity, and not just in the literal ways, gender and race and religion and whatnot, but also in that there’s a kind of person who gravitates towards this kind of research, who maybe doesn’t have the densest social connectedness. They’re college-educated, they’re mobile, they’re moving, they’re not rooted in place, in family and tribe and neighborhood. And so my point in bringing this up is, part of me wonders if when we worry about AI futures, we’re really looking in the mirror.

31:57 Paul Matzko, Host: It’s essentially that you have perhaps a community of folks who are inclined towards a certain point on the spectrum of well-adjusted to sociopath. [chuckle] There’s almost an inclination towards, “I don’t really feel like I need people, and so I’m worried that my AI, that I designed, won’t feel a need for people as well.” Some of our concerns come out of a particular community. So Roko’s Basilisk comes from LessWrong. That’s the name of the website, which is a big part of the rationalist community. A community I enjoy a lot; like Slate Star Codex, even Tyler Cowen is rationalist-adjacent. But again, it’s a community that’s known for… There’s almost a view of people as atomized individual units, who talk about utils. Like, “I’m gonna maximize my utils. What’s the most efficient way I can ingest substances? I’m gonna drink lots of Soylent.” In other words…

32:56 Aaron Ross Powell: Which is… I mean, to tie this into your thesis, named after a product that was quite literally using other people to feed yourself.

[chuckle]

33:03 Paul Matzko, Host: Yes, and someone… That was supposed to be a bad thing in whatever ’70s sci-fi movie it was featured in.

33:11 Aaron Ross Powell: Soylent Green.

33:12 Paul Matzko, Host: Soylent Green, yeah.

33:14 Aaron Ross Powell: Based on a Harry Harrison novel, Make Room! Make Room!

33:17 Paul Matzko, Host: There you go. I knew you’d know your sci-fi references. That was meant to be a bad thing, but now it’s been turned into branding for a very successful consumer product, which is all layers of irony. But again, maybe that’s… Maybe there’s something unusual about the community that’s doing a lot of AI research, or that is AI-interested, right now. And so that would maybe be an optimistic argument, which is to say that as AIs become not just the preserve of a hyper-select subculture or a small community, our AIs will look more like people, which is, there’ll be some really good ones, some really shitty ones. It will be the whole gamut of humanity reflected in our AIs.

34:00 Caleb Watney: So I think a lot of what underlies this is a lot of assumptions about what intelligence is, and what it implies as you have increasing intelligence. Whether or not personality, or malevolence, or benevolence are inevitable consequences of increasing intelligence seems very unclear right now. I would probably lean towards no. It seems like there are a lot of things about human consciousness that we still don’t understand from a purely reductionist perspective. And maybe we will find them out, maybe we won’t. Getting back to your earlier question though about whether there is something about these hypothetical, philosophical thought experiments that comes out of the community, I think there probably is. I think what it may be is that generally, as a community, they have a willingness to bite bullets when they’re thinking through what the logical consequences of axioms X, Y and Z are. And I think that’s an admirable trait, but it gets you to a lot of really crazy scenarios, like the entire universe being devoted to computronium, which is the hypothetical most efficient processing unit per atom.

35:12 Caleb Watney: And I think it’s worth considering some of those, at least as very small possibility events, but it’s also worth recognizing we have very poor track records in terms of actually being able to predict the future. It seems real likely that’s going to continue. And so it’s worth having an epistemic humility about the actual likelihood of any of these things, even if, from our specific axioms, they seem entirely rational. I think it’s good to take a step back.

35:43 Aaron Ross Powell: It raises one question I have about AI and AI advancement, because a lot of the baked-in assumption of this is that there’s a superintelligence that comes out of this. And a lot of the AI that we interact with, like my kids talk to Alexa all the time, and Alexa is Alexa. And Alexa lives in little pods throughout our house, and may live in your houses. But it still is, it’s Alexa, right? But a lot of these AIs aren’t monolithic: there might be the AI that drives my car, which only lives in that car, and a lot of the AI processing that goes on in my phone only happens in my phone, ’cause Apple does that for privacy reasons largely. So is it a mistake to think that AI in the future will even be a big superintelligence in the first place, or will it just always remain these kinds of lower-level systems that aren’t trying to do everything? One may be superintelligent, but its superintelligence only enables it to drive a car. And so even if it wanted to destroy the world, the most it could do is maybe run someone over, if it could even think like that. But it is highly, highly intelligent in very narrow domains.

36:55 Caleb Watney: So I think this is sort of the biggest question about what future we end up in: is it a bunch of discrete AIs that are each industry-specific, or is there one amalgamation of an AI that controls everything? The largest assumption underlying that is about whether or not recursive self-improvement is possible, and what the speed of that is. In a theoretical world where you can get an AI which can then improve itself, presumably, it’s able to improve itself at a rate that’s faster than human engineers can improve it. And then as it gets better, it can presumably improve itself even faster, and it becomes this exponential curve where suddenly it’s light years ahead of the competition. In that sort of world then, hypothetically, the first AI that’s able to reach recursive self-improvement kind of, by definition, becomes the most powerful. Because unless there’s an AI that’s literally a few seconds behind it, then in a matter of hours, it’s going to quickly outpace all other theoretical AIs, crowd them out of existence, disable all of our cybersecurity protocols, because it’s just infinitely smarter than us.

37:58 Caleb Watney: And I think if you take that for granted, then a lot of those concerns about the singularity, and having one AI that runs everything, begin to make more sense. But that’s an assumption to be questioned, and I don’t think it at all seems inevitable, at least, that recursive self-improvement is possible, or even if it is, that it becomes this exponential curve. There are a lot of domains where, as we get better at something, the next unit of improvement becomes exponentially harder. We’re seeing that in Moore’s Law. Moore’s Law is slowing down because it’s just getting chemically near impossible to fit more transistors on a micrometer. And it seems totally possible that that would also be the case in terms of self-improvement for AI.
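The difference between those two assumptions is easy to see with a back-of-the-envelope simulation: one curve where each improvement makes the next one proportionally easier (the fast-takeoff story) and one where each improvement gets harder (the diminishing-returns story described above). The growth rates below are arbitrary; the point is only how differently the two assumptions behave over time.

```python
# Two toy models of capability growth under opposite assumptions.
# Numbers are arbitrary; only the shape of the curves matters.

def fast_takeoff(capability, steps, gain=0.5):
    for _ in range(steps):
        capability *= (1 + gain)      # a smarter system improves itself faster
    return capability

def slow_takeoff(capability, steps, gain=0.5, difficulty=1.15):
    cost = 1.0
    for _ in range(steps):
        capability += gain / cost     # the same effort buys less each round
        cost *= difficulty
    return capability

for steps in (10, 20, 40):
    print(steps, round(fast_takeoff(1.0, steps), 1), round(slow_takeoff(1.0, steps), 2))
# The compounding curve explodes; the diminishing-returns curve flattens out,
# much like the slowdown in transistor scaling mentioned above.
```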

38:39 Paul Matzko, Host: Yeah, there’s this underlying assumption that shows up in the literature about two kinds of scenarios: a fast takeoff versus a slow takeoff. Do you see that recursive intelligence with a hyper-exponential upward curve, that just so quickly outpaces anything else, or will AIs become smarter at more of an evolutionary pace, in the same way that human beings did, albeit on a faster time scale? Because if you expect a slow takeoff, you end up with Andrew Ng, who’s a Google AI researcher. He said that if it’s a slow takeoff, worrying about superintelligent hostile AIs is a bit like worrying about overpopulation on Mars. We’ll figure out how to get to Mars first, and then we’ll worry about the overpopulation problem.

39:30 Paul Matzko, Host: But if it’s a fast takeoff, then well, no, you should worry. Because we’re gonna go from getting to Mars to overpopulation in hours, minutes, seconds. Who knows, right? And by then it’s too late. So there is that kind of baseline assumption about AI growth. And I don’t think any of us are expert enough in the field to hazard a guess at what it is, but it’s something for our listeners to keep in mind. I think that’s all. We’ve covered the basic overview of cool stuff going on in AI, some of the interesting speculation about the future of AI. So Caleb, thanks for coming on the show to talk about that. Do you have anything that you’re working on right now that our listeners would be interested in knowing about?

40:11 Caleb Watney: Yeah. In my work at R Street, I’m working a lot on artificial intelligence policy. I’m working on a paper right now, specifically on competition policy in AI, and kind of how do we think about, is it just going to be Google and Amazon who are running all the AI systems, or are there levers that we can pull now to sort of increase at least the odds of healthy competition in the ecosystem? And some of those policy barriers are things like, what’s the supply of data scientists? If there’s only 10,000 scientists coming out of top universities every year, it’s a lot easier for Amazon and Google to grab them all. And specifically, the number that are in the United States. So there’s an op-ed I recently wrote, that you could link in the show notes if you wanted to, about the importance of immigration in the AI debate, and the fact that we have a ton of really smart AI researchers that are coming through United States universities, and then because we’re so backlogged in high skill visa programs, they’re not able to stay here.

41:10 Caleb Watney: And especially in a world where there is essentially a fixed supply of AI talent, there’s sort of a zero-sum international competition aspect, where every smart AI researcher we have is one less that China has. While generally, I think a lot of the China comparisons can be overblown, and it’s probably not as concerning as some people make it out to be, I think generally, I would prefer cutting-edge AI to be developed in a democratic country.

[chuckle]

41:36 Paul Matzko, Host: That’s not implausible. I think the number I remember from your article, or somewhere, was that 20 years ago, one in 10 Chinese tech graduate students, who were H-1B visa holders, returned back to China. But now that rate has risen: eight out of 10 go back to China. So we’ve seen that real switch, where the brain drain influx is starting to shift back towards China. This ties in for our listeners to a previous episode a week or two ago about the transformation of China, the tech industry, and how it’s transforming both rural and urban China, and the way in which they’re actually attracting people who would have been engineers or executives at Yahoo and Google and Amazon, who are now opting to leave just because they think the prospects for innovation are better in China now than in the US.

42:28 Paul Matzko, Host: So we’re seeing… And I think that ties back into that conversation that we had before. But until next week, be well… Building Tomorrow is produced by Tess Terrible. If you enjoy our show, please rate, review and subscribe to us on iTunes, or wherever you get your podcasts. To learn about Building Tomorrow or to discover other great podcasts, visit us on the web at lib​er​tar​i​an​ism​.org.