E08 -

This week we discuss the implications of law enforcement’s ability to access genetic information from DNA databases like Ancestry and 23andMe.

Hosts
Paul Matzko
Tech & Innovation Editor
Guests

Matthew Feeney is head of technology and innovation at the Centre for Policy Studies. He was previously the director of Cato’s Project on Emerging Technologies, where he worked on issues concerning the intersection of new technologies and civil liberties, and before that, he was assistant editor of Reason.com. Matthew is a dual British/American citizen and received both his BA and MA in philosophy from the University of Reading in England.

Aaron Ross Powell
Director and Editor

Aaron Ross Powell was the director and editor of Libertarianism.org, a project of the Cato Institute.

DNA databases, which have long been used by amateur genealogists, have burst into the headlines as law enforcement uses them to solve cold case murders. For instance, detectives used an open source database called GEDmatch to catch the Golden State Killer, who murdered and raped dozens of women during the 1970s and ’80s.

Yet while solving crimes is obviously good, there are concerns about violations of genetic privacy. It is now possible to identify the overwhelming majority of Americans, without their consent, from as little as a couple of DNA samples belonging to their second or even third cousins. At the same time, future applications of these databases could propel remarkable medical innovations such as personalized gene therapies and bioelectronics.
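A rough sketch of the arithmetic behind that claim, treating every ancestral line as distinct (that is, ignoring pedigree collapse): the number of direct ancestors doubles each generation, so everyone has 2 parents, 2^2 = 4 grandparents, 2^5 = 32 great-great-great-grandparents, and roughly 2^12, or about 4,000, direct ancestors a dozen generations back. Each of those ancestors anchors a branch of living cousins, which is why a match to even a distant cousin can place an unknown DNA sample within a small set of candidate family trees.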

What are DNA databases designed to do? What are the privacy concerns associated with these databases? Are these databases more or less creepy than facial recognition databases? Which of these services are accessible to law enforcement? What is the panopticon, and how does it apply to a DNA database? How is this any different from a fingerprint database?

Further Reading

“ ‘Genetic Informants’ and the Hunt for the Golden State Killer”, written by Matthew Feeney

The hunt for the Golden State Killer on GEDmatch.

Parabon NanoLabs is creating a DNA database specifically marketed for police.

How does the 4th Amendment work in the age of warrantless searching of DNA databases?

Carpenter v. United States

Oral Argument of Maryland v. King

Transcript

[music]

00:05 Paul Matzko: Welcome to Building Tomorrow, a show exploring the ways that tech innovation and entrepreneurship are creating a freer, wealthier, and more peaceful world. As always, I’m your host, Paul Matzko, and with me in the studio today are Aaron Powell and Matthew Feeney. Now today we’re gonna talk about the hot topic of DNA databases, which have implications for everything from fighting crime and the Fourth Amendment to medical innovation, as well as the answer to that eternal teenage question, “Mom, Dad, who are all these random second cousins at the family reunion? Why should I care?” But let’s start with a killer, the Golden State Killer specifically. Matthew, why don’t you kick us off: why does the capture of a serial killer from the ’70s in California matter to those interested in DNA databases?

00:52 Matthew Feeney: Yes, so some listeners might already be aware that earlier this year, police in California made an arrest of the alleged Golden State Killer. This was a killer who not only killed about a dozen people, but also committed dozens of rapes and a ton of burglaries. And this killer and rapist had been rather elusive for decades, but the police finally arrested a suspect thanks to a website that many people associate with family tree research. So the police in this case had DNA that had been left over at a couple of the crime scenes, but the DNA wasn’t really getting them very far. And someone, and I’m sure there’ll be a really interesting, I don’t know, HBO miniseries about this in the coming years, decided, “Well, why don’t we take this DNA and upload it to one of these sites, like 23andMe or MyHeritage or Ancestry, that specialize in family tree and ancestry research?”

01:58 Matthew Feeney: So they plug it in, and the actual name of the website was GEDmatch in this case. And doing that, they were able to identify the suspect’s great-great-great-grandparents. The suspect was not on one of these websites, but everyone listening should be aware that they have 32 great-great-great-grandparents. So with that information, and information from other people who’d uploaded their DNA to the site, they were able to build a family tree, or actually numerous family trees, to try and identify who the suspect might be. And whittling it down from that family tree total, they took it down to people who were the right age, the right location, the right profile, and they identified one suspect who turned out not to be the right person. But they went down the list to another suspect and waited for what the police have described as “discarded DNA,” which would be DNA found in the trash. Whether that was some hair from a comb or a used toothbrush, we don’t know exactly what kind of DNA it was. And they compared the discarded DNA to the DNA from the crime scene and, boom, they had a match. And so this guy has been arrested and the trial is pending.

03:18 Paul Matzko: Yeah, just so our listeners can get a sense of what this looks like, we’ll have to put an example family tree up in the show notes for how this works. But it’s a matter of triangulation. So you take this database; they’re unlikely to have the actual DNA of the offender, but you can kind of extrapolate into the past the connections between multiple second- or even third- or fourth-degree cousins. So as long as there are two of them within this broad family tree, going back six or seven generations, you can, from the similarities between those cousins, triangulate the suspect in the family tree. And so with that grandfather, with, how many was it, 32 potential family trees there, you can identify which one of those family trees the killer’s in.

04:09 Paul Matzko: And so then you go through a pool of suspects, and that really narrows down that pool of, say, several dozen or several hundred suspects tremendously. Okay, which one of our suspects belongs in that family tree that we just triangulated from a DNA match? But maybe we should take a step back. So we’ve got police catching the Golden State Killer using a DNA database like 23andMe or Ancestry. That’s not what these databases were designed to do, right? They were designed to do something else. I think, Matthew, you said that you are on one of these?

04:43 Matthew Feeney: Yeah, it was really interesting writing about this case, because I actually am a 23andMe customer; I’m also a MyHeritage customer. So, family history is a bit of a side hobby of mine. I’m very interested in history. Not everyone is, and some think this is a rather odd hobby, right? But it interests me. So, at least with 23andMe, you sign up and they send you a kit, you spit in the tube, you send back the tube, and they whirl it around and analyze your DNA. And they can give you ancestry information, so these are the kinds of ethnic groups that your ancestors are associated with. You can also sign up for health information. 23andMe, after a few fights with the FDA, I should add, are now allowed to screen the DNA for certain risks, certain diseases that you may carry.

05:40 Matthew Feeney: And using that data, you can upload 23andMe data to MyHeritage, which not only has an ethnicity estimator tool but also a family tree building tool. And the crucial thing to keep in mind here is that, it’s a cliché but it’s true, we’re all related. The three of us sitting in the studio really wouldn’t have to go that far back at all in human history to find a shared ancestor for the three of us. Like we mentioned earlier, you only have to go back a few generations before you have 32 direct ancestors. And within about a dozen or so generations, you have thousands of direct ancestors.

06:26 Matthew Feeney: So we are all related, and many, many people you pass all the time will be third, fourth, fifth, whatever cousins. And so building a family tree like this for this investigation proved really valuable. And what’s interesting is I think a lot of people found this kind of intuitively creepy, but for reasons that are difficult to pin down, because the police were only able to do this because a lot of this alleged killer’s distant relatives had the same impulse as me, were interested in family history, and uploaded this data. And this doesn’t seem to be a violation of their privacy, because they were the ones who signed up for the service. It’s clearly not a violation of the privacy of the killer, because you don’t really have an expectation of privacy in DNA you’ve left at your murder scene, [chuckle] right?

07:18 Aaron Ross Powell: You’ve murdered the expectation of privacy in other people’s DNA.

07:20 Matthew Feeney: Right! And everyone should be aware that in their own family tree, there are gonna be angels and demons. If you dig far enough, you’ll find people like the Golden State Killer in your family tree, but you’ll also find really nice people too. So it’s difficult to pin down where this uneasiness comes from, especially given that the privacy violation itself is a little difficult to identify. The only thing that I’ve discussed with colleagues here is the going through the trash to find the discarded DNA, but that’s been upheld as constitutional. It’s not an issue, and this is hardly the most sympathetic suspect in a case like that.

08:00 Aaron Ross Powell: What’s interesting about this particular story, aside from just the awesome cleverness of the sleuthing, is that it flips the script on how we think about DNA evidence and the role that it plays in investigation, because until now, DNA evidence almost existed as a verification technique. You did traditional police work and found your suspect, and then the DNA was what told you you’d gotten the right guy, right? But the DNA wasn’t itself what really led you to the suspect, because you didn’t have a database to run it against. To some extent that did exist, but on a very small scale, and so in most cases the DNA could not lead you to the suspect. But instead, what we have now almost makes it look more like the kind of movie-style criminal profiling, like serial killer profiling, where we put this stuff out, we get some information about our killer, we don’t really know what it means or where it points, and then we put it out there and develop a picture that narrows down the field of people: the kind of person who commits this crime is probably over 45, and white, and must be this tall, and so on and so forth. And so we’ve turned DNA into the FBI profiling, like that show, was it Mindhunter?

09:25 Paul Matzko: Mindhunter. Yeah, yeah, yeah, that’s a better show.

09:26 Aaron Ross Powell: That sort of thing. And I wonder if that is part of the creepiness. It’s this notion that we’re now all potentially in this massive system that the government can just reach into and pluck us out of, instead of having to figure out who it is. At any time, the stuff that we’re leaving behind, that we don’t really have any control over leaving, we’re always leaving hair and skin, and it would be awfully hard not to leave a DNA trace everywhere, that stuff now becomes this thing that can be used at any time in this very automated system. That’s what this was: they just take a sample, they upload it, and the computer spits out an answer. It almost de-personalizes the whole process, and I think that level of depersonalization often creeps us out. We’ve removed what looks like the human agency and the cat-and-mouse chase, and instead now you kind of feed the information into a computer, and it clacks away and the lights flash and then this printout slides out and says, “Here’s your man.”

10:44 Paul Matzko: It’s like The Hitchhiker’s Guide to the Galaxy, what’s the answer to…

10:50 Matthew Feeney: Life, the universe and everything.

10:52 Paul Matzko: Life, the universe and everything. And what was it, 42 or… 42, yeah, so it just spits it right out. Well, when it comes to the inherent creepiness of this, there’s literally the building of a composite image that’s baked into the process. So there’s a new DNA service from a company called Parabon. Most of these, like 23andMe and Ancestry, were created targeting kind of amateur genealogists like Matthew. Some of them are open source; people download their information from 23andMe and upload it to things like GEDmatch, which is like the Firefox to 23andMe’s Internet Explorer, I guess, or something like that. And that’s what the police access. But this new Parabon is saying, “No, we’re gonna actually create a product specifically targeted for law enforcement agencies.” And one of their things, if you go on the website, we’ll put a link in the show notes, is that they generate phenotype-based composite images. So, you’re this percentage Middle Eastern, this percentage North African, this percentage Western European, and we’ll create a composite image that looks like you. And it’s creepy ’cause the hit-and-miss rate is all over the place.

12:08 Paul Matzko: They look… It’s like a lot of police sketches. [chuckle] There are some infamously bad ones. My favorite is when they’re doing one of those crime shows and they sit the victim down with the police sketch artist and they come up with this blobby picture of someone and ask, “Does this look like the person who attacked you?” And [chuckle] sometimes you get that look in their eyes where they’re like, “Not really at all. But I’m gonna say so because that’s what I’m expected to do at this point.” But there is something kind of creepy about that, that just based off of my genetic information, an image that may or may not be an accurate reflection of me is being generated, and that’s what the police are now gonna look for. And there’s of course the potential for mistakes and abuse there as well. Like, “Oh, this person looks like a blend of North African and Middle Eastern, that’s who we’re looking for. Okay, go get him, guys.” There’s a lot of potential for police bias in there as well.

13:05 Aaron Ross Powell: But doesn’t it have a baked-in check against, not the abuse, in the sense of them hunting for all sorts of people and using this thing maybe more than they should, but against getting the wrong man? Because the only way that they start this process in the first place is that they have some of the murderer’s, the rapist’s, or whoever else’s DNA and they upload it into the system. So even if the system spits out an image that happens to look like you, and so they think that now you’re a suspect, the obvious step is to then check your DNA against the original source, and that’s going to clear it up right away. So getting wrongfully convicted out of this particular investigative technique seems much more difficult than others.

13:54 Paul Matzko: Well, it only gets you so far. So, Matthew alluded to this: with the catching of the Golden State Killer, they first actually made a false-positive ID of a guy in a nursing home in Oregon, whom they then managed to exclude by doing a direct DNA sample. But there is the potential here for error, depending on how good your match is. So if it’s a second cousin and a second cousin, the overlap of their DNA with yours is fairly close, and you can do pretty well. But if it’s, say, a second cousin and a third cousin, and I don’t have the exact numbers in front of me, but the farther away they are, the weaker and less certain the match is. You’re really only narrowing it down to a family or a multi-family group. The closer the match, the farther down the tree you can go, closer to the roots; the weaker the match, the higher up you are. So you have a real potential for false positives among family members.

14:49 Paul Matzko: The historical example would be President Thomas Jefferson, who… Well, it is widely believed that he had an affair with Sally Hemings, his slave, though whether it’s an affair or rape is questionable, given that consent with a slave is problematic. Well, we actually don’t know that for sure. What we know is that someone in his family did; there were probably eight male Jeffersons of his generation or the next, one of whom had sex with Sally Hemings. So there’s a problem there, right? It can lead us to a group, but then it can actually confirm our existing biases about which one we think it must be. So it’s a tool, but it can be over-predictive. Does that make sense?

15:39 Aaron Ross Powell: I wonder if the result then is… So one of the libertarian concerns about these kinds of databases is that if they become useful from law enforcement’s perspective, the government will then want us all to contribute our DNA to them, because they become more useful the more people who are in them. Does it just become part of… I can see the kind of nightmare libertarian scenario of [chuckle] them just gathering it, like at your elementary school. You send your kids to elementary school and it’s part of physical fitness day or whatever, they take a DNA…

16:08 Paul Matzko: Presidential physical fitness. [chuckle]

16:10 Aaron Ross Powell: Sample of this kid. But I wonder if the very problem you’ve identified almost solves that problem for the government, without it having to institute a program of forcibly getting people’s DNA. Because as more and more investigators use this technique… We have this case, and it’s been used a handful of times, but it’s not like a standard investigatory technique. But as it becomes one, where they’re just like, “We’ve got DNA evidence; the first time we find DNA evidence, we’re gonna upload it and we’re gonna see what we get.”

16:39 Aaron Ross Powell: It’s just a standard step. Then the likelihood that any one of us will at some point get… The cops will come to our door and say, “Hey we, you know, think you might be a match. We need to get a direct DNA sample from you.” That’s gonna go up to the point where you can imagine a world where any one of us has been harassed in this way multiple times. And so you get sick of getting harassed and so the way you stop getting harassed is to say, “Well, screw it. I’ll upload my specific DNA to the database. So then they don’t even come to me in the first place because they’ve already checked their sample against mine and ruled me out.” And so we all kind of opt-​in just to get rid of the harassment of people wanting us to check it out.

17:23 Matthew Feeney: Yeah, there’s this very creepy prospect of law enforcement just slowly building the family tree of the United States [laughter] with all this cooperation and sharing of files. It’s by no means here yet, but that’s the nightmare scenario, right? That they’ll be able to do this. And I think Aaron alluded to this earlier, which is that this is something where you can be identified despite being absent from the dataset, right? They can find you even though you never uploaded the DNA yourself. And again, the Golden State Killer is not a particularly sympathetic character by any means, but like Aaron said, we leave our DNA all over the place, and it’s rather odd to think that even if you’re the sort of person who takes steps to think, “Okay, I’m only gonna pay in cash, and I’m going to use encryption, and I’m gonna do everything I can to be invisible to the state,” just by walking around, you’re going to be leaving traces by virtue of biology. And that’s an interesting and disturbing prospect, because many people will feel that their privacy is being violated by hundreds of their distant relatives being interested in history, which is a strange idea.

18:44 Aaron Ross Powell: Actually, quick question about that, then. Do we find that more or less creepy than widespread surveillance, like facial recognition and cameras everywhere? And if it’s more or less creepy, what is it about it that makes it so? Because they’re both similar: facial recognition is you out in the world leaving information about yourself that the technology is now making accessible.

19:09 Matthew Feeney: So I think it goes back to what you mentioned earlier, that it depends on the investigatory techniques. Both of these can be used in different ways. You can imagine a situation where you have a suspect on a videotape, right? And you think, “Okay, I wanna find out who this is, so let’s plug this face into whatever facial recognition database and something will spit out.” This isn’t really like that. It’s more like, “Let’s have a canvass of faces and see whatever hits; just throw it into these databases and we’ll get a lot of pings, and using that, we’ll investigate more.” I do find facial recognition, the prospect of that being ubiquitous, very creepy.

19:52 Aaron Ross Powell: More creepy than the DNA stuff?

19:54 Matthew Feeney: At the moment, yes… But I think I might only think that because facial recognition is actually more prevalent than many people realize. In fact, about half of American adults are already in some kind of facial recognition database. So, yeah, that’s my gut feeling at the moment, but that will change as this kind of technique becomes more widely used.

20:15 Paul Matzko: And not that this would allay your concern, but there’s an inevitability to this, which is that the percentage of people you need to actually opt in to a database is very, very small. So 23andMe has five million samples uploaded. By one estimate, perhaps 94% of the US population could already be identified based off that, matched to a second cousin with a reasonable degree of certainty, from five million samples. I mean, five million as a percentage of the national population is very small. I can’t do the numbers off the top of my head, but it’s a very small percentage. You do not need many more to get essentially a 100% identification ratio, to a reasonable degree of certainty, to a second cousin. And that’s gonna happen, so you don’t need the federal government to roll out a database ’cause we’re already almost there with a privately provided database.
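(For reference, the figure Paul reaches for: five million samples out of a US population of roughly 325 million works out to about 1.5 percent.)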

21:11 Aaron Ross Powell: Well, then… And this is a question for Matthew, then, on the legal side, because the… What is it, GED…

21:18 Paul Matzko: Match.

21:18 Aaron Ross Powell: Match.

21:19 Paul Matzko: Mm-​hmm.

21:19 Aaron Ross Powell: That’s like an open source system that anyone can just pop in and access and look things up, but 23andMe is not. Like, I can’t log in to 23andMe and just start looking up people by DNA information. So, what’s the legal framework here? Like, if they… 23andMe may be able to identify 94% of the population, but is that technique accessible to law enforcement right now?

21:45 Matthew Feeney: Well, in fact, with a lot of these services, you can find relatives, and you’ll get automated emails that say, like, “Oh, we found an estimated fourth cousin. Do you wanna connect?” And it’s like a social media site. You connect and you can compare whatever segments of DNA you share. And there, it seems really difficult to make a privacy argument, right, because both of you have volunteered for the service for that reason. It’s not as if we uploaded it not for family history but because we wanna find everyone who has a certain gene for, I don’t know, brown eyes or whatever. And that’s the really interesting thing about this case: the privacy argument seems rather weak, because the only reason someone would have uploaded that information to GEDmatch is to do the kind of thing that was done, which was to identify distant relatives. And yeah, the legal argument is not helpful if you’re the sort of person creeped out by this. There was some discussion about this case, obviously, among privacy scholars after it was announced, about whether there’s a possibility that maybe the police violated the law because they had to fake being someone in order to upload…

23:03 Aaron Ross Powell: The terms of service violation…

23:04 Matthew Feeney: Right. But it turns out that that’s totally fine. And actually, there are provisions of the Computer Fraud and Abuse Act that allow police to do this. So, yeah, not much to go on legally as far as protections go.

23:16 Paul Matzko: So there’s a comparison, I think, here to other Fourth Amendment issues, like cellphone tracking, which, Matthew, you’ve written about some, so I’ll get your input in a second. Your cellphone is constantly pinging against towers, and it’s very hard to live in the modern world without your cell phone pinging on towers. And for a long time, law enforcement argued that they did not need a warrant to get access to that information.

23:46 Matthew Feeney: Yeah.

23:47 Paul Matzko: That just because you left that information out in the world, it’s now fair game for the police. Like the trash behind your house: the police can rifle through that without a warrant. Recently, courts have said, no, you actually do need a warrant to get that ping data from providers, at least in some cases. So I would propose that there’s a similar situation here. One of the problems with GEDmatch is that the police didn’t have to get a warrant to do anything that they did in that process.

24:17 Matthew Feeney: Well, yeah, the question is, who would you even serve a warrant to? In the sense that they’re just using the information that the website’s providing them. Again, it’s not like GEDmatch had the suspect’s DNA and the police wanted to get the DNA data to compare to the crime scene. And if you look at websites that do this, 23andMe, Ancestry, whatever, they say, “Look, we comply with valid court orders.” But if you look at their data, it’s very rare for police to actually go to these sites at the moment. We should probably expect this to increase, but it’s still a very rare technique. And in fact, I think I’m correct in pointing out that most of the time they actually do this, it’s not for the kind of cases we’re discussing here but mostly to do with identity fraud and things like that.

25:06 Matthew Feeney: And yes, you’re right to point out that recently the Supreme Court made a decision related to a case about cell site location information, but it was very, very narrow. Any listeners who want to look it up can find Carpenter v. United States. That had to do with police access to physical location data as obtained via cell tower location information. The court found, five to four, that yes, it is a violation of your reasonable expectation of privacy in your physical location for police to gather 127 days’ worth of that information without a warrant. And they were like, “Well, finding out your physical location for more than six or seven days without a warrant, that’s not okay. You do need a warrant for more than six or seven days.” But it is not a case that particularly helps out here, that’s for sure.

26:09 Paul Matzko: There was another court case that I think applies, and this is more of an Aaron kind of question.

26:16 Aaron Ross Powell: I like those. [laughter]

26:17 Paul Matzko: It was actually a dissent by the late Antonin Scalia, who was joined by the more liberal members of the court: Ginsburg, Sotomayor, and Kagan. And it was about a situation in Maryland where someone’s DNA was, again, used in a similar method. But it was not a private database; it was basically a familial match to a state-maintained DNA database. And the court ruled that was okay, but dissenting from that five-to-four decision, Scalia wrote, “Perhaps the construction of such a genetic panopticon is wise, but I doubt that the proud men who wrote the charter of our liberties would have been so eager to open their mouths for royal inspection.” And then Matt Ford, a journalist at The New Republic, commented, “Six years later, it turns out that the American people may have built that genetic panopticon themselves, one self-swab at a time.” Which is great writing, but when Scalia talks about a genetic panopticon, what’s he referring to there, and do you think that actually applies to this situation?

27:26 Aaron Ross Powell: Yes. So when you put this in our outline for today, I commented that I wasn’t sure how the metaphor worked. So for listeners who aren’t familiar with the Panopticon: Jeremy Bentham thought it was an amazing idea that he had thought up, and then Foucault came along later and said, “No, no, this is actually a horrific idea,” and I think Foucault was more right. But it was Bentham’s perfect prison. So what you have is this prison that’s built in the round. In a circle, all around, are jail cells, and the bars are all facing inwards towards the center of the big room. And in the center of the big room is a tower. And a person, guards, can sit in that tower. And the guards are not visible to the prisoners, but the prisoners are all visible at any time to a guard who happens to look in that direction.

28:27 Aaron Ross Powell: And so the idea is that the guards can keep their eyes on all of the prisoners at once. But where it gets particularly creepy is that because the guards can’t be seen by the prisoners, no given prisoner at any time knows if they’re being watched. And the idea then is that you don’t actually have to be watching the prisoners all the time, because the prisoners will kind of start self-policing: they will assume, at any given moment, that they’re being watched, and so they’ll act as if, at any given moment, they are being watched. And so, to some extent, you could get away with having no guard in the tower, because the prisoners all end up assuming a guard who may not even exist.

29:08 Aaron Ross Powell: So that’s the Panopticon. And you can see that it makes a lot of sense with ubiquitous surveillance and facial recognition, because when you’re out in public, or out anywhere, you could be being watched at any time. So you’re going to act as if you’re being watched, and you’re gonna moderate your behavior with that in mind and not do anything that you think might get you in trouble. But I’m not totally sure how that applies to a DNA database, because it’s not like your DNA is being watched potentially at all times, and you’re leaving this stuff all over the place no matter what you do. And because we all know that Lamarckism is not correct, your DNA does not somehow have embedded in it the information about your behaviors and activities.

29:53 Matthew Feeney: Yeah, I think Scalia is rightly renowned as a good writer, but I’m not sure his pen does justice to the facts of the case here. I do think that in this case, Maryland v. King, Scalia’s dissent will in the future, I’m sure, be viewed as a very prescient piece of writing. The case, though, involved Maryland’s database of cold case DNA. So there’s this guy, Alonzo, I believe his name was Alonzo King, who was arrested for assault. And when he was arrested, under Maryland law his cheek was swabbed. And then there was a ping: it matched DNA related to an unsolved rape. And he was convicted of that rape, and that was the appeal to the Supreme Court, where he was making, and I think it’s clear he was right to point this out, a rather plausible claim that you need some degree of suspicion. Maryland’s argument was basically that this is just like fingerprinting; it’s just used to ID people, no different from any other ID verification method. But as we’ve discussed, the amount you can find out about someone from their DNA goes well beyond identifying the person: it reveals things about their family, medical conditions they might not even know they carry. It’s a very, very revealing piece of…

31:18 Paul Matzko: And there are contamination issues. I mean, just because we find their DNA doesn’t mean that they were actually at the site at that time committing that murder. I mean, there’s… Yeah.

31:25 Matthew Feeney: Sure.

31:25 Aaron Ross Powell: But I guess, I don’t… You say it’s not like a fingerprint and that yes, we can find all this other information. But in the circumstance at hand… So we have DNA from a crime scene that we found, and now we have a person who we’re curious about and we’re gonna check to see if they match the person who was at the crime scene. That looks indistinguishable from the fingerprint thing. So yes, they could also have found out that he was at risk for Parkinson’s disease, but that’s utterly irrelevant to anything they’re doing. I guess I’m having a hard time seeing why this ought to creep us out more than fingerprints, that you fingerprint someone when you take them in for whatever reason, and then you happen to just also upload that fingerprint to a database to see if that person connects to any past crimes. That doesn’t seem terribly bothersome to me.

32:25 Paul Matzko: Well, I mean, is there an issue here of overconfidence in the methods? Even if the use of the DNA in that situation as a piece of supporting evidence is defensible, using it as, “Oh, we found his DNA, therefore…”, absent any kind of circumstantial evidence or other evidence of him being at the crime scene or being a suspect, just that bit of information alone is enough to secure a conviction. In other words, the specifics of the case, the way in which they used it, is what’s problematic, not the fact that they used that evidence, period. Does that make sense?

33:03 Matthew Feeney: Yeah, and consider the slippery slope we’re on. Because I think Aaron is right to point out that, look, when we’re talking about violent criminals, you’re not gonna get many people concerned here. But listeners should, if they have the opportunity, actually listen to the oral argument of Maryland v. King. It’s one of the most interesting pieces of oral argument I’ve heard, because the Maryland attorney stands up and says, “Well, since we’ve started this, we’ve secured so many convictions and so many arrests.” And Scalia jumps in and says, “Oh, that’s great. I bet you could get even more if you conducted even more unreasonable searches and seizures.” The point being that you could always defend the collection of more data by arguing that you will secure convictions for violent and serious crimes. And that’s right, but that shouldn’t be used as a justification for gathering an increasing amount of data, period.

33:56 Paul Matzko: Yeah. Well, this is good, but let’s move the conversation to the more positive spin. I think it’s interesting that none of us… I mean, there are risks to how this information is used. There are potential missteps, false IDs, but there’s nothing inherent to the idea of DNA collection that is necessarily, fundamentally anti-liberty or anti-libertarian. And it would be a mistake, I think, to only look at the potential downsides. Let’s look at the advantages of a DNA database for ordinary people’s prosperity, health, and happiness. The two examples that come to mind are adoption, the case of adopted folks trying to find their birth families, and the medical innovation implications. On the adoption front, there is still a concern: you may want to find your birth parent, but that doesn’t mean your birth parent wants you to find them.

34:58 Paul Matzko: And this makes it easier to breach that barrier. You used to get stopped because you couldn’t find your way or the documents were lacking at your orphanage or whatnot, but now there’s a chance you’ll be able to find your parent even though they’ve gone out of their way to make it hard for you to do so. So I imagine there’ll be some drama over individual cases like that. On the flip side, there are a lot of folks who are gonna find birth parents and both sides will be delighted, and thank goodness there was that DNA database.

35:29 Matthew Feeney: Yeah. Well, not being a parent myself, but I think anyone who gives up a child for adoption must know that there is a chance, even if they want to remain anonymous, especially in the age of 23andMe, that this could happen. So you give up your child for adoption, and then years later that child spits in a tube and gets the ancestry information back, and you think you’re in the clear because you didn’t use the service, but it turns out your brother did, right? So then your child has figured out, “Oh, well, I’ve identified an uncle, but I haven’t identified the parents.” And you don’t have to be Sherlock Holmes to figure out who your parent is once you’ve discovered that. So I’m trying to think about what kind of solution there could be to this problem, but I don’t even really view it as a problem in the way I think you’re trying to highlight it.

36:20 Paul Matzko: Well, there’s that sense of… The way I would put it is that, to think of it as a problem as I was kind of mooting, is to misunderstand who owns your genetic information, right? Yes, your parents have an identity and there’s a certain amount… Their genetic code is unique to them, and you can maybe argue some kind of ownership right over that. And we do, which is why these organizations have you sign waivers, that you’re signing the waiver to your privacy right of your genetic information. At the same time, you pass on a significant portion of that genetic information to your children. They own a right to that information. And so, in other words, the genetic overlap between you and your parent belongs to both of you. Right? Just as much as you have the right as a parent to control your genetic information, so too do your kids. Yet, that information overlaps, and if that… That’s something that you really just can’t… You both have a legitimate ethical claim to ownership of that.

37:24 Aaron Ross Powell: I can think of a potential positive related to adoption that isn’t limited to just adoption. So we’re recording this on August 2, and tomorrow, August 3, on my other podcast, Free Thoughts, we’re releasing an episode with Adam Bates about the refugee situation. And refugees face a lot of horrors, some of them the result of their situation, quite a lot of them the result of the way that governments treat them. But one of the problems that they have is, they get admitted to a new country, a refugee makes it to the US, and now what? They’ve left behind everything that they had. They maybe don’t know the language, they maybe will have a hard time finding a job, whatever else. This might enable them to… Families are powerful networks for support, and so you can say, “I don’t know anyone in this country, but if I can upload my information to 23andMe, I might be able to find some second cousins, some third cousins who… They might not want to help me, but they also might want to help me.” And it gives you a potential ability to find a support network in an easier way than just stumbling around asking people.

38:37 Matthew Feeney: So your mention of refugees reminded me of another, not intentional, but definite benefit of these kinds of websites, which is that they are used by racist assholes who want to prove their purity. And it turns out that people who claim to be white supremacists might be a little bit African, or a little bit Jewish. And maybe an added benefit of this is that it’s actually helping educate more people about that old cliché that we’re all related and that, actually, you shouldn’t care as much about your race as you really do. But of course they will claim 23andMe is a Jewish conspiracy, so we can’t take their input. But I do find all of that kind of stuff actually a real added benefit. If we can learn more about the history of the species and migration, that’s an added benefit, I think.

39:27 Paul Matzko: Yeah. It’s like the white supremacist’s version of the Henry Louis Gates show, I think it’s on History, where they go through various celebrities’ family trees. Inevitably it turns out, no matter who the celebrity is, “Your family owned slaves.” Because it’s rigged.

39:42 Matthew Feeney: Is this “Who Do You Think You Are”?

39:44 Paul Matzko: That sounds right.

39:45 Matthew Feeney: Yeah, this is another American show stolen from the great British television.

39:48 Paul Matzko: Oh is it?

39:49 Matthew Feeney: Yeah but, you know.

39:51 Aaron Ross Powell: Just refined and made better. As we did with the governmental system that they tried to…

39:56 Matthew Feeney: But you didn’t do it with The Office. [chuckle]

39:58 Paul Matzko: Well, that’s arguable, but… So I think the last big point we should make here is to tie this to the advantages for medical innovation. You mentioned refugees who are disconnected from family networks, but whether you’re a refugee or you’re adopted, or for whatever reason, you may not have access to family knowledge about medical conditions. I know that there’s a history of colon cancer in my family, so because of that, I’m taking steps to keep an eye on it; I’m doing testing earlier than I would if I didn’t have that knowledge. So there’s a real, huge advantage to these databases in allowing people to detect those kinds of family issues that they wouldn’t be able to otherwise. This should literally extend people’s lives, because they’re gonna be able to take earlier preventive measures.

40:47 Paul Matzko: So that’s a big component; that’s the most obvious one. The other one that comes to mind: 23andMe just signed a deal with GlaxoSmithKline for $300 million to use 23andMe’s database to help create targeted medicines over the next decade. And the idea, again, is that if you can find all these genetic markers across large populations and multiple family trees, they’re going to be able to do a better job of identifying which genetic markers correlate to which disorders and diseases, and then target those very specifically. And there comes the potential, someday in the future, of a CRISPR-enabled boutique medicine targeting a particular disease across particular family lines and essentially eradicating whole categories of genetic diseases.

41:42 Paul Matzko: So, there are really cool possibilities here going forward, and I think it would be a mistake to allow our concerns, legitimate concerns over privacy, law enforcement use, national databases, to impede some of the beneficial aspects of these databases for people’s health and prosperity and happiness. So on that note, I think we’ll call it a close for today’s episode. Thank you for listening. Until next week, be well.

[music]

42:15 S4: Building Tomorrow is produced by Tess Terrible. If you enjoy our show, please rate, review, and subscribe to us on iTunes or wherever you get your podcasts. To learn about Building Tomorrow or to discover other great podcasts, visit us on the web at libertarianism.org.