
AWS Facial Recognition Platform Misidentified Over 100 Politicians As Criminals

Comparitech’s Paul Bischoff found that Amazon’s facial recognition platform misidentified an alarming number of people and was racially biased.

Facial recognition technology is still misidentifying people at an alarming rate – even as it’s being used by police departments to make arrests. In fact, Paul Bischoff, consumer privacy expert with Comparitech, found that Amazon’s face recognition platform misidentified more than 100 photos of US and UK lawmakers as criminals.

Rekognition, Amazon’s cloud-based facial recognition platform first launched in 2016, has been sold to and used by a number of United States government agencies, including ICE and Orlando, Florida police, as well as private entities. In comparing photos of a total of 1,959 US and UK lawmakers to subjects in an arrest database, Bischoff found that Rekognition misidentified an average of 32 members of Congress. That’s four more than in a similar experiment conducted by the American Civil Liberties Union (ACLU) two years ago. Bischoff also found that the platform was racially biased, misidentifying non-white people at a higher rate than white people.

These findings have disturbing real-life implications. Last week, the ACLU shed light on Detroit citizen Robert Julian-Borchak Williams, who was arrested after a facial recognition system falsely matched his photo with security footage of a shoplifter.

The incident sparked lawmakers last week to propose legislation that would indefinitely ban the use of facial recognition technology by law enforcement nationwide. Though Amazon had previously sold its technology to police departments, the tech giant recently placed a moratorium on law enforcement use of its facial recognition platform (Microsoft and IBM did the same). But Bischoff says society still has a ways to go in figuring out how to utilize facial recognition in a way that respects privacy, consent and data security.

Read the podcast transcript below, or download the podcast directly here.

A lightly edited transcript of the podcast:

Lindsey O’Donnell-Welch: Welcome back to the Threatpost podcast. You’ve got your host, Lindsey O’Donnell-Welch, here with you today. And I am joined by Paul Bischoff, who is the consumer privacy expert with Comparitech. Paul, thanks so much for joining today.

Paul Bischoff: Thanks for having me.

LO: Yeah, so we’re going to talk about facial recognition today. And this is a technology that has really made the news over the past few months, but last week, lawmakers actually proposed legislation that would indefinitely ban the use of facial recognition technology by law enforcement nationwide. And Paul, I know that you have looked a ton at the technology and the privacy challenges there. To start off, what are some of your thoughts on this newly proposed bill? I mean, it’s pretty interesting.

PB: Well, I think at least a moratorium needs to be in place on police use of facial recognition. Right now, there’s pretty much no regulation on how individual police departments can buy face recognition technologies, and they can use them pretty much however they want. And you know, there really should be more regulations on how they can use it, who they can share data with, when they can use it, in what context, things like that. Probably hundreds of police departments in the US have bought into face recognition and are just using it without any sort of regulation or limitation. So I think this is a good place to start. Let’s first of all get rid of it, and then we’ll start working on regulations about what should be allowed and what shouldn’t be.

LO: Right. I totally agree that we need to take a step back here and look at what will work versus what is kind of infringing on privacy. It’s so important to do from a government standpoint, and from a law enforcement standpoint. And you know, when even Microsoft, Amazon and IBM are all stepping back, banning the sale of their own facial recognition technology to police departments and pushing for federal regulation of law enforcement use, this is something that is serious if even tech companies are buying into it, you know?

PB: Yeah, I think part of that is, you know, the whole Black Lives Matter movement has brought to light that face recognition has some issues with racial bias and things like that. And so for those big companies, it’s bad optics right now to be selling police face recognition that can be used to do things like identify protesters. So I think that’s definitely part of it, that it’s good PR for them, but I’m glad that they’ve stopped selling it for sure.

LO: Right. Yeah, and to your point, there have been a ton of stories over the past few weeks around facial recognition in the news, and a lot of it does have to do with the ongoing Black Lives Matter protests, racial bias, and even surveillance and how that is playing into these protests and the government surveillance aspect of it, which makes sense. I’m curious too – more recently, there was this horrifying story of a Detroit citizen named Robert Julian-Borchak Williams, an African American man who was arrested after a facial recognition system falsely matched his photo with security footage of a shoplifter. And, you know, the ACLU came out and filed a complaint and said that this was the first wrongful arrest caused by faulty facial recognition technology. So it’s kind of insane, and that was a pretty scary real-life instance of where this is actually impacting someone’s life because of a fault in the technology. And I know that Comparitech actually did a study a few weeks ago where you tested the accuracy of Amazon’s Rekognition platform, I believe. Can you tell us a little bit more about this study and how it plays into everything that’s going on right now?

PB: Sure. So our study was actually a re-creation of what the ACLU did in 2018. The ACLU took a bunch of members of Congress, took their headshots, and then matched them against a police arrest database of mugshots. So these are suspected criminals, and they just checked to see what sort of incorrect matches they would get. The thing about the ACLU study that Amazon contested afterward is that they only used what’s called a confidence threshold of 80 percent. And Amazon said that was too low. Going back a little bit, face recognition technology doesn’t just say, “Oh, these two photos are of the same person or they’re not” – it’s not a yes or no answer. It usually gives you an answer in terms of a percentage: we’re 80 percent sure that these two images are of the same person. Amazon says that should be set to 95 or 99 percent for police use. But again, there’s no regulation that says police have to use those thresholds. So what we did is, we ran the same study and tested the results at a confidence threshold of 80 percent and at 95 percent, and there were no incorrect matches at 95 percent. But at 80 percent we did see a few incorrect matches, and even at 90 percent we saw some incorrect matches. In total, the Amazon Rekognition technology misidentified 32 members of Congress and matched them against people in our arrest database. We ran this experiment four times, with four different sets of arrest photos, but all the Congress members’ photos were the same each time. And then we averaged the results together.
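[Editor’s note: For readers curious where that threshold actually lives, below is a minimal sketch of the kind of Rekognition CompareFaces call being described, using AWS’s boto3 SDK for Python. The image file names are placeholders, not files from the study; SimilarityThreshold is the confidence cutoff Bischoff describes, and Rekognition only returns candidate matches at or above it.]

    import boto3

    client = boto3.client("rekognition", region_name="us-east-1")

    # Placeholder file names -- not the actual study images.
    with open("lawmaker_headshot.jpg", "rb") as src, open("mugshot.jpg", "rb") as tgt:
        response = client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=80,  # raise to 95 or 99 to suppress weaker matches
        )

    for match in response["FaceMatches"]:
        print(f"Possible match at {match['Similarity']:.1f}% similarity")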

So on average, it misidentified 32 members of Congress as people in the arrest photos. That’s four more than what the ACLU found two years ago. So it would seem that the technology hasn’t really improved all that much. And the other thing we found was that there was definitely some racial bias in the technology – about half of the people who were mismatched were people of color, even though only about 10 percent of Congress consists of people of color.

LO: Oh, wow. Okay. Yeah, and that was just recently, too, right? I mean, you mentioned that the ACLU did this two years ago. And so this goes to show that nothing is changing in terms of this technology getting better.

PB: According to what we found, yes. A big part of the article was also explaining how accuracy is measured and how accuracy works. So we’re not really testing accuracy per se, but we can test whether the accuracy improved by repeating the ACLU’s experiment. And at the 80 percent confidence threshold, which is what the ACLU tested at, the technology does not seem to have gotten any better when we compare the results from two years ago to now. Part of this is basically because face recognition is actually fairly simple. It’s just measuring the distance between your eyes, your nose and the corners of your mouth. And then it starts to factor in other things like your skin color, your hair color, eye color, etc. So I think there’s maybe a limit as to how good it can be. And maybe that’s another reason why companies like IBM have stopped investing so much in it – it doesn’t seem to be getting a whole lot better. I think everybody sort of has access to the same training set. You know, they all scrape publicly available photos online, and they train their algorithms to look at those photos. And there seems to be sort of a limit to how good they can get.
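[Editor’s note: The geometric intuition Bischoff describes can be sketched in a few lines. The toy below is illustrative only – it is not Amazon’s algorithm, and modern systems rely on learned embeddings rather than hand-measured distances – but it shows the basic idea of turning landmark positions into a comparable signature. The coordinates are made up.]

    import numpy as np

    # Made-up 2D landmark coordinates of the kind a face-landmark
    # detector might return (pixel positions within one image).
    landmarks = {
        "left_eye":    np.array([112.0, 140.0]),
        "right_eye":   np.array([188.0, 141.0]),
        "nose_tip":    np.array([150.0, 190.0]),
        "mouth_left":  np.array([120.0, 235.0]),
        "mouth_right": np.array([180.0, 236.0]),
    }

    def signature(pts):
        """All pairwise distances between landmarks, divided by the
        inter-eye distance so the result is scale-invariant."""
        names = sorted(pts)
        dists = [np.linalg.norm(pts[a] - pts[b])
                 for i, a in enumerate(names) for b in names[i + 1:]]
        return np.array(dists) / np.linalg.norm(pts["left_eye"] - pts["right_eye"])

    # Two faces are "similar" when their signatures are close together.
    print(signature(landmarks))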

LO: Yeah, I mean, I’m curious if you looked at what was behind these issues. Is it because of the algorithms themselves? Or is it because the people who are creating the algorithms are themselves biased?

PB: Well, I mean, Amazon’s code isn’t open source or anything, but what I suspect is happening is that when they train the algorithm, they just have more photos of white people to train it on. So it gets really good at recognizing white people’s faces, but they have fewer photos to train the algorithm on for black people or other people of color, or for women, or for older people. So I think the reason for that bias isn’t that the engineers were making it biased. I think it’s that they have a smaller training set – a smaller set of photos that include people of color and women to train their algorithm on.
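[Editor’s note: The disparity Bischoff describes is usually surfaced by breaking error rates out per demographic group. A minimal sketch of that kind of check, on made-up data:]

    from collections import defaultdict

    # Made-up evaluation records: (demographic group, was this subject misidentified?)
    results = [
        ("white", False), ("white", False), ("white", True), ("white", False),
        ("nonwhite", True), ("nonwhite", False), ("nonwhite", True), ("nonwhite", True),
    ]

    tallies = defaultdict(lambda: [0, 0])  # group -> [misidentified, total]
    for group, missed in results:
        tallies[group][0] += int(missed)
        tallies[group][1] += 1

    # A fair system would show roughly equal rates; a skewed training set
    # tends to show up as a higher error rate for underrepresented groups.
    for group, (miss, total) in sorted(tallies.items()):
        print(f"{group}: {miss}/{total} misidentified ({100 * miss / total:.0f}%)")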

LO: And another point, too, is that in actual real-life situations, there are going to be other factors: the weather, the subject being far away from the camera, facing a different direction, running, or on a bike. All these factors and more, as you mentioned, affect facial recognition accuracy and performance. So that’s something that’s difficult to handle for even the most advanced facial recognition software available.

PB: Yeah. And that brings to light the issue with the man who was arrested in Detroit: the image they used to match him was from a video, some security video or something. And a still image from a video is naturally going to be way grainier than a still image taken from a normal camera. So the quality of the photos you put in to match has a huge impact on how accurate the face recognition can be. So I think that’s one of the things we need to look at whenever we’re regulating this: what sorts of images are you allowed to put in and try to match? Because if you’re just putting in grainy photos, you’re gonna get a lot of mismatches.
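[Editor’s note: Rekognition does expose one knob aimed at this problem. CompareFaces accepts a QualityFilter parameter that discards faces detected at too low a quality before any matching is attempted. A minimal sketch, with placeholder file names; whether and how police deployments use this setting is exactly the kind of question current regulation leaves open.]

    import boto3

    client = boto3.client("rekognition", region_name="us-east-1")

    with open("cctv_still.jpg", "rb") as src, open("mugshot.jpg", "rb") as tgt:
        response = client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=95,
            QualityFilter="HIGH",  # drop low-quality face detections before matching
        )

    if not response["FaceMatches"]:
        print("No match at this threshold -- the still may be too grainy to trust.")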

LO: It is interesting that all of this is coming to a head in part because of the Black Lives Matter movement and the protests, because I feel like facial recognition has been in use for a while now in real-life applications, right? I mean, I know the TSA uses it for certain programs like Global Entry and things like that.

PB: Yeah, I first started reporting on facial recognition back in 2013 or 2014. So it’s been adopted very quickly and broadly by private and public entities alike. And it’s been adopted by law enforcement and, like you said, by the TSA, and by immigration and customs enforcement. So all these people are jumping on board. And yeah, it’s a little bit scary.

LO: Yeah, definitely. I mean, it is funny – I’ve seen a couple of instances where people have gone on social media and complained or been concerned or inquired into how it’s being used, but for the most part I’ve seen widespread adoption, and people being comfortable with it, or more comfortable than I would have thought. So that’s kind of interesting to note. But I think that this recently proposed bill begs an important question, which is: what is the future of facial recognition? I mean, is this something that is going to continue? Is it something that, regardless of law enforcement regulations, is going to continue at a higher-profile level?

PB: Yeah. So there are basically two major concerns with facial recognition, broadly speaking. One is: what if it’s not good enough? Then we have a system that disproportionately affects people of color and women and things like that. And the other concern is: what if it works too well? And then what sort of capabilities does that give the government?

But as far as your question about what the future of facial recognition is, I think police will use it to some degree. I think it can be really useful in human trafficking cases, and kidnappings, and things like that – those are good uses for facial recognition that the police can get involved in. But there may be more restrictions on putting up a camera on a public street just to identify everybody who walks by it, that sort of thing. But I don’t think facial recognition is going to go away completely. I think private companies have invested a lot into it. And like I said, it’s not that difficult to do. So we’ll probably see things going open source and being freely available for any developer who wants to pick up facial recognition and put it in their app. So I think we need to be prepared for a world where everyone has facial recognition on their phone and can point it at you as they pass you on the street and try to identify you, that sort of thing. Which is scary, but sort of inevitable with the way that technology is so freely available these days.

LO: Yeah, that is kind of mind-blowing. And it does also bring up questions about consent: can you consent to your face being part of the software, if someone’s just pointing a phone at you, or if there’s just a camera on the street that is using facial recognition technology? So that’s a whole different conversation.

But I do know that cities and governments seem to, at least in the U.S., be taking some steps to either regulate or ban government use of facial recognition. And even beyond the federal bill that was proposed last week, Boston, which is my home city, announced that it would be banning the technology. So I think that this is at least near the top of the awareness level for a lot of local governments, as well as tech companies and academia too. I think it was last week as well that 1,000 technology experts from organizations like MIT, Microsoft and Harvard signed an open letter denouncing an upcoming paper that described AI algorithms claiming to predict crime based only on a person’s face, calling it out for promoting racial bias. So I do think this is something people are really passionate about, trying to make sure it doesn’t get to the point where it’s unregulated and could actually be dangerous for citizens.

PB: Yeah. I think when you’re thinking about regulation, the way to think about it is: how would you write a privacy policy for facial recognition? Privacy policies usually stipulate a few things. We need to figure out who’s allowed to use it – not just which police can use it, but who at a police department is allowed to authorize it – what it can be used for, and who any of the data you collect is allowed to be shared with. Are local police departments allowed to share data with the IRS or the FBI, things like that?

So you have who you’re sharing it with, what’s allowed to be shared, under what context, and what sorts of investigations are needed – are warrants, or any sort of court order, needed to use facial recognition? And then there’s obviously consent as well. Do you need to warn people who are about to walk in front of a camera that their faces are being scanned and used in facial recognition? Does there need to be signage up? These are all questions that we need to ask and address in any attempt to regulate face recognition.

LO: Right. Yeah, I think those are definitely important points. Well, Paul, any other overarching thoughts about facial recognition technology, where it’s going in the future, and whether privacy concerns will continue or get better?

PB: I think they’ll continue. I think we’re already seeing sort of worst-case scenarios in places like China, where face recognition is being used to restrict freedom of movement and freedom of assembly, things like that. So if you want an example of how bad it can get, look at China, particularly the western regions of China, where it’s being used to heavily restrict where people can go and what they can do. So I’m just trying to be an advocate for privacy and avoid that sort of future, and I hope that more people will join in.

LO: Absolutely. Well, I’m sure that what’s happened over the past few weeks will at least raise awareness of this issue. So, Paul, thanks again for joining me today on the Threatpost podcast.

PB: Thank you so much.

LO: And to all our listeners: once again, this is Lindsey O’Donnell-Welch here today with Paul Bischoff of Comparitech. Catch us next week on the Threatpost podcast, and if you have any thoughts or comments on facial recognition technology and privacy concerns, be sure to follow us on Twitter @Threatpost and shoot us a comment or a thought.

 

from: https://threatpost.com/aws-facial-recognition-platform-misidentified-over-100-politicians-as-criminals/156984/