Facial recognition tech sucks, but it’s inevitable
(Source: thenextweb.com)


By Christopher Shiostu, in Contributors
(Image credit: US CBP)

Is facial recognition accurate? Can it be hacked? These are just some of the questions being raised by lawmakers, civil libertarians, and privacy advocates in the wake of an ACLU report released last summer that claimed Amazon’s facial recognition software, Rekognition, misidentified 28 members of Congress as criminals.
Rekognition is a general-purpose application programming interface (API) that developers can use to build applications that detect and analyze scenes, objects, faces, and other items within images. The source of the controversy was a pilot program in which Amazon teamed up with law enforcement in two jurisdictions, Orlando, Florida, and Washington County, Oregon, to explore the use of facial recognition in policing.
In January 2019, the Daily Mail reported that the FBI had been testing Rekognition since early 2018. The Project on Government Oversight also revealed, via a Freedom of Information Act request, that Amazon had pitched Rekognition to ICE in June 2018.
Amazon defended its API by noting that Rekognition’s default confidence threshold of 80 percent, while fine for social media tagging, “wouldn’t be appropriate for identifying individuals with a reasonable level of certainty.” For law enforcement applications, Amazon recommends a confidence threshold of 99 percent or higher.
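To make the dispute concrete, here is a minimal sketch (not Amazon’s code) of how a confidence threshold filters candidate matches. The candidate list and its field names merely mimic the shape of a Rekognition-style face-search response; the similarity scores are invented for illustration.

```python
# Illustrative sketch: how a confidence threshold filters candidate face matches.
# The data below is hypothetical, shaped loosely like a Rekognition face-search result.

def filter_matches(face_matches, threshold):
    """Keep only candidate matches at or above the similarity threshold."""
    return [m for m in face_matches if m["Similarity"] >= threshold]

candidates = [
    {"Face": {"FaceId": "a1"}, "Similarity": 99.2},
    {"Face": {"FaceId": "b2"}, "Similarity": 87.5},
    {"Face": {"FaceId": "c3"}, "Similarity": 81.0},
]

# At the 80 percent default, all three candidates count as "matches".
print(len(filter_matches(candidates, 80)))   # 3
# At the 99 percent threshold Amazon recommends for law enforcement, only one survives.
print(len(filter_matches(candidates, 99)))   # 1
```

The same identification run can therefore produce very different "hit" counts depending solely on where the threshold is set, which is exactly the point of contention between Amazon and the ACLU.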
But the report’s larger concerns, that facial recognition could be misused, that it is less accurate for minorities, and that it poses a threat to the human right to privacy, are still up for debate. And if one thing is certain, it’s that this won’t be the last time a high-profile tech company advancing a new technology sparks an ethical debate.
So who’s in the right? Are the concerns raised by the ACLU justified? Is it all sensationalist media hype? Or could the truth, like most things in life, be wrapped in a layer of nuance that requires more than a surface-level understanding of the underlying technology that sparked the debate in the first place?
To get to the bottom of this issue, let’s take a deep dive into the world of facial recognition, its accuracy, its vulnerability to hacking, and its impact on the right to privacy.
How accurate is facial recognition?

Before we can assess the accuracy of that ACLU report, it helps if we first cover some background on how facial recognition systems work. The accuracy of a facial recognition system depends on two things: its neural network and its training data set.
The neural network needs enough layers and compute resources to process a raw image from facial detection through landmark recognition, normalization, and finally facial recognition. There are also various algorithms and techniques that can be employed at each stage to improve a system’s accuracy. The training data must be large and diverse enough to accommodate potential variations, such as ethnicity or lighting. Moreover, there is something called a confidence threshold that you can use to control the number of false positives and false negatives in your results. A higher confidence threshold leads to fewer false positives and more false negatives. A lower confidence threshold leads to more false positives and fewer false negatives.
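The trade-off described above can be demonstrated with a toy example. The confidence scores and ground-truth labels below are invented for illustration; the point is only how moving one threshold shifts errors between the two categories.

```python
# Toy demonstration of the threshold trade-off: raising the confidence
# threshold trades false positives for false negatives. Scores are invented.

def count_errors(scored_matches, threshold):
    """scored_matches: list of (confidence, is_true_match) pairs.
    Returns (false_positives, false_negatives) at the given threshold."""
    fp = sum(1 for conf, truth in scored_matches if conf >= threshold and not truth)
    fn = sum(1 for conf, truth in scored_matches if conf < threshold and truth)
    return fp, fn

scored = [
    (0.99, True), (0.97, True), (0.92, True), (0.85, True),   # genuine matches
    (0.88, False), (0.83, False), (0.70, False),              # lookalikes, not matches
]

print(count_errors(scored, 0.80))  # low threshold: (2, 0) -> two innocent "hits"
print(count_errors(scored, 0.95))  # high threshold: (0, 2) -> two real matches missed
```

In a tagging app a false positive is a mislabeled photo; in a law enforcement search it is an innocent person flagged as a suspect, which is why the appropriate threshold depends so heavily on the application.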
Revisiting the accuracy of the ACLU’s take on Amazon Rekognition

With this information in mind, let’s return to that ACLU report and see if we can’t bring clarity to the debate.
In the US and many other countries, you’re innocent until proven guilty, so Amazon’s response highlighting improper use of the confidence threshold checks out. Using a lower confidence threshold, as the ACLU report did, increases the number of false positives, which is dangerous in a law enforcement setting. It’s possible the ACLU did not consider that the API’s default setting should have been adjusted to match the intended application.
That said, the ACLU also noted: “the false matches were disproportionately of people of color…Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress.” Amazon’s comment about the confidence threshold does not directly address the revealed bias in its system.
Facial recognition accuracy problems with regard to minorities are well known to the machine learning community. Google famously had to apologize when its image-recognition app labeled African Americans as “gorillas” in 2015.
Earlier in 2018, a study conducted by Joy Buolamwini, a researcher at the MIT Media Lab, tested facial recognition products from Microsoft, IBM, and Megvii of China. The error rate for darker-skinned women for Microsoft was 21 percent, while IBM and Megvii were closer to 35 percent. The error rates for all three products were closer ...