Coded Bias

This helpful, informative documentary is hosted by an attractive, charming Ph.D. candidate at MIT, Joy Buolamwini.  She works in the MIT Media Lab with a special interest in Artificial Intelligence (AI), and after she began to recognize the potential harm in its intrinsic bias, she founded the Algorithmic Justice League.  She is joined by numerous other researchers and writers who enlighten us about AI and the use of algorithms by government officials and corporations.  Also featured is Silkie Carlo in the UK, where an extensive surveillance program photographs passersby on sidewalks and matches the images against police watchlists so that officers can arrest anyone identified as a match.  Carlo directs Big Brother Watch, an organization dedicated to protecting the British people from what she sees as the kind of surveillance George Orwell warned about in his famous novel, 1984.

Most people are unaware of the widespread use of AI.  We’ve heard about facial recognition technology in China, where the government uses it more than anywhere else.  People use it to enter secure buildings and to pay for transportation, groceries, even items from vending machines.  The government used it to identify protestors in Hong Kong.  Moreover, China has a “social credit score” that tracks what citizens say and do, linked to wherever their faces appear, and rates them accordingly.  This score can affect not only an individual’s life but the lives of family members as well.  Obviously, any criticism of the government will cause a score to go down and will determine whether a person receives certain opportunities.  Individual Chinese citizens have little choice in this; for instance, in order to get internet service, one must agree to have one’s picture taken for purposes of facial recognition.
The MIT researchers and others have recognized the potential harm in mass surveillance without consent, not only by governments but also by corporations and public institutions.  More fundamental concerns have to do with biases built into the technology that give it a high rate of misidentification.  The researchers discovered that the technology was devised primarily by white males, so faces of women and people of color are more frequently misidentified and discriminated against.  For example, Amazon’s hiring algorithm proved so discriminatory that the company stopped using it.  Other examples from Wall Street, school districts, college admissions, health care, credit, and a wide range of other systems have revealed inaccuracies in algorithms and facial recognition technology that amount to “algorithmic oppression.”

The upshot is that federal regulation of this technology is paramount to ensuring that citizens are not harmed in ways they may not even recognize.  Startlingly, the researchers estimate that over 117 million Americans are already in facial recognition networks.  Only a few U.S. cities have banned the use of facial recognition without express consent.

Final Thought

Coded Bias gives us fair warning about the widespread use of facial recognition technology, on the basis of its intrusiveness and built-in biases.
