Coded Bias review: Remarkable documentary on Netflix examines racist facets of facial recognition systems


Coded Bias rounds up the troubling ways artificial intelligence is being used around the world to assess the everyday lives of billions of people — intermittently reminding us of a long and grim fight.

A still from Coded Bias by Shalini Kantayya, an official selection of the U.S. Documentary Competition at the 2020 Sundance Film Festival | Courtesy of Sundance Institute

Language: English

The framing story of Shalini Kantayya’s documentary Coded Bias (released on Netflix on 5 April) is that of Ghanaian-American computer scientist Joy Buolamwini of the MIT Media Lab, who realised that most facial recognition systems she encountered were more likely to detect her face if she wore a white mask.

Buolamwini realised this while working on a pet project, the ‘Aspire Mirror’, a mirror that could, say, superimpose a lion’s face over her own, or the face of someone who inspires her, like Serena Williams. Her initial findings were troubling enough for Buolamwini to dig deeper and chart the true extent of racial and gender bias in artificial intelligence-based systems in America and around the world — research that put her on a collision course with some of the biggest tech companies in the world, including Amazon.

Early in the film, Buolamwini lays down the case against these companies’ facial recognition technologies during an internal briefing at MIT Media Lab: “My own lived experiences show me that you cannot separate the social from the technical. (…) I wanted to look at different facial recognition systems, so I looked at Microsoft, IBM, Face++, Google and so on. It turned out these algorithms performed better with a light male face as the benchmark. They did better on male faces than on female faces, and on lighter faces better than on darker faces.”

The beauty of Coded Bias is the way it expands upwards and outwards from this starting point, rounding up the small and big ways artificial intelligence is being used around the world to assess the everyday lives of billions of people — and, more troublingly, to make resource allocation decisions in real time. Quite simply, AI-based systems are (often without our knowledge) making decisions about who gets housing, or a car loan, or a job. In a lot of cases, the people affected by these decisions don’t even know the criteria used by the software to adjudicate their lives. And, of course, when it comes to surveillance and other, even more punitive, forms of technology, it is the poor and the marginalised sections of society (“areas where there’s a low expectation of human rights being recognised”, as a line from the film explains) that become the guinea pigs, testing the limits of the technology.


In this Wednesday, Feb. 13, 2019, photo, Massachusetts Institute of Technology facial recognition researcher Joy Buolamwini stands for a portrait at the school, in Cambridge, Mass. | AP Photo/Steven Senne

A small but disturbing scene in London sees the police harassing and eventually charging an old man who, while walking down the street, pulled up his jacket to conceal his face from a facial recognition camera. We are shown how protestors in Hong Kong used laser pointers to confuse the cameras — and how the spray-painting of a security camera became a rallying moment, symbolising democratic values. And finally, towards the end of the film, we see the logical endpoint of the surveillance state: China’s ‘social credit’ system. In China, if you want internet access, you have to submit yourself to facial recognition. From that moment on, everything you do affects your ‘score’, and the scores of your friends and family. Criticising the Communist Party may very well deprive you or them of basic freedoms like travelling out of the state/province, or you may be punished in some other way.

Coded Bias unfurls all of these remarkable case studies with a little help from women who’ve written extensively on these interrelated matters of math, policy and technology.

Like the futurist Amy Webb, author of The Big Nine, who explains how exactly the ‘big nine’ (the six American and three Chinese firms that are the biggest investors in artificial intelligence) are a part of this whole mess. Or the mathematician and data scientist Cathy O’Neil, author of Weapons of Math Destruction (a quite brilliant treatise on how technology reinforces existing biases, and a New York Times bestseller in 2016). I had followed O’Neil’s work long before I ever heard of Coded Bias, and it was a pleasure to see her in the movie, dropping truth bombs at an impressive rate.

O’Neil also functions as one of the emotional centres of the narrative — in middle school, her chauvinist algebra teacher had told her she had “no use” for math since she was a girl. In the present day, we watch the amiable, blue-haired O’Neil playing math games with her young son, in one of the few moments of uncomplicated peace and levity in the film.


A still from Coded Bias

Moments like that one also underline the fact that Coded Bias isn’t a strait-laced ‘talking heads’ documentary. There is a lot of playfulness, whimsy and symbolism in its juxtapositions: whether it’s Buolamwini getting her distinctive hairdo done just right while talking about how she had always dreamt of getting into MIT (the subtext being that MIT isn’t exactly overflowing with women who look like her), or a member of Big Brother Watch UK (a civil rights watchdog organisation) reading aloud from Orwell’s Nineteen Eighty-Four.

The makers of Coded Bias also make it clear that while the overall scenario remains bleak, small victories are being quietly racked up by women like Buolamwini, O’Neil and company. Thanks in part to Buolamwini’s research, in 2020 Amazon declared a one-year moratorium on law enforcement use of its facial recognition technology. In Houston, middle schools stopped using a controversial, AI-based ‘value-added’ system that assessed teacher performance. IBM has shut down its facial recognition operations altogether, arguing that the technology poses a threat to civil rights.

These are important wins, but as Coded Bias reminds us intermittently, the fight ahead is a long and grim one.


