Description

The Facial Agency project was started as a way of adding transparency to the often “black box” nature of how facial recognition software is built and deployed. Given the current climate of extreme datafication of the body, and the face in particular, we both felt motivated to draw attention to the gap between how much agency an individual typically wants over their data and how much agency they are likely to actually have over it. Making such concerns visible meant building a system that acknowledged both that users themselves produce massive amounts of data about themselves, digitally replicating their faces across the Internet, and that governmental agencies are deeply invested in the face as a means of verifying the identity of their citizens. This project is therefore concerned not only with the actual facial data that is being gathered and kept, but also with the ways in which that data is shared and cross-fed into other algorithms, often non-consensually. The project, then, is not meant to scare those who participate in it, but to provide further knowledge about how these systems work and to offer vocabulary and resources for an informed public conversation about the growing use of biometric systems powered by facial data.

The image below explains, in general terms, how the left and right portraits are generated:


You can find a further description of how we calculated the Desired Agency (DA) score and the Actual Agency (AA) score by clicking on the highlighted URLs.

Once we have scores for both the DA and the AA, we process the portraits through a face-detection script that pixelates the face based on those scores. First, using the Python library OpenCV and a face-detection technique called a Haar cascade, the script attempts to find a face. If it identifies a face within the picture, it crops the area that holds the face and resizes that crop based on the DA or AA score, effectively pixelating the image; the higher the score, the more pixels the script generates and the clearer the portrait becomes. The newly pixelated version of the face is then placed back into the photo and the image is printed.
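For readers curious about the mechanics, here is a simplified sketch of that detect-and-pixelate step (our full scripts are on the Code page). The file names, the normalization of the agency score to a 0–1 range, and the mapping from score to pixel-block count are illustrative assumptions, not our exact implementation:

```python
import cv2

def pixelate_face(image_path, agency_score, out_path="portrait_out.jpg"):
    """Detect a face with a Haar cascade and pixelate it.

    A higher agency_score (assumed here to be normalized to 0.0-1.0)
    yields more pixel blocks, i.e. a clearer face.
    """
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Load the frontal-face Haar cascade that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return  # no face found; leave the portrait untouched

    x, y, w, h = faces[0]          # take the first detected face
    face = img[y:y + h, x:x + w]   # crop the facial region

    # Map the score to a block grid: a low score means few, large blocks
    # (heavily pixelated); a high score means many, small blocks (clearer).
    # This mapping is an assumption for illustration.
    blocks = max(2, int(agency_score * w))
    small = cv2.resize(face, (blocks, blocks),
                       interpolation=cv2.INTER_LINEAR)
    pixelated = cv2.resize(small, (w, h),
                           interpolation=cv2.INTER_NEAREST)

    img[y:y + h, x:x + w] = pixelated  # place the face back into the photo
    cv2.imwrite(out_path, img)

# Hypothetical usage: a portrait processed with a low agency score
# comes out heavily pixelated.
pixelate_face("portrait.jpg", agency_score=0.3)
```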

We have included the Python scripts we’ve written on our Code page.

Sol and Aaron are grateful to our classmates in the SurvDH course at the 2019 Digital Humanities Summer Institute, led by Christina Boyles and Andrew Boyles Peterson, for the initial inspiration for this project.