The Internet of Faces

Photo by Etienne Girardet (Unsplash)

Log off all you want — throw your phone in a river, if you wish — but somewhere, someone will still be looking at you.

R.E. Hawley (2021)

There’s a scene in Alex Garland’s Ex Machina (2014) that has been haunting me lately. Ava, the android, explores the hallways in the house of her maker, the enigmatic Nathan, and comes across a wall of synthetic faces. She reaches towards one; it is her own face. 

In a world where surveillance technology, algorithms, and artificial intelligence proliferate, is our own face still ours to own? What does the human face, of flesh and blood, mean on the internet? How do we negotiate seemingly opposing desires for our faces to be seen (authenticity) and unseen (privacy) by the internet? 

In a 2018 essay titled “The Face as Technology”, Zara Dinnen and Sam McBean argue that “the meaning of the face is changing and emergent: from the ubiquity of CCTV, selfie-culture and the portrait mode of apps such as FaceTime and Tinder, to celebrities as ‘(inter)face-objects’, to the facial recognition software that links such public and private, personal and common modes” (p.123). The Internet of Things is an Internet of Faces — internet governance necessarily concerns the face and biometric technology. 

The Human Technology Institute defines biometric technologies as,

any programs or systems which use biometric data to derive, assess and/or analyse information about people. Facial recognition technologies are a sub-type of biometric technology, as they can be used to verify, identify or analyse people through face data (Davis et al., 2022, p. 17).

Biometric facial recognition technology (FRT) is increasingly prevalent in the mundane rituals of our lives, whether we individually consent to it or not. At a hotel where I used to work, there was talk of installing a staff sign-in system that uses facial recognition. I use Face ID to unlock my phone to pay — my phone knows what I look like with a mask on or while wearing glasses. And when I stand at a Woolworths self-checkout machine, to pay with my Face ID-unlocked phone, I can see myself on a tiny screen being recorded. 

Stefka Hristova (2022) draws from Deleuze and Guattari’s work on “faciality machines” to argue that algorithms imbue faces with meaning only after they’ve been emptied of humanity. “Faces,” writes Hristova, “emerge as non-human calculations, as surfaces that indeed eclipse the messy and unamenable human” (2022, p.79). Through the “faceless computation logic of computer vision” (Harvey, 2021), the face is defined and delineated by geometric measurements of pixels. The face is indexed as “a set of coordinates that are then given a geographic direction such as north, south, west, east, north-west, north-east, south-west, south-east” (Hristova, 2022, p.82). How romantic that a face might be a map. It just isn’t our map to use. 
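To make that computational logic concrete, here is a minimal, hypothetical sketch of how an off-the-shelf computer-vision library reduces a face to a handful of pixel coordinates. It is not drawn from Hristova or Harvey; the opencv-python dependency and the file name photo.jpg are assumptions for illustration only.

```python
# A minimal sketch of the "faceless computation logic" described above,
# using OpenCV's bundled Haar cascade face detector.
# Assumptions: opencv-python is installed and "photo.jpg" is a hypothetical local image.
import cv2

# Load the pre-trained frontal-face classifier that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")                 # to the machine, a face is an array of pixels
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # colour is discarded before detection

# Each detected "face" comes back as nothing more than a rectangle of coordinates.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    print(f"face found at x={x}, y={y}, width={w}px, height={h}px")
```

The output is exactly the kind of map Hristova describes: a list of rectangles, with the person abstracted away.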

Seeing Faces

During lockdown, I got to know a disembodied version of myself on Zoom. I became a floating head in meetings and seminars and break-out rooms. Perhaps I was having a narcissistic turn, but I watched myself a lot. I thought about what someone else might read and assume about me from my face — my age, gender, ethnicity, emotions — but I was also self-conscious of the way I sat, nodded, or pushed my glasses up. In turn, I thought about how eagerly I had altered my behaviour just because I knew there was the potential that I was being watched. So, I feel fortunate indeed not to have had to take an online university exam monitored by software like ProctorU, which uses FRT to confirm a student’s identity (Mason, 2020).

In June last year, the ABC reported on the use of facial surveillance technology on Australian customers at retail stores such as Kmart and Bunnings, revealed after a probe by the consumer advocacy group CHOICE. In its survey of customers, CHOICE found that 76 percent of Australians were unaware that such biometric scanning was being used (Gill, 2022). 

Earlier this year, FRT led police in Louisiana, USA, to arrest and jail the wrong person — both the victim of the mistake and the target suspect were Black men (AP News, 2023).

In March, Reuters reported that US tech is being used in facial recognition surveillance deployed by the Russian government to suppress and arrest anti-war and anti-government protesters (Masri, 2023). This is reminiscent of concerns that Hong Kong police were using FRT against protesters in 2019. Whether or not that was the case, there were reports of police physically forcing people they deemed suspects to unlock their phones with their faces (Mozur, 2019). 

Biometric technology does not develop out of objective data for neutral agendas. Beyond the valid concern that mass surveillance technologies are a deep invasion of privacy and a powerful tool to suppress citizens’ autonomy, there is also the issue of bias, particularly racism, embedded in the algorithms that power FRT. Furthermore, surveillance technologies have historically been weaponised against marginalised communities – as Mutale Nkonde (2019) asks, “are these facial recognition systems for all people, or just white people?” (p.32). For instance, Fabio Bacchini and Ludovica Lorusso (2019) argue that in the US, Black people are “more often stopped, investigated, arrested, incarcerated and sentenced as a consequence of face recognition technology” (p.326). 

Collecting Faces

Lately, transphobes on Twitter, trying their hand at race science and physiognomy, have decided they can tell when a woman is, in fact, “male” by looking at her facial bone structure. What they’re doing is like a manual version of biometrics. As with race, algorithms and artificial intelligence often re-inscribe oppressive binary structures of gender. In late December last year, Roxanne Tickle, an Australian trans woman, filed a lawsuit against the “female only” social media app Giggle (Australian Associated Press, 2021). Giggle and its openly transphobic CEO have been heavily criticised for using FRT from the AI company Kairos to determine whether a user is “male” or “female”. Unsurprisingly, given the ways race and gender intersect, the AI “failed to properly identify women of color” (Perrett, 2022). 

Where exactly is the data for FRT taken from? Images of faces have been scraped (extracted in bulk by bots) from Facebook, Google Images, Flickr, YouTube, Instagram and IMDb, often without the explicit consent of the users or the subjects of those photos. For example, in 2013 researchers at the University of North Carolina Wilmington scraped YouTube for before-and-after images of trans people undergoing medical transition, without permission from the owners of those videos. It was later found that the researchers had mistakenly left this data published and unprotected until 2019 (Gault, 2022). 

In January 2021, Adam Harvey and Jules LaPlace launched Exposing.ai, “a search engine to check if your Flickr photos were used in dozens of the most widely used and cited public face and biometric image datasets used for these purposes” (Harvey & LaPlace, 2021). MegaFace was one such publicly available dataset (it is now being decommissioned). It contained 4.7 million photos of 672,057 unique identities (Harvey & LaPlace, 2021). That’s roughly the population of Boston. As Harvey warns, “selfies beget biometrics. To pay by selfie is also to invest in facial surveillance futures” (2022, p.83). On the Internet of Faces, there is no shortage of available data for FRT. 

Regulating Faces

As with many issues that fall under the umbrella of internet governance, regulating facial recognition technology concerns market pressures, national and international security, access to information, and ethics. Here I am thinking especially of Alice E. Marwick and danah boyd’s (2018) assessment that “as data-based systems become increasingly ubiquitous, and companies that people entrust frequently fail to protect personal data, the lines between choice, circumstance, and coercion grow increasingly blurry” (p.1159).

Harvey (2021) notes that technical definitions of what a “face” is and includes, particularly around the specificity of pixel resolution, could be useful in limiting the potential for mass surveillance. However, careful and precise regulation is key: 

such regulation requires unfolding the technical language further and making room for legitimate uses, such as consumer applications or investigative journalism, while simultaneously blunting the capabilities of authoritarian police or rogue surveillance companies (Harvey, 2021, pp.140-141) 
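One way to read Harvey’s suggestion is that a legal definition of “face” could hinge on measurable thresholds such as pixel resolution. The sketch below is my own hypothetical illustration of that idea, not anything proposed in the cited works; the 80-pixel threshold and the function name are invented for the example.

```python
# Hypothetical illustration only: a toy check of whether a detected face region
# is large enough (in pixels) to count as a regulable "face" under an imagined
# resolution-based definition. The 80-pixel threshold is invented for this sketch.

MIN_FACE_SIDE_PX = 80  # imagined regulatory threshold, not from any real statute

def is_regulable_face(width_px: int, height_px: int) -> bool:
    """Return True if a detected face region meets the imagined resolution threshold."""
    return min(width_px, height_px) >= MIN_FACE_SIDE_PX

# A 40x48-pixel detection (say, a distant figure on CCTV) would fall below the
# threshold, while a 200x240-pixel selfie crop would fall above it.
print(is_regulable_face(40, 48))    # False
print(is_regulable_face(200, 240))  # True
```

The point of such a definition would be exactly the balancing act Harvey describes: leaving room for legitimate, small-scale uses while blunting mass surveillance at a distance.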

Last year in Australia, it was the Office of the Australian Information Commissioner (OAIC) that shut down the use of Clearview AI’s facial recognition software by federal and state police, because the company had breached the Australian Privacy Principles in the Privacy Act 1988 (Richardson et al., 2022). However, Richardson et al. (2022), in their op-ed for CHOICE, note that current Australian legislation may not be able to address other issues concerning FRT as quickly or as adequately. Extensive clarification and reform of the legislation is clearly needed. 

In 2022, the University of Technology Sydney’s Human Technology Institute published a 94-page report titled “Facial recognition technology: Towards a model law”. Urging the Federal Attorney-General to take action, the authors of the report write, “Australian law does not effectively regulate FRT: our law does not reliably uphold human rights, nor does it incentivise positive innovation” (Davis et al., 2022, p.5). The report proposes amending the Privacy Act so that it covers facial recognition technology, making the Office of the Australian Information Commissioner the regulator for facial recognition technologies, and establishing an Australian Government taskforce on facial recognition (Davis et al., 2022). The report is particularly thorough in addressing concerns around privacy, consent, discrimination and legality. It argues that although there may be benefits to FRT use and innovation, “all FRT applications carry at least a base-level risk to human rights” (Davis et al., 2022, p.27).

*******

I miss you! Show me your face! I think about the selfies my partner and I send each other over Facebook Messenger. I want to be seen, by her, and her by me. It’s like how turning your camera on during a Zoom call is all it takes to disperse the awkwardness (and I just think it’s polite). The story of the Internet will always be a story about connections and, just as they are away from the keyboard, faces are often an integral part of those connections. As Kember (2014) argues, “The face recognition system as a whole is comprised of technologies and users, images, infrastructure, investment, labour, expectation and belief” (p.15). However, it is strange to think of our faces as needing protection and regulation, and that we may one day unexpectedly come across our own face on the Internet.

To “watch over” someone means to care for them. Yet, when we are “watched over” by biometric and surveillance technologies, care seems absent. As humans, we already struggle with faces as they relate to beauty, gender, age, race and so on within traditional media — biometric and AI technology is just complicating these questions. Do we let FRT become as ubiquitous and invisible as something like WiFi? Or do we push back? Hristova writes, “Deleuze and Guattari deliver a powerful appeal for living love, for not hollowing out, not entering, not calculating and coding faces but rather allowing for the unknown” (Hristova, 2022, p.85). Yes, let us allow for the unknown, but also let us not forget what a face still can be, what a face still is in meatspace. A face is freckles, pimples, wrinkles, a warm flush smattered across cheeks. A face is your mother’s eyes and your father’s nose. It is the scars and sunburn from a childhood lived. We dream of faces, we fall in love with faces. 


References

AP News. (2023, January 3). Facial recognition tool led to mistaken arrest, lawyer says. https://apnews.com/article/technology-louisiana-baton-rouge-new-orleans-crime-50e1ea591aed6cf14d248096958dccc4

Bacchini, F., & Lorusso, L. (2019). Race, again: how face recognition technology reinforces racial discrimination. Journal of Information, Communication & Ethics in Society, 17(3), 321–335. https://doi.org/10.1108/JICES-05-2018-0050

Davis, N., Perry, L., & Santow, E. (2022). Facial Recognition Technology: Towards a model law. Human Technology Institute, The University of Technology Sydney. https://www.uts.edu.au/sites/default/files/2022-09/Facial%20recognition%20model%20law%20report.pdf

Dinnen, Z., & McBean, S. (2018). The Face as Technology. New Formations, 93, 122–137. https://link-gale-com.ezproxy.library.sydney.edu.au/apps/doc/A551338564/AONE?u=usyd&sid=bookmark-AONE&xid=7001ca46

Gault, M. (2022, December 13). Facial Recognition Researcher Left a Trans Database Exposed for Years After Using Images Without Permission. Motherboard. https://www.vice.com/en/article/93aj3z/facial-recognition-researcher-left-a-trans-database-exposed-for-years-after-using-images-without-permission

Gill, S. (2022, June 15). CHOICE raises concern over Bunnings, Kmart and the Good Guys use of facial recognition technology. ABC NEWS. https://www.abc.net.au/news/2022-06-15/choice-investigation-major-retailers-using-facial-recognition/101153384

Harvey, A. (2021). What is a Face? In F. Kaltheuner (Ed.), Fake AI (pp. 135–144). Meatspace Press. https://fakeaibook.com/

Harvey, A. (2022). Today’s selfie is tomorrow’s biometric profile. HMKV Ausstellungsmagazin: House of Mirrors: Artificial Intelligence as Phantasm, 79–83. https://www.hmkv.de/files/hmkv/ausstellungen/2022/HOMI/Publikation/House%20Of%20Mirrors%20Magazin%20PDF.pdf

Harvey, A., & LaPlace, J. (2021). Exposing.ai: About. Exposing.ai. https://exposing.ai/

Hawley, R. E. (2021, September 7). I’m Not There. Real Life Mag. https://reallifemag.com/im-not-there/

Hristova, S. (2022). Emptied Faces: In Search of an Algorithmic Punctum. In A. Maurice (Ed.), Faces on Screen: New Approaches (pp. 75–90). Edinburgh University Press.

Kember, S. (2014). Face Recognition and the Emergence of Smart Photography. Journal of Visual Culture, 13(2), 182–199. https://doi.org/10.1177/1470412914541767

Marwick, A. E., & boyd, d. (2018). Understanding Privacy at the Margins: Introduction. International Journal of Communication, 12, 1157–1165.

Mason, R. (2020, March 29). Privacy concerns raised over exam provider, ProctorU. Honi Soit. http://honisoit.com/2020/03/usyds-online-exam-provider-proctoru-raises-privacy-concerns/

Masri, L. (2023, March 28). Facial recognition is helping Putin curb dissent with the aid of U.S. tech. Reuters. https://www.reuters.com/investigates/special-report/ukraine-crisis-russia-detentions/

Mozur, P. (2019, July 26). In Hong Kong Protests, Faces Become Weapons. The New York Times. https://www.nytimes.com/2019/07/26/technology/hong-kong-protests-facial-recognition-surveillance.html

Nkonde, M. (2019). Automated Anti-Blackness: Facial Recognition in Brooklyn, New York. Harvard Journal of African American Public Policy, 20, 30–36. http://ezproxy.library.usyd.edu.au/login?url=https://www.proquest.com/scholarly-journals/automated-anti-blackness-facial-recognition/docview/2413005997/se-2

Perrett, C. (2022, January 24). A social media app just for ‘females’ intentionally excludes trans women — and some say its face-recognition AI discriminates against women of color, too. Business Insider. https://www.businessinsider.com/giggle-app-uses-ai-to-exclude-trans-women-ceo-says-2022-1
