Video of Kate Crawford and Trevor Paglen discussing their photography exhibition Training Humans at Osservatorio Fondazione Prada. Video credit: Osservatorio Fondazione Prada.
BLOCK 3

In the last block, we explored algorithmic decision-making and the concepts of algorithmic bias and algorithmic injustice. One might think that this could easily bring about a quite dystopian vision of our educational future, one that imagines students as predictable and machine-readable yet also irrational and overly emotional, and their behaviour therefore in need of ‘watching’ and ‘managing’ by big data-driven algorithms (Knox et al., 2020). In this block we’ll examine how the concepts of algorithmic bias and injustice play out in the use of facial recognition and emotion recognition technologies in educational contexts. We will consider how these technologies represent, misrepresent, and fail to recognise already marginalised groups of individuals, thereby treating them unjustly and harmfully specifically in their capacity as learners, working against their interests without any system of appeal or accountability. To warm up to our topic, begin by watching the video above, in which Kate Crawford and Trevor Paglen discuss their photographic art exhibition Training Humans, shown in Milan from September 2019 to February 2020. Its subject was a selection of ImageNet training images: the collections of photographs used by scientists to train computer vision artificial intelligence systems to categorise, represent, and ‘see’ the world and the people in it.

Crawford and Paglen’s exhibition displays the evolution of training image sets from the 1960s to today and, in doing so, reveals the materialities and politics driving AI. Training Humans homes in on two issues: first, it explores how humans are classified, represented, codified and interpreted through training datasets; second, it shows us how technological systems harvest, label and use this material. We learn that the classification of humans by these systems is invasive and extractive, biased, deeply political, and dependent on the manual and menial crowdsourced labour of Amazon Mechanical Turk workers. Crawford and Paglen note that labelling images in the context of computer vision and AI systems isn’t a straightforward act of describing someone; it’s an act of representation, of judgement, often, surreptitiously, a sexist and racist act of judgement, and sometimes even a moral judgement that, for example, a person in a photograph is a criminal. In the discussion in the video above, Crawford observes that we are being trained by these systems to perform and present and represent ourselves in certain ways, exposing a dynamic in which humans are training machines and machines are training humans. Is it possible that we are not only being trained to perform, present, and represent ourselves in certain ways, but being trained to think and feel in certain ways too?

FACIAL RECOGNITION TECHNOLOGY IN EDUCATION

As Crawford and Paglen note in their discussion of Training Humans, technology companies have captured massive volumes of surface-level images of human facial expressions, such as billions of Instagram ‘selfies’, TikTok videos, Pinterest portraits, and Flickr photos. Andrejevic and Selwyn (2020: 118-19) observe that facial recognition technologies are now used for a range of data-driven decision-making in education: for example, plagiarism detection, securing young people’s safety, attendance monitoring, proctoring, and authenticating online learners to control access to educational content, as well as serving as indicators of student engagement and as support for pedagogical practices putatively connected to concerns about well-being. Here, we’ll focus on two uses of such technologies in education. First, we’ll explore facial recognition technologies that attempt to identify a particular individual (Crawford, 2021: 153) and that have been used for proctoring. We’ll briefly consider the proctoring company ExamSoft, along with a proctoring scandal in which it was recently embroiled, and we’ll see how its algorithms treated students with darker skin tones unjustly. Second, we’ll consider emotion recognition technologies that are built into several facial recognition platforms and that aim to detect and classify emotions by analysing any particular face (Crawford, 2021: 153). We’ll consider the justification for their deployment in education, and you are invited to consider an argument by Crean (2022) that such emotion recognition technologies risk treating some already marginalised students unjustly, as she makes the case for a distinctive kind of injustice she calls algorithmic affective injustice.

ACTIVITY 1

Download and read Andrejevic and Selwyn’s 2020 paper on the use of facial recognition technologies in schools. You may read the entire paper if you wish, but you should concentrate your reading on pages 121-125 and think especially carefully about what the authors say about the technology’s effects on marginalised groups before moving on to the discussion below of how algorithms affect people and of the ExamSoft proctoring scandal.

EXAMSOFT: A FACIAL RECOGNITION TECHNOLOGY FOR PROCTORING

In the last block, we heard Joy Buolamwini describe her groundbreaking work Gender Shades, which grew out of her doctoral research project on facial recognition technologies. Her scholarly work shows that facial recognition technologies systematically discriminate against marginalised groups on the basis of the colour of their skin, and you’ve just read work by Andrejevic and Selwyn (2020) that goes into more detail on how such systems affect such groups at school. But algorithms do not just shape how people are represented and misrepresented, treated and mistreated, and judged and misjudged (Beer, 2017: 5-6); they also shape how people feel. Bucher (2017) elaborates on people’s personal stories of algorithms and how they affect their lived experiences; Andrejevic and Selwyn’s (2020) work gives us the material to imagine the impact of facial recognition systems on pupils’ lived experiences of school.

In the video clip below, Joy Buolamwini illustrates the effects of the failures of AI in categorising the genders of iconic Black women in her short spoken-word poem “AI, Ain’t I A Woman?”, drawing on, and movingly remixing and responding to, the question at the heart of Sojourner Truth’s electrifying 1851 speech on Black women’s rights: ‘Ain’t I a Woman?’. Buolamwini’s poem recounts how algorithms driven by machine learning systematically and unjustly misrepresent, misjudge, and mistreat iconic Black women from Sojourner Truth to Serena Williams. These ‘proud icons are dismissed’, as she puts it, as they and their trailblazing work are disrespected, rendered meaningless, effectively erased, by being characterised as male. From Buolamwini’s poem we learn about the hurt that algorithmic injustice causes. Her choice to present her work in the form of a short, spoken-word poem enables us to hear, feel, and understand this hurt in a way that her groundbreaking work on Gender Shades cannot.

Video of Joy Buolamwini performing her spoken-word poem “AI, Ain’t I A Woman?”. Video credit: Joy Buolamwini on YouTube.

As we have learnt from Andrejevic and Selwyn (2020), the same kind of algorithmically driven facial recognition systems are increasingly being used in schools, including for the purposes of proctoring. In 2020, the proctoring company ExamSoft told Black students taking exams that its software couldn’t identify them due to ‘poor lighting’. In fact, there was usually nothing wrong with the lighting, and the problem was not replicated for White students working in similar conditions; rather, the racial bias designed into the algorithm worked against darker skin tones and resulted in algorithmic injustice towards Black students. In this article, prospective law student Sergine Beaubrun explains the emotional fallout of an algorithmic injustice that is of a piece with the one Buolamwini’s poem describes, but in the context of using proctoring software while attempting mock bar exams. She describes how emotionally stressful the difficulties she faced were, how they interfered with her ability to perform, and how, ultimately, they left her questioning whether the law profession was for her, wondering whether it would recognise her as a person when she entered it.
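To make the mechanism a little more concrete, here is a minimal sketch in Python of the kind of face-detection ‘gate’ a proctoring tool might run before letting a candidate begin an exam. This is not ExamSoft’s actual system: the OpenCV Haar-cascade detector, the file name, and the threshold are illustrative assumptions. The structural point is that when the detector fails to find a face, the student is simply blocked and told to fix the ‘lighting’, with no way to contest the algorithm’s judgement.

```python
# A minimal, generic sketch (not ExamSoft's actual system) of a face-detection
# 'gate' a proctoring tool might run before an exam. The file name and the
# minNeighbors threshold are illustrative assumptions.
import cv2

def face_detected(image_path: str) -> bool:
    """Return True if at least one face is found in a webcam frame."""
    # OpenCV ships a pre-trained Haar cascade for frontal faces.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    frame = cv2.imread(image_path)
    if frame is None:
        return False  # frame could not be read at all
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

# If a detector trained and tuned on unrepresentative images fails here,
# the student is blocked and blamed, with no route of appeal.
if not face_detected("webcam_frame.jpg"):
    print("Face not detected. Please adjust your lighting and try again.")
```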

ACTIVITY 2

Choose one of the following speculative exercises to complete.

EITHER

Imagine you are Sergine Beaubrun in the situation described above and detailed in this article. Draw on the article, and on what you have read from Andrejevic and Selwyn (2020) above, to imagine your way into her shoes, and then write a short story or a spoken-word poem of your own, based on your experience of what happened, that communicates how this experience made you feel and how it affected your capacity to learn. Consider how it feels not to have your face recognised by the software in a high-stakes educational context where others taking the same exam do have their faces recognised. How might your short story or spoken-word poem communicate the kind of algorithmic injustices and, specifically, some of the representational and recognitional harms (encountered in the Baker and Hawn reading in the last block) that ExamSoft’s facial recognition technology inflicted on Sergine specifically in her capacity as a learner?

OR

Imagine your educational institution asks you to meet with a representative from ExamSoft who wishes to persuade you that your institution should purchase ExamSoft’s proctoring software for online exams. Write a dialogue between yourself and ExamSoft’s representative. How does the representative make the case to purchase the software? How do you challenge their arguments for their case? Based on what you have learnt so far on this course, what questions do you have for them? How might you explain the kind of harms that their software potentially inflicts on marginalised groups? What decision do you reach in the end? You may use some of the ideas from what you have read by Andrejevic and Selwyn (2020) and from this article to help you construct some of the background details needed.

EMOTION RECOGNITION TECHNOLOGY FOR THE PURPOSES OF SUPPORTING SOCIAL AND EMOTIONAL LEARNING

Emotion recognition technologies are built into several facial recognition platforms. These aim to detect and classify emotions by analysing any particular face (Crawford, 2021: 153), and they are now being used in educational contexts. In the field of educational technology, 4LittleTrees, a new facial recognition application developed by the Hong Kong-based start-up Find Solution AI, claims to read children’s emotions as they learn. As the children study, the artificial intelligence, what McStay (2017) calls ‘emotional AI’, uses the camera on their computers to measure muscle points on their faces. Its cheerleaders claim that it can identify emotions including happiness, sadness, anger, surprise, and fear from facial expressions; its founder claims that it can make virtual classrooms ‘as good as, or better than, the real thing’. AI giants Amazon, Microsoft, and IBM all design systems for affect and emotion detection too (Crawford, 2021: 155). An extended World Economic Forum report, ‘New Vision for Education: Fostering Social and Emotional Learning through Technology’, confidently endorses such systems, citing the MIT start-up Affectiva’s products as shining examples (WEF, 2016: 14-15; Williamson, 2017b). Silicon Valley’s influential educational technology magazine EdSurge suggests that emotionally intelligent robots ‘may actually be more useful in the classroom than humans’ (Williamson, 2017b). Google now tells us that teachers can use Google Classroom to send wellness reminders to students, and recommends using the socio-emotional learning gaming app Wisdom – Kingdom of Anger with Chromebooks to ‘empower’ students to learn to identify, label, and communicate emotions ‘through facial expressions, body language, voice intonations, physiological reactions and trigger events’.
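To see roughly how such products are structured, the sketch below outlines a generic face-based emotion-classification loop in Python. It is not 4LittleTrees’ or Affectiva’s actual code: both functions are hypothetical placeholders, and the five labels simply mirror the categories these vendors claim to detect. What the sketch makes visible is the reduction at the heart of the approach: a continuous stream of facial measurements is collapsed into one of a handful of discrete emotion labels.

```python
# A schematic sketch of the loop such products describe: camera frame ->
# facial 'muscle point' features -> one of a few discrete emotion labels.
# Not vendor code: both functions are hypothetical placeholders for whatever
# proprietary models a product like 4LittleTrees actually runs.
import numpy as np

EMOTION_LABELS = ["happiness", "sadness", "anger", "surprise", "fear"]

def extract_muscle_points(frame: np.ndarray) -> np.ndarray:
    """Hypothetical landmark extractor: a real system would locate facial
    action points in the frame; here we just return a fixed-length vector."""
    return np.zeros(68 * 2)  # e.g. 68 (x, y) landmarks, a common convention

def classify_emotion(features: np.ndarray) -> str:
    """Hypothetical classifier: collapses a rich feature vector into ONE of
    five labels, however ambiguous the face actually is."""
    scores = np.random.dirichlet(np.ones(len(EMOTION_LABELS)))  # stand-in scores
    return EMOTION_LABELS[int(np.argmax(scores))]

# In deployment this loop runs continuously over the webcam feed, and the
# resulting labels feed dashboards and 'personalised' interventions.
fake_frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(classify_emotion(extract_muscle_points(fake_frame)))
```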

In tandem, educational data scientists are creating ‘learning analytics’ applications for ‘the measurement, collection, analysis, and reporting of data about learners and their contexts, for the purposes of understanding and optimising learning and the environments in which it occurs’ (Selwyn and Gasevic, 2020: 528). This has brought with it the development of ‘emotion learning analytics’ (D’Mello, 2017): ‘the identification and measurement of behavioural indicators from learners through content analysis, natural language processing, big data techniques of sentiment analysis and “machine emotional intelligence”’ (Williamson, 2017b: 13). The hope is not only to use tablets and smartphones to measure facial and vocal expressions, ‘to monitor learners’ emotions in real time’ (Rienties and Rivers, 2014), but also, as Knox et al. (2020) suggest, to use ‘personalised’, algorithmically driven behavioural interventions, ‘educative nudgings’, to regulate and shape these emotions (Knox et al., 2020: 39-41), putatively for the purposes of supporting the social and emotional dimensions of learning (D’Mello, 2017). In the case of the kind of emotional AI used by Affectiva, a culture of emotional surveillance is often justified by the need to support the social and emotional dimensions of learning (McStay, 2019). These are the dimensions of learning whereby children learn to understand and regulate their emotional lives so as to support their wellbeing, empathise with others when working collaboratively, and establish and maintain healthy relationships (Philibert, 2017; McStay, 2019).
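As a small, concrete instance of the text-based side of this, the sketch below scores a few invented learner forum posts with the off-the-shelf VADER sentiment analyser from NLTK. It is only an illustration of the kind of ‘sentiment analysis’ Williamson describes, not any particular learning analytics product; the example posts and the flagging threshold are assumptions.

```python
# A minimal illustration of text-based 'emotion learning analytics': scoring
# learner forum posts with an off-the-shelf sentiment analyser. The posts and
# the flagging threshold are invented; no real product is reproduced here.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off download of the lexicon
analyser = SentimentIntensityAnalyzer()

forum_posts = [
    "I really enjoyed this week's reading, the examples were great!",
    "I'm completely lost and honestly thinking about dropping the course.",
    "Is the assignment deadline Friday?",
]

for post in forum_posts:
    scores = analyser.polarity_scores(post)  # returns neg/neu/pos/compound
    # An arbitrary threshold turns a continuous score into an 'intervention':
    # the kind of automated 'educative nudging' Knox et al. (2020) describe.
    flagged = scores["compound"] < -0.3
    print(f"{scores['compound']:+.2f}  flagged={flagged}  {post}")
```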

ACTIVITY 3

Below you can download and read Aisling Crean’s 2022 position paper for the course Digital Futures for Learning, part of the MSc in Digital Education at the University of Edinburgh. You can also download and engage with the question set below based on the paper. Crean’s paper focuses on the deployment in schools of face-based affect detection systems (FADS), which aim to detect, classify, and regulate emotions by analysing any particular face (Crawford, 2021: 153).

[SEPTEMBER 5TH 2022 NOTE: SINCE THIS OER WAS CREATED IN APRIL 2022, CREAN’S PAPER AND THE ACCOMPANYING QUESTION SET HAVE BEEN TEMPORARILY REMOVED TO PRESERVE ANONYMOUS PEER REVIEW, AS THE PAPER IS NOW UNDER REVIEW AT A JOURNAL.]

We’re now almost ready to move on to the final block, where we can consider the implications of the topics we have covered over the last few blocks for the future of education. But before we do, engage with the reflection activity below to finish up this block.

REFLECTION

Education is often regarded as a developmental preparation for citizenship of a democratic society that espouses values like liberty, equality, freedom of thought and freedom of expression. What consequences for such a democracy might there be in an educational future where the social and emotional lives of school children are subject to the often unjust ‘educative nudgings’ of algorithms, especially algorithms owned by powerful multinational corporations like Google? In such a future, how might such powerful corporations be held accountable for unjust algorithmic decision-making that inflicts harm on a person’s educational development?
