Image: Young Ruby Bridges being escorted from school
William Frantz Elementary School, New Orleans, 1960.
Author: Uncredited U.S. Department of Justice photographer, via Wikimedia Commons.
Public Domain


In the last block, we saw the increasing role that a range of AI-driven decision-making is predicted to play in education. This block explores algorithmic decision-making and introduces the concepts of algorithmic bias and algorithmic injustice. In the next block we’ll examine how these concepts play out in the context of using facial recognition technologies and emotion recognition technologies in the classroom and we will consider their impact on marginalised groups.


With the advent of a kind of artificial intelligence known as machine learning, AI-driven decision-making in education is becoming more widespread. Machine learning is a kind of artificial intelligence that includes software driven by big data and algorithms with the capacity to “recognize patterns, make predictions, and apply the newly discovered patterns to situations that were not included or covered by their initial design” (Popenici and Kerr, 2017, p. 2). Big Data are, roughly, ‘data that are too big for standard database software to process’ (Eynon, 2013) with ‘the capacity to search, aggregate, and cross-reference large data sets’ (boyd and Crawford, 2012, p. 663). An algorithm is “a set of defined steps that, if followed in the correct order, will computationally process input (instructions and/or data) to produce a desired outcome” (Kitchin, 2016, p. 16). To learn about how algorithms are built, how they work, and about the limitations and potential dangers of their uses for decision-making, including decision-making in education, watch this short TED talk by Cathy O’Neil below.
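Kitchin’s definition can be made concrete with a small sketch. The function below is purely illustrative (the weights and the pass mark are invented, not drawn from any real system), but it shows the two ingredients the definition points to: defined steps applied to input data, producing a desired outcome.

```python
# A minimal sketch of an algorithm in Kitchin's sense: a fixed sequence
# of steps that computationally processes input to produce an outcome.
# The weighting and the pass mark are illustrative assumptions only.

def final_grade(coursework: float, exam: float) -> str:
    """Combine two marks (0-100) into a pass/fail outcome."""
    # Step 1: weight the inputs (the weighting itself is a design choice).
    score = 0.4 * coursework + 0.6 * exam
    # Step 2: apply a threshold -- a built-in definition of 'success'.
    return "pass" if score >= 50 else "fail"

print(final_grade(70, 40))  # → pass
```

Notice that even this tiny example embeds human choices: someone decided the weights and where the pass mark sits, a point O’Neil develops below.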

Video of Cathy O’Neil’s TED talk ‘The era of blind faith in big data must end’
Author: TED.

In her TED talk above, O’Neil notes that the decisions algorithms make are often presented as objective. This, she claims, ‘is a marketing trick’. She argues that to build an algorithm you need (a) historical data and (b) a definition of what counts as success for that algorithm; neither is plausibly understood as objective, and both often bake existing human bias and error into the algorithm. This coheres with Gitelman and Jackson’s (2013) argument that data are not ‘raw’, ‘neutral’ or ‘objective’, and that the process of gathering data is not neutral or objective either. Gathering data is often framed by asking a question: about gender in a population, for example. One significant thing about questions is that they are always asked from a point of view, the point of view of a questioner who often has an interest, or even a stake, in asking this or that. Nor are the categories used to sort data, such as gender categories or racial categories, neutral or objective. Whether or not to count a person as a woman, for example, has in recent years become hotly contested (Byrne, 2020; Dembroff, 2021; Byrne, 2021). More generally, Williamson (2017a, p. 29) reminds us that data are “social products”, which in turn means that their production is subject to the negotiation and power dynamics inflecting most social relations.
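O’Neil’s two ingredients, historical data and a definition of success, can be illustrated with a toy sketch. The records and the decision rule below are entirely invented; the point is only that if ‘success’ is defined as matching past decisions, any skew in those decisions is reproduced as a ‘prediction’.

```python
# A toy sketch of O'Neil's point: an 'objective' model is really two
# human choices -- (a) historical data and (b) a success definition.
# These records are invented purely for illustration.

historical = [
    {"school_type": "private", "admitted": True},
    {"school_type": "private", "admitted": True},
    {"school_type": "private", "admitted": True},
    {"school_type": "private", "admitted": False},
    {"school_type": "state",   "admitted": True},
    {"school_type": "state",   "admitted": False},
    {"school_type": "state",   "admitted": False},
]

def admit_rate(school_type: str) -> float:
    """Fraction of past applicants from this group who were admitted."""
    group = [r for r in historical if r["school_type"] == school_type]
    return sum(r["admitted"] for r in group) / len(group)

def predict_admit(school_type: str) -> bool:
    # 'Success' here is defined as resembling past admission decisions,
    # so the historical skew comes straight back out as a prediction.
    return admit_rate(school_type) >= 0.5

print(predict_admit("private"), predict_admit("state"))  # → True False
```

Nothing in the code is malicious, yet the output simply launders the past into the future, which is exactly the bias-baking O’Neil describes.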

O’Neil’s TED talk observes that the formulas for the algorithms used to represent and score people are secret, or, as she puts it in her book, ‘are, by design, inscrutable black boxes’ (O’Neil, 2016, p. 29). This means that people are being represented and scored by secret formulas that they don’t understand, that often misrepresent them (or fail to recognise them), and that work against their interests without any system of appeal. Harmful data-driven algorithmic models of this kind O’Neil calls ‘weapons of math destruction’, owing to their opacity, their scale, and their capacity to inflict damage on people, particularly on those who are already vulnerable. In the context of education, O’Neil (2016) discusses how such algorithms represent and score teachers unfairly, and her book details the history of the use, misuse, and gaming of algorithms in education. Algorithms are also known to have treated students unfairly. During the COVID-19 lockdowns, when exams were cancelled, an algorithm was used to determine grades for students wishing to attend university. The algorithm predicted grades for some students that were far lower than expected and seemed to be systematically biased against students from poorer backgrounds in particular. In the face of protests, the algorithm was scrapped: the U.K.’s Westminster government eventually decided that students in England and Wales would no longer receive exam results based on the controversial algorithm. The announcement followed a similar situation, and a similar U-turn, in Scotland, where 125,000 results had previously been downgraded.
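To see how a grading algorithm can be systematically biased against students from particular backgrounds, consider a heavily simplified sketch of the kind of standardisation at issue in the 2020 controversy. This is not the actual model used by the exam regulators; the blending rule, the weight, and the grades are invented solely to show the mechanism by which a high-achieving student at a historically low-scoring school can be downgraded.

```python
# A heavily simplified, hypothetical sketch of grade standardisation:
# an individual teacher-assessed grade is pulled toward the school's
# historical average. NOT the real 2020 model -- the blending rule and
# all numbers here are invented for illustration.

def standardised_grade(teacher_grade: float, school_history_avg: float,
                       weight: float = 0.6) -> float:
    """Blend an individual grade with the school's past average."""
    return (1 - weight) * teacher_grade + weight * school_history_avg

# Two students with identical teacher assessments (80), at schools
# with different historical averages:
well_resourced = standardised_grade(80, school_history_avg=78)
under_resourced = standardised_grade(80, school_history_avg=55)
print(round(well_resourced, 1), round(under_resourced, 1))  # → 78.8 65.0
```

The student at the historically lower-scoring school is marked down for where they studied, not for anything about their own work, which is how a formula with no explicit mention of wealth can still disadvantage students from poorer backgrounds.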

Edwards (2015) argues that educational software driven by big data and inscrutable ‘blackboxed’ algorithms, of a piece with those discussed by O’Neil above, is now part of what is sometimes referred to as ‘the hidden curriculum’. In the context of education, the concept of ‘the hidden curriculum’ has been developed as part of a critique of educational institutions for implicitly reproducing and reinforcing the inequalities and power relations that shape the social order (e.g. Snyder, 1971; Apple and King, 1983; Margolis, 2001). The hidden curriculum is concerned with the forms of knowledge, representations, authoritative discourses, resources (such as software and educational technologies), norms and values, including those surrounding teacher-student and student-student interaction, that are unspoken and invisible, yet often implicitly deemed ‘legitimate’ or ‘illegitimate’, ‘allowable’ or ‘unallowable’, ‘authoritative’ or not (Edwards, 2015). The idea is that while students might simply be learning particular subjects and acquiring certain skills at a visible level, they are taught many other things at a hidden level, including how others represent them. As Edwards (2015) puts it, “those things which are selected and enacted as part of the formal curriculum provide hidden messages to certain groups and types of students that education is ‘not for them’”. In the next block, we will see how the ‘blackboxed’ algorithms driving the facial recognition and emotion recognition software used in education are part of this hidden curriculum: they effectively treat marginalised students unjustly and harm them in their capacity as learners by not recognising them, or by misrepresenting them, sending them the message that education is not for them.


Below you can download a file with a question-set based on O’Neil’s TED talk. Think about what you might say in response to the questions and feel free to discuss your responses with a fellow student, colleague, or friend, making sure to explain anything from the talk that they might need to understand as your discussion proceeds.  


To warm up to the concept of algorithmic injustice, read this comic illustrated by Vreni Stollberger, and then listen to the podcast interview below with Joy Buolamwini, in which she argues that algorithmic bias results in algorithmic harms amounting to what she calls algorithmic injustice or algorithmic unfairness. She describes her groundbreaking Gender Shades project, which grew out of her doctoral research on facial recognition technologies that systematically discriminate against marginalised groups on the basis of the colour of their skin. She also discusses founding the Algorithmic Justice League, whose mission is “to raise public awareness about the impacts of AI, equip advocates with empirical research to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to mitigate AI bias and harms.”

Podcast: Joy Buolamwini’s interview for NPR’s TED Radio Hour Comic Series.
Authors: TED and NPR.


In the last block, we saw Selwyn et al. (2021, p. 2) forecast that a range of algorithmically driven artificial intelligences will ‘increasingly become the engine of education, and student data the fuel’, and that ‘the continued adoption of artificial intelligence into mainstream education throughout the 2020s will initiate datafication on an unprecedented scale’. The paper below by Baker and Hawn (2021) extends some of the themes touched on in O’Neil’s TED talk and Joy Buolamwini’s podcast above, offering an overview of algorithmic bias in education and its contested definition, along with a discussion of its causes, the distinctive ways it manifests in education, and the harms it inflicts, in particular on marginalised groups. You may read the full paper if you wish. To complete the activity below, you need only concentrate your reading on pages 2-5, especially the discussion of allocative and representational harm, and on page 23 (section 4), on known and unknown biases.

Below is a question-set based on pages 2-5 and section 4, page 23 of Baker and Hawn (2021) for you to download. Think about what you might say in response to the questions and feel free to discuss your responses with a fellow student, colleague, or friend, making sure to explain anything that they might need to understand as your discussion proceeds. 


This piece in The Conversation by scholars working at the University of Sydney and the Australian National University discusses how we can know whether algorithms are fair. As we have seen, algorithms are being used in education, including for allocating student grades. If you were in charge, what decisions would you make for a fairer grading system? How would you design an algorithm? Researchers at the University of Sydney’s Education Futures Studio have devised an ‘algorithm game’ to walk you through some of the fairness challenges based on the 2020 U.K. exam controversy. The algorithm game invites you to think about how algorithms work, so that you learn how different inputs lead to different outputs. It also aims to provide insights into the complexity of fairness questions when using algorithms. To play the algorithm game, press the START GAME button below, and follow the instructions given on the webpage linked.

We’re now almost ready to move onto the next block, where we can explore the deployment of facial and emotion recognition technologies in the context of education. But before we do, engage with the reflection activity below to finish up this block.


The decisions of the U.K. algorithms that delivered grades systematically biased against students from poorer backgrounds were overturned in the face of protests, primarily by students. In response, the U.K.’s Westminster government decided that students in England and Wales would no longer receive exam results based on the controversial algorithms; the Scottish government made a similar decision in response to student protests. What might these cases and the attendant student protests teach us about the creation of more equitable and just digital educational futures?
