Image: 'France in the Year 2000' (21st century), depicting a school of the future. Author: Villemard. Wikimedia Commons: Public Domain


We’ll start by thinking about the future and the increasing importance that surveillance technologies driven by artificial intelligence (AI) decision-making are predicted to play in education. The term ‘artificial intelligence’ refers to ‘computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction and use of data for complex processing tasks’ (Popenici and Kerr, 2017: 2). We’ll adopt a critical approach to the topic that scrutinises some of the assumptions made about AI in education (sometimes called ‘AIED’), and you will be introduced to the concepts of algorithmic bias and algorithmic injustice. This will help to contextualise our exploration of AI-driven facial recognition and emotion recognition technologies in education, and help us to consider the consequences that the deployment of these technologies in educational contexts might have for marginalised groups.


For decades, educational researchers have been predicting that machines and artificial intelligence (AI) will transform education (Watters, 2021). Such predictions have intensified in recent years. In 2016, Luckin et al.’s collaborative report with the multinational education company Pearson announced that ‘new technologies’ — especially artificial intelligence — ‘will change not only the ways we learn, but what we learn’ (Luckin et al, 2016). In 2018, Seldon and Abidoye cast the pace of this change in revolutionary terms, declaring that there is ‘no more important issue facing education, or humanity at large, than the fast approaching revolution of AI’, and predicting that AI systems would deliver more ‘personalised’ education (Seldon and Abidoye, 2018): education in which online content adapts dynamically to a learner’s particular aims, interests, and levels and kinds of competence (Bulger, 2014: 4). Educational researchers are not the only actors playing educational futures prediction games. Responding to the 2021 Pew Research Center report on ethical AI design, Jeanne Dietsch, New Hampshire senator and former CEO of MobileRobots Inc., commented, ‘Applying AI…will truly benefit our society…individualizing education, building social bonds and much more’ (Rainie et al, 2021). More recently, Priya Lakhani, founder of the AI education technology company Century Tech and a non-executive board member of the UK government’s Department for Digital, Culture, Media and Sport, suggested that AI is the future of education (Maghribi, 2021).

Selwyn et al (2021) recognise that AI is now used for a range of data-driven decision-making in education: for example, plagiarism detection, securing young people’s safety, attendance monitoring, proctoring, and authenticating online learners to control access to educational content, as well as providing indicators of student engagement and supporting pedagogical practices putatively connected to concerns about well-being (Andrejevic and Selwyn, 2020: 118-19). They forecast that a range of artificial intelligences will ‘increasingly become the engine of education, and student data the fuel’ and that ‘the continued adoption of artificial intelligence into mainstream education throughout the 2020s will initiate datafication on an unprecedented scale’ (Selwyn et al, 2021: 2). In block three we will focus on facial recognition technologies and emotion recognition technologies in education, and on how their use is bound up with injustice towards marginalised groups.


READING: In her 2021 background paper for UNESCO’s Futures of Education report, ‘Futures in education: towards an ethical practice’ (linked below), Keri Facer contends that ideas of the future play an essential role in educational thinking, policy and practice, and that there is a vital need to reflect upon how these ideas are produced and the kinds of work that they do in education. Facer sketches five approaches to thinking about and working with ideas of the future in education, setting out the distinctive contribution, core questions and practices associated with each approach, and the intergenerational tensions and ethical questions that each broaches. The paper proposes nine domains of examination for ethical futures work in education: reflexivity and multiplicity; transparency; curating decay; repair and healing; intergenerational responsibility; emergence and observation; organising hope; limiting pathological speculation; and care for the special and unique temporality of education. It concludes with nine questions for policy makers, consultants, researchers, educators and students who wish to work with the idea of the future in education. You may read the entire paper if you wish, but you should concentrate on pages 3-8 and 19-22 in particular.

Below is a question set based on Facer (2021). Think about what you might say in response to each question, and feel free to discuss your responses with a fellow student, colleague, or friend, making sure to explain anything they might need to know as your discussion proceeds.


At the beginning of this section, we saw Seldon and Abidoye (2018) proclaim that there is ‘no more important issue facing education, or humanity at large, than the fast approaching revolution of AI’. What effects do you think such confident proclamations about the future of education by influential actors have on how ordinary people, here and now in the present, think about the future of education? And how plausible do you think this claim actually is? Might there be other, more pressing issues facing education, or, indeed, humanity at large?
