Photo by 3MotionalStudio on Pexels

BLOCK 4: This block finishes the way the course began — with imaginaries of the future. It considers the implications of the topics we have covered in the last few blocks for different possible educational futures and considers strategies to create more equitable and inclusive educational futures and tactics to resist futures that sustain algorithmic injustice.


Rebecca Eynon (2015:410) observes that ‘over the course of life, individuals develop a sense of self, strengthened or changed as a result of their life experiences’ and worries about the powerful impact such data-driven systems may have on a ‘learner’s self-concept’, which they risk changing in significant ways. Eynon never says why we should be concerned about this: that our sense of self develops and changes over time in response to life experiences is not necessarily something to worry about. But one way to understand the root of her anxiety might be this: the life experiences afforded by the personalised data-driven algorithmic systems described by Bulger (2016:4) might be shaping a learner’s self-concept in ways that are harmful and unjust, by misrepresenting them or failing to recognise them, and, if that is right, these systems are not aligned with a learner’s best interests. One way of conceptualising the dynamic of this kind of self-shaping is to imagine it as involving the specific kind of behavioural governance that Knox et al (2020) think our educational future may well tend towards: a kind of ‘machine behaviourism’, as they call it, ‘entailing potent combinations of radical behaviourist theories and powerful machine learning systems’ that seek to ‘intervene in educational conduct and shape learner behaviour towards predefined aims’ (Knox et al, 2020: 32). In this dystopian vision of our educational future, students are assumed to be predictable and machine readable, but also, at the same time, irrational and overly emotional, and their behaviour, therefore, in need of ‘watching’, ‘managing’, and ‘shaping’ by the digital algorithms of data-driven educational platforms.
These platforms ‘nudge’ them towards participation and predefined modes of behaviour that are not of their own choosing but that the platforms claim to be in learners’ best interests (Knox et al, 2020: 39-41), yet that actually, as we have seen, deliver algorithmic injustice of various kinds to marginalised groups. In this imagined future, education is not only (allegedly) paternalistic; it is also presumptuous, coercive, and unjust.


The educational future outlined above is rather dystopian and, when coupled with the confident assertions of powerful corporate actors, or with the promises of influential intermediaries such as Luckin et al (2016), Seldon and Obidoye (2018), and Century Tech‘s Priya Lakhani, whom we encountered at the start, that data-driven personalised learning is the future of learning, it might seem inevitable. What strategies might you deploy to reimagine the future and thereby undermine the sense that such a dystopian future is inevitable?

However, there is nothing essentially coercive about a ‘nudge’ by itself since the ends towards which nudges might be directed might easily be of one’s own choosing. For example, you might consciously tinker with the food choice architecture of your own environment to ‘nudge’ yourself to meet a commitment you’ve made to being a vegetarian by making sure that there’s only vegetarian food in the house. Nudging in this context serves as a bulwark against weakness of will and a support to form new eating habits. Neither is there anything essentially wrong with paternalism in education in many contexts: in the case of children and teenagers who are still growing into their autonomy, and still learning to make their own choices, parents and teachers often act in their best interest; different degrees of paternalism might be appropriate, depending on the situation. We might frame a young person’s environment in such a way, or ‘nudge’ them in such a way, that they find it easier to imagine themselves as, for example, future scientists. Holbert, Dando, and Correa (2020) tell us how their project Remixing Wakanda did exactly this. They created an environment or ‘makerspace’ composed of the positive images of Black technological futures offered by the movie Black Panther and other Afrofuturistic artefacts. They then invited five African American girls, ages 14–16, from a public all-girls school in a large Northeastern American city with a storied Black community, to participate in articulating their own visions of Afrofuturism through the do-it-yourself creation of their own futuristic artefacts. 
Along the way, the girls actively reconceptualised STEM spaces away from Eurocentric understandings of them and proposed a vision of the future as an open one, not predetermined by the past or the present, which thereby makes space for their human agency, for their resistance to narratives that tell them STEM subjects are not for African American girls, and for their creativity in imagining their own futures. In effect, Remixing Wakanda paternalistically framed the girls’ environment, or nudged them, in such a way that it became easier for them to imagine that STEM subjects were for them too.

This suggests that what’s wrong with the possible future of education suggested by Knox et al (2020) is not that it is paternalistic nor that it involves nudging; it’s that the way the nudging is done is presumptuous, objectifying, coercive, and often unjust and harmful. Presumptuous: human beings are assumed to be machine readable, predictable doormats, who may be categorised and judged, and even seen as irrational, overly emotional fools in need of ‘managing’ by corporately-owned algorithms. Objectifying: these algorithms harvest, extract, and exploit student data, treating students as a mere means to the ends of corporate profit-making. Coercive: human beings are manipulated by algorithms to do things that align more with the values, priorities, and interests of corporate power than with their own values, priorities, and interests, which may diverge from those of such power. Unjust: some groups of human beings are treated unfairly and in ways that are harmful. All of these ways of treating human beings involve severe ethical failings of a kind with those outlined by Kant (1785). The first involves the ethical wrong of disrespect for the intellectual and moral agency of human beings, and a kind of arrogance in categorising a person whom data-driven systems may not actually be capable of fully understanding. The second involves the ethical wrong of objectifying human beings by treating them as mere tools, as mere means to the ends of corporate profit. The third involves the ethical wrong of violating the autonomy of human beings by undermining their capacity to set and pursue their own ends. The fourth involves failing to recognise human beings as being of equal moral worth.


Image credit: Created by Nettrice Gaskins using Gatys neural image style transfer, which is part of machine learning. 2021. Wikimedia Commons.

The future is open and contingent: we do not have to accept a future involving the kind of corporate-interest-driven, data-driven behavioural governance that Knox et al (2020) describe. If nudging is going to be used in education, it doesn’t have to be deployed in tandem with those kinds of data-driven algorithmic systems with corporate purposes and profit-making priorities, or in a way that’s presumptuous, objectifying, coercive, and unjust. For example, algorithmically driven Culturally Situated Design Tools (CSDTs) are a possible alternative to corporate-driven algorithmic systems (Gaskins, 2019), one that does not design in the kind of algorithmic injustices we have seen harm already-marginalised groups in education. CSDTs are educational software that provide web-based modes of engagement with heritage artefacts, and do so by dynamically translating, not just statically modelling, more familiar ‘analogue’ cultural patterns, thereby allowing users to simulate them and make their own creations (Gaskins, 2019: 264). Gaskins (2019: 265) notes that when developing CSDTs, computer programmers use modules to hide the details, so that the people using them need not understand the complexities of the programme. Interestingly, this makes the details of such software part of the hidden curriculum, as Edwards (2015) thinks of it, but in a way that is trustworthy in so far as it does not work against the interests of marginalised groups. Thus, educational practices that challenge the unjust aspects of the hidden curriculum in support of equity themselves have a hidden curriculum (Edwards, 2015: 268-9), but one that can be used to promote social changes for the better (Cotton, Winter, and Bailey, 2013).
Working with ethnic groups underrepresented in Computer Science education in the U.S., Rodriguez (2014) reports that, in using CSDTs, students are able to recognise something familiar as the algorithm simulates the logarithmic spirals and fractal-based curves of crochet artwork they already know. In the process, pupils develop a sense of ownership over mathematical and computational concepts and, crucially, develop the self-concept, or the belief, that they too can do mathematics and computer science.
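To make the idea of ‘simulating’ a familiar pattern concrete: a logarithmic spiral is defined by r = a·e^(b·θ), so each full turn grows the radius by the same constant factor, the self-similar growth visible in spiral crochet rounds. The sketch below is purely illustrative; it is not the CSDT software itself, and the function name and parameters are our own, not Rodriguez’s or Gaskins’s.

```python
import math

def logarithmic_spiral(a=1.0, b=0.15, turns=3, points_per_turn=60):
    """Return (x, y) points along the logarithmic spiral r = a * e^(b * theta).

    Every full turn multiplies the radius by the same factor, e^(2*pi*b),
    which is the self-similar growth pattern seen in spiral crochet work.
    """
    points = []
    for i in range(turns * points_per_turn + 1):
        theta = 2 * math.pi * i / points_per_turn
        r = a * math.exp(b * theta)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

points = logarithmic_spiral()
```

Plotting these points (for instance with any charting library) reproduces the spiral shape a learner already knows from crochet, which is the recognition Rodriguez describes.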

Finally, even if the educational future that comes about is more dystopian than we’d like, there are strategies of resistance available. Gaskins (2019) describes how underrepresented ethnic groups in the U.S. engage with what she calls ‘techno-vernacular creativity’ that blends cultural artistic, scientific, and technological creative practices of re-appropriation, improvisation and conceptual remixing to “manage their representations” (Gaskins, 2019: 258) and that can be used to resist technological representations imposed by dominant cultures. African American artists “combine or subvert existing knowledge systems in order to invent new ways of using, creating, and performing with technology” (Gaskins, 2019: 258). In the context of education, Andrejevic and Selwyn (2020:125) suggest students and teachers might “work realistically toward non-participation, resistance and (perhaps) reinvention of facial recognition technologies” such as engaging in “facial data obfuscation and other forms of algorithmic counteraction… [and] tactics of ‘facelessness’ and ‘defacements’ (such as wearing masks, asymmetrical hairstyles and face adornments)”.


Given what you have read and thought about during this course, what recommendations might you make for a more equitable future for digital education, especially for marginalised groups? How do these relate to or differ from those recommendations often made for the future of education by the powerful corporate owners and builders of data-driven algorithmic systems (like Google) increasingly used in educational contexts?
