Part I. Foundations and Tensions of Postdigital Learner Agency
Emphasising ‘agency’ as an integral challenge for the ethics of education is a prescient concern. The distributed nature of cognition undermines simplistic and long-held notions of learner agency in the postdigital landscape. Indeed, postdigital notions of agency have drawn upon critical posthumanist frameworks which seek to contend with the fracturing of conceptual boundaries that demarcate ‘individuals’, and with the humanist understanding of ‘reason’ as an agential capacity that separates humanity from the rest of nature (Thomas, 2025). However, as Janina Loh notes: ‘Openness and vagueness…is part of the agenda of critical posthumanism, as it reveals its general anti-dogmatism and rejection of ideological thinking’ (Loh, 2022, p. 20). Postdigital Learner Agency (PLĀ), meanwhile, is focused on praxis, putting forward a ‘comprehensive multidimensional approach’ (Code, 2025) to address the ways learner agency may be compromised by developments in more-than-human cognition. Such praxis is necessary, but, as critical posthumanists profess, it is important to critique whilst creating, ensuring an evolving, responsive form of praxis.

This paper seeks to situate the aims of Postdigital Learner Agency in this posthumanist context by drawing especially on the work of Karen Barad (2007) to advocate for a framing of agency that recognises it as an ‘enactment’: a doing, rather than a having. This framing, it will be argued, need not contradict the central aims of PLĀ; rather, it may deepen the commitment to an ethicality that runs through the version of education that PLĀ imagines. Additionally, whilst Code (2025) emphasises an awareness of AI systems’ potential to ‘embed biases that reinforce systemic inequities’, and calls for developing learners’ capacities to ‘navigate invisible power structures, requiring adaptability and critical agency’, this paper addresses material and ideological dimensions of AI development which such aims need to encompass.
The reshaping of the structures and logics of the economy around data expropriation has led to claims that new forms of capitalism (surveillance capitalism (Zuboff, 2017)), colonialism (data colonialism (Couldry & Mejias, 2019)) and feudalism (technofeudalism (Varoufakis, 2023)) are emerging, exemplifying this material unfolding. Furthermore, the hegemonic status of the Big Tech sector (Gilbert & Williams, 2023) signals a political as well as an economic shift. The complex cognitive assemblages in which learners find themselves enmeshed often aim to objectify them and undermine the enactment of agency. Learners’ complicity is constructed psycho-politically (Han, 2017) and premised on their ignorance of the expropriation at play. Ideologically, such objectification of humans is justified by grandiose narratives relating to the potential of AI to become superintelligent (Bostrom, 2014), enabling radical abundance, digitally-based consciousness, space colonisation and immortality. The concept of the TESCREAL bundle (Gebru & Torres, 2024) attempts to capture the lineage of philosophical thought that animates the AI industry. Transhumanism is the ‘galaxy brain’ underpinning these ideologies: it projects a tractable, machine-like world, and Big Data functions not only to interpret this world but also to produce it (Thomas, 2024). PLĀ should seek to enable learners to understand their embeddedness in these material and ideological processes, thereby more fully enabling the enactment of learner agency.
Thomas, A. (2026). The Enactment of Agency and the Extension of Political Ethicality within a PLĀ Framework. In J. Code (Ed.), Postdigital Learner Agency. Springer Nature Switzerland.