I think we can trust artificial intelligence in learning, but not artificial intelligence managed by Silicon Valley corporations in learning.
Stephen Downes
Introduction
If you consider that some elements of AI are already widespread in education (for example, online search, recommendation engines, and autocorrect), then it’s not far-fetched to take seriously the question of what education might look like when AI and machine learning are more broadly distributed. In some ways you could argue that AI is already deeply embedded in education, because it’s already deeply embedded in society.
AI (or at least, machine learning) is almost everywhere, in contact with almost everything we care about. Whether you’re looking at supply chain networks, traffic control (on the ground and in the air), financial institutions, or the voice recognition on your personal devices, machine learning touches our lives in often quite intimate ways. To think that this isn’t going to become even more tightly integrated with many aspects of education is to believe that education is somehow removed from society.
I suppose a lot also depends on what we mean by ‘artificial intelligence’. If we’re thinking of a humanoid robot that literally replaces a teacher standing in front of the classroom, then we’re probably safe, at least for now. But if we mean something more like ‘algorithmically-influenced-teacher-decision-support-systems’, you could argue that we’re already there, through the increasing influence of learning analytics. However you look at it, AI and machine learning are going to become as much a part of education as pen and paper, and we would do well to prepare for it.
Podcast
Smith, C. (n.d.). Turing Award Winner Yoshua Bengio and Korbit cofounder Iulian Serban talk about AI in Education. Episode no. 64. Eye on AI podcast.
Turing Award winner Yoshua Bengio and his colleague Iulian Serban, co-founder of Korbit, an AI ed-tech startup with the aim of democratizing education, talk about enhancing education through the application of deep learning systems that can track student behaviour, predict their performance, and deliver strategies to both improve performance and prevent students from losing interest.
This is a conversation that’s heavily biased towards a positive perspective on machine learning in education (the podcast guests are founders of an education startup that uses machine learning). I decided to include it because it provides a slightly more technical discussion of the technology. The thrust of the conversation is that we can’t scale 1-to-1 education for everyone, and that AI-based agents (i.e. personal tutors) could help to fill this gap; a toy sketch of what such performance prediction might look like follows below.
Read the full transcript here.
Note: If you’re interested in a less technical – and more critical – discussion on the use of AI in education, you can also listen to Selwyn, N., & Southgate, E. (n.d.). AI and education. Meet the Education Researcher podcast.
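The conversation stays fairly high-level about what ‘predicting student performance’ involves technically. As a purely illustrative sketch – this is not Korbit’s system, and the features, model, and data below are all invented assumptions – here is how a simple performance predictor might be trained on engagement signals:

```python
# Toy illustration: predict whether a student will pass a module from
# simple engagement features, using logistic regression on synthetic data.
# All feature names and coefficients here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical engagement features: hours on the platform, exercises
# attempted, and average number of days between logins.
hours = rng.gamma(shape=2.0, scale=5.0, size=n)
exercises = rng.poisson(lam=20, size=n)
gap_days = rng.exponential(scale=3.0, size=n)
X = np.column_stack([hours, exercises, gap_days])

# Synthetic ground truth: more engagement raises the odds of passing,
# longer gaps between logins lower them.
logits = 0.15 * hours + 0.08 * exercises - 0.4 * gap_days - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# A tutoring system could flag students whose predicted probability of
# passing drops below a threshold and offer them targeted support.
at_risk = model.predict_proba(X_test)[:, 1] < 0.5
print(f"students flagged as at risk: {at_risk.sum()} of {len(at_risk)}")
```

Real systems of the kind discussed in the episode would use much richer behavioural data and deep learning rather than logistic regression, but the basic loop is the same: observe behaviour, estimate risk, trigger an intervention.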
Article
Swiecki, Z., Khosravi, H., Chen, G., Martinez-Maldonado, R., Lodge, J., Milligan, S., Selwyn, N., & Gašević, D. (2022). Assessment in the age of artificial intelligence. Computers and Education: Artificial Intelligence, 3, 100075.
In this paper, we argue that a particular set of issues mars traditional assessment practices. They may be difficult for educators to design and implement; only provide discrete snapshots of performance rather than nuanced views of learning; be unadapted to the particular knowledge, skills, and backgrounds of participants; be tailored to the culture of schooling rather than the cultures schooling is designed to prepare students to enter; and assess skills that humans routinely use computers to perform. We review extant artificial intelligence approaches that – at least partially – address these issues and critically discuss whether these approaches present additional challenges for assessment practice.
This is quite a long article (32 pages) but I learned a lot from it and highly recommend it for anyone with an interest in the potential role of AI in assessment. It includes a wonderful overview of this emerging field that starts with a discussion of the shortcomings of the Standard Assessment Paradigm, before moving on to how different AI-based systems could help to address these issues, including how we might move from:
- onerous to feasible (e.g. automated assessment construction, AI-assisted peer assessment, writing analytics)
- discrete to continuous (e.g. electronic assessment platforms, stealth assessment, latent knowledge estimation, learning processes; see the sketch after this list)
- uniform to adaptive (e.g. computerised adaptive testing systems)
- inauthentic to authentic (e.g. virtual simulations and internships)
- antiquated to modern (e.g. integrating AI assistants into commonly used software)
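The article reviews these approaches rather than implementing them, but to give a flavour of what ‘latent knowledge estimation’ involves, here is a minimal sketch of Bayesian Knowledge Tracing, one classic technique in that family (the parameter values are illustrative assumptions, not taken from the paper):

```python
# Minimal Bayesian Knowledge Tracing (BKT) update: estimate the
# probability that a student has mastered a skill from graded responses.
# Parameter values are made up for illustration.

def bkt_update(p_known: float, correct: bool,
               p_slip: float = 0.1,   # P(wrong answer | skill known)
               p_guess: float = 0.2,  # P(right answer | skill unknown)
               p_learn: float = 0.15  # P(acquiring the skill per attempt)
               ) -> float:
    """Return the updated probability that the skill is known after
    observing a single graded response."""
    if correct:
        num = p_known * (1 - p_slip)
        den = num + (1 - p_known) * p_guess
    else:
        num = p_known * p_slip
        den = num + (1 - p_known) * (1 - p_guess)
    posterior = num / den
    # Allow for the chance the student learned the skill on this attempt.
    return posterior + (1 - posterior) * p_learn

# Trace estimated mastery across a sequence of responses.
p = 0.3  # prior probability that the skill is already known
for i, correct in enumerate([False, True, True, False, True, True], 1):
    p = bkt_update(p, correct)
    print(f"after response {i} ({'right' if correct else 'wrong'}): "
          f"P(known) = {p:.2f}")
```

Each graded response nudges the estimate of P(skill known) up or down, which is what allows assessment to become continuous rather than a series of discrete snapshots.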
The article then goes into a very comprehensive review of the challenges that may emerge from the introduction of AI-based systems into assessment, including:
- sidelining professional expertise when machines start making pedagogical decisions
- black-boxing of accountability as responsibility is shifted to increasingly distant stakeholders
- restricting the pedagogical role of assessment, as systems become relatively inflexible
- assessing limited forms of learning that conform to the abilities of the assessment systems
- surveillance pedagogy, where students’ behaviours are increasingly observed, captured, analysed and interpreted by algorithms
As I said, it’s long but it captures a lot of what’s going on at the moment, how it could affect all of us, and what concerns we should have.
Resource
European Commission. (2019). Ethics guidelines for trustworthy AI. European Commission.
While the scope of this report extends far beyond the educational context, it proposes seven key requirements that AI systems should meet in order to be deemed trustworthy, no matter the context. These requirements could form a foundation for critically evaluating any AI-based system that’s going to be implemented in higher and professional education.
The seven requirements for trustworthy AI systems are:
- Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights.
- Technical robustness and safety: AI systems need to be resilient and secure.
- Privacy and data governance: Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimate access to data.
- Transparency: The data, system and AI business models should be transparent, with traceability mechanisms helping to achieve this.
- Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups to the exacerbation of prejudice and discrimination.
- Societal and environmental well-being: AI systems should benefit all human beings, including future generations, and should therefore be sustainable and environmentally friendly.
- Accountability: Auditing mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.