The Institute for Ethical AI in Education
Call for Evidence
As part of our work towards developing a shared vision of ethical AI in education, which will form the basis of the ethical framework for AI in education, the Institute is launching a call for evidence. The evidence provided will support the Institute in its review of the applications and impacts of AI in education, and of the instruments available to enable ethical and responsible use. To respond, please download the Call for Evidence document from the ‘Key Documents’ section below and upload your responses as an attachment via the contact form below. Please note that this call will be open until close of play on Friday 29th May 2020.
The Institute launched its Interim Report on 25th February at the House of Lords, and published a provisional guidance toolkit for educational institutions using AI at the same time. The report outlines the projected risks and benefits of AI in education and sets out the steps needed to develop a respected and effective ethical framework, one that will promote beneficial uses of AI whilst also safeguarding learners against the harms of unethical AI.
Get in touch
The IEAIED will work to develop frameworks and mechanisms to help ensure that the use of AI across education is designed and deployed ethically. We will achieve this by engaging with a wide range of stakeholders to develop a code of practice that protects the vulnerable and disadvantaged and maximises the benefits of AI across society.
To maximise the benefit that AI could bring to education, we must ensure that AI technologies, practices and proponents are aligned with the moral values and ethical principles of a ‘Good AI Society’ (Floridi, 2018). Ethics must be ‘designed in’ to every aspect of AI in education and training, from the inception of an idea for an AI product or service to the scaled adoption of that AI within society. Educational institutions and practitioners are responsible for the pedagogical, emotional, physical and moral wellbeing of their students and employees, and the ethical context of these responsibilities, in light of the fast-approaching AI revolution, needs much more careful attention.
The IEAIED will identify the assumptions about human behaviour and intelligence that underlie current AI development and innovation. We will consider how social values are currently embedded and manifested in AI design, and how ethical frameworks can in future be grounded in responsible innovation across all applications of educational AI. We will also examine how AI in education can avoid prioritising undesirable aspects of learning at the expense of beneficial ones, a distortion that could fundamentally undermine the process of learning and human development.
Why is the Institute for Ethical AI in Education Needed?
We believe, with Stephen Hawking (2018), that AI will be either the best thing for humanity or the worst thing that has ever happened to it. We believe that insufficient thought and effort are being put into the ethical application of AI in general. This is particularly the case in education, where remarkably little thought has been given by government, parliament, universities, schools and educational bodies. The running has been made by the technology companies, for whom ethical and broader societal implications are less important than the bottom line.
The growing volume and diversity of data generated raises ethical concerns about what happens to that data: who owns it, who uses it, for what purposes, and who is accountable for its interpretation and exploitation. Users’ rights to self-determination, ownership and privacy, along with the identity and accountability of anyone who may affect those rights, are key considerations that apply universally to all types of data generated.
Addressing these concerns requires more than the GDPR and other forms of privacy and data protection.
These considerations are particularly pertinent to systems that use AI in support of teaching and/or learning, because such systems frequently aim to effect a lasting change in their users (e.g. through recommendations, persuasion or feedback), to engender personal relationships between humans and machines (e.g. through conversational or emotion-enabled agents), or even to develop a degree of dependency (e.g. through rewards and levels in games). In the case of technologies developed for education (including intelligent tutoring systems (ITSs), MOOCs, learning analytics and socio-cognitive interventions), the explicit ambition is to achieve positive life-long changes that are measurable at behavioural, psychological and, increasingly, neural levels. As yet, however, the methods, technologies and ideologies that underpin the generation, analysis and exploitation of interactive systems’ data have not been subject to sufficient systematic and interdisciplinary scrutiny to ensure a full understanding of their potential effects on users, of the associated ethical issues, and of the risks against which we must safeguard.
Data is never just ‘raw’. The decision to collect data is an action based upon a judgement that the data is of value. Data is a value-driven entity in its own right, and decisions to collect it must be grounded in sound ethical principles.
Beyond academic research, where ethical approval must be sought and granted, the world of AI in education is a ‘wild west’ with no consistent or effective governance. Both deliberately and inadvertently, businesses are taking advantage of people in the way they build, implement and roll out AI.
We need to guard against inappropriate or biased data collection, analysis and interpretation, which, e.g. in user modelling, provide the basis for systems’ interactive capabilities. By their very nature, interpretations involve a commitment to particular theoretical or ideological perspectives, e.g. when data is translated into knowledge representations, and these are inevitably subjective and debatable.
There is a lack of input into the development of AI for use in education from those who understand teaching and learning.
Large technology companies make false promises of protection in order to constrain users to their particular brand of technology.
Academics are required to seek and gain ethical clearance for anything and everything they do that involves people: they must demonstrate that they will do no harm. This requirement must be extended to anyone and everyone working in education.
Data without context is not merely meaningless; it is potentially dangerous. We will therefore demand that data be interpreted only once it has been appropriately contextualised.