The Institute for Ethical AI in Education
The Institute’s response to ‘The Ethics Guidelines for Trustworthy Artificial Intelligence (AI)’, published by the High-Level Expert Group on Artificial Intelligence (AI HLEG).
We agree with the seven key requirements proposed in these guidelines that AI systems should meet in order to be trustworthy. In particular, we applaud the way in which the guidelines have been designed to be applicable to the wide range of stakeholders involved in artificial intelligence: specifically, developers, deployers and end-users, as well as broader society. We agree that “developers should implement and apply the requirements to design and development processes”, that “deployers should ensure that the systems they use and the products and services they offer meet the requirements” and that “end-users and the broader society should be informed about these requirements and able to request that they are upheld.” All seven requirements are recognised as being of equal importance, and the interrelations between them are noted.
However, there are two key challenges specific to education that are not sufficiently recognised within these guidelines.
Firstly, those deploying artificially intelligent systems within education and training settings rarely have sufficient understanding of the technology to ensure that the systems they use meet the requirements for trustworthy AI laid out in these guidelines.
Secondly, there is an enormous and critically underestimated assumption in the statement, within the first principle on Human Agency and Oversight, that “users should be able to make informed autonomous decisions regarding AI systems. They should be given the knowledge and tools to comprehend and interact with AI systems to a satisfactory degree and, where possible, be enabled to reasonably self-assess or challenge the system.” This is an admirable statement, but the magnitude of the task of equipping everyone with this knowledge and these tools must not be underestimated. It will take a concerted effort and significant investment in education systems across the world if we are even to scratch the surface of providing people with “the knowledge and tools to comprehend and interact with AI” so that they benefit from these technologies and suffer no harm.
The IEAIED will work to develop frameworks and mechanisms to help ensure that the use of AI across education is designed and deployed ethically. Our strategic ambition is to enable the UK to be a world leader in ethical AI for education. We aim to do this by engaging with a wide range of stakeholders to develop a code of practice that protects the vulnerable and disadvantaged and maximises the benefits of AI across society.
To maximise the benefit that AI could bring to education, we must ensure that AI technologies, practices and proponents are aligned with the moral values and ethical principles of a ‘Good AI Society’ (Floridi, 2018). Ethics must be ‘designed in’ to every aspect of AI in education and training, from the inception of an idea for an AI product or service to the scaled adoption of that AI within society. Educational institutions and practitioners are responsible for the pedagogical, emotional, physical and moral wellbeing of their students and employees, and the ethical context of these responsibilities with respect to the fast-approaching AI revolution needs much more careful attention.
The IEAIED will identify the assumptions about human behaviour and intelligence that underlie current AI development and innovation. We will consider how social values are currently embedded and manifested in AI design, and how ethical frameworks can in future be grounded in responsible innovation across all applications of educational AI. We will also examine how AI in education can avoid prioritising undesirable aspects of learning at the expense of other beneficial aspects, which could fundamentally distort the process of learning and human development.
What will the Institute for Ethical AI in Education do?
We will study best practice in the world today, drawing on the work of educational institutions, governments, philosophers, the UN and other relevant bodies. Specifically, we will:
- Identify the existing forms of governance, ethical principles, guidelines, standards and regulations relevant to ethical AI in education.
- Produce a framework for the ethical governance of AI in education in the UK.
- Produce a roadmap for the development of inclusive, responsible, explainable, interpretable, verifiable and agile ethical governance for AI in education that will protect people from disadvantage and harm.
- Build public knowledge and appropriately critical trust in AI in education through public engagement.
- Demand more from our large technology companies in terms of ethical practice and ethical education and training for educators, trainers, parents and students.
- Demand support for our start-up and SME technology community to ensure ethical practice.
- Demand ethics training for everyone involved in education or training directly or indirectly.
- Ensure that ethical AI in education minimises the burden on educators, learners and parents, beyond their need to understand what is required to protect themselves, their students, their employees and their families from harm.
- Publish an ethical code of conduct for those working to develop and use AI for educational and training purposes.
- Provide ethics training and approval protocols for anyone developing or using AI in education and training, in order to encourage ethical transparency and publicly accountable ethical practice.
Why is the Institute for Ethical AI in Education needed?
We believe, with Stephen Hawking (2018), that AI is going to be either the best thing for humanity or the worst thing that has ever happened to it. We believe we are putting insufficient thought and effort into the ethical application of AI in general. This is particularly the case in education, where remarkably little thought has been given by government, parliament, universities, schools and educational bodies. The running has been made by the technology companies, for whom ethical and broader societal implications matter less than the bottom line.
The growing volume and diversity of data generated raise ethical concerns about what happens to that data: who owns it, who uses it, for what purposes, and who is accountable for its interpretation and exploitation. Users’ rights to self-determination, ownership and privacy, along with the identity and accountability of anyone who may affect those rights, are key considerations that apply universally to all types of data generated.
This requires more than the GDPR and other forms of privacy and data protection.
These considerations are particularly pertinent to systems that use AI in support of teaching and/or learning, because such systems frequently aim to effect a lasting change in their users, e.g. through recommendations, persuasion or feedback; to engender personal relationships between humans and machines, e.g. through the use of conversational or emotion-enabled agents; or even to develop a degree of dependency (intrinsic motivation), e.g. through rewards and levels in games. In the case of technologies developed for education (including intelligent tutoring systems, MOOCs, learning analytics and socio-cognitive interventions), the explicit ambition is to achieve positive life-long changes that are measurable at behavioural, psychological and, increasingly, neural levels. However, as yet, the methods, technologies and ideologies that underpin the generation, analysis and exploitation of interactive systems’ data have not been subject to sufficient systematic and interdisciplinary scrutiny to ensure a full understanding of their potential effects on users, or of the associated ethical issues and risks against which we must safeguard.
Data is never just ‘raw’. The decision to collect data is an action based on a judgement that the data is of value. Data is a value-driven entity in its own right, and decisions to collect it must be grounded in sound ethical principles.
Beyond academic research, where ethical approval must be sought and granted, the world of AI in education is the ‘wild west’, with no consistent or effective governance. Both advertently and inadvertently, businesses are taking advantage of people in the way they build, implement and roll out AI.
We need to guard against inappropriate or biased data collection, analysis and interpretation, e.g. in user modelling, which provides the basis for systems’ interactive capabilities. By their very nature, interpretations involve a commitment to particular theoretical or ideological perspectives, e.g. when data is translated into knowledge representations, and these perspectives are inevitably subjective and debatable.
There is a lack of input into the development of AI for use in education from those who understand teaching and learning.
There are false promises of protection from large technology companies who wish to constrain users to their particular brand of technology.
Academics are required to seek and gain ethical clearance for anything and everything they do that involves people: they must demonstrate that they will do no harm. This requirement must be extended to anyone and everyone working in education.
Data without context is not merely meaningless; it is potentially dangerous. We will therefore demand that data is interpreted only once it has been appropriately contextualised.
The IEAIED will meet and study evidence, producing an interim report in December 2019 and its final report in 2020.