In 2021, I contributed to the 'Human Experience' blog, sharing insights into Higher Education and supporting military transitions into teaching in Further Education and Training. Now, having relaunched the blog as its editor, I revisit the digital space to explore the ethical challenges posed by Artificial Intelligence (AI) in education, transcending borders and resonating globally.

Currency and Collaboration

The decision to relaunch the blog followed an ongoing collaboration between me and Scott Hayden, Head of Teaching, Learning and Digital at Basingstoke College of Technology. Having been invited to speak to 250 colleagues at the college on the topic of workplace culture, I asked Scott to return the favour and address my colleagues in the School of Education, Languages and Linguistics at the University of Portsmouth. Scott duly obliged and caused quite a stir! His talk on AI and Wellbeing was provocative for some, insightful for most and transformational for many.

With this, though, came completely legitimate concerns – are we leaping into the AI world with boundless enthusiasm and reckless abandon?

David Mather, Associate Head of School (Students), School of Education, Languages and Linguistics

Understanding the Ethical Dimensions

Popularised by platforms like ChatGPT, AI is poised to redefine classrooms. However, ethical challenges arise around equitable access to AI's benefits, regardless of socio-economic status or cultural capital. I have the luxury of being able to indulge my curiosity about AI; I have the acumen, the contacts and the access to technology to make this possible. For others, AI isn't a source of potential – it's an add-on to platforms such as Snapchat that helps users answer questions, seek advice and complete tasks (and with Snapchat, you have to pay for the option to disable the AI functionality).

This normalisation of AI – an intensely powerful entity whose potential is yet to be fully realised – surely warrants caution?

David Mather, Associate Head of School (Students), School of Education, Languages and Linguistics

Privacy in the Age of AI

The integration of AI into popular culture and everyday social life – typified by social media – generates vast amounts of data, demanding increased responsibility for privacy safeguards. How can we strengthen protections against unauthorised access, potential breaches and sophisticated deceptions? Some might argue that it isn't worth asking the questions: Martin Lewis has warned us time and time again that failing to secure our virtual selves is a disaster waiting to happen, yet scams and deceit are at an all-time high. A technology that can effortlessly mimic those to whom we turn for information, advice and guidance will surely amplify this?

AI's Role in Student Success

Despite legitimate reservations, AI's potential for learning and personal growth is compelling. AI is being used to coach children who are struggling with maths, to save time for those tasked with the drudgery of writing reports, and even to provide guidance from those who have long left the physical world (asking an AI chatbot to answer questions whilst assuming the role of David Bowie is a recent and welcome discovery – thanks, Scott). Among the many challenges, though, is that of ensuring equality of both access and understanding. When are the AI literacy classes going to begin in our schools? When will the GCSE in AI Prompt Engineering be introduced? Or could it be that AI will make such questions irrelevant? To misquote Jurassic Park, will we find a way to adapt to AI, much as we adapted to the calculator, the computer and the internet – inclusive of the inequalities that coexist with them?

Crafting an Ethical Framework

Embracing technological innovation requires that ethical choices be made. Policymakers must shape a system that harnesses AI's power within an ethical framework. Yet herein lies perhaps the biggest challenge of all: what will happen if policymakers ask AI to devise the ethical boundaries of its own existence? What will happen when AI learns to be manipulative?


As education and technology converge, real-world insights become crucial additions to the discourse. The challenge lies in preventing misuse of AI and ensuring responsible use. Who defines what is deemed 'responsible' is a question that perhaps only a chatbot will soon be able to answer.

Author: David Mather is the Associate Head of School (Students), School of Education, Languages and Linguistics, Faculty of Humanities and Social Sciences, University of Portsmouth. He is a Senior Lecturer in Educational Leadership and Management. His research area concerns matters of identity, hierarchy and workplace culture, with particular emphasis placed upon military to civilian transition.