Imagine a classroom where AI tailors lessons to each student's learning style – a place where pupils and teachers receive instant feedback and learn new ways to shore up their skills.

This, in a nutshell, is what AI in education could achieve. Realising this potential, however, requires careful consideration of the ethical implications and a commitment to ensuring equitable access for all learners. Without a responsible approach, the deployment of AI in the classroom risks exacerbating existing inequalities and undermining the development of critical thinking skills.

AI in education – an opportunity for all

There are numerous instances where AI has already begun to positively impact the educational sector. For example, intelligent tutoring systems customise learning experiences to individual student needs, providing real-time feedback and tailoring exercises to help students master mathematical concepts. Beyond personalised learning, AI-powered chatbots can assist students by answering queries and providing information on course schedules, and can help teachers with administrative tasks, thereby reducing the workload on staff and improving the overall student experience.

Implemented well, AI in the classroom would provide new and intuitive ways for students to learn and liberate teachers from dull administrative tasks. This does not mean, however, that such deployments are risk-free. The seemingly endless tussle over innate biases in models – wherein the known and unknown prejudices of developers and training data are perpetuated through AI programs – affects educational AI too.

Several examples of this can be found in a report I recently co-wrote for the Council of Europe. These include the failure of proctoring software in Dutch universities to recognise dark-skinned students, which prompted a formal complaint to the Netherlands Institute for Human Rights, and the capacity of such software to induce anxiety in pupils with disabilities by disallowing carers or breaks. Students from low-income families who share rooms may also be disadvantaged, as family members passing behind the screen can be flagged as "aberrant behaviour".

The design of AI in the classroom

Designing AI for education requires input from diverse professionals, including tech experts, educators, psychologists, diversity specialists, ethicists and students. The first question should be whether the AI solution is necessary – will it solve a problem or create one? After all, techno-solutionism – the belief that technology alone can solve complex societal problems – often leads us to view AI as a quick fix for deeper educational challenges.

While AI-powered tutoring systems can personalise learning, they cannot address the root causes of educational inequality, such as poverty, lack of access to resources or systemic discrimination. AI can support, but not replace, sound educational policy and effective teaching practices. We must be wary of implementing AI solutions simply because they are technologically feasible without carefully considering their broader social and ethical implications. AI can be a powerful tool, but it’s not a magic bullet.

Once the need for an AI solution is established, it is important to evaluate the potential outcomes of the product from various perspectives, including data, functionality, privacy, training, impact on human rights, and inclusion. Unintended consequences become far more likely when the appropriate stakeholders are excluded from the decision-making and evaluation process, which is why their participation in these discussions is essential.

More importantly, effective AI regulation in education and research requires stakeholders to develop coherent policy frameworks grounded in a human-centred approach. UNESCO's 2023 guidance on generative AI in education offers a strong foundation for such frameworks, promoting quality, equity, and inclusion, and addressing ethical considerations, data privacy, and the need for stakeholder involvement. The guidance also outlines key steps for implementing regulation, emphasising international cooperation and capacity building: promoting cultural diversity and inclusion, encouraging different worldviews, helping students and teachers develop AI skills, safeguarding human agency, overseeing GenAI models and testing them locally, and examining GenAI's long-term implications.

In university settings, we should also adopt "distrust by design" to encourage caution. This approach helps students analyse AI-generated content critically, question sources, understand limitations, and discuss ethics, privacy, and AI's societal impact. Specifically, students should be taught to ask questions such as "Who created this AI?", "What data was used to train it?", "What are the potential biases embedded in this system?", and "How might this technology be misused?" Educators could even incorporate these questions into classroom discussions and assignments, fostering a culture of critical inquiry.

By instilling this sense of caution, we empower students to become discerning consumers of AI-generated information and thoughtful participants in debates about its place in society.

Shaping students’ future with human-centred AI

Transforming our world with AI requires more than just preparing the younger generation for future professions. It involves reframing technology as a multidisciplinary subject that blends science and the humanities. Since product design impacts all aspects of life, it cannot be viewed solely as a technological issue. Traditional technological thinking alone will not address design challenges such as inclusion, access and transforming how we live and work.

As AI increasingly integrates into our lives, we must equip future generations with the critical thinking skills and ethical awareness necessary to navigate this complex landscape. We must move beyond simply embracing the potential of this technology and actively shape its development and implementation to ensure a more equitable and just future for all. This requires ongoing dialogue, collaboration, and a commitment to putting human values at the centre of technological innovation.

Ivana Bartoletti is the global chief privacy and AI Governance Officer at Wipro
