
Beyond Buzzwords: What AI and Digital Safety Really Mean for Learners

24 Feb 2026

In this episode, we explore what it means to keep learners safe in an AI-enabled education system, a question that is becoming increasingly urgent as AI tools move from pilots into everyday classrooms. While AI can support teaching and learning in new ways, it also introduces real risks for children. These include unsafe or biased outputs, weak data practices, and unclear accountability. The challenge is not only to set principles, but to ensure safeguards hold up in the messiness of real school contexts.

This discussion draws on the expertise of Olivia Basrin, Country Lead for Indonesia and Malaysia at Google for Education, and Daniel Plaut, Program Director for Education at Results for Development and Global Learning Lead at EdTech Hub’s AI Observatory and Action Lab. Throughout the conversation, we look at the practical realities of protecting learners in real-world settings, where technology is evolving faster than ever before.

Watch the Episode

Read the Play-by-Play of the Conversation Below:

Question 1: The adoption of AI in education is accelerating faster than many systems can assess the risks. From your perspective, what are the biggest safety or security risks facing AI in education today?

Olivia began by highlighting that data privacy and security are the most immediate risks as AI tools become more embedded in classrooms. She noted that enthusiasm for new tools often outpaces awareness of how student data is collected and used. Schools hold rich, long-term data on learners, and without strong safeguards, this information is at risk of being exposed or misused. She also flagged pedagogical risks, stressing that students need guidance to understand AI’s limitations and avoid over-reliance.

Daniel echoed concerns about data privacy, particularly around what happens to data once it is entered into AI systems. He emphasised the risk of student data being used for commercial purposes or retained indefinitely, potentially following learners throughout their lives. Beyond data, he warned of broader child protection risks, noting that general-purpose AI tools can expose young users to harmful or inappropriate content if age-appropriate protections are not in place.

Question 2: What specific safeguards should schools or parents consider? And are there common misunderstandings around data privacy that need clearer communication?

Olivia pointed to a common misconception that education data is less sensitive than other forms of personal data. In reality, student data reveals detailed insights into learning and development and should be treated with the same level of care as financial or enterprise data. She stressed the importance of institution-managed accounts and secure platforms, alongside early education about AI—helping students understand bias, hallucinations, and limitations before they begin using these tools.

Daniel framed data as a form of currency in digital systems, where users often “pay” with personal information. He emphasised that learners need to understand their rights over their data and that responsibility should not fall solely on users. Strong technical safeguards, clear product principles from companies, and supportive government policies are all necessary to ensure children’s data is protected, particularly in lower-resource contexts.

Question 3: We’ve seen regulation struggle to keep pace with technological change. From your perspectives, where does regulation matter most in ensuring AI safety—particularly around child protection and data privacy?

Daniel highlighted that regulation is essential in setting clear standards for how AI is used in education, especially around child protection and data privacy. He emphasised the need for regulations that encourage tools reflecting local languages, cultures, and contexts, rather than importing values embedded in global models. For him, effective regulation should be flexible, iterative, and grounded in education priorities.

Olivia viewed regulation as a way to establish clear guardrails rather than rigid rules. She stressed that regulation works best when it aligns with national education visions, particularly around privacy and security, while still allowing room for innovation. The goal, she noted, is balance—protecting learners without closing off opportunities for meaningful and responsible AI use in education.

Question 4: How do you engage with regulation, and where do you see its role currently?

Olivia described industry engagement with regulation as an ongoing, collaborative process rather than a one-off compliance exercise. Given that AI evolves daily—sometimes hourly—she argued that regulation will never fully “catch up” with technology. Instead, the most effective regulation sets flexible, principle-based frameworks that reflect national education needs and visions.

She emphasised that regulation works best when governments are clear about what they want to achieve in education and how AI can support those goals. From there, frameworks can establish non-negotiable guardrails—such as data privacy and security—while still allowing innovation to evolve. Olivia cautioned against extremes: neither opening the floodgates to unchecked AI use nor imposing overly restrictive rules that limit access and experimentation. For her, the key role of regulation is to enable controlled, purposeful transformation, grounded in a strong understanding of local education priorities.

Question 5: Inclusion in AI safety is not just about access; it is also about protection. How do your teams ensure that AI models behave responsibly across diverse languages, cultural norms, and education systems?

Olivia framed inclusivity as central to how Google designs and trains AI models. She highlighted the importance of working closely with governments, education institutions, and local stakeholders to ensure AI systems reflect national curricula, cultural norms, and learning priorities. Training models with local data and policy input helps ensure responses are relevant and appropriate within specific country contexts.

She also underscored AI’s potential to support students with disabilities and special educational needs, sharing examples from Indonesia where accessibility features—such as voice interaction and adaptive language—have enabled learners with cognitive or mobility challenges to engage more effectively. Importantly, she noted that educators are involved throughout the product lifecycle, from early design to classroom feedback, ensuring accessibility and inclusivity are embedded rather than retrofitted.

Daniel approached inclusion through an equity lens, warning that AI could exacerbate inequalities if not intentionally designed to do the opposite. He highlighted the risk of a “second-order digital divide,” where learners in low-resource contexts may become overly dependent on technology in ways that reduce human interaction and holistic educational experiences.

At the same time, he recognised AI’s strong potential to personalise learning, particularly for neurodivergent learners and those with disabilities, if tools are designed and tested with these users in mind. For Daniel, meaningful inclusion requires bringing learners, teachers, and caregivers into the design process, ensuring AI supports, not replaces, the social, emotional, and community dimensions of education.

Question 5(a): Where should the line be drawn between AI use and human roles in education?

Daniel argued that while AI has demonstrated value in personalised and adaptive learning, education is fundamentally more than content delivery. Schools are social institutions where learners develop identity, belonging, and relationships – elements that cannot be automated.

He warned that in marginalised contexts, there is a risk of substituting human connection with technology in the name of efficiency or scale. This, he suggested, could deepen inequities rather than resolve them. Drawing the line requires recognising teachers as community leaders and role models, not just transmitters of information. AI should support learning where it adds value, but educators must remain central to nurturing social, emotional, and civic development.

Question 6: Are there any emerging security or safety concerns that you believe the education sector might be underestimating, especially in Southeast Asia?

Olivia identified teacher preparedness and digital literacy as an emerging and often underestimated risk. While AI is frequently positioned as a solution to workload and efficiency challenges, many educators are introduced to new tools without sufficient training or clarity on how they integrate into pedagogy.

She stressed that AI should lighten teachers’ administrative burdens, not undermine their confidence or professional identity. Without sustained investment in strengthening teacher skills, there is a risk that technology adoption outpaces educators’ ability to use it meaningfully, leading to burnout, misuse, or disengagement. For Olivia, safeguarding learners also means investing in teachers as the anchors of the education system.

Daniel reinforced this point by noting that the most effective tools are those that solve real problems teachers already face. He highlighted the importance of co-designing AI tools with educators and giving them agency in how technologies are used in classrooms. Preparing teachers is not just about training them to use AI, but about ensuring they help shape its role in learning environments.

Question 6(a): In 3 to 5 years, what is one safety or responsible AI standard that you’d like to see embedded across Southeast Asian education systems, and why?

Daniel pointed to two priorities: age-appropriate access and data lifecycle protections. He argued that just as education systems regulate other forms of content, they should be willing to limit certain AI functionalities for younger learners. Not every tool or capability is developmentally appropriate.

He also stressed the importance of safeguards around how long education data is stored and how it is anonymised, warning of future risks related to surveillance and misuse. Rather than fixed rules, Daniel advocated for a continuous learning approach – testing, adapting, and updating standards as technology evolves.

Olivia reframed the question from standards to investment priorities. She argued that the most critical investments should be in educators and students: building digital understanding, ethical awareness, and core values that will endure even as technology changes.

She expressed concern that future learners may struggle to distinguish between AI-generated and human-generated content, making foundational AI literacy essential. For Olivia, preparing for the future means grounding innovation in human values, ensuring teachers remain central, and equipping learners to navigate an increasingly AI-saturated world with confidence and discernment.

Closing Message: If Listeners Could Walk Away With One Key Message From Today’s Discussion, What Would It Be and Why?

Olivia stressed that AI should be embraced with optimism for its ability to personalise learning and expand access, but never at the expense of human values. Technology can transform education, but it cannot replace the norms, ethics, and relationships that define meaningful learning.

Daniel summarised the discussion using framing from the Brookings Institution’s recent publication: “Prosper, Prepare, and Protect.” AI offers real opportunities to improve learning, but systems must actively prepare educators and learners, while protecting children from unintended harms—particularly those affecting long-term development.

This episode draws on expertise from the following discussants:

  • Olivia Basrin is the Country Lead for Google for Education in Indonesia and Malaysia, where she spearheads growth strategies and digital education reforms, including expansion into Brunei and Timor-Leste. With a strong background in public policy and government affairs, she previously held leadership roles at PT HM Sampoerna and Philip Morris Asia, and served in the Office of the President of Indonesia. Olivia holds a Master in Public Policy from the National University of Singapore.
  • Daniel Plaut works as a learning partner to education implementers to further their impact and strengthen systems. With over twelve years of experience in education innovation, he has recently focused on the role of EdTech and artificial intelligence. He currently serves as Program Director for Education at Results for Development (R4D) and as Global Learning Lead for EdTech Hub’s AI Observatory and Action Lab, where he supports education decision-makers in navigating the opportunities and challenges of AI in education.

Episode 1: Beyond Buzzwords — What Do Real Partnerships Look Like?

Episode 2: Beyond Buzzwords — Who Pays? The Future of EdTech Financing

Episode 3: Beyond Buzzwords — Rules of the Game: Governing AI and EdTech in the ASEAN Region

Statement of disclosure: This blog was developed with support from generative AI. A transcript of the recorded session was generated, and the content was organised into question-based segments, which were then provided to an AI tool to assist in the drafting and structuring of this piece.

Acknowledgements

Thank you to Olivia Basrin, Pundika, and the team at Google for Education; to Daniel Plaut at Results for Development (R4D); and to everyone at EdTech Hub supporting this work, including Neema Jayasinghe, Sangay Thinley, Jazzlyne Gunawan, Sophie Longley, Jillian Makungu, and Laila Friese, for developing this fifth episode of the EdTech Hub Spotlight Series.

This publication has been produced by EdTech Hub as part of the ASEAN-UK Supporting the Advancement of Girls’ Education (ASEAN-UK SAGE) programme. ASEAN-UK SAGE is an ASEAN cooperation programme funded by UK International Development from the UK Government. The programme aims to enhance foundational learning opportunities for all by breaking down barriers that hinder the educational achievements of girls and marginalised learners. The programme is in partnership with the Southeast Asian Ministers of Education Organization (SEAMEO), the British Council, the Australian Council for Educational Research, and EdTech Hub.

This material has been funded by UK International Development from the UK Government; however, the views expressed do not necessarily reflect the UK Government’s official policies.
