Japan needs a better strategy for child-centered AI

2022/3/17 | Author: enw_editor

Artificial intelligence (AI) systems are increasingly being used by governments and businesses in fields ranging from education to healthcare to welfare services. And while AI is a force for innovation, it also poses a risk to children, threatening their privacy, safety and security. Yet most AI policies and strategies today say little, if anything, about the relationships between AI and children. Like most countries, Japan needs to give more attention to issues of AI and children in its national AI strategy, particularly when it comes to protecting children’s data and privacy and cultivating children as a future workforce.

Japan should look to UNICEF and the Government of Finland for ways to fill the gap. Late last year, the two parties jointly hosted the Global Forum on AI for Children, a virtual conference that brought together experts, policymakers, practitioners, researchers, children and young people to share their knowledge, expertise and experience on the use of AI systems by and for children.

The event was part of a two-year project aimed at exploring child-centered AI. It coincided with the launch of Version 2.0 of their Policy Guidance on AI for Children, which describes the potential positive and negative impacts of AI on children and recommends ways to leverage the former and mitigate the latter. The event sought to promote understanding of the guidance and of case studies from its pilot implementation.

Potential impacts of AI systems on children

Today’s children are the first generation that will never remember life before smartphones. They are growing up at a time when AI-enabled applications and devices are becoming rapidly more prevalent. They may even be the first generation to ride in self-driving cars as a normal part of everyday life.

For these reasons, AI Policy Guidance 2.0 asserts that today’s generation of children must start preparing for AI-related risks that older generations have never encountered, and do so before such risks fully arrive and become commonplace. Many governments and organizations are already developing “human-centered” AI policies and systems, but few have begun developing policies that consider AI’s impacts on children specifically. Caution is needed because the impacts of AI-enabled technologies on children remain uncertain and will continue to evolve.

The magnitude of AI’s impacts will vary depending on each child’s socioeconomic, geographic and cultural context, as well as on their stage of physical and psychological development. Even AI systems that are not designed for children can affect them indirectly. AI systems therefore need to be designed with children in mind, regardless of their intended users.

AI-related risks and opportunities for children

AI Policy Guidance 2.0 lists the following as potential risks of AI for children.

  • Systemic and automated discrimination and exclusion through bias
  • Limitation of children’s opportunities and development by AI-based predictive analytics and profiling
  • Infringement of data protection and privacy rights
  • Exacerbation of the digital divide

Conversely, AI presents the following opportunities.

  • Aid children’s education and development
  • Contribute to better health outcomes for children
  • Support the achievement of the SDGs

Governments, businesses and other organizations should develop AI policies and strategies that strike the best balance between these potential risks and opportunities. AI Policy Guidance 2.0 recommends that any organization associated with these risks and opportunities, even minimally, meet the nine requirements for child-centered AI listed below.

  1. Support children’s development and well-being
  2. Ensure inclusion of and for children
  3. Prioritize fairness and non-discrimination for children
  4. Protect children’s data and privacy
  5. Ensure safety for children
  6. Provide transparency, explainability, and accountability for children
  7. Empower governments and businesses with knowledge of AI and children’s rights
  8. Prepare children for present and future developments in AI
  9. Create an enabling environment

According to IBM, “explainability” in Requirement 6 refers to “a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms,” recognizing that “as AI becomes more advanced, humans are challenged to comprehend and retrace how the algorithm came to a result.” Explainability, for example, “might be important in allowing those affected by a decision to challenge or change that outcome.”
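
To make that idea concrete, here is a minimal sketch, our own illustration rather than anything from the guidance or IBM’s tooling, of explainability in its simplest form: a transparent scoring model that reports each feature’s contribution to its output, so that a person affected by a decision could see why it was made and challenge it. The feature names and weights are hypothetical.

```python
# Hypothetical, transparent scoring model: every prediction can be broken
# down into per-feature contributions, which is the core of "explainability".
feature_names = ["reading_score", "attendance_rate", "quiz_average"]
weights = [0.5, 0.2, 0.3]  # hypothetical learned weights
bias = -0.1

def explain_prediction(features):
    """Return the prediction plus each feature's contribution to it."""
    contributions = {
        name: w * x for name, w, x in zip(feature_names, weights, features)
    }
    prediction = sum(contributions.values()) + bias
    return prediction, contributions

pred, contribs = explain_prediction([0.9, 0.95, 0.7])
print(f"predicted score: {pred:.2f}")
# List the largest drivers of the decision first, so the basis for the
# outcome can be inspected and, if necessary, contested.
for name, value in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")
```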

Child-centered AI: A model example from Sweden

Sweden is one country that has already started experimenting with ways to place children’s rights at the center of AI system development. Since the UN Convention on the Rights of the Child (CRC) was incorporated into Swedish law in January 2020, around 60% of Swedish municipalities have been working to reflect the CRC in their policies and processes.

Many of those initiatives, however, were stopgap or one-off measures. Stakeholders voiced a need for more comprehensive policies and strategies. To address this, Sweden launched two initiatives: City Track and National Track. Each aims to identify the components needed for incorporating child-centered AI systems into government and private-sector operations. Based on results obtained from three cities in the City Track, the National Track is advancing development of a national strategy through cross-sector collaboration and with sponsorship from the Swedish Innovation Agency.

In Japan, the pandemic spurred the rapid adoption of tablet devices in elementary and junior high school classrooms. Many schools, however, have left it up to parents and guardians to decide how to manage those devices when they are brought home. Given the uncertainty surrounding the potential impacts of AI on children, including possible violations of their rights, leaving such decisions entirely to parents is a tough ask.

Following the examples set by Finland and Sweden, Japan, from the national government to municipalities, businesses and academia, faces a growing urgency to assess the potential impacts of AI and to develop policies and strategies that ensure a safer, more secure and more empowering educational environment for children.

Co-authored by Nao Okayama and Stephen Jensen