Engage with children on ethical and social norms for artificial intelligence
Children's relational skills are a unique source of inspiration for the development of human-centred Artificial Intelligence (AI). Yet children are still rarely involved in the public dialogue on AI systems. DesignLab and the Amsterdam-based children's rights organisation KidsRights conducted unique research in the Netherlands that focuses entirely on children and young people. The aim of the research is to collect ethical and social values, covering both children's current interactions with AI systems and their outlook on the future. In doing so, DesignLab and KidsRights aim to contribute to broad and meaningful participation of children.
Lead researcher Karolina La Fors of DesignLab: 'The developments in AI are going so fast that we shouldn't forget to also look at the ethical limitations and what it means for children. Children contribute ethical standards that adults don't think about'.
KidsRights president Marc Dullaert calls on the Dutch government to involve children and young people in drawing up ethical standards for human-centred AI: 'Now everything that is technically possible seems to happen to us, including harmful effects. Children and young people are extremely vulnerable here. We must therefore protect and involve them and be inspired by them. With this research, children and young people are lecturing the government'.
Survey offers new insights
The representative survey of 374 children aged between 4 and 16 - most participants were between 6 and 13 - provides new and surprising insights. These are of great importance for the development of AI systems.
For instance, the researchers asked the children about positive and negative experiences with AI systems in their daily lives, such as systems taking on various social roles. When asked whether they would like to be helped by a robot salesperson in shops, 55.3% of the children said yes and 41% said no. The majority think robots perform better than humans in this role.
When asked whether a police officer could be a robot, 54.8% were against and 41.5% in favour. Those against believe robot police officers could threaten their safety. A robot doctor is a bad idea for most children: 61.3% are against, compared to 35.3% who would welcome it.
'I would feel safer with a normal human as a GP.'
Boy 11 years old, Dordrecht
'Can a robot be your friend?'
Children indicate that they consider human characteristics and values important and that these should not be lost in the development of AI systems. AI should serve humans; that human check is important to them. They do see AI as a potential solution to socially relevant problems, for example a robot that helps children with dyslexia. But a robot is not seen as a friend, mainly because it lacks human understanding, comfort, humour and empathy.
'If a robot is made to be my friend, it would only learn from me. So how can I know how to make others happy or what sadness is? How can I learn and adapt if I only learn what I am doing from a robot friend?'
Girl 8 years old, Enschede
Broad and early dialogue helps
Although many children (70.6%) did not know the term AI before the survey, by the end everyone realised that they deal with a wide variety of AI systems on a daily basis. Examples of smart systems in which children recognise AI include computers, smart TVs, thermostats, smartphones, smart doorbells, robotic lawnmowers and robotic hoovers. Children also recognise AI in, or associate it with, various brands and services, such as Google, PlayStation, YouTube, TikTok, Netflix, the Apple Watch, the NS 'ov-chipkaart' and McDonald's order kiosks.
These are some of the results from the research report AI register of children in the Netherlands: Children's awareness, ethical and social values and ideas about AI systems. La Fors and Dullaert explain that much remains unclear about the implications of AI systems for children's development. A broad and early dialogue helps here. Questioning children about their awareness of, and ideas about, AI is crucial for moving towards more human-centric AI systems and for discussing what it means to be human-centric. This is especially true in a world where technological developments such as ChatGPT, GPT Force and the Internet of Things are advancing at an unimaginable pace.