AFP wants happy pictures of your childhood


Researchers train AI to recognize happy children. Photo: Shutterstock

Nine months after Apple dropped plans to scan users’ photo libraries for child sexual abuse material (CSAM), a Monash University research team is taking a different approach by calling on adults to voluntarily contribute 100,000 childhood photos for a new AI project.

Launched by the Monash-based AiLECS Lab – a joint venture between the university, the Australian Federal Police (AFP) and the Westpac Safer Children Safer Communities grant program – the new My Pictures Matter (MPM) project aims to use machine learning (ML) techniques to, in effect, mathematically model what a happy child looks like.

One of the main goals of the project, which has received formal academic ethics approval, is to circumvent the ML image-analysis convention that algorithms must be trained on large numbers of the type of image they are meant to analyze.

Researchers building machine vision systems often train their algorithms on freely available photo libraries, or by scraping images from social media sites or the wider internet – an approach that recently saw facial-recognition company Clearview AI investigated by privacy commissioners and ordered to delete its images.

Using the same technique for CSAM investigations would require the algorithms to be fed large amounts of child pornography material, which would raise serious ethical, moral and legal questions.

Instead, the Monash team is recruiting large numbers of adults to donate happy childhood photos to MPM and to formally consent to their use in training an AI/ML engine on a large corpus of images of normal childhood events.

The images will be anonymized and stored securely, with access limited to Monash researchers and the AFP, under a ‘data minimization’ approach: the team will not collect any information from participants beyond their email addresses, which will be stored separately from the images.
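The project has not published its intake pipeline, but the description above suggests a simple pattern: images keyed by a random identifier, contact details held in a separate store. The sketch below is a hypothetical illustration of that idea only – the file paths, database layout and function names are assumptions, not the AiLECS Lab implementation.

```python
import secrets
import sqlite3
from pathlib import Path

# Hypothetical "data minimisation" intake step: the contributed image is
# stored under a random identifier, while the contributor's email (the only
# personal detail collected) is kept in a separate store.

IMAGE_DIR = Path("contributions/images")        # assumed location
CONTACT_DB = Path("contributions/contacts.db")  # assumed separate email store


def ingest_submission(image_bytes: bytes, email: str) -> str:
    """Store an image under a random ID and record the email separately."""
    submission_id = secrets.token_hex(16)  # no link to the contributor's identity

    # Save the image itself, keyed only by the random ID.
    IMAGE_DIR.mkdir(parents=True, exist_ok=True)
    (IMAGE_DIR / f"{submission_id}.jpg").write_bytes(image_bytes)

    # Record the email in a separate database so consent can later be
    # confirmed or withdrawn without exposing it alongside the images.
    with sqlite3.connect(CONTACT_DB) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS contacts (submission_id TEXT, email TEXT)"
        )
        conn.execute("INSERT INTO contacts VALUES (?, ?)", (submission_id, email))

    return submission_id
```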

Easing the burden on police

Analysis of the collected, consented “safe” images will provide what AiLECS Lab researchers believe is the world’s first large-scale, consent-based, ethically managed AI model.

Having established what “safe” childhood images look like, the working hypothesis is that an AI subsequently presented with a CSAM image during an AFP investigation would detect features that deviate from that norm – flagging the image as potential abuse material.
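The researchers have not disclosed their model architecture, but the hypothesis described here resembles one-class, or anomaly-detection, learning: train only on “safe” material and flag anything whose features fall far outside that distribution. The following is a minimal sketch of that general idea using scikit-learn’s IsolationForest on stand-in feature vectors – an illustration of the “learn the norm, flag the deviation” logic, not the project’s actual method.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# One-class / anomaly-detection sketch: the model only ever sees feature
# vectors from consented "safe" childhood photos, and later flags images
# whose features deviate strongly from that learned norm.

rng = np.random.default_rng(0)

# Stand-in for feature vectors (e.g. from a pretrained image encoder) of the
# consented training images -- random data here, purely for illustration.
safe_features = rng.normal(loc=0.0, scale=1.0, size=(100_000, 128))

model = IsolationForest(contamination="auto", random_state=0)
model.fit(safe_features)


def flag_for_review(features: np.ndarray) -> bool:
    """Return True if an image's features fall outside the 'safe' norm."""
    # predict() returns -1 for outliers, 1 for inliers.
    return model.predict(features.reshape(1, -1))[0] == -1


# Example: a vector far from the training distribution gets flagged.
suspect = rng.normal(loc=5.0, scale=1.0, size=128)
print(flag_for_review(suspect))  # likely True
```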

“This is good exploratory research and we will investigate how ML technologies can then be applied with other datasets to assess whether visual files may contain dangerous images of children,” researcher Dr Nina Lewis, of the university’s Department of Software Systems & Cybersecurity, told Information Age.

These assessments are usually carried out manually by teams of specially trained AFP officers at agencies such as the Australian Centre to Counter Child Exploitation (ACCCE), for whom the need to view thousands of often brutal images takes a heavy emotional toll and can even induce post-traumatic stress disorder (PTSD).

The ACCCE received more than 33,000 reports of online child exploitation in 2021 alone, with AFP Leading Senior Constable Dr Janis Dalins warning that “reviewing this horrifying material can be a slow process, and the constant exposure can cause significant psychological distress to investigators.”

The MPM project could minimize that burden, Lewis said: “Anything we can do to not replace humans in this process, but to help sort out some of the material and deal with the magnitude of the problem is really going to be of great help.”

Pushing AI too far?

Yet even as the Monash team explores the effectiveness and viability of consent-based image collection, previous projects have struggled to apply similar AI-based image analysis in an ethical and effective way.

Apple’s high-profile plans to scan its users’ iCloud Photo Libraries for potential CSAM were welcomed by child protection advocates, but fears that the system could become a tool for de facto mass surveillance – able to automatically flag individuals’ social and professional associations – forced the company to put the plans on hold.

AI has a patchy history when applied to child protection decision-making: years ago, for example, the UK’s Metropolitan Police announced plans to harness AI to search seized computer devices for CSAM – yet this month, child protection officials in the US state of Oregon announced they would stop using an AI-powered tool designed to flag families for investigation over potential child neglect.

The algorithm, authorities concluded, had produced racially biased results despite its designers’ best efforts to avoid bias with an “equity correction”.

Lewis is well aware that similar problems could arise in the MPM project – which will not accept any images of naked children even if they are in innocuous environments like the bathtub – as the number of images provided increases.

“We absolutely anticipate that there will be imbalances in the representation of what we get,” she explained, “and it could be related to ethnicity, race, the age of the individuals in the photo, and the age of the photos themselves.”

“Due to the nature of crowdsourcing, we won’t really know what we’re getting until we get it.

“Developing machine learning technologies obviously requires a lot of data, and we’re really interested in how this can be done in an ethical way.”
