Spying on students to prevent suicide

Ms. Cholka was unaware that artificial intelligence software used by the Neosho School District in Missouri was tracking what her daughter Madi wrote on her school-issued laptop.

During the night, Madi had texted a friend to say she intended to kill herself with her anti-anxiety medication. A school official was alerted and called the police. When Ms. Cholka and the police officer found Madi, she had already swallowed about 15 pills. They pulled her out of bed and rushed her to the hospital.

Some 2,100 km away, around midnight, the landline phone rang in a house in Fairfield County, Connecticut, but the parents didn’t answer in time. Fifteen minutes later, three police officers were at the door: they wanted to see the couple’s 17-year-old daughter, because surveillance software had detected a risk of self-harm.

Her parents woke her and brought her down to the living room so the police could question her about a phrase she had typed on her cell phone at school. It was quickly determined to be a false alarm: the phrase was an excerpt from a poem she had written years earlier. But the visit shook the teenager to her core.

“It was one of the worst experiences of her life,” says her mother, who asked not to be named in order to discuss what she called a “traumatic” episode for her daughter.

Of the AI technologies entering American schools, few raise as many critical issues as those aimed at preventing self-harm and suicide.

This software became widespread during the COVID-19 pandemic, after many schools provided students with laptops and switched to remote learning.

A U.S. law requires these computers to be equipped with filters that block certain content. Educational technology companies – GoGuardian, Gaggle, Lightspeed, Bark and Securly, among others – saw an opportunity to address suicidal and self-harming behavior. They added tools that scan what students type and alert the school if a student appears to be considering harming themselves.

Millions of American students – nearly half, by some estimates – are now subject to such surveillance, the details of which are disclosed to parents once a year, when they give their consent.

Most systems flag keywords, with algorithms or human reviewers determining which cases are serious. During the school day, students can be pulled out of class and assessed.

Outside school hours, if parents cannot be reached by phone, the police may be sent to students’ homes to check on them.

There is no independent way to assess the accuracy, benefits, and drawbacks of these alerts: the underlying data belongs to the technology companies, and records of each subsequent intervention and its outcome are generally held by the school authorities.

According to parents and school staff, alerts sometimes make it possible to intervene at critical moments. More often, they are used to offer support to struggling students before they harm themselves.

However, alerts can have unforeseen, sometimes harmful, consequences. Advocacy groups warn of risks to privacy, and the systems have been criticized for bringing students into unnecessary contact with the police.

Opinions are divided on the mental health benefits of these tools. False positives are common, wasting staff time and upsetting students. In some districts, home visits outside school hours have generated so much controversy that interventions are now limited to the school day.

But staff at some schools stress that the software helps with a very difficult task: recognizing, in time, children who are suffering in silence. Talmage Clubbs, Director of Guidance Services for the Neosho School District, was reluctant to turn the system off, even during summer vacation, for moral reasons: “It’s difficult: if you turn it off, someone could die.”
