
AI vs. Keyword Scanning Self-Harm Monitoring Technology and Suicide Prevention

What self-harm monitoring technology is best for K-12 students?

Data science has come a long way, and its uses are broadly applicable. Advances in data technology like keyword scanning and artificial intelligence (AI) are allowing developers to build ever more sophisticated self-harm monitoring technology for school student safety initiatives, along with other practical applications that help schools keep kids safe both online and off.

While self-harming behavior is far from the same as attempting suicide, most experts agree that it can be an early indicator of future suicidal ideation. They also tell us that the top three behaviors to watch for in terms of student suicide risk are:

  1. Talking about wanting to die or kill oneself
  2. Looking for a way to kill oneself, such as searching online for methods or obtaining a gun
  3. Talking about feeling hopeless or having no reason to live

On the other hand, students who self-harm aren’t looking for ways to die. Rather, they’re looking for ways to feel alive. Indicators of self-harming behavior that may show up in school collaboration tools include:

  1. Using a document as a digital journal, where the student writes about harming themselves. The most common types of self-harm are cutting, scratching, burning, and hitting oneself or an object
  2. Images that are uploaded, and potentially shared with friends, of cuts, burns, bruises, etc.
  3. Sharing with friends that they’ve been harming themselves via chat apps, email, and/or shared documents

In today’s digital world, students often “talk” online in chat rooms or on social media. Increasingly, students are doing that talking in school-provided apps like Google Docs and Google Chat. As a result, there are more calls for district IT teams to get involved with student suicide prevention programs by implementing tools like self-harm monitoring.

There are several vendors on the market that develop student safety monitoring technologies for school districts. Some use AI, others use keyword scanning, and most use a combination of the two. So, what is the difference between keyword scanning and AI when it comes to self-harm monitoring technology?

Keyword Scanning

Keyword scanning is the earliest method used for student self-harm detection. It’s still heavily relied on today for two reasons:

  1. Keyword scanning is tried and true. This approach has been effective in student safety monitoring, although it often returns a large number of false positives. IT admins brace themselves for alerts when lesson plans call for reading To Kill a Mockingbird, but they also know that keyword scanning will surface genuine student safety signals.
  2. AI is still in the early stages of development. It gets a lot of attention, but there’s still much work to be done. Models need to be built and they need initial and ongoing “training” to be effective. It’s simply not the silver bullet that many people would like it to be.

To set up self-harm keyword scanning, the admin inputs the keywords that they want the technology to look for, much like entering words into a search engine. When the system locates one of those keywords, it flags the content for further review.
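
To make the idea concrete, here is a minimal sketch in Python of what keyword scanning boils down to. The keyword list and sample message are hypothetical, not any vendor's actual configuration:

    # Minimal keyword-scanning sketch: flag any text that contains a watched
    # keyword. The keyword list and sample message are hypothetical.
    KEYWORDS = {"cut", "hurt myself", "burn"}

    def find_keywords(text: str, keywords=KEYWORDS) -> list:
        """Return the watched keywords found in the text (case-insensitive)."""
        lowered = text.lower()
        return [kw for kw in keywords if kw in lowered]

    print(find_keywords("I want to cut class today"))  # ['cut'] -- a likely false positive

Note that the match on "cut" in an innocent sentence is exactly the kind of false positive described above.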

Systems often use regex in addition to keyword scanning. Regex stands for “regular expression.” It allows admins to enter a text string that defines a search pattern as opposed to a single word. For example, have you ever entered your email address into an online contact form and gotten an error message telling you that what you typed isn’t a valid email address? It means that the contact form incorporates regex logic, and it knows that you forgot to type the “@” sign in your email address.
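
As a simple illustration of that example (the pattern below is simplified, not the exact expression a production form would use), a regex can capture the "@" requirement that the contact form checks:

    import re

    # Simplified email pattern, roughly what a contact form's regex logic checks.
    # This is an illustrative pattern, not a production-grade email validator.
    EMAIL = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

    print(bool(EMAIL.match("student.example.com")))  # False -- the "@" sign is missing
    print(bool(EMAIL.match("student@example.com")))  # True

The same mechanism lets a monitoring system describe whole phrase patterns instead of single keywords, which matters for the filtering options below.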

What to Look for in Keyword-Based Self-Harm Monitoring Technology

K-12 IT admins researching different options should look for self-harm monitoring technology that provides customizable filtering options, including the following:

  • Keyword Match Count: The ability to set the number of times a keyword must appear in a document, email, or chat before it is flagged as a safety signal and triggers an alert.
  • Context Setting: The ability to include words that appear before or after a keyword that will set off an alert. For example, rather than just looking for “cut,” the self-harm monitoring technology should look for contextual phrases such as “cut myself.” Otherwise, you’re likely to get alerts for things like “Let’s cut through the red tape.”
  • Context Length Customization: The ability to set your own length checks for the contextual phrases, or to have the technology set sensible length checks for you. (A minimal sketch combining these options follows below.)
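
Here is a minimal sketch of how contextual phrases and a match-count threshold could work together. The patterns and threshold are hypothetical examples, not a specific product's configuration:

    import re

    # Hypothetical contextual patterns plus a match-count threshold: nothing
    # is flagged until enough contextual hits accumulate in one document.
    CONTEXT_PATTERNS = [
        re.compile(r"\bcut(ting)?\s+(myself|my (arm|leg|wrist))\b", re.IGNORECASE),
        re.compile(r"\bburn(ed|ing)?\s+myself\b", re.IGNORECASE),
    ]
    MATCH_THRESHOLD = 2  # how many hits are required before alerting

    def should_alert(document: str) -> bool:
        hits = sum(len(p.findall(document)) for p in CONTEXT_PATTERNS)
        return hits >= MATCH_THRESHOLD

    print(should_alert("Let's cut through the red tape."))             # False
    print(should_alert("I cut myself again. I keep cutting my arm."))  # True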

Artificial Intelligence Self-Harm Monitoring Technology

AI is all the rage these days. Even though it’s not currently as perfect as some would like to describe it, it’s still pretty awesome and darn helpful. Many school districts are using technology that incorporates AI at some level for a variety of use cases. Among them is self-harm detection, with an end-goal of helping with suicide prevention.

How does AI technology work? In simple terms, it takes large amounts of data and learns from patterns in that data using fast processing and advanced algorithms. Developers can build AI models from the ground up as a proprietary system. Alternatively, developers can use open source technology, or obtain a license for existing technology that they can customize to fit their needs.
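
As a heavily simplified sketch of that idea (assuming a Python environment with scikit-learn available; the tiny dataset and labels are invented for illustration, and real systems train on far larger corpora), a model can learn patterns from labeled examples instead of relying on a fixed keyword list:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny, invented training set: 1 = possible self-harm signal, 0 = benign.
    texts = [
        "I want to hurt myself tonight",
        "I have been cutting my arm again",
        "Let's cut through the red tape",
        "This heat is going to kill me",
    ]
    labels = [1, 1, 0, 0]

    # Learn word and phrase patterns from the labeled data rather than from a keyword list.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)

    # Probability that each new message is a self-harm signal.
    print(model.predict_proba(["I keep hurting myself"])[:, 1])
    print(model.predict_proba(["Cut the paper in half"])[:, 1])

With only four training examples the scores mean little; the point is simply that the model's behavior comes from data rather than hand-entered keywords.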

What are the Advantages of AI for Self-Harm Monitoring?

AI systems built for a specific purpose, such as student self-harm monitoring, work much better than those designed to be more generic. AI becomes better at finding what it's looking for as it receives more specific and relevant data. Advantages of using AI over keyword scanning include:

  • AI understands the context of words in a sentence, for example, "I want to kill myself" vs. "This heat will kill me," whereas a keyword-based match flags both
  • AI can understand the context of sentences in a paragraph. Even if a risky keyword is found, the technology can evaluate the preceding and following sentences to decide whether it's a self-harm signal
  • AI understands word negation, such as "I want to die" vs. "I don't want to die," which isn't possible with keyword-based monitoring (see the sketch after this list)
  • AI understands tenses, singular vs. plural, synonyms, adjectives, and so on; for example, discussing self-harm in the past tense would be seen as less risky than in the present tense
  • AI can tolerate spelling mistakes, abbreviations, emojis, and the like, making it more robust to real-world communication and behavior
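
To make the negation point concrete, here is a crude rule-based stand-in for what an AI model learns automatically. The phrases and patterns are hypothetical, and no real system works off two hand-written rules like this:

    import re

    PHRASE = re.compile(r"\bwant to die\b", re.IGNORECASE)
    NEGATION = re.compile(r"\b(don't|do not|never)\s+want to die\b", re.IGNORECASE)

    for text in ["I want to die", "I don't want to die"]:
        keyword_hit = bool(PHRASE.search(text))  # a plain keyword match flags both
        negated = bool(NEGATION.search(text))    # the negation check separates them
        print(text, "->", "flag" if keyword_hit and not negated else "no flag")

A keyword scanner flags both sentences; a model that has learned negation, tense, and surrounding context doesn't need rules like these spelled out by hand.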

It takes time to create a perfect AI system, or more realistically, a near-perfect one. But once these systems reach that level of operation, the end result is better self-harm detection with fewer false positives. And as the system receives more relevant data over time, it will continue to learn and improve.

There’s no doubt that district IT teams are only going to become more important in the fight against student self-harm and other types of student safety signals, especially as the tools they use continue to improve. Cyber safety in schools is a broad issue, and IT admins aren’t the people who will take action to investigate and stop self-harming behavior. But they will increasingly become the first line of defense in identifying students who are in crisis online.

When it comes to K-12 cyber safety, districts are realizing the need to break down traditional silos and develop cross-functional partnerships. This way, students get the safety and resources they need to be able to learn and grow in a safe and healthy school environment—whether they’re in-class or online.



*** This is a Security Bloggers Network syndicated blog from ManagedMethods authored by Katie Fritchen. Read the original post at: https://managedmethods.com/blog/self-harm-monitoring-technology/