The Trevor Project shows how even the simplest AI can help save lives

At least one LGBTQ+ youth between the ages of 13 and 24 attempts suicide every 45 seconds. That’s not the kind of problem you’d usually try to solve with artificial intelligence.

Machines aren’t smart enough to take the mental health crisis head-on. Sure, AI-powered therapy bots can be effective at augmenting human counseling sessions or reinforcing cognitive behavioral therapy techniques.

But, when it comes to mental health crises, there’s no substitute for a qualified, empathetic human.

That’s exactly why The Trevor Project was established. Its core services include phone, text, and chat support for queer youth in crisis, all of which, unfortunately, require a lot of volunteers.

The group knew there was no way AI could directly bolster those numbers – The Trevor Project insists that everyone reaching out for help will always speak with a real, live human whether on the phone, via text, or in chat.

So they flipped the script. Instead of automating the counselors, they created the Crisis Contact Simulator, essentially chatbots that pretend to be LGBTQ+ youth in crisis in order to help train more volunteers.

Per a press release earlier this year:

Counselors receive a combination of training that includes best practices and core guidelines augmented by roleplaying exercises with both human and AI participants.

The chatbots are designed to represent LGBTQ+ youth in trouble. The first, released earlier this year, was Riley, a fictional teen from North Carolina who “feels anxious and depressed.”

Counselors interact with Riley to practice the skills they learn during training and to prepare for the real thing on live crisis lines.
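The piece doesn’t describe the simulator’s internals, but the general pattern behind persona-driven roleplay bots is simple enough to sketch. Below is a minimal, hypothetical Python example: the persona text, the generate() stub, and the conversation loop are all assumptions for illustration, not The Trevor Project’s actual implementation.

```python
# Hypothetical sketch of a persona-driven training chatbot.
# This is NOT The Trevor Project's Crisis Contact Simulator -- it only
# illustrates the general pattern: a fixed persona steers a conversational
# model so trainees can practice against a consistent character.

PERSONA = (
    "You are roleplaying 'Riley', a fictional teen from North Carolina who "
    "feels anxious and depressed. Stay in character and respond the way a "
    "teen in crisis might; never break character to give counseling advice."
)

def generate(prompt: str) -> str:
    """Placeholder for whatever conversational model a real simulator would call."""
    # A production system would send `prompt` to a trained language model here.
    return "i don't know... everything just feels like too much lately"

def simulator_reply(history: list, counselor_message: str) -> str:
    """Record the counselor's turn, build a persona-conditioned prompt, and reply in character."""
    history.append(f"Counselor: {counselor_message}")
    prompt = PERSONA + "\n" + "\n".join(history) + "\nRiley:"
    reply = generate(prompt)
    history.append(f"Riley: {reply}")
    return reply

conversation = []
print(simulator_reply(conversation, "Hi Riley, thanks for reaching out. What's been going on?"))
```

In a setup like this, introducing a new training character is largely a matter of swapping in a different persona description.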

According to The Trevor Project, it has been able to add 1,000 additional volunteers since implementing the Crisis Contact Simulator.

Building on that success, the organization recently announced “Drew,” a new persona meant to introduce counselors to a different challenge.

It’s refreshing to see an organization dedicated to using AI for good. And it’s super cool to see the same basic technology that powers the “How can we help?” bots on just about any commerce website helping The Trevor Project save lives every day.

If you or someone you know is experiencing a mental health crisis, you can contact a helpline by finding the appropriate listing for your location here.

To reach The Trevor Project’s helplines, check out the image below:

Neural’s AI predictions for 2022

Welcome to the fifth annual “Neural’s AI predictions” article! That makes this one of the longest-running series in Neural’s history. And, this year, we aim to set the bar higher than it’s ever been with our best round of insights yet.

Gather round, grab some hot chocolate, and let’s see what the experts think is about to happen next in the world of AI:

Natalie Monbiot, Head of Strategy at Hour One:

Max Versace, CEO and co-founder of Neurala:

Andy Hock, Head of Product at Cerebras Systems:

Yashar Behzadi, CEO and founder of Synthesis AI:

Michael Krause, Senior Manager of AI Solutions at Beyond Limits:

Kim Duffy, Senior Life Science Product Manager at Vicon:

Here’s hoping your holidays are great and 2022 is your best year ever! We’ll be here to bring you all the news, analysis, and opinion you’ve come to expect from the Neural team.

In the meantime, you can check out last year’s predictions here and, as always, time-travelers can check out next year’s at their convenience.

MIT removes huge dataset that teaches AI systems to use racist, misogynistic slurs

MIT has taken offline a massive and highly cited dataset that trained AI systems to use racist and misogynistic terms to describe people, The Register reports.

The training set — called 80 Million Tiny Images, as that’s how many labeled images it scraped from Google Images — was created in 2008 to develop advanced object detection techniques. It has been used to teach machine-learning models to identify the people and objects in still images.

As The Register’s Katyanna Quach wrote: “Thanks to MIT’s cavalier approach when assembling its training set, though, these systems may also label women as whores or bitches, and Black and Asian people with derogatory language. The database also contained close-up pictures of female genitalia labeled with the C-word.”

The Register managed to get a screenshot of the dataset before it was taken offline (image credit: The Register).

80 Million Tiny Images’ “serious ethical shortcomings” were discovered by Vinay Prabhu, chief scientist at privacy startup UnifyID, and Abeba Birhane, a PhD candidate at University College Dublin. They revealed their findings in the paper Large image datasets: A pyrrhic win for computer vision?, which is currently under peer review for the 2021 Workshop on Applications of Computer Vision conference.

The damage done by such ethically dubious datasets reaches far beyond bad taste; the dataset has been fed into neural networks, teaching them to associate images with words. This means any AI model that uses the dataset is learning racism and sexism, which could result in sexist or racist chatbots, racially biased software, and worse. Earlier this year, Robert Williams was wrongfully arrested in Detroit after a facial recognition system mistook him for another Black man. Garbage in, garbage out.
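To make the “garbage in, garbage out” point concrete, here is a minimal, hypothetical Python sketch (it is not the 80 Million Tiny Images pipeline, and the data is made up) showing why a model trained on labeled images can only parrot the label strings it was given: if a slur appears as a label, the slur becomes a possible prediction.

```python
# Toy illustration of label propagation: a classifier trained on labeled
# images reproduces whatever label strings the dataset contains, whether
# they are accurate, neutral, or offensive. Purely hypothetical data.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for tiny 32x32 images, flattened to vectors.
train_images = rng.random((100, 32 * 32))
# Stand-in label strings; in a poisoned dataset, some of these would be slurs.
train_labels = np.array(["dog"] * 50 + ["person"] * 50)

def predict(query_image: np.ndarray) -> str:
    """Nearest-neighbor 'model': return the label of the closest training image."""
    distances = np.linalg.norm(train_images - query_image, axis=1)
    return str(train_labels[np.argmin(distances)])

# Any new image gets whichever training label is nearest -- the model has no
# notion of whether that label is appropriate, only of pixel similarity.
print(predict(rng.random(32 * 32)))
```

The same mechanism scales up: whether the downstream model is a nearest-neighbor toy or a deep network, the label vocabulary of the training data bounds what the system can say about the people in an image.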

As Quach wrote: “The key problem is that the dataset includes, for example, pictures of Black people and monkeys labeled with the N-word; women in bikinis, or holding their children, labeled whores; parts of the anatomy labeled with crude terms; and so on – needlessly linking everyday imagery to slurs and offensive language, and baking prejudice and bias into future AI models.”

After The Register alerted the university about the training set this week, MIT removed the dataset and urged researchers and developers to stop using the training library and delete all copies of it. The university also published an official statement and apology on its site.

Examples of AI showing racial and gender bias and discrimination are numerous. As TNW’s Tristan Greene wrote last week: “All AI is racist. Most people just don’t notice it unless it’s blatant and obvious.”

“But AI isn’t a racist being like a person. It doesn’t deserve the benefit of the doubt, it deserves rigorous and constant investigation. When it recommends higher prison sentences for Black males than whites, or when it can’t tell the difference between two completely different Black men it demonstrates that AI systems are racist,” Greene wrote. “And, yet, we still use these systems.”
