Twitter will reveal how its algorithmic biases cause ‘unintended harms’

Twitter has launched a new initiative called “Responsible ML” that will investigate the harms caused by the platform’s algorithms.

The company said on Wednesday that it will use the findings to improve the experience on Twitter.

The move comes amid mounting concerns around social media algorithms amplifying biases and spreading conspiracy theories.

A recent example of this on Twitter involved an image-cropping algorithm that automatically prioritized white faces over Black ones.

Twitter said the image-cropping algorithm will be analyzed by the Responsible ML team.

They’ll also conduct a fairness assessment of Twitter’s timeline recommendations across racial subgroups, and study content recommendations for different political ideologies in seven countries.
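To give a sense of what a subgroup-level fairness check can involve, here is a minimal, purely illustrative sketch that compares how often content from each group appears in a recommended slate versus the candidate pool. This is not Twitter’s methodology; the group labels, data, and the exposure_rates helper are hypothetical.

```python
# Illustrative only: a toy exposure comparison, NOT Twitter's actual fairness methodology.
# It checks whether each (hypothetical) subgroup's share of recommended items roughly
# tracks its share of the candidate pool.
from collections import Counter

def exposure_rates(candidates, recommended):
    """Return per-group share of items in the candidate pool vs. the recommended slate."""
    pool = Counter(group for _, group in candidates)
    shown = Counter(group for _, group in recommended)
    total_pool, total_shown = sum(pool.values()), sum(shown.values())
    return {
        group: {
            "pool_share": pool[group] / total_pool,
            "recommended_share": shown.get(group, 0) / total_shown,
        }
        for group in pool
    }

# Hypothetical data: (tweet_id, author_subgroup) pairs.
candidates = [(1, "group_a"), (2, "group_a"), (3, "group_b"), (4, "group_b"), (5, "group_b")]
recommended = [(1, "group_a"), (2, "group_a"), (3, "group_b")]

for group, rates in exposure_rates(candidates, recommended).items():
    print(group, rates)
```

Real audits are far more involved, but a persistent gap between pool share and recommended share is the kind of signal such an assessment looks for.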

Cautious optimism

Tech firms are often accused of using responsible AI initiatives to divert criticism and regulatory intervention. But Twitter’s new project has attracted praise from AI ethicists.

Margaret Mitchell, who co-led Google’s ethical AI team before her controversial firing in February, commended the initiative’s approach.

Twitter’s recent hiring of Rumman Chowdhury has also given the project some credibility.

Chowdhury, a world-renowned expert in AI ethics, was appointed director of ML Ethics, Transparency & Accountability (META) at Twitter in February.

In a blog post, she said Twitter will share the learnings and best practices from the initiative.

She added that her team is building explainable ML solutions to show how the algorithms work. They’re also exploring ways to give users more control over how ML shapes their experience.

Not all the work will translate into product changes, but it will hopefully at least provide some transparency into how Twitter’s algorithms work.

A Google algorithm misidentified a software engineer as a serial killer

Google’s algorithmic failures can have dreadful consequences, from directing racist search terms to the White House in Google Maps to labeling Black people as gorillas in Google Photos.

This week, the Silicon Valley giant added another algorithmic screw-up to the list: misidentifying a software engineer as a serial killer.

The victim of this latest botch was Hristo Georgiev, an engineer based in Switzerland. Georgiev discovered that a Google search of his name returned a photo of him linked to a Wikipedia entry on a notorious murderer.

“My first reaction was that somebody was trying to pull off some sort of an elaborate prank on me, but after opening the Wikipedia article itself, it turned out that there’s no photo of me there whatsoever,” said Georgiev in a blog post.

Georgiev believes the error was caused by Google’s knowledge graph, which generates infoboxes next to search results.

He suspects the algorithm matched his picture to the Wikipedia entry because the now-dead killer shared his name.
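As a purely hypothetical sketch of the failure mode he suspects, the toy example below attaches an infobox image by matching names alone, so two unrelated people who share a name get conflated. None of this reflects Google’s actual system; the data structures and the pick_infobox_image function are invented for illustration.

```python
# Illustrative only: a toy sketch of how naive, name-only entity matching can attach
# the wrong person's photo to a knowledge-panel entry. This is NOT Google's pipeline;
# every structure and function here is invented for illustration.

# A hypothetical knowledge-base entry for the killer.
entities = {
    "Hristo Georgiev": {"description": "serial killer (Wikipedia entry)"},
}

# A hypothetical pool of web images, keyed only by the name found near each image.
web_images = [
    {"name": "Hristo Georgiev", "url": "https://example.com/engineer-profile.jpg"},
]

def pick_infobox_image(entity_name):
    """Pick an infobox image by name match alone, the collision Georgiev describes."""
    for image in web_images:
        if image["name"] == entity_name:
            return image["url"]
    return None

# The engineer's photo ends up next to the killer's description, purely because of the shared name.
print(entities["Hristo Georgiev"]["description"], "<-", pick_infobox_image("Hristo Georgiev"))
```

Any disambiguation signal beyond the bare name, such as dates, locations, or linked entities, would break this false match.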

Georgiev is far from the first victim of the knowledge graph misfiring. The algorithm has previously generated infoboxes that falsely registered actor Paul Campbell as deceased and listed the California Republican Party’s ideology as “Nazism”.

In Georgiev’s case, the issue was swiftly resolved. After he reported the bug to Google, the company removed his image from the killer’s infobox. Georgiev gave credit to the Hacker News community for accelerating the response.

Other victims, however, may not be so lucky. If they never find the error — or struggle to resolve it — the misinformation could have troubling consequences.

I certainly wouldn’t want a potential employer, client, or partner to see my face next to an article about a serial killer.

Watch: This AI mashup of movie characters singing ‘All Star’ is the best DeepFake ever

Unpopular opinion: All Star, by Smash Mouth, is the greatest rock song ever created.

Okay, that’s a lie. But it does make for the most compelling use of DeepFake technology we’ve ever seen. You simply have not lived until you’ve seen Gerard Butler as Leonidas and the creepy Agent Smith from The Matrix singing out the lyrics.

Behold:

The above video was made using a DeepFake program called Wav2Lip, which was created using code from the paper “A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild.” It relies on facial recognition and artificial intelligence to create the illusion that these old movie characters are lip syncing along to the classic song.
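For readers who want to try it themselves, the sketch below shows roughly how the repository’s bundled inference script can be invoked from Python. The flag names and checkpoint filename follow the project’s documentation but may differ between versions, and the media paths are placeholders.

```python
# A minimal sketch of running Wav2Lip's bundled inference script on a clip and a song.
# Flag names reflect the repository's documented interface and may differ between
# versions; checkpoint and media paths are placeholders, not real files.
import subprocess

subprocess.run(
    [
        "python", "inference.py",
        "--checkpoint_path", "checkpoints/wav2lip_gan.pth",  # pretrained weights from the repo
        "--face", "movie_clip.mp4",                          # the footage to re-animate
        "--audio", "all_star.wav",                           # the track to lip-sync to
    ],
    check=True,
)
# Per the project's documentation, the lip-synced output lands in the repo's results/ directory.
```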

The original paper was written by researchers Prajwal K R and Rudrabha Mukhopadhyay.

Despite the potential for misuse, DeepFakes have managed to permeate mainstream society thanks to clever applications like this. A few of the clips show serious artifacts and it’s doubtful anyone’s going to be convinced this is ‘real,’ but it doesn’t have to be. This novel use, along with the easy-to-follow demo and tutorials available online, shows just how easy it is to make fun, entertaining footage using AI and some old movie reels.

For more information, check out the GitHub repository for Wav2Lip here and the original paper from which it draws its code here.

Huge tip of the hat to Boing Boing and Rob Beschizza for drawing our attention to this treasure.
