Mainstream adoption of facial recognition can have sinister consequences

On Dec. 14, the governments of British Columbia, Alberta, and Québec ordered facial recognition company Clearview AI to stop collecting — and to delete — images of people obtained without their consent. Discussions about the risks of facial recognition systems that rely on automated face analysis technologies tend to focus on corporations, national governments, and law enforcement. But just as concerning are the ways in which facial recognition and analysis have become integrated into our everyday lives.

Amazon, Microsoft, and IBM have stopped supplying facial recognition systems to police departments after studies showed algorithmic bias that disproportionately misidentifies people of color, particularly Black people.

Facebook and Clearview AI have dealt with lawsuits and settlements for building databases of billions of face templates without people’s consent.

In the United Kingdom, police face scrutiny for their use of real-time face recognition in public spaces. The Chinese government tracks its minority Uyghur population through face-scanning technologies.

And yet, to grasp the scope and consequences of these systems, we must also pay attention to the casual practices of everyday users who apply face scans and analysis in routine ways that erode privacy and reinforce social discrimination and racism.

As a researcher of mobile media visual practices and their historical links to social inequality, I regularly explore how user actions can build or change norms around matters like privacy and identity. In this regard, the adoption and use of face analysis systems and products in our everyday lives may be reaching a dangerous tipping point.

Everyday face scans

Open-source algorithms that detect facial features make face analysis or recognition an easy add-on for app developers. We already use facial recognition to unlock our phones or pay for goods. Video cameras incorporated into smart homes use facial recognition to identify visitors as well as personalize screen displays and audio reminders. The auto-focus feature on cellphone cameras includes face detection and tracking, while cloud photo storage generates albums and themed slideshows by matching and grouping faces it recognizes in the images we make.
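To give a sense of how low that barrier is, here is a minimal sketch using OpenCV’s stock Haar-cascade detector (the image file name is hypothetical); a dozen lines are enough to find faces in a photo:

```python
# A minimal sketch of face detection with the Haar-cascade model that
# ships with opencv-python. The input file name is hypothetical.
import cv2

# Load the stock frontal-face detector bundled with the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("visitor.jpg")  # hypothetical input photo
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is an (x, y, width, height) bounding box.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")
```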

Face analysis is used in many apps, including social media filters and accessories that produce effects like artificially aging faces and animating facial features. Self-improvement and forecasting apps for beauty, horoscopes, or ethnicity detection also generate advice and conclusions based on facial scans.

But using face analysis systems for horoscopes, selfies, or identifying who’s on our front steps can have long-term societal consequences: they can facilitate large-scale surveillance and tracking while sustaining systemic social inequality.

Casual risks

When repeated over time, such low-stakes and quick-reward uses can inure us to face-scanning more generally, opening the door to more expansive systems across differing contexts. We have no control over — and little insight into — who runs those systems and how the data is used.

If we already subject our faces to automated scrutiny, not only with our consent but also with our active participation, then being subjected to similar scans and analysis as we move through public spaces or access services might not seem particularly intrusive.

In addition, our personal use of face analysis technologies contributes directly to the development and implementation of larger systems meant for tracking populations, ranking clients, or developing suspect pools for investigations. Companies can collect and share data that connects our images to our identities, or fold it into larger data sets used to train AI systems for face or emotion recognition.

Even if the platform we use restricts such uses, partner products may not abide by the same restrictions. The development of new databases of private individuals can be lucrative, especially when these can comprise multiple face images of each user or can associate images with identifying information, such as account names.

Pseudoscientific digital profiling

But perhaps most troubling, our growing embrace of facial analysis technologies feeds into how these systems determine not only an individual’s identity but also their background, character, and social value.

Many predictive and diagnostic apps that scan our faces to determine our ethnicity, beauty, wellness, emotions, and even our potential earning power build on the disturbing historical pseudosciences of phrenology, physiognomy, and eugenics. These interrelated systems depended to varying degrees on face analysis to justify racial hierarchies, colonization, chattel slavery, forced sterilization, and preventative incarceration.

Our use of face analysis technologies can perpetuate these beliefs and biases, implying they have a legitimate place in society. This complicity can then justify similar automated face analysis systems for uses such as screening job applicants or determining criminality.

Building better habits

Regulating how facial recognition systems collect, interpret, and distribute biometric data has not kept pace with our everyday use of face scanning and analysis. There has been some policy progress in Europe and parts of the United States, but greater regulation is needed.

In addition, we need to confront our own habits and assumptions. How might we be putting ourselves and others, especially marginalized populations, at risk by making such machine-based scrutiny commonplace?

A few simple adjustments may help us address the creeping assimilation of facial analysis systems in our everyday lives. A good start is to change app and device settings to minimize scanning and sharing. Before downloading apps, research them and read the terms of use.

Resist the short-lived thrill of the latest social media face-effect fad — do we really need to know how we’d look as Pixar characters? Reconsider smart devices equipped with facial recognition technologies. Be aware of the rights of those whose image might be captured on a smart home device — you should always get explicit consent from anyone passing before the lens.

These small changes, if multiplied across users, products, and platforms, can protect our data and buy time for greater reflection on the risks, benefits, and fair deployment of facial recognition technologies.

Article by Stephen Monteiro, Assistant Professor of Communication Studies, Concordia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The 5 best fictional AIs in gaming

There’s something cool and meta about interacting with, playing as, or fighting against an artificial intelligence in video games. AI has captured our imagination in print, film, and even song, but games give us the space to interact with and see fantastical worlds from otherwise impossible perspectives.

To be absolutely clear, this isn’t about the games with the smartest “CPU AI.” We’re not discussing whether the “AI” in a game is tough to beat. We’re talking about fictional depictions of artificial intelligence. Here’s a handy primer on the difference between the two concepts.

This was a tough list to make, and I probably left your favorites off, but only because I decided that “best” meant the ones I felt had the most impact.

I hope you agree with at least one or two items on this list, but honestly, if you don’t and it starts a conversation about AI in video games, then that’s just as good.

On to the list:

5. The Cylons (Battlestar Galactica: Deadlock)

No, not the spunky little military-murder-machine-cum-friendly-fugitive “Number Five” (AKA Johnny Five) from the 1980s film “Short Circuit.” We’re starting the list backwards so we can count down to number one.

The Cylons from Battlestar Galactica: Deadlock kick things off because, well, they’re the perfect villain for a hardcore strategy wargame.

I love BSG:DL for a lot of reasons. As far as turn-based tactical strategy games go, I’m hard-pressed to think of one I enjoy more. In essence, it’s a naval combat game with the additional challenge of a vertical axis. The scope of the game is large enough to show off the sheer size of your command vessels and the dozens upon dozens of ships all participating in the dance of maneuver and reaction that is tactical warfare.

But what truly sets it apart is the enemy. When I play most war/strategy games, I’m forced to reckon with a certain level of politics. When I drop bombs on enemy cities in Hearts of Iron IV or send my warriors to sack a town in a round of Civilization VI, I know I’m killing innocent digital civilians too; I just can’t care if I want to win.

And that’s probably a good thing. We don’t want to get bogged down in the viscera and horror of war when we, for example, play the classic Battleship board game. It’s just a game, right?

But Battlestar Galactica: Deadlock lets me face the reality without feeling like a genocidal jerk. The Cylons aren’t humans. And, while they are sentient and probably deserve to live, it’s made abundantly clear that they won’t rest until every last human has been destroyed. And that gives the entire game a sense of gravitas and urgency that you just don’t get when you’re painting the map in most strategy games.

4. Claptrap (Borderlands)

This was an easy one: it’s Claptrap, it’s Claptrap, it’s always been Claptrap! I adore Claptrap. In fact, I’ve never met a gamer who doesn’t.

Claptrap is one of the best things to ever come out of the celebrated Borderlands franchise. It first appeared in the original game as a sort of guide, and its role continued to grow until, finally, it was made a playable character in Borderlands: The Pre-Sequel, the third title released.

There was a bit of controversy surrounding the release of Borderlands 3 because the original voice of Claptrap, Gearbox employee David Eddings, chose not to reprise the role. According to reports, he wasn’t offered pay commensurate with the gig. Gearbox said that wasn’t the case.

At any rate, while I certainly missed Eddings and laud his work as among the best in video games, his replacement, Jim Foronda, did an excellent job in part 3 as well.

Aside from being genuinely entertaining, hilarious, and occasionally endearing, the reason I included Claptrap on this list and not, say, LGBTQPIA+ icon FL4K, is that Claptrap isn’t just a supporting character (and one-time playable character): it’s a buffer between the gory, psychopathy-is-the-norm nature of the game and the random silliness that pervades the game’s world.

Without Claptrap, Borderlands is just Mad Max with fart jokes.

3. GLaDOS (Portal)

My personal favorite AI character of all time is GLaDOS. This AI was once a human before becoming a disembodied voice, a chip on a potato (get it?), and eventually a robot. The reason I like GLaDOS so much is that it’s just flat-out sassy. It’s the AI I’d most want to hang out with at a party. But, like, in a snarky queer way where we mock everyone else.

Portal was one of those games that changed the way everyone looked at gaming. People weren’t ready for the game’s stunning combination of jaw-dropping graphics, gut-busting comedy, and incredible gameplay.

But most of all, they weren’t ready for the psychotic, murderous, cake-promising-but-lying intelligence that is GLaDOS.

GLaDOS, for my money, is the most entertaining AI in games. Not only is it hilarious, it’s also a talented singer. The end credits for Portal feature the entity singing a song called “Still Alive” that was so catchy it ended up in Rock Band 3.

2. The machine empires (Stellaris: Synthetic Dawn)

The robot species from Synthetic Dawn, a Stellaris DLC. There’s no one character here I can point to, but that’s part of why it’s a very close second to being my favorite fictional AI: you are the AI in Synthetic Dawn.

Stellaris is a grand strategy game set in space where you control an entire civilization. With Synthetic Dawn you’re able to become a sentient AI species, and that means understanding and dealing with the unique challenges that come from leading machines in a galaxy full of organics.

The writing is excellent and the art and events are fantastic, but what really shines here is the little things. Playing as machines fundamentally changes the experience of governing in Stellaris in so many small ways that it, essentially, becomes an entirely different gaming experience.

With a mid-game crisis beating at your borders, enemies in every direction, and at least half the galaxy believing your species doesn’t matter, life as an AI civilization is tough. But it’s also full of unique situations. You’re, for example, given the opportunity to purge organics and use their life force as energy to power your growth, à la The Matrix. And, over the course of many games, you’ll find ancient machine intelligences that respond to your species in ways those living creatures could never understand.


Maybe I’m biased, but as someone who gets paid to think about what it’ll be like if AI ever becomes sentient, I find embodying robots at the political, economic, and military level in a game to be extremely thought-provoking.

1. Cortana (Halo)

As much as I enjoy being the machines in Stellaris, Cortana is clearly the winner here. As far as I know, there had never been a video game character that literally manifested in real life before Cortana stepped out of the Xbox and became everybody’s secretary.

Today, Cortana is mostly used as Microsoft’s version of Alexa or Siri. In fact, if you’re on a PC, you’ve probably got the little circle icon at the bottom left of your taskbar right now. You can click it and, just like Master Chief, ask Cortana to help you out.

But, before it was just another AI we mostly use to ask how old celebrities are (I can’t be the only one), Cortana was the heart and soul of the Halo franchise. You might be thinking that was the dude in the big green armor with the gun that looked suspiciously like the one from James Cameron’s “Aliens,” but it was clearly Cortana.

Halo was an early science fiction console shooter, but it looked and played a lot like a modern warfare game. Warthogs really just looked like fancy Humvees, and most of the human weapons, buildings, and vehicles had a pretty modern aesthetic. I can only assume this was to make the humans sympathetic protagonists we could identify with when viewed against the colorful, spiky alien enemies.

Cortana was the far-future plot piece Halo needed to keep players in the science fiction mindset when they were trekking across brown, green, and gray landscapes. And, in some ways, it remains the same in the real world.

While we live in a world where the discourse on AI more and more often concerns our fears over privacy, misuse, and misalignment, Cortana kind of, sort of, reminds us how quickly things have changed in the past few years. We couldn’t always just say “Cortana, what’s the weather like in Amsterdam right now?” and have a pleasant-sounding robot give us the correct answer.

Cortana reminds us that the future is now. And, doubtless, it was instrumental in inspiring the development of the AI systems we use today.

How to tell the difference between AI and BS

Artificial intelligence is as important to modern society as electricity, indoor plumbing, and the internet. In short: it would be extremely difficult to live without it now.

But it’s also, arguably, the most overhyped and misrepresented technology in history — and if you remove cryptocurrency from the argument, there’s no debate.

We’ve been told that AI can (or soon will) predict crimes, drive vehicles without a human backup, and determine the best candidate for a job.

We’ve been warned that AI will replace doctors, lawyers, writers, and workers in just about any field that isn’t computer-related.

Yet none of these fantasies have come to fruition. In the case of predictive policing, hiring AI, and other systems purported to use machine learning to glean insights into the human condition: they’re BS, and they’re dangerous.

AI cannot do anything a human can’t, nor can it do most things a human can.

For example, predictive policing purports to use historical data to determine where crime is likely to take place in the future, so that police can decide where their presence is needed.

But the assumptions driving such systems are faulty at their core. Arrest data records where police have been making arrests, not where crime actually happens. So trying to predict crime density over geography by using arrest data is like trying to determine how a chef’s food might taste by looking at their headshot.
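A toy simulation (all numbers invented) makes the flaw concrete: if next year’s patrol allocation simply follows this year’s arrest counts, an initial skew in policing never corrects itself, even when the underlying crime rates are identical:

```python
# Toy feedback loop: arrests track patrol presence, not crime.
patrol_share = {"district_A": 0.7, "district_B": 0.3}     # initial skew
true_crime_rate = {"district_A": 0.5, "district_B": 0.5}  # actually equal

for year in range(3):
    # Arrests scale with how many officers are present in each district.
    arrests = {d: patrol_share[d] * true_crime_rate[d] for d in patrol_share}
    total = sum(arrests.values())
    # The "prediction" for next year just mirrors this year's arrest data.
    patrol_share = {d: arrests[d] / total for d in arrests}
    print(year, patrol_share)
# The 70/30 skew persists forever, despite identical crime rates.
```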

It’s the same with hiring AI. The question we ask is “who is the best candidate,” but these systems have no way of actually determining that.

It might seem difficult to digest: there are tens of thousands of legitimate businesses peddling AI software, and a significant portion of them are pushing BS.

So what makes us right and them wrong? Well, let’s take a look at some examples so we can figure out how to separate the wheat from the chaff.

Hiring AI is a good place to start. There is no formula for hiring the perfect employee. These systems either take the same data available to humans and find candidates whose files most match those of people who’ve been successful in the past (thus perpetuating any existing or historical problems in the hiring process and defeating the point of the AI), or they use unrelated data, such as “emotion detection” or similar pseudoscience-based quackery, to do the same feckless thing.
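As a minimal sketch (invented toy data, not any vendor’s actual product), here is what “find candidates who resemble our past hires” learns when past hiring favored one group:

```python
# Toy resume screener: features are [years_experience, attended_school_X].
from sklearn.linear_model import LogisticRegression

# Historical outcomes skewed toward school X, independent of experience.
X_train = [
    [2, 1], [3, 1], [1, 1], [4, 1],  # school-X candidates
    [2, 0], [3, 0], [5, 0], [4, 0],  # everyone else
]
y_train = [1, 1, 1, 1, 0, 0, 1, 0]   # 1 = hired, 0 = rejected

model = LogisticRegression().fit(X_train, y_train)

# Two new candidates, identical except for the school feature.
for candidate in ([3, 1], [3, 0]):
    p = model.predict_proba([candidate])[0][1]
    print(candidate, f"predicted 'hireability': {p:.2f}")
# The model ranks the school-X candidate higher: it has learned the
# historical preference, not anything about job performance.
```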

The bottom line is that AI can’t determine more about a candidate than a human can. At best, businesses using hiring AI are being swindled. At worst, they’re intentionally using systems they know to be anti-diversity mechanisms.

The simplest way to determine if AI is BS is to understand what problem it’s attempting to solve. Next, you just need to determine if that problem can be solved by moving data around.

Can AI determine recidivism rates in former felons? Yes. It can take the same data as a human and glean what percentage of inmates are likely to commit crimes again.

But it cannot determine which humans are likely to commit crimes again because that would require magical psychic powers.
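A minimal sketch (made-up records) shows the gap between the two questions:

```python
# Made-up historical records: (person_id, reoffended_within_3_years).
records = [("A", True), ("B", False), ("C", True), ("D", False), ("E", False)]

# Aggregate question: answerable from the data alone.
rate = sum(reoffended for _, reoffended in records) / len(records)
print(f"Historical recidivism rate: {rate:.0%}")  # 40%

# Individual question: the best the data supports is assigning every
# similar person that same 40% -- a group statistic in disguise, not
# knowledge of what any specific individual will actually do.
```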

Can AI predict crime? Sure, but only in a closed system where ground-truth crime data is available. In other words, we’d need to know about all the crimes that happen without the cops being involved, not just the tiny percentage where someone actually got caught.

But what about self-driving cars, robot surgeons, and replacing writers?

These are all strictly within the domain of future tech. Self-driving cars are exactly as close today as they were in 2014, when deep learning really started to take off.

We’re in a lingering state of being “a couple of years away” from level 5 autonomy that could go on for decades.

And that’s because AI isn’t the right solution, at least not the real AI that exists today. If we truly want cars to drive themselves, we need a digital rail system within which to constrain the vehicle and ensure all other vehicles in proximity operate together.

In other words: people are too chaotic for a rules-based learner (AI) to adapt to using only sensors and modern machine learning techniques.

Once again, we realize that asking AI to safely drive a car in today’s typical traffic environments is, in effect, giving it a task that most humans can’t complete. What is a good driver? Someone who is never at fault for an accident the entire time they drive?

This is also why lawyers and writers won’t be replaced any time soon. AI can’t explain why a crime against a defenseless child might merit harsher punishment than one against an adult. And it certainly can’t do with words what Herman Melville or Emily Dickinson did.

Where we find AI that isn’t BS, it’s almost always performing a task so boring that, despite there being value in it, it would be a waste of time for a human to do it.

Take Spotify or Netflix for example. Both companies could hire a human to write down what every user listens to or watches and then organize all the data into informative piles. But there are hundreds of millions of subscribers involved. It would take thousands of years for humans to sort through all the data from a single day’s sitewide usage. So they make AI systems to do it faster.
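A minimal sketch (invented play events, not either company’s actual pipeline) of that boring-but-valuable sorting work:

```python
from collections import Counter, defaultdict

# Imagine hundreds of millions of these events arriving every day.
plays = [
    ("u1", "jazz"), ("u1", "jazz"), ("u1", "rock"),
    ("u2", "pop"), ("u2", "pop"), ("u2", "metal"),
]

# Tally every user's listening history into a profile.
profiles = defaultdict(Counter)
for user, genre in plays:
    profiles[user][genre] += 1

# Surface each user's top genre: trivial per user, impossible for
# humans to do by hand across an entire platform every day.
for user, counts in profiles.items():
    print(user, counts.most_common(1)[0][0])  # u1 jazz, u2 pop
```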

It’s the same with medical AI. AI that uses image recognition to notice anomalies or perform microsurgery is incredibly important to the medical community. But these systems are only capable of very specific, narrow tasks. The idea of a robotic general surgeon is BS. We’re nowhere close to a machine that can perform a standard vasectomy, sterilize itself, and then perform arthroscopic knee surgery.

We’re also nowhere close to a machine that can walk into your house and make you a cup of coffee.

Other AI BS to be leery of:

Fake news detectors. Not only do these not work, but even if they did, what difference would it make? AI can’t determine facts in real time, so these systems either search for human-curated keywords and phrases or simply compare the website publishing the questionable article against a human-curated list of bad news actors (a minimal sketch of this pattern follows below). Furthermore, detecting fake news isn’t the hard part. Much like pornography, most of us know it when we see it. Unlike porn, however, nobody in big tech or the publishing industry seems interested in censoring fake news.

Gaydar: we won’t rehash this, but AI cannot determine anything about human sexuality using image recognition. In fact, all facial recognition software is BS, with the sole caveat being localized systems trained specifically on the faces they’re meant to detect. Systems trained to detect faces in the wild against mass datasets, especially those associated with criminal activity, are inherently biased to the point of being faulty at conception.
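As promised above, a minimal sketch of what a typical “fake news detector” often boils down to (domains and phrases invented here): string matching against human-curated lists, with no fact ever checked:

```python
from urllib.parse import urlparse

# Both lists are maintained by humans, not learned "understanding."
BAD_DOMAINS = {"totally-real-news.example", "daily-truth.example"}
FLAG_PHRASES = ("doctors hate", "what they don't want you to know")

def looks_fake(url: str, headline: str) -> bool:
    # Flag known bad publishers...
    if urlparse(url).netloc.lower() in BAD_DOMAINS:
        return True
    # ...or headlines containing curated clickbait phrases.
    return any(phrase in headline.lower() for phrase in FLAG_PHRASES)

print(looks_fake("https://daily-truth.example/story", "Shock cure found"))  # True
print(looks_fake("https://example.org/news", "One trick doctors hate"))     # True
```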

Basically, be wary of any AI system purported to judge or rank humans against datasets. AI has no insight into the human condition, and that’s unlikely to change any time in the near future.
