US banks turn to AI to tell homeless people to go away — along with fraud prevention and stuff

Banks have long embraced surveillance systems to prevent robbery. But they’re also using the technology to monitor customers, workers, and homeless people.

Several US banking giants are implementing AI cameras to analyze customer preferences, track what staff are doing, and observe activities around their premises, Reuters reports.

The tools are being used for a variety of purposes. Wells Fargo is leveraging the tech to prevent fraud, while City National plans to deploy facial recognition near ATMs as an authentication method.

JPMorgan, meanwhile, has been using computer vision to analyze archived footage of customer behavior. Its early analysis found that men tend to arrive before or after lunch, while women are more likely to visit in mid-afternoon.

One unnamed bank is using AI to arrange its layouts more ergonomically, while another is monitoring homeless people setting up tents at drive-through ATMs. An executive told Reuters that staff can play an audio recording “politely asking” the loiterers to leave. Sounds delightful.

All these new applications of AI come amid growing concerns about AI-powered surveillance.

Biometric scans can encroach on democratic freedoms, and facial recognition is notorious for misidentifying people of color, women, and trans people.

Critics have also noted that consumer monitoring can lead to income and racial discrimination. In 2020, the drug store chain Rite Aid shut down its facial recognition system after it was found to be mostly installed in lower-income, non-white neighborhoods.

Bank executives told Reuters that they were sensitive to these issues, but a backlash from customers and staff could stall their plans. Their deployments will also be restricted by a growing range of local laws.

A number of US cities have recently prohibited the use of facial recognition, including Portland, which last year banned the tech in all privately-owned places accessible to the public. The Oregon Bankers Association has asked for an exemption, but their request was rejected.

Still, in most places in the US, banks are free to roll out AI monitoring tools. It’s another step in the sleepwalk towards surveillance capitalism.


Why can’t Google’s algorithms find any good news for queer people?

If you’re interested in reading about the global suffering of queer people I’ve got just the place for you. It’s called “Google News.”

Today’s actually an off day for the product. Typically, if you take a gander at the “LGBTQ+” topic in Google News you’ll find that about 90% of the stories surfaced by the platform’s algorithms are negative.

But today is “Coming Out Day,” so there’s a handful of non-negative pieces in the feed right now taking up slots that are typically filled with negative ones.

As of the time of this article’s publishing, the stories that surface in the LGBTQ+ topic break down as follows:

52 total in feed

39 clearly negative

12 not directly negative

1 entirely unrelated

The negative pieces make up about 75% of the feed. That’s a problem. And it’s a really simple problem to solve. But Google has no interest in doing so because the solution is to replace the algorithm with human curators.
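For readers who want to check the math, here’s a minimal back-of-the-envelope sketch of that arithmetic in Python, using the counts listed above (the dictionary labels are just shorthand for the categories as described in this article):

# Back-of-the-envelope check of the feed breakdown reported above.
# The counts are the ones cited in this article; the labels are shorthand.
counts = {
    "clearly negative": 39,
    "not directly negative": 12,
    "entirely unrelated": 1,
}

total = sum(counts.values())                     # 52, the "total in feed" figure
negative_share = counts["clearly negative"] / total

print(f"Total stories: {total}")                 # Total stories: 52
print(f"Negative share: {negative_share:.0%}")   # Negative share: 75%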

And that’s not just because the algorithms in use are apparently biased towards negative news pieces concerning the LGBTQ+ community; it’s because they’re just bad at curating news.

For example, the Arrowhead Pride sports blog’s readers were probably quite interested to learn that Willie Gay Jr. was going to be active for Sunday’s game against the Bills. But this news wasn’t useful to the LGBTQ+ community at large, and it certainly doesn’t belong in the “Pride” section; the story presumably surfaced only because the linebacker’s surname is Gay and the outlet’s name contains “Pride.” That’s a mistake no human curator would have made.

Yesterday it was the top story in the Pride section. Today, it’s been replaced by a news piece about someone taking a crap on a Pride flag. In fact, all of the stories displayed in the section are negative. So much for Pride.

Again, this isn’t something that would happen with a human curator. Bigots probably think it’s hilarious though.

It would be understandable if we lived in a world where there simply wasn’t any good news related to queer people. But the idea that no such news exists is ridiculous, and easy to refute.

Take a look at PinkNews, a popular queer news publication: its front page is full of positive news pieces. There are some negative ones too. That’s how balanced coverage works.

Unfortunately, Google’s algorithms aren’t capable of finding balance or surfacing relevant pieces. They surface what they’ve been trained to look for.

Google’s workforce is made up mostly of straight, cisgender, white men, and its products work demonstrably better for people fitting that description than for those who don’t.

Nearly half of the people in the US use an Android device, and most of those devices come with Google News preinstalled. That means Google News is among the world’s largest news aggregators.

And the algorithm is feeding everyone almost exclusively negative stories related to the LGBTQ+ community without apparent or competent human oversight. That’s bound to have some consequences.

The biggest problem is that Google is an AI company. And the answer to every problem it faces is always going to be: more AI.

And, because of that, Google’s become old and out of touch. It’s still operating like an early 2000s-era big business that’s beholden to outdated retro-futuristic ideas on how powerful algorithms will be in “just a few more years.”

The reality is that Google’s been developing these algorithms for longer than it takes to educate a doctor, and they’re still functionally stupider than a 5th grader.

AI isn’t the future. It’s just a tool. The future is people.

We reached out to Google for comment. We’ll update this piece if we get a response.

Related: Google News thinks I’m the queerest AI journalist on Earth

Update 11 October 12:38 PST: Google responded to our request for comment. A Google spokesperson provided Neural with a statement in response.

Who are the nuns taking on Microsoft?

Update (December 4, 2021): Microsoft’s Annual Shareholders Meeting, which is mentioned in this piece, took place after we published this story, and the votes on the proposals presented there have been counted. We’ve updated this story with the results.

Microsoft has survived brutal battles against Apple and Google, but the company now faces a more formidable foe: the Sisters of St. Joseph of Peace.

The congregation is leading a group of Microsoft investors who want to hold the firm accountable for its tech. The campaigners urged Microsoft shareholders to vote for two proposals at the company’s annual shareholders’ meeting on November 30.

The sisters may not look like your stereotypical digital activists, but they’re more tech-savvy than you might expect. Oh, and at said meeting, 38% of Microsoft’s shareholders supported the first proposal, which requested that the tech giant report on the alignment between its lobbying activities and company policy.

Who are the Sisters of St. Joseph of Peace?

The Roman Catholic order was founded in 1884 in Nottingham, England, by Margaret Anna Cusack, and has a history of promoting social justice as a way to peace. The congregation currently serves in the US, UK, and Haiti.

The sisters are also seasoned shareholder advocates. This year, they’ve zeroed in on Microsoft’s lobbying efforts.

Sister Susan Francois has been the order’s most prominent campaigner.

The assistant congregation leader was once an election official in Portland, Oregon. In her blog, Sister Susan says the 9/11 terrorist attacks sowed the seeds for her “transition from bureaucrat to Gen-X nun.”

“As shareholders, as tech workers, as campaigners for justice, we can and must hold these companies accountable,” she said in a campaign video. “New innovation should support human dignity and a fair and just society, not magnify division and discrimination.”

Sister Susan is also a prolific user of Twitter. In 2018, she was interviewed by The New York Times after tweeting a daily prayer for Donald Trump for more than 650 consecutive days.

The beef with Microsoft

As racial justice protests swept across the US last year, Microsoft pledged to restrict sales of facial recognition tech to police. However, the firm made no mention of other contentious government clients, such as ICE and authoritarian regimes.

The company is also attempting to shape the regulations that govern it. Microsoft lobbied hard for facial recognition laws that were adopted in Washington last year, which is unsurprising given that the bill was sponsored by one of its own employees.

“Despite what it says publicly, Microsoft is spending its $9.5 million annual lobbying budget on fighting a bill that would ban discriminatory facial recognition,” said Sister Susan. “In fact, it lobbies states to pass laws that would increase police use of dangerous surveillance tech.”

The Sisters of St. Joseph of Peace had previously asked Microsoft for a report on how its lobbying aligns with its stated principles, The Hill reported in June. They have now called on the company’s shareholders to hold the firm accountable.

Good luck, sisters. Whatever the vote, you’ve already raised awareness of Microsoft’s facial recognition lobbying — and countered some stereotypes about nuns in the process.

HT: Protocol
