Democrats demand answers from Google on diversity training cuts

Democrats in the US House of Representatives have demanded answers from Google on reported cuts to its diversity and inclusion programs — particularly for staff working in AI.

Ten Democrats led by Congresswoman Robin Kelly of Illinois sent a letter to Google CEO Sundar Pichai asking which initiatives had been scaled back, and what diversity training Google now provides to its global workforce.

The letter was written in response to an NBC News investigation alleging that Google had scaled back its diversity programs to avoid accusations of anti-conservative bias.

Citing interviews with six current and former employees, NBC News found that the teams responsible for those initiatives had been downsized, and that staff had been discouraged from even using the word “diversity” at work.

The House members specifically asked whether Google was providing additional bias training for staff working in AI, a field with a long track record of perpetuating gender and racial biases — including at Google. In 2015, a software engineer found that the company’s image recognition algorithms had labeled his black friends as gorillas.

“A company that is a leader in artificial intelligence should be acutely aware of the harm that bias can have on underrepresented populations,” read the letter.

Google denied the accusations. “Diversity, equity, and inclusion remains a company-wide commitment and our programs are continuing to scale up,” said a spokesperson for the company.

Google’s struggles with diversity

Diversity has been a lightning rod issue for Google since 2017, when engineer James Damore was fired for circulating an internal memo questioning the company’s policies.

Damore’s claims that Google’s gender gap was partly due to “biological” differences between men and women made him a poster child for conservatives and a bête noire for liberals — and brought mainstream attention to inclusion in Silicon Valley.

Google’s efforts to build a more representative workforce have attracted criticism from both supporters and opponents of diversity initiatives. Earlier this month, the search giant released its seventh annual diversity report, revealing a minor uptick in representation for women and people of color, but a company that remained disproportionately white, Asian, and male.

Melonie Parker, Google’s chief diversity officer, said the company isn’t cutting its diversity training but “maturing our programs to make sure we’re building our capability” — which sounds more like spin than substance.

Instead, it looks like Google is reducing its focus on diversity in the midst of a pandemic that is already disproportionately affecting those on the margins of society.

Smart devices can now read your mood and mind — they shouldn’t without consent

While I was waiting to board a plane on a recent trip out of town, an airline staff member asked me to momentarily take off my face mask so that facial recognition technology could check me in and expedite the boarding process. I was taken aback by the bluntness of the request — I did not want to take my mask off in such a crowded space, and I had not given permission to have my face scanned.

While this encounter felt like an invasion of my privacy, it also got me thinking about other biometric recognition devices which, for better or worse, are already integrated into our everyday lives.

There are obvious examples: fingerprint scanners that unlock doors and facial recognition that allows payment through a phone. But there are other devices that do more than read an image — they can literally read people’s minds.

Humans and machines

My work explores the dynamics of how humans interact with machines, and how such interactions affect the cognitive state of the human operator.

Researchers in human factors engineering have recently focused their attention on the development of machine vision systems. These systems sense overt biological signals — for example, the direction of eye gaze or heart rate — to estimate cognitive states like distraction or fatigue.
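
To make that idea concrete, here is a minimal Python sketch of one widely used measure, PERCLOS: the fraction of recent video frames in which the eyes are closed. This is an illustrative toy, not any production driver-monitoring system; the threshold values are assumptions, and a real system would derive the eye-aspect-ratio input from facial landmarks rather than receive it directly.

```python
from collections import deque

# Illustrative parameters (assumed values, not from any real system)
PERCLOS_WINDOW = 900      # frames to track, e.g. 30 seconds at 30 fps
EYE_CLOSED_BELOW = 0.2    # eye-aspect-ratio under which we call the eye "closed"
FATIGUE_THRESHOLD = 0.15  # fraction of closed frames that flags fatigue

recent_frames = deque(maxlen=PERCLOS_WINDOW)

def update(eye_aspect_ratio: float) -> bool:
    """Feed one frame's eye-aspect-ratio; return True if fatigue is suspected.

    PERCLOS (percentage of eyelid closure) is a common proxy for drowsiness:
    the share of recent frames in which the eyes are closed.
    """
    recent_frames.append(eye_aspect_ratio < EYE_CLOSED_BELOW)
    perclos = sum(recent_frames) / len(recent_frames)
    return perclos >= FATIGUE_THRESHOLD
```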

A case can be made that these devices hold undeniable benefits in certain situations, such as driving. Human factors like distracted driving, which ranks among the top contributors to road fatalities, could be all but eliminated following an adequate introduction of these systems. Proposals to mandate the use of these devices are being introduced worldwide.

A different yet equally important application is the one proposed by none other than Elon Musk’s Neuralink Corporation. In a December 2021 appearance at the Wall Street Journal’s CEO Council Summit, Musk portrayed a very near future in which brain implants will help patients suffering from paralysis regain control of their limbs.

While the concept and, in fact, the reality of brain-computer interfaces have existed since the 1960s, the thought of an implanted device having direct access to the brain is disconcerting, to say the least.

It’s not only these devices’ ability to create a direct bridge between the human brain and the outside world that frightens me: what will happen to the data being harvested, and who will have access to it?

Cognitive freedom

This opens up the question of what neuroethics — the body of interdisciplinary studies exploring the ethical issues related to neuroscience — refers to as cognitive freedom.

Italian cognitive scientist Andrea Lavazza defines cognitive freedom as “the possibility of elaborating one’s own thoughts autonomously, without interference, and of revealing them totally, partially or not at all on the basis of a personal decision.” Cognitive freedom is brought to the forefront when technology has reached a point where it can monitor or even manipulate mental states as a means of cognitive enhancement for professionals like physicians or pilots.

Or mind control for convicted criminals — Lavazza suggests that “it would not be so strange for the criminal system to require a person convicted of a violent crime to undergo [a brain implant] so as to control any new aggressive impulses.”

The ramifications that the development and deployment of biological sensors and devices like brain-computer interfaces have on our lives are at the center of the debate, not only in neuroethics, which is witnessing the formation of neuro-rights initiatives worldwide, but also across the broader civic sphere, where it is being debated whether actions undertaken with an implant ought to be governed by the same laws that rule conventional bodily movements.

Personally, I will need to take some more time weighing the pros and cons of biological sensors and devices in my everyday life. And if I am asked for permission to have my face scanned to expedite boarding a plane, I will respond with: “Let’s do it the old-fashioned way, I don’t mind waiting.”

Article by Francesco Biondi, Associate Professor, Human Systems Labs, University of Windsor

This article is republished from The Conversation under a Creative Commons license. Read the original article.

UK to use existing NHS app as vaccine passport for travel overseas

The UK government is retooling a National Health Service app as a vaccine passport for international travel, Transport Secretary Grant Shapps announced this morning.

The app will provide proof that people have been vaccinated or have received a negative test for the virus. Shapps told Sky News that the system is already being developed.

The system will be based on the NHS app used to book doctor’s appointments, rather than the NHS COVID-19 contact-tracing app. Shapps said he was working with partners across the world to ensure the system is internationally recognized.
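
For readers curious how such digital proof generally works, the common pattern is a credential signed by a trusted issuer that any verifier can check. The Python sketch below illustrates only that general pattern and assumes nothing about the NHS app’s actual design; real schemes (the EU Digital COVID Certificate, for instance) use public-key signatures rather than the shared-secret HMAC used here for brevity, and every name and key below is hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; real issuers sign with a private key instead
ISSUER_KEY = b"demo-signing-key"

def issue_credential(name: str, status: str) -> dict:
    """Issuer side: sign a vaccination-status payload."""
    payload = json.dumps({"name": name, "status": status}, sort_keys=True)
    signature = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_credential(credential: dict) -> bool:
    """Verifier side (e.g., at a border): recompute and compare the signature."""
    expected = hmac.new(ISSUER_KEY, credential["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue_credential("A. Traveller", "fully vaccinated")
print(verify_credential(cred))  # True: payload has not been tampered with
```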

The scheme will be welcomed by many Brits planning a trip abroad when international travel is due to reopen on May 17, but critics warn that it could put people’s civil liberties and privacy at risk.

Attila Tomaschek, a digital privacy expert at ProPrivacy, said the massive stores of personal data could be used beyond the scope of the pandemic.

In the rush to jumpstart international travel, let’s hope the government doesn’t overlook — or embrace — the risks.

