Want to develop ethical AI? Then we need more African voices

Artificial intelligence (AI) was once the stuff of science fiction. But it’s becoming widespread. It is used in mobile phone technology and motor vehicles. It powers tools for agriculture and healthcare.

But concerns have emerged about the accountability of AI and related technologies like machine learning. In December 2020 a computer scientist, Timnit Gebru, was fired from Google’s Ethical AI team. She had previously raised the alarm about the social effects of bias in AI technologies. For instance, in a 2018 paper Gebru and another researcher, Joy Buolamwini, had shown how facial recognition software was less accurate in identifying women and people of color than white men. Biases in training data can have far-reaching and unintended effects.

There is already a substantial body of research about ethics in AI. It highlights the importance of principles to ensure technologies do not simply worsen biases or even introduce new social harms. As the UNESCO draft recommendation on the ethics of AI notes, many frameworks and guidelines have been created in recent years that identify objectives and priorities for ethical AI.

This is certainly a step in the right direction. But it’s also critical to look beyond technical solutions when addressing issues of bias or inclusivity. Biases can enter at the level of who frames the objectives and balances the priorities.

In a recent paper, we argue that inclusivity and diversity also need to operate at the level of identifying values and defining frameworks of what counts as ethical AI in the first place. This is especially pertinent when considering the growth of AI research and machine learning across the African continent.

Context

Research and development of AI and machine learning technologies are growing in African countries. Programs such as Data Science Africa, Data Science Nigeria, and the Deep Learning Indaba with its satellite IndabaX events, which have so far been held in 27 different African countries, illustrate the interest and human investment in the fields.

The potential of AI and related technologies to promote opportunities for growth, development, and democratization in Africa is a key driver of this research.

Yet very few African voices have so far been involved in the international ethical frameworks that aim to guide the research. This might not be a problem if the principles and values in those frameworks have universal application. But it’s not clear that they do.

For instance, the European AI4People framework offers a synthesis of six other ethical frameworks. It identifies respect for autonomy as one of its key principles. This principle has been criticized within the applied ethical field of bioethics. It is seen as failing to do justice to the communitarian values common across Africa, which focus less on the individual and more on community, and may even require that exceptions be made to upholding such a principle so that effective interventions can take place.

Challenges like these – or even acknowledgment that there could be such challenges – are largely absent from the discussions and frameworks for ethical AI.

Just as training data can entrench existing inequalities and injustices, so can a failure to recognize the possibility of diverse sets of values that vary across social, cultural, and political contexts.

Unusable results

In addition, failing to take into account social, cultural, and political contexts can mean that even a seemingly perfect ethical technical solution can be ineffective or misguided once implemented.

For machine learning to be effective at making useful predictions, any learning system needs access to training data. This involves samples of the data of interest: inputs in the form of multiple features or measurements, and outputs which are the labels scientists want to predict. In most cases, both these features and labels require human knowledge of the problem. But a failure to correctly account for the local context could result in underperforming systems.
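To make this concrete, here is a minimal, hypothetical sketch of that setup in Python using scikit-learn. The feature names, numbers, and labels are invented for illustration; the point is simply that a model only ever learns from whatever features and labels humans chose to collect.

```python
# Minimal sketch of the supervised-learning setup described above.
# Feature names and numbers are hypothetical, purely for illustration.
from sklearn.ensemble import RandomForestClassifier

# Inputs: each row is one sample, each column one measured feature
# (e.g. roof reflectance, building footprint area in square metres).
X = [
    [0.82, 120.0],
    [0.35,  45.0],
    [0.78, 150.0],
    [0.30,  40.0],
]
# Outputs: the labels we want the system to learn to predict
# (e.g. 1 = formal structure, 0 = informal structure).
y = [1, 0, 1, 0]

model = RandomForestClassifier(random_state=0)
model.fit(X, y)

# The model can only reflect the features and labels humans chose to
# collect; if these samples all come from one region, predictions for
# areas with different construction materials may be unreliable.
print(model.predict([[0.80, 130.0], [0.33, 50.0]]))
```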

For example, mobile phone call records have been used to estimate population sizes before and after disasters. However, vulnerable populations are less likely to have access to mobile devices. So, this kind of approach could yield results that aren’t useful.
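A toy calculation illustrates the problem. The numbers below are entirely made up, but they show how uneven phone ownership skews a naive scale-up of call-record counts into a population estimate.

```python
# Toy illustration (hypothetical numbers) of how uneven mobile-phone
# ownership biases a population estimate built from call records.
true_population = {"group_a": 50_000, "group_b": 50_000}  # actual residents
phone_ownership = {"group_a": 0.9, "group_b": 0.4}         # share with a phone

# Observed: people who appear in the call records at all
observed = {g: true_population[g] * phone_ownership[g] for g in true_population}

# Naive scale-up using the overall ownership rate hides the gap
overall_rate = sum(observed.values()) / sum(true_population.values())
naive_estimate = {g: round(observed[g] / overall_rate) for g in observed}

print(naive_estimate)
# group_a is over-counted and group_b under-counted, so aid allocated
# from this estimate would miss the group least likely to own a phone.
```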

Similarly, computer vision technologies for identifying different kinds of structures in an area will likely underperform where different construction materials are used. In both of these cases, as we and other colleagues discuss in another recent paper, not accounting for regional differences may have profound effects on anything from the delivery of disaster aid to the performance of autonomous systems.

Going forward

AI technologies must not simply worsen or incorporate the problematic aspects of current human societies.

Being sensitive to and inclusive of different contexts is vital for designing effective technical solutions. It is equally important not to assume that values are universal. Those developing AI need to start including people of different backgrounds: not just in the technical aspects of designing data sets and the like but also in defining the values that can be called upon to frame and set objectives and priorities.

This article by Mary Carman, Lecturer in Philosophy, University of the Witwatersrand, and Benjamin Rosman, Associate Professor in the School of Computer Science and Applied Mathematics, University of the Witwatersrand, is republished from The Conversation under a Creative Commons license. Read the original article.

Everything you need to know about facial recognition in Australia

Facial recognition technology is increasingly being trialled and deployed around Australia. Queensland and Western Australia are reportedly already using real-time facial recognition through CCTV cameras. 7-Eleven Australia is also deploying facial recognition technology in its 700 stores nationwide for what it says is customer feedback.

And Australian police are reportedly using a facial recognition system that allows them to identify members of the public from online photographs.

Facial recognition technology has a somewhat nefarious reputation in some police states and non-democratic countries. It has been used by the police in China to identify anti-Beijing protesters in Hong Kong and monitor members of the Uighur minority in Xinjiang.

With the spread of this technology in Australia and other democratic countries, there are important questions about the legal implications of scanning, storing and sharing facial images.

Use of technology by public entities

The use of facial recognition technology by immigration authorities (for example, in the channels at airports for people with electronic passports) and police departments is authorized by law and therefore subject to public scrutiny through parliamentary processes.

In a positive sign, the government’s proposed identity matching services laws are currently being scrutinized by a parliamentary committee, which will address concerns over data sharing and the potential for people to be incorrectly identified.

Indeed, Australian Human Rights Commissioner Edward Santow recently sounded an alarm over the lack of regulation in this area.

Another specific concern with the legislation is that people’s data could be shared between government agencies and private companies like telcos and banks.

How private operators work

Then there is the use of facial recognition technology by private companies, such as banks, telcos and even 7-Elevens.

Here, the first thing to determine is if the technology is being used on public or private land. A private landowner can do whatever it likes to protect itself, its wares and its occupants so long as it doesn’t break the law (for example, by unlawful restraint or a discriminatory practice).

This would include allowing for the installation of facial recognition cameras and the monitoring of staff and visitors through them.

By contrast, on public land, any decision to deploy such tools must go through a more transparent decision-making process (say, a council meeting) where the public has an opportunity to respond.

This isn’t the case, however, for many “public” properties (such as sports fields, schools, universities, shopping centers and hospitals) that are privately owned or managed. As such, they can be privately secured through the use of guards monitoring CCTV cameras and other technologies.

Facial recognition is not the only surveillance tool available to these private operators. Others include iris and retina scanners, GIS profiling, internet data-mining (which includes “predictive analytics,” that is, building a customer database on the strength of online behaviours), and “neuromarketing” (the use of surveillance tools to capture a consumer’s attributes during purchases).

There’s more. Our technological wizardry also allows the private sector to store and retrieve huge amounts of customer data, including every purchase we make and the price we paid. And the major political parties have compiled extensive private databanks on the makeup of households and likely electoral preferences of their occupants.

Is it any wonder we have started to become a little alarmed by the reach of surveillance and data retention tools in our lives?

What’s currently allowed under the law

The law in this area is new and struggling to keep up with the pace of change. One thing is clear: the law does not prohibit even highly intrusive levels of surveillance by the private sector on private land in the absence of illegal conduct.

The most useful way of reviewing the legal principles in this space is to pose specific questions:

Can visitors be legally photographed and scanned when entering businesses?

The answer is yes where visitors have been warned of the presence of cameras and scanners by the use of signs. Remaining on the premises denotes implied consent to the conditions of entry.

Do people have any recourse if they don’t want their image taken?

No. The law does very little to protect those who may be upset by the obvious presence of a surveillance device on a door, ceiling or wall. The best option for anybody concerned about this is to leave the premises or not enter in the first place.

What about sharing images? Can private operators do whatever they like with them?

No. The sharing of electronic data is limited by what are referred to as the “privacy principles”, which govern the rights and obligations around the collection, use and sharing of personal information. These were extended to the private sector in 2001 by amendments to the Commonwealth Privacy Act 1988.

These privacy principles would certainly prohibit the sharing of images except, for example, if a store was requested by police to hand them over for investigation purposes.

Can private businesses legally store your image?

Yes, private or commercial enterprises can store images of people captured on their cameras in their own databases. A person can ask for the image to be disclosed to them (that is, to confirm it is held by the store and to see it) under the “privacy principles”. Few people would bother, though, since it’s unlikely they would know it even exists.

The privacy principles do, however, require the business to take reasonable steps to destroy the data or image (or ensure there is de-identification) once it is no longer needed.

What if facial recognition technology is used without warnings like signs?

If there is a demonstrable public interest in any type of covert surveillance (for example, to ensure patrons in casino gaming rooms are not cheating, or to ensure public safety in crowded walkways), and there is no evidence of, or potential for, misuse, then the law permits it.

However, it is not legal to film someone covertly unless there is a public interest in doing so.

What does the future hold?

Any change to the laws in this area is a matter for our parliamentarians. They have been slow to respond given the difficulty of determining what is required.

It will not be easy to frame legislation that strikes the right balance between respecting individuals’ rights to privacy and the desires of commercial entities to keep their stock, patrons and staff secure.

In the meantime, there are steps we can all take to safeguard our privacy. If you want to protect your image completely, don’t select a phone that switches on when you look at it, and don’t get a passport.

And if certain businesses want to scan your face when you enter their premises, give them a wide berth, and your feedback.

This article is republished from The Conversation by Rick Sarre, Adjunct Professor of Law and Criminal Justice, University of South Australia, under a Creative Commons license. Read the original article.

The US Army is developing a nightmarish thermal facial recognition system

The US Army just took a giant step toward developing killer robots that can see and identify faces in the dark.

DEVCOM, the US Army’s corporate research department, last week published a pre-print paper documenting the development of an image database for training AI to perform facial recognition using thermal images.

Why this matters: Robots can use night vision optics to effectively see in the dark, but to date there’s been no method by which they can be trained to identify surveillance targets using only thermal imagery. This database, made up of hundreds of thousands of images consisting of regular light pictures of people and their corresponding thermal images, aims to change that.

How it works: Much like any other facial recognition system, an AI would be trained to categorize images using a specific number of parameters. The AI doesn’t care whether it’s given pictures of faces shot in natural light or thermal images; it just needs copious amounts of data to get “better” at recognition. This database is, as far as we know, the largest to include thermal images. But with fewer than 600,000 total images and only 395 subjects, it’s actually relatively small compared to standard facial recognition databases.
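For illustration, here is a hypothetical sketch in PyTorch of the kind of training step such a system relies on. The network, tensor shapes, and data are invented and nothing here comes from the DEVCOM paper; it simply shows that the model treats thermal face crops as ordinary single-channel images paired with identity labels, which is why the size and coverage of the database matter so much.

```python
# Hypothetical sketch of training an identity classifier on thermal
# face crops. Shapes, architecture, and data are made up for illustration.
import torch
import torch.nn as nn

num_subjects = 395                     # identities reported for the dataset
batch = torch.rand(8, 1, 112, 112)     # fake single-channel thermal crops
labels = torch.randint(0, num_subjects, (8,))  # fake identity labels

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, num_subjects),       # one output score per identity
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One gradient step: the network never "knows" the images are thermal,
# it only sees pixel values and labels, so data volume and subject
# coverage largely determine how well it can recognize anyone.
optimizer.zero_grad()
loss = loss_fn(model(batch), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```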


This lack of comprehensive data means a system trained on it simply wouldn’t be very good at identifying faces. Current state-of-the-art facial recognition performs poorly at identifying anything other than white male faces, and thermal imagery contains less uniquely identifiable data than traditionally lit images.

The DEVCOM researchers acknowledge these drawbacks in their paper’s conclusion.

Quick take: The real problem is that the US government has shown time and time again it’s willing to use facial recognition software that doesn’t work very well. In theory, this could lead to better combat control in battlefield scenarios, but in execution this is more likely to result in the death of innocent black and brown people via police or predator drones using it to identify the wrong suspect in the dark.

H/t: Jack Clark, Import AI
