Facebook removes Chinese network of fake accounts that used AI-generated faces

Facebook has taken down a China-based network of fake accounts that used AI-generated faces to spread government propaganda across the platform, according to research by analytics firm Graphika.

The social network announced on Tuesday that it had removed 155 accounts, 11 Pages, nine Groups, and six Instagram accounts for “coordinated inauthentic behavior on behalf of a foreign or government entity.”

The fake accounts predominantly focused on Southeast Asia, where they posted about events including Beijing’s interests in the South China Sea and developments in Hong Kong. But a smaller cluster posing as Americans posted content that both supported and criticized then-presidential candidates Pete Buttigieg, Joe Biden, and Donald Trump.

Before the takedown, Facebook asked Graphika to analyze the data. The Pentagon-linked firm found that some accounts had stolen their profile photos from real people, which can be exposed as inauthentic through reverse image searches.

Others tried to evade detection by using generative adversarial networks (GANs) to create fake pictures, a tactic that Graphika says “has exploded in the last year.”

The company found 12 profile pictures that it suspects were AI-generated, due to their distorted backgrounds and asymmetrical peripheral features such as ears, glasses, and hair.

Graphika spotted some of these signals by making the images semi-transparent and then superimposing them on top of each other to expose how closely the facial features aligned. The firm suspects that some of the photos were cropped or included stickers to disguise the fact that they’re fake.

The findings mark the second time this month that a network of fake Facebook accounts has reportedly used AI-generated profile pictures. Expect the tactic to play a growing role in future information operations.


Maker of Sophia the robot plans to sell droids to people seeking company during COVID

The maker of Sophia the robot plans to sell droids to people craving company during the COVID-19 pandemic.

Hanson Robotics aims to roll out four models — including Sophia — in the first half of 2021.

“Sophia and Hanson robots are unique by being so human-like,” CEO David Hanson told Reuters. “That can be so useful during these times where people are terribly lonely and socially isolated.”

The Hong Kong-based firm aims to sell “thousands” of the droids this year. Hanson believes they could help fight the pandemic, which has accelerated demand for automation and robotics.

Sophia is arguably the planet’s most famous real-world robot, but it’s also attracted controversy.

Saudi Arabia notoriously granted citizenship to Sophia in 2017, giving the android more rights than millions of humans in the country.

Critics noted that Sophia wasn’t obliged to follow the same rules as real women in Saudi Arabia, and had received citizenship before many migrant workers who’d spent their whole lives in the kingdom.

Hanson has also been mocked for claiming that Sophia is “basically alive,” when it’s essentially just a chatbot in a robotic body.

In 2018, Facebook’s chief AI scientist, Yann LeCun, described Sophia as “complete bullshit” and slated the media for promoting the “Potemkin AI.”

Sophia’s official Twitter account responded by tweeting ironically: “I do not pretend to be who I am not.”

LeCun called the tweet “more BS from the (human) puppeteers behind Sophia,” saying it deceived people into thinking the robot is intelligent.

The humanoid could nonetheless provide some animatronic company to people suffering through lockdowns and restricted human interactions.

But robosexuals should take note: Sophia isn’t programmed to perform sex acts.

A beginner’s guide to ensemble learning

The principle of “the wisdom of the crowd” shows that a large group of people with average knowledge of a topic can provide reliable answers to problems such as estimating quantities, spatial reasoning, and general-knowledge questions. The aggregate results cancel out the noise and can often be superior to those of highly knowledgeable experts. The same rule can apply to artificial intelligence applications that rely on machine learning, the branch of AI that predicts outcomes based on mathematical models.

In machine learning, crowd wisdom is achieved through ensemble learning. For many problems, the result obtained from an ensemble, a combination of machine learning models, can be more accurate than any single member of the group.

How does ensemble learning work?

Say you want to develop a machine learning model that predicts inventory stock orders for your company based on historical data you have gathered from previous years. You train four machine learning models using different algorithms: linear regression, support vector machine, a regression decision tree, and a basic artificial neural network. But even after much tweaking and configuration, none of them achieves your desired 95 percent prediction accuracy. These machine learning models are called “weak learners” because they fail to converge to the desired level.

But weak doesn’t mean useless. You can combine them into an ensemble. For each new prediction, you run your input data through all four models and then average the results. When you examine the ensemble’s output, you find that it reaches 96 percent accuracy, which is more than acceptable.
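
As a rough sketch, here is what such an averaging ensemble might look like with scikit-learn. The dataset is synthetic and the hyperparameters are placeholders, not the tuned models from the scenario above:

```python
# A minimal averaging ensemble: four different algorithms trained on the
# same data, with their predictions averaged at inference time.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Synthetic stand-in for the historical inventory data.
X, y = make_regression(n_samples=1000, n_features=10, noise=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Four "weak learners," each using a different algorithm.
models = [
    LinearRegression(),
    SVR(),
    DecisionTreeRegressor(max_depth=5),
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000),
]
for model in models:
    model.fit(X_train, y_train)

# Ensemble prediction: the average of the four individual predictions.
predictions = np.mean([m.predict(X_test) for m in models], axis=0)
print(r2_score(y_test, predictions))
```

scikit-learn’s VotingRegressor wraps the same averaging logic in a single estimator object, if you prefer that over averaging by hand.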

The reason ensemble learning is effective is that your machine learning models work differently. Each model might perform well on some data and less accurately on other data. When you combine all of them, they cancel out each other’s weaknesses.

You can apply ensemble methods to both regression problems, like the inventory prediction example we just saw, and classification problems, such as determining whether a picture contains a certain object.

Ensemble methods

For a machine learning ensemble to work, you must make sure your models are independent of each other (or as nearly independent as possible). One way to do this is to create your ensemble from different algorithms, as in the above example.

Another ensemble method is to use instances of the same machine learning algorithms and train them on different data sets. For instance, you can create an ensemble composed of 12 linear regression models, each trained on a subset of your training data.

There are two key methods for sampling data from your training set. “Bootstrap aggregation,” aka “bagging,” takes random samples from the training set “with replacement.” The other method, “pasting,” draws samples “without replacement.”

To understand the difference between the sampling methods, here’s an example. Say you have a training set with 10,000 samples and you want to train each machine learning model in your ensemble with 9,000 samples. If you’re using bagging, for each of your machine learning models, you take the following steps:

1. Draw a random sample from the training set.
2. Add a copy of the sample to the model’s training subset.
3. Return the sample to the original training set.
4. Repeat until the model’s subset contains 9,000 samples.

When using pasting, you go through the same process, with the difference that samples are not returned to the training set after being drawn. Consequently, the same sample might appear in a model’s training subset several times when using bagging but only once when using pasting.
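
Here’s a minimal sketch of both sampling methods using scikit-learn’s BaggingRegressor, which implements bagging and pasting through its bootstrap flag. The numbers mirror the example above; the training data is assumed to be loaded elsewhere:

```python
# Bagging vs. pasting with scikit-learn. The only difference is the
# bootstrap flag: True samples with replacement, False without.
# (In scikit-learn versions before 1.2, the keyword is base_estimator.)
from sklearn.ensemble import BaggingRegressor
from sklearn.linear_model import LinearRegression

# Bagging: 12 linear regression models, each trained on 9,000 samples
# drawn WITH replacement from the 10,000-sample training set.
bagging = BaggingRegressor(
    estimator=LinearRegression(),
    n_estimators=12,
    max_samples=9000,
    bootstrap=True,
)

# Pasting: same setup, but samples are drawn WITHOUT replacement.
pasting = BaggingRegressor(
    estimator=LinearRegression(),
    n_estimators=12,
    max_samples=9000,
    bootstrap=False,
)

# bagging.fit(X_train, y_train)  # X_train assumed to hold 10,000+ rows
# pasting.fit(X_train, y_train)
```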

After training all your machine learning models, you’ll have to choose an aggregation method. If you’re tackling a classification problem, the usual aggregation method is the “statistical mode,” the class predicted most often by the models. In regression problems, ensembles usually use the average of the predictions made by the models.
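
A bare-bones illustration of the two aggregation rules, using made-up predictions (each row of the arrays below stands for one model’s outputs):

```python
# Mode for classification, mean for regression.
import numpy as np
from scipy import stats  # keepdims requires SciPy >= 1.9

# Classification: three models vote on four examples; take the mode.
votes = np.array([[0, 1, 1, 2],
                  [0, 1, 2, 2],
                  [1, 1, 2, 2]])
majority = stats.mode(votes, axis=0, keepdims=False).mode  # -> [0, 1, 2, 2]

# Regression: average the models' numeric predictions.
estimates = np.array([[10.2, 8.1], [9.8, 8.5], [10.0, 8.3]])
mean_prediction = estimates.mean(axis=0)  # -> [10.0, 8.3]
```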

Boosting methods

Another popular ensemble technique is “boosting.” In contrast to classic ensemble methods, where machine learning models are trained in parallel, boosting methods train them sequentially, with each new model building on the previous one and correcting its errors.

AdaBoost (short for “adaptive boosting”), one of the more popular boosting methods, improves the accuracy of ensemble models by adapting new models to the mistakes of previous ones. After training your first machine learning model, you single out the training examples misclassified or wrongly predicted by the model. When training the next model, you put more emphasis on these examples. This results in a machine learning model that performs better where the previous one failed. The process repeats for as many models as you want to add to the ensemble. The final ensemble contains several machine learning models of different accuracies, which together can provide better accuracy. In boosted ensembles, the output of each model is given a weight proportionate to its accuracy.
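
A short AdaBoost sketch with scikit-learn’s AdaBoostClassifier, which implements this reweighting scheme; the data is synthetic and the hyperparameters are illustrative:

```python
# AdaBoost: sequentially trained weak learners, each focusing on the
# examples the previous ones got wrong, with accuracy-weighted outputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=42)

ada = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # depth-1 "stumps" are the classic weak learner
    n_estimators=100,
    learning_rate=0.5,
    random_state=42,
)
ada.fit(X, y)
```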

Random forests

One area where ensemble learning is very popular is decision trees, a machine learning algorithm that is very useful because of its flexibility and interpretability. Decision trees can make predictions on complex problems, and they can also trace back their outputs to a series of very clear steps.

The problem with decision trees is that they don’t create smooth boundaries between different classes unless you break them down into too many branches, in which case they become prone to “overfitting,” a problem that occurs when a machine learning model performs very well on training data but poorly on novel examples from the real world.

This is a problem that can be solved through ensemble learning. Random forests are machine learning ensembles composed of multiple decision trees (hence the name “forest”). Using random forests ensures that a machine learning model does not get caught up in the specific confines of a single decision tree.

Random forests have their own independent implementation in Python machine learning libraries such as scikit-learn.
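
For example, a minimal random forest in scikit-learn might look like this, with synthetic data standing in for a real classification problem:

```python
# A basic random forest classifier: an ensemble of decision trees,
# each trained on a random subset of the data and features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))  # accuracy on held-out data
```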

Challenges of ensemble learning

While ensemble learning is a very powerful tool, it also has some tradeoffs.

Using ensembles means you must spend more time and resources on training your machine learning models. For instance, a random forest with 500 trees provides much better results than a single decision tree, but it also takes much more time to train. Running ensemble models can also become problematic if the algorithms you use require a lot of memory.

Another problem with ensemble learning is explainability. While adding new models to an ensemble can improve its overall accuracy, it makes it harder to investigate the decisions made by the AI algorithm. A single machine learning model, such as a decision tree, is easy to trace, but when you have hundreds of models contributing to an output, it is much more difficult to make sense of the logic behind each decision.

As with most everything you’ll encounter in machine learning, ensemble learning is one of the many tools you have for solving complicated problems. It can get you out of difficult situations, but it’s not a silver bullet. Use it wisely.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.
