Honey traps and bribery: Ex-Cambridge Analytica CEO slapped with 7-year directorship ban

The disgraced former Cambridge Analytica CEO has been banned from running limited companies in the UK for seven years for letting staff offer unethical services.

The British government’s Insolvency Service said Alexander Nix had permitted Cambridge Analytica’s parent firm SCL Elections and its affiliated companies to offer prospective clients a dizzying range of unscrupulous services. They included “bribery or honey trap stings, voter disengagement campaigns, obtaining information to discredit political opponents, and spreading information anonymously in political campaigns.”

Mark Bruce, Chief Investigator for the Insolvency Service, said SCL Elections had repeatedly offered shady political services over a number of years.

Nix told Reuters he had neither admitted any wrongdoing nor been accused of breaking the law, but had accepted the disqualification to “avoid an unnecessary, lengthy and expensive court case”.

SCL Elections and the five connected companies ceased trading in 2018 after the Cambridge Analytica scandal drove away their clients.

The political consultancy had harvested data from up to 87 million Facebook users, which was used to target US voters with personalized ads that helped Donald Trump’s 2016 presidential campaign.

Nix was later caught on camera saying the firm could use sex workers, bribes, ex-spies, and fake IDs to assist election campaigns around the world.

“Send some girls around to the candidate’s house,” Nix said in one exchange, adding that Ukrainian women “are very beautiful, I find that works very well.”

When announcing Nix’s ban, which comes into effect on October 5, the Insolvency Service didn’t reveal whether the companies had ever actually performed any of these services. We’ll be keeping an eye out for his next move.

Study shows how dangerously simple it is to manipulate voters (and daters) with AI

A pair of researchers, Ujué Agudo and Helena Matute of Universidad de Deusto in Spain, recently published a fascinating study demonstrating how easy it is to influence humans with algorithms.

Up front: The basic takeaway from the work is that people tend to do what the algorithm says. Whether they’re being influenced to vote for a specific candidate based on an algorithmic recommendation or being funneled toward the perfect date on an app, we’re dangerously easy to influence with basic psychology and rudimentary AI.

The big deal: We like to think we’re agents of order making informed decisions in a somewhat chaotic universe. But, as Neural’s Thomas Macaulay recently pointed out in our weekly newsletter, we’re unintentional cyborgs.

And that means we’re susceptible to the same disadvantages as our hairy ancestors as well as those that have traditionally only affected machines. If you prick us, we bleed. And if an algorithm tells us something is true, we usually agree.

An argument: We’re no longer Homo sapiens, which means “wise man” in Latin. We’re more like reverse homomutatus. That might sound like it has something to do with “mutant humans,” but in fact it refers to a type of cloud.

Homomutatus clouds are formations caused by human interference in the natural atmosphere that have transcended their initial state to become… something more. It takes a combination of humanity’s byproducts and nature’s influence to create them.

What humanity has become is the opposite. We were once nature’s “wise man”; now we’re fully augmented cybernetic beings who’ve eschewed millions of years of natural evolution in exchange for the ability to externalize our cognitive functions. In other words: we let computers do the work our brains evolved to handle so we can fill our time with more creative endeavors, such as arguing over politics or deciding whether to swipe right or left.

The research: Agudo and Matute, the aforementioned researchers, probably weren’t trying to argue that humanity has evolved beyond the natural order when they conducted their study. But, after reading the research, it’s hard to come to any other conclusion.

Per the study, the researchers conducted four distinct experiments under similar conditions. Each began with a fake personality test. Once participants completed the test, they were given a personality profile which they were told would inform the algorithm and help determine the best personalized results for them.

In reality, there were no individual personality profiles. All participants were given the same fake profile, a vaguely worded one that could apply to anyone.

In the first experiment, the researchers used explicit manipulation to get participants to vote for a specific political candidate. People were shown images of fictional politicians, told that specific candidates matched their personality to a high percentage, and then asked who they were most likely to vote for, based on nothing more than the images of the politicians and the algorithmic recommendations.

When compared against a control group who viewed the images with no algorithmic manipulation, people were much more likely to vote for the candidate the AI recommended.
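To make that comparison concrete, here’s a minimal Python sketch of this kind of two-group design. It is not the authors’ code: the group sizes, the chance-level baseline, and the size of the nudge are all invented for illustration, and a chi-square test stands in for whatever analysis the paper actually used.

```python
# Hypothetical simulation of the explicit-recommendation experiment.
# Every number here is an assumption, not a figure from the study.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 200  # assumed participants per group

# Probability of picking the target candidate out of four fictional politicians.
p_control = 0.25   # control group: chance level, no recommendation shown
p_explicit = 0.45  # assumed lift from an explicit "high personality match" label

control = rng.random(n) < p_control    # True = voted for the target candidate
explicit = rng.random(n) < p_explicit

# 2x2 contingency table: rows = group, columns = (target candidate, any other)
table = [[control.sum(), n - control.sum()],
         [explicit.sum(), n - explicit.sum()]]
chi2, p_value, _, _ = chi2_contingency(table)

print(f"control share:  {control.mean():.2f}")
print(f"explicit share: {explicit.mean():.2f}")
print(f"p-value:        {p_value:.4f}")
```

With these made-up effect sizes, the target candidate’s vote share in the nudged group lands well above the control group’s, and the test flags the gap as unlikely to be chance, which is the shape of the result the researchers report.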

The second experiment used indirect manipulation. Rather than telling participants an AI was recommending politicians, the algorithm secretly chose four politicians and pre-exposed participants to their images in order to develop familiarity.

Interestingly, people didn’t seem to be influenced in any statistically meaningful way when the algorithm tried to use what magicians call a “force,” that is, when it hid the fact that it was pushing a mark toward a specific outcome.

In politics, it seems, people trust the algorithm to tell them what to do even more when they know they’re being manipulated.

But the results were a 180-degree departure when the same concepts and algorithms were applied to dating apps.

When the algorithm told people they would be a match with certain individuals, participants were still just as likely to pick a different fictional date as the one the AI tried to force on them.

However, when the AI stayed behind the scenes and surfaced specific images, users were more likely to choose those images.

The bottom line: We’re easily manipulated by algorithms. The question isn’t whether a bad actor can manipulate us with AI, it’s which algorithms work best for a given situation.

What that means for humanity, in the post-Homo sapiens sense, remains to be seen. We’ve only been living with modern AI techniques for a matter of decades. The bad actors of the world have an extreme advantage over the good-faith researchers, scientists, and politicians who want to study and regulate AI.

You can read the whole study here.

PastBook launches a Google Photos-esque, AI-assisted photobook app for iOS

PastBook, a startup out of Amsterdam, is bringing its web-to-print photobook service to iOS. Thanks to some nifty AI integrations, it looks and feels a lot like something Google would make. And that’s a good thing.

We’re closing in on a generation who’ve grown up in a world where photography is almost exclusively a digital medium. But there’s no substitute for artful prints, family photo albums, and the aesthetics of a physical memoir of your personal journey as a shutterbug, a subject, or both.

PastBook provides dead-simple photobook solutions, which is cool, but what’s most interesting (to us here at Neural) is the way the company uses AI.

One of the best things about modern iPhones is that they come with brilliant cameras, AI, and software. And this often means people take pictures of everything. Some of us have tens of thousands of images on our devices, many of which we might never see again if it weren’t for memories apps and social media posts.

The PastBook app asks you for a date or location and sorts through your images to find what it considers the best. This involves identifying duplicates and using computer vision to determine which images you’re likely to find pleasing.
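PastBook hasn’t published how that pipeline works, so treat the following Python sketch as pure speculation about what such a system might look like: a simple average hash stands in for duplicate detection, and an edge-sharpness heuristic stands in for whatever learned model actually scores how pleasing an image is.

```python
# Speculative sketch of a "pick the best photos" pipeline; PastBook's actual
# method is proprietary. The hash and the sharpness score are stand-ins.
from pathlib import Path
from PIL import Image, ImageFilter
import numpy as np

def average_hash(img: Image.Image, size: int = 8) -> int:
    """64-bit average hash: visually identical images produce the same bits."""
    pixels = np.asarray(img.convert("L").resize((size, size)), dtype=np.float32)
    bits = (pixels > pixels.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def sharpness(img: Image.Image) -> float:
    """Variance of an edge-filtered grayscale image; higher means crisper."""
    edges = img.convert("L").filter(ImageFilter.FIND_EDGES)
    return float(np.asarray(edges, dtype=np.float32).var())

def curate(folder: str, top_n: int = 20) -> list[Path]:
    """Drop exact perceptual duplicates, then keep the sharpest shots."""
    seen, candidates = set(), []
    for path in sorted(Path(folder).glob("*.jpg")):
        img = Image.open(path)
        h = average_hash(img)
        if h in seen:  # identical hash: treat as a duplicate and skip
            continue
        seen.add(h)
        candidates.append((sharpness(img), path))
    candidates.sort(key=lambda t: t[0], reverse=True)
    return [path for _, path in candidates[:top_n]]
```

A production system would compare hashes by Hamming distance to catch near-duplicates rather than exact copies, and would rank with a trained aesthetic model instead of raw sharpness, but the overall shape (dedupe, score, rank, take the top N) is what the app’s one-minute curation implies.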

The big idea is to turn sifting through hundreds or thousands of photos into an endeavor that takes less than a minute but still results in a photobook that looks like it was hand-curated.

In practice, this actually works out quite well. The app tends to select quality images and it creates compelling collages. And, when it doesn’t work out perfectly, it allows you to edit the images and change collage layouts.

PastBook doesn’t have the most robust interface. It’s fairly sparse on options. That’s not so much a complaint as it is a heads-up. This service is designed so that just about anyone who can use an iPhone can use it, so don’t expect photo editing or anything more complex than a few interface questions you can answer with a thumbs up/down emoji.

And, at the end of the day, this ‘free’ app is really just the UI for a photobook printing service. That means there’s almost no reason to install the app if you’re not interested in ordering one of PastBook’s print products.

In this case, we were more interested in playing with the curation AI. But it bears mentioning we did not review any of the actual print products.

The company offers a satisfaction guarantee and it appears to have overwhelmingly positive reviews on each platform it’s been on, so everything appears to be on the up-and-up. But with any personalized product comes the risk of dissatisfaction.

For more information you can check out PastBook’s website and, starting 29 June, you can download the app here on iOS.
