Who thought political ads featuring Deepfake Putin and Kim trashing the US were a good idea?

A not-for-profit called RepresentUs, working with creative media agency Mischief @ No Fixed Address, recently used popular Deepfake AI techniques to create a pair of political ads featuring actors digitally manipulated to look like Vladimir Putin and Kim Jong Un mocking the current state of US politics.

The ads were reportedly slated to air on Fox, CNN, and MSNBC in their DC markets but were “pulled at the last minute” for reasons unknown.

Allow me to clear the mystery: they were probably pulled because this is a bad idea. But before we get into that, let’s take a moment to break down what’s happening in the ads.

Here’s Deepfake Vladimir Putin:

And here’s Deepfake Kim Jong Un:

RepresentUs, the not-for-profit behind the project, says on its website that it brings together “conservatives, progressives, and everyone in between to pass powerful state and local laws that fix our broken elections and stop political bribery. Our strategy is central to ending political corruption, extremism and gridlock.”

The creators claim the purpose of the Deepfake ads is to shock and warn voters about the potential dangers our democracy faces. On the surface, this is a great message and it’s easy to get behind the campaign. Whether you’re politically red, blue, purple, or none of the above: if you’re eligible to vote, you should.

This ad campaign is a bad idea because the political battlefield is already rife with bots, bad actors, misinformation, disinformation, and organized chaos designed to disenfranchise as many people as possible.

Instead of clearing the air or cutting through the noise, these ads are just more signal distortion. Not only are they disingenuous on their surface, they’re also marketing fluff. There’s no revelatory information in the ads. It’s a fantasy that distracts from reality. The Putin and Kim in those videos claim they aren’t messing with our democracy because they don’t have to.

Yet, there’s ample evidence that they are engaged in massive interference campaigns. So what’s the real purpose of this ad campaign?

Who is the target audience for this faux-deception? People who think things are going so well that the only way they’d vote is if they were momentarily tricked into thinking Putin and Kim aren’t actively attempting to influence the 2020 election? It doesn’t add up.

The idea that Deepfakes, a technology built on intentional deception, can be used for political good is a silly one. You won’t find any renowned experts opining that US politics doesn’t have enough subterfuge.

It’s important to mention that the organizations behind the Deepfake Dictators campaign aren’t hiding the fact that these are fake videos. They run a disclaimer at the end of them.

But even this seems a bit skeevy to me, as the disclaimer should run throughout the entirety of the clips to ensure no reasonable person could believe they were real. The people making the videos don’t get to decide what bad actors do with them, but they shouldn’t make it ridiculously easy for their work to be abused.

There’s no good reason to try to “fool” anyone in politics anymore, especially not the voters. Ad campaigns like this are toxic, and the fact that this one was created by an outfit that professes no bias toward any particular candidate only makes it more suspicious. Why muddy the already murky water surrounding the 2020 campaign when our democracy is already drowning in propaganda?

At best, it’s a misguided effort to be provocative for the sake of being provocative — “look at us, we’re doing something we’re not supposed to but it’s for a good cause,” it screams while using Deepfakes for political influence ads, the one thing every AI expert on the planet feared would happen.

But at its worst, it looks like a pointed attempt to add more noise to the election scene while simultaneously downplaying the threat of foreign election interference. And that’s a bad look no matter what your original intent was.

What I learned from creating a metaverse for my students

We’ve been hearing a lot recently about the metaverse – a vision for the internet that uses technologies like virtual and augmented reality to integrate the real and digital worlds. With Facebook changing its name to Meta to focus on this space, and other big tech companies like Microsoft coming on board, there is much discussion about the potential of the metaverse to enhance the way we socialize, work and learn.

A key component of the metaverse ecosystem will be the creator economy. The virtual worlds within the metaverse need to be conceived, designed, and built by individuals and organizations.

To that end, I established a module at the University of Nottingham in 2020 where up to 100 of my engineering students interact with each other in avatar form in a virtual world known as Nottopia. Nottopia began as a fantastical virtual island, and has since become a floating castle in the sky.

I’ve approached this as somewhat of a research project, surveying students about their experiences, and observing their behaviour within the virtual world. My observations have informed changes I’ve made to Nottopia along the way.

They’ve also informed my answers to three central questions, which I think are relevant to anyone considering their own virtual world – whether for education or any of the metaverse’s myriad other applications.

What platform should you use?

To build a virtual world that others can join, you need to use a social virtual reality (VR) platform. I used Mozilla Hubs, but there are several others. These platforms can broadly be categorised according to accessibility and customisability.

Accessibility questions include whether the platform can run on everyday computers, including mobile devices, or whether it requires dedicated hardware, like VR headsets. From my perspective, it was important the virtual world be accessible on standard technology, but also that students could benefit from the immersive experiences that VR headsets afford (students can use VR headsets provided by the university if they wish).

Customisability refers to how easy it is to edit virtual worlds on the platform (usually, they come with a set of “template” worlds), or to create your own from scratch. Last year I edited an existing world, while this year I took several pre-existing building blocks (for example, bits of a castle) to re-imagine Nottopia.

The ability to customise this world has been essential for me. I needed to develop a world where we could easily gather for whole group discussions and presentations, as well as smaller group discussions. It also needed to be a space students could contribute to (for example by adding post-it notes, photos, videos and 3D objects).

What should the virtual world look like?

A second fundamental decision concerns whether the virtual world should be based on a real-world space (perhaps even aiming to replicate one as a digital twin) or should be purposefully different. When conceiving my VR module in spring/summer 2020, this was an interesting dilemma. As we were at the height of lockdown, I was tempted to recreate specific campus buildings so the students had the feeling of being at the university.

But based on a combination of educational theory and student feedback, I eventually created a world that was overtly fantastical. The aim was to motivate students through gamification – making their experience in the virtual world playful and challenging, somewhat like a game – and also provide an escape from the stresses of the pandemic. Hence the decision to start Nottopia as a futuristic building on a Mediterranean-style island. The reason I subsequently redesigned Nottopia as a castle on an island in the sky was to draw upon Nottingham’s reputation as a medieval city, and in response to students’ desire for a more expansive campus-style environment.

In Nottopia, I occasionally deliver lectures, but typically use the virtual space in more creative ways. For example, I often meet the students in the castle courtyard to discuss the week’s topic before asking them to follow a treasure trail around Nottopia’s buildings where they have to solve engineering problems at each stage.

How complex should the world be?

Current social VR technology presents limitations on how detailed an environment can be before it causes performance problems. This is especially the case for software like Mozilla Hubs, which runs in the browser (as opposed to an application you have to download and install) and is accessible on mobile devices.

Generally, the factors with the most significant impact on performance are the number of polygons (the basic shapes that make up a 3D object), texture sizes (the number of pixels in the 2D images placed over 3D objects), and the overall file size of the world. This was challenging to balance. The more complex your world, the more likely you are to end up with low frame rates (a jittery world). A complex world can also limit the number of users who can join the space.
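
To get a feel for why texture sizes matter so much, here is a rough back-of-envelope sketch in Python. You don’t need anything like this to use Hubs; the texture list, the uncompressed-RGBA assumption and the mipmap overhead are purely illustrative, not Mozilla Hubs figures.

```python
# Rough estimate of the memory footprint of a world's textures.
# Assumes uncompressed RGBA (4 bytes per pixel); real engines often use
# compressed formats, so treat these figures as an upper bound.

def texture_memory_mb(width_px, height_px, bytes_per_pixel=4, mipmaps=True):
    """Approximate memory for one texture, in megabytes."""
    base = width_px * height_px * bytes_per_pixel
    # A full mipmap chain adds roughly one third on top of the base image.
    total = base * 4 / 3 if mipmaps else base
    return total / (1024 * 1024)

# A hypothetical world with a few large textures.
scene_textures = [(2048, 2048), (2048, 2048), (1024, 1024), (1024, 1024)]

full_res = sum(texture_memory_mb(w, h) for w, h in scene_textures)
half_res = sum(texture_memory_mb(w // 2, h // 2) for w, h in scene_textures)

print(f"Full resolution: ~{full_res:.0f} MB")   # ~53 MB
print(f"Half resolution: ~{half_res:.0f} MB")   # ~13 MB
```

Halving texture resolution cuts the memory footprint roughly fourfold, which is why downscaling images is often the cheapest optimisation to try before simplifying geometry.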

You might assume that creating a virtual world requires considerable technical skills, including the ability to write code. But my experiences have shown me this is not the case.

The development of each of these worlds took me roughly one week. I benefited from the fact that social VR platforms like Mozilla Hubs are increasingly user-friendly, requiring little or no technical know-how. And there are many online video tutorials available.

Ultimately, the process by which these virtual worlds are created should be human-centred – designed according to the abilities, goals and expectations of the intended users. Too often, we hear about or experience products that have ultimately failed because the designers were too focused on the technological possibilities, rather than the users. The metaverse will be no different.

Article by Gary Burnett, Professor of Transport Human Factors, Faculty of Engineering, University of Nottingham

This article is republished from The Conversation under a Creative Commons license. Read the original article.

This startup replaces your entire live-streaming production crew with AI

LiveControl is an interesting company. Its mission is to take the incredibly complex and ridiculously expensive world of video production and distill it to a format that just about anyone can use and afford.

And, because it’s 2021, that means it’s an AI startup.

We love startups here at Neural, but the vast majority of pitches we get come from crappy companies pushing pie-in-the-sky misrepresentations of what predictive algorithms and computer vision can accomplish.

It’s refreshing when we come across a little company with a big AI idea that seems genuinely helpful.

LiveControl provides livestreaming production services to venues. Its business model is actually based on serving the needs of the church community.

We spoke with Patrick Coyne, one of the company’s founders, who told us the original idea began as a more traditional company. It was started by a man who wanted to help his rabbi put out a higher-quality live stream during services at the synagogue he attended.

At first, the idea was to optimize a multi-camera streaming setup and to have a trained human videographer on site to produce the venue’s live stream.

The way LiveControl works is actually pretty cool. This isn’t a solution for, say, your kid’s birthday party in your backyard. And it’s not something you’d use to make a big-budget Hollywood film. It’s more for a small theater, a church, or some other venue with a recurring event.

The LiveControl team sends the venue a “studio in a box,” with a couple of cameras (or more, if needed) and everything it needs to get set up.

Once the gear is set up, the company’s various experts get on a call with a representative for the venue and walk them through optimizing the gear for sound, placement, and lighting.

After that, the venue just has to give LiveControl 24 hours’ notice ahead of an event. A LiveControl employee will log in to your camera system at the time of the event and run everything remotely for its entire duration. This includes hosting the live stream and publishing it to whichever streaming services you require, including Facebook and YouTube.

So LiveControl is an AI startup, but it’s also a service that matches clients with producers.

The way Coyne describes it, the system is mostly software-based. The cameras use computer vision algorithms to highlight and track objects in real-time and the system suggests optimal viewing angles, zooms, and what it thinks might be the most entertaining or best-framed shots.

Because it’s software-based, it can be used with just about any PTZ (pan-tilt-zoom) camera.
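
LiveControl hasn’t published details of its pipeline, so the sketch below is only a rough illustration of the kind of building blocks such a system could use: OpenCV’s background subtraction flags motion in a feed, the largest moving region gets a bounding box, and a pan suggestion is printed when the subject drifts off-centre. The camera source, the contour-area threshold, and the framing margins are all assumptions made for the example, not LiveControl’s method.

```python
# Minimal sketch of motion-based framing suggestions for a camera feed.
# This is an illustration, not LiveControl's actual system. Assumes OpenCV 4.x.
import cv2

cap = cv2.VideoCapture(0)  # webcam index, RTSP URL, or video file path
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Foreground mask highlights moving regions (e.g. a speaker on stage).
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving = [c for c in contours if cv2.contourArea(c) > 500]

    if moving:
        # Bound the largest moving region and compare its centre to the frame's.
        x, y, w, h = cv2.boundingRect(max(moving, key=cv2.contourArea))
        subject_cx = x + w / 2
        frame_cx = frame.shape[1] / 2
        if subject_cx < frame_cx * 0.8:
            print("suggestion: pan left to re-centre the subject")
        elif subject_cx > frame_cx * 1.2:
            print("suggestion: pan right to re-centre the subject")

cap.release()
```

In a real production those suggestions would be surfaced to a human operator rather than sent straight to the camera, which is exactly the human-in-the-loop arrangement described next.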

This is a pretty cool use of AI, and a grand example of how “human in the loop” can be a good thing. LiveControl uses AI to augment its in-house-trained producers; the machine doesn’t actually make any decisions.

There’s really no substitute for human creativity when it comes to putting on an event or presenting a production. Unfortunately, a traditional multi-camera film crew would likely cost thousands of dollars per event.

That’s why small and medium-sized venues tend to have crappy live streams that are basically just wide shots that make everything look like it was shot by a fan in the balcony section.

LiveControl’s services start at under $200 a production. That’s a world apart from traditional video services, and there’s no denying that dynamic, multi-camera content tends to get more eyes on it than static wide-angle streams.

The company announced the completion of a $30 million funding round yesterday, so it’s a safe bet the team is going to further invest in the platform and be around to serve its clients for a while.

And, now that the world is starting to trickle back into public venues, it’s a great time to up your spot’s live-streaming game.

You can find out more on LiveControl’s website.
