Boston Dynamics doesn’t want you to shoot paintballs from Spot the robot dog

Spot the robot dog has found fame through viral videos of it dancing, delivering sneakers, and even herding sheep. But civil liberties groups fear that the quadruped could also be weaponised.

A new game aims to highlight the dangers of the Boston Dynamics bot by arming it with a paintball gun and dropping it in a gallery — and giving you the controls.

The project is the brainchild of MSCHF (pronounced “mischief”), a group of pranksters whose previous inventions include an app that recommends stocks for your star sign.

The Brooklyn-based collective said it has now created “Spot’s Rampage” to draw attention to the robot’s lethal capabilities.

MSCHF’s track record (and name) suggests some surprises will be in store when the rampage begins on February 24 at 1PM EST. But the concept sounds cathartic as well as vaguely edifying.

The group says that people who download the MSCHF app will be able to remotely pilot Spot from their phones. Every two minutes, the controls will be passed to a new random viewer of the livestream.
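The rotation mechanic the group describes is simple to picture: a scheduler holds one active controller at a time and reassigns it at a fixed interval. The sketch below is a hypothetical simulation of that scheme; MSCHF hasn’t published its actual code, so the function and names here are illustrative assumptions.

```python
import random

def rotate_control(viewers, interval, total_ticks, rng=None):
    """Simulate handing robot control to a random viewer every `interval` ticks.

    Hypothetical sketch -- not MSCHF's actual scheduler. Returns the list of
    controllers, one entry per tick.
    """
    rng = rng or random.Random(0)
    controllers = []
    current = None
    for tick in range(total_ticks):
        if tick % interval == 0:            # every two minutes in the real event
            current = rng.choice(viewers)   # pass the controls to a random viewer
        controllers.append(current)
    return controllers
```

Each viewer keeps the controls for the full interval before the next random handoff, which matches the two-minute cadence the group describes.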

Boston Dynamics told TechCrunch that it had initially considered working on the project, but backed out after finding out about the paintball gun.

The firm tweeted that the game “fundamentally misrepresents Spot and how it is being used to benefit our daily lives.”

The company’s irritation is unsurprising. Boston Dynamics has already been criticised for getting funding from military research agency DARPA and letting police test Spot. It would clearly rather people focused on the robot’s potential societal benefits.

However, the public denunciation will only draw more attention to a game that sounds like a lot of fun to me. I just hope it doesn’t deliver any unwelcome surprises.

The key to making AI green is quantum computing

We’ve painted ourselves into another corner with artificial intelligence. We’re finally starting to break through the usefulness barrier, but we’re butting up against the limits of our ability to responsibly meet our machines’ massive energy requirements.

At the current rate of growth, it appears we’ll have to turn Earth into Coruscant if we want to keep spending unfathomable amounts of energy training systems such as GPT-3.

The problem: Simply put, AI takes too much time and energy to train. A layperson might imagine a bunch of code on a laptop screen when they think about AI development, but the truth is that many of the systems we use today were trained on massive GPU networks, supercomputers, or both. We’re talking incredible amounts of power. And, worse, it takes a long time to train AI.

The reason AI is so good at the things it’s good at, such as image recognition or natural language processing, is that it basically just does the same thing over and over again, making tiny changes each time, until it gets things right. But we’re not talking about running a few simulations. It can take hundreds or even thousands of hours to train up a robust AI system.
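That “same thing over and over, with tiny changes” loop is gradient descent, and a toy version fits in a few lines. The sketch below fits a single weight so that `w * x` approximates `y`; real systems do the same kind of repeated nudging across billions of parameters, which is where the hours and megawatts go.

```python
def train(xs, ys, lr=0.01, steps=1000):
    """Toy gradient descent: fit one weight w so that w * x approximates y."""
    w = 0.0
    for _ in range(steps):  # repeat the same step many times...
        # Gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad      # ...making a tiny change each time
    return w
```

One weight and a thousand steps is trivial; scale the same loop to 175 billion parameters and trillions of tokens, and the energy bill follows.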

One expert estimated that GPT-3, a natural language processing system created by OpenAI, would cost about $4.6 million to train. But that assumes one-shot training. And very, very few powerful AI systems are trained in one fell swoop. Realistically, the total expenses involved in getting GPT-3 to spit out impressively coherent gibberish are probably in the hundreds of millions of dollars.
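Estimates like that $4.6 million figure come from simple GPU-hours-times-hourly-rate arithmetic. The inputs below are illustrative assumptions in the ballpark of public back-of-envelope estimates, not OpenAI’s actual figures:

```python
# Back-of-envelope training-cost estimate (illustrative assumptions, not
# OpenAI's real numbers): total compute expressed as single-GPU years,
# multiplied by a cloud rental rate.
gpu_years = 355                 # assumed equivalent single-GPU training time
hours_per_year = 365 * 24
price_per_gpu_hour = 1.50       # assumed cloud rate in USD
cost = gpu_years * hours_per_year * price_per_gpu_hour  # ~ $4.6 million
```

And that is one successful run; failed runs, hyperparameter sweeps, and retraining multiply it.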

GPT-3 is among the high-end abusers, but there are countless AI systems out there sucking up hugely disproportionate amounts of energy when compared to standard computation models.

The problem? If AI is the future, under the current power-sucking paradigm, the future won’t be green. And that may mean we simply won’t have a future.

The solution: Quantum computing.

An international team of researchers, including scientists from the University of Vienna and MIT as well as institutions in Austria and New York, recently published research demonstrating “quantum speed-up” in a hybrid artificial intelligence system.

In other words: they managed to exploit quantum mechanics in order to allow AI to find more than one solution at the same time. This, of course, speeds up the training process.

How?

This is the cool part. They ran 10,000 models through 165 experiments to determine how they functioned using classical AI and how they functioned when augmented with special quantum chips.

And by special: classical CPUs process information by manipulating electricity, but the quantum chips the team used were nanophotonic, meaning they use light instead.

The gist of the operation is that in circumstances where classical AI bogs down solving very difficult problems (think: supercomputer problems), the hybrid quantum system outperformed standard models.

Interestingly, when presented with less difficult challenges, the researchers didn’t observe any performance boost. Seems like you need to get it into fifth gear before you kick in the quantum turbocharger.

There’s still a lot to be done before we can roll out the old “mission accomplished” banner. The team’s work wasn’t the solution we’re eventually aiming for, but more of a small-scale model of how it could work once we figure out how to apply their techniques to larger, real problems.

You can read the whole paper here in Nature.

H/t: Shelly Fan, Singularity Hub

Nvidia is building the UK’s fastest supercomputer to use for AI research in healthcare

Nvidia today announced that it’s building the UK’s fastest supercomputer, which it intends to use for AI research in healthcare.

The Cambridge-1 will deliver 400 petaflops of AI performance, giving it a spot among the world’s 30 most powerful supercomputers. Nvidia says it will also be among the three most energy-efficient supercomputers on the current Green500 list.

The system is expected to come online by the end of the year. Its early users will include healthcare researchers at GSK, AstraZeneca, Guy’s and St Thomas’ NHS Foundation Trust, King’s College London, and Oxford Nanopore Technologies.

“Tackling the world’s most pressing challenges in healthcare requires massively powerful computing resources to harness the capabilities of AI,” said Nvidia founder and CEO Jensen Huang in his GPU Technology Conference keynote.

“The Cambridge-1 supercomputer will serve as a hub of innovation for the UK, and further the groundbreaking work being done by the nation’s researchers in critical healthcare and drug discovery.”

The system will have four key focus areas: joint industry research into large-scale healthcare and data-science problems; support for AI startups; compute time for university studies on medical cures; and education for future AI practitioners.

It will eventually form a part of an AI Center of Excellence that Nvidia plans to create in Cambridge alongside Arm, the British chip designer that the US firm recently agreed to buy for $40 billion.
