Scientists figured out how to stop time using quantum algorithms

Everyone’s always talking about traveling through time, but if you ask me, the ultimate temporal vacation would be just to pause the clock for a bit. Who among us couldn’t use a five- or six-month break after 2020 before we commit to an entire new calendar year? It’s not you, 2021; it’s us.

Unfortunately, this isn’t an episode of Rick and Morty so we can’t stop time until we’re ready to move on.

But maybe our computers can.

A pair of studies about quantum algorithms, from independent research teams, recently graced the arXiv preprint servers. They’re both basically about the same thing: using clever algorithms to solve nonlinear differential equations.

And if you squint at them through the lens of speculative science you may conclude, as I have, that they’re a recipe for computers that can basically stop time in order to solve a problem requiring a near-immediate solution.

Linear equations are the bread and butter of classical computing. We crunch numbers with classical algorithms and basic binary compute to determine what happens next in a linear pattern or sequence. But nonlinear differential equations are tougher: they’re often too hard, or entirely impractical, for even the most powerful classical computer to solve.
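To make that distinction concrete, here’s a minimal contrast of my own (it isn’t drawn from either paper): a linear differential equation such as simple exponential decay has a tidy closed-form solution, while the logistic equation, which differs only by one self-interacting term, is already nonlinear. Scale that kind of feedback up to billions of coupled variables and classical solvers start to drown.

```latex
% Illustrative contrast only -- my example, not taken from either paper.
% Linear ODE: the unknown y appears only to the first power, and there is a
% closed-form solution (simple exponential decay).
\[
  \frac{dy}{dt} = -k\,y \quad\Longrightarrow\quad y(t) = y_0\, e^{-kt}
\]
% Nonlinear ODE: the logistic equation. Multiplying y by (1 - y) introduces a
% y^2 term, so the state feeds back on itself -- the hallmark of nonlinearity.
\[
  \frac{dy}{dt} = r\,y\,(1 - y)
\]
```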


The hope is that one day quantum computers will break the difficulty barrier and make these hard-to-solve problems seem like ordinary compute tasks.

When computers solve these kinds of problems, they’re basically predicting the future. Today’s AI running on classical computers can look at a picture of a ball in mid-air and, given enough data, predict where the ball is going. You can add a few more balls to the equation and the computer will still get it right most of the time.

But once you reach the point where the scale of interactivity creates a feedback loop, such as when observing particle interactions or tossing a heaping handful of glitter into the air, a classical computer essentially doesn’t have the oomph to deal with physics at that scale.

This, as quantum researcher Andrew Childs told Quanta Magazine, is why we can’t predict the weather. There are just too many particle interactions for a regular old computer to follow.

But quantum computers don’t obey the binary rules of classical computing. Not only can they zig and zag, they can also zig while they zag, or do neither, all at the same time. For our purposes, this means they can potentially solve difficult problems such as “where is every single speck of glitter going to be in 0.02 seconds?” or “what’s the optimal route for this traveling salesman to take?”
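For a rough sense of what “zig while they zag” means, here’s a toy sketch of superposition. It’s entirely my own illustration, nothing from the papers: a single qubit’s state is just a pair of complex amplitudes, and you can simulate one on a classical machine with a few lines of numpy.

```python
# Toy illustration of superposition (hypothetical example, not from the papers).
# A single qubit's state is a vector of two complex amplitudes; the squared
# magnitudes give the probability of measuring 0 or 1.
import numpy as np

ket_zero = np.array([1, 0], dtype=complex)   # the classical "zig"
ket_one = np.array([0, 1], dtype=complex)    # the classical "zag"

# An equal superposition: the qubit is, loosely speaking, zigging and zagging at once.
superposition = (ket_zero + ket_one) / np.sqrt(2)

probabilities = np.abs(superposition) ** 2
print(probabilities)  # -> [0.5 0.5]: a 50/50 chance of measuring 0 or 1
```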

In order to understand how we get from here to there (and what it means), we have to take a look at the aforementioned papers. The first one comes from the University of Maryland (you can check it out here), and the second comes from a team at MIT.

Both papers are fascinating (you should read them later!) but I’ll risk gross oversimplification by saying: they detail how we can build algorithms for quantum computers to solve those really hard problems.

So what does that mean? We hear about how quantum computers can crack drug discovery or giant math problems, but where does the rubber actually hit the road? What I’m saying is: classical computing gave us iPhones, jet fighters, and video games. What’s this going to do?

It’s potentially going to give quantum computers the ability to essentially stop time. Now, as you can imagine, this doesn’t mean any of us will get a remote control with a pause button on it that we can use to take a break from an argument, like in the Adam Sandler movie “Click.”

What it means is that a powerful enough quantum computer running the great-great-great-great-grandchildren of the algorithms being developed today may one day be able to functionally assess particle-level physics with enough speed and accuracy to make the passage of time a non-factor in its execution.

So, theoretically, if someone in the future threw a handful of glitter at you and you had a swarm of quantum-powered defense drones, they could instantly respond by perfectly positioning themselves between you and the particles coming from the glitterplosion to protect you. Or, for a less interesting use case, you could model and forecast the Earth’s weather patterns with near-perfect accuracy over extremely long periods of time.

This ultimately means quantum computers could one day operate in a functional time-void, solving problems at nearly the exact moment they happen.

H/t: Max G Levy, Quanta Magazine

Judge says Amazon suit alleging Trump interfered in Project JEDI can go ahead

An Amazon lawsuit alleging former President Donald Trump interfered in the selection process for the Department of Defense’s JEDI project can go forward, a federal judge ruled today.

The ruling stems from a 2019 lawsuit in which Amazon insisted Trump purposefully snubbed the company in favor of Microsoft for the JEDI account – a $10B program to build cloud and AI infrastructure for the Pentagon. The reasoning for this, according to Amazon, has to do with its CEO’s ownership of the Washington Post, a newspaper Trump referred to as the “enemy of the people” numerous times during his stint as US president.

Aside from the personal beef between the richest person on the planet and the impeached US president, Amazon also asserts its product is far superior to Microsoft’s and claims it’s clearly better suited to meet taxpayers’ needs.

The Department of Justice and Microsoft had filed a joint motion in an attempt to have Amazon’s case dismissed. It’s unclear at this time why the judge refused the pair’s request.

An Amazon spokesperson also provided a statement to Neural.

Quick take: This whole saga is an ugly chapter for both the US government and big tech. There’s no unified scientific ethics body for artificial intelligence research, and the US government has yet to describe its own policies regarding AI beyond blanket statements.

This means we’ve spent nearly three years watching the richest companies in the world (Google, Microsoft, and Amazon were all in the running) fight over which of them gets to militarize artificial intelligence for the purpose of winning wars with absolutely no legal framework to dictate what that means.

Not only is taxpayer money being wasted representing the government and Microsoft in court because the DoD’s selection process was flawed from the beginning, but Donald Trump’s myriad conflicts of interest and utter lack of principles and ethics mean that, by the time this is all said and done, we’ll have spent millions of dollars and years in court before the first algorithm gets installed.

To use a common military acronym: the whole thing is FUBAR.

Facebook wants you to believe its AI is working against hate speech

In the middle of the last decade, Facebook decided it needed to build AI to fight hate speech.

While the technology did work in some cases, we’ve also seen glaring failures. After the Christchurch shooting, for example, Facebook wasn’t able to quickly remove the video.

Over the weekend, the Wall Street Journal published a new report indicating Facebook’s AI can’t consistently identify first-person shooting videos and racist rants. Plus, there was a bizarre incident in which the algorithm couldn’t tell cockfighting apart from car crashes.

The report noted that the firm’s AI detects only a small fraction of the hate speech posts on the platform, and removes only a few of them. According to documents recently leaked by former Facebook employee Frances Haugen, the company takes action on only 3-5% of hate speech and 0.6% of violence and incitement content.

According to a senior engineer who spoke to WSJ, the company doesn’t have and “possibly never will have a model that captures even a majority of integrity harms, particularly in sensitive areas.”

Facebook has hit back at these claims in a blog post written by its VP of integrity, Guy Rosen. The post claims the company’s AI has been able to reduce the prevalence of hate speech on the platform by 50%.

Prevalence is the metric the firm uses to track the spread of hate speech on the platform. The current rate is 0.05%, which means that out of every 10,000 content views, five land on a hate speech-related post. However, given Facebook’s massive scale, that still means many people see these posts.
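For a back-of-the-envelope sense of scale, here’s what that 0.05% rate works out to. This is my own illustration; the daily view count below is an assumed figure, not Facebook data.

```python
# Back-of-the-envelope illustration of "prevalence" (hypothetical numbers).
prevalence = 0.0005                      # Facebook's reported 0.05% rate
views_per_10k = prevalence * 10_000
print(views_per_10k)                     # -> 5.0 hate-speech views per 10,000

# At Facebook's scale the absolute numbers stay large. If users rack up,
# say, 100 billion content views a day (an assumed figure for illustration),
# that same rate would mean tens of millions of hate-speech views daily.
daily_views = 100_000_000_000
print(int(prevalence * daily_views))     # -> 50000000 views
```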

Rosen said that the data from the leaked documents is being used to wrongly paint a picture that Facebook’s AI is inadequate at removing hate speech.

Facebook spokesperson Andy Stone told WSJ that AI is just one of the ways the company aims to tackle hate speech. It also lowers the visibility of problematic posts so that fewer people see them.

While the company claims its AI has improved by leaps and bounds, examples like its algorithms labeling a video of Black men with a “primates” prompt keep coming up.

Despite these blemishes, Facebook is bullish on using AI to fight hate speech. So that means the company needs to buckle up and make its algorithms more inclusive and effective.
