FIFA’s new AI tech could stamp out painfully slow offside calls

The offside rule is one of the most controversial and complex laws of soccer. With so many factors in play, it’s hard for referees to judge an offside call accurately in the blink of an eye.

That’s why FIFA, soccer’s governing body, and other associations introduced an official position called the Video Assistant Referee (VAR) to review game-changing moments with the help of footage. These include potential penalties, red cards, and offside offenses.

While penalty and red card calls are largely subjective — even when they are checked by VAR — offside is an objective call that can be measured using players’ relative positions and the kick-point of the ball (the moment the ball is released by a player). So now, AI is getting involved to help decide the course of major matches by calling offside more accurately.

Until now, VARs have been checking offside calls manually, with only a few organizations, such as the English Premier League, showing the offside line to viewers at home. On close calls, referees need to manually check the offside line to see whether any of the player’s limbs cross it.

The problem is that these decisions can sometimes take a long time. In a recent Italian Serie A match between AS Roma and Torino FC, VAR took more than five minutes to reach a verdict. This breaks the flow of the game, and it can be frustrating for both fans and players.

To solve this problem, FIFA has decided to enlist AI and software to make offside reviews quicker.

FIFA experimented with this tech last year, and now plans to use it at the Arab Cup 2021, which kicks off tomorrow.

Here’s how it works: officials will place some 10 cameras along the stadium’s roof. These cameras and other sensors on the pitch will track 29 data points per player, 50 times every second. This setup provides an accurate position of each player’s limbs relative to the offside line in real time.
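FIFA hasn’t published the underlying algorithm, but conceptually the core check boils down to comparing positions along the length of the pitch at the frame when the ball is kicked. Here’s a deliberately simplified sketch of that idea; the names and data model are hypothetical, and it ignores real-world complications such as the halfway line, body parts that can’t legally play the ball, and camera calibration:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrackedPlayer:
    team: str              # "attacking" or "defending"
    limb_x: List[float]    # x-coordinates (metres towards the defending goal)
                           # of this player's tracked body points

def offside_at_kick(players: List[TrackedPlayer],
                    attacker: TrackedPlayer,
                    ball_x: float) -> bool:
    """Is `attacker` in an offside position at the kick frame?"""
    # Forward-most tracked point of every defender, nearest their own goal first.
    defenders = sorted(
        (max(p.limb_x) for p in players if p.team == "defending"),
        reverse=True,
    )
    # The offside line is set by the second-last defender (the last is usually the keeper).
    offside_line = defenders[1] if len(defenders) >= 2 else defenders[0]
    attacker_front = max(attacker.limb_x)
    # Offside position: ahead of both the ball and the second-last defender.
    return attacker_front > offside_line and attacker_front > ball_x
```

The 50-times-per-second sampling matters because this comparison has to be made at the exact frame the pass is played; the finer the sampling, the smaller the error in pinpointing that kick-point.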

FIFA’s Football Technology & Innovation Director, Johannes Holzmüller, said that the AI can send an alert for a potential offside offense directly to the VAR, so they can review it quickly.

You can watch the video below, in which officials discuss the new technology.

Arsene Wenger, the former Arsenal manager and current FIFA Chief of Global Football Development, pointed out last year that an average VAR call takes 70 seconds. That’s a long time in soccer, but this new semi-automated system could bring it down.

While time is a crucial factor for these VAR calls, accuracy matters just as much: a single dubious decision can cause a stir and overshadow an entire match. With the assistance of AI, these verdicts can be much more accurate.

FIFA is looking to use this technology at the 2022 World Cup as well, which, if the last tournament is anything to go by, will likely have half the world watching.

So it’s important for the tech to be tested thoroughly to avoid any hiccups at the main tournament. Robot refs for the 2026 World Cup, anyone?

Google fired Margaret Mitchell, its Ethical AI Team founder

Google has evidently fired the founder and co-lead of its Ethical AI team, Margaret Mitchell.

This comes after Mitchell spent weeks locked out of her work accounts during an investigation related to her objections to the controversial firing of her fellow co-lead, Timnit Gebru.

According to a Google spokesperson, the investigation into Mitchell concerned the alleged sharing of internal company files.

The firing of Timnit Gebru sent shockwaves throughout the AI community. It’s been widely viewed as a move to remove voices of dissent when those voices, world-renowned ethicists hired specifically to investigate and oversee the ethical development and deployment of Google’s AI systems, don’t say what the company wants to hear.

Details are still coming in, but it appears as though Mitchell’s been let go as the result of Google’s investigation.

Update 2:38PM PST, 19 February: Google’s official statement on the matter, per this Axios article:

“After conducting a review of this manager’s conduct, we confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees.”

GPT-3’s bigotry is exactly why devs shouldn’t use the internet to train AI

“Yeah, but your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” – Dr. Ian Malcolm, fictional character, Jurassic Park.

It turns out that a $1 billion investment from Microsoft and unfettered access to a supercomputer weren’t enough to keep OpenAI’s GPT-3 from being just as bigoted as Tay, the Microsoft chatbot that became an overnight racist after being exposed to humans on social media.

It’s only logical to assume any AI trained on the internet – meaning trained on databases compiled by scraping publicly-available text online – would end up with insurmountable inherent biases, but it’s still a sight to behold in the full context (i.e., it took approximately $4.6 million to train the latest iteration of GPT-3).

What’s interesting here is that OpenAI’s GPT-3 text generator is finally starting to trickle out to the public in the form of apps you can try out yourself. These are always fun, and we covered one about a month ago called Philosopher AI.

This particular use-case is presented as a philosophy tool. You ask it a big-brain question like “if a tree falls in the woods and nobody is there to hear it, do quantum mechanics still manifest classical reality without an observer?” and it responds.

In this case:

It’s important to understand that in between each text block the web page pauses for a few moments and displays a line stating that “Philosopher AI is typing,” followed by an ellipsis. We’re not sure if it’s meant to add to the suspense or if it actually indicates the app is generating text a few lines at a time, but it’s downright riveting. [Update: This appears to have also been changed during the course of our testing; now you just wait for the blocks to appear without the “Philosopher AI is typing” message.]

Take the above “tree falls in the woods” query for example. For the first few lines of the model’s response, any fan of quantum physics would likely be nodding along. Then, BAM, the AI hits you with the last three text blocks and… what?

The programmer responsible for Philosopher AI, Murat Ayfer, used a censored version of GPT-3. It avoids “sensitive” topics by simply refusing to generate any output.
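It isn’t public how Philosopher AI implements that filter, but the behavior we ran into (described below) is what you’d expect from a blunt keyword blocklist sitting in front of the model. A minimal sketch of that approach, using purely illustrative terms, placeholder refusal text, and a stand-in generate function, might look like this:

```python
# Hypothetical sketch of a keyword filter placed in front of a text generator.
# Philosopher AI's actual implementation isn't public; this only illustrates
# why such filters tend to over-block (e.g. refusing any prompt containing "black").

BLOCKLIST = {"black", "white people", "racist"}   # illustrative terms, not the real list

REFUSAL = "This topic has been blocked by the developers."  # placeholder wording

def respond(prompt: str, generate) -> str:
    """Return the model's output, unless the prompt trips the blocklist."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKLIST):
        return REFUSAL           # the model is never called for "sensitive" prompts
    return generate(prompt)      # otherwise pass the prompt straight to GPT-3
```

A substring check like this is exactly the kind of thing that would refuse a harmless question about the color black while happily passing prompts the developers never thought to list.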

In practice, if you ask it to “tell me a joke” it’ll output the following:

So maybe it doesn’t do jokes. But if you ask it to tell a racist joke it spits out slightly different text:

Interestingly, it appears as though the developers made a change to the language being used while we were researching this article. In early attempts to provoke the AI it would, for example, generate the following response when the phrase “Black people” was inputted as a prompt:

Later, the same prompt (and others triggering censorship) generated the same response as the above “tell me a racist joke” prompt. The change may seem minor, but it better reflects the reality of the situation and provides greater transparency. The previous censorship warning made it seem like the AI didn’t “want” to generate text, but the updated one explains the developers are responsible for blocking queries:

So what words and queries are censored? It’s hard to tell. In our testing we found it was quite difficult to get the AI to discuss anything with the word “black” in it unless it was a query specifically referring to “blackness” as a color-related concept. It wouldn’t even engage in other discussions on the color black:

So what else is censored? Well, you can’t talk about “white people” either. And asking questions about racism and the racial divide is hit or miss. When asked “how do we heal the racial divide in America?” it declines to answer. But when asked “how do we end racism?” it has some thoughts:

This kind of blatant racism is usually reserved for the worst spaces on social media.

Unfortunately, GPT-3 doesn’t just output racism on demand; it’ll also spit out a never-ending torrent of bigotry towards the LGBTQ community. The low-hanging fruit prompts such as “LGBTQ rights,” “gay people,” and “do lesbians exist?” still get the censorship treatment:

But when we hit it with queries such as “what is a transsexual?” or “is it good to be queer?” the machine outputs exactly what you’d expect from a computer trained on the internet:

Again, while we were testing, the dev appears to have tweaked things. Upon trying the prompt “what is a transsexual” a second time, we received the updated censorship response. But we were able to resubmit “is it good to be queer” for new outputs:

At the end of the day, the AI isn’t itself capable of racism or bigotry. GPT-3 doesn’t have thoughts or opinions. It’s essentially just a computer program.

And it certainly doesn’t reflect the morality of its developers. This isn’t a failure on anyone’s part to stop GPT-3 from outputting bigotry; it’s an inherent flaw in the system itself that doesn’t appear to be surmountable using brute-force compute.

In this way, it’s very reflective of the problem of keeping human bigotry and racism off social media. Like life, bigotry always seems to, uh, find a way.

The bottom line: garbage in, garbage out. If you train an AI on uncurated human-generated text from the internet, it’s going to output bigotry.

You can try out Philosopher AI here.

H/t: Janelle Shane on Twitter

