Why AI needs a physical body to emotionally connect with humans

Artificial intelligence seems to be making enormous advances. It has become the key technology behind self-driving cars, automatic translation systems, speech and textual analysis, image processing and all kinds of diagnosis and recognition systems. In many cases, AI can surpass the best human performance levels at specific tasks.

We are witnessing the emergence of a new commercial industry with intense activity, massive financial investment, and tremendous potential. It would seem that there are no areas that are beyond improvement by AI – no tasks that cannot be automated, no problems that can’t at least be helped by an AI application. But is this strictly true?

Theoretical studies of computation have shown there are some things that are not computable. Alan Turing, the brilliant mathematician and code breaker, proved that some computations might never finish (while others would take years or even centuries).

For example, we can easily compute a few moves ahead in a game of chess, but examining every possible line of play to the end of a typical 80-move game is completely impractical. Even using one of the world’s fastest supercomputers, running at over one hundred thousand trillion operations per second, it would take over a year to explore even a tiny portion of the chess game tree. This is known as the scaling-up problem.
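
To get a feel for the numbers, here is a minimal back-of-the-envelope sketch in Python. The branching factor of roughly 35 legal moves per position, and the assumption that the machine can evaluate one position per operation, are illustrative figures added here rather than claims from the article:

```python
import math

# Rough illustration of the "scaling-up problem" for chess.
# Assumptions (illustrative only): ~35 legal moves per position, and one
# machine operation spent per position examined.
BRANCHING_FACTOR = 35        # assumed average number of legal moves per position
PLIES = 80 * 2               # an 80-move game is 160 half-moves (plies)
OPS_PER_SECOND = 1e17        # "over one hundred thousand trillion operations per second"
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

log10_positions = PLIES * math.log10(BRANCHING_FACTOR)
log10_years = log10_positions - math.log10(OPS_PER_SECOND * SECONDS_PER_YEAR)

print(f"Game tree size: roughly 10^{log10_positions:.0f} positions")
print(f"Time to enumerate it: roughly 10^{log10_years:.0f} years")
```

Even with generous assumptions, the answer comes out at well over 10^200 years of computing time, which is why exhaustive search simply does not scale.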

Early AI research often produced good results on problems with small numbers of combinations (like noughts and crosses, known as toy problems) but would not scale up to larger ones like chess (real-life problems). Fortunately, modern AI has developed alternative ways of dealing with such problems. These methods can beat the world’s best human players, not by looking at all possible moves ahead, but by looking a lot further ahead than the human mind can manage. They do this by using approximations, probability estimates, large neural networks and other machine-learning techniques.
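
The underlying idea can be sketched in a few lines of Python: search only a fixed number of moves ahead, then fall back on a heuristic estimate of how good a position is, instead of playing every line out to the end. The toy game and the crude heuristic below are invented purely for illustration; real game-playing systems combine this kind of depth-limited search with learned evaluation functions and techniques such as Monte Carlo tree search:

```python
# A minimal sketch of depth-limited search with a heuristic evaluation function:
# the player to move searches only `depth` plies ahead, then trusts an estimate.
# The toy game and heuristic here are invented for illustration only.

def negamax(state, depth, moves, apply_move, evaluate, is_terminal):
    """Best achievable score for the player to move, looking `depth` plies ahead."""
    if depth == 0 or is_terminal(state):
        return evaluate(state)
    return max(
        -negamax(apply_move(state, m), depth - 1,
                 moves, apply_move, evaluate, is_terminal)
        for m in moves(state)
    )

# Toy game: a pile of stones; each turn a player removes 1 or 2 stones, and
# whoever takes the last stone wins. A position is just the number of stones left.
moves = lambda pile: [1, 2] if pile >= 2 else [1]
apply_move = lambda pile, m: pile - m
is_terminal = lambda pile: pile == 0
# Crude heuristic from the mover's perspective: a terminal pile means the
# opponent just took the last stone (a loss); otherwise score by pile size mod 3.
evaluate = lambda pile: -1.0 if pile == 0 else (pile % 3) / 3.0

print(negamax(21, 6, moves, apply_move, evaluate, is_terminal))
```

The depth limit is what keeps the computation tractable; the quality of play then depends on how good the evaluation function is, which is exactly where large neural networks and other machine-learning techniques come in.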

But these are really problems of computer science, not artificial intelligence. Are there any fundamental limitations on AI performing intelligently? A serious issue becomes clear when we consider human-computer interaction. It is widely expected that future AI systems will communicate with and assist humans in friendly, fully interactive, social exchanges.

Theory of mind

Of course, we already have primitive versions of such systems. But audio-command systems and call-centre-style script-processing just pretend to be conversations. What is needed are proper social interactions, involving free-flowing conversations over the long term during which AI systems remember the person and their past conversations. AI will have to understand intentions and beliefs and the meaning of what people are saying.

This requires what is known in psychology as a theory of mind – an understanding that the person you are engaged with has a way of thinking, and roughly sees the world in the same way as you do. So when someone talks about their experiences, you can identify and appreciate what they describe and how it relates to yourself, giving meaning to their comments.

We also observe the person’s actions and infer their intentions and preferences from gestures and signals. So when Sally says, “I think that John likes Zoe but thinks that Zoe finds him unsuitable”, we know that Sally has a first-order model of herself (her own thoughts), a second-order model of John’s thoughts, and a third-order model of what John thinks Zoe thinks. Notice that we need to have similar experiences of life to understand this.
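
For readers who think in code, the nesting of these models can be made concrete as a data structure. The sketch below is purely illustrative and simply encodes the Sally/John/Zoe sentence above; none of the names or fields come from an actual system:

```python
# Sally's nested model of the situation described above. Each extra level of
# nesting is one more "order" of theory of mind. Purely illustrative.
sally_model = {
    "own_thoughts": "I think John likes Zoe",   # first-order: Sally's own view
    "john": {                                   # second-order: what Sally thinks John thinks
        "likes": "zoe",
        "zoe": {                                # third-order: what Sally thinks John thinks Zoe thinks
            "finds_john_suitable": False,
        },
    },
}

# Reading the sentence back out of the structure:
print(sally_model["john"]["likes"])                       # zoe
print(sally_model["john"]["zoe"]["finds_john_suitable"])  # False
```

Each additional level of nesting corresponds to one more order of theory of mind.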

Physical learning

It is clear that all this social interaction only makes sense to the parties involved if they have a “sense of self” and can similarly maintain a model of the self of the other agent. In order to understand someone else, it is necessary to know oneself. An AI “self model” should include a subjective perspective, involving how its body operates (for example, its visual viewpoint depends upon the physical location of its eyes), a detailed map of its own space, and a repertoire of well-understood skills and actions.

That means a physical body is required in order to ground the sense of self in concrete data and experience. When an action by one agent is observed by another, it can be mutually understood through the shared components of experience. This means social AI will need to be realized in robots with bodies. How could a software box have a subjective viewpoint of, and in, the physical world, the world that humans inhabit? Our conversational systems must be not just embedded but embodied.

A designer can’t effectively build a software sense-of-self for a robot. If a subjective viewpoint were designed in from the outset, it would be the designer’s own viewpoint, and it would also need to learn and cope with experiences unknown to the designer. So what we need to design is a framework that supports the learning of a subjective viewpoint.

Fortunately, there is a way out of these difficulties. Humans face exactly the same problems but they don’t solve them all at once. The first years of infancy display incredible developmental progress, during which we learn how to control our bodies and how to perceive and experience objects, agents and environments. We also learn how to act and the consequences of acts and interactions.

Research in the new field of developmental robotics is now exploring how robots can learn from scratch, like infants. The first stages involve discovering the properties of passive objects and the “physics” of the robot’s world. Later on, robots note and copy interactions with agents (carers), followed by gradually more complex modeling of the self in context. In my new book, I explore the experiments in this field.

So while disembodied AI definitely has a fundamental limitation, future research with robot bodies may one day help create lasting, empathetic, social interactions between AI and humans.

This article is republished from The Conversation by Mark Lee, Emeritus Professor in Computer Science, Aberystwyth University, under a Creative Commons license. Read the original article.

Scientist says viruses may be the key to colonizing other planets

If you let NASA tell the story, some of the people walking around on this planet right now may end up taking a stroll on the surface of Mars during their lifetimes.

People such as Elon Musk believe we’ll colonize the red planet entirely and become a two-planet species. And this could be one of humankind’s most important endeavors – after all, who knows when another asteroid the size of the one that may have taken out the dinosaurs will hit us again.

It could also be far more complex than NASA or SpaceX has actually considered.

We’re going to need a way to grow food, store water, and produce breathable air in order for humans to survive on Mars.

But colonizing a harsh world is about more than just not dying. In order for humankind to grow and prosper on Mars as we have on Earth, we’re going to need good old-fashioned Earth viruses. And lots of ’em.

That’s according to the director of Arizona State University’s Beyond Center for Fundamental Concepts in Science, Professor Paul Davies.

Davies recently discussed the importance of viruses in an interview published in The Guardian.

As Davies and myriad other scientists suspect, it’s possible that viruses are not just part of Earth’s biome but an essential component of evolution.

This is because of a fascinating aspect of evolutionary growth called “horizontal gene transfer.”

During horizontal gene transfer, a species is believed to acquire certain traits through exposure to viruses rather than through the traditional genetic route. According to Davies, some scientists believe most of the human genome is derived from viral sources.

In other words: humans are still evolving. It’s possible our further evolution will require access to viruses that modify our genome over vast periods of time.

If we were to successfully colonize Mars (which would involve solving innumerable problems of its own), the people who lived, procreated, and died there could eventually diverge from the rest of the human race.

It’s conceivable that, after a certain number of generations of Martian colonists have been born, the human race could split into an Earth species and a Mars one solely based on exposure to viruses.

These are probably far-future problems, but the speed at which politicians and private-sector companies are pushing toward crewed missions to Mars with the express purpose of building a colony is alarming.

It’s impossible to know the ramifications of colonizing a planet without our Earthbound viruses – most of which are actually good; they’re not all COVID-19.

And it’s also impossible to know the ramifications of intentionally transporting and unleashing our planet’s diseases on the rest of the cosmos.

The good news, according to Davies, is that any aliens out there almost certainly have their own biomes and viruses that sustain their life. And, typically speaking, a virus is only harmful to the host it evolved to attack.

So, space viruses probably aren’t harmful to humans. But what happens when an alien virus and an Earth virus start mixing things up? And what happens to humanity when we leave our Earth viruses behind?

It’s obvious that there’s more to colonizing another world than just hauling supplies and figuring out how future generations can eventually terraform a barren wasteland. Here’s hoping the people authorizing these projects are listening to more than just billionaires and engineers.

5 real AI threats that make The Terminator look like Kindergarten Cop

It. Never. Fails. Every time an AI article finds its way to social media, there are hundreds of people invoking the terrifying specter of “SKYNET.”

SKYNET is a fictional artificial general intelligence that’s responsible for the creation of the killer robots from the Terminator film franchise. It was a scary vision of AI’s future until deep learning came along and big tech decided to take off its metaphorical belt and really give us something to cry about.

At least the people fighting the robots in The Terminator film franchise get to face a villain they can see and shoot at. In real life, you can’t punch an algorithm.

And that makes it difficult to explain why, based on what’s happening now, the real future might be even scarier than the one from those killer robot movies.

Luckily, we have experts such as Kai-Fu Lee and Chen Qiufan, whose new book, AI 2041: Ten Visions for Our Future, takes a stab at predicting what the machines will do over the next two decades. And, based on this interview, there’s some scary shit headed our way.

According to Lee and Qiufan, the biggest threats humans face when it comes to AI involve its influence, its lack of accountability or explainability, its inherent and explicit bias, its use as a bludgeon against privacy, and, yes, killer robots – but not the kind you’re thinking of.

The Facebooks

If we’re going to prioritize a list of existential threats to the human race, we should probably start with the worst of them all: social media.

Facebook’s very existence is a danger to humanity. It represents a business entity with more power than the governing body of the nation in which it’s incorporated.

The US government has taken no meaningful steps to regulate Facebook’s use of AI. And, for that reason, billions of humans across the planet are exposed to demonstrably harmful recommendation algorithms every day.

Facebook’s AI has more influence over humankind than any other force in history. The social network has more monthly active users than Christianity.

Given the hundreds of thousands of studies warning us about the real harms, it would be shortsighted to think decades of exposure to social networks won’t have a major impact on our species.

Whether in 10, 20, or 50 years, the evidence seems to indicate we’ll live to regret turning our attention spans over to a mathematical entity that’s dumber than a snail.

The Amazons

The next threat on our tour-de-AI-horrors is the fascinating world of anti-privacy technology and the nightmare dystopia we’re headed for as a species.

Amazon’s Ring is the perfect reminder that, for whatever reason, humankind is deeply invested in shooting itself in the foot at every possible opportunity.

If there’s one thing almost every free nation on the planet agrees on, it’s that human beings deserve a modicum of privacy.

Ring doorbell cameras destroy that privacy and effectively give both the government and a trillion-dollar corporation a neighbor’s-eye view of everything that’s happening in every neighborhood around the country.

The only thing stopping Amazon or the US government from exploiting the data in the buckets where all that Ring video footage is stored is their word.

If it ever becomes lucrative to use or sell our data, or if a political shift gives the US government powers to invade our privacy that it didn’t previously have, our data will no longer be safe.

But it’s not just Amazon. Our cars will soon be equipped with cloud-connected cameras that purportedly watch drivers for safety reasons. We already have active microphones in all of our smart devices, listening.

And we’re on the very cusp of mainstreaming brain-computer interfaces. The path to wearables that send data directly from your brain to big tech’s servers is paved with good intentions and horrible AI.

The next generation of surveillance tech, wearables, and AI companions might eradicate the idea of personal privacy altogether.

The Googles

The difference between being the first result of a Google search and ending up at the bottom of the page can cost businesses millions of dollars. Search engines and social media feed aggregators can kill a business or sink a news story.

And nobody voted to give Google or any other company’s search algorithms that kind of power; it just happened.

Now, Google’s bias is our bias. Amazon’s bias determines which products we buy. Microsoft’s and Apple’s biases determine what news we read.

Our doctors, politicians, judges, and teachers use Google, Apple, and Microsoft search engines to conduct personal and professional business. And the inherent biases of each product dictate what they do and do not see.

Social media feeds often determine not just which news articles we read, but which news publishers we’re exposed to. Almost every facet of modern life is now mediated in some way by algorithmic bias.

In another 20 years, information could become so stratified that “alternative facts” no longer refers to claims that diverge from reality, but to those that don’t reflect the collective truth our algorithms have decided on for us.

Blaming the algorithms

AI doesn’t have to actually do anything to harm humans. All it has to do is exist and continue to be confusing to the mainstream. As long as developers can get away with passing off black box AI as a way to automate human decision-making, bigotry and discrimination will have a home in which to thrive.

There are certain situations where we don’t need AI to explain itself. But when an AI is tasked with making a subjective decision, especially one that affects humans, it’s important we be able to know why it makes the choices it does.

It’s a big problem when, for example, YouTube’s algorithm surfaces adult content to children’s accounts and the developers responsible for creating and maintaining those algorithms have no clue why it happens.

But what if there isn’t a better way to use black box AI? We’ve painted ourselves into a corner – almost every public-facing big tech enterprise is powered by black box AI, and almost all of it is harmful. But getting rid of it may prove even harder than extricating humanity from its dependence on fossil fuels – and for the same reasons.

In the next 20 years, we can expect the lack of explainability intrinsic to black box AI to lie at the center of any number of potential catastrophes involving artificial intelligence and loss of human life.

Assassinations

The final and perhaps least dangerous (but most obvious) threat to our species as a whole is that of killer drones. Note: that’s not the same thing as killer robots.

There’s a reason why even the US military, with its vast budget, doesn’t have killer robots. And it’s because they’re pointless when you can just automate a tank or mount a rifle on a drone.

The real killer robot threat is that of terrorists gaining access to simple algorithms, simple drones, simple guns, and advanced drone-swarm control technology.

Perhaps the best perspective comes from Lee, who discussed these threats in a recent interview with Andy Serwer.
