How can we make sure everyone benefits from the next quantum revolution?

Over the past six years, quantum science has noticeably shifted from the domain of physicists concerned with learning about the universe on extremely small scales to a source of new technologies we all might use for practical purposes. These technologies make use of quantum properties of single atoms or particles of light. They include sensors, communication networks, and computers.

Quantum technologies are expected to impact many aspects of our society, including health care, financial services, defence, weather modelling, and cyber security. Clearly, they promise exciting benefits. Yet the history of technology development shows we cannot simply assume new tools and systems will automatically be in the public interest.

We must look ahead to what a quantum society might entail and how the quantum design choices made today might impact how we live in the near future. The deployment of artificial intelligence and machine learning over the past few years provides a compelling example of why this is necessary.

Let’s consider an example. Quantum computers are perhaps the best-known quantum technology, with companies like Google and IBM competing to achieve practical quantum computation. The advantage of quantum computers lies in their ability to tackle incredibly complex tasks that would take a normal computer millions of years. One such task is simulating the behaviour of molecules to improve predictions about the properties of prospective new drugs and accelerate their development.
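To get a sense of why such simulation overwhelms ordinary computers: fully describing a quantum system of n interacting two-level parts means tracking 2^n complex numbers, so memory demands grow exponentially with system size. Here is a minimal back-of-envelope sketch (the 16-bytes-per-amplitude figure assumes double-precision complex numbers; all figures are illustrative):

```python
# Why exact classical simulation of quantum systems blows up:
# an n-qubit state vector holds 2**n complex amplitudes.
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number

for n_qubits in (10, 30, 50, 100):
    amplitudes = 2 ** n_qubits
    size_gb = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n_qubits:>3} qubits -> {amplitudes:.2e} amplitudes, {size_gb:.2e} GB")
```

By around 50 qubits the state vector no longer fits in the memory of any existing supercomputer, which is why even modestly sized molecules defeat exact classical simulation while remaining a natural fit for quantum hardware.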

One conundrum posed by quantum computing is the sheer expense of investing in the physical infrastructure of the technology. This means ownership will likely be concentrated among the wealthiest countries and corporations. In turn, this could worsen the uneven distribution of power that technology already enables.

Other considerations for this particular type of quantum technology include concerns about reduced online privacy, since sufficiently powerful quantum computers are expected to be able to break the encryption schemes that currently protect our data.

How do we stop ourselves blundering into a quantum age without due forethought? How do we tackle the societal problems posed by quantum technologies, while nations and companies race to develop them?

Charting a path

Last year, CSIRO released a roadmap that included a call for quantum stakeholders to explore and address social risks. One example of how we might proceed has begun at the World Economic Forum (WEF). The WEF is convening experts from industry, policy-making, and research to promote safe and secure quantum technologies by establishing an agreed set of ethical principles for quantum computing.

Australia should draw on such initiatives to ensure the quantum technologies we develop work for the public good. We need to diversify the people involved in quantum technologies — in terms of the types of expertise employed and the social contexts we work from — so we don’t reproduce and amplify existing problems or create new ones.

While we work to shape the impacts of individual quantum technologies, we should also review the language used to describe this “second quantum revolution”.

The rationale most commonly used to advocate for the field narrowly imagines the public benefit of quantum technologies in terms of economic gain and competition between nations and corporations. But framing this as a “race” to develop quantum technologies means prioritising urgency, commercial interests and national security at the expense of more civic-minded concerns.

It’s still early enough to do something about the challenges posed by quantum technologies. It’s also not all doom and gloom, with a variety of initiatives and national research and development policies setting out to tackle these problems before they are set in stone.

We need discussions involving a cross-section of society about the potential impacts of quantum technologies. This process should clarify societal expectations for the emerging quantum technology sector and inform any national quantum initiative in Australia.

Article by Tara Roberson, Postdoctoral Research Fellow, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Dear Meta CTO: Yes, people are awful, but your algorithms make them worse

Meta’s incoming CTO, Andrew “Boz” Bosworth, is making quite the splash. The kind of splash you make by cannonballing into your swimming pool, soaking all your guests, and then blaming them for getting wet.

In a Sunday interview with Axios on HBO, Bosworth was grilled about misinformation on social media.

The Facebook veteran mounted a stern defense of his company. According to Bosworth, it’s not platforms that are responsible for misinformation — it’s their users.

Bosworth does have a point: people are awful and stupid. We’re drawn to divisive content, susceptible to bullshit, and prone to confirmation bias. Yet Bosworth overlooks how algorithms influence these tendencies.

Facebook’s recommendation systems are frequently accused of spreading misinformation to maximize profit. Critics say the company eschews efforts to address this as doing so would limit growth.

As the creator of the News Feed, Bosworth knows Facebook’s algorithms better than most. However, he argued that users are responsible for what they consume.

This defense of free expression simplifies Meta’s influence. The company doesn’t only choose what appears on Facebook; it also determines what the platform promotes.

Meta delegates many of these decisions to recommendation algorithms, which have shown a penchant for false and divisive content.

Frances Haugen, the Facebook whistleblower, has endorsed an alternative approach. She wants the company to ditch engagement-based rankings for chronological feeds.
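To make the contrast concrete, here is a minimal sketch of the two ranking strategies, assuming a simplified post structure. The fields and engagement weights are invented for illustration; Meta’s actual ranking formula is proprietary and far more elaborate.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: comments and shares count for more than
    # likes, since they keep users interacting with the platform.
    return post.likes + 3 * post.comments + 5 * post.shares

def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
    # Surface whatever provokes the most interaction, regardless of age.
    return sorted(posts, key=engagement_score, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Haugen's alternative: newest first, with no engagement signal at all.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```

The difference is a single sort key, but it encodes an editorial choice: the first feed amplifies whatever provokes interaction; the second makes no judgment at all.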

The issue with this approach is obvious: deprioritizing engagement could reduce revenues. However, there are signs that the switch is possible. Last week, Meta’s Instagram announced plans to launch a chronological feed next year.

The new rankings won’t become the default, but the move suggests that further changes could come.

They’re certainly worth consideration. While we’re ultimately responsible for what we consume, Meta doesn’t have to amplify our worst instincts.

Can we be friends with robots? Research says yes

In the 2012 film “Robot and Frank,” the protagonist, a retired cat burglar named Frank, is suffering the early symptoms of dementia. Concerned and guilty, his son buys him a “home robot” that can talk, do household chores like cooking and cleaning, and remind Frank to take his medicine. It’s the kind of robot we’re getting closer to building in the real world.

The film follows Frank, who is initially appalled by the idea of living with a robot, as he gradually begins to see the robot as both functionally useful and socially companionable. The film ends with a clear bond between man and machine, such that Frank is protective of the robot when the pair of them run into trouble.

This is, of course, a fictional story, but it challenges us to explore different kinds of human-to-robot bonds. My recent research on human-robot relationships examines this topic in detail, looking beyond sex robots and robot love affairs to examine the most profound and meaningful of relationships: friendship.

My colleague and I identified some potential risks – like the abandonment of human friends for robotic ones – but we also found several scenarios where robotic companionship can constructively augment people’s lives, leading to friendships that are directly comparable to human-to-human relationships.

Philosophy of friendship

The robotics philosopher John Danaher sets a very high bar for what friendship means. His starting point is the “true” friendship first described by the Greek philosopher Aristotle, who saw an ideal friendship as premised on mutual goodwill, admiration, and shared values. In these terms, friendship is about a partnership of equals.

Building a robot that can satisfy Aristotle’s criteria is a substantial technical challenge and is some considerable way off – as Danaher himself admits. Robots that may seem to be getting close, such as Hanson Robotics’ Sophia, base their behavior on a library of pre-prepared responses: a humanoid chatbot, rather than a conversational equal. Anyone who’s had a testing back-and-forth with Alexa or Siri will know AI still has some way to go in this regard.

Video: the humanoid robot Sophia, developed by Hong Kong-based Hanson Robotics.
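To illustrate the gap Danaher points to, here is a toy sketch of a chatbot driven by a library of pre-prepared responses. The phrases are invented for illustration and have nothing to do with Sophia’s actual software; the point is that lookup is not understanding.

```python
# A toy scripted chatbot: each input is matched against a fixed library
# of canned responses. Nothing here models goodwill, memory, or shared
# values: the ingredients of Aristotle's "true" friendship.
RESPONSES = {
    "how are you": "I'm functioning well, thank you for asking!",
    "what is your name": "My name is Robo. Lovely to meet you.",
}
DEFAULT = "That's interesting. Tell me more."

def reply(user_input: str) -> str:
    key = user_input.lower().strip("?!. ")
    return RESPONSES.get(key, DEFAULT)  # unknown input gets a canned deflection

print(reply("How are you?"))                  # scripted match
print(reply("Do you value our friendship?"))  # canned deflection
```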

Aristotle also talked about other forms of “imperfect” friendship – such as “utilitarian” and “pleasure” friendships – which are considered inferior to true friendship because they don’t require symmetrical bonding and are often to one party’s unequal benefit. This form of friendship sets a very low bar, which some robots – like “sexbots” and robotic pets – clearly already meet.

Artificial amigos

For some, relating to robots is just a natural extension of relating to other things in our world – like people, pets, and possessions. Psychologists have even observed how people respond naturally and socially towards media artefacts like computers and televisions. Humanoid robots, you’d have thought, are more personable than your home PC.

However, the field of “robot ethics” is far from unanimous on whether we can – or should – develop any form of friendship with robots. For an influential group of UK researchers who charted a set of “ethical principles of robotics,” human-robot “companionship” is an oxymoron, and to market robots as having social capabilities is dishonest and should be treated with caution – if not alarm. For these researchers, wasting emotional energy on entities that can only simulate emotions will always be less rewarding than forming human-to-human bonds.

But people are already developing bonds with basic robots – like vacuum-cleaning and lawn-trimming machines that can be bought for less than the price of a dishwasher. A surprisingly large number of people give these robots pet names – something they don’t do with their dishwashers. Some even take their cleaning robots on holiday.

Other evidence of emotional bonds with robots includes the Shinto blessing ceremony for Sony Aibo robot dogs that were dismantled for spare parts, and the squad of US troops who fired a 21-gun salute and awarded medals to a bomb-disposal robot named “Boomer” after it was destroyed in action.

These stories, and the psychological evidence we have so far, make clear that we can extend emotional connections to things that are very different to us, even when we know they are manufactured and pre-programmed. But do those connections constitute a friendship comparable to that shared between humans?

True friendship?

A colleague and I recently reviewed the extensive literature on human-to-human relationships to try to understand how, and if, the concepts we found could apply to bonds we might form with robots. We found evidence that many coveted human-to-human friendships do not in fact live up to Aristotle’s ideal.

We noted a wide range of human-to-human relationships, from relatives and lovers to parents, carers, service providers, and the intense (but unfortunately one-way) relationships we maintain with our celebrity heroes. Few of these relationships could be described as completely equal and, crucially, they are all destined to evolve over time.

All this means that expecting robots to form Aristotelian bonds with us is to set a standard even human relationships fail to live up to. We also observed forms of social connectedness that are rewarding and satisfying and yet are far from the ideal friendship outlined by the Greek philosopher.

We know that social interaction is rewarding in its own right and something that, as social mammals, humans have a strong need for. It seems probable that relationships with robots could help to address the deep-seated urge we all feel for social connection, by providing the physical comfort, emotional support, and enjoyable social exchanges currently provided by other humans.

Our paper also discussed some potential risks. These arise particularly in settings where interaction with a robot could come to replace interaction with people, or where people are denied a choice as to whether they interact with a person or a robot – in a care setting, for instance.

These are important concerns, but they’re possibilities and not inevitabilities. In the literature we reviewed we actually found evidence of the opposite effect: robots acting to scaffold social interactions with others, serving as ice-breakers in groups, and helping people to improve their social skills or to boost their self-esteem.

It appears likely that, as time progresses, many of us will simply follow Frank’s path towards acceptance: scoffing at first, before settling into the idea that robots can make surprisingly good companions. Our research suggests that’s already happening – though perhaps not in a way in which Aristotle would have approved.

This article by Tony Prescott, Professor of Cognitive Neuroscience and Director of the Sheffield Robotics Institute, University of Sheffield, is republished from The Conversation under a Creative Commons license. Read the original article.
