AI can now farm crickets (and hopefully solve world hunger in the process)

Earth’s expanding population and unequal distribution of natural resources are pushing the planet towards a food insecurity crisis.

One solution to the problem is adding an unusual ingredient to our diets: crickets.

The insects have a high protein content and a low environmental footprint, which could make them a sustainable alternative to meat and fish.

They might not have the most appetizing appearance, but looks can be deceiving: crickets are renowned for their subtle nutty flavor, crunchy texture, and exquisite astringency.

At least, that’s what I’ve been told. My religious beliefs sadly forbid me from indulging in the delicacy — but that doesn’t mean you have to miss out. And thanks to AI, the chirpy critters could be arriving on your plate sooner than you think.

A team led by the Aspire Food Group plans to bring the creatures from farm to fork by building the world’s first fully automated insect manufacturing site.

The crickets will then be turned into food products, including protein powder and bars.

The project marks the first time that industrial automation, IoT, robotics, and AI will be deployed in climate-controlled, indoor vertical agriculture with living organisms.

Inside the facility, cricket production bins equipped with custom sensors will provide an overall picture of the plant’s health at any time.

Deep learning models developed by Canadian startup DarwinAI will then analyze the data to unearth insights that can improve efficiency.

This creates a feedback loop that allows plant conditions to be adjusted in real time as circumstances change.
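
To make the idea concrete, here is a minimal sketch of what such a sensor-driven control loop could look like. Everything in it is an assumption for illustration: the setpoints, the simulated sensor readings, and the bin count are invented, not details of Aspire’s or DarwinAI’s actual system.

    # Illustrative sketch only: setpoints, sensor simulation, and bin count
    # are assumptions, not Aspire's or DarwinAI's real system.
    import random
    import time
    from dataclasses import dataclass

    TARGET_TEMP_C = 30.0   # assumed rearing-temperature setpoint
    TARGET_RH = 0.70       # assumed relative-humidity setpoint

    @dataclass
    class Reading:
        temp_c: float
        rh: float

    def read_bin_sensors(bin_id: int) -> Reading:
        # Stand-in for the custom IoT sensors on each production bin
        return Reading(temp_c=random.uniform(26, 34), rh=random.uniform(0.5, 0.9))

    def adjust(bin_id: int, r: Reading) -> None:
        # Close the loop: nudge climate controls back toward the setpoints
        if r.temp_c < TARGET_TEMP_C:
            print(f"bin {bin_id}: raising heat ({r.temp_c:.1f} C)")
        if r.rh > TARGET_RH:
            print(f"bin {bin_id}: venting ({r.rh:.0%} RH)")

    for _ in range(3):  # a few control cycles for demonstration
        for bin_id in range(4):
            adjust(bin_id, read_bin_sensors(bin_id))
        time.sleep(1)

In a production system, the deep learning model would replace these fixed thresholds, spotting patterns in sensor and video data that simple setpoints would miss.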

DarwinAI CEO Sheldon Fernandez told TNW that the AI will analyze a range of data, including videos of the crickets, to detect biological changes.

The project has received funding from Next Generation Manufacturing Canada (NGen), an industry-led organization that’s supported by the Canadian government.

NGen Canada CEO Jayson Myers told TNW that AI will play a critical role at the facility.

The plant will begin operations in the first quarter of 2022. It ultimately aims to produce nearly 20,000 metric tonnes of products annually.

The facility is located in London, Ontario, but the modular design and global distribution of crickets mean the tech can potentially be deployed anywhere. It might not be long before you find yourself chowing down on the crunchy creatures. Grub’s up!

Worried about AI ethics? Worry about developers’ ethics first

Artificial intelligence is already making decisions in the fields of business, health care and manufacturing. But AI algorithms generally still get help from people applying checks and making the final call.

What would happen if AI systems had to make independent decisions, and ones that could mean life or death for humans?

Pop culture has long portrayed our general distrust of AI. In the 2004 sci-fi movie I, Robot, detective Del Spooner (played by Will Smith) is suspicious of robots after being rescued by one from a car crash while a 12-year-old girl was left to drown.

Unlike humans, robots lack a moral conscience and follow the “ethics” programmed into them. At the same time, human morality is highly variable. The “right” thing to do in any situation will depend on who you ask.

For machines to help us to their full potential, we need to make sure they behave ethically. So the question becomes: how do the ethics of AI developers and engineers influence the decisions made by AI?

The self-driving future

Imagine a future with self-driving cars that are fully autonomous. If everything works as intended, the morning commute will be an opportunity to prepare for the day’s meetings, catch up on news, or sit back and relax.

But what if things go wrong? The car approaches a traffic light, but suddenly the brakes fail and the computer has to make a split-second decision. It can swerve into a nearby pole and kill the passenger, or keep going and kill the pedestrian ahead.

The computer controlling the car will only have access to limited information collected through car sensors, and will have to make a decision based on this. As dramatic as this may seem, we’re only a few years away from potentially facing such dilemmas.

Autonomous cars will generally provide safer driving, but accidents will be inevitable – especially in the foreseeable future, when these cars will be sharing the roads with human drivers and other road users.

Tesla does not yet produce fully autonomous cars, although it plans to. In collision situations, Tesla cars don’t automatically activate or deactivate the Automatic Emergency Braking (AEB) system if a human driver is in control.

In other words, the driver’s actions are not disrupted – even if they themselves are causing the collision. Instead, if the car detects a potential collision, it sends alerts to the driver to take action.

In “autopilot” mode, however, the car should automatically brake for pedestrians. Some argue that if the car can prevent a collision, then it has a moral obligation to override the driver’s actions in every scenario. But would we want an autonomous car to make this decision?

What’s a life worth?

What if a car’s computer could evaluate the relative “value” of the passenger in its car and of the pedestrian? If its decision considered this value, technically it would just be making a cost-benefit analysis.
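
To see why that prospect unsettles ethicists, consider a deliberately simplistic, hypothetical sketch (the function and its inputs are invented for illustration): the entire moral question collapses into a single comparison, and all the contested judgments hide inside whatever numbers get assigned to each life.

    # Hypothetical illustration only: the hard ethical question is buried in
    # how the two "cost" numbers are assigned in the first place.
    def choose_action(cost_of_swerving: float, cost_of_continuing: float) -> str:
        return "swerve" if cost_of_swerving < cost_of_continuing else "continue"

    print(choose_action(1.0, 2.0))  # -> "swerve"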

This may sound alarming, but there are already technologies being developed that could allow it to happen. For instance, the recently rebranded Meta (formerly Facebook) has highly advanced facial recognition technology that can easily identify individuals in a scene.

If these data were incorporated into an autonomous vehicle’s AI system, the algorithm could place a dollar value on each life. This possibility was explored in an extensive 2018 study conducted by experts at the Massachusetts Institute of Technology and colleagues.

Through the Moral Machine experiment, researchers posed various self-driving car scenarios that compelled participants to decide whether to kill a homeless pedestrian or an executive pedestrian.

Results revealed that participants’ choices depended on the level of economic inequality in their country: the greater the inequality, the more likely participants were to sacrifice the homeless pedestrian.

While not quite as evolved, such data aggregation is already in use with China’s social credit system, which decides what social entitlements people have.

The health-care industry is another area where we will see AI making decisions that could save or harm humans. Experts are increasingly developing AI to spot anomalies in medical imaging, and to help physicians prioritize medical care.

For now, doctors have the final say, but as these technologies become increasingly advanced, what will happen when a doctor and AI algorithm don’t make the same diagnosis?

Another example is an automated medicine reminder system. How should the system react if a patient refuses to take their medication? And how does that affect the patient’s autonomy, and the overall accountability of the system?

AI-powered drones and weaponry are also ethically concerning, as they can make the decision to kill. There are conflicting views on whether such technologies should be completely banned or regulated. For example, the use of autonomous drones could be limited to surveillance.

Some have called for military robots to be programmed with ethics. But this raises issues about the programmer’s accountability in the case where a drone kills civilians by mistake.

Philosophical dilemmas

There have been many philosophical debates regarding the ethical decisions AI will have to make. The classic example of this is the trolley problem.

People often struggle to make decisions that could have a life-changing outcome. When evaluating how we react to such situations, one study reported choices can vary depending on a range of factors including the respondent’s age, gender and culture.

When it comes to AI systems, an algorithm’s training process is critical to how it will work in the real world. A system developed in one country can be influenced by the views, politics, ethics and morals of that country, making it unsuitable for use in another place and time.

If the system was controlling aircraft, or guiding a missile, you’d want a high level of confidence it was trained with data that’s representative of the environment it’s being used in.

Examples of failures and bias in technology implementation have included a racist soap dispenser and inappropriate automatic image labelling.

AI is not “good” or “evil”. The effects it has on people will depend on the ethics of its developers. So to make the most of it, we’ll need to reach a consensus on what we consider “ethical”.

While private companies, public organizations and research institutions have their own guidelines for ethical AI, the United Nations has recommended developing what it calls “a comprehensive global standard-setting instrument” to provide a global ethical AI framework – and ensure human rights are protected.

This article by Jumana Abu-Khalaf, Research Fellow in Computing and Security, Edith Cowan University, and Paul Haskell-Dowland, Professor of Cyber Security Practice, Edith Cowan University, is republished from The Conversation under a Creative Commons license. Read the original article.

The quantum tech arms race is bringing us better AI and unhackable comms

Quantum technology, which makes use of the surprising and often counterintuitive properties of the subatomic universe, is revolutionizing the way information is gathered, stored, shared, and analyzed.

The commercial and scientific potential of the quantum revolution is vast, but it is in national security that quantum technology is making the biggest waves. National governments are by far the heaviest investors in quantum research and development.

Quantum technology promises breakthroughs in weapons, communications, sensing, and computing technology that could change the world’s balance of military power. The potential for strategic advantage has spurred a major increase in funding and research and development in recent years.

The three key areas of quantum technology are computing, communications, and sensing. Particularly in the United States and China, all three are now seen as crucial parts of the struggle for economic and military supremacy.

The race is on

Developing quantum technology isn’t cheap. Only a small number of states have the organizational capacity and technological know-how to compete.

Russia, India, Japan, the European Union, and Australia have established significant quantum research and development programs. But China and the US hold a substantial lead in the new quantum race.

And the race is heating up. In 2015 the US was the world’s largest investor in quantum technology, having spent around US$500 million. By 2021 this investment had grown to almost US$2.1 billion.

However, Chinese investment in quantum technology in the same period expanded from US$300 million to an estimated US$13 billion.

The leaders of the two nations, Joe Biden and Xi Jinping, have both emphasized the importance of quantum technology as a critical national security tool in recent years.

The US federal government has established a “three pillars model” of quantum research, under which federal investment is split between civilian, defense, and intelligence agencies.

In China, information on quantum security programs is more opaque, but the People’s Liberation Army is known to be supporting quantum research through its own military science academies as well as extensive funding programs into the broader scientific community.

Artificial intelligence and machine learning

Advances in quantum computing could result in a leap in artificial intelligence and machine learning.

This could improve the performance of lethal autonomous weapons systems (which can select and engage targets without human oversight). It would also make it easier to analyze the large data sets used in defense intelligence and cyber security.

Improved machine learning may also confer a major advantage in carrying out (and defending against) cyber attacks on both civilian and military infrastructure.

The most powerful current quantum computer (as far as we know) is made by the US company IBM, which works closely with US defense and intelligence agencies.

Unhackable communication

Quantum communication systems can, in principle, be completely secure and unhackable. Quantum communication is also required for networking quantum computers, which is expected to enhance quantum computational power exponentially.
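
The canonical example of this security is quantum key distribution. Below is a toy classical simulation of the basis-sifting step of the BB84 protocol, assuming a noiseless channel and no eavesdropper; real BB84 relies on actual quantum states, which this sketch only mimics with random bits.

    # Toy simulation of BB84 basis sifting (no real quantum states involved).
    import random

    n = 16
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("+x") for _ in range(n)]  # rectilinear or diagonal
    bob_bases   = [random.choice("+x") for _ in range(n)]

    # When Bob measures in Alice's basis he recovers her bit exactly;
    # otherwise quantum mechanics gives him a random result.
    bob_bits = [bit if ab == bb else random.randint(0, 1)
                for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Sifting: bases are compared publicly; only matching positions are kept.
    alice_key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    bob_key   = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    assert alice_key == bob_key  # matching bases guarantee agreement
    print("sifted key:", alice_key)

An eavesdropper forced to measure in randomly guessed bases would corrupt roughly 25% of the sifted bits, so Alice and Bob can detect interception simply by comparing a sample of the key.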

China is the clear global leader here. A quantum communication network using ground and satellite connections already links Beijing, Shanghai, Jinan, and Hefei.

China’s prioritization of secure quantum communications is likely linked to revelations of US covert global surveillance operations. The US has been by far the most advanced and effective communications, surveillance, and intelligence power for the past 70 years – but that could change with a successful Chinese effort.

More powerful sensors

Quantum computing and communications hold out the promise of future advantage, but the quantum technology closest to military deployment today is quantum sensing.

New quantum sensing systems offer more sensitive detection and measurement of the physical environment. Existing stealth systems, including the latest generation of warplanes and ultra-quiet nuclear submarines, may no longer be so hard to spot.

Superconducting quantum interference devices (or SQUIDs), which can make extremely sensitive measurements of magnetic fields, are expected to make it easier to detect submarines underwater in the near future.

At present, undetectable submarines armed with nuclear missiles are regarded as an essential deterrent against nuclear war because they could survive an attack on their home country and retaliate against the attacker. Networks of more advanced SQUIDs could make these submarines more detectable (and vulnerable) in the future, upsetting the balance of nuclear deterrence and the logic of mutually assured destruction.
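
A rough back-of-envelope from standard magnetostatics (not from the article) shows what is at stake. A submarine’s magnetic signature falls off roughly like a dipole field:

    B(r) \approx \frac{\mu_0}{4\pi}\,\frac{2m}{r^3}
    \quad\Longrightarrow\quad
    r_{\max} \propto \left(\frac{m}{B_{\min}}\right)^{1/3}

where m is the submarine’s magnetic dipole moment and B_min is the weakest field a sensor can resolve. Because detection range grows only with the cube root of sensitivity, even a thousandfold more sensitive SQUID extends range only about tenfold, which is why networks of many sensors, rather than one better sensor, pose the real threat to stealth.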

New technologies, new arrangements

The US is integrating quantum cooperation agreements into existing alliances such as NATO, as well as into more recent strategic arrangements such as the Australia–UK–US AUKUS security pact and the Quadrilateral Security Dialogue (“the Quad”) between Australia, India, Japan, and the US.

China already cooperates with Russia in many areas of technology, and events may well propel closer quantum cooperation.

In the Cold War between the US and the USSR, nuclear weapons were the transformative technology. International standards and agreements were developed to regulate them and ensure some measure of safety and predictability.

In much the same way, new accords and arrangements will be needed as the quantum arms race heats up.

Article by Stuart Rollo, Postdoctoral Research Fellow, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article .
