Watch a drone refuel another aircraft in mid-air for the first time

An unmanned drone has refueled another aircraft during flight for the first time, a milestone that the US Navy says could help change future warfighting.

Boeing’s unmanned MQ-25 Stingray plugged a hose into a Navy F/A-18 Super Hornet and pumped 325 pounds of fuel into the fighter jet on June 4, the manufacturer announced on Monday.

The 4.5-hour test was conducted from the MidAmerica St. Louis Airport in Mascoutah, Illinois.

During the flight, the Super Hornet’s pilot flew as close as 20 feet behind the Stingray to verify the performance and stability of the operation. The drone then extended its hose, and the jet moved in to connect with the tube and get a sweet hit of fresh fuel.

You can watch a clip of the test flight at the top of this article. A longer video is available on Boeing’s website.

The MQ-25 will now undergo further testing before being shipped for deck handling trials aboard a Navy carrier later this year.

The Navy plans to make the Stingray the world’s first operational carrier-based unmanned aircraft. Ultimately, the tests could have an impact on future aerial warfare.

“Unmanned systems alongside our traditional combatant force provide additional capability and capacity to give our warfighters the advantage needed to fight, win, and deter potential aggressors,” said Captain Chad Reed, program manager for the Navy’s Unmanned Carrier Aviation program, during a press conference. “The MQ-25 is that first step towards a future where the carrier-based fleet is augmented by unmanned systems.”

Aerial refueling isn’t the only example of human-machine collaboration that the US military is exploring.

DARPA is also investigating how AI can support human fighter jet pilots. The Pentagon plans to test the algorithms in live dogfights in late 2023 and 2024.


Everything you need to know about artificial neural networks

Welcome to Neural Basics, a collection of guides and explainers to help demystify the world of artificial intelligence.

One of the most influential technologies of the past decade is artificial neural networks, the fundamental component of deep learning algorithms, the bleeding edge of artificial intelligence.

You can thank neural networks for many of the applications you use every day, such as Google’s translation service, Apple’s Face ID iPhone lock, and Amazon’s Alexa AI-powered assistant. Neural networks are also behind some of the important artificial intelligence breakthroughs in other fields, such as diagnosing skin and breast cancer, and giving eyes to self-driving cars.

The concept and science behind artificial neural networks have existed for many decades. But it has only been in the past few years that the promise of neural networks has turned into reality and helped the AI industry emerge from an extended winter.

While neural networks have helped AI take great leaps, they are also often misunderstood. Here’s everything you need to know about neural networks.

Similarities between artificial and biological neural networks

The original vision of the pioneers of artificial intelligence was to replicate the functions of the human brain, nature’s smartest and most complex known creation. That’s why the field has derived much of its nomenclature (including the term “artificial intelligence”) from the structure and functions of the human mind.

Artificial neural networks are inspired by their biological counterparts. Many of the brain’s functions remain a mystery, but what we do know is that biological neural networks enable the brain to process huge amounts of information in complicated ways.

The brain’s biological neural network consists of approximately 100 billion neurons, the basic processing unit of the brain. Neurons perform their functions through their massive connections to each other, called synapses. The human brain has approximately 100 trillion synapses, about 1,000 per neuron.

Every function of the brain involves electrical currents and chemical reactions running across a vast number of these neurons.

How artificial neural networks work

The core component of ANNs is the artificial neuron. Each neuron receives inputs from several other neurons, multiplies them by assigned weights, adds them, and passes the sum on to one or more downstream neurons. Some artificial neurons apply an activation function to the output before passing it to the next layer.
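
To make that concrete, here is a minimal sketch of a single artificial neuron in Python with NumPy; the input values, weights, and choice of a sigmoid activation are illustrative assumptions, not part of any particular library or the article itself.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: multiply inputs by their weights, sum them,
    then squash the result with a sigmoid activation function."""
    weighted_sum = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-weighted_sum))  # sigmoid activation

# Hypothetical inputs arriving from three upstream neurons
inputs = np.array([0.5, -1.2, 3.0])
weights = np.array([0.8, 0.1, -0.4])  # one learned weight per input
print(artificial_neuron(inputs, weights, bias=0.2))
```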

At its core, this might sound like a trivial math operation. But when you place hundreds, thousands, or millions of neurons in multiple layers and stack them on top of each other, you obtain an artificial neural network that can perform very complicated tasks, such as classifying images or recognizing speech.

Artificial neural networks are composed of an input layer, which receives data from outside sources (data files, images, hardware sensors, microphones…), one or more hidden layers that process the data, and an output layer that provides one or more data points based on the function of the network. For instance, a neural network that detects persons, cars, and animals will have an output layer with three nodes. A network that classifies bank transactions as safe or fraudulent will have a single output.
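
As an illustrative sketch rather than anything the article prescribes, here is what a forward pass through such an input–hidden–output structure could look like in plain Python with NumPy; the layer sizes and the three output classes are assumptions chosen to match the person/car/animal example above.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)           # common hidden-layer activation

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()                # turns raw scores into class probabilities

rng = np.random.default_rng(0)
# Hypothetical sizes: 4 input features, 8 hidden neurons, 3 output nodes
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # hidden layer -> output layer

def forward(x):
    hidden = relu(x @ W1 + b1)        # hidden layer processes the input data
    return softmax(hidden @ W2 + b2)  # one score per class: person, car, animal

print(forward(np.array([0.1, 0.7, -0.3, 1.5])))
```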

Training artificial neural networks

Artificial neural networks start by assigning random values to the weights of the connections between neurons. The key to an ANN performing its task correctly and accurately is adjusting these weights to the right numbers. But finding the right weights is not easy, especially when you’re dealing with multiple layers and thousands of neurons.

This calibration is done by “training” the network with annotated examples. For instance, if you want to train the image classifier mentioned above, you provide it with multiple photos, each labeled with its corresponding class (person, car or animal). As you provide it with more and more training examples, the neural network gradually adjusts its weights to map each input to the correct outputs.

Basically, what happens during training is that the network adjusts itself to glean specific patterns from the data. Again, in the case of an image classifier, when you train the AI model with quality examples, each layer detects a specific class of features. For instance, the first layer might detect horizontal and vertical edges, the next layers might detect corners and round shapes. Further down the network, deeper layers will start to pick out more advanced features such as faces and objects.

When you run a new image through a well-trained neural network, the adjusted weights of the neurons will extract the right features and accurately determine which output class the image belongs to.
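
The following toy training loop is a minimal sketch with made-up numbers, not the article’s own example: it shows the idea of starting from random weights and nudging them toward labeled examples with gradient descent, training a single neuron to separate “safe” from “fraudulent” transactions.

```python
import numpy as np

# Toy labeled dataset: two features per transaction, label 0 = safe, 1 = fraudulent
X = np.array([[0.2, 1.1], [1.5, 0.3], [0.1, 0.9], [1.8, 0.2]])
y = np.array([0, 1, 0, 1])

rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0        # start with random weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.1
for _ in range(1000):
    pred = sigmoid(X @ w + b)         # forward pass over the training examples
    error = pred - y                  # how far each prediction is from its label
    w -= learning_rate * (X.T @ error) / len(y)  # nudge weights toward the labels
    b -= learning_rate * error.mean()

print(sigmoid(X @ w + b).round(2))    # predictions now close to [0, 1, 0, 1]
```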

One of the challenges of training neural networks is finding the right amount and quality of training examples. Also, training large AI models requires vast amounts of computing resources. To overcome this challenge, many engineers use “transfer learning,” a training technique where you take a pre-trained model and fine-tune it with new, domain-specific examples. Transfer learning is especially efficient when there’s already an AI model that is close to your use case.
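
As a rough illustration of transfer learning, here is a sketch using PyTorch and torchvision (a library choice of mine, not the article’s): load a model pre-trained on a large generic dataset, freeze its layers, and replace only the final layer for your own classes.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on ImageNet, a large general-purpose image dataset
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers so their learned weights stay fixed
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for the new, domain-specific classes
num_classes = 3  # hypothetical: person, car, animal
model.fc = nn.Linear(model.fc.in_features, num_classes)

# During fine-tuning, only the new layer's weights are updated
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```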

Neural networks vs classical AI

Traditional, rule-based AI programs are based on the principles of classic software. Computer programs are designed to run operations on data stored in memory locations and save the results to a different memory location. The logic of the program is sequential, deterministic, and based on clearly defined rules. Operations are run by one or more central processors.

Neural networks, however, are neither sequential nor deterministic. Also, regardless of the underlying hardware, there’s no central processor controlling the logic. Instead, the logic is dispersed across thousands of smaller artificial neurons. ANNs don’t run instructions; they perform mathematical operations on their inputs. It is their collective operation that produces the behavior of the model.

Instead of representing knowledge through manually coded logic, neural networks encode their knowledge in the overall state of their weights and activations. Tesla AI chief Andrej Karpathy eloquently describes the software logic of neural networks in an excellent Medium post titled “Software 2.0”:

The “classical stack” of Software 1.0 is what we’re all familiar with — it is written in languages such as Python, C++, etc. It consists of explicit instructions to the computer written by a programmer. By writing each line of code, the programmer identifies a specific point in program space with some desirable behavior.

In contrast, Software 2.0 can be written in much more abstract, human unfriendly language, such as the weights of a neural network. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard (I tried).

Neural networks vs other machine learning techniques

Artificial neural networks are just one of several algorithms for performing machine learning, the branch of artificial intelligence that develops behavior based on experience. Many other machine learning techniques can find patterns in data and perform tasks such as classification and prediction. Some of these techniques include regression models, support vector machines (SVMs), k-nearest neighbors, and decision trees.
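
For a feel of how compact these classic techniques can be on small, structured data, here is a brief scikit-learn sketch; the dataset and the two models are illustrative choices, not the article’s.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# A small, structured dataset where classic algorithms perform well
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train two classic models and report their test accuracy
for model in (SVC(), DecisionTreeClassifier(random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```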

When it comes to dealing with messy and unstructured data such as images, audio and text, however, neural networks outperform other machine learning techniques.

For example, if you wanted to perform image classification with classic machine learning algorithms, you would have to do plenty of “feature engineering,” an arduous process that requires the efforts of several engineers and domain experts. Neural networks and deep learning algorithms don’t require manual feature engineering; when trained well, they extract features from images automatically.

This doesn’t mean, however, that neural networks are a replacement for other machine learning techniques. Other types of algorithms require fewer compute resources and are less complicated, which makes them preferable when you’re trying to solve a problem that doesn’t require neural networks.

Other machine learning techniques are also interpretable (more on this below), which means it’s easier to investigate and correct decisions they make. This might make them preferable in use cases where interpretability is more important than accuracy.

The limits of neural networks

In spite of their name, artificial neural networks are very different from their human equivalent. And although neural networks and deep learning are the state of the art of AI today, they’re still a far cry from human intelligence. Therefore, neural networks will fail at many things that you would expect from a human mind:

Neural networks need lots of data: Unlike the human brain, which can learn to do things with very few examples, neural networks need thousands or millions of examples.

Neural networks are bad at generalizing: A neural network will perform accurately at a task it has been trained for, but very poorly at anything else, even if it’s similar to the original problem. For instance, a cat classifier trained on thousands of cat pictures will not be able to detect dogs; for that, it would need thousands of new images. Unlike humans, neural networks don’t develop knowledge in terms of symbols (ears, eyes, whiskers, tail); they process pixel values. That’s why they cannot learn about new objects in terms of high-level features and must be retrained from scratch.

Neural networks are opaque: Since neural networks express their behavior in terms of neuron weights and activations, it is very hard to determine the logic behind their decisions. That’s why they’re often described as black boxes. This makes it hard to find out whether they’re making decisions based on the wrong factors.

AI expert and neuroscientist Gary Marcus explained the limits of deep learning and neural networks in an in-depth research paper last year.

Also, neural networks aren’t a replacement for good old-fashioned rule-based AI in problems where the logic and reasoning are clear and can be codified into distinct rules. For instance, when it comes to solving math equations, neural networks perform very poorly.

There are several efforts to overcome the limits of neural networks, such as a DARPA-funded initiative to create explainable AI models. Other interesting work includes hybrid models that combine neural networks with rule-based AI to create systems that are interpretable and require less training data.

Although we still have a long way to go before we reach the goal of human-level AI (if we ever reach it at all), neural networks have brought us much closer. It’ll be interesting to see what the next AI innovation will be.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.

What facial recognition and the racist pseudoscience of phrenology have in common

‘Phrenology’ has an old-fashioned ring to it. It sounds like it belongs in a history book, filed somewhere between bloodletting and velocipedes. We’d like to think that judging people’s worth based on the size and shape of their skull is a practice that’s well behind us. However, phrenology is once again rearing its lumpy head.

In recent years, machine-learning algorithms have promised governments and private companies the power to glean all sorts of information from people’s appearance. Several startups now claim to be able to use artificial intelligence (AI) to help employers detect the personality traits of job candidates based on their facial expressions. In China, the government has pioneered the use of surveillance cameras that identify and track ethnic minorities. Meanwhile, reports have emerged of schools installing camera systems that automatically sanction children for not paying attention, based on facial movements and microexpressions such as eyebrow twitches.

Perhaps most notoriously, a few years ago, AI researchers Xiaolin Wu and Xi Zhang claimed to have trained an algorithm to identify criminals based on the shape of their faces, with an accuracy of 89.5%. They didn’t go so far as to endorse some of the ideas about physiognomy and character that circulated in the 19th century, notably from the work of the Italian criminologist Cesare Lombroso: that criminals are underevolved, subhuman beasts, recognizable from their sloping foreheads and hawk-like noses. However, the recent study’s seemingly high-tech attempt to pick out facial features associated with criminality borrows directly from the ‘photographic composite method’ developed by the Victorian jack-of-all-trades Francis Galton – which involved overlaying the faces of multiple people in a certain category to find the features indicative of qualities like health, disease, beauty, and criminality.

Facial recognition and phrenology

Technology commentators have panned these facial-recognition technologies as ‘literal phrenology’; they’ve also linked them to eugenics, the pseudoscience of improving the human race by encouraging people deemed the fittest to reproduce. (Galton himself coined the term ‘eugenics,’ describing it in 1883 as ‘all influences that tend in however remote a degree to give to the more suitable races or strains of blood a better chance of prevailing speedily over the less suitable than they otherwise would have had.’)

In some cases, the explicit goal of these technologies is to deny opportunities to those deemed unfit; in others, it might not be the goal, but it’s a predictable result. Yet when we dismiss algorithms by labeling them as phrenology, what exactly is the problem we’re trying to point out? Are we saying that these methods are scientifically flawed and that they don’t really work – or are we saying that it’s morally wrong to use them regardless?

There is a long and tangled history to the way ‘phrenology’ has been used as a withering insult. Philosophical and scientific criticisms of the endeavor have always been intertwined, though their entanglement has changed over time. In the 19th century, phrenology’s detractors objected to the fact that phrenology attempted to pinpoint the location of different mental functions in different parts of the brain – a move that was seen as heretical, since it called into question Christian ideas about the unity of the soul. Interestingly, though, trying to discover a person’s character and intellect based on the size and shape of their head wasn’t perceived as a serious moral issue. Today, by contrast, the idea of localizing mental functions is fairly uncontroversial. Scientists might no longer think that destructiveness is seated above the right ear, but the notion that cognitive functions can be localized in particular brain circuits is a standard assumption in mainstream neuroscience.

Phrenology had its share of empirical criticism in the 19th century, too. Debates raged about which functions resided where, and whether skull measurements were a reliable way of determining what’s going on in the brain. The most influential empirical criticism of old phrenology, though, came from the French physician Jean Pierre Flourens’s studies based on damaging the brains of rabbits and pigeons – from which he concluded that mental functions are distributed, rather than localized. (These results were later discredited.) The fact that phrenology was rejected for reasons that most contemporary observers would no longer accept makes it only more difficult to figure out what we’re targeting when we use ‘phrenology’ as a slur today.

The statistical biases

Both ‘old’ and ‘new’ phrenology have been critiqued for their sloppy methods. In the recent AI study of criminality, the data were taken from two very different sources: mugshots of convicts, versus pictures from work websites for nonconvicts. That fact alone could account for the algorithm’s ability to detect a difference between the groups. In a new preface to the paper, the researchers also admitted that taking court convictions as synonymous with criminality was a ‘serious oversight.’ Yet equating convictions with criminality seems to register with the authors mainly as an empirical flaw: using mugshots of convicted criminals, but not of the ones who got away introduces a statistical bias. They said they were ‘deeply baffled’ at the public outrage in reaction to a paper that was intended ‘for pure academic discussions.’

Notably, the researchers don’t comment on the fact that conviction itself depends on the impressions that police, judges, and juries form of the suspect – making a person’s ‘criminal’ appearance a confounding variable. They also fail to mention how the intense policing of particular communities, and inequality of access to legal representation, skews the dataset. In their response to criticism, the authors don’t back down on the assumption that ‘being a criminal requires a host of abnormal (outlier) personal traits’. Indeed, their framing suggests that criminality is an innate characteristic, rather than a response to social conditions such as poverty or abuse. Part of what makes their dataset questionable on empirical grounds is that who gets labeled ‘criminal’ is hardly value-neutral.

One of the strongest moral objections to using facial recognition to detect criminality is that it stigmatizes people who are already overpoliced. The authors say that their tool should not be used in law enforcement, but cite only statistical arguments about why it ought not to be deployed. They note that the false-positive rate (50%) would be very high, but take no notice of what that means in human terms. Those false positives would be individuals whose faces resemble people who have been convicted in the past. Given the racial and other biases that exist in the criminal justice system, such algorithms would end up overestimating criminality among marginalized communities.

The most contentious question seems to be whether reinventing physiognomy is fair game for the purposes of ‘pure academic discussion’. One could object on empirical grounds: eugenicists of the past such as Galton and Lombroso ultimately failed to find facial features that predisposed a person to criminality. That’s because there are no such connections to be found. Likewise, psychologists studying the heritability of intelligence, such as Cyril Burt and Philippe Rushton, had to play fast and loose with their data to manufacture correlations between skull size, race, and IQ. If there were anything to discover, presumably the many people who have tried over the years wouldn’t have come up dry.

The problem with reinventing physiognomy is not merely that it has been tried without success before. Researchers who persist in looking for cold fusion after the scientific consensus has moved on also face criticism for chasing unicorns – but disapproval of cold fusion falls far short of opprobrium. At worst, they are seen as wasting their time. The difference is that the potential harms of cold fusion research are much more limited. In contrast, some commentators argue that facial recognition should be regulated as tightly as plutonium, because it has so few nonharmful uses. When the dead-end project you want to resurrect was invented for the purpose of propping up colonial and class structures – and when the only thing it’s capable of measuring is the racism inherent in those structures – it’s hard to justify trying it one more time, just for curiosity’s sake.

However, calling facial-recognition research ‘phrenology’ without explaining what is at stake probably isn’t the most effective strategy for communicating the force of the complaint. For scientists to take their moral responsibilities seriously, they need to be aware of the harms that might result from their research. Spelling out more clearly what’s wrong with the work labeled ‘phrenology’ will hopefully have more of an impact than simply throwing the name around as an insult.

This article was originally published at Aeon by Catherine Stinson and has been republished under Creative Commons.
