Everything you need to know about neuromorphic computing

In July, a group of artificial intelligence researchers showcased a self-driving bicycle that could navigate around obstacles, follow a person, and respond to voice commands. While the self-driving bike itself was of little use, the AI technology behind it was remarkable. Powering the bicycle was a neuromorphic chip, a special kind of AI computer.

Neuromorphic computing is not new. In fact, it was first proposed in the 1980s. But recent developments in the artificial intelligence industry have renewed interest in neuromorphic computers.

The growing popularity of deep learning and neural networks has spurred a race to develop AI hardware specialized for neural network computations. Among the handful of trends that have emerged in the past few years is neuromorphic computing, which has shown promise because of its similarities to biological and artificial neural networks.

How deep neural networks work

At the heart of recent advances in artificial intelligence are artificial neural networks (ANNs), AI software that roughly follows the structure of the human brain. Neural networks are composed of artificial neurons, tiny computation units that perform simple mathematical functions.

Artificial neurons aren’t of much use alone. But when you stack them up in layers, they can perform remarkable tasks, such as detecting objects in images and transforming voice audio to text. Deep neural networks can contain hundreds of millions of neurons, spread across dozens of layers.
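
To make that concrete, here is a minimal Python sketch of what a single artificial neuron computes: a weighted sum of its inputs passed through a simple activation function. The numbers are purely illustrative.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs, then a ReLU activation."""
    z = np.dot(inputs, weights) + bias
    return max(0.0, z)

# Illustrative values: three inputs (e.g. three pixel intensities) and their weights
x = np.array([0.2, 0.8, 0.1])
w = np.array([0.5, -0.3, 0.9])
print(neuron(x, w, bias=0.1))  # a layer is just many of these running side by side
```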

When training a deep learning algorithm, developers run many examples through the neural network along with the expected result. The AI model adjusts each of the artificial neurons as it reviews more and more data. Gradually it becomes more accurate at the specific tasks it has been designed for, such as detecting cancer in slides or flagging fraudulent bank transactions.
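
As a rough illustration of that training loop (not any particular framework's implementation), the sketch below fits a single weight and bias to toy examples with known expected results, nudging them a little after each pass.

```python
import numpy as np

# Toy "examples with expected results" (illustrative): the pattern to learn is y = 2x
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 2.0, 4.0, 6.0])

w, b, lr = 0.0, 0.0, 0.1          # start with arbitrary parameters

for epoch in range(200):
    pred = w * X + b              # run the examples through the model
    error = pred - y              # compare with the expected results
    w -= lr * np.mean(error * X)  # adjust the weight to reduce the error
    b -= lr * np.mean(error)      # adjust the bias too

print(round(w, 3), round(b, 3))   # gradually approaches w = 2, b = 0
```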

The challenges of running neural networks on traditional hardware

Traditional computers rely on one or several central processing units (CPUs). CPUs pack a lot of power and can perform complex operations at high speed. But given the distributed nature of neural networks, running them on classic computers is cumbersome: the CPU must emulate millions of artificial neurons through registers and memory locations and compute each of them in turn.

Graphics Processing Units (GPUs), the hardware used for games and 3D software, can do a lot of parallel processing and are especially good at performing matrix multiplication, the core operation of neural networks. GPU arrays have proven to be very useful in neural network operations.
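
To show why that matters, here is a small sketch using PyTorch purely as an example: an entire layer of neurons, applied to a whole batch of inputs, collapses into a single matrix multiplication, which is exactly the operation GPUs parallelize so well. The sizes are illustrative.

```python
import torch

batch = torch.randn(64, 1024)     # 64 input examples, 1,024 features each
weights = torch.randn(1024, 512)  # one layer: 1,024 inputs feeding 512 neurons

# The whole layer, for the whole batch, is one matrix multiplication.
device = "cuda" if torch.cuda.is_available() else "cpu"
out = batch.to(device) @ weights.to(device)
print(out.shape)                  # torch.Size([64, 512])
```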

The rise in popularity of neural networks and deep learning has been a boon to GPU manufacturers. Graphics hardware company Nvidia has seen its stock price rise severalfold in the past few years.

However, GPUs also lack the physical structure of neural networks and must still emulate neurons in software, albeit at breakneck speed. The dissimilarities between GPUs and neural networks cause a lot of inefficiencies, such as excessive power consumption.

Neuromorphic chips

Unlike general-purpose processors, neuromorphic chips are physically structured like artificial neural networks. Every neuromorphic chip consists of many small computing units, each corresponding to an artificial neuron. Unlike the cores of a CPU, these computing units can’t perform a wide range of operations. They have just enough power to perform the mathematical function of a single neuron.

Another essential characteristic of neuromorphic chips is the physical connections between artificial neurons. These connections make neuromorphic chips more like organic brains, which consist of biological neurons and their connections, called synapses. Creating an array of physically connected artificial neurons is what gives neuromorphic computers their real strength.
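
Many neuromorphic chips, including Intel’s Loihi, implement spiking neurons. As a simplified illustration of the kind of computation each hardware neuron performs (not the circuit of any particular chip), here is a leaky integrate-and-fire neuron in a few lines of Python, with made-up constants.

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: the membrane potential accumulates input,
    leaks over time, and emits a spike whenever it crosses the threshold."""
    v, spike_times = 0.0, []
    for t, current in enumerate(input_current):
        v = leak * v + current      # integrate the incoming signal, with leakage
        if v >= threshold:
            spike_times.append(t)   # fire a spike...
            v = 0.0                 # ...and reset the potential
    return spike_times

rng = np.random.default_rng(0)
print(lif_neuron(rng.uniform(0, 0.3, size=100)))  # times at which the neuron fires
```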

The structure of neuromorphic computers makes them much more efficient at training and running neural networks. They can run AI models faster than equivalent CPUs and GPUs while consuming less power. This is important, since power consumption is already one of AI’s biggest challenges.

The smaller size and low power consumption of neuromorphic computers make them suitable for use cases that require running AI algorithms at the edge rather than in the cloud.

Neuromorphic chips are characterized by the number of neurons they contain. The Tianjic chip, the neuromorphic chip used in the self-driving bike mentioned at the beginning of this article, contained about 40,000 artificial neurons and 10 million synapses in an area of 3.8 square millimeters. Compared to a GPU running an equal number of neurons, Tianjic performed 1.6-100x faster and consumed 12-10,000x less power.

But 40,000 is a limited number of neurons, roughly as many as in the brain of a fish. The human brain contains approximately 100 billion neurons.

AlexNet, a popular image classification network used in many applications, has more than 62 million parameters. OpenAI’s GPT-2 language model contains more than one billion parameters.

But the Tianjic chip was more of a proof of concept than a neuromorphic computer intended for commercial use. Other companies have already been developing neuromorphic chips ready to be used in different AI applications.

One example is Intel’s Loihi chip and the Pohoiki Beach computer built from it. Each Loihi chip contains 131,000 neurons and 130 million synapses. The Pohoiki Beach computer, introduced in July, packs 8.3 million neurons. According to Intel, it delivers 1,000x better performance and is 10,000x more energy efficient than equivalent GPUs.

Neuromorphic computing and artificial general intelligence (AGI)

In a paper published in Nature, the AI researchers who created the Tianjic chip observed that their work could help bring us closer to artificial general intelligence (AGI). AGI is supposed to replicate the capabilities of the human brain. Current AI technologies are narrow: they can solve specific problems but are poor at generalizing their knowledge.

For instance, an AI model designed to play a game like StarCraft II will be helpless when introduced to another game, say Dota 2. That would require a totally different AI algorithm.

According to the Tianjic designers, their AI chip was able to solve multiple problems, including object detection, speech recognition, navigation, and obstacle avoidance, all in a single device.

But while neuromorphic chips might bring us a step closer to emulating the human brain, we still have a long way to go. Artificial general intelligence requires more than bundling several narrow AI models together.

Artificial neural networks, at their core, are statistical machines, and statistics can’t solve problems that require reasoning, understanding, and general problem-solving. Examples include natural language understanding and navigating open worlds.

Creating more efficient ANN hardware won’t solve those problems. But perhaps having AI chips that look much more like our brains will open new pathways to understand and create intelligence.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.

GPT-3 sucks at pick-up lines — here’s what that tells us about computer-generated language

Have you ever wondered what flirting with artificial intelligence would look like? Research scientist and engineer Janelle Shane has given us an idea by training a neural network – an algorithm loosely inspired by biological brain structures – to produce chat-up lines.

Some of the results are hilarious and completely nonsensical, such as the inelegant: “2017 Rugboat 2-tone Neck Tie Shirt”. But some of them turned out pretty well. At least, if you’re a robot.

But how were these lines generated, and why do the results vary so much in terms of quality and cohesiveness? That’s down to the types of neural networks Shane worked with: all based on GPT-3, the world’s largest language model to date.

Language modelling

GPT stands for generative pre-trained transformer. Its current version, developed by OpenAI, is the third in a line of ever-improving natural language processing systems trained to produce human-like text or speech.

Natural language processing, or NLP, refers to the use of computers to process and generate large amounts of coherent spoken or written text. Whether you ask Siri for a weather update, request Alexa to turn on the lights, or use Google to translate a message from French into English, you’re able to do so because of developments in NLP.

It takes a variety of NLP tasks – from speech recognition to picking apart sentence structures – for applications such as Siri to successfully fulfil requests. The virtual assistant, much like any other language-based tool, is trained using many thousands of sentences, ideally as varied and diverse as possible.

Because human language is extremely complex, the best NLP applications rely increasingly on pre-trained models that allow “contextual bidirectional learning”. This means considering a word’s wider context in a sentence, scanning both left and right of any given word to identify the word’s intended meaning. More recent models can even pay attention to more nuanced features of human language, such as irony and sarcasm.
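
As a purely illustrative example of bidirectional context, the snippet below uses Hugging Face’s transformers library with a BERT-style masked language model, which looks at the words on both sides of a gap before filling it in. This is not the system behind Siri or GPT-3 (GPT models generate text left to right); it is just a small demonstration of the idea, and the model name and sentences are arbitrary choices.

```python
from transformers import pipeline

# A masked language model considers context on both sides of the blank.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The surrounding words steer the prediction (example sentences are made up).
print(unmasker("She deposited the cheque at the [MASK].")[0]["token_str"])
print(unmasker("They had a picnic on the [MASK] of the river.")[0]["token_str"])
```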

Computer compliments

GPT-3 is such a successful language-generating AI because it doesn’t need retraining over and over again to complete a new task. Instead, it uses what the model has already learned about language and applies it to something new – such as writing articles and computer code, generating novel dialogue in video games, or formulating chat-up lines.

Compared to its predecessor GPT-2, the third-generation model is 116 times bigger and has been trained on billions of words of data. To generate its chat-up lines, GPT-3 was simply asked to generate the text for an article headlined: “These are the top pickup lines of 2021! Amaze your crush and get results!”
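
As a rough sketch, and not Shane’s actual code, a prompt like that could be sent to the completion-style OpenAI API of the time along these lines; the engine name, sampling settings, and placeholder key are all illustrative.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = ("These are the top pickup lines of 2021! "
          "Amaze your crush and get results!\n\n1.")

# Ask the model to continue the 'article'; its continuations become the chat-up lines.
response = openai.Completion.create(
    engine="davinci",    # the largest GPT-3 variant discussed below
    prompt=prompt,
    max_tokens=60,
    temperature=0.8,     # higher temperature yields stranger, more varied lines
)
print(response.choices[0].text)
```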

Because GPT-3’s training updates have been added gradually over time, this same prompt could also be used on smaller, more basic variants – generating weirder and less coherent chat-up lines.

But GPT-3’s “DaVinci” variant – its largest and most competent iteration to date – delivered some more convincing attempts which might actually pass for effective flirting, with a little fine-tuning.

The latest variant of GPT-3 is currently the largest contextual language model in the world, and is able to complete a number of highly impressive tasks. But is it smart enough to pass as a human?

Almost human

As one of the pioneers of modern computing and a firm believer in true artificial intelligence, Alan Turing developed the “Imitation Game” in 1950 – today known as the “Turing Test”. If a computer’s performance is indistinguishable from that of a human, it passes the Turing Test. In language generation alone, GPT-3 could soon pass Alan Turing’s test.

But it doesn’t really matter if GPT-3 passes the Turing Test or not. Its performance is likely to depend on the specific task the model is used for – which, judging by the technology’s flirting, should probably be something other than the delicate art of the chat-up line.

And, even if it were to pass the Turing Test, in no way would this make the model truly intelligent. At best, it would be extremely well trained on specific semantic tasks. Maybe the more important question to ask is: do we even want to make GPT-3 more human?

Learning from humans

Shortly after its reveal in summer 2020, GPT-3 made headlines for spewing out shockingly sexist and racist content. But this was hardly surprising. The language generator was trained on vast amounts of text on the internet, and without remodeling and retraining it was doomed to replicate the biases, harmful language and misinformation that we know to exist online.

Clearly, language models such as GPT-3 do not come without potential risks. If we want these systems to be the basis of our digital assistants or conversational agents, we need to be more rigorous and selective when giving them reading material to learn from.

Still, recent research has shown that GPT-3’s knowledge of the internet’s dark side could actually be used to automatically detect online hate speech, with up to 78% accuracy. So even though its chat-up lines look unlikely to kindle more love in the world, GPT-3 may be set, at least, to reduce the hate.

This article by Stefanie Ullmann, Postdoctoral Research Associate, Centre for the Humanities and Social Change, University of Cambridge, is republished from The Conversation under a Creative Commons license. Read the original article.

How a theoretical mouse could crack the stock market

A team of physicists at Emory University recently published research indicating they’d successfully managed to reduce a mouse’s brain activity to a simple predictive model. This could be a breakthrough for artificial neural networks. You know: robot brains.

Let there be mice: Scientists can do miraculous things with mice, such as growing a human ear on one’s back or controlling one via computer mouse. But this is the first time we’ve heard of researchers using machine learning techniques to grow a theoretical mouse brain.

Per a press release from Emory University, the problem is one of scale: we can observe a mouse’s brain activity in real time, but there are simply too many neuronal interactions for us to measure and quantify each and every one – even with AI. So the scientists are using the equivalent of a math trick to make things simpler.

How’s it work? The research is based on a theory of criticality in neural networks. Basically, all the neurons in your brain exist in an equilibrium between chaos and order. They don’t all do the same thing, but they also aren’t bouncing around randomly.

The researchers believe the brain operates in this balance in much the same way other state-transitioning systems do. Water, for example, can change from gas to liquid to solid. And, at some point during each transition, it reaches a critical point where its molecules are, in a sense, in both states at once and in neither.

The researchers hypothesized that brains – organic neural networks – operate in the same kind of balanced state. So they ran a bunch of tests on mice as they navigated mazes in order to establish a database of brain data.

Next, the team went to work developing a simplified model that could predict neuron interactions, using the experimental data as a target. According to their research paper, the model is accurate to within a few percentage points.
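
The paper’s model isn’t reproduced here, but to make the general idea concrete (predicting the next moment of activity from the current one with a deliberately simple model), here is a hedged sketch on simulated data. Both the fake recordings and the plain linear fit are placeholders, not the Emory team’s method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for recorded activity: 50 'neurons' over 1,000 time steps, where each
# step depends weakly on the previous one plus noise (purely simulated data).
true_coupling = rng.normal(scale=0.1, size=(50, 50))
activity = np.zeros((1000, 50))
for t in range(1, 1000):
    activity[t] = np.tanh(activity[t - 1] @ true_coupling) + rng.normal(scale=0.05, size=50)

# Simplified predictive model: one linear map from activity at time t to time t+1.
X, Y = activity[:-1], activity[1:]
coupling_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# How far off is the simple model, on average, at predicting the next step?
print("mean absolute error:", np.abs(X @ coupling_hat - Y).mean())
```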

What’s it mean? This is early work, but there’s a reason scientists use mouse brains for this kind of research: they’re not so different from ours. If you can reduce what goes on in a mouse’s head to a working AI model, then it’s likely you can eventually scale that to human-brain levels.

On the conservative side of things, this could lead to much more robust deep learning solutions. Our current neural networks are a pale attempt to imitate what nature does with ease. But the Emory team’s mouse models could represent a turning point in robustness, especially in areas where a model is likely to be affected by outside factors.

This could potentially include stronger AI inference where diversity is concerned and increased resilience against bias. Other predictive systems could benefit as well, such as stock market prediction algorithms and financial tracking models. It’s possible this could even improve our ability to predict weather patterns over long periods of time.

Quick take: This is brilliant, but its actual usefulness remains to be seen. Ironically, the tech and AI industries are also at a weird, unpredictable point of criticality where brute-force hardware solutions and elegant software shortcuts are starting to pull away from each other.

Still, if we take a highly optimistic view, this could also be the start of something amazing such as artificial general intelligence (AGI) – machines that actually think. No matter how we arrive at AGI, it’s likely we’ll need to begin with models capable of imitating nature’s organic neural nets as closely as possible. You’ve got to start somewhere.
