Google announces new AI-powered heart and breathing monitors for Pixel phones

Google announced today that it’s adding AI-powered measurements of heart and respiration rates to the Google Fit app.

The tech uses a combination of sensors and computer vision algorithms to take measurements through a smartphone camera.

The Big G said the features will be available from next month on Pixel phones, with more Android devices to follow.

Users will then be able to measure their breathing rate by placing their head and upper torso in view of the phone’s front-facing camera.

Heart rate, meanwhile, will be estimated by placing a finger over the rear-facing camera lens. Users can then choose to save the results in the app to monitor how they change over time.
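Google hasn’t detailed the underlying algorithms, but the finger-on-lens trick resembles photoplethysmography: blood pulsing through the fingertip subtly changes how much light reaches the camera sensor. As a rough illustration only (my own sketch, not Google’s code), here’s how a pulse rate could be recovered from per-frame brightness readings:

```python
import numpy as np

def estimate_heart_rate(brightness, fps):
    """Estimate pulse (BPM) from mean per-frame brightness of a fingertip video."""
    signal = np.asarray(brightness) - np.mean(brightness)  # strip the DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)      # frequency bins in Hz
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= 40 / 60) & (freqs <= 200 / 60)        # plausible 40-200 BPM
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic check: a 72 BPM pulse sampled at 30 fps for 10 seconds.
fps = 30
t = np.arange(fps * 10) / fps
fake_brightness = 0.5 + 0.01 * np.sin(2 * np.pi * (72 / 60) * t)
print(estimate_heart_rate(fake_brightness, fps))  # ~72.0
```

A production system would need motion artifact rejection and better spectral estimation than a single FFT peak, but the principle is the same.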

Shwetak Patel, director of health technologies at Google Health, compared the approach to a fingertip pulse oximeter.

Patel said the algorithms have been tested on people across a diverse range of ages, genders, skin colors, and health statuses, and under a variety of lighting conditions.

Early study results shared at a Google Health event today showed the respiration algorithm is accurate within one breath per minute on average, while the heart rate algorithm is accurate within 2% on average.

Google stressed that the tool is designed for personal wellbeing rather than medical use.

“While the sensor outputs are not medical diagnoses, they’re still useful measures of fitness and health,” said Patel, who’s also a computer science professor at the University of Washington. “So after you go for a run you can quickly use the app to be able to look at what your heart rate is.”

Google’s new trillion-parameter AI language model is almost 6 times bigger than GPT-3

A trio of researchers from the Google Brain team recently unveiled the next big thing in AI language models: a massive one trillion-parameter transformer system.

The next biggest model out there, as far as we’re aware, is OpenAI’s GPT-3, which uses a measly 175 billion parameters.

Background: Language models are capable of performing a variety of functions, but perhaps the most popular is the generation of novel text. For example, you can talk to a “philosopher AI” language model online that’ll attempt to answer any question you ask it (with numerous notable exceptions).
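As a concrete (and much smaller) example, here’s how novel text generation works with the openly available GPT-2 model via the Hugging Face transformers library; this illustrates the technique, not Google’s new model:

```python
from transformers import pipeline

# Load a small, openly available language model for text generation.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with sampled, novel text.
result = generator("The meaning of life is", max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```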


While these incredible AI models exist at the cutting edge of machine learning technology, it’s important to remember that they’re essentially just performing parlor tricks. These systems don’t understand language; they’re just fine-tuned to make it look like they do.

That’s where the number of parameters comes in: the more virtual knobs and dials you can twist and tune to achieve the desired outputs, the more fine-grained control you have over what that output is.
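For a sense of what those numbers mean: a dense transformer’s parameters are dominated by its attention and feed-forward weight matrices, giving a common rule of thumb of roughly 12 × layers × d_model² (embeddings and biases ignored). Plugging in GPT-3’s published configuration recovers its headline figure:

```python
def approx_dense_transformer_params(n_layers: int, d_model: int) -> int:
    # Per layer: ~4*d_model^2 for the attention projections (Q, K, V, output)
    # plus ~8*d_model^2 for the feed-forward block (hidden size = 4*d_model).
    return 12 * n_layers * d_model ** 2

# GPT-3's published configuration: 96 layers, d_model = 12288.
print(f"{approx_dense_transformer_params(96, 12288):,}")  # 173,946,175,488 (~175B)
```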

What Google’s done: Put simply, the Brain team has figured out a way to keep the model itself as simple as possible while squeezing in as much raw compute power as possible to support the increased parameter count. In other words, Google has a lot of money, and that means it can afford to use as much compute hardware as the AI model can conceivably harness.
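The pre-print in question is Google Brain’s Switch Transformer paper, which describes a sparsely activated “mixture of experts” design: the parameter count scales with the number of expert feed-forward blocks, but each token is routed through just one of them, so the compute per token stays roughly flat. Here’s a toy sketch of that top-1 routing idea (heavily simplified; the real system adds load balancing, expert capacity limits, and distributed sharding):

```python
import numpy as np

def switch_layer(x, router_w, experts):
    """Toy top-1 mixture-of-experts layer.
    x: (n_tokens, d_model) activations; router_w: (d_model, n_experts);
    experts: list of (w_in, w_out) feed-forward weight pairs."""
    logits = x @ router_w
    # Softmax over experts, then keep only the single best expert per token.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    choice = probs.argmax(axis=-1)
    gate = probs[np.arange(len(x)), choice]  # router confidence as a scale
    out = np.zeros_like(x)
    for e, (w_in, w_out) in enumerate(experts):
        mask = choice == e
        if mask.any():
            hidden = np.maximum(x[mask] @ w_in, 0.0)  # ReLU feed-forward
            out[mask] = gate[mask, None] * (hidden @ w_out)
    return out

# Four experts hold all the parameters, but each token only uses one of them.
rng = np.random.default_rng(0)
d, n_experts, n_tokens = 8, 4, 16
experts = [(rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d)))
           for _ in range(n_experts)]
y = switch_layer(rng.normal(size=(n_tokens, d)),
                 rng.normal(size=(d, n_experts)), experts)
print(y.shape)  # (16, 8)
```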

In the team’s own words:

Quick take: It’s unclear exactly what this means or what Google intends to do with the techniques described in the pre-print paper. There’s more to this model than just one-upping OpenAI, but exactly how Google or its clients could use the new system is a bit muddy.

The big idea here is that enough brute force will lead to better compute-use techniques, which will in turn make it possible to do more with less compute. But the current reality is that these systems don’t tend to justify their existence when compared to greener, more useful technologies. It’s hard to pitch an AI system that can only be operated by trillion-dollar tech companies willing to ignore the massive carbon footprint a system this big creates.

Context: Google’s pushed the limits of what AI can do for years, and this is no different. Taken by itself, the achievement appears to be the logical progression of what’s been happening in the field. But the timing is a bit suspect.

H/t: VentureBeat

AI resurrects legendary Spanish singer to hawk beer

The celebrated Spanish singer Lola Flores died in 1995, but a brewery is using AI to bring her back to life.

Sevillian beer company Cruzcampo has made a deepfake of the iconic Andalusian the star of a new ad campaign.

The company pitches the commercial as a celebration of the diversity of Spanish accents.

“Do you know why I was understood all over the world? Because of my accent,” says Flores’ AI reincarnation. “And I’m not just referring to the way I talk…”

The company recreated her voice, face, and features using hours of audiovisual material, more than 5,000 photos, and a painstaking composition and post-production process, according to El País.

The video below (in Spanish) gives more details on how it was made.

Flores’ daughters Rosario and Lolita were personally involved in the project, and my Andalusian colleague Pablo said he could imagine Lola supporting the message.

But others were quick to condemn the campaign for putting words in her mouth that she didn’t say — just to market beer.

One thing they all agreed on was that the deepfake Flores is an impressively realistic recreation of the singer.

The ad was released shortly after a report named deepfakes the most concerning use of AI for crime and terrorism. But the campaign shows the tech can also turn the dead into effective booze peddlers.

¡Salud!
