This rad-ass exoskeleton uses AI to walk for you

Engineers at Canada’s University of Waterloo are developing AI-powered exoskeleton legs that can walk autonomously.

The system captures a user’s surroundings through a camera. Computer vision and deep learning algorithms then analyze the scene to determine the best movements for the upcoming terrain.

“Our control approach wouldn’t necessarily require human thought,” said project lead Brokoslaw Laschowski in a press release. “Similar to autonomous cars that drive themselves, we’re designing autonomous exoskeletons that walk for themselves.”

The devices could give people with impaired mobility a more natural control system than current exoskeletons, which are typically operated through smartphone apps or joysticks.

“That can be inconvenient and cognitively demanding,” said Laschowski. “Every time you want to perform a new locomotor activity, you have to stop, take out your smartphone and select the desired mode.”

The researchers overcame these limitations by fitting exoskeleton users with wearable cameras. AI software then processes the video to spot stairs, doors, and other features in their surroundings.
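The control flow described here — classify each camera frame, then switch the exoskeleton into the matching locomotion mode — can be sketched in a few lines. Everything in this sketch (the terrain class names, the majority-vote smoothing, the `select_mode` helper) is an illustrative assumption, not the Waterloo team's actual implementation:

```python
from collections import Counter, deque

# Hypothetical terrain classes a vision model might emit per frame.
TERRAIN_TO_MODE = {
    "level_ground": "walk",
    "incline_stairs": "stair_ascent",
    "decline_stairs": "stair_descent",
    "door": "stop_and_open",
}

def select_mode(frame_predictions, window=5):
    """Pick a locomotion mode by majority vote over recent frames.

    Smoothing over a short window avoids switching modes on a single
    misclassified frame, which matters for user safety.
    """
    recent = deque(frame_predictions, maxlen=window)
    terrain, _ = Counter(recent).most_common(1)[0]
    return TERRAIN_TO_MODE[terrain]

# One noisy "door" frame is outvoted by the surrounding stair frames.
frames = ["incline_stairs", "incline_stairs", "door",
          "incline_stairs", "incline_stairs"]
print(select_mode(frames))  # stair_ascent
```

The point of the smoothing window is exactly the convenience gain Laschowski describes: the user never has to stop and pick a mode, but a single bad frame also can't jerk the legs into the wrong gait.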

The system still needs refinement before the exoskeletons are fully functional. The next stage of the project will involve sending instructions to motors so that the legs can navigate uneven terrain and avoid obstacles. The researchers also plan to extend battery life by using human motion to self-charge the devices.

But the system could prove far more convenient than most existing exoskeletons — as long as it’s not too easy to hack.

You can read more about the project in the journal IEEE Transactions on Medical Robotics and Bionics.

You can read the latest paper on the ExoNet project here on the preprint server bioRxiv.


IBM launches new machine-learning pipeline starter kits for overworked devs

IBM today announced a new machine-learning, end-to-end pipeline starter kit for its Cloud Native Toolkit.

The big idea here is that wrangling the myriad open-source and enterprise ML and AI platforms and solutions into production can be a challenging prospect.

Per IBM’s developer blog:

Developers can spend their time building, training, and deploying models or they can spend all day formatting and wrenching a pipeline together.

IBM’s new machine-learning, end-to-end pipeline starter kit streamlines the entire process and gives devs everything they need to deploy solutions, including the necessary tutorials and open-source support tools.
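As a rough illustration of what such a starter kit automates, an end-to-end pipeline is just a fixed chain of stages — data prep, training, deployment — that a developer would otherwise wire together by hand. The `Pipeline` class and toy stages below are invented for this sketch and are not IBM's API:

```python
class Pipeline:
    """Chain stages so each one feeds its output to the next."""
    def __init__(self, *stages):
        self.stages = stages

    def run(self, data):
        for name, stage in self.stages:
            data = stage(data)
            print(f"finished stage: {name}")
        return data

# Toy stages: normalize numbers, "train" by averaging, package a model.
prep = lambda xs: [x / max(xs) for x in xs]
train = lambda xs: sum(xs) / len(xs)
deploy = lambda model: {"model": model, "status": "deployed"}

pipeline = Pipeline(("prep", prep), ("train", train), ("deploy", deploy))
result = pipeline.run([2.0, 4.0, 8.0])
print(result["status"])  # deployed
```

A real starter kit adds much more around this skeleton (containerization, CI/CD hooks, model registries), but the core value is the same: the wiring between stages is already done.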

Christopher Ferris, CTO of Open Tech at IBM, spoke to Neural about the reasoning behind this approach.

The new toolkits are available on IBM’s developer site. Step-by-step instructions on getting started can be found at the end of this IBM blog post.

