3 ways IoT is disrupting insurance

Insurance is one of the oldest practices of human societies, predating even the first minted coins. Chinese and Babylonian traders in the third and second millennia BC used very crude methods to protect their cargo against shipwreck and theft. Since then, the insurance industry has come a long way, but until recently some of its fundamental challenges remained the same. Insurers had to rely on historical data and customer reports to calculate risks and set prices, and neither is a very accurate method.

As a result, good customers pay more than they should and careless customers pay less, claims processing takes a long time, and fraudsters exploit holes in the system to rake in money at everyone else's expense. But with the advent of the internet of things (IoT), the insurance industry is undergoing a revolution.

IoT enables the real-time collection and analysis of data from the physical world, which provides an unprecedented opportunity to improve accuracy, reduce costs, and prevent fraud. In a 2018 report, the insurance market Lloyd’s outlined some of the benefits that the growth of IoT will bring to the insurance industry, including better risk understanding, avoiding preventable losses, capturing patterns and behaviors, and enabling proactive monitoring.

Today, many of these benefits are manifesting themselves in new practices by insurance companies, as well as partnerships between insurtech companies and established players in the finance industry. Here are a few examples of how IoT is making a big difference for the insurance industry.

Supply chain insurance

Cargo insurance is a three-centuries-old industry. It plays an incredibly important role in global trade, but it’s also ripe for change and improvement.

“There’s so much useful—but until now hidden—data that is relevant to understanding risk in supply chains, and thanks to IoT technology we’re finally able to access that data and put it to use,” says Ben Hubbard, co-founder and CEO of Parsyl, a U.S.-based commercial insurance platform for small businesses.

Granular data from IoT sensors takes the guesswork out of evaluating risk and processing claims by giving insurers data on what actually happens to goods as they move through the supply chain, as well as who’s responsible when issues occur.

“Most importantly, we can analyze that data over time to have a meaningful understanding of actual risk based on objective facts, resulting in improved risk management and more accurate premiums,” Hubbard says.

Parsyl launched in 2017 and was awarded a place in Lloyd’s Lab in May to develop its solution. The company combines proprietary smart sensors and rich data analytics to understand when, where, and how environmental conditions such as temperature and humidity affect sensitive goods, both in transit and in storage.

The combination of IoT and machine learning is providing insurance customers with new insights on risks and performance. “For example, customers can discover the best and worst times of year to transport goods, which vendors and shipping lanes are the highest risk, or the impact of different packaging on product quality,” Hubbard says. “All of this helps to pinpoint where problems are occurring and drive data-driven risk management measures to avoid them from happening in the future.”
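To make that kind of analysis concrete, here is a minimal, hypothetical sketch of how cargo-sensor readings might be aggregated to rank shipping lanes and months by risk. The field names, threshold, and sample data are illustrative assumptions, not Parsyl's actual data model or code.

```python
# Hypothetical sketch: aggregating cargo-sensor readings to rank shipping
# lanes and months by risk. Field names and the threshold are illustrative.
import pandas as pd

# Each row: one temperature reading from a shipment's IoT sensor.
readings = pd.DataFrame({
    "shipping_lane": ["Shanghai-LA", "Shanghai-LA", "Rotterdam-NY", "Rotterdam-NY"],
    "month": [1, 7, 1, 7],
    "temperature_c": [4.2, 11.8, 3.9, 6.5],
})

SAFE_MAX_C = 8.0  # assumed upper bound for temperature-sensitive goods

# Flag readings that breach the safe range, then compute the excursion rate
# per lane and month as a simple proxy for relative risk.
readings["excursion"] = readings["temperature_c"] > SAFE_MAX_C
risk_table = (
    readings.groupby(["shipping_lane", "month"])["excursion"]
    .mean()
    .sort_values(ascending=False)
)
print(risk_table)  # highest excursion rates first
```

In practice an insurer would feed months of readings into this kind of aggregation, but the principle is the same: objective sensor data replaces guesswork about where and when losses occur.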

Flood insurance

Flood insurance is vital to homes and businesses, especially in areas where floods are seasonal risks. Policies and risks are traditionally determined by geographical parameters and postal codes. Many businesses get denied insurance and others face high premiums because their insurers don’t have accurate data. Because of the lack of visibility into events and damages, claims can take months to verify and settle. Case in point: every year, an average of $41B of flood damage goes uncovered by insurance, leaving people, businesses, and governments to pay the bill.

More recently, insurers are leveraging IoT to provide more tailored policies and premiums. “IoT sensors provide property-level flood data for all our clients, allowing us to employ a risk-based pricing model,” says Dr. Ian Bartholomew, co-founder of insurtech startup FloodFlash.

FloodFlash, which joined the Lloyd’s Lab roster of startups last year, uses IoT sensors to measure flood levels at customer properties. “The resolution of data provided by our sensors allows us to provide quotes for individual buildings rather than postcodes, so premiums are tailored to specific businesses,” Bartholomew says.

FloodFlash’s customers select a custom plan based on the depth of flood they want to insure against and the payout they’d like to receive. The company installs an internet-connected sensor at the customer’s property and monitors flood levels in real time. When the flood reaches the agreed depth, FloodFlash pays out without requiring loss adjustment or complicated document filing and processing.
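The trigger logic behind this kind of parametric cover is simple enough to sketch. The following is a hypothetical illustration, assuming a fixed trigger depth and payout agreed in advance; it is not FloodFlash's actual implementation.

```python
# Hypothetical sketch of a parametric flood-insurance trigger: the payout
# fires automatically once the measured water depth reaches the depth the
# customer chose in their policy. Names and values are illustrative.
from dataclasses import dataclass

@dataclass
class ParametricFloodPolicy:
    trigger_depth_cm: float   # flood depth the customer insured against
    payout_gbp: float         # fixed payout agreed in advance
    paid: bool = False

    def on_sensor_reading(self, depth_cm: float) -> float:
        """Return the payout due for this reading (0 if not triggered)."""
        if not self.paid and depth_cm >= self.trigger_depth_cm:
            self.paid = True  # pay once; no loss adjustment needed
            return self.payout_gbp
        return 0.0

policy = ParametricFloodPolicy(trigger_depth_cm=30.0, payout_gbp=50_000.0)
for reading in [5.0, 18.0, 31.5]:  # simulated real-time sensor feed
    due = policy.on_sensor_reading(reading)
    if due:
        print(f"Flood reached {reading} cm: paying out £{due:,.0f}")
```

Because the payout condition is an objective sensor reading rather than an assessed loss, claims can be settled as soon as the data arrives.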

“Whilst many of our clients see cost reductions thanks to our pricing, the most important thing for a lot of small businesses is to protect their cash-flow and to recover fast. Thanks to live updates from our sensor network we paid claims in full a single day after Storms Ciara and Dennis, an unofficial record in flood insurance,” Bartholomew says.

Home insurance

Traditionally, the home insurance model has relied on user-reported data to calculate risk and third-party adjusters to deal with the aftermath of a disaster or begin a claim. But with the rise of smart home and IoT devices, the industry is starting to pivot from a reactive model to a proactive one.

“IoT devices in the home provide data that can be leveraged by insurers to help understand risk while smart security and sensor devices can be used as a preventative method within the home, sending homeowners and their insurers alerts when something is not right,” says Mitchell Klein, Executive Director of Z-Wave Alliance.

For example, water damage is among the leading causes of home insurance claims. Leak sensors deployed near likely trouble spots such as water heaters and washing machines can raise alerts before a catastrophic event occurs. If a hose or pipe breaks when no one is home, the sensor can command a smart valve to shut off the water to minimize damage and notify the homeowner. Similarly, sensors embedded in walls or roofing can proactively report on structural integrity, informing both risk assessments and policy pricing.
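As a rough illustration of that event-driven flow, here is a hypothetical sketch in Python. The device names and callbacks are invented for the example and do not correspond to any particular smart-home platform or insurer's system.

```python
# Hypothetical sketch of the leak-response logic described above: when a
# leak sensor reports water and nobody is home, shut a smart valve and
# notify the homeowner. Device APIs here are invented for illustration.
def handle_leak_event(sensor_id: str, home_occupied: bool,
                      shut_off_valve, notify_homeowner) -> None:
    """React to a leak alert from the named sensor.

    `shut_off_valve` and `notify_homeowner` are callbacks supplied by the
    (assumed) smart-home hub integration.
    """
    notify_homeowner(f"Leak detected by sensor {sensor_id}")
    if not home_occupied:
        # No one is present to intervene, so close the water main
        # automatically to limit the damage.
        shut_off_valve()
        notify_homeowner("Water supply shut off automatically")

# Example wiring with stand-in callbacks:
handle_leak_event(
    sensor_id="water-heater-1",
    home_occupied=False,
    shut_off_valve=lambda: print("Valve closed"),
    notify_homeowner=lambda msg: print("Push notification:", msg),
)
```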

Many insurance companies are now partnering with smart home companies to offer bundled incentives. For example, customers can buy and install smart sensors in their homes and provide the insurer access to the data. In exchange, they receive a discount on their premium.

“For home insurance providers, partnering with IoT companies can lead to stronger relationships with customers and improve the risk management process,” Klein says. “Mitigating risk while minimizing loss claims is a win for all stakeholders.”

Faster, more accurate, less boring

With the IoT industry growing at an accelerating pace and technologies such as 5G fast developing, it is clear that we’re still scratching the surface of the potential that exists. Insurance paperwork is fast becoming a thing of the past and is being replaced by automated processes and always-accessible mobile apps. Sensors are making it easier to calculate risks and adjust policies. Insurers know their customers better and are able to establish much more personal relationships.

All in all, the insurance industry is about to become less confusing and much more pleasant.

‘Zoom fatigue’ is real — here’s how you can avoid it

With much of the world in lockdown, our time spent on video calls has risen rapidly. Video conferencing has expanded from being a tool for business meetings to something we use to socialize, worship, and even date.

There is no doubt that platforms like Zoom are very useful. But all this time spent on video calls has its problems. We rely on it to connect with people, yet it can leave us feeling tired and empty. It has given us some semblance of normal life during lockdown, but it can make relationships seem unreal. This feeling has spurred talk of a new psychological affliction: “Zoom fatigue.”

When we interact with another person through the screen, our brains have to work much harder. We miss many of the other cues we’d have during a real-life conversation, like the smell of the room or some detail in our peripheral vision. This additional information helps our brains make sense of what is going on.

When that extra information is gone, our brains have to work harder to make sense of what is happening. This can sometimes put us at a disadvantage. For instance, a meta-analysis of job interviews found that people tended to fare worse when they were interviewed through video link than in person.

The greater effort it takes to make sense of what’s going on means we often take mental shortcuts. This can result in mistakes. One study found that medics who attended a seminar via video conferencing tended to focus on whether they liked the presenter, while those who attended in person focused on the quality of the presenter’s arguments.

Another study found that when courts made decisions about a refugee’s appeal using a video call, they were less trusting and understanding. Applicants were more likely to lie and judges were less likely to spot falsehoods. A third study found that court sketch artists made less accurate drawings when gathering information via video call.

Our biases can get worse if the line is glitchy. Even a one-second delay can make us think people on the other end of the line are less friendly. One experiment found that when the video quality was low, people were much more cautious in their communication.

Emotionally exhausting

Video conferences can be emotionally exhausting as well. One study found that interpreters working for the United Nations and the European Union felt alienated when doing remote translations. Therapists conducting sessions through video calls reported concerns that they had “lost connection” with their clients.

A study of student-teacher interaction found that when an oral exam was conducted through a video link, students who were already predisposed to anxiety became even more anxious than in a face-to-face exam. As a consequence, they tended to perform worse. The students’ anxiety was heightened when they could see a large picture of themselves on the screen.

A strange quirk of video conferencing is that we see ourselves mirrored back at us on the screen. This can make us more self-conscious and less certain in our interactions. We may try harder, but we also find it more stressful.

The spread of video conferencing can also trigger a desperate search for recognition. One analysis of remote employees found that those working away from the centre of an organisation often experience it as a form of “exile”. These exiled employees feel overlooked and try everything to make themselves seen. They search for interesting material and anecdotes to share with co-workers. They take on additional tasks which they hope will “catch the eye” of their managers.

Simple solutions

There are some relatively simple things you can do to make video conferencing less tiring. Avoid multitasking while on a video call to cut your cognitive workload and help you pay attention. Take a break between calls and get away from the screen to give yourself time to reflect, regroup, and recover. Hiding the image of yourself during a video conference can make you feel less self-conscious and more focused on what others are saying.

There are also other ways of communicating besides video calls. Text messaging, email, and phone calls can be better than video conferencing. For instance, one study found that during a voice-only call, participants conveyed some information more accurately than during a video call. Even letters have their upsides. One study found that handwriting a thank-you note makes recipients much happier than we expect. Another showed therapeutic benefits for those who write them.

There are also times when no communication works best of all. A recent experiment found that teams that silently solved a puzzle together tended to outperform those that spoke as they worked. Sometimes it’s best to simply embrace the silence.

This article is republished from The Conversation by Andre Spicer, Professor of Organizational Behavior, Cass Business School, City, University of London, under a Creative Commons license. Read the original article.

Where’s XR at today and what does it mean for your company?

Interested in XR? Join the RISE Spotlight online event on XR for free on December 15th!

While XR (extended reality) technologies have been hyped since 2014, it’s only now, in the midst of the 2020 coronavirus pandemic and global economic crisis, that we are really seeing the true value of virtual, augmented, and mixed reality as vital to future business success.

Across industries including healthcare, manufacturing, education, design, tourism, consumer goods, and marketing, XR is helping companies secure the competitive advantage needed to survive and thrive in the years to come.

The greatest challenge the XR community faces is one the industry created itself. Early hype and evangelical proclamations oversold the limited abilities of VR and AR technologies in the early days, fuelling disappointed expectations that the industry has spent years trying to recover from.

It may be helpful to remember that while AR and VR have been developing next to each other since the 1960s, the industry as we know it today is less than seven years old.

That said, the improvements over such a relatively short period have been remarkable. Even so, people adapt and adopt at a much slower pace than the big tech companies often presume.

Forecasts and futures

It’s too simple to judge the success of this industry on how many headsets have been sold (or not sold); instead we should focus on the true business cases for XR.

The future of the industry relies on its ability to live up to the promises that XR can save companies time and money, accelerate processes, measure engagement, bring people together in unique and memorable ways, and create new revenue streams that don’t only justify costs but proportionally outweigh them.

It is projected that by 2030 XR will boost the global economy by $1.5 trillion, with the number of jobs enhanced by VR and AR jumping from under one million in 2019 to over 20 million by 2030.

This growth will partially be attributed to the prevalence of edge computing and 5G. Edge computing is the practice of capturing, processing, and analyzing data near where it is created, while 5G is the next generation of mobile networks, offering much higher bandwidth and lower latency.

These innovations will provide the practical infrastructure necessary for mass transmission of large data sets at higher speeds, ensuring a seamless immersive experience anywhere at any time, whether it’s through a mobile, laptop, or headset.

By reducing latency, improving image quality, and enabling new ecosystems of high-volume, real-time data applications, these capabilities will bolster the viability and benefits of XR in our everyday lives.

Fighting Covid, tackling lockdown

One recent example from the medical industry of how VR is being used to save time and money while enabling collaboration is iMD-VR. A team of scientists from the University of Bristol has been using VR and cloud computing as a means to assist the medical community in the global fight against Covid-19.

They’ve created a 3D model researchers can step inside to visualize the unique complexities of the virus, as well as test potential vaccines and cures via molecular dynamics simulations. This level of real-time international collaboration, as well as the ability to visualize and contextualize something invisible to the human eye, wouldn’t be possible otherwise.

It is not only a great illustration of how VR can extend our capabilities beyond our physical means, but also how it can help accelerate vital knowledge sharing across geographic locations that could result in saving lives.

Many industries are turning to XR as a way to cope with their remote collaboration needs during varying stages of lockdowns around the world. Global strategic design and innovation consultancy Seymourpowell uses VR to enable collaborative design across global teams, encouraging employees to dial in to participate in immersive meetings via tablet, phone, laptop, or VR headset.

The platform they use, Reality Works, was originally created in 2017 as a tool for their transport team to collaboratively create full-scale 3D vehicle designs, but they have since adapted it and expanded its use throughout the company, even hosting impactful client pitches in VR and offering the platform to their clients.

Virtual meetings and events

We are seeing evidence that a short-term investment in an immersive platform, combined with a virtual meet-up work culture, can save companies time and money in the long term.

Earlier this year executive training organization The Leadership Network moved all their physical masterclasses into the metaverse via their Gemba VR platform. Removing three nights’ accommodation, business travel and subsistence from the equation saved customers an average of £1,800 per person. It also cut down the hours employees had to be ‘out of office’, gaining companies 44% more productivity time throughout the week.

The events industry has suffered particularly during the pandemic, with many organizers turning to Zoom, Hopin, and Teams as alternatives to physical conferences. Between screen fatigue, the lack of networking options, and every event starting to look and feel the same, there is a good case to be made for the advantages of hosting in VR.

European VR/AR tradeshow Virtuality completely digitized its physical arena to reflect everything you might expect from a conference space: exhibition halls, booths, auditoriums, and networking lounges, all accessible from anywhere in the world via PC, Mac, and Oculus Quest. To accomplish this, they partnered with Manzalab Group, using its digital solution Teemew Event.

Many VR platforms designed to support meetings have expanded their offering to include conferencing features, like the immersive education platform Engage, which can now host up to 150 people at a time. It is unique in offering full-body avatars, the ability to run events inside 360 videos, and spatial recording, which lets people experience a fully 3D replay after the event.

Tracking eyes, hands… and brains

Advancements in eye and hand-tracking capabilities now included in many headsets offer new ways to measure customer engagement and prove ROI.

A global consumer goods corporation partnered with Accenture to build a multi-user VR merchandising evaluation system where they can safely host customer focus groups to evaluate the effectiveness of product placement, advertisements, and store layouts before making costly decisions.

The simulation ultimately resulted in higher product sales and a greater profit margin as they were able to effectively market test before implementation, ensuring that when it came to deployment they got it right the first time.

Taking things one step further, the integration of bio-data or brain-computer interface (BCI) technology into headset experiences can give us an even deeper insight into the nuances of customer behavior and decision-making.

EEG brainwave technology from MyndPlay was integrated into Oculus Go headsets to allow marketers to see which adverts piqued an individual’s attention the most, so they could then offer people more personalized products. With recent studies showing that 80% of customers are more likely to purchase a product or service from a brand that provides personalized recommendations and experiences, this is a trend we may see more of in the years to come.

The role of social

Using augmented reality to let shoppers ‘try before you buy’ has become even more important to retailers in 2020, adding value to the at-home shopping experience.

Earlier this year Gucci partnered with Snapchat for the platform’s first global branded AR shoe try-on lenses. The AR lens overlays a digital version of four pairs of shoes on a mobile user’s feet and allows immediate purchasing via the Snap app. According to Snap data, Snapchat reaches 75% of people ages 13 to 34 and 90% of people ages 13 to 24 in the US, helping brands bond with Gen Z.

Also attempting to engage the next generations, Burger King ran an immersive sweepstakes during the MTV VMAs that asked viewers to scan an on-screen QR code to activate an AR experience featuring rapper Lil Yachty. Viewers were treated to an exclusive performance, as well as coupons. This drove downloads of the company’s app, which has become crucial to many quick-service brands since the pandemic.

The adoption of AR into our everyday lives through social media platforms like Snap and Instagram was so gradual and natural many people don’t even realize they’re using AR technology.

AR has enjoyed faster consumer adoption than VR for several reasons: it’s less expensive to create and free to use, it can be activated through hardware most of us already own and carry with us most of the time, and it serves a very basic function, even if that function is simply to make us look cool online.

However, the evolution of AR and MR (mixed reality) technologies has the potential to be quite profound, fundamentally changing the way we interact with the world around us. Recently acquired by Facebook, Scape Technologies uses AI, computer vision, and cloud computing to geo-pin AR and MR content to specific locations.

Effectively this means that in the future the entire world will become real estate for interactive, shoppable digital signage viewed via phones, glasses, and, sooner than one may think, contact lenses or implants.

While today we might use AR to map a path to physical locations while receiving pop-up ads on our phones, tomorrow these ads may be integrated into and activated by our physical environments, opening up new opportunities for personalization, gamification, and revenue streams.

As we go back to physical environments — whether it be retail shops, museums, or other entertainment facilities — AR activations will play a significant role in our ability to deliver information and engaging experiences while keeping everyone safe.

Moving off mobile

Moving this engagement from mobile to a ‘heads-up’ experience is a space many startups are currently vying for. Predicted to disrupt the dreams of young companies in this arena is Apple, which has secured a number of patents for its forthcoming AR glasses.

The glasses are said to use the iPhone as the computer behind their AR functions, which would instantly give Apple a market advantage, as well as remove the weight and resulting unattractiveness of many of the prototypes we’ve been seeing.

One of Apple’s latest patents focuses on lenses that automatically adjust to the eyesight of the user. It suggests that the optical module associated with each eye will be able to modify displayed images to correct the user’s vision.

News of fresh innovations coming to the world of XR, along with evidence of the formation of subindustries, indicate that the industry is continuing to evolve and mature. As the technologies become more democratized, price points will continue to come down and uptake will continue to go up.

With alpha-innovators beginning to prove ROI as a result of XR, more companies will have to follow suit if they want to stay in the game. While some might view the constant developments and upgrades as a sign to hold off investment until the hype curve has flattened, the companies adopting these technologies today know that by then it will be too late.

RISE Spotlight: XR in Today’s Reality takes place on December 15th. You can find out more about this RISE Spotlight here and register here.

This article was originally published on ISE.
