April 15, 2015

Artificial Intelligence In Insurance: Virtual or Reality

It is undeniable that massive streams of change are coming to the insurance sector.

Just a few days ago, AXA announced a €100m investment in an InsurTech incubator. Disruptive technology will be a big part of that change, led by data, advanced analytics and the Internet of Things.
Such upcoming disruption will also require the delivery of unique customer engagements supported by disruptive experience design thinking techniques.

Two speakers at a recent ACORD forum explored the topic from different angles. Andrew McQuade, managing director of Kyngswoode Services, an insurance IT services company, introduced the theory behind AI, its early usage within the connected vehicle space and its ability to identify cyber crimes.
Alexandre Dalyac, CEO of Tractable, reviewed AI within the context of motor insurance claims servicing, ways to close the leakage gap and improve future pricing strategies.

What is AI?

In general, AI refers to a program, a system or a series of algorithms that can mimic or re-create the thought processes and intelligent behaviour demonstrated by the human brain. AI makes activities such as reasoning, learning, planning, problem solving, making observations, and analysing and categorising information possible. John McCarthy, who coined the term in a 1955 proposal for the Dartmouth Summer Research Project, defined it as "the science and engineering of making intelligent machines". Other experts describe the field as "the study and design of intelligent agents or robots."

Why AI, now?

First, because of current market uncertainties, enterprises cannot count on steady growth. They need sophisticated planning and simulation tools and techniques to identify new growth opportunities that move them beyond today’s reality.

Second, the volume, velocity, variety and validity of data (the "four Vs" of data) demand new ways of thinking and innovating. High volumes of data, combined with many varieties and types of data (i.e. text, pictures, audio, video, blogs), have changed the way information and insights are processed, driving the need to stream and access them in real time, with the quality and context required to speed insightful decision making.

Third, advanced business analytics technologies have matured. At the same time, users’ expectations for simplicity have increased. Today’s analytical applications offer domain-centric capabilities that enterprises can more readily find, buy and use to improve the value delivered by their business operations.

What are the most common types of AI?

Talking to mathematicians and experts in the AI field, I learned that there are many sub-techniques within this space. I have selected five categories, which seem to be the hottest areas at present.

Machine Learning, for instance, builds algorithms that allow computers to learn to do things and make data-driven predictions or decisions, instead of being explicitly programmed or needing programming instructions. Good examples of machine learning applications include Google’s search engine and its driverless car.
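
To make the idea concrete, here is a minimal sketch in Python with scikit-learn: a classifier is trained on labelled examples and then makes data-driven predictions on new data, without anyone writing explicit rules. The dataset (scikit-learn's built-in iris data) and the model choice are purely illustrative assumptions, not something the speakers referenced.

```python
# A minimal illustration of "learning from data instead of explicit rules"
# using scikit-learn. The dataset (iris) and the model choice are
# illustrative assumptions only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Labelled examples: measurements (features) and the species (label)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# No hand-written rules: the model infers the mapping from the training data
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Data-driven predictions on examples the model has never seen
predictions = model.predict(X_test)
print("Accuracy on held-out data:", accuracy_score(y_test, predictions))
```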

Other examples would include forensic analysis, predictive policing and cyber crime detection. In the area of crime detection, big data and machine learning algorithms are combined to make sense of behaviour, patterns and crime statistics to determine types of crime, as well as when and where crimes are likely to take place. A little like the Minority Report story.

I was told that in engineering departments, Machine Learning is sometimes called Pattern Recognition. Pattern recognition can be used to identify known and unknown patterns. It includes voice and speech recognition, of which call centres, IVRs and Siri are good examples of trained use cases. Customers’ speech and accents can now be recognised pretty well by such technology. The Indian bank ICICI offers hands-free voice recognition banking to simplify interactions with its customers. An extreme application of this raised concerns last February, when Samsung launched its smart TV. The product captures private conversations and shares the details with a third-party partner, which then decides how best to use them.
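
As a hedged sketch of how off-the-shelf speech recognition might be tried out in Python, the snippet below uses the open-source SpeechRecognition package to transcribe a recorded call; the audio file name is hypothetical and no claim is made about the systems mentioned above.

```python
# Sketch of transcribing a recorded customer call with the open-source
# SpeechRecognition package (pip install SpeechRecognition).
# "customer_call.wav" is a hypothetical file name used for illustration.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.AudioFile("customer_call.wav") as source:
    audio = recognizer.record(source)  # read the whole file into memory

try:
    # Send the audio to a free web recognizer and print the transcript
    text = recognizer.recognize_google(audio)
    print("Transcript:", text)
except sr.UnknownValueError:
    print("Speech could not be understood")
except sr.RequestError as err:
    print("Recognition service unavailable:", err)
```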

Natural Language Processing is concerned with the interactions between computers and natural human languages. Semantic and sentiment analysis fit this category. For instance, social media sentiment analysis is able to distinguish positive comments from negative ones. Facebook M, amongst others, is a great example of this: it allows Facebook to learn what people are saying in videos and in written comments, to make better predictions about their likes, dislikes, wants and needs. Emotion analysis, also called facial expression recognition, is a growing area of interest in the cognitive customer experience space. Emotion analysis can help an organisation better understand a customer’s true emotions and lifestyle through the analysis of emotional patterns. It can also help improve the organisation’s overall interaction and communication style and approach with each customer.
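
A minimal sentiment-analysis sketch, assuming NLTK's VADER analyser and a couple of invented customer comments, might look like this:

```python
# Minimal sentiment-analysis sketch with NLTK's VADER analyser.
# The example comments are invented purely for illustration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-off download of the sentiment lexicon
analyzer = SentimentIntensityAnalyzer()

comments = [
    "The claims team sorted everything out in two days, brilliant service!",
    "Still waiting for a call back after three weeks. Very disappointed.",
]

for comment in comments:
    scores = analyzer.polarity_scores(comment)
    # 'compound' ranges from -1 (very negative) to +1 (very positive)
    label = "positive" if scores["compound"] >= 0 else "negative"
    print(f"{label:8s} {scores['compound']:+.2f}  {comment}")
```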

Neural Networks are used to find relationships among data points by allowing a system to “learn” new categories and identify known or unknown object properties. Different neural network models are trained using a collection of data from given sources. When successfully trained, the neural networks are used to perform classification or prediction on new data from the same or similar sources. An example often used in this space is the use of neural nets to recognise hand-written digits. Numbers like 1, 6 or 9 can be written differently across cultures, and trying to express this algorithmically can be difficult. Neural networks approach the problem in a different way. The idea is to take a large number of hand-written digits, known as training examples, and then develop a system that can learn from those training examples. As it is trained on more data, the network learns and becomes more accurate in its digit predictions.
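
The hand-written digit example can be sketched in a few lines, assuming scikit-learn's small built-in digits dataset and a simple multilayer perceptron; a production system would use far more data and a larger network.

```python
# Sketch of the handwritten-digit example: a small neural network learns to
# classify digits from labelled training examples. scikit-learn's built-in
# 8x8 digit images are used here purely for illustration.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 pixel images, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# One hidden layer of 64 units; more data and training generally improves accuracy
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("Accuracy on unseen digits:", net.score(X_test, y_test))
```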

Deep Learning is a particular approach that builds and trains neural networks. It is like a series of electronic circuits that learn new things as they go. The outputs are usually a prediction of some sort.

As Alexandre Dalyac noted during his ACORD presentation, Deep Learning applied to image recognition has shown high rates of correctly recognising images, whether the image shows a car, a truck, a cat or many other things.

Deep learning relies on a training process to discover the most useful patterns across a set of input images. One still needs to feed in millions of data points and make choices about the internal layout of the networks before training starts, which means an understanding of the problem is still required, but the automatic discovery process enabled by the AI algorithms makes life far easier.
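
As an illustrative sketch of that training process, the snippet below assumes the Keras API and the public CIFAR-10 dataset (which happens to include cars, trucks and cats); the tiny network and three training epochs are placeholder choices, not the models discussed in the presentation.

```python
# A hedged sketch of the deep-learning idea: a small convolutional network
# trained on the public CIFAR-10 images (which include cars, trucks and cats).
# The architecture below is illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

# Load labelled images: 10 classes such as automobile, truck, cat, ...
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),  # one output per image class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The "training process" described above: the network discovers useful
# visual patterns on its own from the labelled examples.
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```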

As most companies today have to deal with messy volumes of real-world data, over the next few years Deep Learning will gradually become an essential element of the executive business toolkit. In a recent review, Yoshio Nagata rightly observed that “Deep Learning sparks a third generation AI boom”. AI research, he states, enjoyed a second boom in the 1980s; the first came in the 1960s. AI is back, and it is all thanks to deep learning.

Deep learning techniques are continuously evolving and the arrival of new techniques could lead to highly reliable anti-crime and monitoring systems, the next generation of self-driving cars, home and nursing-care robots as well as other business applications like enhanced micro-segmentation, intelligent pricing, prescriptive forecasting and augmented customer experiences.

What does this mean for Insurance?

Marketing: Natural language processing techniques will become invaluable to help an organisation differentiate in the eyes of its customers. A combination of sentiment analysis, machine learning or pattern recognition can help marketers better understand their customers (i.e. via micro-segmentation) and their customers’ needs, and design unique engagement journeys as well as promotional campaigns. One of my clients is already using such techniques to identify what products, solutions, brands or ventures will be successful in the future, and in which location, and to define actionable growth plans based on unique market drivers.
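
A minimal micro-segmentation sketch, assuming a handful of invented behavioural features and k-means clustering from scikit-learn, could look like this:

```python
# Illustrative micro-segmentation sketch: clustering customers on a few
# behavioural features with k-means. The feature names and data are
# hypothetical, invented purely for this example.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical features: [age, annual premium, web visits per month]
customers = np.array([
    [23, 300, 25], [25, 320, 30], [47, 900, 4],
    [52, 950, 2],  [35, 600, 12], [31, 550, 15],
])

scaled = StandardScaler().fit_transform(customers)  # put features on one scale
segments = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(scaled)

for customer, segment in zip(customers, segments):
    print(f"segment {segment}: {customer}")
```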

Underwriting & Intelligent Pricing: There are already initiatives out there combining a variety of relevant data sources with clever pricing and optimisation engines. Adding pattern recognition and deep learning techniques to the mix can help identify fraudulent behaviour, either through an interaction, an online footprint or behavioural DNA. These are the same capabilities used by Amazon to identify the profile and the true identity of the multitude of visitors browsing its site.
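
As a hedged illustration of flagging unusual behaviour, the snippet below assumes invented policy features and uses an isolation forest from scikit-learn; a real fraud model would combine far richer data sources.

```python
# Hedged sketch of spotting unusual (potentially fraudulent) behaviour with
# an isolation forest. The features and data are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical policy features: [claims filed last year, days since policy start]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[1, 400], scale=[1, 100], size=(200, 2))
suspicious = np.array([[9, 12], [12, 5]])  # many claims on brand-new policies
policies = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(policies)
flags = detector.predict(policies)  # -1 = anomaly, +1 = normal

print("Flagged as unusual:", policies[flags == -1])
```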

Claims Management: Machine Learning, and in particular Deep Learning, are expected to become game changers in the claims space because of their ability to sift through massive amounts of data quickly, accurately and consistently. These algorithms will help accelerate claims assessment and identify claims leakages never seen before. They will speed pattern recognition and claims processing, thereby reducing costs while improving customer engagement. They will also facilitate the detection of new sources of claims fraud and help organisations design truly relevant remedial and preventative actions.
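
One hedged sketch of image-based claims triage: a general-purpose pretrained network labels a photo attached to a claim. The file name is hypothetical, and an insurer would fine-tune such a model on damage-specific labels rather than the generic ImageNet classes used here.

```python
# Hedged sketch of image-based claims triage: a pretrained network labels a
# photo attached to a claim. "claim_photo.jpg" is a hypothetical file name,
# and a real system would be fine-tuned on damage-specific categories.
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = ResNet50(weights="imagenet")  # general-purpose image classifier

img = image.load_img("claim_photo.jpg", target_size=(224, 224))
batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

predictions = model.predict(batch)
for _, label, score in decode_predictions(predictions, top=3)[0]:
    print(f"{label}: {score:.2f}")
```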

To learn more about the cool innovative ideas coming to market that will disrupt the insurance sector, consider attending the next ACORD Open Innovation Forums on 19th November 2015.
Also, do not hesitate to share below any innovative uses of AI you might have in mind for the insurance sector. All thoughts are most welcome. You can also tweet them to me at @SabineVdL.
