Tag Archive for: Monique Hesseling

Insurtech: Winning The Battle But Losing The War?

Author: Monique Hesseling

It was exciting to visit InsurTech Connect last week. There was great energy and optimism about what we can collectively innovate for our insurance industry and its customers, and it was wonderful to see insurers, technology providers of all sizes and maturity levels, and capital providers from all around the world coming together to try to move our industry forward. As somebody eloquently pointed out: it was insurtech meeting maturetech.

Having attended many insurance and technology conferences over the years, I really saw progress around innovation this time: we have all spoken about innovation for a long time, and built plans and strategies around it, but this time I saw actual workable and implementable technical applications, and many carriers ready and able to implement, or at least run a pilot.

All of this is wonderful, of course. The one thing that struck me, however, as a possible short-term gain but a longer-term disadvantage is the current focus on individual use cases and related point solutions. Now, I do understand that it is easier to “sell” innovation in a large insurance company if it comes with a clear use case and a tight business plan, including an acceptable ROI, supported by a specific technology solution for that use case. I saw a lot of those last week, and the market seems ready to adopt and implement function-specific analytics and operational solutions around claims, underwriting, customer service, customer engagement and risk management. Each one of these comes with a clear business case and often a great technical solution, created specifically to address that business case. In the short term, this leads to a lot of enthusiasm for insurtech and carrier innovation.

However, to truly innovate and benefit from everything new technologies bring to the table, innovation will have to start taking place at a more foundational level. To access and use all available data sources, structured and unstructured, and benefit fully from the insights they hold, carriers will have to create an enterprise-wide big data infrastructure. Quickly accessing and analyzing all these new and different data types (think unstructured text, pictures or IoT) simply cannot be done in a classical data environment.

Now, creating a new data infrastructure is obviously a big undertaking, so I appreciate that carriers (and therefore tech firms) tend to focus first on smaller, specific, use-case-driven projects. I fear, however, that some insurance companies will end up with a portfolio of disconnected projects and new technologies, which will quickly lead to data integration and business analytics issues and heated discussions between the business and IT. I am starting to see that happen already with some carriers I work with. So I would suggest focusing on the “data plumbing” first: get buy-in for a brave new data world. Get support and funding for a relevant big data infrastructure, in incremental steps. Start with a small lake, for innovation; run some of these use cases on it, and quickly scale up to the enterprise level. It is harder to get support and funding for “plumbing” than for a sexy use case around claims or underwriting, but it seems to be even harder to get the plumbing done when the focus is on individual business use cases. Please do not give up on the “plumbing”: we might win the innovation battle, possibly lots of battles, but lose the war.
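As an illustration only (not a prescription, and with made-up paths and source names), here is a minimal sketch in Python of what the very first increment of that “plumbing” could look like: a small raw zone that lands disparate sources untouched, so analytics and individual use cases can be layered on top later without re-ingesting everything.

from pathlib import Path
from datetime import date
from typing import Iterable
import shutil

LAKE_ROOT = Path("datalake/raw")  # the "small lake" starts as a single raw zone

def land_raw(source: str, files: Iterable[Path]) -> None:
    """Copy source files as-is into raw/<source>/ingest_date=YYYY-MM-DD/."""
    target = LAKE_ROOT / source / f"ingest_date={date.today().isoformat()}"
    target.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy2(f, target / f.name)  # no parsing, no normalization yet

# Three very different sources can land side by side, untransformed, e.g.:
# land_raw("policy_admin", Path("exports").glob("policies_*.csv"))
# land_raw("claims_notes", Path("scans").glob("*.txt"))
# land_raw("iot_sensors", Path("feeds").glob("*.json"))

The point is not the code itself but the sequencing: land the data first, in one place, and let the shaping happen later, per use case.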

Using Smart City Data for Catastrophe Management: It Still Is All About the Data …

Author: Monique Hesseling

Recently I had the pleasure of working with SMA’s Mark Breading on a study about the development of smart cities and what this means for the insurance industry (Breading, M., 2017. Smart Cities and Insurance: Exploring the Implications. Strategy Meets Action).

In this study, Mark touched on a number of very relevant smart-city-related technical developments that impact the insurance industry, such as driverless cars, smart buildings, improved traffic management, energy reduction, and sensor-driven, better controlled and monitored health and well-being. Mark also explored how existing risks, and the way carriers assess them, might change, and how some of these risks might be reduced by new technologies. New risks, however, will evolve, especially around liability (think cyber, or who is to blame for errors made by technology in a driverless car or in automated traffic management). Mark concluded that insurers will have to be ready to address these changes in their products, risk assessments and risk selection.

Here in the USA, the last few weeks have shown again how much impact weather can have on our cities, lives, communities, businesses and assets. We have all seen the devastation in Texas, Florida, the Caribbean and other areas and felt the need to help, as quickly as possible. Insurance carriers and their teams, too, go out of their way (often literally) to assist their clients in these overwhelmingly trying times. And I learned in working with some of them that insurers and their clients can also benefit from “smart city technologies” in times of massive losses.

I have seen SynerScope and other technologies being used to monitor wind and water levels, overlaying this data with insured risk and exposure data. Augmented with drone and satellite pictures and/or smart-building and energy-grid sensor data (part of which is often publicly available and open), this information gives a very quick first assessment of damage (property and some business interruption) to specific locations. I have also seen satellite and drone pictures of exposures being machine-analyzed, augmented with other data and deployed in an artificial intelligence/machine learning environment that, using similarity analyses, quickly identifies other insured exposures that have most likely incurred similar damage. This enables adjusters to get involved proactively in addressing the potential claim, hopefully limiting damages and getting the insured back to normal as soon as possible. Another use of this application, of course, is fraud detection.
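To make the similarity idea a bit more concrete, here is a hedged, simplified sketch (not SynerScope’s actual implementation): assume each insured location already has an image embedding computed from aerial or drone photos; ranking locations by cosine similarity to one location with adjuster-confirmed damage then surfaces other exposures that likely suffered the same fate. The embeddings below are random placeholders.

import numpy as np

rng = np.random.default_rng(0)
location_ids = [f"policy-{i:04d}" for i in range(1000)]
embeddings = rng.normal(size=(1000, 512))   # placeholder image embeddings
confirmed_damage = embeddings[42]           # a location an adjuster has verified

def cosine_scores(matrix, query):
    # Cosine similarity of every location embedding against the query embedding.
    matrix_norm = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    return matrix_norm @ query_norm

scores = cosine_scores(embeddings, confirmed_damage)
top_matches = np.argsort(scores)[::-1][1:11]  # skip the query location itself
for idx in top_matches:
    print(location_ids[idx], round(float(scores[idx]), 3))

In practice the ranked list would feed an adjuster’s work queue rather than a print statement, but the ranking step itself is this simple once the embeddings exist.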

Smart City Data

Smart city projects use technology to make daily life better for citizens, businesses and governments. The (big) data these projects generate, however, can also be very helpful in dealing with a catastrophe and its aftermath. We don’t always think creatively about re-using our data for new purposes. Between carriers, governments and technology providers, we should explore this more, to make our cities even smarter, also in bad times.

Download the report:

Smart Cities and Insurance


Data Lakes, Buckets and Coffee Cups

Author: Monique Hesseling

Over the last few years, primarily large carriers, and especially the more “cutting edge” ones (for all the doubters: yes, there is such a thing as a cutting-edge insurer), have invested in building data lakes. The promise was that these lakes would enable them to more easily use and analyze “big data” and gain insights that would change the way we all do business. Change our business for the better, of course: more efficiency, better customer experiences, better products, lower costs. In my conversations with all kinds of carriers, I have learned that I am not the only one who struggles to fully grasp this concept:

A midsize carrier’s CIO in Europe informed me that his company was probably too small for a whole lake, and asked me if he could start with just a “data bucket”. His assumption was that many buckets would ultimately constitute a lake. Another carrier’s CIO explained to me that she is the proud owner of a significant lake. It is just running pretty dry, since she analyzes, categorizes and normalizes all data before dumping it in. She explained that she was filling a big lake with coffee cups full of data. It would take her a long time to get that lake filled.

You might notice that these comments all dealt with the plumbing of a big data infrastructure; the carriers did not touch on analytics and valuable insights yet, let alone on operationalizing insights or measurable business value. Many carriers seem to be struggling with the classical pain point of ETL, even in this new world.

Digging into this issue with big data SMEs, I learned that this ETL issue is more a matter of perception than a technological problem. Data does not have to be analyzed and normalized before being dumped into a lake, and it can still be used for analytical exercises. Hadoop companies such as Hortonworks, Cloudera or MapR, or integrated cloud solutions such as the recently announced Deloitte/SAP HANA/AWS solution, provide at least part of the solution to dive and snorkel in a lake without restricting oneself to dipping a toe in a bucket of very clean and much-analyzed data.
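For readers who want to picture this “schema on read” way of working, here is a minimal sketch, assuming a Spark environment and a file-based raw zone like the one described in the first post above; the paths and column names are illustrative, not a real carrier’s setup. Structure is applied only when the data is queried, not when it lands.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-exploration").getOrCreate()

# Nothing was cleaned or normalized at ingestion; structure is applied at read time.
policies = spark.read.option("header", True).csv("datalake/raw/policy_admin/*/")
sensors = spark.read.json("datalake/raw/iot_sensors/*/")

# A quick exploratory aggregate and join, long before any formal ETL exists.
(sensors.groupBy("policy_id")
        .agg(F.max("water_level_cm").alias("peak_water_level_cm"))
        .join(policies, "policy_id")
        .orderBy(F.desc("peak_water_level_cm"))
        .show(20))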

And specialized firms such as SynerScope can prevent weeks, months or even longer of filling that lake with coffee cups full of clean data, by providing capabilities to fill lakes with many different types of data fast (often within days) and at low cost. Adding their capabilities in specialized deep machine learning to these big data initiatives allows for secure, traceable and access-controlled use of “messy data” and creates quick business value.

Now, for all of us data geeks, it feels very uncomfortable to work with, or enable others to work with, data that has not been vetted at all. But we will have to accept that, with the influx of the massive amounts of disparate data sources carriers want to use, it will become more and more cost- and time-prohibitive to check, validate and control every piece of data our businesses use at the point of intake into the lake. Isn’t it much smarter to take a close look at data at the point where we actually use it? Shifting our thinking that way, coupled with the technology available today, will enable much faster value from our big data initiatives. I appreciate that this is a huge shift in how most of us have learned to deal with data management. However, sometimes our historical truths need to be thrown overboard and into the lake before we can sail to a brighter future.

Dataworks Summit Munich and Dreams Coming True

Author: Monique Hesseling

Last week the SynerScope team attended the Dataworks Summit in Munich: “the industry’s premier big data community event”. It was a successful and well-attended event. Attendees were passionate about big data and its applicability to different industries. The more technical people learned (or, in the case of our CTO and CEO, demonstrated) how to get the most value out of data lakes quickly. Business folks were more interested in sessions and demonstrations on how to get actionable insights out of big data, use cases and KPIs. Most attendees came from the EMEA region, although I regularly detected American accents as well.

It has been a couple of years since I last attended a Hadoop/big data event (I believe it was 2013), and it was interesting last week to see the field maturing. Only a few years ago, solution providers and sessions focused primarily on educating attendees on the specifics of Hadoop, data lakes, definitions of big data and theoretical use cases: “wouldn’t it be nice if we could…”. Those days are gone. Already in 2015, Betsy Burton of Gartner noted in her report “Hype Cycle for Emerging Technologies” that big data had quickly moved through the hype cycle and become a megatrend, touching many technologies and ways of automation. This became obvious at this year’s Dataworks Summit. Technical folks asked how to quickly give their business counterparts access to, and control over, big-data-driven analytics. Access control, data privacy and multi-tenancy were key topics in many conversations. Cloud versus on-premises still came up, although the consensus seemed to be that cloud is becoming unavoidable, with some companies and industries adopting it faster than others. Business people inquired about use cases and implementation successes. Many questions dealt with text analysis, although a fair number of people wanted to discuss voice analysis capabilities and options, especially for call center processes. SynerScope’s AI/machine learning case study of machine-aided tagging and identification of pictures of museum artifacts also drew a lot of interest. Most business people, however, had a difficult time coming up with business cases in their own organizations that would benefit from this capability.

This leads me to an observation that was also made in some of the general sessions: IT and technical people tend to see Hadoop/data lake/big data initiatives as a holistic undertaking, creating opportunities for all sorts of use cases across the enterprise. Business people tend to drive their innovation through narrowly defined business cases, which forces them to limit the scope to a specific use case. This makes it difficult to justify and fund big data initiatives beyond the pilot phase. We would probably all benefit if both business and IT considered big data initiatives holistically, at the enterprise level. As was so eloquently stated in Thursday’s general session panel: “Be brave! The business needs to think bigger. Big data addresses big issues. Find your dream projects!” I thought it was a great message, and it must be rewarding for everybody working in the field that we can start helping people with their dream projects. I know that at SynerScope we get energized by listening to our clients’ wishes and dreams and turning them into realities. There is still a lot of work to be done to fully mature big data and big insights, and to make dreams come true, but we have all come a long way since 2013. I am sure the next step on this journey to maturity will be equally exciting and rewarding.

Innovation in Action: Horses, Doghouses and Wintertime…

Author: Monique Hesseling

During a recent long flight from Europe, I read up on my insurance trade publications. And although I now know an awful lot more about blockchain, data security, cloud, big data and IoT than when I boarded in Frankfurt, I felt unsatisfied by my reading (for the frequent flyers: yes, the airline food might have had something to do with that feeling). I missed real-life case studies, examples of all this new technology in action in normal insurance processes, or integrated into down-to-earth daily insurer practices. Maybe not always very disruptive, but at least pragmatic and immediately adding value.

I know the examples I was looking for are out there, so I got together with a couple of insurance and technology friends and we had a great time identifying and discussing them. For example, the SynerScope team in the Netherlands told me that their exploratory analysis of unstructured data (handwritten notes in claims files, pictures) demonstrated that an unexplained uptick in homeowners claims was caused by events involving horses. Now think about this for a moment: in the classical way of analyzing loss causes, we start with a hypothesis and then either verify or falsify it. Honestly, even in my homeland I do not think any data analyst or actuary would hypothesize that horses were responsible for an uptick in homeowners losses. And obviously “damage caused by horse” is not a loss category on the structured claims input under homeowners coverage either. So until not too long ago, this loss cause either would not have been recognized as significant, or it would have taken analysts an enormous amount of time and a lot of luck to identify it by sifting through massive amounts of unstructured data. The SynerScope team figured it out with one person in a couple of days. Machine-augmented learning can create very practical insights.
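To make that a little more tangible, here is a deliberately tiny sketch of the kind of exploratory pass that can let a “horse” signal surface from free-text claim notes. It is not SynerScope’s method, just word-frequency lift between an uptick period and a baseline period, run on invented sample notes.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

baseline_notes = [
    "water damage to kitchen ceiling after pipe burst",
    "storm blew shingles off roof, minor leak in attic",
    "burglary, rear door forced, electronics taken",
]
uptick_notes = [
    "neighbour's horse broke through fence and kicked in garage door",
    "horse escaped paddock, trampled garden, damaged conservatory glass",
    "stable door and fence damaged, horse loose on the property overnight",
]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(baseline_notes + uptick_notes).toarray()
n_base = len(baseline_notes)

# Per-term counts in each period, with add-one smoothing to avoid division by zero.
base_counts = counts[:n_base].sum(axis=0) + 1
uptick_counts = counts[n_base:].sum(axis=0) + 1
lift = uptick_counts / base_counts

terms = np.array(vec.get_feature_names_out())
for idx in np.argsort(lift)[::-1][:5]:
    print(f"{terms[idx]:15s} lift={lift[idx]:.1f}")  # "horse" floats to the top

Real claims files are messier (handwriting, pictures, thousands of notes), which is where machine-augmented tooling earns its keep, but the underlying idea of letting the data propose the hypothesis is the same.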

In our talks, we discovered these types of examples all over the world. Here in the USA, a former regional executive at a large carrier told me that she had found an uptick in house fires in the winter in the South. One would assume that people mistakenly set their houses on fire in the winter with fireplaces, electric heaters and the like, trying to stay warm. Although that is true, a significant part of the house fires in rural areas was caused by people putting heating lamps in doghouses: to keep Fido warm. Bad idea. Again, there was no loss code for “heating lamp in doghouse” in structured claims reporting processes, nor was it a hypothesis that analysts thought to pose. So it took years of trending loss data before the carrier noticed this risk and took action to prevent and mitigate these dreadful losses. Exploratory analysis of unstructured claims file information in a deep machine learning environment, augmented with domain expertise and a human eye (as in the horse example I mentioned earlier), would have identified this risk much faster. We went on and on about case studies like these.

Now, although I am a great believer in and firm supporter of healthy disruption in our industry, I think we can also support innovation by assisting carriers with these kinds of very practical use cases and value propositions. We might want to focus on practical applications that can be supported by business cases, augmented with some less business-case-driven innovation and experimentation. I firmly believe that a true partnership between carriers, insurtech firms and distribution channels, and a focus on innovation around real-life use cases, will allow for fast incremental innovation and will keep everybody enthusiastic about the opportunities of the many new and exciting technologies available. While doing what we are meant to do: protecting homes, horses and human lives.