Webinar: SynerScope with Hortonworks

Do Insurers Spend Too Much Time Understanding Data vs. Finding Value In It?

Recorded Tuesday, April 25, 2017  |  57 minutes

Every insurance company, regardless of line of business, is focused on becoming more data-centric. Data-driven risk assessment is at the heart of the analysis. Understanding and paying valid claims quickly is key to customer retention and loyalty. Creating new insurance offerings to meet market and customer demands is imperative to remain relevant.

Today, insurance companies have more data available to them than ever before, whether it is big data from open data sets, IoT data, customer behavior data, photos from risk assessments, aerial drone inspections of property, or traditional risk/claim/customer data. It’s all data! The challenge remains: how quickly can you ask questions of the data and make insightful business decisions?

During this webinar you will learn how to:

  • Spend little or no time on data hygiene and data transformation
  • Make data accessible across the enterprise through data usage and collaboration
  • Quickly identify what new open data or existing data is most valuable for your risk assessments
  • Leverage deep learning on the underwriting and claims processes for a positive impact on your combined ratio

Speakers
  • Cindy Maike, GM of Insurance Solutions at Hortonworks
  • Monique Hesseling, Strategic Business Advisor at SynerScope
  • Pieter Stolk, VP of Customer Engagement at SynerScope

Data Lakes, Buckets and Coffee Cups

Author: Monique Hesseling

Over the last few years, primarily large carriers, and especially the more “cutting edge” ones (for all the doubters: yes, there is such a thing as a cutting-edge insurer), have invested in building data lakes. The promise was that these lakes would enable them to more easily use and analyze “big data” and gain insights that would change the way we all do business. Change our business for the better, of course: more efficiency, better customer experiences, better products, lower costs. In my conversations with all kinds of carriers, I have learned that I am not the only one who struggles to totally grasp this concept:

A midsize carrier’s CIO in Europe informed me that his company was probably too small for a whole lake, and asked me if he could start with just a “data bucket”. His assumption was that many buckets would ultimately constitute a lake. Another carrier’s CIO explained to me that she is the proud owner of a significant lake. It is just running pretty dry, since she analyzes, categorizes and normalizes all data before dumping it in. She explained that she was filling a big lake with coffee cups full of data. It would take her a long time to get that lake filled.

You might notice that these comments all dealt with the plumbing of a big data infrastructure; the carriers did not touch on analytics and valuable insights yet, let alone on operationalizing insights or measurable business value. Many carriers seem to be struggling with the classical pain point of ETL, even in this new world.

By digging into this issue with big data SMEs, I learned that this ETL issue is more a matter of perception than a technological problem. Data does not have to be analyzed and normalized before being dumped into lakes, and it can still be used for analytical exercises. Hadoop companies such as Hortonworks, Cloudera or MapR, or integrated cloud solutions such as the recently announced Deloitte/SAP HANA/AWS solution, provide at least part of the solution to dive and snorkel in a lake without restricting oneself to dipping a toe in a bucket of very clean and much-analyzed data.

And specialized firms such as SynerScope can prevent weeks, months or even longer of filling that lake with coffee cups full of clean data, by providing capabilities to fill lakes with many different types of data fast (often within days) and at a low cost. Adding their capabilities in specialized deep machine learning to these big data initiatives allows for secure, traceable and access-controlled use of “messy data” and creates quick business value.

Now, for all of us data geeks, it feels very uncomfortable to work with, or enable others to work with, data that has not been vetted at all. But we will have to accept that with the influx of the massive amounts of disparate data sources carriers want to use, it will become more and more cost- and time-prohibitive to check, validate and control every piece of data at the point of intake into the lake. Isn’t it much smarter to take a close look at data at the point where we actually use it? Shifting our thinking that way, coupled with the technology available, will enable much faster value from our big data initiatives. I appreciate that this creates a huge shift in how most of us have learned to deal with data management. However, sometimes our historical truths need to be thrown overboard and into the lake before we can sail to a brighter future.
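
For the data geeks, here is a minimal sketch of what “look at the data at the point of use” (often called schema-on-read) can mean in practice. The records, field names and cleaning rules below are invented for illustration, not any carrier’s actual data:

    import json

    # Raw claim records land in the lake untouched -- no upfront ETL.
    raw_records = [
        '{"claim_id": "C-1001", "amount": "1250.50", "state": "NY"}',
        '{"claim_id": "C-1002", "amount": null, "state": "ny "}',
        'not even valid json',  # messy rows are kept, not rejected at intake
    ]

    def read_claims(lines):
        """Schema-on-read: parse, validate and normalize only at query time."""
        for line in lines:
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                continue  # handle bad rows when you query, not at ingest
            yield {
                "claim_id": rec["claim_id"],
                "amount": float(rec["amount"]) if rec.get("amount") else 0.0,
                "state": (rec.get("state") or "").strip().upper(),
            }

    # Totals per state, computed straight off the raw lake data.
    totals = {}
    for claim in read_claims(raw_records):
        totals[claim["state"]] = totals.get(claim["state"], 0.0) + claim["amount"]
    print(totals)  # {'NY': 1250.5}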

DataWorks Summit Munich and Dreams Coming True

Author: Monique Hesseling

Last week the SynerScope team attended the DataWorks Summit in Munich, “the industry’s premier big data community event”. It was a successful and well-attended event. Attendees were passionate about big data and its applicability to different industries. The more technical people learned (or, in the case of our CTO and CEO, demonstrated) how to get the most value out of data lakes quickly. Business folks were more interested in sessions and demonstrations on how to get actionable insights out of big data, use cases and KPIs. Most attendees came from the EMEA region, although I regularly detected American accents as well.

It has been a couple of years since I last attended a Hadoop/big data event (I believe it was 2013), and it was interesting last week to see the field maturing. Only a few years ago, solution providers and sessions focused primarily on educating attendees on the specifics of Hadoop, data lakes, definitions of big data and theoretical use cases: “wouldn’t it be nice if we could…”. Those days are gone. Already in 2015, Betsy Burton from Gartner discussed in her report “Hype Cycle for Emerging Technologies” that big data had quickly moved through the hype cycle and become a megatrend, touching on many technologies and ways of automation.

This became obvious at this year’s DataWorks Summit. Technical folks asked how to quickly give their business counterparts access to, and control over, big-data-driven analytics. Access control, data privacy and multi-tenancy were key topics in many conversations. Cloud versus on-premises still came up, although the consensus seemed to be that cloud is becoming unavoidable, with some companies and industries adopting it faster than others. Business people inquired about use cases and implementation successes. Many questions dealt with text analysis, although a fair number of people wanted to discuss voice analysis capabilities and options, especially for call center processes. SynerScope’s AI/machine learning case study of machine-aided tagging and identification of pictures of museum artifacts also drew a lot of interest. Most business people, however, had a difficult time coming up with business cases in their own organizations that would benefit from this capability.

This leads me to an observation that was also made in some general sessions: IT and technical people tend to see Hadoop/data lake/big data initiatives as a holistic undertaking, creating opportunities for all sorts of use cases in the enterprise. Business people tend to run their innovation by narrowly defined business cases, which forces them to limit the scope to a specific use case. This makes it difficult to justify and get funding for big data initiatives beyond pilot phases. We would probably all benefit if both business and IT considered big data initiatives holistically, at the enterprise level. As was so eloquently stated in Thursday’s general session panel: “Be brave! The business needs to think bigger. Big Data addresses big issues. Find your dream projects!”

I thought it was a great message, and it must be rewarding for everybody working in the field that we can start helping people with their dream projects. I know that at SynerScope we get energized by listening to our clients’ wishes and dreams and making these into realities. There is still a lot of work to be done to fully mature big data and big insights, and to make dreams come true, but we have all come a long way since 2013. I am sure the next step on this journey to maturity will be equally exciting and rewarding.

SynerScope addresses the “white space” of unknown big data in your data lake

The Netherlands, April 4, 2017 – As every organization fast becomes a digital organization, powerful platforms that extend the use of data are imperative in the enterprise world.

By implementing SynerScope on top of your Hadoop cluster, you can solve the white space of unknown data, thanks to the tight integration of Hadoop’s scalability with the best of SynerScope’s artificial intelligence (AI), including deep learning. The result is a reduced total cost of ownership for working with big data, and great value extracted from your data lake extremely fast.

As data science developments happen in your data lake, you currently encounter data latency problems. Hortonworks covers the lifecycle of data as a core infrastructure play: data in motion, data at rest and data analytics.

Ixiwa, SynerScope’s back-end product, supports and orchestrates data access layers and makes your whole data lake span multiple services.

Hadoop is a platform for distributed storage and processing. Place SynerScope on top of Hadoop and you gain the advantage of deep learning intelligence through SynerScope’s Iximeer, which bootstraps AI projects by providing out-of-the-box interaction between domain experts, analysts and the data itself.

“AI needs good people and good data to get somewhere, so we basically help AI to make the best decision in parallel with first insight, then basic rules, then tuning,” says CTO Jorik Blaas.
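
As a hypothetical, much-simplified illustration of that “first insight, then basic rules, then tuning” progression (the data, rule and thresholds below are invented for this sketch and are not SynerScope’s implementation):

    # Labeled examples a domain expert has already reviewed (invented data).
    claims = [
        {"amount": 900,   "prior_claims": 0, "fraud": False},
        {"amount": 15000, "prior_claims": 4, "fraud": True},
        {"amount": 12000, "prior_claims": 3, "fraud": True},
        {"amount": 1100,  "prior_claims": 1, "fraud": False},
    ]

    # Insight: large amounts from repeat claimants look suspicious.
    # Basic rule: encode that insight as a simple predicate.
    def flag(claim, amount_threshold):
        return claim["amount"] > amount_threshold and claim["prior_claims"] >= 2

    # Tuning: pick the threshold that best matches the labeled outcomes.
    best = max(
        (5000, 10000, 20000),
        key=lambda t: sum(flag(c, t) == c["fraud"] for c in claims),
    )
    print(best)  # 5000 -- the first threshold that classifies all four correctly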

We are proud to announce that as of today, Hortonworks has awarded SynerScope all four certifications (Operations, Governance, YARN and Security) for our platform, a first in the history of their technology partners.

For more information about the awarded badges, go to https://hortonworks.com/partner/synerscope/

If you want to know more about us, you have the opportunity to visit us at the DataWorks Summit in Munich, April 5-6. We would be delighted to welcome you at our booth 1001, as well as at the IBM booth 704, and we will be presenting the breakout session “A Complete Solution for Cognitive Business” at 12:20 pm in room 5.

About SynerScope:

SynerScope enables users to analyze large amounts of structured and unstructured data. Iximeer is a visual analysis platform that presents the big insights arising from AI-driven analysis in a uniform contextual environment, linking together various data sources: numbers, text, sensors, media/video and networks. Users can identify hidden patterns across data without specialized skills.

It supports collaborative data discovery, thereby reducing the effort required for cleaning and modelling data. Ixiwa ingests data, generates metadata from both structured and unstructured files, and loads data into an in-memory database for fast interactive analysis. The solutions are delivered as an appliance or in the cloud. SynerScope can work with a range of databases, including SAP HANA, as well as a number of NoSQL and Hadoop sources.
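
As a generic, illustrative sketch of that ingest-and-index pattern (not Ixiwa’s actual API; the file contents and names are assumptions), the flow could look like:

    import csv, io, sqlite3

    # Ingest: a raw structured file arrives (invented sample data).
    raw_csv = io.StringIO("policy_id,premium\nP-1,1200\nP-2,950\n")
    rows = list(csv.DictReader(raw_csv))

    # Generate metadata describing what was ingested.
    metadata = {"columns": list(rows[0].keys()), "row_count": len(rows)}
    print(metadata)  # {'columns': ['policy_id', 'premium'], 'row_count': 2}

    # Load into an in-memory database for fast interactive analysis.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE policies (policy_id TEXT, premium REAL)")
    db.executemany(
        "INSERT INTO policies VALUES (?, ?)",
        [(r["policy_id"], float(r["premium"])) for r in rows],
    )
    print(db.execute("SELECT AVG(premium) FROM policies").fetchone()[0])  # 1075.0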

SynerScope operates in the following sectors: Banking, Insurance, Critical Infrastructure, and Cyber Security. Learn more at Synerscope.com.

SynerScope has strategic partnerships with Hortonworks, IBM, NVIDIA, SAP and Dell.