Tag Archive for: Insurtech

Using Dynamic Data Labelling to Drive Business Value

Dynamic Data Labelling with Ixivault

Before you can derive any value from data, you need to find and retrieve the relevant data. Search lets you do that. For search to work, however, we need two things: a human must define a search term, and the data must be indexed so the computer can find it with enough cost and speed efficiency to keep the user engaged. But search efficiency breaks down under the sheer scale of all available data and the presence of dark data (with no indexes or labels attached), whether you look at it from a cost or a response-time point of view.

Technologies like enterprise search never took off for this exact reason. Without labels, it’s ineffective to ask a system to select results from the data. At the moment of creating the data, the creator knows exactly what a file contains. But as time passes our memories fail, and other people might be tasked with finding and retrieving data long after we’ve moved on. Searching data in enterprise applications often means painstakingly looking up each subject or object recorded. For end-user applications like MS Office, we lack even that possibility. Without good labels, search and retrieval are near impossible. And while the people who create data know exactly what’s in it, the people who come after, and the programs we build to manage that data, cannot perform the same mental hat trick of pulling meaning from unsorted data.

At SynerScope we offer a solution to easily recover data that was either lost over time or vaguely defined from the start. We first lift such ‘unknown’ data into an automated, AI-based sorting machine. Once sorted, we involve a human data specialist, who can then work with sub-groups of data rather than individual files. Working unsupervised, our solution presents the user with the discerning words that characterize each sub-group in relation to the others. In essence, the AI presents the prime label options for the files and content in each sub-group, no matter its size in files, pages, or paragraphs. The human reviewer only has to select and verify a label option, rather than taking on the heavy lifting of generating labels from scratch.
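
As a rough illustration of that workflow, the sketch below clusters a handful of unlabeled documents and surfaces each cluster’s most discerning words as candidate labels for a human to confirm. It is a minimal sketch, not SynerScope’s actual pipeline; the file names, cluster count, and use of scikit-learn are assumptions made for the example.

    # Minimal sketch (not SynerScope's pipeline): cluster unlabeled documents and
    # surface each cluster's most discerning words as candidate labels that a
    # human reviewer only has to confirm. File names and cluster count are illustrative.
    from pathlib import Path
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    paths = ["claim_001.txt", "policy_017.txt", "email_042.txt"]   # hypothetical files
    documents = [Path(p).read_text(encoding="utf-8") for p in paths]

    vectorizer = TfidfVectorizer(stop_words="english", max_features=20000)
    X = vectorizer.fit_transform(documents)

    kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(X)  # unsupervised sorting step
    terms = np.array(vectorizer.get_feature_names_out())

    for cluster_id in range(kmeans.n_clusters):
        # Terms with the highest centroid weight act as label suggestions for this sub-group.
        top = np.argsort(kmeans.cluster_centers_[cluster_id])[::-1][:5]
        print(f"sub-group {cluster_id}: candidate labels -> {', '.join(terms[top])}")

The point of the sketch is only to show how a sub-group’s top terms can stand in for a label that the reviewer merely verifies, rather than writes from scratch.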

Once labeled, the data is ready for your established enterprise data processes. Cataloging, access management, analysis, AI, machine learning, and remediation are common end goals for data after SynerScope Ixivault has generated metadata and labels.

SynerScope also allows for ongoing, dynamic relabeling of data as new needs appear. That’s important in this age of fast digital growth, with a constant barrage of new questions and digital needs. Ixivault’s analysis and information extraction capabilities can evolve and adapt to future requirements with ease, speed, and accuracy.

How Does Unlabeled Data Come About?

Data is constantly created and collected. When employees capture or create data, they are adding to files and logs. Humans are also very good at mentally categorizing data – we can navigate with ease through our most recent data, unsorted and all. Whether that means navigating a stack of papers or nested folders, our associative brain can remember the general idea of what is in each pile of data – as long as that data doesn’t move. But we are very limited in the scale we can handle. We have mental pictures of scholars and professors working in rooms where papers are piled to the ceiling but where little tidying was ever allowed. That paradigm doesn’t hold for digital data in enterprises. Collaboration, analysis, AI, and regulation all put too much pressure on knowing where your data is.

Catalogs and classification solutions can help, but the automation level of the process that fills them is too low. That leads to gaps and backlogs in labeling data. The AI for fully automatic labeling isn’t there yet: cataloging and classifying business documentation is even harder than classifying digital images and video footage.

Digital Twinning and Delivering Value with Data

Before broadband, there was no such thing as a digital twin for people, man-made objects, or natural objects. Only the necessary information was stored in application-based data silos. In 2007, the arrival of the iPhone and the mobile revolution it triggered changed that. Everyone and everything was online, all the time, and constantly generating data. The digital twin – a collection of data representing a real person or a natural or man-made object – was born.

In most organizations, these digital twins remain mostly in the dark. Most organizations collect vast quantities of data on clients, customer cases, accounts, and projects. It stays in the dark because it’s compiled, stored, and used in silos. When the people who created the data retire or move to another company, its meaning and content fade quickly, because no one else knows what’s there or why. And without proper labels, your systems will have a hard time handling any of it.

GDPR, HIPAA, CCPA, and similar regulations force organizations to understand what data they hold about real people, and they demand the same for any historic data stored from the days before those regulations existed.

Regulations evolve, technologies evolve, markets evolve, and your business evolves, all driving highly dynamic changes in what you need to know from your data. If you want to keep up – ensuring that you can use that data to drive business value while avoiding undue risk from business, data privacy, and security regulations – you must be able to search your data. Failing that, you could get caught in a chaotic remediation procedure, where unsorted data doesn’t reduce the turmoil but adds to it.

Dynamic Data Labelling with Ixivault

Ixivault helps you match data to new realities in a flexible, efficient way, with a dynamic, weakly supervised system for data labeling. The application installs in your own secure Microsoft Azure client tenant, using the very data stores you set up and control, so all data always remains securely under your governance. Our solution, and its data sorting power, helps your entire workforce – from LOB to IT – to categorize, classify, and label data by content, essentially lifting it out of the dark.

Your data is then accessible for all your digital processes. Ixivault shows situations and objects grouped by similarity of their documentation and image recordings and allows you to compare groups for differences in content. This simplifies and speeds up the task of assigning labels to the data. Any activity that requires comparison between cases, objects, situations, or data, or a check against set standards, is made simple. Ixivault also improves the quality of data selection, which helps in applications ranging from Know Your Customer and Customer Due Diligence to analytics and AI-based predictions using historical data.

For example, insurance companies can use that data to find comparable cases, match them to risks and premium rates, and thereby identify outliers – allowing the company to act on pricing, underwriting, binding, or all three.

SynerScope’s type of dynamic labelling creates opportunities to match any data, quickly and flexibly. As perceptions and applications of data change over time, you can match data with evolving information extraction needs, change labels as data contexts change, and continue driving value from the data you have at your disposal.

If you want to know more about Ixivault or its dynamic matching capabilities in your organization, contact us for personalized information.

Real-time Insight in All Data, Present and Past

The promise of data-driven working is great: risk-based inspection, new revenue models, lower costs, and better products and services. Every company wants this, but it often fails.
The most important business data cannot be fully unlocked by traditional analysis tools. Why does it go wrong, and how do you convert all your data into insight?

The more you know, the more efficient and the better you can make your products and services. That is why data-driven working is high on the agenda of many organizations. But Artificial Intelligence and BI tools deliver only partially, or not at all, on the promise of data-driven working, because they can only analyze part of the entire mountain of data. And what is an analysis worth if you can only examine half, or a quarter, of the data you have?

Insights are hidden in unstructured data

Many organizations started measuring processes in ERP and CRM systems over the past 20 years. They store financial data, machine data, and all kinds of sensor data. These measurement data are easy to analyze, but they do not tell you where things go wrong across the entire operation.
This so-called structured data provides only partial insight, while you are looking for answers in the analysis of all data. It is estimated that 80% to 90% of all data in organizations is unstructured: uncategorized data stored in systems, notes, e-mails, handwritten notes on work drawings, and all kinds of documents across the organization. This valuable resource remains unexplored.

The unexplored gold mine of unstructured data

Organizations have terabytes of it: project information, notes, invoices, tenders, photos, and films that together can yield an enormous amount of insight. Yet this fast-growing gold mine is more of a data maze. Over the years, digitization took place step by step, process by process, and department by department. During this digitization in slow motion, no one thought to organize all the information in such a way that it could easily be analyzed later.

Artificial Intelligence and BI tools get lost

Departments of factories, offices, and government agencies created their own data worlds through this so-called ‘island automation’: separate silos of application data and process data such as spreadsheets, presentations, invoices, tenders, and texts in all kinds of file formats. Moreover, departments and people all categorize information differently, and not in the structured way a computer would. Not everyone keeps records equally neatly, or categories are missing, so colleagues simply dump a lot of data in the “other” field. The problem is that BI and AI tools cannot properly look into this essential, unstructured information. They lack signage, so they get lost in the maze of unstructured data.

Turning archives into accessible knowledge (and skills)

For many companies the future lies in the past. Most organizations have boxes full of archived material from the pre-digital era, which they are now digitizing at a rapid pace. Decades of acquired knowledge and experience are stored in these archives, but they remain hidden because, like many digital files, they are not well structured. Who ever sorted their project notes or files neatly into categories, if categories were available at all? If you want to use this unstructured data now, it would take hundreds of hours of manual work to analyze. SynerScope’s technology searches terabytes or even petabytes of data within 72 hours and provides immediate answers from all of it.

Unstructured data harbors new revenue models

How does that work? A non-life insurer did not know exactly where 25% of its insurance payments went. So SynerScope automatically examined the raw texts of millions of damage claims from the last 20 years. For claims above 100 euros, the phrase ‘broken screen’ came up immediately. The graph showed that screen breakage was rare until 2010, but then grew explosively. What had happened? The insurer had never created a category for smartphones or tablets. As a result, it missed a major cost item – or, to put it positively, for years it overlooked a new revenue model.
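
A minimal sketch of that kind of analysis is shown below, assuming the claims sit in a flat file with amount, date, and free-text columns; the column names and the file are hypothetical, and pandas stands in for whatever engine actually scans the data at scale.

    # Hedged illustration of the claims example above: count how often an emerging
    # phrase appears per year in claims above 100 euros. Column names and the CSV
    # file are assumptions made for the sketch.
    import pandas as pd

    claims = pd.read_csv("claims_history.csv", parse_dates=["claim_date"])
    large = claims[claims["amount_eur"] > 100]

    # Flag claims whose free text mentions the phrase, then count them per year.
    has_term = large["claim_text"].str.contains("broken screen", case=False, na=False)
    per_year = has_term.groupby(large["claim_date"].dt.year).sum()

    print(per_year)  # a sharp rise after 2010 points at the missing smartphone/tablet category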


Turn data into progress

Thanks to the power of cloud computing in Azure, SynerScope is able to analyze large amounts of data in real time. And it doesn’t matter what kind of data it is: spreadsheets, meeting minutes, drone images, filing cabinets full of invoices, you name it! Do you have hundreds of terabytes or even petabytes of satellite or drone data? Then it will be in the model tomorrow! By analyzing both the present and the past, organizations using SynerScope’s software live up to the promise of data-driven working. Leading companies such as Achmea, ExxonMobil, Stedin, VIVAT and De Volksbank are converting their data into progress with SynerScope’s solution.
Do you also want insight into your present & past to get a grip on the future?
Then request a demo!

SynerScope in DIA Top 100

Proud to have entered the list of the Digital Insurance Agenda Insurtech 100! Thank you for considering us – we are in great company! Click on the picture below to see the full list.

Insurtech: Winning The Battle But Losing The War?

Author: Monique Hesseling

It was exciting to visit InsurTech Connect last week; there was great energy and optimism about what we can collectively innovate for our insurance industry and customers, and it was wonderful to see insurers, technology providers of all sizes and maturity levels, and capital providers from all around the world coming together and trying to move our industry forward. As somebody eloquently pointed out: it was insurtech meeting maturetech.

Having attended many insurance and technology conferences over the years, I really saw progress around innovation this time: we have all spoken a lot about innovation for a long time, and built plans and strategies around it, but this time I saw actual workable and implementable technical applications, and many carriers ready and able to implement, or at least run a pilot.

All of this is wonderful, of course. The one thing, however, that struck me as a possible short-term gain but a longer-term disadvantage is the current focus on individual use cases and related point solutions. Now, I do understand that it is easier to “sell” innovation in a large insurance company if it comes with a clear use case and a tight business plan, including an acceptable ROI, supported by a specific technology solution for that use case. I saw a lot of those last week, and the market seems ready to adopt and implement function-specific analytics and operational solutions around claims, underwriting, customer service, customer engagement, and risk management. Each of these comes with a clear business case and often a great technical solution, specifically created to address that business case. In the short term, this leads to a lot of enthusiasm for insurtech and carrier innovation.

However, to truly innovate and benefit from everything new technologies bring to the table, innovation will have to start taking place at a more foundational level: to access and use all available data sources, structured and unstructured, and benefit fully from the insights, carriers will have to create an enterprise-wide big data infrastructure. Quickly accessing and analyzing all these new and different data types (think unstructured text, pictures, or IoT) just cannot be done in a classical data environment. Now, creating a new data infrastructure is obviously a big undertaking, so I appreciate that carriers (and therefore tech firms) tend to focus first on smaller, specific use-case-driven projects. I fear, however, that some insurance companies will end up with a portfolio of disconnected projects and new technologies, which will quickly lead to data integration and business analytics issues and heated discussions between the business and IT. I am starting to see that happen already with some carriers I work with. So I would suggest focusing on the “data plumbing” first: get buy-in for a brave new data world. Get support and funding for a relevant big data infrastructure, in incremental steps. Start with a small lake, for innovation. Run some of these use cases on it, and quickly scale up to an enterprise level. It is harder to get support and funding for “plumbing” than for a sexy use case around claims or underwriting, but it seems to be even harder to get the plumbing done when the focus is on individual business use cases. Please do not give up on the “plumbing”: we might win the innovation battle, possibly lots of battles, but lose the war.

Using Smart City Data for Catastrophe Management: It Still Is All About the Data …

Author: Monique Hesseling

Recently I had the pleasure to work with SMA’s Mark Breading on a study about the development of smart cities and what this means for the insurance industry (Breading M, 2017. Smart cities and Insurance. Exploring the Implications. Strategy Meets Action).

In this study Mark touched on a number of very relevant smart-city-related technical developments that impact the insurance industry, such as driverless cars, smart buildings, improved traffic management, energy reduction, and sensor-driven, better controlled and monitored health and well-being. Mark also explored how existing risks, and how carriers assess them, might change, and how these might be reduced thanks to new technologies. However, new risks, especially around liability (think cyber, or who is to blame for errors made by technology in a driverless car or in automated traffic management), will emerge. Mark concluded that insurers will have to be ready to address these changes in their products, risk assessments, and risk selection.

Here in the USA, the last weeks have shown again how much impact weather can have on our cities, lives, communities, businesses, and assets. We have all seen the devastation in Texas, Florida, the Caribbean, and other areas and felt the need to help, as quickly as possible. Insurance carriers and their teams, too, go out of their way (often literally) to assist their clients in these overwhelmingly trying times. And I learned in working with some of them that insurers and their clients can benefit from “smart city technologies” also in times of massive losses.

I have seen SynerScope and other technology being used to monitor wind and water levels, overlaying this data with insured risk and exposure data. Augmented with drone and satellite pictures and/or smart building and energy grid sensor data (part of which is often publicly available and open), this information gives a very quick first assessment of damage (property and some business interruption) at specific locations. I have seen satellite and drone pictures of exposures being machine-analyzed, augmented with other data, and deployed in an artificial intelligence/machine learning environment that, using similarity analyses, quickly identifies other insured exposures that most likely have incurred similar damage. This enables adjusters to get involved proactively in addressing the potential claim, hopefully limiting damages and getting the insured back to normal as soon as possible. Another use of this application, of course, is fraud detection.
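
To make that similarity step concrete, here is a minimal sketch, not the production setup described above: it embeds aerial images with a pretrained CNN and uses nearest-neighbor search to flag insured locations that look like a confirmed damage site. The file names, model choice, and libraries (PyTorch/torchvision and scikit-learn) are assumptions made for the example.

    # Hedged sketch: embed aerial images with a pretrained CNN, then find the
    # insured exposures whose imagery most resembles a confirmed damage site.
    import torch
    from torchvision import models
    from sklearn.neighbors import NearestNeighbors
    from PIL import Image

    weights = models.ResNet18_Weights.DEFAULT
    model = models.resnet18(weights=weights)
    model.fc = torch.nn.Identity()          # keep the 512-d embedding, drop the classifier
    model.eval()
    preprocess = weights.transforms()

    def embed(path: str) -> torch.Tensor:
        # One image in, one 512-dimensional embedding out.
        with torch.no_grad():
            return model(preprocess(Image.open(path).convert("RGB")).unsqueeze(0)).squeeze(0)

    paths = ["site_001.jpg", "site_002.jpg", "site_003.jpg"]      # hypothetical portfolio imagery
    embeddings = torch.stack([embed(p) for p in paths]).numpy()

    nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(embeddings)
    _, idx = nn.kneighbors(embed("confirmed_damage.jpg").numpy().reshape(1, -1))
    print("likely similar exposures:", [paths[i] for i in idx[0]])

In practice the neighbor list would feed the adjusters’ work queue, so potentially affected policyholders can be contacted before they even file a claim.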

Smart City Data

Smart city projects use technology to make daily life better for citizens, businesses, and government. The (big) data these projects generate, however, can also be very helpful in dealing with a catastrophe and its aftermath. We don’t always think creatively about re-using our data for new purposes. Between carriers, governments, and technology providers we should explore this more – to make our cities even smarter, also in bad times.

Download the report:

Smart Cities and Insurance

 

Webinar SynerScope with Hortonworks

Do Insurers Spend Too Much Time Understanding Data vs. Finding Value In It?

Recorded Tuesday, April 25, 2017  |  57 minutes

Every insurance company, regardless of line of business, is focused on becoming more data-centric. Data-based risk assessment is at the heart of the analysis. Understanding and paying valid claims quickly is key to customer retention and loyalty. Creating new insurance offerings to meet market and customer demands is imperative to remain relevant.

Today insurance companies have more data available to them than ever before, whether it is big data from open data sets, IoT data, customer behavior data, photos from risk assessments, drone aerial inspections of property, or traditional risk/claim/customer data. It’s all data! The challenge remains: how quickly can you ask questions of the data and make insightful business decisions?

During this webinar you will learn how to:

  • Spend little or no time on data hygiene and data transformation
  • Make data accessible across the enterprise through data usage and collaboration
  • Quickly identify what new open data or existing data is most valuable for your risk assessments
  • Leverage deep learning on the underwriting and claims processes for a positive impact on your combined ratio
Speakers
  • Cindy Maike, GM of Insurance Solutions at Hortonworks
  • Monique Hesseling, Strategic Business Advisor at SynerScope
  • Pieter Stolk, VP of Customer Engagement at SynerScope

SynerScope announces Gold ISV partner status for Hortonworks

January 16, 2017, The Netherlands – SynerScope, the Big Data Analytics innovator, today announced its Gold ISV status for Hortonworks Data Platform (HDP) and Hortonworks DataFlow (HDF). Hortonworks, a leading innovator of open and connected data platforms, has certified SynerScope’s solution and expertise for the insurance industry. As a member of the Hortonworks Partnerworks community, SynerScope is able to develop, test, certify, deploy, and support joint solutions as well as gain access to technical and marketing resources.

Jan-Kees Buenen, CEO of SynerScope, said:  “Insurance companies often need to act really quickly due to suddenly changing circumstances. This requires fast decision-making based on current data.  Customers are keen to find and fully understand data patterns to enable insight and action that directly impact business objectives.  We believe our partnership with Hortonworks will give the insurance industry instruments for real-time, ultra-fast decision making by domain experts for continuous improvement.”

The insurance industry is undergoing a radical transformation, and regulators are demanding greater focus on solvency. The Hortonworks-certified SynerScope solution enables customers to reduce costs and risks, improve efficiency, generate new revenue, drive growth, and comply with regulations. SynerScope has linked its advanced high-speed visualization technology to HDP. This combination provides machine learning and advanced analysis while also helping to avoid database-specific lock-in. NoSQL, search, and in-memory SQL integrated databases complete the technology stack that allows insurance carriers and other enterprises to become data-driven in strategic analysis and operations.

Cindy Maike, general manager for insurance at Hortonworks, commented: “Our customers want insights delivered at the speed of business. SynerScope accelerates customer success with reduced implementation times through its insurance industry knowledge and expertise.”

 

 

SynerScope featured in Financial Times

Excited to be featured in FT!

We are happy with this report on Risk Management Technology in the Financial Times. SynerScope was one of ten companies featured in a full-page article titled “Insurtech startups causing a stir”.

“The start-ups are targeting all parts of insurance. Some focus on distribution, using new technology to reach consumers that traditional insurers miss. Others, like SynerScope, are looking at analytics, helping insurers to use data to make better underwriting decisions. Blockchain — the technology that underpins bitcoin — is increasingly popular, while health insurance has been a big area of start-up activity in the US. Nor have start-ups ignored the potential of the “internet of things” — the growing use of data-collecting devices in everyday items, from cars using telematics systems to connected homes.” (FT, October 3, 2016)

The Financial Times has an average daily readership of 2.2m. The world edition has a daily circulation of over 200k. According to the Global Capital Markets Survey, which measures readership habits among the most senior financial decision makers in the world’s largest financial institutions, the Financial Times is considered the most important business read.


SynerScope demonstrates the power of Artificial Intelligence for Insurers

October 4, 2016, The Netherlands –

This week data analytics specialist SynerScope will use Insurance Analytics Europe in London to showcase the enormous potential of Artificial Intelligence-driven applications for the insurance industry.

The Internet of Things is creating an unprecedented volume of information that threatens to overwhelm conventional means of analysis. Together with AI computing company NVIDIA, SynerScope is using a branch of Artificial Intelligence called Deep Learning to unlock the ‘black box’ of data and exploit its potential to help the insurance industry make better underwriting decisions. SynerScope is used to looking beneath the surface: its background is in MRI scanning and in equipment of the kind used to look beneath the layers of paint on old masters’ artwork. Now SynerScope is applying its expertise to the growing volumes of data that insurers collect, turning them into actionable insights like reducing claims and preventing accidents (see FT article, October 3, 2016).

SynerScope is using the NVIDIA DGX-1, designed as a ‘plug and play’ deep learning supercomputer in a box, to accelerate Deep Learning for the insurance world. Deep Learning, which teaches computers to solve problems and find meaning by training them with huge amounts of data, is replacing handcrafted features with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.

The end goal for SynerScope is to help insurers open up their data lake through the application of deep learning as a complement to their staff’s expertise.

Alastair Houston, Financial Service Industries Sales Manager at NVIDIA, said: “Across the world, industries seeking to realise the potential of Deep Learning are converging on NVIDIA’s computing platform. Now, by partnering with SynerScope at Insurance Analytics Europe, our aim is to educate the insurance sector about the game-changing impact deep learning can have, uncovering valuable insights which would otherwise lie undiscovered in their data lakes.” 

By combining NVIDIA’s deep learning compute platform with its applications for the insurance industry, SynerScope is eliminating the need for substantial data cleaning and data modelling and ensuring efficiency in creating insight. For further information please see SynerScope at booth 10 at the event.

 

About SynerScope

SynerScope is a next-generation platform that provides analytics solutions to help discover critical insights from massive amounts of data, including dark data, and turn them into useful information and insights.

SynerScope combines scientific visualization technologies (of the kind used in MRI scanning), ultrafast predictive analytics, and machine learning on top of its proprietary enterprise data navigation, search, and linking.

SynerScope’s offering is delivered through a tight integration of technologies. Its unique back-end software, Legato 2.0, parses data sources at scale in an instant, giving any analytics project a head start by providing a coherent overview. It works equally well with data for which no reliable data schema is available. From Legato 2.0, the data is ingested into our front-end software, Marcato 4.0, whose screens are designed for collaborative discovery from raw data.

This technology stack provides enterprises with high-speed detection of abnormal behaviour and anomalies in complex data. SynerScope operates in the following sectors: Banking, Insurance, Critical Infrastructure, and now also Cyber Security. Learn more at Synerscope.com.

SynerScope has strategic partnerships with NVIDIA, Hortonworks, IBM, Dell, and SAP.

 

Media contacts

Toby Walsh

+44 777 337 4545

toby@twpr.co

 

Marieke Beijsens

+31 (0)6 2364 4933

marketing@synerscope.com

SynerScope is now a certified partner of Hortonworks

We are thrilled to announce the certification of SynerScope on the Hortonworks Data Platform! As of now, Hadoop-based data can be accessed directly from SynerScope.