The Art of Adding Common Sense To Data And Knowing When To Dismiss It…

SynerScope

At SynerScope, helping people, and notably domain experts, make sense of data is our core business. We bring both structured and unstructured data into use, and we focus on combining the most advanced and innovative technologies with the knowledge and brainpower of the people in our customers’ business. Our specially developed visual interface allows domain experts to be plugged directly into the AI analysis process, so they can truly operate side by side with data scientists to drive results.

Thinking of the first needs in business continuity following the corona outbreak, we recently developed a GDPR-compliant Covid-19 radar platform (https://www.synerscope.com/). Our Covid-19 radar supports the tracking and tracing of your personnel and provides HR specialists with ways to intervene rapidly and in a targeted manner should an outbreak hit your company after coming out of lockdown. Returning to work safely requires your HR departments to be equipped with more information to act upon. We can deliver these insights through our novel ways of handling all the data at hand.

Data, Artificial Intelligence & Common Sense

Due to Covid-19, data and the insights it provides have become an absolute must, as organizations base their decisions on data & analytics strategies in order to survive the fallout of the pandemic.

To make sense of the enormous influx of corona-related data coming our way, we need Artificial Intelligence and Machine Learning. However, human common sense & creativity are equally needed to teach AI its tricks, as data only becomes meaningful when there is context.

This will be all the more impactful as the call to rescue the economy and reopen companies sooner rather than later grows stronger. We have to take into account that we need to track, trace and isolate any cluster of direct contacts around new corona cases as quickly as possible, so as not to hinder any progress made, while also adhering to the prevailing GDPR rules in Europe.

AI Wizardry: it all stems from common sense

For any automated AI to deliver its best value, the models need training and the data input needs to be well selected. Training AI and selecting data both depend on human sense-making, and are thus best applied early on in the process. The quality and speed of human information discovery and knowledge generation depend on both the expertise and the richness of context in the data. Luckily, that same AI can help experts digest a much wider context faster than ever before. When you couple a large diversity of expertise with a wider context of data, the outcome of the analytical process clearly wins.

When and how do you best involve those people that bring context and common sense to the table? They add the most value when locating useful data resources and validating the outcomes of algorithmic operations. Leveraging AI should therefore be an ongoing process: with the help of automated, unsupervised AI, domain experts and data scientists identify data sources, amass and sort the data, and then the data scientists add some more AI and ML wizardry. The outcomes are presented to the people with the best common sense and plenty of domain experience. Based on their judgment, the algorithms can be optimized and the cycle repeated; a minimal sketch of such a loop follows below. Following this approach, companies are able to accelerate their progress in AI and learn and improve faster. Equipped with the right tooling and a small helping hand from data scientists, domain experts will surely find their way in and around the data themselves: we believe these so-called citizen data scientists have a big future role to play in gaining more insight from data!
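To make this loop concrete, here is a minimal, hypothetical sketch in Python: unsupervised clustering proposes structure, a domain expert labels a small sample, and a supervised model is trained on that judgment before the cycle repeats. The data, the expert_label placeholder and the threshold are invented for illustration and do not represent SynerScope’s actual pipeline.

```python
# Hypothetical sketch of the human-in-the-loop cycle described above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
records = rng.normal(size=(500, 8))          # stand-in for amassed, sorted data

# Step 1: unsupervised AI groups the data without any labels.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(records)
print("cluster sizes:", np.bincount(clusters))

# Step 2: domain experts review a small sample and attach their common-sense
# judgment (simulated here with a placeholder function).
def expert_label(record):
    return int(record.sum() > 0)             # placeholder for human judgment

sample_idx = rng.choice(len(records), size=50, replace=False)
labels = np.array([expert_label(records[i]) for i in sample_idx])

# Step 3: data scientists train a supervised model on the expert feedback.
model = RandomForestClassifier(random_state=0).fit(records[sample_idx], labels)

# Step 4: the model scores everything; experts validate the output and the
# loop repeats with more labels, better features or adjusted clusters.
scores = model.predict_proba(records)[:, 1]
print("records flagged for expert review:", int((scores > 0.8).sum()))
```

In practice the expert feedback would come through a visual interface rather than a function call, but the shape of the loop is the same.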

Reasoning by association

People with business sense help to continuously improve AI by injecting their common sense into the AI system. And what’s more: they add a key ingredient that typical AI is still missing, namely the capability for causal reasoning and reasoning by association. Enhancing AI with associative and creative thinking is groundbreaking. That’s exactly where our SynerScope technology sets itself apart.

We shouldn’t view AI as a black box; it is too important for that. In our vision, humans and technology are complementary: humans should have the overall lead and control, but during the analytics process we should recognize for which steps we hand the controls to the computer and let technology guide us through the data. Think of the self-driving car for a minute: while its ABS is fully automatic, we still keep a hand on the steering wheel.

Unknown unknowns

As always, there is another side to the value of human contribution in AI. People tend to stick to what they already know and to relate to the context they are familiar with, i.e. to keep a steady course. But we want to ‘think outside the box’ and expect AI to help us with that.
Genuinely paradigm-shifting AI should help us master the ‘unknown unknowns’: present us with hidden insights and business opportunities that traditional ways of thinking will never unearth, like the cure for a disease that nobody ever thought of, or the best scenario from a number of variables too large to be handled by the human brain alone.

To select from the patterns of data that AI helps reveal, you will again need people, but a different kind: people who are fresh in the field, not primed by too much knowledge, and not encapsulated in corporate frameworks. With the right technology solutions to help them find their way in the data, they are able to thrive, and your company along with them.

Enabling humans to master all data

SynerScope creates and uses technology to augment intelligence on its way to a data-driven universe. We have developed groundbreaking inventions, some of which are patented. Our products & solutions are in use at some of the world’s Fortune 1000 companies. We are happy to partner with the world’s leading cloud providers. SynerScope’s solutions provide Intelligence Augmentation (IA) to domain expert users, making them 20-50x faster at extracting information from raw data. We focus on situations and use cases where insight goals and data inputs are by default not well predefined, and where reaching full-context knowledge requires linking data from text, numbers, maps, networked interactions, transactions, sensors and digital images. In short, we combine structured data with unstructured data and everything in between.

If you would like to listen & learn more about our view of the data & analytics market and our solutions, please click here to watch a short video (video SynerScope The Movie: a futuristic vision) or you can contact us by email: info@synerscope.com or call us on +31-88-ALLDATA.

Inspired by Inspire

Some take-aways from Microsoft Inspire 2019

What happens in Vegas should not stay in Vegas… Microsoft Inspire is one of the major events for the global data community. It is exciting to learn about the developing vision of Microsoft as the leader in all things data for everyone.

From 14-18 July 2019, Microsoft Inspire invited its partners from all over the world to Las Vegas. As a Microsoft Partner, we were there to learn, network, do business and have fun with other partners and the Microsoft solution specialists, managers and senior leadership. We will share our experiences and thoughts with you in a series of blogs.

In his keynote, Satya Nadella, CEO of Microsoft, covered the increasing intensity and impact of tech on society. Gaming, the modern home and workplace, business applications, IoT devices, AI and Machine Learning… this tech intensity changes the way we live and work. One thing is for sure: we have become data-driven and data will rule the world. “That’s what’s leading to us building out Azure as the world’s computer. We now have 54 data center regions.” (Nadella quote)

To facilitate the new world of data, Microsoft works on the creation of a ‘limitless data estate’ based on Azure Databases and Azure Analytics. Limitless in scale, variety and place: in the cloud and at the edge.

Nadella championed the democratization of AI and of app development. AI will expand to the many sorts of apps, services and devices that we use in daily life. Consumers and workers will benefit from the help of their digital environment in their work and decision-making.

All those emerging innovations need to be created. Nadella forecasts that 500 million apps will be built in the next 5 years, more than we saw emerge in the past 40 years. But who makes them? One of the growing pains of the digital transformation our society is going through is the increasing shortage of application developers.

Citizen development, by ordinary users who build and then share their applications, will help to meet this huge challenge. The tools for easy, no-code development are here to stay, allowing everybody to become a developer.

Big Data is Dead, long live Big Data AI.

Nadella’s keynote was also about the big numbers: of connected devices (50 billion in 10 years’ time), of users (3.2 billion worldwide at the end of 2019) and – of course – of data. To illustrate the scale of the data explosion: 90% of all data was created in the last 2 years! And mind this: 73% of the data (old and new) still needs to be analyzed.

This leads to a growing ocean of dark data with hidden treasures beneath its surface. You only need to find them and bring them to the surface! And that’s exactly where SynerScope comes in: we can help create value by accelerating and optimizing the safe and controlled exploration and exploitation of your data. As Gil Press noted in Forbes in early July: Big Data is Dead, long live Big Data AI.

In our next blog we will explain how to deal with the ocean of dark data in a safe and cost-effective way. Stay tuned!

Search and Match: Navigate Muddy International Waters

Geopolitics entering Banking IT

Author: Jan-Kees Buenen

The global financial system has been weaponized in the war that the US and the international community are waging on rogue states and individuals. Regulators harness the international financial and banking system for strategic-political purposes. Countries like North Korea, Iran and Russia are hit with sanctions. High-level individuals and their business entities are also targeted: oligarchs in Putin’s circle have been blacklisted, and recently the US authorities cut off members of the Iranian Islamic Revolutionary Guard Corps from international financial services. Global efforts against terrorism financing are also being intensified through Customer Due Diligence (CDD).

Banks and other financial service providers need to step up their initiatives for identifying suspect transactions, people and businesses. This goes far beyond the traditional detection procedures for money laundering and fraud. Banks falling short may be punished severely, as several European cases have recently demonstrated. There is only one way for financials to meet the challenges that this new geopolitical environment brings, and that is automated leveraging of all their data. All data in all processes should be scrutinized to find suspect people, entities and money, or to spot patterns that may indicate suspect activities. A single version of the truth for the data has become a must-have!

In finding illegal activity, any bank has two scenarios that may apply:

  1. Pattern recognition: look for unusual patterns or for patterns that are typical for suspect actions and transactions. In the long run, this scenario may be very effective, as is proven on a daily basis in the cyber security domain where pattern recognition is now one of the main defense practices. But pattern recognition requires immense and time-consuming efforts in creating a monitoring infrastructure, building a rule base, machine learning and validation of the outcomes. Nevertheless, it’s coming. Augmented analytics, an approach that uses machine learning, natural language processing and other automation tools, is poised to dominate the data analysis and business intelligence markets by 2020, according to Gartner.
  2. Search and match: find the needle in the haystack by vetting all available data for clues. Well-known concepts and initiatives essentially follow this approach, such as Know Your Customer (KYC), Customer Due Diligence (CDD), Periodic Due Diligence (PDD) and Anti-Money Laundering (AML); these are just some of the typical processes that banks and financial service providers have, or should have, in place. The Search and Match scenario also comes with substantial investments in time and money, as workloads increase dramatically while the scope of the associated processes is stretched, and will be stretched even further; regulators are bringing significantly more customers under review. Search and Match processes are to a large extent done manually, driving the need to improve their economics. And while these processes need to scale up, the human factor limits the scaling capabilities of the process. You simply do not resolve the scaling problem by bringing in more people, even if they could be found in an overheated labor market. An illustrative sketch of the matching step follows below.
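As a purely illustrative example of the matching step, the sketch below screens customer names against a small watch list with fuzzy string matching from Python’s standard library. The names, threshold and watch list are fictitious; a production system would add aliases, dates of birth, transliteration handling and full entity resolution.

```python
# Illustrative "search and match" step: fuzzy-screen customer names against a
# watch list so spelling variants still surface for manual review.
from difflib import SequenceMatcher

WATCH_LIST = ["Ivan Petrovich Sidorov", "Blue Harbour Trading Ltd"]  # fictitious

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen(customers, threshold=0.7):
    """Return (customer, watch-list entry, score) for every candidate match."""
    hits = []
    for name in customers:
        for listed in WATCH_LIST:
            score = similarity(name, listed)
            if score >= threshold:
                hits.append((name, listed, round(score, 2)))
    return hits

customers = ["Ivan P. Sidorov", "Blue Harbor Trading Limited", "Jane Doe"]
for hit in screen(customers):
    print(hit)   # candidate matches are routed to a human analyst
```

The threshold is a judgment call: set it too high and variants slip through, too low and analysts drown in false positives, which is exactly where the scaling problem described above starts.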

How to scale?


So how to scale then? Synerscope’s view on the challenge:

  1. A single truth of data in the structured and unstructured domain comes first
  2. Reduce noise / cut the crap
    To keep costs in check, SynerScope delivers a triage solution that reduces the number of files to be inspected, significantly lowering the cycle time for individual client files from 4 hours to 20 minutes per file (see the sketch after this list).
  3. Look into unstructured data
    In order to find a needle, the whole haystack needs to be examined: not only the parts of the haystack that are clean and dry, but also the messy, dirty patches. The same applies to the data sources of the bank. The unstructured data in social media, emails, WhatsApp messages, docs and sheets must be analyzed as well to get a 360° view of the subject of investigation and its relations, not only the ‘easy’ part of clean and structured data in databases. Remember: 80% of data is generally considered unstructured and is left unused for decision-making.
  4. Use visualization as a means for accessing and interrogating data
    The right tooling for visualization can offer much more than just presenting analysis in a pretty way. The tooling helps to maneuver in and between complex data sets in search of those ‘needles’ you need to find.
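The following sketch illustrates the triage idea from step 2 above: score each client file on a few risk indicators so analysts open the riskiest files first. The indicators, weights and sample files are invented for illustration and are not SynerScope’s actual triage model.

```python
# Hypothetical triage scoring: rank client files so the review queue starts
# with the riskiest files and obvious low-risk files are skipped.
from dataclasses import dataclass

@dataclass
class ClientFile:
    client_id: str
    high_risk_country: bool
    cash_intensive: bool
    adverse_media_hits: int
    missing_documents: int

WEIGHTS = {"high_risk_country": 3.0, "cash_intensive": 2.0,
           "adverse_media_hits": 1.5, "missing_documents": 1.0}

def triage_score(f: ClientFile) -> float:
    return (WEIGHTS["high_risk_country"] * f.high_risk_country
            + WEIGHTS["cash_intensive"] * f.cash_intensive
            + WEIGHTS["adverse_media_hits"] * f.adverse_media_hits
            + WEIGHTS["missing_documents"] * f.missing_documents)

files = [
    ClientFile("C-001", True, False, 2, 1),
    ClientFile("C-002", False, False, 0, 0),
    ClientFile("C-003", True, True, 0, 3),
]
for f in sorted(files, key=triage_score, reverse=True):
    print(f.client_id, triage_score(f))   # review queue, riskiest first
```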

The SynerScope platform is a point solution for Search and Match in all its forms. Most companies already have a lot of automation in place. Unfortunately, these technologies do not always make it easy to comply with the new regulations. The SynerScope platform helps you to be compliant without having to replace your existing critical automation, by analyzing and assessing your whole data lake of structured and unstructured data.

SynerScope helps you to steer clear of criminals, adversaries and unwanted customers in a complex global network of transactions. Visit us at www.synerscope.com

How to manage End User Computing and avoid GDPR or IFRS fines

Author: Jan-Kees Buenen

I’ve long said that End User Computing (EUC) is here to stay, whether we like it or not.

EUC applications such as spreadsheets and database tools can provide a significant benefit to companies by allowing humans to directly manage and manipulate data. Unlike rigid systems like ERP, EUC offers flexibility to businesses and users to quickly deploy initiatives in response to market and economic needs.

However, EUC has become the villain of the big data story. The flexibility and speed of EUC often come without lineage, logging and audit capabilities.

The risks of EUC’s incomplete governance and compliance mechanisms are not new. Organizations are well aware of the accidents it can cause: financial errors, data breaches, audit findings. In the context of increasing data regulation (like GDPR and IFRS), companies struggle to embed EUC safely in their information chains.

GDPR and the impact of EUC

The GDPR (General Data Protection Regulation) came into force on May 25, 2018. It is a legal framework that requires businesses to protect the personal data and privacy of European Union citizens.

Article 32 of the GDPR addresses the security of the processing of personal data. These requirements for data apply to EUC as well.

Article 17 provides any individual with the right to be “forgotten”. Companies have to control data precisely, so that no leftover data is lying around in unmonitored applications when a user asks to be deleted from all systems. A minimal sketch of what locating such leftover data can look like follows below.
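As a minimal sketch (not a SynerScope feature description), the snippet below walks a hypothetical shared folder of EUC CSV exports and reports every row that mentions one of a data subject’s identifiers, so those copies can be fed into the erasure workflow. The folder path and identifiers are made up; a real implementation would also cover spreadsheets, mail archives and document stores.

```python
# Locate leftover personal data in EUC CSV files for an Article 17 request.
import csv
from pathlib import Path

EUC_FOLDER = Path("/shares/finance/euc")                      # assumed location
IDENTIFIERS = {"jane.doe@example.com", "NL-19850214-1234"}    # example identifiers

def find_personal_data(folder: Path, identifiers: set):
    """Yield (file, row number) for every CSV row that mentions an identifier."""
    for csv_file in folder.rglob("*.csv"):
        with csv_file.open(newline="", encoding="utf-8", errors="ignore") as fh:
            for row_no, row in enumerate(csv.reader(fh), start=1):
                if any(ident in cell for cell in row for ident in identifiers):
                    yield csv_file, row_no

for path, row_no in find_personal_data(EUC_FOLDER, IDENTIFIERS):
    print(f"personal data found in {path} at row {row_no}")  # feed into erasure workflow
```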

The recent financial penalty of 50 million euro against Google is a concrete example of what may happen to other companies. Under the GDPR, Google was fined for lack of transparency, inadequate information and lack of valid consent regarding ads personalization.

The challenge of EUC applications: they generate data that largely remains in silos, also known as dark data.

IFRS and the impact of EUC

IFRS (International Financial Reporting Standards) aims at bringing transparency, accountability and efficiency to financial markets around the world.

The new compliance requirements, like IFRS 9 and IFRS 17, involve data at a much more detailed level than ever before. Data that currently flows to and from EUC has to be traced, linked and precisely controlled by knowing its content.

A stronger emphasis on the control environment, the workflow and the ability to adjust at a very detailed level is key as disclosure and reporting requirements increase.

Using SynerScope to manage the data linked to End User Computing

Organizations have to recognize that EUC falls under the purview of data governance. Any organization that deals with data – basically every organization – has to manage and control such apps so it is able to act immediately to ensure compliance.

SynerScope solutions offer two key ways to reclaim management and control over data:

1. Single Pane of Glass

The first way to reclaim control is to gather the company’s entire data footprint together: both structured and unstructured data in one unique space, a single pane of glass.

SynerScope offers an advanced analytical approach to include and converge unstructured and semi-structured data sources. All applications from different back-ends are gathered in a unique space. A single, powerful platform for operational analytics that replaces disjointed and disparate data processing silos.

2. Data protection within EUC

The second approach to reclaim control over EUCs is to track and trace all applications, their data and the respective users.

SynerScope combines a top-down overview with all the underlying data records, making it easy to investigate why a certain business metric is off and where the changes came from. It fluently analyzes textual documents and contracts, helping to spot the differences between tens of thousands of documents in the blink of an eye.

Furthermore, it adds an extra layer on top of all data to control outcomes and keep the data in check for governance and compliance.

Two powerful tools to get control and insight into End User Computing Data

SynerScope Ixiwa provides a more effective approach to data catalogue and data lake management for business users. Ixiwa is a data lake (Hadoop- and Spark-based) management product that automatically ingests data, collects metadata about the ingested data and classifies that data for the company. While Ixiwa will often be deployed as a stand-alone solution, it can also be viewed as complementary to third-party data cataloguing tools, which tend to focus on structured data only and/or have only limited unstructured capability.
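As a simplified illustration of what automatic metadata collection at ingest can look like (not Ixiwa’s actual implementation), the sketch below reads a small in-memory CSV and records row counts, inferred column types, null counts and a naive personal-data hint.

```python
# Simplified illustration of metadata collection at ingest time.
import io
import pandas as pd

raw_csv = io.StringIO(
    "policy_id,holder,premium,start_date\n"
    "P-1001,Jane Doe,420.50,2019-01-15\n"
    "P-1002,John Smith,318.00,2019-03-02\n"
)

def ingest_with_metadata(source, dataset_name: str) -> dict:
    df = pd.read_csv(source)
    return {
        "dataset": dataset_name,
        "rows": int(df.shape[0]),
        "columns": {col: str(dtype) for col, dtype in df.dtypes.items()},
        "null_counts": df.isna().sum().to_dict(),
        # naive classification hint: columns that look like personal data
        "possible_pii": [c for c in df.columns if c.lower() in {"holder", "name", "email"}],
    }

print(ingest_with_metadata(raw_csv, "policies_2019"))
```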

SynerScope Iximeer complements Ixiwa. It is a visual analysis tool that has the ability to apply on-demand analytics against large volumes of data, for real-time decision-making.

Figure 1: SynerScope Ixiwa and Iximeer provide a more efficient and visual approach to data management and analytics

What to do next?

If your organization is concerned about the new IFRS or GDPR regulations and you are searching for solutions to ensure compliance, please contact us to learn more.

 

What can 3 world-class Michelin chefs teach us about Big Data?

Author: Jan-Kees Buenen

Heston Blumenthal. Grant Achatz. Joan Roca. Names that are on the list of the most awarded chefs in the world.

They have been leaders in the global gastronomic sector in the last years and are known for their innovative, emotional and modernist approaches. They are famous for fusing food and tech.

And what possibly links these master chefs with big data?

Firstly, they turn common ingredients (“raw data”) into stunning dishes (“new, unique insights”) with the help of science. They have proven that combining science, technology & human touch creates incredible results. One could even go further and say that the combination of these passionate minds and advanced technologies makes unimaginable results possible.

Look at Heston Blumenthal, the mad scientist and genius chef. He has great knowledge of his business and he is completely devoted to his passion. What makes him so successful, however, is bringing science into his routine. Unique insights are generated by putting together everyday ingredients and high-tech utensils, like dehydrators and cartouches. Scientific tools enable him to leverage his creativity.

Secondly, they don’t work in a messy and dirty environment. Their staff is highly organized. The kitchen and their labs are always clean. They only work with the best ingredients. They take utmost care of their mise-en-place, so as never to miss a single ingredient when arranging the plate. It is this final touch, sometimes done in the blink of an eye, that brings together all the efforts that create a high-quality experience. “It’s an immense amount of work in a very strict, almost military-like, environment”, says Grant Achatz.

In Big Data we can learn a lot from these chefs. Cognitive analytics, artificial intelligence and deep learning are the scientific instruments that help create better information. However, feed in messy or poorly understood data, keep poor “data-kitchen” practices, or operate the data preparation and data science stations at too great a distance from business reality, and no science in the world will show good results. When you have a “spaghetti” of models and spreadsheets with messy data, you very likely get “garbage in, garbage out” results.

Finally, these master chefs had the courage to leave the well-trodden paths of cuisine, and they never stop learning. “I’m always pushing the creative envelope and experimenting in the kitchen. But the key to becoming a top chef is taking time to master the fundamentals”, Joan Roca asserts.

Big Data also requires business leaders to master the basics, push the creative envelope and continue experimenting. Data volumes and sources – sensor, speech, images, audio, video – will continue to grow. The volume and speed of data available from digital channels will continue to outpace manual decision-making. Business leaders, like master chefs, will need to leave the path that once made sense in order to succeed and stand out in the data-driven world.

Signature dishes like Sound of the Sea, Apple Balloons and Pig Trotter Carpaccio delight so many of us. To achieve similarly cutting-edge results with data, why not take your inspiration from chefs?

 

Read this article on LinkedIn: https://www.linkedin.com/pulse/what-can-3-world-class-michelin-chefs-teach-us-big-data-buenen/

Why 2019 will be all about End User Computing

Author: Jan-Kees Buenen

As 2019 approaches, it’s time to look back at the IT initiatives that have happened and see what has shaped 2018. End user computing – applications aimed at better integrating end users into the computing environment – will surely be high on the list.

With the pace of change in data monitoring and management in 2018, the world of End User Computing (EUC) looks a lot different than it did just one year ago. Developments in AI, cloud-based technology and analytics brought a new horizon to the scene. 

Some products were replaced, others simply disappeared and several innovations emerged to fill holes many IT pros and business leaders didn’t even know they had. Many of these new technologies were launched to allow businesses to have timely insights.

In recent years, employees often couldn’t solve a task or a problem promptly simply because they lacked data. New technologies allow businesses and users to evolve exponentially in their daily analysis. Real-time decisions are now possible. Companies can quickly deploy solutions to adapt to changes in the most dynamic markets.

A lot of business leaders still lose sleep over some well-known challenges concerning EUC management. A quick search turns up hundreds of cases in 2018 of misstated financial statements, compliance violations, cybersecurity threats and audit findings – all issues resulting from breakdowns in EUC control.

EUC requirements for today’s enterprises are typically complex and costly. “Employees working from multiple devices and remotely represent a tremendous IT task to manage endpoints…”, said Jed Ayres in a blog post on Forbes, “… add to that constant operating system (OS) migration demands and increasing cost pressures, and EUC might be IT’s greatest challenge”.

2018 was also the year in which leading players from different (and unexpected!) sectors tried to help companies solve the challenges around EUC management, with solutions aimed at many of the traditional end user computing issues mentioned above. Some examples:

  • KPMG released a Global Insights Pulse survey that shows a growing number of finance organizations using emerging technologies to enable next-generation finance target operating models
  • Deloitte launched an enterprise-level program for managing EUCs. The initiative provides organizations with a framework for managing and controlling EUC holistically
  • Forrester, for the first time, released an evaluation of the top 12 unified endpoint management solutions available on the market

AND WHAT WILL HAPPEN IN 2019?

Given the increase in the regulation and security constraints, we can expect organizations to continue to face EUC challenges in 2019. It will be crucial for business leaders and IT pros to find solutions that manage, secure and optimize data if they want to succeed in the digital transformation value chain.

New holistic approaches and innovative technologies point to an exciting 2019, with an increasing range of disruptive EUC solutions: innovations deployed by leading players like Microsoft and Citrix as well as by highly specialized companies and start-ups.

As we wind down 2018 and look ahead to a bright new year, yes, it will be all about end user computing.

Sometimes Too Good To Be True Blocks Disruptive Innovation

Author: Jan-Kees Buenen

Over the last few months we have had the opportunity to present our SynerScope Ixiwa solution to many prospective corporations and potential business partners. I have learned a lot from these conversations, both on how to make our offering even more user-friendly and on which technical developments should be prioritized.

Disruptive technology

The most interesting insight I got from all these meetings, however, directly results from having truly disruptive technology: people generally do not believe some of the things Ixiwa can do. They think it is too good to be true. For example, people do not believe that we can tag, analyze, categorize and match the content of a data lake at record level without pre-created metadata. Others struggle to believe that we can match structured and unstructured data without pre-defined search logic. They question whether our patented many-to-many correlator can truly group content, elements or objects into logical clusters. Or they wonder if it is true that business users can easily add and use data sets in the lake without IT support. Frankly stated: they think we oversell the capabilities of our platform.

Faster, high quality and value generating insights

So we learned we need to show what we can do as early as possible in the relationship; we demo on real data, in a real data lake, in a live environment, so that our partners and clients can experience first-hand how our technology works. And when we get to this point, people become enthusiastic and brainstorm about the projects they could do with our technology. Which, of course, is what we want: deploying our technology to help customers and partners get faster, higher-quality and value-generating insights out of their data.

Demo our capabilities

So please do us a favor and give us a chance by allowing us to demo our capabilities, even if you don’t immediately believe our story. We promise we won’t disappoint you.

Insurtech: Winning The Battle But Losing The War?

Author: Monique Hesseling

It was exciting to visit InsurTech Connect last week; there was great energy and optimism about what we can collectively innovate for our insurance industry and customers, and it was wonderful to see insurers, technology providers of all sizes and maturity levels, and capital providers from all around the world come together and try to move our industry forward. As somebody eloquently pointed out: it was insuretech meeting maturetech.

Having attended many insurance and technology conferences over the years, I really saw progress around innovation this time: we have all spoken a lot about innovation for a long time, and built plans and strategies around it, but this time I saw actual workable and implementable technical applications, and many carriers ready and able to implement, or at least run a pilot.

All of this is wonderful, of course. The one thing, however, that struck me as a possible short-term gain but a longer-term disadvantage is the current focus on individual use cases and related point solutions. Now, I do understand that it is easier to “sell” innovation in a large insurance company if it comes with a clear use case and a tight business plan, including an acceptable ROI, supported by a specific technology solution for that use case. I saw a lot of those last week, and the market seems ready to adopt and implement function-specific analytics and operational solutions around claims, underwriting, customer service, customer engagement and risk management. Each one of these comes with a clear business case and often a great technical solution, specifically created to address that business case. In the short term this leads to a lot of enthusiasm for insuretech and carrier innovation.

However, to truly innovate and benefit from everything new technologies bring to the table, innovation will have to start taking place at a more foundational level: to access and use all available data sources, structured and unstructured, and benefit fully from the insights, carriers will have to create an enterprise-wide big data infrastructure. Quickly accessing and analyzing all these new and different data types (think unstructured text, pictures or IoT) just cannot be done in a classical data environment. Now, creating a new data infrastructure is obviously a big undertaking, so I appreciate that carriers (and therefore tech firms) tend to focus first on smaller, specific use-case-driven projects. I fear, however, that some insurance companies will end up with a portfolio of disconnected projects and new technologies, which will quickly lead to data integration and business analytics issues and heated discussions between the business and IT. I am starting to see that happen already with some carriers I work with.

So I would suggest focusing on the “data plumbing” first: get buy-in for a brave new data world. Get support and funding for a relevant big data infrastructure, in incremental steps. Start with a small lake, for innovation. Run some of these use cases on it, and quickly scale up to enterprise level. It is harder to get support and funding for “plumbing” than for a sexy use case around claims or underwriting, but it seems to be even harder to get the plumbing done when the focus is on individual business use cases. Please do not give up on the “plumbing”: we might win the innovation battle, possibly lots of battles, but lose the war.

Using Smart City Data for Catastrophe Management: It Still Is All About the Data …

Author: Monique Hesseling

Recently I had the pleasure of working with SMA’s Mark Breading on a study about the development of smart cities and what this means for the insurance industry (Breading M, 2017. Smart Cities and Insurance: Exploring the Implications. Strategy Meets Action).

In this study Mark touched on a number of very relevant smart-city-related technical developments that impact the insurance industry, such as driverless cars, smart buildings, improved traffic management, energy reduction, and sensor-driven, better controlled and monitored health and well-being. Mark also explored how existing risks, and how carriers assess them, might change, and how these risks might be reduced by new technologies. However, new risks will emerge, especially around liability (think cyber, or who is to blame for errors made by technology in a driverless car or in automated traffic management). Mark concluded that insurers will have to be ready to address these changes in their products, risk assessments and risk selection.

Here in the USA, the last few weeks have shown again how much impact weather can have on our cities, lives, communities, businesses and assets. We have all seen the devastation in Texas, Florida, the Caribbean and other areas and felt the need to help, as quickly as possible. Insurance carriers and their teams, too, go out of their way (often literally) to assist their clients in these overwhelmingly trying times. And I learned in working with some of them that insurers and their clients can benefit from “smart city technologies” also in times of massive losses.

I have seen SynerScope and other technology being used to monitor wind and water levels, overlaying this data with insured risks and exposure data. Augmented with drone and satellite pictures and/or smart building and energy grid sensor data (part of which is often publicly available and open), this information gives a very quick first assessment of damage (property and some business interruption) to specific locations. I have seen satellite and drone pictures of exposures being machine-analyzed, augmented with other data and deployed in an artificial intelligence/machine learning environment that, by using similarity analyses, quickly identifies other insured exposures that most likely have incurred similar damage. This enables adjusters to get proactively involved in addressing the potential claim, hopefully limiting damages and getting the insured back to normal as soon as possible. Another use of this application, of course, is fraud detection.
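A stripped-down sketch of that similarity idea, under the assumption that an upstream model has already turned imagery of insured properties into feature vectors: rank unassessed exposures by their cosine similarity to confirmed losses. The vectors below are synthetic placeholders, not real claims data.

```python
# Rank unassessed insured properties by similarity to confirmed losses.
import numpy as np

rng = np.random.default_rng(7)
confirmed_losses = rng.normal(size=(5, 128))     # embeddings of known damaged sites
unassessed = rng.normal(size=(200, 128))         # embeddings of other insured sites

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T                      # (n_unassessed, n_confirmed)

scores = cosine_similarity(unassessed, confirmed_losses).max(axis=1)
priority = np.argsort(scores)[::-1][:10]          # ten most similar exposures
print("send adjusters to exposures:", priority.tolist())
```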

Smart City Data

Smart city projects use technology to make daily life better for citizens, businesses and government. The (big) data these projects generate, however, can also be very helpful in dealing with a catastrophe and its aftermath. We don’t always think creatively about re-using our data for new purposes. Between carriers, governments and technology providers we should explore this more, to make our cities even smarter, also in bad times.

Download the report:

Smart Cities and Insurance

 

Data Lakes, Buckets and Coffee Cups

Author: Monique Hesseling

Over the last few years, primarily large carriers, and especially the more “cutting edge” ones (for all the doubters: yes, there is such a thing as a cutting-edge insurer), have invested in building data lakes. The promise was that these lakes would enable them to more easily use and analyze “big data” and gain insights that would change the way we all do business. Change our business for the better, of course: more efficiency, better customer experiences, better products, lower costs. In my conversations with all kinds of carriers, I have learned that I am not the only one who struggles to fully grasp this concept:

A midsize carrier’s CIO in Europe informed me that his company was probably too small for a whole lake, and asked me if he could start with just a “data bucket”. His assumption was that many buckets would ultimately constitute a lake. Another carrier’s CIO explained to me that she is the proud owner of a significant lake. It is just running pretty dry, since she analyzes, categorizes and normalizes all data before dumping it in. She explained that she was filling a big lake with coffee cups full of data. It would take her a long time to get that lake filled.

You might notice that these comments all dealt with the plumbing of a big data infrastructure; the carriers did not yet touch on analytics and valuable insights, let alone on operationalizing insights or measurable business value. Many carriers seem to be struggling with the classical pain point of ETL, also in this new world.

By digging into this issue with big data SMEs, I learned that this ETL issue is more a matter of perception than a technological problem. Data does not have to be analyzed and normalized before being dumped into a lake, and it can still be used for analytical exercises. Hadoop companies such as Hortonworks, Cloudera or MapR, or integrated cloud solutions such as the recently announced Deloitte/SAP HANA/AWS solution, provide at least part of the answer for diving and snorkeling in a lake without restricting oneself to dipping a toe in a bucket of very clean and much-analyzed data; a minimal sketch of this schema-on-read idea follows below.
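A minimal schema-on-read sketch, assuming a PySpark environment, a hypothetical lake path and invented column names: raw files are landed as-is, and structure is only inferred and shaped at the moment the data is read for a specific analysis.

```python
# Schema-on-read: land raw files without upfront ETL, shape them only when read.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("lake-schema-on-read").getOrCreate()

# Raw claims exports land in the lake untouched; names and types are inferred
# only when the data is read (path and columns below are hypothetical).
raw = (spark.read
            .option("header", True)
            .option("inferSchema", True)
            .csv("s3a://carrier-lake/raw/claims/"))

# Structure is imposed only now, for this particular analysis.
claims = (raw.withColumnRenamed("clm_amt", "claim_amount")
             .filter(col("claim_amount").isNotNull()))

claims.groupBy("region").count().show()
```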

And specialized firms such as SynerScope can prevent weeks, months or even longer spent filling that lake with coffee cups full of clean data, by providing capabilities to fill lakes with many different types of data fast (often within days) and at low cost. Adding their capabilities in specialized deep machine learning to these big data initiatives allows for secure, traceable and access-controlled use of “messy data” and creates quick business value.

Now, for all of us data geeks, it feels very uncomfortable to work with, or enable others to work with, data that has not been vetted at all. But we’ll have to accept that with the influx of the massive amounts of disparate data sources carriers want to use, it will become more and more cost- and time-prohibitive to check, validate and control every piece of data at the point of intake into the lake. Isn’t it much smarter to take a close look at data at the point where we actually use it? Shifting our thinking that way, coupled with the technology available, will enable much faster value from our big data initiatives. I appreciate that this creates a huge shift in how most of us have learned to deal with data management. However, sometimes our historical truths need to be thrown overboard and into the lake before we can sail to a brighter future.