Is Your Organization Prepared to Manage Dark Data?

The Business Value of Mining Dark Data in Azure Infrastructure

As organizations accelerate their digital transformations, most are moving to the cloud. In 2019, 91% of organizations used at least one cloud service, yet 98% still maintained on-premises servers, often running legacy infrastructure and systems. At the same time, moving to the cloud is a given for organizations that want to take advantage of new tools, dashboards, and data management. The global pandemic has created a prime opportunity for many to make that shift. That also means shifting data from old infrastructure to new. For most, it means analyzing, processing, and dealing with massive quantities of “dark data”.

Crucially, that dark data is considerable. In 2019, Satya Nadella discussed Microsoft’s shift towards a new, future-friendly Microsoft Azure. In that keynote, he explained that 90% of all data had been created in the last 2 years, yet more than 73% of total data had not yet been analyzed. This includes data collected from customers as well as data generated by knowledge workers through EUC (end-user computing: MSFT Office, email, and a host of other applications). Since then, big data creation has only accelerated, and (unfortunately) more dark data exists now than ever before.

As organizations make the shift to the cloud, move away from legacy infrastructure and towards microservices with Azure, now is the time to unpack dark data.


Dealing with (Dark) Data

The specter of dark data has haunted large organizations for more than a decade. Simply having websites, self-service, online tooling, and digital logs means data accumulates. Whether it is collected automatically by analytics and programs, stored by employees who then leave the company, or tucked away as valuable business assets are replaced – dark data exists. Most companies have no real way of knowing what they have, whether it’s valuable, or even whether they’re legally allowed to delete it. Retaining dark data is primarily about compliance. Yet storing data for compliance-only purposes means incurring expenses and risks without deriving any real value. And simply shifting dark data to cloud storage means incurring huge costs for the future organization – when dark data will have grown to even more unmanageable proportions.

Driving Value with Dark Data

Dark data is expensive, difficult to store, and difficult to migrate as you move from on-premises to cloud-hosted infrastructure. But it doesn’t have to be that way. If you know what data you have, you can bring it into scope, delete data you no longer need, and properly manage what you do need. While you’ll never use dark data on a daily, weekly, or even monthly basis, it can drive considerable value while preventing the regulatory issues that might arise if you fail to unlock it.

  • Large-scale asset replacement can require retrieving decades-old data stored on legacy systems.
  • GDPR and other regulations may require showing total data assets, which means unlocking dark data to pass compliance requirements.
  • Performing proper trend analysis means utilizing the full extent of past data alongside present data and future predictions.

Dark Data is a Business Problem

As your organization shifts to the cloud, it can be tempting to leave the “problem” of dark data to IT staff. Here, the choice will often be to discard or shift it to storage without analysis. But dark data is not an IT problem (although IT should have a stake in determining storage and risk management). Instead, dark data represents real business opportunities, risks, and regulatory compliance. It influences trend and performance analysis, it influences business operations, and it can represent significant value.

For example, when Stedin, a Dutch utility company serving more than 2 million homes, was obligated to install 2 million smart meters within 36 months, it turned to dark data. Its existing system, which relied on current asset records in an ERP, achieved only 85% accuracy on “first time right” quotes for engineer visits. The result was millions in avoidable resource costs and significant customer dissatisfaction. With SynerScope’s help, Stedin analyzed historical data from 11 different sources – creating a complete picture of resources and a situational dashboard covering more than 70% of target clients. The result was an increase to 99.8% first-time-right quotes – saving millions and helping Stedin complete the project within the deadline.

SynerScope delivers the tools and expertise to assess, archive, and tag data – transforming dark data from across siloed back-ends and applications into manageable and usable assets in the Azure cloud. This, in turn, gives business managers the tools to decide which data is relevant and valuable, which can be discarded, and which must be retained for compliance purposes.

If you’d like to know more, feel free to contact us to start a discussion around your dark data.

 

Ixivault™ – Complete View & Control of Your Data in the Cloud

Ixivault™ – software to get on top of all your enterprise data

The cloud offers compute, memory, and storage horsepower at an efficient cost to extract every bit of information from data. But realizing those benefits requires new software, not a lift-and-shift of traditional analytical software. And then there is the question of what data to transfer to the cloud. Data silos, structured and unstructured data, and dark data all stand in the way of an easy transfer. GDPR exacerbates this, as it introduces three main challenges that Compliance, Risk, and Data Protection functions must deal with, and on which they base their advice to the LOBs and executive leadership:

  • Does your cloud computing solution match the sensitivity of the data you entrust, or want to entrust, to the cloud? If a solution is chosen in which data processing and/or storage are shared between enterprise customers, the risk of data leakage is present.
  • Which law applies? Again, the choice of software or software platform determines whether sovereignty principles about the physical location at which certain sensitive data is held can be met.
  • The externalization of privacy requires that no cracks exist between the contracts an enterprise makes with software or platform vendors, services companies, and the cloud vendor.

These are real hurdles to taking maximum advantage of the speed, scalability, flexibility, and cost efficiency of the cloud. Much of this potential is lost when an enterprise, at best, feels safe transferring only parts of its data. The holes in this ‘Emmental cheese’ of data get even bigger when we realize that most enterprises have nearly 70% dark data (Satya Nadella at Microsoft Inspire 2019).

SynerScope Ixivault™

We propose scanning all enterprise data content, using the cloud to do so safely. For this purpose, SynerScope introduces its Ixivault™ on Microsoft Azure. The setup runs entirely within the enterprise’s own Azure tenant, deployed through the Azure Marketplace. Loading to the cloud and bulk scanning happen inside this environment (called a vault). Unknown dark data is transformed into known data, and silo-ed data is linked, so that a grounded decision can be made on its release for further use. Data that cannot be released for wider use in the enterprise cloud tenant is deleted from the vault. Data that is safe is published for BI, data science, and domain-expert use by different functional departments in the company. All the original data sets stay safely in the company’s data center.
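To make the bulk-scanning step concrete, here is a minimal sketch, assuming Python with the azure-storage-blob SDK, of inventorying whatever lands in a vault-style container inside your own tenant. The connection string and container name are hypothetical placeholders; this illustrates the idea and is not SynerScope’s actual scanning engine.

```python
# A crude first-pass inventory of a vault-style landing container.
# Assumes the azure-storage-blob SDK (pip install azure-storage-blob);
# the connection string and container name below are placeholders.
from collections import Counter

from azure.storage.blob import BlobServiceClient

CONN_STR = "<your-storage-account-connection-string>"  # placeholder
service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client("vault-landing-zone")  # hypothetical name

inventory = Counter()
total_bytes = 0
for blob in container.list_blobs():
    # Bucket by file extension; a real scan inspects content, not names.
    ext = blob.name.rsplit(".", 1)[-1].lower() if "." in blob.name else "(none)"
    inventory[ext] += 1
    total_bytes += blob.size or 0

print(f"{sum(inventory.values())} blobs, {total_bytes / 1e9:.2f} GB in total")
for ext, count in inventory.most_common(10):
    print(f"  .{ext}: {count}")
```

An extension-level inventory is merely the cheapest first signal of what a data estate contains; content-level scanning, as described above, is what turns unknown data into known data.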

SynerScope turns dark data into bright data, ready to be used by human and machine intelligence combined to extract information and value. Our three main solutions, Ixivault™, Ixiwa™, and Iximeer™, are designed to handle any type of data in any combination, fast and flexibly. Unstructured text, image, and IoT data can easily be linked with structured data from ERP, CRM, and other operational systems. To facilitate widespread, secure, and safe working with the data, we support the (self-)publishing of data to (human-machine) analytic processes in full linkage to the Data Protection Impact Assessment (DPIA) requirements. Granular content-based access control and integrated masking and hashing functions ensure that no eyes will see, and no process will use, any ineligible data (the NoXi principle).
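As an illustration of the masking and hashing idea (not the actual NoXi implementation), the sketch below pseudonymizes an identifier and masks an e-mail address before a record is published; the field names, salt, and record are made up.

```python
# Illustrative masking/hashing before publication; field names, the salt,
# and the record are all invented for this sketch.
import hashlib

def pseudonymize(value: str, salt: str) -> str:
    """Replace an identifying value with a salted SHA-256 digest so records
    stay linkable across data sets without exposing the original value."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

def mask_email(email: str) -> str:
    """Mask the local part of an e-mail address, keeping only the domain."""
    local, _, domain = email.partition("@")
    return f"{'*' * len(local)}@{domain}"

record = {"customer_id": "C-104233", "email": "j.doe@example.com",
          "claim_text": "broken screen after fall"}
published = {
    "customer_id": pseudonymize(record["customer_id"], salt="per-project-secret"),
    "email": mask_email(record["email"]),
    "claim_text": record["claim_text"],  # non-identifying content passes through
}
print(published)
```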

To further serve the compliance, risk, and data protection functions of the company, our systems log every touch point of the data. Humans and machines can be audited for having worked inside the boundaries of the Standard Operating Procedures (SOPs) set with each Project-DPIA. SynerScope efficiently supports providing evidence of always-appropriate use of data.
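A minimal sketch of what logging every touch point can look like, assuming an append-only JSON-lines file; the schema and DPIA reference are illustrative, not SynerScope’s actual log format.

```python
# Append-only audit trail of data touch points (JSON lines); the schema is
# illustrative, not SynerScope's actual log format.
import datetime
import json

def log_touchpoint(logfile: str, actor: str, action: str,
                   dataset: str, dpia_ref: str) -> None:
    """Write one audit record per touch point of the data."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,      # human user or machine identity
        "action": action,    # e.g. "read", "publish", "delete"
        "dataset": dataset,
        "dpia": dpia_ref,    # the Project-DPIA / SOP the action ran under
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_touchpoint("audit.jsonl", actor="analyst@corp.example", action="read",
               dataset="claims-2020", dpia_ref="DPIA-2020-007")
```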

Azure

We have built our solution specifically to operate in your own Azure cloud-tenant environment. As an enterprise, you agree directly with Microsoft Azure on the SLAs regarding data security. Adding SynerScope doesn’t affect these SLAs, and publishing data inside your enterprise is linked to your own Azure AD setup. SynerScope prefers an Azure setup where all data lives in the ADLS/Blob store (and in its original application source system). We believe data should remain open for use in different applications.

SynerScope is proud to be a partner of Microsoft. We continuously look to exploit the expanding functions of Azure modules. Our serverless architecture provides the flexibility to deploy in any enterprise’s Azure tenant; we gladly welcome you to discuss opportunities to take advantage of your own tenant architecture.

 

Ixivault™ on Azure

SynerScope Solution

The flexibility and speed of the SynerScope solution are secured by our patented data scanning, matching, and visualization technology.
We developed all of this to support your move towards a data-driven architecture in the cloud, with full compliance and a view to significant cost reduction.
The solution is tried and tested, to the full satisfaction of our clients in the financial and insurance industry and the critical-infrastructure industry.

“SynerScope’s platform might best be described as a data lake management product that covers everything from automated ingestion, through discovery and cataloguing to data preparation.” (BloorInDetail – 2018)

SynerScope will bring you:

Cost Reduction

  • Data warehouse optimization / application rationalization
  • Optimized use of data, cloud data storage and your cloud data warehouse architecture
  • SynerScope as migration and data publishing tool
  • Broad use of data in the LOB by domain experts, citizen data scientists or data scientists proper.

Efficiency improvements in IT operations

  • Incident/problem root-cause analysis, reducing time to repair
  • Reduction of incidents and problems
  • Usage of historical data and archives

The SynerScope Solution:

  • Extracting and accelerating insight from data with patented products
  • Open technology on Microsoft Azure
  • Deployment via Microsoft marketplace
  • Stand-alone deployment also possible
  • Is complementary to third-party data cataloguing tools, and adds considerably to the ease of use of unstructured data
  • Security- and compliance-proof (traceable & transparent)
  • Fast deployment

SynerScope References:

  • Financial & Insurance industry
  • Critical Infrastructure
  • Government (safety regions and smart city)

Real-time Insight in All Data, Present and Past

The promise of data-driven working is great: risk-based inspection, finding new revenue models, reducing costs, and delivering better products and services. Every company wants this, but it often fails.
The most important business data cannot be fully unlocked by traditional analysis tools. Why does it go wrong, and how do you manage to convert all data into insight?

The more you know, the more efficient and better you can make your products and services. That is why data-driven working is high on the agenda of many organizations. But Artificial Intelligence and BI tools deliver only partially, or not at all, on the promise of data-driven work. That’s because they can only analyze part of the entire mountain of data. And what is an analysis worth if you can only examine half, or a quarter, of the data you have?

Insights are hidden in unstructured data

Many organizations started measuring processes in ERP and CRM systems over the past 20 years. They store financial data, machine data, and all kinds of sensor data. These measurements are easy to analyze, but they do not tell you where things go wrong across the entire operation.
This so-called structured data provides only partial insight, while you are looking for answers in the analysis of all data. An estimated 80% to 90% of all data in organizations is unstructured: uncategorized data stored in systems, notes, e-mails, handwritten notes on work drawings, and all kinds of documents across the organization. This valuable resource remains unexplored.

The unexplored gold mine of unstructured data

Organizations have terabytes of it: project information, notes, invoices, tenders, photos, and films that together can yield an enormous number of insights. Yet this fast-growing gold mine is more of a data maze. Over the years, digitization took place step by step, process by process, and department by department. During this digitization in slow motion, no one thought it useful to coordinate all information in such a way that it could easily be analyzed later.

Artificial Intelligence and BI tools get lost

Departments of factories, offices, and government agencies created their own data worlds through this so-called ‘island automation’: separate silos of application data and process data such as spreadsheets, presentations, invoices, tenders, and texts in all kinds of file formats. Moreover, departments and people all categorize information differently, and not in the structured way a computer would. Not everyone keeps records equally neatly, or categories are missing, so colleagues simply dump a lot of data into the “other” field. The problem is that BI and AI tools cannot properly look into this essential, unstructured information. Lacking signage, they get lost in the maze of unstructured data.

Turning archives into accessible knowledge (and skills)

For many companies, the future lies in the past. Most organizations have boxes full of archived material from the pre-digital era, which they are now digitizing at a rapid pace. Decades of acquired knowledge and experience are stored, but hidden, in these archives because, like many digital files, they are not well structured. Who neatly sorted their project notes or files into different categories, if categories were available at all? If you want to use this unstructured data now, it will take you hundreds of hours of manual work to analyze. SynerScope’s technology searches terabytes or petabytes of data within 72 hours and provides immediate answers from all data.

Unstructured data harbors new revenue models

How does that work? A non-life insurer did not know exactly where 25% of its insurance payments went. So SynerScope automatically examined the raw texts of millions of damage claims from the last 20 years. The term “broken screen” immediately surfaced in claims above 100 euros. The graph showed that screen breakage was rare until 2010, but then grew explosively. What had happened? The insurer had never created a category for smartphones or tablets. As a result, it missed a major cost item – or, to put it positively, for years it overlooked a new revenue model.
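For intuition, the toy sketch below shows how such a term can be counted per claim year in raw claim texts; the data is invented here, while the real analysis ran over millions of claims.

```python
# Counting a suspicious term per claim year; the claims below are invented.
from collections import Counter

claims = [  # (year, raw claim text)
    (2009, "water damage in kitchen"),
    (2011, "dropped phone, broken screen"),
    (2014, "broken screen on tablet"),
    (2014, "bicycle stolen from shed"),
    (2017, "broken screen after fall"),
]

term = "broken screen"
per_year = Counter(year for year, text in claims if term in text.lower())
for year in sorted(per_year):
    print(year, "#" * per_year[year])
```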


Turn data into progress

Thanks to the power of cloud computing in Azure, SynerScope is able to analyze large amounts of data in real time. And it doesn’t matter what kind of data it is: spreadsheets, meeting minutes, drone images, filing cabinets full of invoices, you name it! Do you have hundreds of terabytes or even petabytes of satellite or drone data? Then it will be in the model tomorrow! Thanks to this analysis of the present and the past, organizations using SynerScope’s software live up to the promise of data-driven working. Leading companies such as Achmea, ExxonMobil, Stedin, VIVAT, and De Volksbank are converting their data into progress with SynerScope’s solution.
Do you also want insight into your present & past to get a grip on the future?
Then request a demo!

The Art of Adding Common Sense to Data and Knowing When to Dismiss It…

SynerScope

At SynerScope, helping people, and notably domain experts, make sense of data is our core business. We bring both structured and unstructured data into use, and we focus on combining the most advanced and innovative technologies with the knowledge and brainpower of the people in our customers’ business. Our specially developed visual interface plugs domain experts directly into the AI analysis process, so they can truly operate side by side with data scientists to drive results.

Thinking of the first needs in business continuity following the corona outbreak, we recently developed a GDPR-compliant Covid-19 radar platform (https://www.synerscope.com/). Our Covid-19 radar supports the tracking and tracing of your personnel and provides HR specialists with ways for rapid and targeted intervention should an outbreak hit your company after coming out of lockdown. Returning to work safely requires your HR departments to be equipped with more information to act upon. We can deliver these insights through our novel ways of handling all the data at hand.

Data, Artificial Intelligence & Common Sense

Due to Covid-19, data and the insights it provides have become an absolute must, as organizations base their decisions on data & analytics strategies in order to survive the fallout of the pandemic.

To make sense of the enormous influx of corona-related data coming our way, we need Artificial Intelligence and Machine Learning. However, human common sense and creativity are equally needed to teach AI its tricks, as data only becomes meaningful when there is context.

This becomes all the more impactful as the call to rescue the economy and reopen companies sooner rather than later grows stronger. We must be able to track, trace, and isolate any cluster of direct contacts around new corona cases as quickly as possible, so as not to hinder the progress made, while also adhering to the prevailing GDPR rulings in Europe.

AI Wizardry: it all stems from common sense

For any automated AI to deliver its best value, the models need training and the data input needs to be well selected. Training AI and selecting data both depend on human sense-making, and are thus best applied early in the process. The quality and speed of human information discovery and knowledge generation depend on both the expertise and the richness of context in the data. Luckily, that same AI can help experts digest a much wider context faster than ever before. When you couple a large diversity of expertise with a wider context of data, the outcome of the analytical process clearly wins.

When and how do you best involve those people who bring context and common sense to the table? They add the most value when locating useful data resources and validating the outcomes of algorithmic operations. Leveraging AI should therefore be an ongoing process: with the help of automated, unsupervised AI, domain experts and data scientists can identify data sources, amass and sort the data, and then have data scientists add some more AI and ML wizardry. The outcomes are presented to the people with the best common sense and plenty of domain experience. Based on their judgment, the algorithms can be optimized and the cycle repeated. Following this approach, companies can accelerate their progress in AI and learn and improve faster. Equipped with the right tooling and a small helping hand from data scientists, domain experts will surely find their way in and around the data themselves: we believe these so-called citizen data scientists have a big future role to play in gaining more insight from data!

Reasoning by association

People with business sense help to continuously improve AI by injecting their common sense into the AI system. What’s more, they add a key ingredient that typical AI still misses: the capability for causal reasoning and reasoning by association. Enhancing AI with associative and creative thinking is groundbreaking. That’s exactly where our SynerScope technology sets itself apart.

We shouldn’t view AI as a black box; it is too important for that. In our vision, humans and technology are complementary: humans should have overall lead and control, but during the analytics process we should recognize for which steps we hand the controls to our computer and let technology guide us through the data. Think of the self-driving car for a minute: while its ABS is fully automatic, we still keep our hands on the steering wheel.

Unknown unknowns

As always, there is another side to the value of human contribution to AI. People tend to stick to what they already know and to relate to the context they are familiar with, i.e. to keep a steady course. But we want to ‘think outside the box’ and expect AI to help us with that.
Genuinely paradigm-shifting AI should help us master the ‘unknown unknowns’: presenting hidden insights and business opportunities that traditional ways of thinking will never unearth, such as a cure for a disease that nobody ever thought of, or the best scenario from a number of variables too large for the human brain alone.

To select from the patterns of data that AI helps reveal, you again need people, but of a different kind: people who are fresh in the field, not primed by too much knowledge, and not encapsulated in corporate frameworks. With the right technology solutions to help them find their way in the data, they are able to thrive, and your company along with them.

Enabling humans to master all data

SynerScope creates and uses technology to augment intelligence on the way to a data-driven universe. We have developed groundbreaking inventions, some of which are patented. Our products and solutions are in use at some of the world’s Fortune 1000 companies. We are happy to partner with the leading cloud providers of this world. SynerScope’s solutions provide Intelligence Augmentation (IA) to domain-expert users, making them 20-50x faster at extracting information from raw data. We focus on situations and use cases where insight goals and data inputs are by default not well predefined, and where reaching full-context knowledge requires linking data from text, numbers, maps, networked interactions, transactions, sensors, and digital images. In short, we combine structured data with unstructured data and everything in between.

If you would like to listen & learn more about our view of the data & analytics market and our solutions, please click here to watch a short video (video SynerScope The Movie: a futuristic vision) or you can contact us by email: info@synerscope.com or call us on +31-88-ALLDATA.

Inspired by Inspire

Some take-aways from Microsoft Inspire 2019

What happens in Vegas should not stay in Vegas… Microsoft Inspire is one of the major events for the global data community. It is exciting to learn about the developing vision of Microsoft as the leader in all things data for everyone.

From 14-18 July 2019, Microsoft Inspire invited partners from all over the world to Las Vegas. As a Microsoft Partner, we were there to learn, network, do business, and have fun with other partners and Microsoft’s solution specialists, managers, and senior leadership. We will share our experiences and thoughts with you in a series of blogs.

In his keynote, Satya Nadella, CEO of Microsoft, covered the increasing intensity and impact of tech on society. Gaming, the modern home and workplace, business applications, IoT devices, AI, and Machine Learning… this tech intensity changes the way we live and work. One thing is for sure: we have become data-driven, and data will rule the world. “That’s what’s leading to us building out Azure as the world’s computer. We now have 54 data center regions,” as Nadella put it.

To facilitate the new world of data, Microsoft works on the creation of a ‘limitless data estate’ based on Azure Databases and Azure Analytics. Limitless in scale, variety and place: in the cloud and at the edge.

Nadella propagated the democratization of AI and of app development. AI will expand to the many sorts of apps, services and devices that we use in daily life. Consumers and workers will benefit from the help of their digital environment in their doing and decision-making.

All those emerging innovations need to be created. Nadella forecasts that 500 million apps will be built in the next 5 years, more than emerged in the past 40 years. But who will make them? One of the growing pains of the digital transformation our society is going through is the increasing shortage of application developers.

Citizen development, by ordinary users who build and then share their applications, will help meet this huge challenge. Tools for easy, no-code development are here to stay, allowing everybody to become a developer.

Big Data is Dead, long live Big Data AI.

Nadella’s keynote was also about big numbers: connected devices (50 billion in 10 years’ time), users (3.2 billion worldwide at the end of 2019), and – of course – data. To illustrate the scale of the data explosion: 90% of all data was created in the last 2 years! And mind this: 73% of the data (old and new) still needs to be analyzed.

This leads to a growing ocean of dark data with hidden treasures beneath its surface. You only need to find them and bring them to the surface! And that is exactly where SynerScope comes in: we help create value by accelerating and optimizing the safe, controlled exploration and exploitation of your data. As Gil Press noted in Forbes in early July: Big Data is Dead, long live Big Data AI.

In our next blog we will explain how to deal with the ocean of dark data in a safe and cost-effective way. Stay tuned!

Search and Match: Navigate Muddy International Waters

Geopolitics entering Banking IT

Author: Jan-Kees Buenen

The global financial system has been weaponized in the war that the US and the international community are waging on rogue states and individuals. Regulators harness the international financial and banking system for strategic-political purposes. Countries like North Korea, Iran, and Russia are hit with boycotts, but high-level individuals and their business entities are targeted too. Oligarchs in Putin’s circle have been blacklisted, and recently the US authorities cut off members of the Iranian Islamic Revolutionary Guard Corps from international financial services. Global efforts on anti-terrorism financing have also intensified through Customer Due Diligence (CDD).

Banks and other financial service providers need to step up their initiatives for identifying suspect transactions, people, and businesses. This goes far beyond the traditional detection procedures for money laundering and fraud. Banks falling short may be punished severely, as several European cases have recently demonstrated. There is only one way for financials to meet the challenges this new geopolitical environment brings: automated leveraging of all their data. All data in all processes should be scrutinized to find suspect people, entities, and money, or to spot patterns that may indicate suspect activities. A single version of the truth for the data has become a must-have!

In finding illegal activity, any bank has two scenarios that may apply:

  1. Pattern recognition: look for unusual patterns, or for patterns that are typical of suspect actions and transactions. In the long run, this scenario may be very effective, as is proven daily in the cyber-security domain, where pattern recognition is now one of the main defense practices. But pattern recognition requires immense, time-consuming effort in creating a monitoring infrastructure, building a rule base, machine learning, and validating the outcomes. Nevertheless, it’s coming. Augmented analytics, an approach that uses machine learning, natural language processing, and other automation tools, is poised to dominate the data analysis and business intelligence markets by 2020, according to Gartner.
  2. Search and match: find the needle in the haystack by vetting all available data for clues. Well-known concepts and initiatives follow this approach, such as Know Your Customer (KYC), Customer Due Diligence (CDD), Periodic Due Diligence (PDD), and Anti-Money Laundering (AML) – just some of the typical processes that banks and financial service providers have, or should have, in place. The search-and-match scenario also comes with substantial investments in time and money, as workloads increase dramatically while the scope of the associated processes is stretched ever further. Regulators are bringing significantly more customers under review. Search-and-match processes are still largely manual, driving the need for better economics. And while these processes need to scale up, the human factor limits the scaling capability: you simply do not solve the scaling problem by bringing in more people, even if they could be found in an overheated labor market. (A toy sketch of the matching step follows below.)
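As a toy illustration of the search-and-match step, the sketch below compares customer names against a watchlist using simple string similarity from Python’s standard library. The names and threshold are fictitious, and production screening relies on far more robust entity resolution (transliteration, aliases, dates of birth).

```python
# Fuzzy matching of customer names against a watchlist using only the
# standard library; names and the threshold are illustrative.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Trading FZE", "Dawood Holdings"]

def best_match(name: str, threshold: float = 0.8):
    """Return (watchlist entry, score) if the closest entry clears the
    threshold, else (None, score)."""
    candidate = name.lower().strip()
    score, hit = max(
        (SequenceMatcher(None, candidate, entry.lower()).ratio(), entry)
        for entry in WATCHLIST
    )
    return (hit, score) if score >= threshold else (None, score)

for customer in ["Ivan Petrof", "I. Petrov", "Jane Smith"]:
    hit, score = best_match(customer)
    print(f"{customer!r}: {'FLAG -> ' + hit if hit else 'clear'} ({score:.2f})")
```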

How to scale?


So how to scale then? Synerscope’s view on the challenge:

  1. A single version of the truth across the structured and unstructured data domains comes first.
  2. Reduce noise / cut the crap.
    To keep costs in check, SynerScope delivers a triage solution that reduces the number of files to be inspected, significantly lowering the cycle time per client file from 4 hours to 20 minutes.
  3. Look into unstructured data.
    To find a needle, the whole haystack needs to be examined: not only the parts that are clean and dry, but also the messy, dirty patches. The same applies to a bank’s data sources. The unstructured data in social media, e-mails, WhatsApp messages, docs, and sheets must be analyzed as well to get a 360° view of the subject of investigation and its relations, not just the ‘easy’ part of clean, structured data in databases. Remember: 80% of data is generally considered unstructured and is left unused for decision-making.
  4. Use visualization as a means of accessing and interrogating data.
    The right visualization tooling offers much more than presenting analysis in a pretty way; it helps you maneuver in and between complex data sets in search of the ‘needles’ you need to find.

The SynerScope platform is a point solution for search and match in all its appearances. Most companies already have a lot of automation in place. Unfortunately, these technologies do not always make it easy to comply with the new regulations. The SynerScope platform helps you be compliant without replacing your existing critical automation, by analyzing and assessing your whole data lake of structured and unstructured data.

SynerScope helps you to steer clear of criminals, adversaries and unwanted customers in a complex global network of transactions. Visit us at www.synerscope.com

How to manage End User Computing and avoid GDPR or IFRS fines

Author: Jan-Kees Buenen

I’ve long said that End User Computing (EUC) is here to stay, whether we like it or not.

EUC applications such as spreadsheets and database tools can provide a significant benefit to companies by allowing humans to directly manage and manipulate data. Unlike rigid systems like ERP, EUC offers flexibility to businesses and users to quickly deploy initiatives in response to market and economic needs.

However, EUC has become the villain of the big data story. EUC’s flexibility and speed often come without lineage, logs, or audit capabilities.

The risks of EUC’s incomplete governance and compliance mechanisms are not new. Organizations are well aware of the accidents they cause: financial errors, data breaches, audit findings. In the context of increasing data regulation (like GDPR and IFRS), companies struggle to embed EUC safely in their information chains.

GDPR and the impact of EUC

GDPR (General Data Protection Regulation) came into force on May 25, 2018. It is a legal framework that requires businesses to protect the personal data and privacy of European Union citizens.

Article 32 of the GDPR addresses the security of the processing of personal data. These requirements for data apply to EUC as well.

Article 17 provides any individual the right to be “forgotten”. Companies have to control data precisely, so that no leftover data lies in unmonitored applications when a user asks to be deleted from all systems.
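A minimal sketch of the first step toward honoring such a request: scanning a tree of exported EUC files (shared drives, spreadsheets, notes) for any remaining reference to a person. The path and identifier are hypothetical, and a real erasure process must also cover binary formats, databases, and backups.

```python
# First step of a right-to-be-forgotten sweep: find files that still
# reference a person. Path and identifier are hypothetical placeholders.
import pathlib

def find_references(root: str, identifier: str) -> list[pathlib.Path]:
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; flag for manual review in practice
        if identifier.lower() in text.lower():
            hits.append(path)
    return hits

for path in find_references("./shared-drives", "j.doe@example.com"):
    print("still referenced in:", path)
```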

The recent financial penalty of 50 million euros against Google is a concrete example of what may happen to other companies. Under GDPR, Google was fined for lack of transparency, inadequate information, and lack of valid consent regarding ads personalization.

The challenge of EUC applications: they generate data that largely remains in silos – also known as dark data.

IFRS and the impact of EUC

IFRS (International Financial Reporting Standards) aims at bringing transparency, accountability and efficiency to financial markets around the world.

New compliance requirements, like IFRS 9 and IFRS 17, include data at much more granular levels than ever before. Data that currently flows to and from EUC has to be traced, linked, and precisely controlled by knowing its content.

As disclosure and reporting requirements increase, a stronger emphasis on the control environment, workflow, and the ability to adjust at a very detailed level is key.

Using SynerScope to manage the data linked to End User Computing

Organizations have to recognize that EUC falls under the purview of data governance. Any organization that deals with data – basically every organization – has to manage and control such apps so it can act immediately to ensure compliance.

SynerScope solutions offer 2 key ways to reclaim management and control over data:

1. Single Pane of Glass

The first solution to reclaim control is to gather the company’s entire data footprint together. Both structured and unstructured data in one unique space: a single pane of glass.

SynerScope offers an advanced analytical approach that includes and converges unstructured and semi-structured data sources. Applications from different back-ends are gathered in one unique space: a single, powerful platform for operational analytics that replaces disjointed, disparate data-processing silos.

2. Data protection within EUC

The second approach to reclaim control over EUCs is to track and trace all applications, their data and the respective users.

SynerScope combines a top-down overview with all the underlying data records, making it easy to investigate why a certain business metric is off and where the changes came from. It fluently analyzes textual documents and contracts, helping you spot the differences between tens of thousands of documents in the blink of an eye.
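As a small illustration of surfacing differences between document versions, Python’s standard difflib can reduce two contracts to just their changed lines; the contract text here is invented.

```python
# Reducing two contract versions to their changed lines with difflib;
# the contract text is invented for this sketch.
import difflib

old = ["Coverage limit: EUR 100,000", "Deductible: EUR 500", "Term: 12 months"]
new = ["Coverage limit: EUR 250,000", "Deductible: EUR 500", "Term: 12 months"]

for line in difflib.unified_diff(old, new, fromfile="contract_v1",
                                 tofile="contract_v2", lineterm=""):
    print(line)
```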

Furthermore, SynerScope adds an extra layer on top of all data to control outcomes and keep data in check for governance and compliance.

Two powerful tools to get control and insight into End User Computing Data

SynerScope Ixiwa provides a more effective approach to data cataloguing and data lake management for business users. Ixiwa is a (Hadoop- and Spark-based) data lake management product that ingests data automatically, collects metadata about the ingested data (also automatically), and classifies that data for the company. While Ixiwa will often be deployed as a stand-alone solution, it can also be viewed as complementary to third-party data cataloguing tools, which tend to focus on structured data only and/or have only limited unstructured capability.
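For intuition on automatic metadata collection at ingest, here is a minimal sketch using only the Python standard library; a catalogue like Ixiwa would add content-derived classification on top, so treat this as an illustration of the concept only.

```python
# Minimal automatic metadata collection at ingest time, standard library only.
import datetime
import mimetypes
import os

def collect_metadata(path: str) -> dict:
    """Record basic facts about a file; a catalogue would add content-derived
    tags (language, entities, schema) on top of these."""
    stat = os.stat(path)
    mime, _ = mimetypes.guess_type(path)
    return {
        "path": os.path.abspath(path),
        "bytes": stat.st_size,
        "modified": datetime.datetime.fromtimestamp(stat.st_mtime).isoformat(),
        "mime_type": mime or "application/octet-stream",
    }

# Demonstrate on this script itself; real ingestion would walk a landing zone.
print(collect_metadata(__file__))
```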

SynerScope Iximeer complements Ixiwa. It is a visual analysis tool that can apply on-demand analytics to large volumes of data for real-time decision-making.

Figure 1: SynerScope Ixiwa and Iximeer provide a more efficient and visual approach to data management and analytics

What to do next?

If your organization is concerned about the new IFRS or GDPR regulations and you are searching for solutions to ensure compliance, please contact us to learn more.

 

What can 3 world-class Michelin chefs teach us about Big Data?

Author: Jan-Kees Buenen

Heston Blumenthal. Grant Achatz. Joan Roca. Names that are on the list of the most awarded chefs in the world.

They have been leaders in the global gastronomic sector in the last years and are known for their innovative, emotional and modernist approaches. They are famous for fusing food and tech.

And what possibly links these master chefs with big data?

Firstly, they turn common ingredients (“raw data”) into stunning dishes (“using new unique insights”) with the help of science. They have proven that combining science, technology & human touch creates incredible results. One could even go further and say that the combination of these passionate minds and advanced technologies make un-imaginable results possible.

Look at Heston Blumenthal, the mad scientist and genius chef. He has great knowledge of his business and he’s completely devoted to his passion. What makes him so successful, however, is taking science into his routine. Unique insights are generated by putting together daily ingredients and high-tech utensils, like dehydrators and cartouches. Scientific tools enable him to leverage his creativity.

Secondly, they don’t work in a messy and dirty environment. Their staff is very organized. The kitchen and their labs are always clean. They only work with the best ingredients. They take the utmost care of their mise-en-place, never missing a single ingredient when arranging the plate. It is this final touch, sometimes done in the blink of an eye, that brings together all the efforts that create a high-quality experience. “It’s an immense amount of work in a very strict, almost military-like, environment”, says Grant Achatz.

In Big Data, we could well learn from those chefs. Cognitive analytics, artificial intelligence, and deep learning are the scientific instruments that help create better information. However, feed in messy or poorly understood data, keep poor “data-kitchen” practices, or operate the data preparation and data science stations at too great a distance from business reality, and no science in the world will show good results. When you have a “spaghetti” of models and spreadsheets with messy data, you will very likely get “garbage in, garbage out” results.

Finally, these master chefs had the courage to leave the well-trodden paths of cuisine, and they never stop learning. “I’m always pushing the creative envelope and experimenting in the kitchen. But the key to becoming a top chef is taking time to master the fundamentals”, Joan Roca asserts.

Big Data also requires business leaders to master the basics, push the creative envelope, and continue experimenting. Data volumes and sources – sensor, speech, images, audio, video – will continue to grow. The volume and speed of data available from digital channels will continue to outpace manual decision-making. Business leaders, like master chefs, will need to leave the path that once made sense in order to succeed and stand out in the data-driven world.

Signature dishes like Sound of the Sea, Apple Balloons, and Pig Trotter Carpaccio delight so many of us. To achieve similarly cutting-edge results with data, why not take your inspiration from chefs?

 

Read this article on LinkedIn: https://www.linkedin.com/pulse/what-can-3-world-class-michelin-chefs-teach-us-big-data-buenen/

Why 2019 will be all about End User Computing

Author: Jan-Kees Buenen

As 2019 approaches, it’s time to look back at the IT initiatives that have happened and see what has shaped 2018. End-user computing, applications designed to better integrate end users into the computing environment, will surely be high on the list.

With the pace of change in data monitoring and management in 2018, the world of End User Computing (EUC) looks a lot different than it did just one year ago. Developments in AI, cloud-based technology and analytics brought a new horizon to the scene. 

Some products were replaced, others simply disappeared, and several innovations emerged to fill holes many IT pros and business leaders didn’t even know they had. Many of these new technologies were launched to give businesses timely insights.

In recent years, employees often couldn’t solve a task or problem promptly simply because they lacked data. New technologies allow businesses and users to evolve exponentially in their daily analysis. Real-time decisions are now possible. Companies can quickly deploy solutions to adapt to changes in the most dynamic markets.

A lot of business leaders still lose sleep over some well-known challenges concerning EUC management. A quick search turns up hundreds of cases in 2018 of misstated financial statements, compliance violations, cybersecurity threats, and audit findings – all issues resulting from breakdowns in EUC control.

EUC requirements for today’s enterprises are typically complex and costly. “Employees working from multiple devices and remotely represent a tremendous IT task to manage endpoints…”, said Jed Ayres in a blog post on Forbes, “…add to that constant operating system (OS) migration demands and increasing cost pressures, and EUC might be IT’s greatest challenge”.

2018 was also the year in which leading players from different (and unexpected!) sectors tried to help companies solve the challenges around EUC management. Solutions aimed at solving many of the traditional end user computing issues mentioned above. Some examples:

  • KPMG released a Global Insights Pulse survey that shows a growing number of finance organizations using emerging technologies to enable next-generation finance target operating models
  • Deloitte launched an enterprise-level program for managing EUCs. The initiative provides organizations with a framework for managing and controlling EUC holistically
  • Forrester, for the first time, released an evaluation of the top 12 unified endpoint management solutions available on the market

And what will happen in 2019?

Given the increase in the regulation and security constraints, we can expect organizations to continue to face EUC challenges in 2019. It will be crucial for business leaders and IT pros to find solutions that manage, secure and optimize data if they want to succeed in the digital transformation value chain.

New holistic approaches and innovative technologies point to an exciting 2019, with an increasing range of disruptive EUC solutions – innovations deployed by leading players like Microsoft and Citrix as well as by highly specialized companies and start-ups.

As we wind down 2018 and look ahead to a bright new year, yes, it will be all about end user computing.

Sometimes Too Good To Be True Blocks Disruptive Innovation

Author: Jan-Kees Buenen

Over the last few months, we have had the opportunity to present our SynerScope Ixiwa solution to many prospective corporations and potential business partners. I have learned a lot from these conversations, both about how to make our offering even more user-friendly and about which technical developments should be prioritized.

Disruptive technology

The most interesting insight I got from all these meetings, however, results directly from having truly disruptive technology: people generally do not believe some of the things Ixiwa can do. They think it is too good to be true. For example, people do not believe that we can tag, analyze, categorize, and match the content of a data lake at record level without pre-created metadata. Others struggle to believe that we can match structured and unstructured data without pre-defined search logic. They question whether our patented many-to-many correlator can truly group content, elements, or objects into logical clusters. Or they wonder whether business users can really add and use data sets in the lake without IT support. Frankly stated: they think we oversell the capabilities of our platform.

Faster, high quality and value generating insights

So we learned that we need to show what we can do as early as possible in the relationship: we demo on real data, in a real data lake, in a live environment, so that our partners and clients can experience first-hand how our technology works. When we get to this point, people become enthusiastic and brainstorm about the projects they could do with our technology. Which, of course, is what we want: deploying our technology to help customers and partners get faster, higher-quality, value-generating insights out of their data.

Demo our capabilities

So please do us a favor and give us a chance by allowing us to demo our capabilities, even if you don’t immediately believe our story. We promise we won’t disappoint you.