Tag Archive for: GDPR

Open Government Act: getting your information house in order just like ‘getting your house in order’

Open Government Act (Woo): from obligation to improvement…

In the program “Sort Your Life Out” with presenter Stacey Solomon, families were given the clear task of saying goodbye to a large part of their belongings so they could move on with a tidy house. The items they selected to get rid of had to be placed in three different boxes: Donate, Sell or Recycle.

Digitization, combined with an urge to collect, has created a similar chaos with digital data at many government organizations.

The story

The program makers collect all the items and lay them out, sorted, on the floor of a 2,000-square-meter warehouse. Then the family enters, and the surprise is immediate: first at the total quantity, then, slowly, as recognition of the objects in the different boxes unfolds. Next, they must work on their own, choosing which items to part with and dividing them among the boxes donate, sell, or recycle.

“The program beautifully portrays man’s urge to collect and the impossibility of turning chaos into structure and order without a predefined overview.”

(Courtesy of BBC1)

Digitization combined with an urge to collect has created a similar chaos with digital data at many government organizations. We are generating and collecting data ever more, and ever faster, and with storage costs close to zero it quickly grows into an unmanageable amount. Distribution across different departments and system applications results in rigid data silos. No one knows the whole, and it is questionable whether the individual details are well known.

A comparison to the various closets, rooms, attics, and garages of a household comes to mind. Since there is no overview of the content of the data, it has become difficult, if not impossible, to create some sort of order.

If we first brought all this data, sorted and organized, into overview, half the work of “information housekeeping” would already be done. To achieve that, we would need a team of programmers, which is currently lacking. And employing more people, whether hired from outside or not, to inventory and mark up data file by file and page by page is neither practical nor economically feasible at the current scale of data and information. It would also be a return to the approach of the paper world of yesteryear.

SynerScope sorts, categorizes and displays patterns

Digital technology causes the data problem, but it also increasingly provides the opportunity to develop and apply an approach, à la “Sort Your Life Out”, to all digital data. The big shed is found in scalable public cloud infrastructure such as MS Azure. Taking the data, sorting, and visually displaying it can be done with software. SynerScope has developed a very powerful solution in this segment that takes unstructured data in addition to already (partially) structured data. SynerScope sorts, categorizes, and displays patterns in the data, providing the organization’s domain experts with all the information and context needed to apply data markers and labels at detail level in the data. Not page by page but with whole groups of pages, documents, or files at once so great speed can be achieved without compromising quality.

Open Government Act (Woo)

Of course, the data and information housekeeping of government organizations is more complex than the straightforward “de-stuffing” in the TV show. Multiple rules and laws apply to handling government data. The Archives Act specifies what data must be kept and what must be destroyed, when, and for how long. The Woo indicates which data should be actively published and how this should be phased in over the coming years across various categories of government data and information. Older data can still be requested in the old Wob (Freedom of Information request) manner, but then under the Woo’s regime of response times.

Tasks such as those under the Environment and Planning Act and in the healthcare domain significantly increase the degree of difficulty in gaining control of all data. The tasks and obligations ensuing from the GDPR run over and through all of this. Privacy protection requires masking; openness requires applying it selectively; and deciding what to mask requires good oversight, transparency, and visibility into the details of the data.

In short, for all decisions in each of the aforementioned areas and policy decisions, knowing the data is always a requirement.

SynerScope labels and organizes

After the computer has sorted the data and displayed its patterns, users with domain knowledge provide each sorted data “box” with labels that mark the content and thus also differentiate the areas by content. These labels (also called tags or metadata) are very valuable for reuse throughout the organization. New, unknown data can also be mixed with such previously labeled data, making it possible to transfer previously acquired knowledge directly to newly entered data.
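The idea of transferring labels from previously sorted groups to newly arriving data can be sketched in plain Python. This is a minimal illustration, not SynerScope's implementation: we represent each labeled group by a bag-of-words centroid and assign a new document the label of its most similar group. All names (`transfer_label`, the sample groups) are ours.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector as a Counter of lowercase terms."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(docs):
    """Average the bag-of-words vectors of a labeled group."""
    total = Counter()
    for d in docs:
        total.update(bow(d))
    return Counter({t: c / len(docs) for t, c in total.items()})

def transfer_label(new_doc, labeled_groups):
    """Assign the label of the most similar group centroid."""
    centroids = {label: centroid(docs) for label, docs in labeled_groups.items()}
    vec = bow(new_doc)
    return max(centroids, key=lambda label: cosine(vec, centroids[label]))

groups = {
    "permits": ["building permit application zoning", "permit renewal zoning form"],
    "invoices": ["invoice payment amount due", "payment reminder invoice overdue"],
}
print(transfer_label("new zoning permit request", groups))  # -> permits
```

A production system would use richer features and human verification of each assignment, but the principle is the same: previously acquired labels carry over to new data via similarity.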

We would like to let you and your organization experience what putting information management in order means when new methods and processes supported by SynerScope’s technology are deployed. There is an opportunity to turn the Woo from obligation into a powerful push to make your organization work better with digital assets. This will enable the government to greatly improve its service to citizens and society. A government organization that quickly knows more about its data can better tailor the disclosure of data and information to the needs of its various stakeholders and constituencies. This brings a major change from informing afterwards to involving in advance, transparently presenting policy alternatives, and exposing the various considerations, all with the fullest possible context of the underlying data.

Webinar Open Government Act

In our webinar, we gave you an idea of what this looks like in practice. Organizing a pilot with data from your own organization is of course always a possibility. Because a pilot with your own data can be realized within a few days, testing is more efficient than meeting about the possibilities.

Watch our webinar, including video and slides (Dutch language):

 

Using Dynamic Data Labelling to Drive Business Value

Dynamic Data Labelling with Ixivault

Before deriving any value from data, you need to find and retrieve the relevant data. Search lets you achieve that goal. However, for search to work, we need two things: a search term defined by a human, and data indexed so the computer can find it cost- and speed-efficiently enough to keep the user engaged. But search efficiency breaks down under the sheer scale of all available data and the presence of dark data (with no indexes or labels attached), from both a cost and a response-time point of view.

Technologies like enterprise search never took off for this exact reason. Without labels, it’s ineffective to ask a system to select results from the data. At the very moment of creating data, its creator knows exactly what a file contains. But as time passes our memories fail, and other people might be tasked with finding and retrieving data long after we’ve moved on. Searching data in enterprise applications often means painstakingly looking up each subject or object recorded. For end-user applications like MS Office, we lack even that possibility. Without good labels, search and retrieval are near impossible. And while the people who create data know exactly what’s in it, the people who come after, and the programs we create to manage that data, cannot perform the same mental hat trick of pulling meaning from unsorted data.
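The two prerequisites above (a human-supplied term and a machine-built index) are exactly what a classic inverted index provides. A minimal sketch, with hypothetical document names of our own choosing:

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query term (AND search)."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

docs = {
    "a.txt": "quarterly revenue report 2021",
    "b.txt": "customer complaint about revenue recognition",
    "c.txt": "holiday schedule 2021",
}
idx = build_index(docs)
print(search(idx, "revenue 2021"))  # -> {'a.txt'}
```

The catch, as the paragraph above notes, is that the index is only as good as the terms in the data: unlabeled dark data contributes nothing useful to it, no matter how fast the lookup is.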

At SynerScope we offer a solution to easily recover data that was either lost over time or vaguely defined from the start. We first lift such ‘unknown’ data into an automated, AI-based sorting machine. Once sorted, we involve a human data specialist, who can then work with sub-groups of data rather than individual files. Again unsupervised, our solution presents the user with the discerning words that represent each sub-group in relation to the others. In essence, the AI presents the prime label options for the files and content in each subgroup, no matter its size in files, pages, or paragraphs. The human reviewer only has to select and verify a label option, rather than take on the heavy lifting of generating labels.
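The "discerning words" step can be approximated with a TF-IDF-style score: terms that are frequent within a sub-group but rare across the other sub-groups make good label candidates. A minimal sketch under that assumption (the function name and sample data are ours, not SynerScope's API):

```python
import math
from collections import Counter

def discerning_terms(groups, top_n=3):
    """For each group of documents, score terms by in-group frequency times
    cross-group rarity (a TF-IDF-style score) and return the top candidates."""
    n_groups = len(groups)
    group_counts = {g: Counter(" ".join(docs).lower().split())
                    for g, docs in groups.items()}
    # in how many groups does each term appear?
    df = Counter()
    for counts in group_counts.values():
        df.update(set(counts))
    result = {}
    for g, counts in group_counts.items():
        scored = {t: c * math.log((1 + n_groups) / (1 + df[t]))
                  for t, c in counts.items()}
        result[g] = [t for t, _ in sorted(scored.items(),
                                          key=lambda kv: -kv[1])[:top_n]]
    return result

groups = {
    "claims": ["car damage claim", "storm damage claim report"],
    "hr": ["vacation request form", "sick leave form"],
}
result = discerning_terms(groups)
print(result["claims"][:2])  # -> ['damage', 'claim']
```

The reviewer's job then shrinks to confirming or correcting a short list of suggested labels per sub-group, instead of reading every file.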

Thus labeled, the data is ready for established enterprise data processes. Cataloging, access management, analysis, AI, machine learning, and remediation are common end goals for data after SynerScope Ixivault generates metadata and labels.

SynerScope also allows for ongoing, dynamic relabeling of data as new needs appear. That’s important in this age of fast digital growth, with a constant barrage of new questions and digital needs. Ixivault’s analysis and information extraction capabilities can evolve and adapt to future requirements with ease, speed, and accuracy.

How Does Unlabeled Data Come About?

Data is constantly created and collected. When employees capture or create data, they are adding to files and logs. Humans are also very good at mentally categorizing data – we can navigate recent data with ease, unsorted and all. Whether that means a stack of papers or nested folders, our associative brain can remember the general idea of what is in each pile of data – so long as that data doesn’t move. But we are very limited in the scale we can handle. We have mental pictures of scholars and professors working in rooms where data is piled to the ceiling, but where little cleaning was ever allowed. This paradigm doesn’t hold for digital data in enterprises: collaboration, analysis, AI needs, and regulations all demand that we know where data is.

Catalogs and classification solutions can help, but the automation available to fill them is too limited. That leads to gaps and backlogs in labeling data. The AI for fully automatic labeling isn’t there yet: cataloging and classifying business documentation is even harder than classifying digital images and video footage.

Digital Twinning and Delivering Value with Data

Before broadband, there was no such thing as a digital twin for people, man-made objects, or natural objects. Only necessary information was stored in application-based data silos. In 2007, the arrival of the iPhone and the mobile revolution it triggered changed that. Everyone and everything were online, all the time, constantly generating data. The digital twin, a collection of data representing a real person or a natural or man-made object, was born.

In most organizations, these digital twins remain mostly in the dark. Most organizations collect vast quantities of data on clients, customer cases, accounts, and projects. It stays in the dark because it’s compiled, stored, and used in silos. When the people who created the data retire or move to another company, its meaning and content fade quickly, because no one else knows what’s there or why. And without proper labels, your systems will have a hard time handling any of it.

GDPR, HIPAA, CCPA, and similar regulations force organizations to understand what data they hold on real people, and they demand the same for historic data stored from the days before those regulations existed.

Regulations evolve, technologies evolve, markets evolve, and your business evolves, all driving highly dynamic changes to what you need to know from your data. If you want to keep up, ensuring that you can use that data to drive business value – while avoiding undue risks from business regulations, data privacy and security regulation – you must be able to search your data. Failing this, you could get caught in a chaotic remediation procedure, accompanied by unsorted data that doesn’t reduce the turmoil, but adds to the chaos.

Dynamic Data Labelling with Ixivault

Ixivault helps you to match data to new realities in a flexible, efficient way, with a dynamic, weakly supervised system for data labeling. The application installs in your own secure Microsoft Azure client tenant, using the very data stores you set up and control, so all data always remains securely under your governance. Our solution, and its data-sorting power, helps your entire workforce – from LOB to IT – to categorize, classify, and label data by content, essentially lifting it out of the dark.

Your data is then accessible for all your digital processes. Ixivault shows situations and objects grouped by similarity of documentation and image recordings, and lets you compare groups for differences in content. This simplifies and speeds up the task of assigning labels to the data. Any activity that requires comparison between cases, objects, situations, or data, or a check against set standards, is made simple. Ixivault also improves the quality of data selection, which helps in applications ranging from Know Your Customer and Customer Due Diligence to analytics and AI-based predictions using historical data.

For example, insurance companies can use that data to find comparable cases, match them to risks and premium rates, and thereby identify outliers – allowing the company to act on pricing, underwriting, or binding, or all three.

SynerScope’s type of dynamic labelling creates opportunities to match any data, fast and flexibly. As perception and the applications of data change over time, you can match data with evolving needs for information extraction, change labels as data contexts change, and continue driving value from the data you have at your disposal.

If you want to know more about Ixivault or its dynamic matching capabilities in your organization, contact us for personalized information.

Moving to the Azure cloud: unpacking dark data

Moving to the Azure cloud?

Today, more and more businesses are moving to the cloud – to automate and take advantage of AI and scalable storage, and to reduce costs over existing legacy infrastructure. In fact, in 2021, an estimated 19.2% of large organizations made the move to the cloud. And Microsoft Azure is close to leading that shift – with a 60% market adoption.

Often, organizations focus on selected applications during a cloud transition. However, existing data may actually present the bigger complexity. A majority of organizations use less than 50% of the data they own, and at the same time have no overview of the data they do own. This unused, unclassified, and unlabeled data is known as “dark data”, because it remains in the shade until abundant time is allocated to sort, label, and classify it.

Moving to the Azure Cloud is Like Moving House

We believe there is merit to comparing moving to the Azure cloud and moving house. You decide where to move, you choose your new infrastructure, and you get everything ready to move in. Then, you pack up your old belongings and move them with you. The problem is that you likely already have plenty of boxes lying around: think of your attic, your basement, and storage – things from earlier relocations. You may have lost all knowledge of what’s in there. The same holds true when your organization’s applications and data must move house. But this time you also have to deal with ‘boxes’ of data left unlabeled by people leaving the organization, data left unused for a long time, and data left behind by already obsolete applications. Moving this and other less well-known data may create bigger issues in the future.

  • Data is accumulating faster than ever before. You’ll have more of it tomorrow, so now is the best time to go through data and categorize it.
  • Proper governance of data is impossible without knowing its contents first. Older data collected from before GDPR regulations is still there. Compliance and Risk officers and CISOs dread this unknown data and fear it may fall out of compliance regulations.
  • It can be difficult to pass regulatory compliance audits with dark data around. If you can’t open a ‘box’ of data to show auditors what’s inside, you can’t prove you’re compliant.
  • You’re also not allowed to simply delete data. Industries and governments must comply with laws and regulations on archiving and maintaining open data.
  • When you know what data you have, you can strategize and move towards controlled decisions on cold/warm/hot storage to optimize both costs and access. Moving data that is still dark may bring about irreversible data loss, or at least expensive repairs, in the future.
  • Locating and accessing data requires the kind of information best captured in classifications and labels; historical data analysis needs this metadata.
  • Dark data leaves organizations vulnerable, as it makes designing and taking security precautions extra hard.
  • Sometimes you can, or must, delete information. However, you can only do so if you know its contents beforehand, can determine regulatory compliance, and have foresight about future valuable analytics.

How can you optimize accessing this data? When one of our clients, the Drents Overijsselse Delta Waterschappen, looked at archiving and storing its past project documentation in the cloud, it found the necessary manual labeling a daunting task. The massive time-investment needed is very similar for other organizations making a cloud transition. Manually reviewing data is simply too labor-intensive for most organizations to undertake within a feasible timeframe.

Unpacking Data with Synerscope’s Ixivault

With SynerScope, you can achieve the data clarity you need. As a weakly supervised AI system, our solution is built to perform where standard AI approaches would fail. SynerScope’s Ixivault installs into your Azure tenant – with no backend of its own. This means that all data stays inside your tenant, a big plus for all matters of security, governance, and compliance. Our frictionless implementation then allows you to open up, categorize, and label dark data using a combination of machine learning and manual review, speeding up the full process by an average of 70%.

Ixivault analyzes your full data pool of structured and unstructured data, creating categories based on data similarities, pulling keywords and distinctive terms, and generating images of those data stacks – which your domain expert can then sit down to quickly label. Most importantly, Ixivault has built-in learning capabilities, meaning that it gets better at categorizing and labeling your specific data as you use it.

All this makes Ixivault the perfect tool to help you move – by unpacking boxes of data as you move them to the cloud. You can then choose appropriate storage, governance, and access controls, whether or not you need to keep the data. For the first time you can have a near edge-to-edge overview of all your data, with options to zoom in to very granular levels, so you can make the best choice about what to do next with this newly discovered data. Having new information about your data can make you money and save you money at the same time.

If you need help unboxing your dark data as you move, contact us for more information about how SynerScope can help. You can also purchase the Ixivault app directly from Microsoft’s Azure Marketplace.

Is Your Organization Prepared to Manage Dark Data?

The Business Value of Mining Dark Data in Azure Infrastructure

As organizations accelerate the pace of digital transformation, most are moving to the cloud. In 2019, 91% of organizations had at least one cloud service, but 98% of organizations still maintained on-premises servers, often on legacy infrastructure and systems. At the same time, moving to the cloud is a given for organizations wanting to take advantage of new tools, dashboards, and data management. The global pandemic created a prime opportunity for many to make that shift. That also means shifting data from old infrastructure to new, and for most, it means analyzing, processing, and dealing with massive quantities of “dark data”.

Most importantly, that dark data is considerable. In 2019, Satya Nadella discussed Microsoft’s shift towards a new, future-friendly Microsoft Azure. In that talk, he explained that 90% of all data had been created in the last two years, yet more than 73% of total data had not yet been analyzed. This includes data collected from customers as well as data generated by knowledge workers with EUC (end-user computing) tools such as MSFT Office, email, and a host of other applications. As a result, big data creation has only accelerated and (unfortunately) more dark data exists now than ever before.

As organizations make the shift to the cloud, move away from legacy infrastructure and towards microservices with Azure, now is the time to unpack dark data.

Satya Nadella discusses Microsoft’s shift towards a new, future-friendly Microsoft Azure

Dealing with (Dark) Data

The specter of dark data has haunted large organizations for more than a decade. The simple fact of having websites, self-service, online tooling, and digital logs means data accumulates. Whether that’s automatically collected from analytics and programs, stored by employees who then leave the company, or part of valuable business assets that are tucked away as they are replaced – dark data exists. Most companies have no real way of knowing what they have, whether it’s valuable, or even whether they’re legally allowed to delete it. Retaining dark data is primarily about compliance. Yet, storing data for compliance-only purposes means incurring expenses and risks without deriving any real value. And simply shifting dark data to cloud storage means incurring huge costs for the future organization – when dark data will have grown to even more unmanageable proportions.

Driving Value with Dark Data

Dark data is expensive, difficult to store, and difficult to migrate as you move from on-premises to cloud-hosted infrastructure. But it doesn’t have to be that way. If you know what data you have, you can set it into scope, delete data you no longer need, and properly manage what you do need. While you’ll never use dark data on a daily, weekly, or even monthly basis – it can drive considerable value, while preventing regulatory issues that might arise if you fail to unlock that data.

  • Large-scale asset replacement can require retrieving decades-old data stored on legacy systems.
  • GDPR and other regulations may require showing total data assets, which means unlocking dark data to pass compliance requirements.
  • Performing proper trend analysis means utilizing the full extent of past data alongside present data and future predictions.

Dark Data is a Business Problem

As your organization shifts to the cloud, it can be tempting to leave the “problem” of dark data to IT staff. Here, the choice will often be to discard or shift it to storage without analysis. But dark data is not an IT problem (although IT should have a stake in determining storage and risk management). Instead, dark data represents real business opportunities, risks, and regulatory compliance. It influences trend and performance analysis, it influences business operations, and it can represent significant value.

For example, when Stedin, a Dutch utility company serving more than 2 million homes, was obligated to install 2 million smart meters within 36 months, it turned to dark data. Its existing system, which relied on current asset records in an ERP, achieved only 85% accuracy on “first time right” quotes for engineer visits. The result was millions in avoidable resource costs and significant customer dissatisfaction. With SynerScope’s help, Stedin was able to analyze historical data from 11 different sources, creating a complete picture of resources and a situational dashboard covering more than 70% of target clients. The result was an increase to 99.8% first-time-right quotes – saving millions and helping Stedin complete the project within the deadline.

SynerScope delivers the tools and expertise to assess, archive, and tag archived data – transforming dark data from across siloed back-ends and applications into manageable and usable assets in the Azure cloud. This, in turn, gives business managers the tools to decide which data is relevant and valuable, which can be discarded, and which must be retained for compliance purposes.

If you’d like to know more, feel free to contact us to start a discussion around your dark data.

 

Ixivault™ – Complete View & Control of Your Data in the Cloud

Ixivault™ – software to get on top of all your enterprise data

The cloud offers compute, memory, and storage horsepower at an efficient cost to extract every bit of information from data. But obtaining those benefits requires new software, not a lift-and-shift of traditional analytical software. And then there is the question of what data to transfer to the cloud. Data silos, structured and unstructured data, and dark data all stand in the way of an easy transfer. GDPR exacerbates this, as it introduces three main challenges that the Compliance, Risk, and Data Protection functions must deal with, and on which they base their advice to the LOBs and executive leadership:

  • Does your cloud computing setup match the sensitivity of the data you have entrusted, or want to entrust, to the cloud? If a cloud computing solution is chosen where data processing and/or storage are shared between enterprise customers, the risk of data leakage is present.
  • The question of which law applies: again, the choice of software or software platform determines whether sovereignty principles about the physical location at which certain sensitive data is held can be met.
  • The externalization of privacy requires that no cracks exist between the contracts an enterprise makes with software or platform vendors, the services companies, and the cloud vendor.

These are real hurdles to taking maximum advantage of the speed, scalability, flexibility, and cost efficiency of the cloud. Much of this potential is lost when an enterprise, at best, feels safe transferring only parts of its data. The holes in this ‘Emmental cheese’ of data get even bigger when we realize that most enterprises have close to 70% dark data (Satya Nadella at Microsoft Inspire 2019).

SynerScope Ixivault™

We propose to scan all content of the enterprise data, using the cloud to perform that scan in a safe way. For this purpose, SynerScope introduces its Ixivault™ on Microsoft Azure. The setup is entirely within the enterprise’s own Azure tenant, through the Azure Marketplace. Loading to the cloud and bulk scanning happen there (in what we call a vault). Unknown dark data is transformed into known data, and siloed data is linked, so that a grounded decision on its release for further use can be made. Data that cannot be released for wider use in the enterprise cloud tenant is deleted from the vault. Data that is safe is published for BI, data science, and domain-expert use by different functional departments in the company. All the original data sets stay safely in the company’s data center.

SynerScope turns dark data into bright data, ready to be used by human and machine intelligence combined for extracting information and value. Our three main solutions, Ixivault™, Ixiwa™ and Iximeer™, are designed to handle any type of data in any combination, fast and flexibly. Unstructured text, image, and IoT data can easily be linked with structured data from ERP, CRM, and other operational systems. To facilitate widespread secure and safe working with the data, we support the (self-)publishing of data to analytic processes (human-machine) under full linkage to the Data Protection Impact Assessment (DPIA) requirements. Granular content-based access control and integrated masking and hashing functions ensure that no eyes will see, and no process will use, any ineligible data (NoXi principle).
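To illustrate what masking and hashing mean in practice before data is published for wider use, here is a generic sketch in plain Python. It is not SynerScope's implementation: the salt handling, regex, and function names are our own illustrative choices (in production, secrets would live in a key vault and the patterns would cover far more PII types).

```python
import hashlib
import re

SALT = b"replace-with-a-secret-salt"  # hypothetical; keep real salts in a key vault

def pseudonymize(value):
    """Replace an identifier with a salted SHA-256 hash so records
    stay linkable across data sets without exposing the original value."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_emails(text):
    """Mask e-mail addresses in free text before wider publication."""
    return EMAIL.sub("[EMAIL]", text)

record = {"customer_id": "C-10293", "note": "Contact jan@example.com about the claim."}
published = {
    "customer_id": pseudonymize(record["customer_id"]),
    "note": mask_emails(record["note"]),
}
print(published["note"])  # -> Contact [EMAIL] about the claim.
```

Hashing preserves joinability (the same customer always maps to the same token), while masking removes the sensitive value outright; which to apply depends on the DPIA for the project at hand.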

To further service the compliance, risk and data protection functions of the company, our systems will log every touch point of the data. Humans and machines can be audited for having worked inside the boundaries of Standard Operating Procedures (SOPs) set with each Project-DPIA. Providing evidence for an always-appropriate use of data is efficiently supported in SynerScope.

Azure

We have built our solution specifically to operate in your own Azure cloud tenant environment. As an enterprise, you agree directly with Microsoft Azure on the SLAs regarding data security. Adding SynerScope doesn’t affect these SLAs, and publishing data inside your enterprise is linked to your own Azure AD setup. SynerScope prefers an Azure setup where all data lives in the ADLS/Blob store (and in its original application source system). We believe data should remain open for use in different applications.

SynerScope is proud to be a partner of Microsoft. We continuously look to exploit the expanding functions of Azure modules. Our serverless architecture provides the flexibility to deploy in any enterprise’s Azure tenant; we gladly welcome you to discuss opportunities to take advantage of your own tenant architecture.

 

Ixivault™ Azure

SynerScope Solution

The flexibility and speed of the SynerScope solution are secured by our patented data scanning, matching, and visualization technology.
We developed all of this in aid of your move towards a data-driven architecture in the cloud, in full compliance and with a view to significant cost reduction.
The solution is tried and tested, to the full satisfaction of our clients in the financial & insurance and critical infrastructure industries.

“SynerScope’s platform might best be described as a data lake management product that covers everything from automated ingestion, through discovery and cataloguing to data preparation.” (BloorInDetail – 2018)

SynerScope will bring you:

Cost Reduction

  • Data warehouse optimization / application rationalization
  • Optimized use of data, cloud data storage and your cloud data warehouse architecture
  • SynerScope as migration and data publishing tool
  • Broad use of data in the LOB by domain experts, citizen data scientists or data scientists proper.

Efficiency improvement in IT operations

  • Incident/problem root-cause analysis, reducing time to repair
  • Reduction of incidents and problems
  • Usage of historical data and archives

The SynerScope Solution:

  • Extracting and accelerating insight from data with patented products
  • Open technology on Microsoft Azure
  • Deployment via Microsoft marketplace
  • Stand-alone deployment also possible
  • Is complementary to third party data cataloguing tools, and adds considerably to the ease of use of unstructured data
  • Security and compliance proof (traceable & transparent)
  • Fast deployment

SynerScope References:

  • Financial & Insurance industry
  • Critical Infrastructure
  • Government (safety regions and smart city)

The Art of Adding Common Sense to Data and Knowing When to Dismiss It…

SynerScope

At SynerScope, helping people, and notably domain experts, make sense of data is our core business. We bring both structured and unstructured data into use, and we focus on combining the most advanced and innovative technologies with the knowledge and brainpower of the people in our customers’ business. Our specially developed visual interface allows domain experts to be plugged directly into the AI analysis process, so they can truly operate side by side with data scientists to drive results.

Thinking of the first business-continuity needs following the corona outbreak, we recently developed a GDPR-compliant Covid-19 radar platform (https://www.synerscope.com/). Our Covid-19 radar supports the tracking and tracing of your personnel and provides HR specialists with ways for rapid and targeted intervention should an outbreak hit your company after coming out of lockdown. Returning to work safely requires your HR departments to be equipped with more information to act upon. We can deliver these insights through our novel ways of handling all the data at hand.

Data, Artificial Intelligence & Common Sense

Due to Covid-19, data and the insights it provides have become an absolute must, as organizations base their decisions on data & analytics strategies in order to survive the fallout of the pandemic.

To make sense of the enormous influx of corona-related data coming our way, we need Artificial Intelligence and Machine Learning. However, human common sense and creativity are equally needed to teach AI its tricks, as data only becomes meaningful when there is context.

This will be extra impactful as the call to rescue the economy and open up companies sooner rather than later grows stronger. We have to take into account that we need to track, trace and isolate any cluster of direct contacts around new corona cases as quickly as possible, so as not to hinder any progress made, while also adhering to the prevalent GDPR rulings in Europe.

AI Wizardry: it all stems from common sense

For any automated AI to deliver its best value, the models need training and the data input needs to be well selected. Training AI and selecting data both depend on human sense-making, and are thus best applied early in the process. The quality and speed of human information discovery and knowledge generation depend on both the expertise and the richness of context in the data. Luckily, that same AI can help experts digest a much wider context faster than ever before. When you couple a large diversity of expertise with a wider context of data, the outcome of the analytical process clearly wins.

When and how do you best involve those people that bring context and common sense to the table? They add the most value when locating useful data resources and validating the outcomes of algorithmic operations. Leveraging AI should therefore be an ongoing process: with the help of automated, unsupervised AI, domain experts identify data sources, amass and sort the data, and then have data scientists add some more AI and ML wizardry. The outcomes are presented to the people with the best common sense and plenty of domain experience. Based on their judgment, the algorithms can be optimized and the cycle repeated. Following this approach, companies can accelerate their progress in AI and learn and improve faster. Equipped with the right tooling and a small helping hand from data scientists, domain experts will certainly find their way in and around the data themselves: we believe these so-called citizen data scientists have a big future role to play in gaining more insight from data!
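As a toy illustration of this human-in-the-loop cycle, the sketch below uses plain Python with hypothetical data: a stand-in clustering step groups records automatically, after which an "expert" inspects one sample per group and labels the whole group.

```python
# Hypothetical human-in-the-loop sketch (standard library only):
# 1. an unsupervised step groups records,
# 2. a domain expert labels one sample per group,
# 3. the labels can then drive the next, refined iteration.

from statistics import mean

def cluster_1d(values, k=2, iterations=10):
    """Tiny k-means on 1-D values; stands in for the unsupervised AI step."""
    centroids = [min(values), max(values)][:k]
    for _ in range(iterations):
        groups = {i: [] for i in range(len(centroids))}
        for v in values:
            nearest = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            groups[nearest].append(v)
        centroids = [mean(g) if g else centroids[i] for i, g in groups.items()]
    return groups

def expert_review(groups, label_fn):
    """The domain expert inspects one sample per group and labels the group."""
    return {gid: label_fn(members[0]) for gid, members in groups.items() if members}

# Illustrative data: transaction amounts; the 'expert' flags high-value groups.
amounts = [5, 7, 6, 120, 130, 8, 125]
groups = cluster_1d(amounts, k=2)
labels = expert_review(groups, lambda sample: "review" if sample > 50 else "ok")
```

In a real deployment the clustering would run over far richer data and the labeling would happen through a visual interface, but the loop structure (automated grouping, human judgment, repeat) is the same.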

Reasoning by association

People with business sense help to continuously improve AI by injecting their common sense in the AI system. And what’s more: they add a key substance that typical AI is still missing, which is the capability of causal reasoning and reasoning by association. Enhancing AI with associative and creative thinking is groundbreaking. That’s exactly where our SynerScope technology sets itself apart.

We shouldn’t view AI as a black box; it is too important for that. In our vision, humans and technology are complementary: humans should have the overall lead and control, but during the analytics process we should recognize for which steps we hand the controls to our computer and let technology guide us through the data. Think of the self-driving car for a minute: while its ABS is fully automatic, we still keep a hand on the steering wheel.

Unknown unknowns

As always, there is another side to the value of human contribution in AI. People tend to stick to what they already know and to relate to the context they are familiar with, i.e. to keep a steady course. But we want to ‘think out of the box’ and expect AI to help us with that.
Genuine paradigm-shifting AI should help us master the ‘unknown unknowns’: present us with hidden insights and business opportunities that traditional ways of thinking will never unearth, like the cure for a disease that nobody ever thought of, or the best scenario picked from a number of variables too large to be handled by the human brain alone.

To select from the patterns of data that AI helps reveal, you will again need people, but of a different kind: people who are fresh in the field, not primed by too much knowledge, and not encapsulated in corporate frameworks. With the right technology solutions to help them find their way in the data, they are able to thrive, and your company along with them.

Enabling humans to master all data

SynerScope creates and uses technology to augment intelligence on its way to a data-driven universe. We have developed groundbreaking inventions, some of which are patented. Our products & solutions are in use at some of the world’s Fortune 1000 companies. We are happy to partner with the leading cloud providers of this world. SynerScope’s solutions provide Intelligence Augmentation (IA) to domain expert users, making them 20-50x faster at extracting information from raw data. We focus on situations and use cases where insight goals and data inputs are by default not well predefined, and where reaching full-context knowledge requires linking data from text, numbers, maps, networked interactions, transactions, sensors and digital images. In short, we combine structured data with unstructured data and everything in between.

If you would like to listen & learn more about our view of the data & analytics market and our solutions, please click here to watch a short video (video SynerScope The Movie: a futuristic vision) or you can contact us by email: info@synerscope.com or call us on +31-88-ALLDATA.

How to manage End User Computing and avoid GDPR or IFRS fines

Author: Jan-Kees Buenen

I’ve long said that End User Computing (EUC) is here to stay, whether we like it or not.

EUC applications such as spreadsheets and database tools can provide a significant benefit to companies by allowing humans to directly manage and manipulate data. Unlike rigid systems like ERP, EUC offers flexibility to businesses and users to quickly deploy initiatives in response to market and economic needs.

However, EUC has become the villain in the big data story. The flexibility and speed of EUC often come without lineage, logs and audit capabilities.

The risks of EUC’s incomplete governance and compliance mechanisms are not new. Organizations are well aware of the accidents they can cause: financial errors, data breaches, audit findings. In the context of increasing data regulation (like GDPR and IFRS), companies struggle to embed EUC safely in their information chains.

GDPR and the impact of EUC

GDPR (General Data Protection Regulation) came into force on May 25, 2018. It is a legal framework that requires businesses to protect the personal data and privacy of European Union citizens.

Article 32 of the GDPR addresses the security of the processing of personal data. These requirements for data apply to EUC as well.

Article 17 provides the right to be “forgotten” for any individual. Companies have to control data precisely, so that no leftover data lies in unmonitored applications if an individual requests to be deleted from all systems.
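As a purely illustrative sketch (not SynerScope code, and with hypothetical store names and record fields), an Article 17 erasure routine needs to reach every store that might hold a subject’s records, remove them, and log the erasure itself so the action remains auditable:

```python
# Hypothetical "right to be forgotten" sketch: scan every registered data
# store for the data subject's identifier, delete matching records, and
# keep an audit entry per affected store.

from datetime import datetime, timezone

stores = {
    "crm":                 [{"user_id": "u42", "email": "a@example.com"}],
    "newsletter":          [{"user_id": "u42"}, {"user_id": "u7"}],
    "spreadsheet_exports": [{"user_id": "u7", "amount": 10}],
}
audit_log = []

def erase_subject(user_id):
    """Remove every record for user_id across all known stores."""
    for name, records in stores.items():
        remaining = [r for r in records if r.get("user_id") != user_id]
        removed = len(records) - len(remaining)
        stores[name] = remaining
        if removed:
            audit_log.append({
                "store": name,
                "user_id": user_id,
                "removed": removed,
                "at": datetime.now(timezone.utc).isoformat(),
            })

erase_subject("u42")
```

The hard part in practice is the `stores` dictionary itself: EUC data that lives in unregistered spreadsheets never makes it onto that list, which is exactly the compliance gap described here.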

The financial penalty of 50 million euros against Google in early 2019 is a concrete example of what may happen to other companies. Under GDPR, Google was fined for lack of transparency, inadequate information and lack of valid consent regarding ads personalization.

The challenge of EUC applications: they generate data that largely remains in silos, also known as dark data.

IFRS and the impact of EUC

IFRS (International Financial Reporting Standards) aims at bringing transparency, accountability and efficiency to financial markets around the world.

New compliance requirements, like IFRS 9 and IFRS 17, include data at much more detailed levels than ever before. Data that currently flows to and from EUC has to be traced, linked and precisely controlled by knowing its content.

A higher emphasis on the control environment, workflow and the ability to adjust at a very detailed level is key as disclosure and reporting requirements increase.

Using SynerScope to manage the data linked to End User Computing

Organizations have to recognize that EUC falls under the purview of data governance. Any organization that deals with data – basically every organization – has to manage and control such apps so they are able to act immediately to ensure compliance.

SynerScope solutions offer two key ways to reclaim management and control over data:

1. Single Pane of Glass

The first way to reclaim control is to gather the company’s entire data footprint together: both structured and unstructured data in one place, a single pane of glass.

SynerScope offers an advanced analytical approach that includes and converges unstructured and semi-structured data sources. Applications from different back-ends are gathered in a single space: one powerful platform for operational analytics that replaces disjointed and disparate data processing silos.

2. Data protection within EUC

The second approach to reclaiming control over EUC is to track and trace all applications, their data and their respective users.

SynerScope combines a top-down overview with all the underlying data records, making it easy to investigate why a certain business metric is off and where the changes came from. It fluently analyzes textual documents and contracts, helping spot the differences between tens of thousands of documents in the blink of an eye.

Furthermore, it adds an extra layer on top of all data to control outcomes and keep data in check for governance and compliance.
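A minimal sketch of such an audit layer, with hypothetical names and a list of dictionaries standing in for a spreadsheet, could look like this: every change an end user makes passes through a wrapper that records who changed what, giving EUC data the lineage it normally lacks.

```python
# Illustrative audit-layer sketch: apply a cell change through a wrapper
# that records user, location, old value and new value.

audit_trail = []

def audited_update(dataset, row, column, value, user):
    """Apply a cell change and record it so the change stays traceable."""
    old = dataset[row].get(column)
    dataset[row][column] = value
    audit_trail.append({
        "user": user,
        "row": row,
        "column": column,
        "old": old,
        "new": value,
    })

# A 'spreadsheet' with one row; an analyst revises a metric.
sheet = [{"metric": "revenue", "value": 100}]
audited_update(sheet, 0, "value", 90, user="analyst_1")
```

With such a trail in place, answering “why is this business metric off, and where did the change come from?” becomes a lookup rather than a forensic exercise.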

Two powerful tools to get control and insight into End User Computing Data

SynerScope Ixiwa provides a more effective approach to data catalogue and data lake management for business users. Ixiwa is a data lake (Hadoop- and Spark-based) management product that automatically ingests data, collects metadata about the ingested data and classifies that data for the company. While Ixiwa will often be deployed as a stand-alone solution, it can also be viewed as complementary to third-party data cataloguing tools, which tend to focus on structured data only and/or have only limited unstructured capability.
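This is not the actual Ixiwa API, but a standard-library sketch of what automatic metadata collection and classification at ingest can look like; the extension-based rule is a deliberately naive stand-in for real content classification.

```python
# Hypothetical metadata-at-ingest sketch: for each incoming file path,
# collect basic metadata and classify it as structured or unstructured.

import os

def ingest_metadata(path):
    """Collect basic metadata for a file and classify it (naively, by extension)."""
    structured_exts = {".csv", ".parquet", ".json"}
    ext = os.path.splitext(path)[1].lower()
    return {
        "name": os.path.basename(path),
        "extension": ext,
        "class": "structured" if ext in structured_exts else "unstructured",
    }

meta = ingest_metadata("reports/q1.csv")
# meta -> {'name': 'q1.csv', 'extension': '.csv', 'class': 'structured'}
```

A production catalogue would of course look inside the files (schemas, entities, content) rather than at extensions, but the point stands: metadata is collected at ingest time, not reconstructed file by file afterwards.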

SynerScope Iximeer complements Ixiwa. It is a visual analysis tool that has the ability to apply on-demand analytics against large volumes of data, for real-time decision-making.

Figure 1: SynerScope Ixiwa and Iximeer provide a more efficient and visual approach to data management and analytics

What to do next?

If your organization is concerned about the new IFRS or GDPR regulations and you are searching for solutions to ensure compliance, please contact us to learn more.