Technology we build.
At SynerScope, we built our two core products from scratch. We designed and tested the workflows required for unstructured data, to make our software fit for even the largest of enterprises. All that unstructured-data knowledge has shaped the way we build our software.
Iximeer is built on a stack close to the hardware: a C++ core library for maximum performance, interfacing directly with NVIDIA GPUs for the lowest possible latency, extended with our patented text analytics. The view configurator in Iximeer provides a rich set of visualization templates, so each data type gets just the right representation. By putting all of the data at our users' fingertips and immersing them in it, we let the data tell the story. If we break that immersive experience because the user has to switch between views or components, they quickly lose track. We therefore made certain to always link all the data in view, and designed our user interfaces around that core concept.
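The "always linked" idea can be sketched in a few lines: every view observes one shared selection, so highlighting records in one view highlights them everywhere. This is an illustrative sketch only; the class names (SelectionModel, View) are ours for this example, not Iximeer's actual API.

```python
# Minimal sketch of linked views: one shared selection, many observers.
# Illustrative names only -- not Iximeer's real interfaces.

class SelectionModel:
    """Shared selection state that every view observes."""
    def __init__(self):
        self._selected = set()
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def select(self, record_ids):
        # A selection made in any one view is broadcast to all views.
        self._selected = set(record_ids)
        for notify in self._observers:
            notify(self._selected)

class View:
    """A visualization template that highlights the shared selection."""
    def __init__(self, name, model):
        self.name = name
        self.highlighted = set()
        model.subscribe(self.on_selection)

    def on_selection(self, selected):
        self.highlighted = selected

model = SelectionModel()
views = [View("network", model), View("timeline", model), View("map", model)]
model.select({1, 2, 3})  # select in one place...
assert all(v.highlighted == {1, 2, 3} for v in views)  # ...reflected everywhere
```

Because the views share one selection model rather than talking to each other, adding a new visualization template never breaks the linkage.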
Certain solutions we needed were simply not available. For example, resolving similar data objects across an entire data lake was not feasible. So we invented and patented an algorithm that brings down the time complexity of that matching, which lets us present users with data-linkage suggestions even on the largest of data lakes. The Data Correlator was born.
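To make the time-complexity point concrete: naively comparing every object with every other is O(n²). A well-known standard technique that avoids this is MinHash with locality-sensitive hashing (LSH), which only compares objects that land in the same bucket. The sketch below shows that public technique for illustration; it is not the patented Data Correlator algorithm.

```python
# Standard MinHash + LSH banding: similar records tend to share a bucket,
# so candidate pairs are found without comparing all n*(n-1)/2 pairs.
# This illustrates the general idea, NOT SynerScope's patented algorithm.
import hashlib

def minhash_signature(tokens, num_hashes=32):
    # One cheap hash family: salt each token with the hash index.
    return tuple(
        min(int(hashlib.md5(f"{i}:{t}".encode()).hexdigest(), 16) for t in tokens)
        for i in range(num_hashes)
    )

def lsh_buckets(records, bands=8, rows=4):
    # Split each signature into bands; records sharing any band share a bucket.
    buckets = {}
    for rec_id, tokens in records.items():
        sig = minhash_signature(tokens, bands * rows)
        for b in range(bands):
            key = (b, sig[b * rows:(b + 1) * rows])
            buckets.setdefault(key, set()).add(rec_id)
    return buckets

records = {
    "a": {"john", "smith", "amsterdam", "2019"},
    "b": {"jon", "smith", "amsterdam", "2019"},   # near-duplicate of "a"
    "c": {"completely", "different", "tokens"},
}
candidates = {frozenset(ids) for ids in lsh_buckets(records).values() if len(ids) > 1}
# "a" and "b" are likely to surface as a candidate pair; "c" almost surely not.
```

Only the pairs inside a bucket are then verified in full, which is what collapses the overall cost on large data lakes.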
Technology we use.
Technology is at the core of our company, but we could never have gotten this far without standing on the shoulders of giants. We leverage well-known, industry-tested components from the open-source software landscape. Components like Apache Spark help us scale our CPU- and IO-intensive compute out across multi-node systems, so we no longer have to worry about scaling limitations. We picked these technologies because they are fit for purpose and have the right maturity level; each component has to pass our security design and test gates before it is adopted into our stack.
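The scale-out model Spark gives us follows a simple pattern: partition the data, process each partition independently, then merge the partial results. The toy word count below shows that partition/map/reduce pattern shrunk to threads on one machine, purely as an analogy; it uses only the Python standard library, not Spark's API.

```python
# Toy illustration of the partition -> map -> reduce pattern that Spark
# runs across a cluster, shown here with local threads. An analogy only.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_words(partition):
    """The 'map' step: each worker counts words in its own partition."""
    return Counter(word for line in partition for word in line.split())

def word_count(lines, workers=4):
    # Partition the data set, fan partitions out to workers, then merge.
    partitions = [lines[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partial_counts = pool.map(count_words, partitions)
    total = Counter()  # the 'reduce' step
    for partial in partial_counts:
        total += partial
    return total

lines = ["big data big value", "data lake", "big lake"]
counts = word_count(lines)  # e.g. counts["big"] == 3
```

On a real cluster the partitions live on different nodes, but the program structure stays the same, which is exactly why this model scales without rewrites.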
For our in-house developed image and text processing logic, we make intensive use of deep learning, supported by Google's TensorFlow library. The critical mass this library has gained in the industry helps us quickly adopt new and emerging techniques, as the library itself expands its functionality at a rapid pace.
We take meticulous care to keep our platform modular and manageable. Internal APIs ensure that components can be replaced when necessary, which eases our product life-cycle management.
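The replaceable-component idea boils down to programming against a small internal interface rather than a concrete implementation. The sketch below shows the general pattern with invented names (TextExtractor and friends); these are illustrative, not our actual internal APIs.

```python
# Minimal sketch of the internal-API pattern: callers depend on a small
# interface, so a component can be swapped without touching the callers.
# Names are illustrative only, not SynerScope's real interfaces.
from abc import ABC, abstractmethod

class TextExtractor(ABC):
    """Internal API that every text-extraction component must satisfy."""
    @abstractmethod
    def extract(self, document: bytes) -> str: ...

class PlainTextExtractor(TextExtractor):
    def extract(self, document: bytes) -> str:
        return document.decode("utf-8", errors="replace")

class UpperCaseExtractor(TextExtractor):
    """A drop-in replacement: same interface, different behavior."""
    def extract(self, document: bytes) -> str:
        return document.decode("utf-8", errors="replace").upper()

def pipeline(extractor: TextExtractor, docs):
    # The pipeline never names a concrete component, only the interface.
    return [extractor.extract(d) for d in docs]

docs = [b"hello", b"world"]
pipeline(PlainTextExtractor(), docs)   # ['hello', 'world']
pipeline(UpperCaseExtractor(), docs)   # ['HELLO', 'WORLD']
```

Swapping one extractor for another changes a single constructor call, which is what keeps life-cycle management cheap.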
Where others focus on building the data lake, we focus on getting the data lake to value, by putting unstructured data in front of people across as many facets of your organization as possible.
A well-tuned appliance serves customers better than forcing them to build their own stack of solutions.