Simplifying Data Warehouse Optimization

When I hear the phrase “Data Warehouse Optimization”, shivers go down my spine.  It sounds like such a complicated undertaking.  After all, data warehouses are big, cumbersome, and complex systems that can store terabytes or even petabytes of data that people depend on to make important decisions about how their business is run.  The thought of tinkering with such an integral part of a modern business would make even the most seasoned CIOs break out in a cold sweat.

However, the value of optimizing a data warehouse is rarely disputed.  Minimizing costs and increasing performance are mainstays on the “to-do” lists of all Chief Information Officers.  But that is just the tip of the proverbial iceberg.  Maximize availability.  Increase data quality.  Limit data anomalies.  Eliminate depreciating overhead.  These challenges become increasingly difficult to meet when you are stuck with unadaptable technologies and confined by rigid hardware specifications.

The Data Warehouse of the Past

Let me put it into perspective.  Not long ago, many of today’s technologies (e.g., big data analytics, Spark processing engines, and cloud computing and storage) didn’t exist, yet balancing the availability of quality data against the effort required to cleanse and load the latest information was a constant challenge.  Every month, IT was burdened with loading the latest data into the data warehouse for the business to analyze.  The loading itself often took days to complete, and if the load failed, or worse, the warehouse became corrupted, recovery efforts could take weeks.  By the time last month’s errors were corrected, this month’s data needed to be loaded.

It was an endless cycle that produced little value.  Not only was the warehouse’s information out of date, but the system was also tied up in data loading and recovery processes, making it unavailable to end users.  With today’s continuously growing data volumes, a wider array of data sources, and increasing demand from the business for real-time data in its analysis, the data warehouse needs to be a nimble, flexible repository of information rather than a workhorse of processing power.

Today’s Data Warehouse Needs

In this day and age, CIOs can rest easy knowing that optimizing a data warehouse doesn’t have to be so daunting.  With the availability of big data analytics, lightning-quick processing with Apache Spark, and the seemingly limitless, near-instantaneous scalability of the cloud, there are many approaches one can take to the optimization conundrum.  But I have found that the most effective way to simplify data warehouse optimization (and the one with the biggest return on investment) is to remove unnecessary processing (i.e., data transformation and cleansing) from the warehouse itself.  Freed of the burden of ETL processing, the warehouse sees an almost immediate increase in availability and performance.  This is commonly referred to as “Offloading ETL”.

This isn’t to say that the data doesn’t need to be processed, transformed, and cleansed.  On the contrary, data quality is of the utmost importance.  But relying on the same system that serves up the data to also process and transform it robs the warehouse of its sole purpose: providing accurate, reliable, and up-to-date analysis to end users in a timely fashion, with minimal downtime.  By utilizing Spark and its in-memory processing architecture, you can shift the burden of ETL onto other in-house servers designed for such workloads.  Or better yet, shift the processing to the cloud’s scalable infrastructure and not only optimize your data warehouse, but also cut IT spend by eliminating the capital overhead of unnecessary hardware.
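To make the idea concrete, here is a minimal sketch of offloaded ETL in plain Python.  The record layout, field names, and cleansing rules are assumptions invented for this illustration, not Talend’s or Spark’s actual implementation; the point is simply that all transformation happens on a separate tier, and the warehouse only ever receives finished rows.

```python
from datetime import datetime

# Raw records as they might arrive from source systems.
# This layout is hypothetical and exists only for the example.
raw_records = [
    {"customer": " Alice ", "amount": "120.50", "date": "2017-03-01"},
    {"customer": "BOB",     "amount": "bad",    "date": "2017-03-02"},
    {"customer": "carol",   "amount": "75.00",  "date": "2017-03-03"},
]

def cleanse(record):
    """Transform one raw record, or return None if it cannot be salvaged."""
    try:
        return {
            "customer": record["customer"].strip().title(),
            "amount": float(record["amount"]),
            "date": datetime.strptime(record["date"], "%Y-%m-%d").date(),
        }
    except (ValueError, KeyError):
        return None  # rejected rows never reach the warehouse

# All transformation happens here, on the ETL tier; the warehouse
# is left to do what it does best: a simple bulk load of clean rows.
clean_rows = [r for r in (cleanse(r) for r in raw_records) if r is not None]
rejected = len(raw_records) - len(clean_rows)

print(f"{len(clean_rows)} rows ready to load, {rejected} rejected")
```

In a real deployment this cleansing logic would run on Spark executors (or cloud infrastructure) rather than a single process, but the division of labor is the same: the warehouse loads, it does not transform.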

Talend Big Data & Machine Learning Sandbox

In the new Talend Big Data and Machine Learning Sandbox, one example illustrates just how effective ETL offloading can be.  Using Talend Big Data and Spark, IT can work with business analysts to perform pre-load analytics – analyzing the data in its raw form, before it is loaded into the warehouse – in a fraction of the time of standard ETL.  Not only does this give business users insight into the quality of the data before it enters the warehouse, it also gives IT a kind of security checkpoint that prevents poor data from corrupting the warehouse and causing additional outages and challenges.
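The “security checkpoint” idea can be sketched in a few lines of plain Python.  The threshold, column name, and sample data below are assumptions made for illustration (this is not Talend’s actual mechanism): profile the raw data first, and only proceed with the load if the quality metrics clear the bar.

```python
# Hypothetical quality threshold; in practice this would come from
# the business analysts reviewing the pre-load profile.
MAX_NULL_RATE = 0.10

# Sample raw rows, invented for the example.
raw = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},
    {"id": 3, "email": "c@example.com"},
]

def missing_rate(rows, column):
    """Return the fraction of rows where `column` is missing."""
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

null_rate = missing_rate(raw, "email")
if null_rate > MAX_NULL_RATE:
    print(f"Load blocked: {null_rate:.0%} of 'email' values are missing")
else:
    print("Quality checkpoint passed; proceeding with load")
```

Because the profiling runs against the raw data on the ETL tier, a bad batch is caught before it ever touches the warehouse, rather than discovered afterward during a painful recovery.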

Optimizing a data warehouse can surely produce its fair share of challenges.  But sometimes the best solution isn’t the most complicated one.  That is why Talend offers industry-leading data quality, native Spark connectivity, and subscription-based affordability, giving you a jump-start on your optimization strategy.  Further, data integration tools need to be as nimble as the systems they integrate.  Leveraging Talend’s future-proof architecture means you will never be out of step with the latest technology trends, giving you peace of mind that today’s solutions won’t become tomorrow’s problems.

Download the Talend Big Data and Machine Learning Sandbox today and dive into our cookbook.
