ETL, ELT, and UPM for Data Warehousing with Google BigQuery


 

Authored by Darius Kemeklis, Myers-Holum, Inc.

It’s hard to believe that Data Warehousing (DW) has been around since 1970, when Bill Inmon first defined the term.  The 1990s saw Bill Inmon and Ralph Kimball dueling over two different data warehousing approaches, with Kimball publishing The Data Warehouse Toolkit. The 2000s saw MPP databases and the birth of Big Data and Hadoop.  Today we have cloud data storage offerings and a new data processing approach – the Unified Programming Model (UPM).  Let’s briefly explore how we got here and what lies ahead.

ETL (Extract Transform Load): Many productivity tools rose to prominence to support the growing need for sourcing, cleaning, transforming, loading, storing, and accessing data in the Data Warehouse.  Initially, data came into the DW in nightly batches from departmental applications in a structured, tabular format. The Extract Transform Load (ETL) approach and its tooling were introduced with visual data transformation capabilities so that data transformations did not have to be hand-coded in the language du jour or in plain SQL.  Talend was one of the data integration companies that chose an open source approach to ETL.  Multiple relational database management systems (RDBMS) rose to prominence to host enterprise data in the DW, storing data on large multi-CPU, multi-disk machines.  Data warehouse capacity was measured in gigabytes.

ELT (Extract Load Transform): Eventually, as the number of enterprise applications and their usage increased, the amount of data stored in a DW reached terabytes and outgrew the capacity of a single large machine.  A new breed of MPP database appliances was born, storing data in a shared-nothing architecture across multiple physical machines, thus sharding and parallelizing data storage and access across multiple nodes.  With this came a new data processing paradigm – Extract Load Transform (ELT) – that allowed data management professionals to offload data transformation processing onto the parallel processing capabilities of the data warehousing appliances, without having to scale their ETL infrastructure to the same degree, especially in high-velocity use cases.  Unfortunately, this often meant going back to coding data transformations in SQL. Most data still came in batches from internal enterprise applications.
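
To make the ELT pattern concrete in today’s terms, here is a minimal sketch using the google-cloud-bigquery Python client: raw files are bulk-loaded into the warehouse as-is, and the transformation runs inside the warehouse as SQL. The bucket path, project, dataset, and table names are hypothetical placeholders, not part of the original article.

# A minimal ELT sketch against BigQuery using the google-cloud-bigquery
# Python client. All project/dataset/table names below are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

# Extract + Load: bulk-load raw CSV files from Cloud Storage without transformation.
load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/orders_*.csv",       # placeholder bucket/path
    "my_project.staging.orders_raw",             # placeholder staging table
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    ),
)
load_job.result()  # wait for the load to finish

# Transform: let the warehouse do the heavy lifting in SQL.
transform_sql = """
CREATE OR REPLACE TABLE `my_project.analytics.orders_clean` AS
SELECT CAST(order_id AS INT64) AS order_id,
       LOWER(TRIM(customer_email)) AS customer_email,
       SAFE_CAST(order_total AS NUMERIC) AS order_total,
       DATE(order_ts) AS order_date
FROM `my_project.staging.orders_raw`
WHERE order_id IS NOT NULL
"""
client.query(transform_sql).result()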

MR (MapReduce): With the proliferation of the Internet and ubiquitous connectivity, new data sources came online – web traffic, social media, location services. This data was usually unstructured or semi-structured, with terabyte data volumes. Big Data was born, with multi-machine Hadoop clusters and the MapReduce programming pattern used to extract relevant insights and aggregations to be loaded into the DW.  ETL vendors such as Talend implemented Hadoop integrations to provide a higher level of ETL abstraction instead of having to write low-level MapReduce code. Unfortunately, Hadoop and MapReduce were built for batch processing, and as new streaming data sources came online, businesses began demanding up-to-the-minute analytics on near real-time data. Out of that need, the Lambda architecture was born, with its two separate processing paths – one for batch and one for streaming.  The result was data integrators having to code and maintain two data processing paths.
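
For readers who have not written MapReduce code, the pattern itself is simple. The sketch below is a toy word count in plain Python (no Hadoop involved) that mimics the map, shuffle, and reduce phases; all names and data are illustrative only.

# Toy illustration of the MapReduce pattern in plain Python:
# map each record to key/value pairs, shuffle (group) by key, then reduce each group.
from collections import defaultdict

def map_phase(line):
    # emit (word, 1) for every word in a line
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(word, counts):
    return (word, sum(counts))

lines = ["ETL moves data", "ELT moves data too"]

# "shuffle": group all mapped values by key
groups = defaultdict(list)
for line in lines:
    for word, count in map_phase(line):
        groups[word].append(count)

word_counts = [reduce_phase(word, counts) for word, counts in groups.items()]
print(word_counts)  # e.g. [('etl', 1), ('moves', 2), ('data', 2), ...]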

UPM (Unified Programming Model): Today’s information supply side is growing exponentially – people and device connectivity is increasing, always-on and real-time interaction applications are proliferating, and devices are becoming smart-everything.  All of this is generating petabytes of new data that streams in at high velocity.  At the same time, traditional enterprise data sources have not stopped producing batch data that needs to be integrated with the new datasets.  On the information consumption side, cloud offerings such as Google BigQuery provide petabyte-scale storage at a fraction of the cost of on-premises storage, making it practical to store and analyze billion-row data sets.  Thus the need to integrate data from all sources – transforming, cleansing, and loading it in a standard format – keeps growing, and all of it has to be applied consistently to both streaming and batch data flows. So how do we solve this data integration need?  Do we do ETL, ELT, MR, Lambda, or something new? Recently, we’ve seen a different idea start to gain traction. Meet Apache Beam – an open source framework that unifies streaming and batch data processing through the Unified Programming Model (UPM).

Started in 2016 by Google, Talend, and others, Apache Beam has the goal of letting developers “implement batch and streaming data processing jobs that run on any execution engine”.  Apache Beam code can run on any Beam runner.  Because it supports both batch and streaming with the same programming model, there is no longer a need for a Lambda architecture with two parallel code paths.  Developers can now write a single Apache Beam transformation that works both for traditional bounded batch data sets and for continuous, unbounded streaming data sources such as IoT device data.
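
Here is a minimal sketch, using the Apache Beam Python SDK, of what that looks like in practice: the same parsing transform is reused for a bounded batch source and an unbounded streaming source. The Cloud Storage path and Pub/Sub topic are hypothetical placeholders.

# The same Beam transform applied to a bounded (files) and an unbounded (Pub/Sub) source.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

class ParseEvent(beam.DoFn):
    def process(self, record):
        device_id, reading = record.split(",")
        yield {"device_id": device_id, "reading": float(reading)}

def run_batch():
    with beam.Pipeline(options=PipelineOptions()) as p:
        (p
         | "ReadFiles" >> beam.io.ReadFromText("gs://my-bucket/readings/*.csv")  # placeholder path
         | "Parse" >> beam.ParDo(ParseEvent())
         | "Print" >> beam.Map(print))

def run_streaming():
    with beam.Pipeline(options=PipelineOptions(streaming=True)) as p:
        (p
         | "ReadPubSub" >> beam.io.ReadFromPubSub(
               topic="projects/my-project/topics/device-events")  # placeholder topic
         | "Decode" >> beam.Map(lambda b: b.decode("utf-8"))
         | "Parse" >> beam.ParDo(ParseEvent())   # same transform as the batch path
         | "Print" >> beam.Map(print))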

Google Cloud Dataflow is an Apache Beam runner on Google Cloud Platform.  It offers no-ops, serverless, auto-scaling processing, scaling execution nodes up and down based on data volume, and it is well integrated with Google BigQuery (Google’s data-warehouse-in-the-cloud offering) for streaming inserts.
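
As a hedged sketch of what targeting the Dataflow runner and streaming rows into BigQuery might look like with the Beam Python SDK, consider the snippet below; the project, region, bucket, topic, and table names are placeholders.

# Run a streaming Beam pipeline on Google Cloud Dataflow and write rows to BigQuery.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",           # execute on Google Cloud Dataflow
    project="my-project",              # placeholder project
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
    streaming=True,
)

with beam.Pipeline(options=options) as p:
    (p
     | "ReadPubSub" >> beam.io.ReadFromPubSub(
           topic="projects/my-project/topics/device-events")   # placeholder topic
     | "ToRow" >> beam.Map(lambda b: {"payload": b.decode("utf-8")})
     | "WriteBQ" >> beam.io.WriteToBigQuery(
           "my-project:telemetry.raw_events",                   # placeholder table
           schema="payload:STRING",
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
           create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED))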

This all sounds very exciting, but is there a catch?  The biggest difficulty with Apache Beam and the UPM is the fairly low-level programming required to implement transformations.  An integration developer needs a deep understanding of the Apache Beam programming model to implement transformations efficiently, especially for unbounded data sources, which requires mastering concepts such as windowing, triggering, and watermarks.  This calls for data engineering skills rather than data integration skills – think Hadoop MapReduce programming in the early days.
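
To give a flavor of those concepts, here is a small sketch (Beam Python SDK) that applies fixed one-minute windows with an early processing-time trigger, a late-data trigger, and ten minutes of allowed lateness; the transform names and values are illustrative only.

# Windowing, triggering, and watermark-related settings on an unbounded PCollection.
import apache_beam as beam
from apache_beam import window
from apache_beam.transforms import trigger

def windowed_counts(events):
    # events: a PCollection of (device_id, 1) pairs carrying event-time timestamps
    return (events
            | "Window" >> beam.WindowInto(
                  window.FixedWindows(60),                       # 1-minute event-time windows
                  trigger=trigger.AfterWatermark(
                      early=trigger.AfterProcessingTime(30),     # speculative result every 30s
                      late=trigger.AfterCount(1)),               # re-fire for each late element
                  accumulation_mode=trigger.AccumulationMode.ACCUMULATING,
                  allowed_lateness=600)                          # accept data up to 10 min late
            | "CountPerDevice" >> beam.CombinePerKey(sum))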

Talend Support for Apache Beam and Google Cloud DataFlow

By using Apache Beam with Talend and Google, your big data projects are future-proofed, eliminating the need to reconfigure your pipelines to work with new data processing engines. Talend Data Preparation is the industry’s first data application to run on Apache Beam, allowing users to prepare their data once and run it anytime, anywhere, at extreme scale. Google Cloud Dataflow also has deep integration with Google BigQuery’s streaming capabilities.  The combination of developer ease of use with Talend and a powerful, fully managed execution runtime deeply integrated into Google Cloud Platform is a great accelerator for projects dealing with both traditional batch and unbounded streaming data integration needs.  Give Talend and Google Cloud Dataflow a try!

About the Author

Darius is an Executive Vice President of the Google Cloud Practice at Myers-Holum.  Myers-Holum offers end-to-end enterprise data integration strategy, architecture, and project implementation – focusing on technological solutions to real business problems, supporting them with proven, flexible, metadata-driven architectures, and efficiently building what it designs. The firm has 20+ years of experience with large-scale data warehouses on Oracle, Teradata, and other platforms, and deep knowledge of Google Cloud Platform, BigQuery, and migrating traditional on-premises data warehouse implementations to Google Cloud.
 
Before starting the Google Cloud Practice at Myers-Holum, Darius worked at Google within the Corporate Engineering group, solving business needs using Google Cloud data management, analytics, reporting, and machine learning technologies.
 

One thought on “ETL, ELT, and UPM for Data Warehousing with Google BigQuery”

  1. Hi Darius, excellent post.
    This has helped me determine how to use ETL/ELT in BigQuery.

    Is it OK to put all raw data into BigQuery and clean the data in Google Data Studio using calculated fields?
    I am against this idea, but my project manager prefers this approach. The data is pulled via REST, though.

    I am still researching how to implement a data warehouse in BigQuery. Is there any reference that you think is best to read?

    Thanks.
