What is ETL (Extract, Transform, Load)?

Extract, Transform, Load (ETL) is the process of gathering data from any number of sources, organizing it into a consistent format, and centralizing it in a single repository.

In most companies, potentially useful data is inaccessible; one study revealed that two-thirds of businesses either get “little tangible benefit” from their data or no benefit whatsoever. The data tends to be locked away in isolated silos, legacy systems, or seldom-used applications. ETL makes that data available by extracting it from multiple sources, cleansing and transforming it, and delivering it in a form ready for business insight.

Some people perform ETL through hand coding in SQL or Java, but there are tools available that simplify the process. This article takes a close look at ETL use cases, the advantages of using an ETL tool rather than hand coding, and what consumers should look for in ETL tools.

What is ETL Used For?

Data plays an important part in nearly every business operation; for it to be valuable, it must be moved and prepared for use, which means you need ETL processes. The use cases for ETL include:

  • Migrating data from one application to another
  • Data replication for backup or redundancy
  • Operational processes like migrating data out of a CRM into an ODS (Operational Data Store) to enhance or enrich the data and then pushing it back into the CRM
  • Putting data into a data warehouse in order to ingest, sort, and transform it for business intelligence
  • Migrating applications from on-premises to cloud, hybrid cloud, or multi-cloud infrastructures
  • Synchronization of key systems

How ETL and ELT Processes Work

ETL is typically a three-step process. However, it is becoming increasingly common for data to be extracted from its source locations and loaded into a target data warehouse first, with transformation performed after loading. This process is called ELT rather than ETL. Find out more about ETL vs ELT.

Here are the three steps of the ETL process:

Step 1: Extract

The goal of ETL is to produce cleansed, accessible data that can be used for analytics or business operations. Raw data must be extracted from an array of sources, including:

  • Existing databases
  • Activity logs like network traffic, error reports, and more
  • Application performance and anomalies
  • Security events
  • Other transaction activity that must be reported for compliance

The extracted data is then often staged in an intermediate destination, such as a data lake or data warehouse, before transformation.
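
As a rough sketch of what the extract step can look like in code, the minimal Python example below pulls rows from a source database into a flat staging file. The database, table, and column names are all hypothetical, and the sketch assumes an "orders" table already exists in the source; a real pipeline would use whatever connector fits each source system.

```python
import csv
import sqlite3
from pathlib import Path

SOURCE_DB = "source_app.db"              # hypothetical transactional source system
STAGING_FILE = Path("staging") / "orders_extract.csv"

def extract_orders():
    """Pull raw rows from the source system into a flat staging file."""
    STAGING_FILE.parent.mkdir(exist_ok=True)
    conn = sqlite3.connect(SOURCE_DB)
    try:
        cursor = conn.execute(
            "SELECT order_id, customer_id, amount, created_at FROM orders"
        )
        with open(STAGING_FILE, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cursor.description])  # header row
            writer.writerows(cursor)  # raw, untransformed rows
    finally:
        conn.close()
```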

Step 2: Transform

The transformation stage of ETL is where the most critical work takes place. At its core, transformation means applying business rules to the data so that it meets reporting requirements: it converts raw data into the correct reporting formats. If the data is not cleansed first, applying those business rules becomes much more difficult.

Transformation is achieved through a series of rules that you define. The standards that ensure data quality and accessibility during this stage should include:

  • Standardization: defining what data will be in play, how it will be formatted and stored, and other baseline considerations that will define every step that follows.
  • Deduplication: reporting duplication to data stewards; excluding and/or discarding redundant data.
  • Verification: running automated checks to compare similar information like transaction times and access records. Verification tasks further prune unusable data and can red-flag anomalies in your systems, applications, or data.
  • Sorting: maximizing efficiency in data warehouses by grouping and storing items like raw data, audio, multimedia, and other objects in categories. Transformation rules will determine how each data piece is classified and where it will go next. Often, the ETL process is used to build aggregation tables for summary reporting. This will require sorting and then aggregating. 
  • Other tasks you define and set to automatically execute.

These transformation steps pare down what was once a mass of unusable material into a data product you can present in the final step of ETL, the load phase.
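
To make those rules concrete, here is a minimal, illustrative Python sketch; the record fields and the business rule are hypothetical, but the steps map to the standardization, deduplication, verification, and sorting standards above.

```python
from datetime import datetime

# Hypothetical raw records as extracted; field names are illustrative.
raw_rows = [
    {"order_id": "1001", "amount": "19.99", "created_at": "2018-07-01T10:30:00"},
    {"order_id": "1001", "amount": "19.99", "created_at": "2018-07-01T10:30:00"},  # duplicate
    {"order_id": "1002", "amount": "-5.00", "created_at": "2018-07-02T11:00:00"},  # fails a rule
]

def transform(rows):
    seen_ids, clean = set(), []
    for row in rows:
        # Deduplication: skip rows whose key has already been processed.
        if row["order_id"] in seen_ids:
            continue
        seen_ids.add(row["order_id"])
        # Standardization: coerce fields into the agreed types and formats.
        record = {
            "order_id": int(row["order_id"]),
            "amount": float(row["amount"]),
            "created_at": datetime.fromisoformat(row["created_at"]),
        }
        # Verification: apply a business rule; flag anomalies instead of loading them.
        if record["amount"] <= 0:
            print(f"Anomaly flagged for review: {record}")
            continue
        clean.append(record)
    # Sorting: order the clean records so they are ready for aggregation.
    return sorted(clean, key=lambda r: r["created_at"])

print(transform(raw_rows))
```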

Step 3: Load

The last step in a typical ETL process is loading the extracted, transformed data into its new home. There are two typical ways data can be loaded into a data warehouse: a full load, which replaces the entire contents of the target each run, and an incremental load, which adds only new or changed records.
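
As a rough sketch of the difference, the example below implements both strategies against a stand-in SQLite table. The schema is hypothetical, and a real warehouse would use its own bulk-load or merge facilities; SQLite's upsert syntax (available since version 3.24) stands in for those here.

```python
import sqlite3

conn = sqlite3.connect("warehouse.db")  # stand-in for the target warehouse
conn.execute(
    "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER PRIMARY KEY, amount REAL)"
)

# Illustrative output of the transform step.
transformed = [(1001, 19.99), (1003, 42.50)]

def full_load(rows):
    """Full load: wipe the target table and reload everything from scratch."""
    conn.execute("DELETE FROM orders")
    conn.executemany("INSERT INTO orders (order_id, amount) VALUES (?, ?)", rows)
    conn.commit()

def incremental_load(rows):
    """Incremental load: insert new rows; update rows that already exist."""
    conn.executemany(
        "INSERT INTO orders (order_id, amount) VALUES (?, ?) "
        "ON CONFLICT(order_id) DO UPDATE SET amount = excluded.amount",
        rows,
    )
    conn.commit()

incremental_load(transformed)
```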

Executing a lone ETL cycle, or a scheduled series of them, can be done from the command line or a GUI. But there are certain things to watch out for. Managing exceptions, for example, can be a very cumbersome process. Data extracts often fail when one or more source systems are down, and bad data in one system can affect data extracted from another, so monitoring and error handling are key activities.
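
The sketch below illustrates the point: it attempts an extract from each source, logs any failure, and refuses to push partial data downstream. The source names and the simulated outage are made up for the example.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

SOURCES = ["crm", "billing", "web_logs"]  # hypothetical source systems

def extract_from(source):
    # Stand-in for a real extract; pretend the billing system is down.
    if source == "billing":
        raise ConnectionError(f"{source} system is unreachable")
    return [f"{source}_row_1", f"{source}_row_2"]

def run_cycle():
    results, failed = {}, []
    for source in SOURCES:
        try:
            results[source] = extract_from(source)
            log.info("extracted %d rows from %s", len(results[source]), source)
        except Exception:
            log.exception("extract failed for %s", source)
            failed.append(source)
    if failed:
        # Don't load partial data that downstream reports depend on.
        raise RuntimeError(f"ETL cycle incomplete; failed sources: {failed}")
    return results

if __name__ == "__main__":
    try:
        run_cycle()
    except RuntimeError as err:
        log.error("%s", err)
```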

Learn how to get started with Talend's data integration tools.

Modern Data Practices: ETL vs. ELT

ETL is a tried-and-true process for managing data’s journey from the source to a state where it’s ready to be used for business intelligence. But modern, cloud-based infrastructure can now support large-scale data storage and scalable compute at lower cost. It therefore becomes advantageous to keep extracted raw data in a large, ever-expanding data pool, with fast processing power available on demand.

In some ways, ELT is a preferable approach to data management, because all the raw data can be kept until it is ready to be used, and teams can focus first on extraction and loading. However, it’s important not to forget about data transformation, which has numerous benefits, including:

  • Preparing data for machine learning, which can significantly improve the accuracy of the resulting models
  • Cleansing data to improve data quality
  • Deduplication and matching of data
  • Standardizing data values

Even in a data lake-centric world, data transformation is critical to extract the full value out of enterprise data.
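
To contrast with the ETL sketches above, here is a minimal ELT sketch: the raw data is landed untouched, and the transformation runs afterwards as SQL inside the warehouse, using the warehouse's own compute. An in-memory SQLite database stands in for a cloud warehouse, and the tables are illustrative.

```python
import sqlite3

warehouse = sqlite3.connect(":memory:")  # stand-in for a cloud data warehouse

# Load: land the raw, untransformed data first.
warehouse.execute("CREATE TABLE raw_orders (order_id TEXT, amount TEXT)")
warehouse.executemany(
    "INSERT INTO raw_orders VALUES (?, ?)",
    [("1001", "19.99"), ("1001", "19.99"), ("1002", "42.50")],
)

# Transform: run later, inside the warehouse, when the data is needed.
warehouse.execute("""
    CREATE TABLE orders AS
    SELECT DISTINCT CAST(order_id AS INTEGER) AS order_id,
                    CAST(amount AS REAL) AS amount
    FROM raw_orders
""")
print(warehouse.execute("SELECT * FROM orders ORDER BY order_id").fetchall())
```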

ETL Tools vs. Hand Coding ETL Processes

Many IT professionals wonder whether hand coding is a better option than investing in yet another tool. Most ETL functions can be accomplished with hand coding, but in most cases it's more scalable and cheaper in the long run to use a dedicated ETL tool.

There are numerous challenges with hand coding. Managing, supporting, and reusing hand-coded integrations is complex: it can be very difficult for one developer to learn another developer's code, let alone reuse it, so developers often rewrite each other's code rather than learn what someone else has done. This is why maintenance costs are often twice as high with hand coding. ETL tools, by contrast, create a visual representation of a data flow that is much easier to understand and reuse.

In addition, data integration tools give you advanced features like parallelization, monitoring, and failover built in; replicating those features in hand-coded pipelines requires very skilled developers. Finally, a custom-coded approach to data integration inhibits scaling and innovation, because developers familiar with your custom integrations are scarce. Any upfront savings from forgoing an ETL tool are swallowed up by the increase in maintenance costs.

ETL and Moving to the Cloud

Cloud compatibility has proven critical for many organizations. As companies move to the cloud, they often have to re-evaluate their existing data integration and ETL tools because many traditional tools don't work well in the cloud. Not only that, but most enterprises have multiple clouds or find that they wish to migrate from one cloud provider (e.g. AWS, Azure, or Google Cloud Platform) to another.

An ETL tool should work well on any cloud provider and should migrate easily as companies change providers. You won't want to rewrite your data pipelines every time you change cloud platforms; instead, you want your data pipelines to be portable so that you can swap in and out different clouds, storage technologies, data processing technologies, and cloud databases.

What to Look for in an ETL Tool

ETL tools have the potential to simplify this process a great deal. The features you should look for in an ETL tool are:

  • A lot of connectors: there are many systems and applications in the world; the more pre-built connectors your ETL tool has, the more time your team will save.
  • Open-source: open-source architectures typically provide more flexibility while helping to avoid vendor lock-in.
  • Portability: it's important, as companies increasingly move to hybrid cloud models, to be able to build your data integrations once and run them anywhere.
  • Ease of use: ETL tools should be easy to learn and easy to use, with a GUI that makes visualizing your data pipelines simpler.
  • A transparent price model: your ETL tool provider should not ding you for increasing the number of connectors or data volumes.
  • Cloud compatibility: your ETL tool should work natively in a single cloud, multi-cloud, or hybrid cloud environment.

ETL with Talend: Talend Data Integration vs Talend Open Source Data Integration

Talend provides robust data integration tools for ETL processes. Using Talend for data integration allows users to complete integration jobs 10 times faster than hand coding, at a fifth of our competitors' cost.

There are two versions of the Talend data integration software:  Talend Open Source Data Integration and Talend Data Management Platform. Take a look at a comparison between the two. The Talend Open Source Data Integration product is powerful and valuable for many use cases; it's a great tool for an individual developer to build and run data pipelines.

As customers build larger teams, want more collaboration and enterprise-class support, and desire even better productivity, they will want the commercial edition. Talend Data Management Platform offers additional features like management and monitoring capabilities, data quality built right into the platform, and additional support via web, email, and phone. Both offer native multi-cloud functionality, scalability for any kind of project, and 900 built-in connectors.

ETL is a critical process for making use of data in the enterprise. Talend's tools make performing ETL processes easy. Give Talend Open Source Data Integration a try and see how much data you can put to work for your business without sacrificing speed or productivity.

Last Updated: July 27th, 2018