Data Lake Architecture Best Practices

Implementing the right data lake architecture is crucial for turning data into value. No matter how much data you have within your data lake, it will be of little use if you lack the architectural features to govern the data effectively, keep track of it, and keep it secure.

What is data lake architecture?

A data lake is a repository of information in its "raw" format, meaning the format in which it existed when it was first collected and added to the storage pool. The data inside a data lake can take varying forms, and it is not arranged in any particular way.

The architecture of a data lake refers to the features that are included within a data lake to make it easier to work with that data. Even though data lakes are unstructured, it is still important to ensure that they offer the functionality and design features that your organization requires in order to easily interact with the data that they house.

 


4 data lake architecture best practices

Building the right features into your data lake architecture from the start is critical for ensuring that you can work with the data in the ways you need.

1. Establish governance

Data governance refers to the processes, standards, and metrics that organizations use to ensure that data can fulfill its intended purpose. Data governance also underpins effective data quality and data security. Without it, you lack a systematic and predictable approach to managing data.

Including data governance within your data lake architecture requires establishing the right processes, standards, and metrics from the start. For example, in order to standardize file sizes, it's typically wise to set a file size limit for data within the data lake. Files that are too large can make your data difficult to work with.
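As a rough sketch, a file-size rule like the one described above can be enforced with a short scheduled scan. The 100 MB threshold and directory layout here are illustrative assumptions, not values from any particular platform:

```python
from pathlib import Path

# Illustrative governance rule: flag files above a size limit.
# The 100 MB threshold is an assumption for this sketch.
MAX_FILE_SIZE_BYTES = 100 * 1024 * 1024

def find_oversized_files(lake_root, limit=MAX_FILE_SIZE_BYTES):
    """Return paths of files in the data lake that exceed the size limit."""
    return [
        p for p in Path(lake_root).rglob("*")
        if p.is_file() and p.stat().st_size > limit
    ]
```

A check like this could run as a scheduled job, with oversized files then split or compacted to keep the lake workable.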

Likewise, your data team should create a process for identifying data quality problems within the data lake. Ideally, this process will be automated as much as possible by scanning the data lake for signs of data quality issues, such as incomplete or unreadable data.
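A minimal sketch of such an automated quality scan, assuming CSV assets and checking for just the two problem types mentioned above (incomplete rows and unreadable bytes); real scanners cover many more rules:

```python
import csv
from pathlib import Path

def scan_csv_for_issues(path):
    """Flag two simple data quality problems in a CSV file:
    rows with empty fields (incomplete data) and bytes that
    cannot be decoded as UTF-8 (unreadable data)."""
    try:
        text = Path(path).read_text(encoding="utf-8")
    except UnicodeDecodeError:
        return [f"{path}: not valid UTF-8"]
    issues = []
    for line_no, row in enumerate(csv.reader(text.splitlines()), start=1):
        if any(field.strip() == "" for field in row):
            issues.append(f"{path}: row {line_no} has empty fields")
    return issues
```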


2. Create a data catalog

A data catalog is a source of information about the data that exists within your data lake. Its purpose is to make it easy for stakeholders within and outside your organization to understand the context of the data so that they can work with it quickly.

The exact types of information included in a data catalog can vary, but they typically include items such as: 

  • Which connectors are necessary for working with the data.
  • Metadata about where each data asset originated and how long it has been stored.
  • A description of which applications use the data.

If you include a data catalog within your data lake architecture from the start, it's easy to grow the catalog and keep it up-to-date as the data lake expands. To do this, first determine which types of information you will include in your data catalog, based on your organization's needs. Then, deploy tools that will automatically add entries to the data catalog by scanning each new data asset as it is added to the lake.
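The automatic-entry step can be sketched as below. The fields mirror the catalog contents listed above; the exact schema, and the `source` and `consumers` parameters, are illustrative assumptions:

```python
from datetime import datetime, timezone
from pathlib import Path

def build_catalog_entry(path, source="unknown", consumers=None):
    """Build a minimal catalog entry for a newly added data asset.
    The schema here is an illustrative assumption."""
    p = Path(path)
    stat = p.stat()
    return {
        "name": p.name,
        "format": p.suffix.lstrip(".") or "unknown",
        "size_bytes": stat.st_size,
        "added": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
        "source": source,              # where the asset originated
        "consumers": consumers or [],  # applications that use the data
    }
```

In practice a tool like this would be triggered by the lake's ingestion pipeline so that every new asset gets an entry automatically.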

3. Enable search

While data catalogs provide one tool for helping stakeholders to find the data they need within a data lake and determine how to work with it, being able to search through the data lake is also crucial.

Effective data lake search functionality should include the ability to find data assets based on features like their size, date of origin, and contents.

Because data lakes are typically very large, attempting to parse the entire data lake for each search is usually not feasible. Instead, build an index of data assets in order to facilitate fast searches, and rebuild the index periodically in order to keep it up-to-date.
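A minimal sketch of the indexing approach described above, assuming assets are files on disk; a production index would live in a search engine or database, but the principle is the same: scan once, then search the index instead of the lake.

```python
from pathlib import Path

def build_index(lake_root):
    """Scan the lake once and record searchable attributes per asset."""
    index = []
    for p in Path(lake_root).rglob("*"):
        if p.is_file():
            stat = p.stat()
            index.append({
                "path": str(p),
                "size_bytes": stat.st_size,
                "modified": stat.st_mtime,
            })
    return index

def search(index, min_size=0, max_size=float("inf")):
    """Find assets by size without touching the lake itself."""
    return [e for e in index if min_size <= e["size_bytes"] <= max_size]
```

Rebuilding the index on a schedule keeps it current as new assets land in the lake.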

4. Ensure security

Data security may not affect your day-to-day ability to work with the data inside a data lake, but it is crucial for adhering to compliance requirements and ensuring that sensitive information remains private.

Basic data security best practices for your data lake architecture include: 

  • Rigid access controls that prevent non-authorized parties from accessing or modifying the data lake. Access controls can be implemented on local servers if your data is stored on-premises, or via a cloud provider's IAM framework for cloud-based data lakes.

  • Encryption, which helps prevent unauthorized access to data. Keep in mind, however, that encryption is not a magic bullet: even if data is encrypted while in storage, it is often decrypted, and no longer protected, when applications use it.
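The access-control bullet above can be sketched as a deny-by-default allow-list check. Real deployments would delegate this to the cloud provider's IAM framework rather than hand-rolled logic; the roles and path prefixes here are illustrative assumptions:

```python
# Illustrative role-to-prefix permissions; real systems would
# delegate this to an IAM framework, not hand-rolled checks.
PERMISSIONS = {
    "analyst": {"read": ["curated/"]},
    "engineer": {"read": ["raw/", "curated/"], "write": ["raw/"]},
}

def is_allowed(role, action, asset_path):
    """Allow an action only if the asset path falls under a prefix
    granted to the role; deny by default."""
    prefixes = PERMISSIONS.get(role, {}).get(action, [])
    return any(asset_path.startswith(prefix) for prefix in prefixes)
```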


2 examples of successful data lake architectures: healthcare and technology

Carefully planned data lake architectures are a key part of the ability of modern companies — across a variety of industries — to work with data effectively.

Data lake architecture for biopharmaceuticals

AstraZeneca is a biopharmaceutical company that develops and produces innovative medicines for a global medical community. Using a cloud-based data lake, AstraZeneca is able to store and manage 20,000 terabytes of data. By adopting a multi-faceted data lake architecture and governance strategy, the company supports multiple internal groups from a single data lake.

Data lake architecture for high tech

Johnson Controls produces high-tech building management and climate-control solutions for customers around the globe. The company relies on data to achieve a unified view of its customers. Using a cloud-based data lake architecture, the company is moving more and more of its data operations to the cloud to make data securely available to all units within the organization.

The cloud and the future of data lake architecture

Although data lakes can exist on-premises, cloud infrastructures have made it easier for more and more companies to build and manage data lakes. The cloud offers many added benefits, from affordable, flexible storage to easy access to cloud-based data lakes from any location with a network connection.

Going forward, the decreasing cost of cloud data warehouses, combined with the increasing sophistication of cloud-compatible data governance and security tools, will drive the creation of more cloud-based data lake architectures. 

At the same time, a growing number of companies are adopting multi-cloud strategies, which help increase reliability while reducing costs. As a result, more organizations will build data lakes that span multiple cloud infrastructures. Because these data lakes are spread across multiple clouds, organizations will have to rely on a mix of native tools from cloud providers and third-party solutions to manage them.

Getting started with data lake architecture

A successful data lake architecture includes data governance, data catalogs, search functionality, and effective data security. Ideally, these features will be built into your data lake architecture from the start. 

However, even for data lakes that have already been created, it is feasible to add these features to the architecture, especially if you take advantage of tools that automate the processes required to create data catalogs and governance frameworks.

With built-in data governance and security features, Talend Data Fabric provides a comprehensive suite of cloud-based apps that streamline the creation of an effective data lake architecture. Talend Data Fabric allows users to collect trusted data across systems, govern it to ensure proper use, transform it into new formats, improve quality, and share it with internal and external stakeholders — all at the speed of your business. 

Try Talend Data Fabric to quickly secure your data lake and ensure your business is using data you can trust.





Last Updated: October 11th, 2019