In this tutorial, you will create Hadoop cluster metadata by importing the configuration from the Hadoop configuration files.
This tutorial uses Talend Data Fabric Studio version 6 and a Hadoop cluster: Cloudera CDH version 5.4.
1. Create a new Hadoop cluster metadata definition
Ensure that the Integration perspective is selected.
In the Project Repository, expand Metadata, right-click Hadoop Cluster, and click Create Hadoop Cluster to open the wizard.
In the Name field of the Hadoop Cluster Connection wizard, type MyHadoopCluster_files. In the Purpose field, type Cluster connection metadata. In the Description field, type Metadata to connect to a Cloudera CDH 5.4 cluster, and click Next.
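The import that follows reads the cluster's standard Hadoop client configuration files (such as core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml), which you can obtain from your Hadoop administrator or download from the Cloudera Manager. As a point of reference, a minimal core-site.xml looks like the sketch below; the NameNode hostname and port are hypothetical examples, and your cluster's files will contain many more properties.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- core-site.xml: defines the default filesystem for the cluster.
     The hostname and port below are hypothetical examples. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```

The wizard parses files like this one to fill in the connection parameters (NameNode URI, Resource Manager address, and so on) automatically, so you do not have to enter them by hand.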