
Talend Big Data Advanced – Spark

Talend provides a development environment that lets you interact with many source and target Big Data stores without having to learn and write complicated code.

This course covers Big Data batch Jobs that use the Spark framework, then Big Data streaming Jobs that use the Spark Streaming framework.

Duration: 1 day (7 hours)
Target audience: Anyone who wants to use Talend Studio to interact with Big Data systems
Prerequisites: Completion of Talend Data Integration Basics and Talend Big Data Basics
Course objectives

After completing this course, you will be able to:

  • Connect to a Hadoop cluster from a Talend Job
  • Use context variables and metadata
  • Read and write data in HDFS or HBase in a Big Data batch or Big Data streaming Job
  • Read and write messages in a Kafka topic in real time
  • Configure a Big Data batch Job to use the Spark or MapReduce framework
  • Configure a Big Data streaming Job to use the Spark Streaming framework
Course agenda

Introduction to Kafka

  • Monitoring the Hadoop cluster
  • Understanding the basics of Kafka
  • Publishing messages to a Kafka topic
  • Consuming messages (see the sketch after this list)
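In the course, publishing and consuming are done with Talend's Kafka components rather than hand-written code. As a point of reference only, the sketch below shows the same round trip with the standard Kafka Java client; the broker address, topic name, and consumer group id are illustrative placeholders, and a Kafka client version 2.0 or later is assumed.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaRoundTrip {
    public static void main(String[] args) {
        String brokers = "localhost:9092";   // illustrative broker address
        String topic = "demo_topic";         // illustrative topic name

        // Publish one message to the topic.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", brokers);
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>(topic, "hello from the producer"));
        }

        // Consume messages from the same topic, starting from the beginning.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", brokers);
        consumerProps.put("group.id", "demo_group");          // illustrative group id
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList(topic));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}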

Introduction to Spark

  • Understanding the basics of Spark
  • Analyzing customer data (see the sketch after this list)
  • Producing and consuming messages in real time
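A Talend Big Data batch Job on the Spark framework is designed graphically and then generates and submits Spark code on your behalf. As a point of reference, here is a minimal hand-written Spark sketch in Java of the kind of customer-data analysis this section covers; the HDFS path, the semicolon-separated layout, and the position of the state field are illustrative assumptions, not values from the course.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class CustomerStateCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("CustomerStateCount");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Input path and layout (semicolon-separated, state in the fourth field) are illustrative.
            JavaRDD<String> customers = sc.textFile("hdfs:///user/student/customers.csv");
            long distinctStates = customers
                    .map(line -> line.split(";"))
                    .filter(fields -> fields.length > 3)
                    .map(fields -> fields[3])   // the customer's state
                    .distinct()
                    .count();
            System.out.println("Distinct customer states: " + distinctStates);
        }
    }
}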

Logs processing use case: generating enriched logs

  • Introduction to the logs processing use case
  • Generating raw logs
  • Generating enriched logs (sketched below)
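As an illustration of what enrichment means here, the sketch below joins a stream of raw logs with static reference data using Spark Streaming. The course reads the stream from Kafka; the socket source, the reference-file path, and the field layout below are assumptions made only to keep the sketch short.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import scala.Tuple2;

public class EnrichLogs {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("EnrichLogs");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        // Static reference data: user id -> user name (path and layout are illustrative).
        JavaPairRDD<String, String> users = jssc.sparkContext()
                .textFile("hdfs:///user/student/users.csv")
                .mapToPair(line -> new Tuple2<>(line.split(";")[0], line.split(";")[1]))
                .cache();

        // Raw logs arrive as "userId;url"; a socket source stands in for Kafka here.
        JavaDStream<String> rawLogs = jssc.socketTextStream("localhost", 9999);

        // Enrich each log line with the user name by joining each micro-batch with the reference data.
        JavaPairDStream<String, Tuple2<String, String>> enriched = rawLogs
                .mapToPair(line -> new Tuple2<>(line.split(";")[0], line.split(";")[1]))
                .transformToPair(batch -> batch.join(users));

        enriched.print();

        jssc.start();
        jssc.awaitTermination();
    }
}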

Logs processing use case: monitoring

  • Monitoring enriched logs

Logs processing use case: reporting

  • Generating reports based on data windows (sketched below)
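Window-based reporting means aggregating over a sliding slice of the stream rather than over single micro-batches. The sketch below is a minimal Spark Streaming example of that idea; the socket source, the log format, and the 30-second window with a 10-second slide are illustrative assumptions.

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import scala.Tuple2;

public class WindowedReport {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("WindowedReport");
        // 5-second micro-batches; the window and slide durations below are multiples of this.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        // A socket source stands in for the Kafka topic of enriched logs.
        JavaDStream<String> logs = jssc.socketTextStream("localhost", 9999);

        // Count log lines per status code over the last 30 seconds, recomputed every 10 seconds.
        JavaPairDStream<String, Integer> countsByCode = logs
                .mapToPair(line -> new Tuple2<>(line.split(" ")[0], 1))  // assumes the code is the first field
                .reduceByKeyAndWindow(Integer::sum, Durations.seconds(30), Durations.seconds(10));

        countsByCode.print();

        jssc.start();
        jssc.awaitTermination();
    }
}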

Logs processing use case: batch analysis

  • Ingesting streams of data
  • Analyzing logs with a batch Job (sketched below)
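Once the streaming Job has ingested the logs into HDFS, a batch Job can analyze them at rest. The sketch below is a minimal hand-written Spark equivalent of that kind of analysis; the input path and the field positions are illustrative assumptions.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class BatchLogAnalysis {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("BatchLogAnalysis");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Directory written by the ingestion Job; path and field layout are illustrative.
            JavaRDD<String> logs = sc.textFile("hdfs:///user/student/logs/enriched/*");

            // Count requests per user, then keep the ten most active users.
            JavaPairRDD<String, Integer> byUser = logs
                    .mapToPair(line -> new Tuple2<>(line.split(";")[1], 1))  // assumes user id in the second field
                    .reduceByKey(Integer::sum);

            byUser.mapToPair(Tuple2::swap)
                  .sortByKey(false)
                  .take(10)
                  .forEach(t -> System.out.println(t._2() + ": " + t._1()));
        }
    }
}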