Talend Big Data Advanced – Spark Streaming

Talend provides a development environment that lets you interact with many source and target Big Data stores, without having to learn and write complicated code.

This course covers Big Data streaming Jobs that use the Spark Streaming framework.

Duration: 1 day (7 hours)
Target audience: Anyone who wants to use Talend Studio to interact with Big Data systems
Prerequisites: Completion of Talend Big Data Basics
Course objectives

After completing this course, you will be able to:

  • Connect to a Hadoop cluster from a Talend Job
  • Use context variables and metadata
  • Read and write data in HDFS or HBase in a Big Data batch or Big Data streaming Job
  • Read and write messages in a Kafka topic in real time
  • Configure a Big Data batch Job to use the Spark framework
  • Configure a Big Data streaming Job to use the Spark Streaming framework
Course agenda

Spark in context

  • Concepts

Introduction to Kafka

  • Monitoring the Hadoop cluster
  • Understanding Kafka basics
  • Publishing messages to a Kafka topic
  • Consuming messages (see the sketch below)
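
In the course, these steps are built graphically with Talend's Kafka components, so no code is written by hand. Purely as a point of reference, a minimal Scala sketch of the same publish-and-consume flow using the plain Kafka client API might look as follows; the broker address (localhost:9092), topic name (logs), and consumer group are illustrative assumptions, not values from the course material.

    import java.time.Duration
    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.consumer.KafkaConsumer
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
    import scala.jdk.CollectionConverters._

    object KafkaRoundTrip {
      def main(args: Array[String]): Unit = {
        // Publish one message to the topic (hypothetical broker and topic).
        val producerProps = new Properties()
        producerProps.put("bootstrap.servers", "localhost:9092")
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        val producer = new KafkaProducer[String, String](producerProps)
        producer.send(new ProducerRecord[String, String]("logs", "key1", "a raw log line"))
        producer.close()

        // Consume the messages back from the same topic.
        val consumerProps = new Properties()
        consumerProps.put("bootstrap.servers", "localhost:9092")
        consumerProps.put("group.id", "demo-consumer")
        consumerProps.put("auto.offset.reset", "earliest")
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        val consumer = new KafkaConsumer[String, String](consumerProps)
        consumer.subscribe(Collections.singletonList("logs"))
        val records = consumer.poll(Duration.ofSeconds(5))
        records.asScala.foreach(r => println(s"${r.key} -> ${r.value}"))
        consumer.close()
      }
    }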

Introduction to Spark

  • Understanding Spark basics
  • Analyzing customer data
  • Producing and consuming messages in real time (see the sketch below)
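
The real-time Job is again assembled from graphical components and run on Spark Streaming. As a rough illustration of what such a Job does conceptually, the following Scala sketch subscribes a Spark Streaming (DStream) application to a Kafka topic and prints each micro-batch; the topic name (customers), consumer group, broker address, and batch interval are assumptions made for the example.

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

    object StreamFromKafka {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("StreamFromKafka").setMaster("local[2]")
        val ssc  = new StreamingContext(conf, Seconds(5))   // 5-second micro-batches

        // Kafka connection settings (illustrative values).
        val kafkaParams = Map[String, Object](
          "bootstrap.servers"  -> "localhost:9092",
          "key.deserializer"   -> classOf[StringDeserializer],
          "value.deserializer" -> classOf[StringDeserializer],
          "group.id"           -> "customer-analysis",
          "auto.offset.reset"  -> "latest"
        )

        // Subscribe to the topic and print each micro-batch of messages.
        val stream = KafkaUtils.createDirectStream[String, String](
          ssc, PreferConsistent, Subscribe[String, String](Array("customers"), kafkaParams))
        stream.map(record => record.value).print()

        ssc.start()
        ssc.awaitTermination()
      }
    }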

Logs processing use case – generating enriched logs

  • Introduction to the logs processing use case
  • Generating raw logs
  • Generating enriched logs (see the sketch below)
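
In the course this enrichment is built graphically in the streaming Job. As a sketch of the idea only, the following Scala/Spark Streaming snippet appends reference data to each raw log line; the field layout, the lookup table, and the socket source are hypothetical (the course reads its raw logs from Kafka).

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object EnrichLogs {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("EnrichLogs").setMaster("local[2]")
        val ssc  = new StreamingContext(conf, Seconds(5))

        // Raw logs arrive as comma-separated lines, e.g. "2017-01-01 10:00:00,u42,/index.html"
        // (layout and socket source are illustrative assumptions).
        val rawLogs = ssc.socketTextStream("localhost", 9999)

        // Small reference table used to enrich each event (hypothetical data).
        val userRegion = Map("u42" -> "EMEA", "u7" -> "APAC")

        // Enrichment: append the user's region to every raw line.
        val enriched = rawLogs.map { line =>
          val user = line.split(",")(1)
          line + "," + userRegion.getOrElse(user, "UNKNOWN")
        }

        enriched.print()
        ssc.start()
        ssc.awaitTermination()
      }
    }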

Logs processing use case – monitoring

  • Monitoring enriched logs

Logs processing use case – reporting

  • Generating reports based on data windows (see the sketch below)
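
Windows are configured graphically in the streaming Job; conceptually this corresponds to Spark Streaming's window operations. Below is a minimal sketch of a windowed count, assuming a 60-second window recomputed every 10 seconds and a comma-separated input whose first field is the grouping key (all example values, not the course data).

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object WindowedReport {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("WindowedReport").setMaster("local[2]")
        val ssc  = new StreamingContext(conf, Seconds(5))   // 5-second micro-batches

        // Socket source only to keep the sketch self-contained;
        // the course reads enriched logs from Kafka instead.
        val lines = ssc.socketTextStream("localhost", 9999)

        // Count events per key over a sliding window:
        // 60-second window, recomputed every 10 seconds (example values).
        val counts = lines
          .map(line => (line.split(",")(0), 1))
          .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(60), Seconds(10))

        counts.print()
        ssc.start()
        ssc.awaitTermination()
      }
    }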

Logs processing use case – batch analysis

  • Ingesting streams of data
  • Analyzing logs with a batch Job (see the sketch below)
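
The streams ingested into HDFS by the streaming Job are then analyzed with a Big Data batch Job running on Spark. As a non-Talend illustration of that last step, the following Scala sketch reads the ingested files back from HDFS and aggregates them; the HDFS path and the column layout are assumptions for the example.

    import org.apache.spark.sql.SparkSession

    object BatchLogAnalysis {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("BatchLogAnalysis")
          .master("local[2]")
          .getOrCreate()

        // Read the log files ingested into HDFS by the streaming Job
        // (path and comma-separated layout are illustrative assumptions).
        val logs = spark.read
          .option("header", "false")
          .csv("hdfs:///user/student/enriched_logs/")
          .toDF("timestamp", "user", "url", "region")

        // Simple batch analysis: number of requests per region.
        logs.groupBy("region").count().orderBy("region").show()

        spark.stop()
      }
    }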