Hit the “Easy” Button with Talend & Databricks to Process Data at Scale in the Cloud

  • Isabelle Nuage
Isabelle Nuage is Director of Product Marketing for Big Data at Talend. Her fields of expertise include data integration, big data, and analytics. Isabelle brings more than 19 years of experience in the software industry, having held various leadership positions in product marketing at SAP and Business Objects.

The challenge with big data today is that 85% of on-premises big data projects fail to meet expectations, and over two-thirds of big data's potential goes unrealized by organizations. Why is that, you ask? Simply put, on-premises big data programs are not that easy.

Your data engineers need to be proficient in new programming languages and architectural models, while your system admins need to learn how to set up and manage a data lake. So you end up not focusing on what you do best: instead, you pay top dollar for data engineers with specialized programming skills and spend (waste) a lot of time configuring infrastructure, all without reaping the benefits of a big data program.

In short, making big data available at scale is hard and can be very expensive, and that complexity is killing big data projects.

Welcoming modern data engineering in the cloud

Data engineers ensure the data the organization is using is clean, reliable, and prepped for whatever use cases may present themselves. In spite of the challenges with on-premises big data, technologies like Apache Spark have become a best practice thanks to their ability to scale as jobs get larger and SLAs become more stringent.

But using Spark on-premises, as we've highlighted, is not that easy. The market and technologies have reached an inflection point where it is broadly agreed that what is needed is the ability to:

  1. Eliminate the complexity of system management, to lower operations costs and increase agility
  2. Scale processing power up and down automatically, growing and shrinking as needed while paying only for what you use
  3. Enable a broader set of users to utilize these services without requiring a major upgrade in their education or hiring expensive external expertise

To simplify success with big data programs, market leaders have moved from an on-premises model to a cloud model. Cloud-based environments can store massive volumes of data in all varieties, from structured to unstructured. What's needed now is the ability to process that data for consumption by BI tools, data science, or machine learning.

Databricks, founded by the original creators of Apache Spark, provides the Databricks Unified Analytics Platform, which accelerates innovation by bringing data and ML together. This service solves many of the hard challenges discussed above by automatically handling software provisioning, upgrades, and management. Databricks also manages scaling up and down to ensure that you have the right amount of processing power, and it saves money by shutting down clusters when they are not needed. By taking this workload off the table, Databricks lets customers focus on the next level of analytics: machine learning and data science.
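To make that autoscaling and auto-termination behavior concrete, here is a minimal sketch of creating such a cluster through the Databricks Clusters REST API (version 2.0). The workspace URL, token, runtime version, and node type below are placeholders, not values from this article; substitute your own.

```python
import requests

# Placeholders: replace with your workspace URL and a personal access token.
DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

cluster_spec = {
    "cluster_name": "etl-autoscale-demo",
    "spark_version": "7.3.x-scala2.12",   # pick a runtime available in your workspace
    "node_type_id": "i3.xlarge",          # AWS example; use an Azure VM type on Azure
    # Grow and shrink with the workload instead of sizing for the peak:
    "autoscale": {"min_workers": 2, "max_workers": 8},
    # Shut the cluster down after 30 idle minutes so you stop paying for it:
    "autotermination_minutes": 30,
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
)
resp.raise_for_status()
print("Created cluster:", resp.json()["cluster_id"])
```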

While Databricks solves two of the three big challenges posed, there is still the third: making the technology accessible to "regular" data engineers who do not have the programming expertise to support these massively parallel cloud environments. That is where Talend comes in. Talend provides a familiar data flow diagram design surface and converts that diagram into an expertly programmed data processing job native to Databricks on Azure or AWS.
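Talend generates the Spark code for you, but to give a sense of what such a generated job boils down to, here is a hypothetical hand-written PySpark equivalent of a simple read-cleanse-aggregate-write flow. The paths and column names are invented for illustration; they are not taken from Talend's actual output.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_revenue").getOrCreate()

# Read raw CSV files from cloud storage (hypothetical bucket and layout).
orders = spark.read.option("header", True).csv("s3://my-bucket/raw/orders/")

daily_revenue = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())          # cleanse rows with bad amounts
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))        # aggregate revenue per day
)

# Write the curated result back to cloud storage for BI consumption.
daily_revenue.write.mode("overwrite").parquet("s3://my-bucket/curated/daily_revenue/")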

Together, Databricks and Talend provide a massively scalable environment with very low configuration overhead, plus a highly productive, easy-to-learn development tool for designing and deploying your data engineering jobs. In essence: do more with less work, expense, and time.

For further explanation and a few examples, keep reading.

Example use case

Watch these videos and see for yourself how easy it is to run serverless Spark in the cloud.

Movie recommendation use case with machine learning and Spark Serverless
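As a rough sketch of what a movie-recommendation job looks like in code, the snippet below trains a collaborative-filtering model with Spark MLlib's ALS on a MovieLens-style ratings table. The file path and column names are assumptions for illustration; the demo in the video may differ in its details.

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("movie_recs").getOrCreate()

# Hypothetical MovieLens-style ratings file with userId, movieId, rating columns.
ratings = (
    spark.read.option("header", True)
    .csv("/databricks-datasets/ml/ratings.csv")
    .selectExpr(
        "cast(userId as int) userId",
        "cast(movieId as int) movieId",
        "cast(rating as float) rating",
    )
)

train, test = ratings.randomSplit([0.8, 0.2], seed=42)

als = ALS(
    userCol="userId",
    itemCol="movieId",
    ratingCol="rating",
    rank=10,
    maxIter=10,
    regParam=0.1,
    coldStartStrategy="drop",  # skip users/items unseen in training when scoring
)
model = als.fit(train)

# Produce the top 5 movie recommendations for every user.
model.recommendForAllUsers(5).show(truncate=False)
```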

Create and connect to a Databricks Spark Cluster with Talend

Click here to learn more about serverless and how to modernize your architecture.

Check out our GigaOm webinar with Databricks and Talend to learn how to accelerate your analytics and machine learning.
