We all know that enterprise data needs change constantly, and lately that change has come at an increasing pace. Companies that were once processing all their big data on-premises have suddenly moved to the cloud. Frameworks we once knew and loved have become obsolete. One debate that still rages on, however, is how to process data faster. There are generally two heralded approaches to processing data today:
- Batch Processing
- Stream Processing
Batch processing deals with non-continuous data. It’s fantastic at churning through large data sets quickly, but it doesn’t come close to meeting the real-time requirements of most of today’s businesses. Stream processing, on the other hand, handles continuous data and is really the golden key to turning big data into fast data.
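To make the distinction concrete, here is a minimal Python sketch of the two models. It isn’t tied to any particular framework; the running-average logic and the sample readings are purely illustrative:

```python
def batch_average(records):
    """Batch model: wait until the complete data set has landed, then compute once."""
    return sum(records) / len(records)


class StreamingAverage:
    """Streaming model: update the result incrementally as each event arrives."""

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, value):
        self.count += 1
        self.total += value
        # A fresh answer is available immediately after every event.
        return self.total / self.count


# Batch: one answer, only after all the data is in.
print(batch_average([10, 20, 30]))  # 20.0

# Streaming: an up-to-date answer after each new reading.
avg = StreamingAverage()
for reading in [10, 20, 30]:
    current = avg.update(reading)
print(current)  # 20.0
```

Both end up at the same answer here; the difference is *when* the answer is available, which is exactly what separates big data from fast data.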
Each approach has its pros and cons, and at the end of the day the choice between batch and streaming comes down to your business use case. Still, there are questions worth asking before you settle on a data processing approach. In our latest episode of Craft Beer and Data, Mark Balkenende and I dove deep into the batch vs. streaming debate.
We answered some interesting questions like “Is data ever really real-time?” We also debated whether the Lambda architecture is really dead, and sifted through some of the considerations you should weigh when deciding between batch and stream processing.
Before we jump into the video (small plug), we are taking Craft Beer and Data on the road! Check out our events page and come attend an event in your area. We’d also love to hear your thoughts on the batch vs. streaming debate. Tweet me your thoughts @Nick_Piette.