Applying Machine Learning to IoT Sensors
If you have already seen our demo on Analyzing Real-time Streaming IoT (Internet of Things) Data, you're probably wondering how we trained our classification model to be able to predict user activity. Let's first discuss the background of the demo, then how we classify the data, and finally how we chose an appropriate model.
The objective of the IoT demo is to demonstrate Talend's real-time streaming and machine learning capabilities. This means that Talend receives accelerometer sensor data from mobile phones in real-time, pushes the data into a message queue, and performs machine learning to classify the data for analysis. Did I mention this is all done without hand coding?
On the processing side, a REST endpoint is created using Talend to which the sensor data can be sent. The sensor data is parsed and pushed into a message queue (Kafka). Once the data is in the message queue, a Talend Big Data Streaming job reads the messages off the queue using a sliding window, passes the data through a machine learning model, and prepares the data for visualization.
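In the demo, the windowing is configured graphically in a Talend Big Data Streaming job rather than written by hand. For readers curious about the mechanics, here is a toy, in-memory sketch in plain Python of what a sliding window over a stream of sensor events does (the `size` and `step` values are illustrative, not the demo's actual configuration):

```python
from collections import deque

def sliding_windows(events, size=3, step=1):
    """Yield overlapping windows over a stream of sensor events.

    A toy stand-in for the sliding window a streaming job applies
    to messages read off the queue.
    """
    window = deque(maxlen=size)
    for i, event in enumerate(events):
        window.append(event)
        # Emit once the window is full, advancing by `step` events.
        if len(window) == size and (i - size + 1) % step == 0:
            yield list(window)

# Accelerometer readings arriving one at a time.
stream = [0.1, 0.2, 0.3, 0.4, 0.5]
print(list(sliding_windows(stream, size=3, step=1)))
# → [[0.1, 0.2, 0.3], [0.2, 0.3, 0.4], [0.3, 0.4, 0.5]]
```

Each emitted window can then be passed through the model as one unit, so classifications reflect a short span of recent activity rather than a single reading.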
Show Me the Data
The data being processed comes from an accelerometer sensor on a mobile device. More specifically, we are processing the linear acceleration of X, Y, and Z axes. Just performing a quick and dirty analysis from a graph of the sensor data, we are presented with the following:
The acceleration for each axis is graphed in m/s². We can visually deduce that there are three phases of activity: low, high, and medium, respectively. To translate this into a machine learning model, we expect the chosen model to classify the sensor data as low, medium, or high. In machine learning, classification refers to identifying which category an observation belongs to. To begin the exercise of choosing a classification model from Spark MLlib, we examine some popular models: Naive Bayes, Logistic Regression, and Random Forest.
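To make the "quick and dirty" reading of the graph concrete, here is a minimal sketch of binning a single reading into low, medium, or high activity using the magnitude of the acceleration vector. The threshold values are hypothetical, chosen purely for illustration; real cutoffs would come from inspecting the graphed sensor data:

```python
import math

# Hypothetical cutoffs in m/s², for illustration only.
LOW_MAX, MEDIUM_MAX = 2.0, 8.0

def activity_level(x, y, z):
    """Bin one accelerometer reading into low/medium/high activity
    by the magnitude of its linear-acceleration vector."""
    magnitude = math.sqrt(x * x + y * y + z * z)
    if magnitude < LOW_MAX:
        return "low"
    if magnitude < MEDIUM_MAX:
        return "medium"
    return "high"

print(activity_level(0.3, 0.2, 0.1))  # → low
print(activity_level(3.0, 2.0, 1.0))  # → medium (magnitude ≈ 3.7)
print(activity_level(9.0, 5.0, 4.0))  # → high (magnitude ≈ 11.0)
```

A trained model replaces these hand-picked thresholds with boundaries it learns from labeled examples, which is exactly what the rest of this post builds toward.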
Choosing a Model
The Naive Bayes model is generally used for text classification, and since we are dealing with decimal numbers, it wouldn't fit well. Next, the Logistic Regression model doesn't handle the multi-class classification that we need for low, medium, and high activity. Lastly, the Random Forest model allows us to classify against each axis. The Random Forest model is also efficient on larger data sets and can handle thousands of input variables.
The Random Forest model works by taking the training set and performing random sampling to create subsets of data or random "trees". After many trees have been created, it creates a random "forest". The benefit of having many trees is that we'll get a more accurate prediction of classification for the data. For example, if 7 out of 10 trees in the forest suggest that a particular sensor event is walking, the classification is expected to be walking.
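The "7 out of 10 trees" example above is just a majority vote. As a toy sketch (plain Python, not the Spark MLlib implementation), where each "tree" is reduced to a function that maps an event to a label:

```python
from collections import Counter

def forest_predict(trees, event):
    """Classify one event by majority vote across the trees."""
    votes = Counter(tree(event) for tree in trees)
    label, _ = votes.most_common(1)[0]
    return label

# Ten toy "trees": 7 vote "walking", 3 vote "running",
# mirroring the example above.
trees = [lambda e: "walking"] * 7 + [lambda e: "running"] * 3
print(forest_predict(trees, {"x": 1.2, "y": 0.4, "z": 9.8}))  # → walking
```

In a real forest, each tree is itself trained on a random sample of the data and a random subset of features, which is what makes the combined vote more robust than any single tree.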
The Talend Real-Time Big Data Platform comes with pre-built components for machine learning. The first step in using the Random Forest model is to train it using hand-made classifications. This means that we take the data from our quick and dirty analysis and add an activity label. This training set will be used by a model encoder to output a model that classifies the activity during streaming. The label in the training set will be associated with human activity, specifically Resting, Walking, and Running. The training set looks like:
The specific training set used in generating this model had about 150 events for each activity. Taking the generated model and comparing the hand-made classified labels against its outputs yielded 97% accuracy, which is to be expected, since the model is being scored against the very data it was trained on.
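The accuracy figure here is simply the fraction of events whose predicted label matches the hand-made label. A minimal sketch of that comparison, using made-up labels (the counts below are illustrative, not the demo's actual data):

```python
def accuracy(predicted, actual):
    """Fraction of predictions matching the hand-made labels."""
    matches = sum(p == a for p, a in zip(predicted, actual))
    return matches / len(actual)

# Toy check: 39 of 40 labels recovered.
labels = ["Resting"] * 20 + ["Walking"] * 20
preds = list(labels)
preds[0] = "Running"  # one misclassification
print(f"{accuracy(preds, labels):.1%}")  # → 97.5%
```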
To assess the accuracy of the machine learning model, we use a K-Fold Cross Validation technique and run 10 individual learning exercises. Each exercise takes a partition of the training set which is then used as validation data. This technique yielded a 95% accuracy with our chosen model. In a future blog, we’ll explore this validation technique and how to build it using Talend Studio.
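The splitting logic behind k-fold cross-validation can be sketched in a few lines of plain Python (the demo builds this graphically in Talend Studio; this toy version only shows how the partitions are formed):

```python
def k_fold_splits(data, k=10):
    """Yield (train, validation) pairs for k-fold cross-validation.

    Each of the k rounds holds out a different contiguous partition
    as validation data and trains on the remaining events.
    """
    fold_size = len(data) // k
    for i in range(k):
        start, end = i * fold_size, (i + 1) * fold_size
        validation = data[start:end]
        train = data[:start] + data[end:]
        yield train, validation

# 20 toy events split into 10 folds: each round validates on 2 events
# and trains on the other 18.
events = list(range(20))
for train, validation in k_fold_splits(events, k=10):
    assert len(validation) == 2 and len(train) == 18
```

Averaging the accuracy across the ten rounds gives a fairer estimate than scoring on the training data alone, which is why the 95% figure here is lower than the 97% above. In practice the data is usually shuffled before splitting so each fold is representative.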
The last step is to use the model in the streaming piece of the demo to classify our data. Before the data is classified, it can also be captured and stored in an archive for future analysis. The classified data is then prepared for visualization.
The most remarkable piece of this exercise is the fact that there was no hand coding required. Everything from creating a REST service to acquire data, to the Spark Streaming job which implements a machine learning model, was all designed using a graphical user environment. If you haven’t already seen the demo, get in touch with us so you can experience how easy it is to use Talend for your next Big Data project.