Machine Learning for your Infrastructure: Anomaly Detection with Elastic + X-Pack
Introduction
The world continues to go through digital transformation at an accelerating pace. Modern applications and infrastructure continue to expand, and operational complexity continues to grow. According to a recent ManageEngine Application Performance Monitoring Survey:
28 percent use ad-hoc scripts to detect issues in over 50 percent of their applications.
32 percent learn about application performance issues from end users.
59 percent trust monitoring tools to identify most performance deviations.
Most enterprises and web-scale companies have instrumentation and monitoring capabilities built around an Elasticsearch cluster. They collect large amounts of data but struggle to use it effectively. This data can be used to improve performance and uptime, drive root cause analysis, and predict incidents.
IT Operations & Machine Learning
Here is the main question: how do we make sense of the huge piles of collected data? The first step is to understand the correlations between the time series. But understanding correlations alone is not enough, since correlation does not imply causation. We need a practical and scalable approach to understanding the cause-effect relationships between data sources and events across a complex infrastructure of VMs, containers, networks, microservices, regions, and so on.
Very often, a problem in one component causes something to go wrong in another. In such cases, historical operational data can be used to identify the root cause by investigating a chain of intermediate causes and effects. Machine learning is particularly useful for problems where we need to identify “what changed”, since machine learning algorithms can analyze existing data to learn its patterns, making it easier to recognize the cause. This is unsupervised learning: the algorithm learns from the data itself and identifies similar patterns when they come along again.
Let's see how you can set up Elastic + X-Pack to enable anomaly detection for your infrastructure and applications.
Anomaly Detection using Elastic's machine learning with X-Pack
Step I: Setup
1. Set up Elasticsearch:
The Elastic documentation recommends Oracle JDK version 1.8.0_131. Check which Java version is installed on your system; it should be at least Java 8. Install or upgrade if required.
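As an illustration, here is a minimal sketch of the installation, assuming the 5.x tar.gz distribution where X-Pack is installed as an Elasticsearch plugin (adjust the version to match your environment):

java -version                       # should report a 1.8.x JDK
curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.0.tar.gz
tar -xzf elasticsearch-5.5.0.tar.gz && cd elasticsearch-5.5.0
bin/elasticsearch-plugin install x-pack
bin/elasticsearch                   # starts a node on localhost:9200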
2. Set up Kibana:
Install and start Kibana with the X-Pack plugin, then open it at http://localhost:5601/. Log in as the built-in user elastic with the default password changeme.
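A minimal sketch of the Kibana side, again assuming a 5.x tar.gz distribution with X-Pack as a plugin and Elasticsearch already running locally with the default credentials:

curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-5.5.0-linux-x86_64.tar.gz
tar -xzf kibana-5.5.0-linux-x86_64.tar.gz && cd kibana-5.5.0-linux-x86_64
bin/kibana-plugin install x-pack
bin/kibana                          # serves the UI on localhost:5601
# In another terminal, confirm that Elasticsearch accepts the built-in credentials:
curl -u elastic:changeme http://localhost:9200/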
3. Metricbeat:
Metricbeat monitors servers and the services they host by collecting metrics from the operating system and from running services. In this blog, we will use it to collect CPU utilization metrics from our local system.
By default, Metricbeat is configured to send the collected data to Elasticsearch running on localhost. If your Elasticsearch is hosted on another server, change the host and authentication credentials in the metricbeat.yml file.
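A minimal sketch of the relevant configuration, assuming the tar.gz distribution of Metricbeat and an X-Pack-secured cluster (the host and credentials below are placeholders):

# In metricbeat.yml, point the output at your cluster and supply credentials, e.g.:
#   output.elasticsearch:
#     hosts: ["localhost:9200"]
#     username: "elastic"
#     password: "changeme"
# Then start Metricbeat with that configuration:
./metricbeat -e -c metricbeat.yml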
Now all the setup is done. Let’s move to Step II to prepare the time series data, and then to Step III to create machine learning jobs.
Step II: Time Series data
Real-time data: Metricbeat provides the real-time series data that will be used for unsupervised learning. Follow the steps below to define the index pattern metricbeat-* in Kibana so that it can be searched against in Elasticsearch:
- Go to Management -> Index Patterns
- Provide the Index name or pattern as metricbeat-*
- Select @timestamp as the Time filter field name
- Click Create
You will not be able to create the index pattern if Elasticsearch does not yet contain any Metricbeat data. Make sure Metricbeat is running and its output is configured to send data to Elasticsearch; a quick check is shown below.
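To confirm that Metricbeat data is actually arriving (assuming Elasticsearch on localhost with the default X-Pack credentials), ask for the document count; it should be greater than zero:

curl -u elastic:changeme 'http://localhost:9200/metricbeat-*/_count?pretty'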
Saved historic data: To quickly see how machine learning detects anomalies, you can also use sample data provided by Elastic. Download the sample data by clicking here.
Extract the files into a folder: tar -zxvf server_metrics.tar.gz
Download this script. It will be used to upload the sample data to Elasticsearch.
Provide execute permissions to the file: chmod +x upload_server-metrics.sh
Run the script: ./upload_server-metrics.sh
Just as we created an index pattern for the Metricbeat data, create an index pattern server-metrics* for the sample data.
Step III: Creating Machine Learning jobs
There are two scenarios in which data is considered anomalous: first, when the behavior of a key indicator changes over time relative to its previous behavior; second, when, within a population, the behavior of one entity deviates from the other entities in the population for a single key indicator.
To detect these anomalies, there are three types of jobs we can create:
Single metric job: Detects Scenario 1 anomalies over a single key performance indicator.
Multi-metric job: Also detects Scenario 1 anomalies, but can track more than one performance indicator, such as CPU utilization along with memory utilization.
Advanced job: Used to detect Scenario 2 (population) anomalies.
For simplicity, we will create the following single metric jobs:
Tracking CPU utilization: using Metricbeat data
Tracking total requests made on the server: using the sample server data
Follow the steps below to create the single metric jobs:
Job 1: Tracking CPU utilization
Job 2: Tracking total requests made on the server
Go to http://localhost:5601/
Go to the Machine Learning tab on the left panel of Kibana.
Click on Create new job
Click Create single metric job
Select the index pattern we created in Step II, i.e. metricbeat-* and server-metrics* respectively
Configure the jobs by providing the following values:
Aggregation: Select the aggregation function to apply to the field of the data being analyzed.
Field: A drop-down listing all the fields available for the selected index pattern.
Bucket span: The interval used for analysis. The aggregation function is applied to the selected field over each interval of this length.
If your data contains many empty buckets, i.e. the data is sparse, and you do not want this to be treated as anomalous, check the sparse data checkbox (if it appears).
Click on "Use full <index pattern> data" to use all the available data for analysis.
Click the play symbol
Provide a job name and description
Click on Create Job
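If you prefer to script job creation instead of using the wizard, an equivalent job can be defined through the X-Pack machine learning API. The sketch below is illustrative only: it assumes X-Pack 5.5 or later on localhost with the default credentials, a hypothetical job name cpu_utilization, and a max aggregation on the Metricbeat field system.cpu.total.pct with a 15-minute bucket span:

# Create the anomaly detection job
curl -u elastic:changeme -H 'Content-Type: application/json' -X PUT 'http://localhost:9200/_xpack/ml/anomaly_detectors/cpu_utilization' -d '
{
  "description": "Max CPU utilization from Metricbeat data",
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [ { "function": "max", "field_name": "system.cpu.total.pct" } ]
  },
  "data_description": { "time_field": "@timestamp" }
}'

# Create a datafeed that pulls data from the metricbeat-* indices into the job
# (some older 5.x releases also expect a "types" array here)
curl -u elastic:changeme -H 'Content-Type: application/json' -X PUT 'http://localhost:9200/_xpack/ml/datafeeds/datafeed-cpu_utilization' -d '
{
  "job_id": "cpu_utilization",
  "indices": [ "metricbeat-*" ]
}'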
After the job is created, the available data is analyzed. Click View Results to see a chart showing the actual values along with the upper and lower bounds of the predicted value. If an actual value lies outside this range, it is considered anomalous. The color of the circles represents the severity level.
Click the Machine Learning tab in the left panel. The jobs we created will be listed there.
You will see a list of actions for every job you have created.
Since Metricbeat stores data every minute for Job 1, we can feed data to the job in real time. Click the play button to start the datafeed. As more and more data arrives, the predictions will improve.
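Via the API, the equivalent of pressing play is opening the job and starting its datafeed (same assumptions and hypothetical job name as in the earlier sketch):

curl -u elastic:changeme -X POST 'http://localhost:9200/_xpack/ml/anomaly_detectors/cpu_utilization/_open'
curl -u elastic:changeme -X POST 'http://localhost:9200/_xpack/ml/datafeeds/datafeed-cpu_utilization/_start'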
You can see the details of the anomalies by clicking Anomaly Explorer.
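The same anomaly details can also be retrieved through the results API, which is handy for feeding dashboards or alerting (again assuming the hypothetical cpu_utilization job and default credentials):

curl -u elastic:changeme 'http://localhost:9200/_xpack/ml/anomaly_detectors/cpu_utilization/results/records?pretty'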
We have seen how machine learning can be used to find patterns across different metrics and to detect anomalies. After identifying anomalies, we still need to establish the context of those events, for example, which other factors are contributing to the problem. In such cases, we can troubleshoot further by creating multi-metric jobs.