Tutorial
6 min read

Alert backoff with Flink CEP

Flink Complex Event Processing (CEP) provides an amazing API for matching patterns within streams. It was introduced in 2016 with an interesting blog post presenting CEP usage scenarios for monitoring and alert detection. When implementing it in real life, one may find an important feature missing: backoff. Once an alert is triggered, we do not want more of the same alerts. For example, if you check for low disk space every minute, then once an alert is raised, subsequent checks should not trigger another alert for a specified interval, such as an hour or a day.

In this post, we present how to implement backoff in Flink CEP. The same functionality can be achieved with the Flink DataStream API or with Pattern Recognition in SQL via the MATCH_RECOGNIZE clause. Please refer to our other Flink post for more information on the available Flink APIs and a comparison between them. The scenario described in this post is a real case study from one of our customers; we decided to use CEP so that we can easily extend the solution when the pattern matching requirements become more complex.

We start with creating simple Flink CEP logic that matches a pattern that we consider to be an alert. In our scenario this was as simple as filtering some specific events. Testing is our friend from the first line of code, so we can start with a test that checks whether the filtering logic works as expected. A sketch of the event type and the filtering logic is shown below.
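
The original post presented its code as screenshots, so the snippets in this rewrite are minimal sketches rather than the exact original code. We assume a hypothetical DiskEvent POJO; the field names customer, diskUsagePct, patternEnd and timestamp are our assumptions:

import org.apache.flink.streaming.api.datastream.DataStream;

public class DiskEvent {
    public String customer;      // used later for per-customer alerts
    public double diskUsagePct;  // the metric our alert is based on
    public boolean patternEnd;   // true only for the synthetic PATTERN_END marker
    public long timestamp;       // event time, epoch millis

    public DiskEvent() {}        // Flink POJO serialization needs a no-arg constructor

    public DiskEvent(String customer, double diskUsagePct, boolean patternEnd, long timestamp) {
        this.customer = customer;
        this.diskUsagePct = diskUsagePct;
        this.patternEnd = patternEnd;
        this.timestamp = timestamp;
    }
}

// The alert condition itself is plain filtering on the stream:
DataStream<DiskEvent> alertCandidates = events.filter(e -> e.diskUsagePct > 90.0);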

Later on, we extend our test suite with the following scenarios (a sketch of the first one follows the list):

  • Create multiple events that trigger an alert and check that only a single alert is raised.
  • Create multiple alert-triggering events that span beyond the backoff window and assert that more than one alert is raised.
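
A sketch of the first scenario could look like the test below. buildAlertPipeline is a hypothetical helper wrapping the pipeline assembled in the rest of this post, and executeAndCollect has been available since Flink 1.12:

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AlertBackoffTest {

    @Test
    public void eventsWithinBackoffWindowRaiseSingleAlert() throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        // Three alert-triggering events, one minute apart - all inside the 24h backoff.
        long t0 = 0L;
        DataStream<DiskEvent> events = env.fromElements(
                new DiskEvent("acme", 95.0, false, t0),
                new DiskEvent("acme", 96.0, false, t0 + Duration.ofMinutes(1).toMillis()),
                new DiskEvent("acme", 97.0, false, t0 + Duration.ofMinutes(2).toMillis()));

        List<String> alerts = new ArrayList<>();
        buildAlertPipeline(events).executeAndCollect().forEachRemaining(alerts::add);

        // The two follow-up events fall inside the backoff window of the first alert.
        assertEquals(1, alerts.size());
    }

    // Hypothetical helper: applies the timestamp assignment, PATTERN_END
    // duplication, keyBy and CEP pattern described in the rest of this post.
    private DataStream<String> buildAlertPipeline(DataStream<DiskEvent> events) {
        throw new UnsupportedOperationException("see the snippets later in this post");
    }
}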

We are going to use CEP’s SKIP_PAST_LAST_EVENT after-match skip strategy, which controls how many matches a single event can be assigned to. According to the CEP documentation, for a pattern b+ c and a data stream b1 b2 b3 c, only the match b1 b2 b3 c will be returned.

In other words, this assures that if an event belongs to a match, it cannot belong to any other match until that one ends. That is exactly what we are looking for; we just need a single pattern match to span the whole backoff time (let’s say 24 hours).
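
To make the documentation example concrete, here is a hedged sketch of that b+ c pattern over plain strings, with the skip strategy attached to the pattern’s first state:

import org.apache.flink.cep.nfa.aftermatch.AfterMatchSkipStrategy;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;

// For the stream "b1", "b2", "b3", "c" this pattern emits only b1 b2 b3 c;
// with the default NO_SKIP strategy it would also emit shorter matches
// such as b2 b3 c and b3 c.
Pattern<String, ?> bPlusC = Pattern
        .<String>begin("b", AfterMatchSkipStrategy.skipPastLastEvent())
        .where(new SimpleCondition<String>() {
            @Override
            public boolean filter(String s) {
                return s.startsWith("b");
            }
        })
        .oneOrMore()
        .followedBy("c")
        .where(new SimpleCondition<String>() {
            @Override
            public boolean filter(String s) {
                return s.equals("c");
            }
        });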

To do that, let us create a Flink stream (a Flink abstraction, not a real Kafka topic) that, for each event that should trigger an alert, emits an additional event marked PATTERN_END, with its event time delayed by 24 hours. This can be done with the code below:

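(The original post showed this step as a screenshot; below is a minimal sketch of how it could look, reusing the hypothetical DiskEvent type from earlier.)

import java.time.Duration;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;

DataStream<DiskEvent> withEndMarkers = events
        // filter first, so that only the alert candidates get duplicated
        .filter(e -> e.diskUsagePct > 90.0)
        .flatMap((FlatMapFunction<DiskEvent, DiskEvent>) (event, out) -> {
            // keep the original event...
            out.collect(event);
            // ...and emit a synthetic PATTERN_END copy 24 hours later in event time
            out.collect(new DiskEvent(
                    event.customer,
                    event.diskUsagePct,
                    true,
                    event.timestamp + Duration.ofHours(24).toMillis()));
        })
        // lambdas lose generic type information, so declare the output type explicitly
        .returns(DiskEvent.class);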

Please note that we try to avoid duplicating the whole stream. That is why we do the filtering first, so that only a fraction of the stream is duplicated. Another possible approach, when events are of significant size, is to create a new stream whose events contain only the fields necessary to trigger the alert.

Now we need to create a pattern that:

  • Uses the SKIP_PAST_LAST_EVENT strategy,
  • Starts with an element that is not a PATTERN_END event,
  • Requires the starting event to be followed by a PATTERN_END event,
  • Requires the event time difference between the first and last event in the pattern to be exactly 24 hours.

The event time difference is important when multiple alert events occur and it becomes hard to match a PATTERN_END event to its corresponding PATTERN_BEGIN event (see the picture below: the red PATTERN_BEGIN event and the yellow PATTERN_END event).

[Figure: a timeline of events showing how each PATTERN_END event is matched to its corresponding PATTERN_BEGIN event]

The requirements defined above can be achieved with the following code:

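(The original code was shown as a screenshot; this is a hedged reconstruction of a pattern meeting the four requirements above.)

import java.time.Duration;
import org.apache.flink.cep.nfa.aftermatch.AfterMatchSkipStrategy;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.IterativeCondition;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;

Pattern<DiskEvent, ?> backoffPattern = Pattern
        // 1. no new match may start while another match is still running
        .<DiskEvent>begin("start", AfterMatchSkipStrategy.skipPastLastEvent())
        // 2. the match starts with a regular event, never with a PATTERN_END marker
        .where(new SimpleCondition<DiskEvent>() {
            @Override
            public boolean filter(DiskEvent e) {
                return !e.patternEnd;
            }
        })
        // 3. ...followed (with relaxed contiguity) by a PATTERN_END event
        .followedBy("end")
        .where(new IterativeCondition<DiskEvent>() {
            @Override
            public boolean filter(DiskEvent end, Context<DiskEvent> ctx) throws Exception {
                if (!end.patternEnd) {
                    return false;
                }
                // 4. accept only the marker exactly 24 hours after the first event,
                // so each PATTERN_END is matched to its own PATTERN_BEGIN
                DiskEvent start = ctx.getEventsForPattern("start").iterator().next();
                return end.timestamp - start.timestamp == Duration.ofHours(24).toMillis();
            }
        });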

In the example above, event filtering is done within the DataStream API and not within the pattern definition. This allows us to keep the duplicated stream as small as possible. In other scenarios it may be useful to include the filtering within the pattern definition, as it offers a richer API for filtering across patterns of multiple events.

In many cases, it is desirable to work with multiple alert types. Imagine our event has a customer field and we want to get separate alerts for each customer. In this case, we need to apply a keyBy function to the stream. The code below puts all the pieces together.

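(Again a sketch: the alert payload is simplified to a String here, and withEndMarkers and backoffPattern come from the earlier snippets.)

import java.util.List;
import java.util.Map;
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternSelectFunction;
import org.apache.flink.streaming.api.datastream.DataStream;

DataStream<String> alerts = CEP
        // keying by customer gives every customer an independent pattern state,
        // i.e. an independent backoff window
        .pattern(withEndMarkers.keyBy(e -> e.customer), backoffPattern)
        .select(new PatternSelectFunction<DiskEvent, String>() {
            @Override
            public String select(Map<String, List<DiskEvent>> match) {
                // one alert per match, i.e. at most one alert per 24h per customer
                DiskEvent first = match.get("start").get(0);
                return "Low disk space alert for customer " + first.customer;
            }
        });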

Please do remember to make the Flink streams rely on event time:

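(A sketch using the WatermarkStrategy API available since Flink 1.11; the rawEvents source and the ten-second out-of-orderness bound are assumptions.)

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;

DataStream<DiskEvent> events = rawEvents.assignTimestampsAndWatermarks(
        WatermarkStrategy
                // tolerate events that arrive up to 10 seconds out of order
                .<DiskEvent>forBoundedOutOfOrderness(Duration.ofSeconds(10))
                // take the event time from the event itself, not from processing time
                .withTimestampAssigner((event, recordTimestamp) -> event.timestamp));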

During development, extra filtering had to be added to make sure the time interval between the first and last pattern elements is exactly 24 hours; this was introduced after a failing test. On the one hand, Flink CEP is a great API for solving complex problems with a minimal amount of code. On the other hand, it can be error prone. That is why tests and test-driven development should be your best friends when working with Complex Event Processing.

big data
analytics
apache flink
CEP
data discovery
flink
data stream platform
28 July 2021

