Pierre Zemb's Blog

Handling OVH's alerts with Apache Flink


This is a repost from OVH's official blog post. Thanks to Horacio Gonzalez for the awesome drawings!

πŸ”—Handling OVH's alerts with Apache Flink

OVH & Apache Flink

OVH relies extensively on metrics to effectively monitor its entire stack. Whether they are low-level or business-centric, they allow teams to gain insight into how our services operate on a daily basis. The need to store millions of datapoints per second led us to create a dedicated team to build and operate a product able to handle that load: the **Metrics Data Platform**. By relying on **Apache HBase, Apache Kafka and Warp 10**, we succeeded in creating a fully distributed platform that is handling all our metrics… and yours!

After building the platform to deal with all those metrics, our next challenge was to build one of the most needed features for Metrics: alerting.

πŸ”—Meet OMNI, our alerting layer

OMNI is our code name for a fully distributed, as-code alerting system that we developed on top of Metrics. It is split into components:

The query executor pushes the query results into Kafka, ready to be handled! We now need to perform all the tasks that an alerting system does:

To handle that, we looked at open-source projects such as Prometheus AlertManager and LinkedIn Iris, and we discovered the hidden truth:

Handling alerts as streams of data,
moving from one operator to another.

We embraced it, and decided to leverage Apache Flink to create Beacon. In the next section we are going to describe the architecture of Beacon, and how we built and operate it.

If you want some more information on Apache Flink, we suggest reading the introduction article on the official website: What is Apache Flink?

πŸ”—Beacon architecture

At its core, Beacon reads events from Kafka. Everything is represented as a message, from alerts to aggregation rules, snooze orders and so on. The pipeline is divided into two branches:

Then everything is merged to generate a notification, which is going to be forwarded to the right person. A notification message is pushed into Kafka, where it will be consumed by another component called beacon-notifier.
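To give an idea of what that wiring looks like, here is a minimal sketch of a Flink job reading from and writing to Kafka with the official connector. The topic names, the plain String schema and the class name are illustrative assumptions, not Beacon's actual code.

    import java.util.Properties;
    
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
    
    public class BeaconSketch {
        public static void main(String[] args) throws Exception {
            final StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
    
            Properties kafkaProps = new Properties();
            kafkaProps.setProperty("bootstrap.servers", "kafka:9092");
            kafkaProps.setProperty("group.id", "beacon");
    
            // Source: every incoming event (alert, aggregation rule, snooze order, ...)
            // arrives as a Kafka message. Topic name and String schema are illustrative.
            DataStream<String> events = env.addSource(
                    new FlinkKafkaConsumer011<>("metrics-alerts", new SimpleStringSchema(), kafkaProps));
    
            // ... the two branches of the pipeline would be built here ...
    
            // Sink: notifications are written back to Kafka for beacon-notifier to consume.
            events.addSink(
                    new FlinkKafkaProducer011<>("notifications", new SimpleStringSchema(), kafkaProps));
    
            env.execute("beacon");
        }
    }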

πŸ”—Handling States

If you are new to streaming architecture, I recommend reading Dataflow Programming Model from Flink's official documentation.

Everything is merged into a DataStream, partitioned (keyBy in the Flink API) by users. Here's an example:

    // the element type is shown here as "Alert" for readability
    final DataStream<Alert> alertStream =
    
      // Partitioning Stream per AlertIdentifier
      cleanedAlertsStream.keyBy(0)
      // Applying a Map Operation which is setting since when an alert is triggered
      .map(new SetSinceOnSelector())
      .name("setting-since-on-selector").uid("setting-since-on-selector")
    
      // Partitioning again Stream per AlertIdentifier
      .keyBy(0)
      // Applying another Map Operation which is setting State and Trend
      .map(new SetStateAndTrend())
      .name("setting-state").uid("setting-state");

In the example above, we are chaining two keyed operations: SetSinceOnSelector, which sets since when an alert has been triggered, and SetStateAndTrend, which sets its state and trend.
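As a rough illustration of the first operator, here is what a keyed map setting the "since" field could look like. The Alert type and its accessors are hypothetical, and this is not the real SetSinceOnSelector, just a sketch of the pattern.

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    
    // Illustrative only: "Alert" and its getters/setters are hypothetical types.
    public class SetSinceOnSelector extends RichMapFunction<Alert, Alert> {
    
        // Keyed state: one "since" timestamp per alert identifier (the key of the stream).
        private transient ValueState<Long> since;
    
        @Override
        public void open(Configuration parameters) {
            since = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("since", Long.class));
        }
    
        @Override
        public Alert map(Alert alert) throws Exception {
            if (alert.isTriggered()) {
                if (since.value() == null) {
                    since.update(alert.getTimestamp()); // first time this alert fires
                }
                alert.setSince(since.value());
            } else {
                since.clear(); // alert recovered, forget the trigger time
            }
            return alert;
        }
    }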

Each of these classes is under 120 lines of code because Flink handles all the difficulties for us. Most of the pipeline is composed only of classic transformations such as Map, FlatMap and Reduce, including their Rich and Keyed versions. We have a few Process Functions, which are very handy to develop, for example, the escalation timer.
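To illustrate why Process Functions are so handy for something like the escalation timer, here is a minimal sketch of a KeyedProcessFunction registering a processing-time timer. The Alert and Notification types, the delay and the escalation logic are assumptions, not Beacon's implementation.

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;
    
    // Illustrative only: "Alert" and "Notification" are hypothetical types.
    public class EscalationTimer extends KeyedProcessFunction<String, Alert, Notification> {
    
        private static final long ESCALATION_DELAY_MS = 15 * 60 * 1000; // e.g. 15 minutes
    
        private transient ValueState<Long> escalationTimer;
    
        @Override
        public void open(Configuration parameters) {
            escalationTimer = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("escalation-timer", Long.class));
        }
    
        @Override
        public void processElement(Alert alert, Context ctx, Collector<Notification> out) throws Exception {
            if (alert.isTriggered() && escalationTimer.value() == null) {
                // Alert is firing and no escalation is scheduled yet: schedule one.
                long when = ctx.timerService().currentProcessingTime() + ESCALATION_DELAY_MS;
                ctx.timerService().registerProcessingTimeTimer(when);
                escalationTimer.update(when);
            } else if (!alert.isTriggered() && escalationTimer.value() != null) {
                // Alert recovered: cancel the pending escalation.
                ctx.timerService().deleteProcessingTimeTimer(escalationTimer.value());
                escalationTimer.clear();
            }
        }
    
        @Override
        public void onTimer(long timestamp, OnTimerContext ctx, Collector<Notification> out) throws Exception {
            // The alert is still firing after the delay: escalate for this key.
            out.collect(Notification.escalate(ctx.getCurrentKey()));
            escalationTimer.clear();
        }
    }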

πŸ”—Integration tests

As the number of classes was growing, we needed to test our pipeline. Because it is only wired to Kafka, we wrapped the consumer and producer to create what we call **scenari**: a series of integration tests running different scenarios.
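A scenario could look roughly like the following sketch, which injects an alert message and waits for a notification using plain Kafka clients. The topic names, the JSON payload and the serializer settings are assumptions, not our actual test harness.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.junit.Assert;
    import org.junit.Test;
    
    public class AlertScenarioTest {
    
        @Test
        public void triggeredAlertShouldProduceANotification() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "scenario");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    
            // Given: an alert message injected on the topic the job reads from.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("metrics-alerts",
                        "{\"selector\":\"cpu-too-high\",\"state\":\"ALERT\"}"));
            }
    
            // Then: the pipeline should publish a notification on its output topic.
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("notifications"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(30));
                Assert.assertFalse("expected at least one notification", records.isEmpty());
            }
        }
    }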

πŸ”—Queryable state

One killer feature of Apache Flink is the capability of **querying the internal state of an operator**. Even if it is a beta feature, it allows us to get the current state of the different parts of the job:

Queryable state overview

Thanks to this, we easily developed an API over the queryable state, that is powering our alerting view in Metrics Studio, our codename for the Web UI of the Metrics Data Platform.
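For reference, here is a minimal sketch of querying a value state from outside the job with Flink's QueryableStateClient, assuming a state registered under the name "alert-state" via setQueryable() on its descriptor; the host, port, key and types are illustrative.

    import java.util.concurrent.CompletableFuture;
    
    import org.apache.flink.api.common.JobID;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
    import org.apache.flink.queryablestate.client.QueryableStateClient;
    
    public class AlertStateQuery {
    
        public static void main(String[] args) throws Exception {
            // The client talks to the queryable state proxy running on the task managers.
            QueryableStateClient client = new QueryableStateClient("flink-taskmanager", 9069);
    
            ValueStateDescriptor<String> descriptor =
                    new ValueStateDescriptor<>("alert-state", String.class);
    
            // Ask the running job for the current state of one alert (key = its identifier).
            CompletableFuture<ValueState<String>> future = client.getKvState(
                    JobID.fromHexString(args[0]),   // the Flink job id
                    "alert-state",                  // name registered with setQueryable(...)
                    "cpu-too-high",                 // the key we are interested in
                    BasicTypeInfo.STRING_TYPE_INFO,
                    descriptor);
    
            System.out.println("current state: " + future.get().value());
            client.shutdownAndWait();
        }
    }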

We deployed the latest version of Flink (1.7.1 at the time of writing) directly on bare-metal servers, with a dedicated ZooKeeper cluster, using Ansible. Operating Flink has been a really nice surprise for us, with clear documentation and configuration, and impressive resilience. We are capable of rebooting the whole Flink cluster, and the job restarts at its last saved state, as if nothing happened.

We are using RocksDB as a state backend, backed by OpenStack Swift storage provided by OVH Public Cloud.
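Configured in code, this setup could look like the following sketch; the Swift container and checkpoint interval are placeholders, and in practice the same can be set cluster-wide in flink-conf.yaml instead.

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    
    public class StateBackendSetup {
    
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    
            // RocksDB keeps the working state on local disk and checkpoints it to
            // OpenStack Swift (through Flink's swift:// filesystem support).
            // The container and path below are placeholders.
            env.setStateBackend(new RocksDBStateBackend("swift://beacon-checkpoints.provider/flink", true));
    
            // Take a checkpoint regularly so the job can restart from its last saved state.
            env.enableCheckpointing(60_000);
        }
    }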

For monitoring, we are relying on Prometheus Exporter, with Beamium, to gain observability over the job's health.

If you are used to working with stream-related software, you may have realized that we did not use any rocket science or tricks. We may be relying only on basic streaming features offered by Apache Flink, but they have allowed us to tackle many business and scalability problems with ease.

Apache Flink

As such, we highly recommend that all developers have a look at Apache Flink. I encourage you to go through the Apache Flink Training, written by Data Artisans. Furthermore, the community has put a lot of effort into easily deploying Apache Flink on Kubernetes, so you can easily try Flink using our Managed Kubernetes!

Tags: #stream #flink