Alerting docs: Update `Alert rules` intro and `Introduction` pages (#84772)

pull/83368/head
Pepe Cano 1 year ago committed by GitHub
parent 3c359376e1
commit 856e410480
  1. docs/sources/alerting/fundamentals/_index.md (93)
  2. docs/sources/alerting/fundamentals/alert-rules/_index.md (77)
  3. docs/sources/alerting/fundamentals/alert-rules/annotation-label.md (2)
  4. docs/sources/alerting/fundamentals/alert-rules/state-and-health.md (12)
  5. docs/sources/alerting/fundamentals/notifications/alertmanager.md (2)

@@ -26,7 +26,7 @@ The following diagram gives you an overview of Grafana Alerting and introduces y
- Grafana Alerting periodically queries data sources and evaluates the condition defined in the alert rule
- If the condition is breached, an alert instance fires
- Firing and resolved alert instances are routed to notification policies based on matching labels
- Notifications are sent out to the contact points specified in the notification policy
## Fundamentals
@@ -35,72 +35,60 @@ The following concepts are key to your understanding of how Grafana Alerting wor
### Alert rules
An [alert rule][alert-rules] consists of one or more queries and expressions that select the data you want to measure. It also contains a condition, which is the threshold that an alert rule must meet or exceed in order to fire.
Add labels to uniquely identify your alert rule and configure alert routing. Labels link alert rules to notification policies, so you can easily manage which policy should handle which alerts and who gets notified.
Once alert rules are created, they go through various states and transitions.
### Alert instances
Each alert rule can produce multiple alert instances (also known as alerts) - one alert instance for each time series. This is exceptionally powerful as it allows you to observe multiple series in a single expression.
```promql
sum by(cpu) (
  rate(node_cpu_seconds_total{mode!="idle"}[1m])
)
```
After the first evaluation, a rule using the PromQL expression above creates one alert instance per observed CPU, enabling a single rule to report the status of each CPU.
{{< figure src="/static/img/docs/alerting/unified/multi-dimensional-alert.png" caption="Multiple alert instances from a single alert rule" >}}
[Alert rules are frequently evaluated][alert-rule-evaluation] and the state of their alert instances is updated accordingly. Only alert instances that are in a firing or resolved state are routed to notification policies to be handled.
### Notification policies
[Notification policies][notification-policies] group alerts and then route them to contact points. They determine when notifications are sent, and how often notifications should be repeated.
Alert instances are matched to notification policies using label matchers. This provides a flexible way to organize and route alerts to different receivers.
Each policy consists of a set of label matchers (0 or more) that specify which alert instances (identified by their labels) they handle. Notification policies are defined as a tree structure where the root of the notification policy tree is called the **Default notification policy**. Each policy can have child policies.
{{< figure src="/media/docs/alerting/notification-routing.png" max-width="750px" caption="Notification policy routing" >}}
### Contact points
[Contact points][contact-points] determine where notifications are sent. For example, you might have a contact point that sends notifications to an email address, to Slack, to an incident management system (IRM) such as Grafana OnCall or Pagerduty, or to a webhook.
Notifications sent from contact points are customizable with notification templates, which can be shared between contact points.
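As a sketch, a contact point might be provisioned from a file as follows, assuming the same file provisioning format; the channel name and webhook URL are placeholders.

```yaml
# A sketch of a provisioned Slack contact point (placeholder values).
apiVersion: 1
contactPoints:
  - orgId: 1
    name: backend-slack
    receivers:
      - uid: backend-slack-01
        type: slack
        settings:
          recipient: "#backend-alerts"
          url: https://hooks.slack.com/services/placeholder
```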
### Silences and mute timings
[Silences][silences] and [mute timings][mute-timings] allow you to pause notifications for specific alerts or even entire notification policies. Use a silence to pause notifications on an ad-hoc basis, such as during a maintenance window; and use mute timings to pause notifications at regular intervals, such as evenings and weekends.
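For example, a mute timing that pauses notifications every weekend might be provisioned as in this sketch (assuming the file provisioning format; the name is illustrative).

```yaml
# A sketch of a recurring mute timing for weekends.
apiVersion: 1
muteTimes:
  - orgId: 1
    name: weekends
    time_intervals:
      - weekdays: ["saturday", "sunday"]
```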
### Architecture
Grafana Alerting is built on the Prometheus model of designing alerting systems.
Prometheus-based alerting systems have two main components:
- An alert generator that evaluates alert rules and sends firing and resolved alerts to the alert receiver.
- An alert receiver (also known as Alertmanager) that receives the alerts and is responsible for handling them and sending their notifications.
Grafana doesn’t use Prometheus as its default alert generator because Grafana Alerting needs to work with many other data sources in addition to Prometheus.
However, Grafana can also use Prometheus as an alert generator as well as external Alertmanagers. For more information about how to use distinct alerting systems, refer to the [Grafana alert rule types][alert-rules].
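As a sketch, an external Alertmanager can be added to Grafana as a data source; assuming data source file provisioning, it might look like this (the URL is a placeholder).

```yaml
# A sketch of provisioning an external Alertmanager data source.
apiVersion: 1
datasources:
  - name: external-alertmanager
    type: alertmanager
    access: proxy
    url: http://alertmanager.example.com:9093
    jsonData:
      implementation: prometheus
      # Also forward Grafana-managed alerts to this Alertmanager.
      handleGrafanaManagedAlerts: true
```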
## Design your Alerting system
@@ -134,15 +122,26 @@ Here are some tips on how to create an effective alert management set up for you
- Think carefully about priority and severity levels.
- Continually review your thresholds and evaluation rules.
{{% docs/reference %}}
[alert-rules]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules"
[alert-rules]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules"
[contact-points]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notifications/contact-points"
[contact-points]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/notifications/contact-points"
[silences]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/configure-notifications/create-silence"
[silences]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/configure-notifications/create-silence"
[mute-timings]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/configure-notifications/mute-timings"
[mute-timings]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/configure-notifications/mute-timings"
[alertmanager]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notifications/alertmanager"
[alertmanager]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/notifications/alertmanager"
[alert-rule-evaluation]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/rule-evaluation"
[alert-rule-evaluation]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/rule-evaluation"
[external-alertmanagers]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/configure-alertmanager"
[external-alertmanagers]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/set-up/configure-alertmanager"
[notification-policies]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notifications/notification-policies"
[notification-policies]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/notifications/notification-policies"
{{% /docs/reference %}}

@@ -21,31 +21,28 @@ weight: 100
# Alert rules
An alert rule is a set of evaluation criteria for when an alert rule should fire. An alert rule consists of:
- Queries and expressions that select the data set to evaluate.
- A condition (the threshold) that the query must meet or exceed to trigger the alert instance.
- An interval that specifies the frequency of [alert rule evaluation][alert-rule-evaluation] and a duration indicating how long the condition must be met to trigger the alert instance.
- Other customizable options, for example, setting what should happen in the absence of data, notification messages, and more.
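To make these components concrete, the following sketch uses Prometheus-style rule syntax, which data source-managed rules are based on; the metric, threshold, and durations are illustrative, and Grafana-specific options such as no-data handling are not shown.

```yaml
groups:
  - name: cpu-rules
    interval: 1m # how frequently the rules in this group are evaluated
    rules:
      - alert: HighCpuUsage
        # Query plus condition: fire when non-idle CPU exceeds 80%.
        expr: avg(rate(node_cpu_seconds_total{mode!="idle"}[5m])) > 0.8
        for: 5m # how long the condition must be met before firing
        labels:
          severity: warning
        annotations:
          summary: CPU usage is above 80%
```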
Grafana supports two different alert rule types: [Grafana-managed alert rules](#grafana-managed-alert-rules) and [Data source-managed alert rules](#data-source-managed-alert-rules).
## Grafana-managed alert rules
Grafana-managed alert rules are the most flexible alert rule type. They allow you to create alerts that can act on data from any of our [supported data sources](#supported-data-sources), and use multiple data sources in a single alert rule.
Additionally, you can also add [expressions to transform your data][expression-queries], set custom alert conditions, and include [images in alert notifications][notification-images].

The following diagram shows how Grafana-managed alerting works.

{{< figure src="/media/docs/alerting/grafana-managed-alerting-architecture.png" max-width="750px" caption="How Grafana-managed alerting works by default" >}}
1. Alert rules are created within Grafana based on one or more data sources.
1. Alert rules are evaluated by the Alert Rule Evaluation Engine from within Grafana.
1. Firing and resolved alert instances are delivered to the internal Grafana [Alertmanager][alert-manager] which handles notifications.
### Supported data sources
@@ -56,47 +53,26 @@ The following data sources are supported:
- [Enterprise data source plugins](/grafana/plugins/data-source-plugins/?enterprise=1) and others maintained by Grafana such as [AWS Athena](/grafana/plugins/grafana-athena-datasource/), [AWS X-Ray](/grafana/plugins/grafana-x-ray-datasource/), [AWS Redshift](/grafana/plugins/grafana-redshift-datasource/), [AWS Timestream](/grafana/plugins/grafana-timestream-datasource/), [AWS IoT SiteWise](/grafana/plugins/grafana-iot-sitewise-datasource/), [Azure Data Explorer](/grafana/plugins/grafana-azure-data-explorer-datasource/), [Azure Monitor](/grafana/plugins/grafana-azure-monitor-datasource/), [ClickHouse](/grafana/plugins/grafana-clickhouse-datasource/), [Cloudwatch](/grafana/plugins/cloudwatch/), [CSV](/grafana/plugins/marcusolsson-csv-datasource/), [Elasticsearch](/grafana/plugins/elasticsearch/), [Falcon LogScale](/grafana/plugins/grafana-falconlogscale-datasource/), [GitHub](/grafana/plugins/grafana-github-datasource/), [Google BigQuery](/grafana/plugins/grafana-bigquery-datasource/), [Google Cloud Monitoring](/grafana/plugins/stackdriver/), [Graphite](/grafana/plugins/graphite/), [Loki](/grafana/plugins/loki/), [InfluxDB](/grafana/plugins/influxdb/), [Infinity](/grafana/plugins/yesoreyeram-infinity-datasource/), [MSSQL](/grafana/plugins/mssql/), [MySQL](/grafana/plugins/mysql/), [OpenSearch](/grafana/plugins/grafana-opensearch-datasource/), [OpenTSDB](/grafana/plugins/opentsdb/), [Oracle](/grafana/plugins/grafana-oracle-datasource/), [Orbit](/grafana/plugins/grafana-orbit-datasource/), [PostgreSQL](/grafana/plugins/postgres/), [Prometheus](/grafana/plugins/prometheus/), [Sentry](/grafana/plugins/grafana-sentry-datasource/), [SurrealDB](/grafana/plugins/grafana-surrealdb-datasource/), and [TestData](/grafana/plugins/grafana-testdata-datasource/).
- Backend data sources maintained by the [community](/grafana/plugins/data-source-plugins/?signature=community) and [partners](/grafana/plugins/data-source-plugins/?signature=commercial) that enable alerting.
## Data source-managed alert rules
Data source-managed alert rules can improve query performance via [recording rules](#recording-rules) and ensure high-availability and fault tolerance when implementing a distributed architecture.

They are only supported for Prometheus-based or Loki data sources with the Ruler API enabled. For more information, refer to the [Loki Ruler API](/docs/loki/<GRAFANA_VERSION>/api/#ruler) or [Mimir Ruler API](/docs/mimir/<GRAFANA_VERSION>/references/http-api/#ruler).
The following diagram shows how data source-managed alerting works.
{{< figure src="/media/docs/alerting/loki-mimir-rule.png" max-width="750px" caption="Grafana Mimir/Loki-managed alerting" >}}
{{< figure src="/media/docs/alerting/mimir-managed-alerting-architecture-v2.png" max-width="750px" caption="Mimir-managed alerting architecture" >}}
1. Alert rules are created and stored within the data source itself.
1. Alert rules can only query Prometheus-based data. They can use either queries or [recording rules](#recording-rules).
1. Alert rules are evaluated by the Alert Rule Evaluation Engine.
1. Alert rule evaluation and delivery is distributed across multiple nodes for high availability and fault tolerance.
1. Firing and resolved alert instances are delivered to the configured [Alertmanager][alert-manager] which handles notifications.
### Recording rules
A recording rule allows you to pre-compute frequently needed or computationally expensive expressions and save their result as a new set of time series. This is useful if you want to run alerts on aggregated data or if you have dashboards that query computationally expensive expressions repeatedly.
Querying this new time series is faster, especially for dashboards since they query the same expression every time the dashboards refresh. For more information, refer to [Create recording rules][create-recording-rules].
Alternatively, Grafana Enterprise and Grafana Cloud offer [recorded queries][recorded-queries] that can be executed against any data source.
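For example, a Prometheus-style recording rule that pre-computes a per-instance CPU rate might look like the following sketch; the rule and series names are illustrative.

```yaml
groups:
  - name: node-recording-rules
    interval: 1m
    rules:
      # Save the pre-computed expression as a new, cheaper-to-query series.
      - record: instance:node_cpu_busy:rate1m
        expr: sum by(instance) (rate(node_cpu_seconds_total{mode!="idle"}[1m]))
```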
## Comparison between alert rule types
@@ -104,7 +80,7 @@ When choosing which alert rule type to use, consider the following comparison be
| <div style="width:200px">Feature</div> | <div style="width:200px">Grafana-managed alert rule</div> | <div style="width:200px">Data source-managed alert rule |
| ------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Create alert rules<wbr /> based on data from any of our supported data sources | Yes | No. You can only create alert rules based on Prometheus-based data. |
| Mix and match data sources | Yes | No |
| Includes support for recording rules | No | Yes |
| Add expressions to transform<wbr /> your data and set alert conditions | Yes | No |
@@ -112,19 +88,26 @@ When choosing which alert rule type to use, consider the following comparison be
| Scaling | More resource intensive: they depend on the database and are likely to suffer from transient errors. They only scale vertically. | Store alert rules within the data source itself and allow for “infinite” scaling. Generate and send alert notifications from the location of your data. |
| Alert rule evaluation and delivery | Alert rule evaluation and delivery is done from within Grafana, using an external Alertmanager; or both. | Alert rule evaluation and alert delivery is distributed, meaning there is no single point of failure. |
**Note:**
If you are using non-Prometheus data, we recommend choosing Grafana-managed alert rules. Otherwise, choose Grafana Mimir or Grafana Loki alert rules where possible.
{{% docs/reference %}}
[alert-manager]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notifications/alertmanager"
[alert-manager]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/notifications/alertmanager"
[create-recording-rules]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-mimir-loki-managed-recording-rule"
[create-recording-rules]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-mimir-loki-managed-recording-rule"
[alert-rule-evaluation]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/rule-evaluation"
[alert-rule-evaluation]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/rule-evaluation"
[expression-queries]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/queries-conditions#expression-queries"
[expression-queries]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/queries-conditions#expression-queries"
[queries-and-conditions]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/queries-conditions"
[queries-and-conditions]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/queries-conditions"
[notification-images]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/configure-notifications/template-notifications/images-in-notifications"
[notification-images]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/configure-notifications/template-notifications/images-in-notifications"
[recorded-queries]: "/docs/ -> /docs/grafana/<GRAFANA_VERSION>/administration/recorded-queries"
{{% /docs/reference %}}

@@ -39,6 +39,8 @@ Labels are a fundamental component of alerting:
- Contact points can access labels to send notification messages that contain specific alert information.
- The Alertmanager uses labels to match alerts for silences and alert groups in notification policies.
Note that two alert rules cannot have the same labels. If two alert rules both have the same labels, such as `foo=bar,bar=baz`, then one of the alerts will be discarded.
### How label matching works
Use labels and label matchers to link alert rules to notification policies and silences. This allows for a flexible way to manage your alert instances, specify which policy should handle them, and which alerts to silence.
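As a sketch, matchers in a provisioned notification policy are written as label, operator, and value triples; the contact point and labels here are hypothetical.

```yaml
# Hypothetical matchers in a provisioned notification policy route:
# "=" is an exact match, "=~" a regular expression match.
apiVersion: 1
policies:
  - orgId: 1
    receiver: backend-slack
    object_matchers:
      - ["team", "=", "backend"]
      - ["severity", "=~", "critical|warning"]
```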

@@ -29,11 +29,13 @@ There are three key components: [alert rule state](#alert-rule-state), [alert in
An alert rule can be in any of the following states:
| State       | Description                                                                                         |
| ----------- | --------------------------------------------------------------------------------------------------- |
| **Normal**  | None of the alert instances returned by the evaluation engine is in a `Pending` or `Firing` state.   |
| **Pending** | At least one alert instance returned by the evaluation engine is `Pending`.                          |
| **Firing**  | At least one alert instance returned by the evaluation engine is `Firing`.                           |
The alert rule state is determined by the “worst case” state of the alert instances produced. For example, if one alert instance is firing, the alert rule state will also be firing.
{{% admonition type="note" %}}
Alerts will transition first to `pending` and then `firing`, so it takes at least two evaluation cycles before an alert is fired.

@@ -18,7 +18,7 @@ weight: 111
Grafana sends firing and resolved alerts to Alertmanagers. The Alertmanager receives alerts, handles silencing, inhibition, grouping, and routing by sending notifications out via your channel of choice, for example, email or Slack.
Grafana has its own Alertmanager, referred to as "Grafana" in the user interface, but also supports sending alerts to other Alertmanagers, such as the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/). You can use both internal and external Alertmanagers.
The Grafana Alertmanager uses notification policies and contact points to configure how and where a notification is sent; how often a notification should be sent; and whether alerts should all be sent in the same notification, sent in grouped notifications based on a set of labels, or as separate notifications.
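As a sketch, grouping behavior is configured per notification policy; assuming the file provisioning format, the relevant options look like this (values are illustrative).

```yaml
# A sketch of grouping and timing options on the default policy.
apiVersion: 1
policies:
  - orgId: 1
    receiver: default-email
    group_by: ["alertname", "team"] # one notification per unique label combination
    group_wait: 30s # wait before sending the first notification for a new group
    group_interval: 5m # wait before notifying about new alerts added to a group
    repeat_interval: 4h # how often to re-send notifications for unresolved alerts
```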
