docs(alerting): simplify Intro to Grafana Alerting docs (#106944)

* docs(alerting): improve `Intro > Alert rule evaluation` docs

* Update Introduction to Grafana Alerting

* Simplify `Intro > Alert rules` and related docs

* minor copy change phrasing GMA and DS differences

* fix vale error
Pepe Cano 23 hours ago committed by GitHub
parent f8189eece9
commit 5c1b263664
docs/sources/alerting/_index.md | 8
docs/sources/alerting/alerting-rules/_index.md | 24
docs/sources/alerting/alerting-rules/alerting-migration.md | 2
docs/sources/alerting/alerting-rules/create-data-source-managed-rule.md | 157
docs/sources/alerting/alerting-rules/create-grafana-managed-rule.md | 47
docs/sources/alerting/alerting-rules/create-recording-rules/_index.md | 2
docs/sources/alerting/alerting-rules/create-recording-rules/create-grafana-managed-recording-rules.md | 4
docs/sources/alerting/alerting-rules/link-alert-rules-to-panels.md | 2
docs/sources/alerting/best-practices/_index.md | 28
docs/sources/alerting/best-practices/connectivity-errors.md | 8
docs/sources/alerting/best-practices/missing-data.md | 12
docs/sources/alerting/fundamentals/_index.md | 92
docs/sources/alerting/fundamentals/alert-rule-evaluation/_index.md | 112
docs/sources/alerting/fundamentals/alert-rule-evaluation/alert-rule-state-and-health.md | 58
docs/sources/alerting/fundamentals/alert-rule-evaluation/evaluation-within-a-group.md | 37
docs/sources/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states.md | 81
docs/sources/alerting/fundamentals/alert-rule-evaluation/stale-alert-instances.md | 6
docs/sources/alerting/fundamentals/alert-rules/_index.md | 90
docs/sources/alerting/fundamentals/notifications/_index.md | 14
docs/sources/alerting/monitor-status/view-alert-state.md | 12
docs/sources/shared/alerts/configure-alert-rule-name.md | 2
docs/sources/shared/alerts/configure-notification-message.md | 2
docs/sources/shared/alerts/note-prometheus-ds-rules.md | 10

@@ -35,13 +35,17 @@ cards:
href: ./configure-notifications/
description: Choose how, when, and where to send your alert notifications.
height: 24
- title: Monitor status
- title: Monitor alerts
href: ./monitor-status/
description: Monitor, respond to, and triage issues within your services.
height: 24
- title: Additional configuration
href: ./set-up/
description: Use advanced configuration options to further tailor your alerting setup. These options can enhance security, scalability, and automation in complex environments.
description: Use advanced configuration to customize your alerting setup and improve security, scalability, and automation in complex environments.
height: 24
- title: Best practices
href: ./best-practices/
description: Get practical guidance for handling common alert issues, and explore examples for creating both basic and advanced alerts.
height: 24
---

@@ -33,16 +33,16 @@ refs:
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-recording-rules/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-recording-rules/
alert-types-comparison-table:
import-to-grafana-managed:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/#comparison-between-alert-rule-types
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/alerting-migration/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/#comparison-between-alert-rule-types
templating-labels-annotations:
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/alerting-migration/
comparison-ds-grafana-rules:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/templates/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-data-source-managed-rule/#comparison-with-grafana-managed-rules
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/templates/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-data-source-managed-rule/#comparison-with-grafana-managed-rules
---
# Configure alert rules
@@ -53,18 +53,14 @@ An alert rule consists of one or more queries and expressions that select the da
Grafana supports two types of alert rules:
1. Grafana-managed alert rules: These can query multiple data sources.
1. **Grafana-managed alert rules** — the recommended option. They can query backend data sources—including Prometheus-based ones—and offer a [richer feature set](ref:comparison-ds-grafana-rules).
1. **Data source-managed alert rules** — supported for Prometheus-based data sources (such as Mimir, Loki, and Prometheus), with rules stored in the data source itself.
1. Data source-managed alert rules: These can only query Prometheus-based data sources and support horizontal scaling.
We recommend using Grafana-managed alert rules whenever possible, and opting for data source-managed alert rules when horizontal scaling is required. Refer to the [comparison table of alert rule types](ref:alert-types-comparison-table) for a more detailed overview.
You can [convert and import data source-managed rules into Grafana-managed rules](ref:import-to-grafana-managed) to let Grafana Alerting manage them.
Both types of alert rules can be configured in Grafana using the **+ New alert rule** flow. For step-by-step instructions, refer to:
- [Configure Grafana-managed alert rules](ref:configure-grafana-alerts)
- [Configure data source-managed alert rules](ref:configure-ds-alerts)
- [Create and link alert rules to panels](ref:templating-labels-annotations)
Alert rules can also query metrics generated by recording rules. To learn more, refer to:
- [Create recording rules](ref:recording-rules)
In Grafana Alerting, you can also [configure recording rules](ref:recording-rules), which pre-compute queries and save the results as new time series metrics for use in other alert rules or dashboard queries.
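For example, a Prometheus-style recording rule pre-computes an expression and stores the result under a new metric name that alert rules and dashboards can then query. The following is an illustrative sketch only; the group, metric name, and expression are examples, not part of any default configuration:

```yaml
groups:
  - name: precomputed-metrics
    interval: 1m
    rules:
      # Pre-compute per-instance CPU utilization every minute and store it
      # as a new series that alert rules or dashboard queries can reuse.
      - record: instance:node_cpu_utilisation:rate5m
        expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))
```

Grafana-managed recording rules are configured through the UI or provisioning rather than a rule file, but the underlying idea is the same.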

@@ -10,7 +10,7 @@ labels:
- oss
title: Import data source-managed rules to Grafana-managed rules
menuTitle: Import to Grafana-managed rules
weight: 450
weight: 300
refs:
configure-grafana-rule_query_offset:
- pattern: /docs/

@@ -18,9 +18,9 @@ labels:
- enterprise
- oss
title: Configure data source-managed alert rules
weight: 200
weight: 400
refs:
shared-configure-prometheus-data-source-alerting:
configure-prometheus-data-source-alerting:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/prometheus/configure/
- pattern: /docs/grafana-cloud/
@@ -29,7 +29,12 @@ refs:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-grafana-managed-rule/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-grafana-managed-rule/notification-policies/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-grafana-managed-rule/
supported-data-sources-grafana-rules:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-grafana-managed-rule/#supported-data-sources
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-grafana-managed-rule/#supported-data-sources
notification-policies:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notifications/notification-policies/
@@ -75,29 +80,113 @@ refs:
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/link-alert-rules-to-panels/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/link-alert-rules-to-panels/
import-to-grafana-rules:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/alerting-migration/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/alerting-migration/
create-recording-rules:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-recording-rules/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-recording-rules/
rbac:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/configure-rbac/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/set-up/configure-rbac/
expression-queries:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/queries-conditions/#advanced-options-expressions
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/queries-conditions/#advanced-options-expressions
alert-condition:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/queries-conditions/#alert-condition
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/queries-conditions/#alert-condition
notification-images:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/configure-notifications/template-notifications/images-in-notifications/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/configure-notifications/template-notifications/images-in-notifications/
view-alert-state-history:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/monitor-status/view-alert-state-history/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/monitor-status/view-alert-state-history/
view-compare-and-restore-alert-rules-versions:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/monitor-status/view-alert-rules/#view-compare-and-restore-alert-rules-versions
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/monitor-status/view-alert-rules/#view-compare-and-restore-alert-rules-versions
th-provisioning:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/terraform-provisioning/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/set-up/provision-alerting-resources/terraform-provisioning/
no-data-error-states:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/
stale-alert-instances:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/stale-alert-instances/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/stale-alert-instances/
---
# Configure data source-managed alert rules
Data source-managed alert rules can only be created using Grafana Mimir or Grafana Loki data sources.
Data source-managed alert rules are alert rules that are stored in the data source, such as in Prometheus or Loki, rather than in Grafana.
The rules are stored within the data source. In a distributed architecture, they can scale horizontally to provide high-availability. For more details, refer to [alert rule types](ref:alert-rules).
In Grafana Alerting, you can:
We recommend using [Grafana-managed alert rules](ref:configure-grafana-managed-rules) whenever possible and opting for data source-managed alert rules when scaling your alerting setup is necessary.
1. Create and edit data source-managed rules for Grafana Mimir and Grafana Loki data sources.
1. View rules from Prometheus data sources when [Manage alerts via Alerting UI](ref:configure-prometheus-data-source-alerting) is enabled. However, you cannot create or edit these rules in Grafana.
1. [Import data source-managed rules](ref:import-to-grafana-rules) from Loki, Mimir, and Prometheus into Grafana-managed rules.
> Rules from a Prometheus data source appear in the **Data source-managed** section of the **Alert rules** page when [Manage alerts via Alerting UI](ref:shared-configure-prometheus-data-source-alerting) is enabled.
>
> However, Grafana can only create and edit data source-managed rules for Mimir and Loki, not for a Prometheus instance.
{{< admonition type="note" >}}
Data source-managed rules are supported for horizontal scalability, but they can introduce more operational complexity than Grafana-managed alert rules.
We recommend using [Grafana-managed alert rules](ref:configure-grafana-managed-rules) whenever possible, as they provide a richer feature set and better integration with the full Grafana Alerting workflow.
{{< /admonition >}}
## Comparison with Grafana-managed rules
The table below compares Grafana-managed and data source-managed alert rules.
| <div style="width:200px">Feature</div> | <div style="width:200px">Grafana-managed alert rule</div> | <div style="width:200px">Data source-managed alert rule</div> |
| ----------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------- | --------------------------------------------------------------------------------- |
| Supported data sources | All backend data sources enabling the [`alerting` option](ref:supported-data-sources-grafana-rules) | Only supports creating rules for Mimir and Loki data sources |
| Mix and match data sources | Yes | No |
| Add [expressions](ref:expression-queries) to transform<wbr /> your data and set [alert conditions](ref:alert-condition) | Yes | No |
| [No data and error states](ref:no-data-error-states) | Yes | No |
| [Stale alert instances](ref:stale-alert-instances) | Yes | No |
| [Images in alert notifications](ref:notification-images) | Yes | No |
| [Role-based access control](ref:rbac) | Yes | No |
| [Alert state history](ref:view-alert-state-history) | Yes | No |
| [Alert version history](ref:view-compare-and-restore-alert-rules-versions) | Yes | No |
| [Terraform provisioning](ref:th-provisioning) | Yes | No |
| [Recording rules](ref:create-recording-rules) | Yes | Yes |
| Organization | Organize and manage access with folders | Use namespaces |
| Alert rule evaluation | Alert evaluation is done in Grafana | Alert rule evaluation is done in the data source and allows for horizontal scaling |
| Scaling | Alert rules are stored in the Grafana database | Alert rules are stored within the data source and allow for horizontal scaling |
[//]: <> ({{< docs/shared lookup="alerts/note-prometheus-ds-rules.md" source="grafana" version="<GRAFANA_VERSION>" >}})
The following diagram shows the architecture of a Mimir setup that uses data source-managed alert rules.
{{< figure src="/media/docs/alerting/mimir-managed-alerting-architecture-v2.png" max-width="750px" alt="Data source-managed alerting architecture based on Grafana Mimir" >}}
## Create data source-managed alert rules
To create or edit data source-managed alert rules, follow these instructions.
## Before you begin
### Before you begin
Verify that you have write permission to the Mimir or Loki data source. Otherwise, you cannot create or update data source-managed alert rules.
### Enable the Ruler API
#### Enable the Ruler API
For more information, refer to the [Mimir Ruler API](/docs/mimir/latest/references/http-api/#ruler) or [Loki Ruler API](/docs/loki/latest/api/#ruler).
@@ -105,13 +194,13 @@ For more information, refer to the [Mimir Ruler API](/docs/mimir/latest/referenc
- **Loki** - The `local` rule storage type, default for the Loki data source, supports only viewing of rules. To edit rules, configure one of the other rule storage types.
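As a rough illustration, the following sketch shows a Loki configuration block that enables the ruler API with local rule storage. The paths and Alertmanager URL are placeholders, and available options vary by Loki version; check the Loki Ruler documentation for the exact schema:

```yaml
ruler:
  enable_api: true                   # expose the ruler API so rules can be managed remotely
  alertmanager_url: http://localhost:9093
  rule_path: /tmp/loki/rules-temp    # scratch directory used during rule evaluation
  storage:
    type: local
    local:
      directory: /loki/rules         # where rule groups are stored
```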
### Permissions
#### Permissions
Alert rules for Mimir or Loki instances can be edited or deleted by users with **Editor** or **Admin** roles.
If you do not want to manage alert rules for a particular data source, go to its settings and clear the **Manage alerts via Alerting UI** checkbox.
### Provisioning
#### Provisioning
Note that if you delete an alert resource created in the UI, you can no longer retrieve it.
@@ -119,9 +208,11 @@ To backup and manage alert rules, you can [provision alerting resources](ref:sha
[//]: <> ({{< docs/shared lookup="alerts/configure-provisioning-before-begin.md" source="grafana" version="<GRAFANA_VERSION>" >}})
### Set alert rule name
{{< docs/shared lookup="alerts/configure-alert-rule-name.md" source="grafana" version="<GRAFANA_VERSION>" >}}
## Define query and condition
### Define query and condition
Define a query to get the data you want to measure and a condition that needs to be met before an alert rule fires.
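For data source-managed rules, the query and the alert condition are combined in a single PromQL or LogQL expression. As an illustrative sketch (the labels and threshold are made up), the following Loki-style rule fires when the checkout app logs more than ten errors per second over five minutes:

```yaml
groups:
  - name: checkout-logs
    rules:
      - alert: CheckoutErrorBurst
        # LogQL query and condition in one expression: rate of log lines
        # containing "error" for the checkout app, compared to a threshold.
        expr: 'sum(rate({app="checkout"} |= "error" [5m])) > 10'
        for: 5m
        labels:
          severity: critical
```

In this guide you build the equivalent rule through the **+ New alert rule** form rather than by editing a rule file directly.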
@@ -137,7 +228,7 @@ By default, new alert rules are Grafana-managed. To switch to **Data source-mana
1. In the **Rule type** option, select **Data source-managed**.
1. Click **Preview alerts**.
## Set alert evaluation behavior
### Set alert evaluation behavior
Use [alert rule evaluation](ref:alert-rule-evaluation) to determine how frequently an alert rule should be evaluated and how quickly it should change its state.
@@ -154,7 +245,7 @@ Use [alert rule evaluation](ref:alert-rule-evaluation) to determine how frequent
Once a condition is met, the alert goes into the **Pending** state. If the condition remains active for the duration specified, the alert transitions to the **Firing** state; otherwise, it reverts to the **Normal** state.
## Configure labels and notifications
### Configure labels and notifications
Add [labels](ref:alert-rule-labels) to your alert rules to set which [notification policy](ref:notification-policies) should handle your firing alert instances.
@@ -164,4 +255,34 @@ All alert rules and instances, irrespective of their labels, match the default n
Add custom labels by selecting existing key-value pairs from the drop down, or add new labels by entering the new key or value.
{{< docs/shared lookup="alerts/configure-notification-message.md" source="grafana" version="<GRAFANA_VERSION>" >}}
### Configure notification message
Use [annotations](ref:shared-annotations) to add information to alert messages that helps responders address the alert.
Annotations are included by default in notification messages, and can use text or [templates](ref:shared-alert-rule-template) to display dynamic data from queries.
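As a rough illustration, annotations in a Prometheus-style rule definition might look like the following; the label name, the value reference, and the runbook URL are placeholders:

```yaml
annotations:
  summary: "High error rate on {{ $labels.instance }}"
  description: "The error rate has been {{ $value }} for the last 5 minutes."
  runbook_url: https://example.com/runbooks/high-error-rate
```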
Grafana provides several optional annotations.
1. Optional: Add a summary.
Short summary of what happened and why.
1. Optional: Add a description.
Description of what the alert rule does.
1. Optional: Add a Runbook URL.
Webpage where you keep your runbook for the alert.
1. Optional: Add a custom annotation.
Add any additional information that could help address the alert.
1. Optional: **Link dashboard and panel**.
[Link the alert rule to a panel](ref:shared-link-alert-rules-to-panels) to facilitate alert investigation.
1. Click **Save rule**.
[//]: <> ({{< docs/shared lookup="alerts/configure-notification-message.md" source="grafana" version="<GRAFANA_VERSION>" >}})

@@ -42,9 +42,9 @@ refs:
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/queries-conditions/#recovery-threshold
modify-the-no-data-or-error-state:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/state-and-health/#modify-the-no-data-or-error-state
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#modify-the-no-data-or-error-state
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/state-and-health/#modify-the-no-data-or-error-state
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#modify-the-no-data-or-error-state
pending-period:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/#pending-period
@@ -105,11 +105,6 @@ refs:
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/
compatible-data-sources:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/#supported-data-sources
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/#supported-data-sources
shared-provision-alerting-resources:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/
@@ -130,27 +125,43 @@ refs:
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/link-alert-rules-to-panels/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/link-alert-rules-to-panels/
tutorials:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/best-practices/tutorials/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/best-practices/tutorials/
---
# Configure Grafana-managed alert rules
Grafana-managed rules can query data from multiple data sources in a single alert rule.
They're the most flexible [alert rule type](ref:alert-rules).
You can also add expressions to transform your data, set alert conditions, and include images in alert notifications.
Grafana-managed alert rules are the default way to create alert rules in Grafana.
{{< admonition type="note" >}}
In Grafana Cloud, the number of Grafana-managed alert rules you can create depends on your Grafana Cloud plan.
Grafana-managed rules inherit their model from Prometheus Alerting and extend it with greater flexibility—such as multi-data source queries, expression-based transformations, advanced alert conditions, images in notifications, custom states, and more.
- Free Forever plan: You can create up to 100 free alert rules, with each alert rule having a maximum of 1000 alert instances.
- All paid plans (Pro and Advanced): They have a soft limit of 2000 alert rules and support unlimited alert instances. To increase the limit, open a support ticket from the [Cloud portal](/docs/grafana-cloud/account-management/support/).
To create or edit Grafana-managed alert rules, follow the instructions below.
{{< admonition type="tip" >}}
For quick-start tutorials on key alerting features, see [Getting started with Grafana Alerting tutorials](ref:tutorials).
{{< /admonition >}}
To create or edit Grafana-managed alert rules, follow the instructions below. For a practical example, check out our [tutorial on getting started with Grafana alerting](http://grafana.com/tutorials/alerting-get-started/).
## Before you begin
Verify that the data sources you plan to query in the alert rule are [compatible with Grafana-managed alert rules](ref:compatible-data-sources) and are properly configured.
Before you create Grafana-managed alert rules, review the following requirements and options.
### Supported data sources
Grafana-managed alert rules can query backend data sources when the data source's `plugin.json` file sets `{"backend": true, "alerting": true}`.
Before you create an alert rule, verify that the data sources you plan to query are compatible and properly configured.
You can find the public data sources that support alert rules in the [Grafana Plugins directory](/grafana/plugins/data-source-plugins/?features=alerting).
### Alert rule limits in Grafana Cloud
In Grafana Cloud, the number of Grafana-managed alert rules you can create depends on your Grafana Cloud plan.
- Free Forever plan: You can create up to 100 free alert rules, with each alert rule having a maximum of 1000 alert instances.
- All paid plans (Pro and Advanced): They have a soft limit of 2000 alert rules and support unlimited alert instances. To increase the limit, open a support ticket from the [Cloud portal](/docs/grafana-cloud/account-management/support/).
### Permissions
@@ -174,6 +185,8 @@ After you have created an alert rule, the system defaults to your previous choic
Switching from advanced to default may result in queries and expressions that can't be converted.
In this case, a warning message asks if you want to continue to reset to default settings.
## Set alert rule name
{{< docs/shared lookup="alerts/configure-alert-rule-name.md" source="grafana" version="<GRAFANA_VERSION>" >}}
## Define query and condition

@@ -50,5 +50,5 @@ The evaluation group of the recording rule determines how often the metric is pr
Similar to alert rules, Grafana supports two types of recording rules:
1. [Grafana-managed recording rules](ref:grafana-managed-recording-rules), which can query any Grafana data source supported by alerting.
1. [Grafana-managed recording rules](ref:grafana-managed-recording-rules), which can query any Grafana data source supported by alerting. They're the recommended option.
2. [Data source-managed recording rules](ref:data-source-managed-recording-rules), which can query Prometheus-based data sources like Mimir or Loki.

@@ -28,9 +28,9 @@ refs:
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-recording-rules/
alerting-data-sources:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/#supported-data-sources
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-grafana-managed-rule/#supported-data-sources
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/#supported-data-sources
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-grafana-managed-rule/#supported-data-sources
configure-grafana-min-interval:
- pattern: /docs/
destination: /docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/#min_interval

@@ -16,7 +16,7 @@ labels:
- enterprise
- oss
title: Create and link alert rules to panels
weight: 300
weight: 200
refs:
time-series-visualizations:
- pattern: /docs/grafana/

@@ -15,6 +15,32 @@ weight: 170
# Grafana Alerting best practices
This section provides a set of guides and examples of best practices for Grafana Alerting. Here you can learn more about more about how to handle common alert management problems and you can see examples of more advanced usage of Grafana Alerting.
This section provides a set of guides and examples of best practices for Grafana Alerting. Here you can learn more about how to handle common alert management problems and you can see examples of more advanced usage of Grafana Alerting.
{{< section >}}
Designing and configuring an alert management setup that works takes time. Here are some additional tips on how to create an effective alert management setup:
**Which are the key metrics for your business that you want to monitor and alert on?**
- Find events that are important to know about and not so trivial or frequent that recipients ignore them.
- Alerts should only be created for big events that require immediate attention or intervention.
- Consider quality over quantity.
**How do you want to organize your alerts and notifications?**
- Be selective about who you set to receive alerts. Consider sending them to the right teams, whoever is on call, and the specific channels.
- Think carefully about priority and severity levels.
- Automate the provisioning of Alerting resources as much as possible by using the API or Terraform.
**Which information should you include in notifications?**
- Consider who the alert receivers and responders are.
- Share information that helps responders identify and address potential issues.
- Link alerts to dashboards to guide responders on which data to investigate.
**How can you reduce alert fatigue?**
- Avoid noisy, unnecessary alerts by using silences, mute timings, or pausing alert rule evaluation.
- Continually tune your alert rules and review their effectiveness. Remove duplicate or ineffective alert rules.
- Continually review your thresholds and evaluation rules.

@@ -28,14 +28,14 @@ refs:
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/notifications/
no-data-and-error-alerts:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/state-and-health/#no-data-and-error-alerts
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#no-data-and-error-alerts
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/state-and-health/#no-data-and-error-alerts
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#no-data-and-error-alerts
configure-nodata-and-error-handling:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/state-and-health/#modify-the-no-data-or-error-state
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#modify-the-no-data-or-error-state
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/state-and-health/#modify-the-no-data-or-error-state
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#modify-the-no-data-or-error-state
missing-data-guide:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/best-practices/missing-data/

@@ -33,19 +33,19 @@ refs:
destination: /docs/grafana-cloud/alerting-and-irm/alerting/monitor-status/view-alert-state-history/
configure-nodata-and-error-handling:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/state-and-health/#modify-the-no-data-or-error-state
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#modify-the-no-data-or-error-state
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/state-and-health/#modify-the-no-data-or-error-state
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#modify-the-no-data-or-error-state
stale-alert-instances:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/state-and-health/#stale-alert-instances-missingseries
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/stale-alert-instances/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/state-and-health/#stale-alert-instances-missingseries
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/stale-alert-instances/
no-data-and-error-alerts:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/state-and-health/#no-data-and-error-alerts
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#no-data-and-error-alerts
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/state-and-health/#no-data-and-error-alerts
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#no-data-and-error-alerts
---
# Handle missing data in Grafana Alerting

@@ -10,7 +10,7 @@ labels:
- enterprise
- oss
menuTitle: Introduction
title: Introduction to Alerting
title: Introduction to Grafana Alerting
weight: 100
refs:
notifications:
@@ -58,27 +58,36 @@ refs:
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notifications/group-alert-notifications/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/notifications/group-alert-notifications/
multi-dimensional-alerts-example:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/best-practices/multi-dimensional-alerts/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/best-practices/multi-dimensional-alerts/
tutorials:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/best-practices/tutorials/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/best-practices/tutorials/
---
# Introduction to Alerting
# Introduction to Grafana Alerting
Whether you’re just starting out or you're a more experienced user of Grafana Alerting, learn more about the fundamentals and available features that help you create, manage, and respond to alerts; and improve your team’s ability to resolve issues quickly.
Grafana Alerting lets you define alert rules across multiple data sources and manage notifications with flexible routing.
Built on the Prometheus alerting model, it integrates with the Grafana stack to provide a scalable and effective alerting setup across a wide range of environments.
{{< admonition type="tip" >}}
For a hands-on introduction, refer to our [tutorial to get started with Grafana Alerting](http://grafana.com/tutorials/alerting-get-started/).
For a hands-on introduction, refer to [Getting started with Grafana Alerting tutorials](ref:tutorials).
{{< /admonition >}}
The following diagram gives you an overview of Grafana Alerting and introduces you to some of the fundamental features that are the principles of how Grafana Alerting works.
<br/>
## How it works at a glance
{{< figure src="/media/docs/alerting/alerting-configure-notifications-v2.png" max-width="750px" alt="How Grafana Alerting works" >}}
## How it works at a glance
- Grafana Alerting periodically queries data sources and evaluates the condition defined in the alert rule
- If the condition is breached, an alert instance fires
- Firing (and resolved) alert instances are sent for notifications, either directly to a contact point or through notification policies for more flexibility
1. Grafana Alerting periodically evaluates alert rules by executing their data source queries and checking their conditions.
1. Each alert rule can produce multiple alert instances—one per time series or dimension.
1. If a condition is breached, an alert instance fires.
1. Firing (and resolved) alert instances are sent for notifications, either directly to a contact point or through notification policies for more flexibility.
## Fundamentals
@@ -90,9 +99,13 @@ An [alert rule](ref:alert-rules) consists of one or more queries and expressions
In the alert rule, choose the contact point or notification policies to determine how to receive the alert notifications.
### Alert rule evaluation
[Alert rules are frequently evaluated](ref:alert-rule-evaluation) and the state of their alert instances is updated accordingly. Only alert instances that are in a firing or resolved state are sent in notifications.
### Alert instances
Each alert rule can produce multiple alert instances (also known as alerts) - one alert instance for each time series. This is exceptionally powerful as it allows you to observe multiple series in a single expression.
Each alert rule can produce multiple alert instances (also known as alerts) - one alert instance for each time series or dimension. This allows you to observe multiple resources in a single expression.
```promql
sum by(cpu) (
@@ -102,9 +115,9 @@ sum by(cpu) (
A rule using the PromQL expression above creates as many alert instances as the number of CPUs after the first evaluation, enabling a single rule to report the status of each CPU.
{{< figure src="/static/img/docs/alerting/unified/multi-dimensional-alert.png" caption="Multiple alert instances from a single alert rule" >}}
{{< figure src="/static/img/docs/alerting/unified/multi-dimensional-alert.png" alt="Multiple alert instances from a single alert rule" >}}
[Alert rules are frequently evaluated](ref:alert-rule-evaluation) and the state of their alert instances is updated accordingly. Only alert instances that are in a firing or resolved state are sent in notifications.
_For a demo, see the [multi-dimensional alerts example](ref:multi-dimensional-alerts-example)._
### Contact points
@@ -118,57 +131,20 @@ In the alert rule, you can choose a contact point to receive the alert notificat
### Notification policies
[Notification policies](ref:notification-policies) is an advanced option to handle alert notifications for larger systems.
[Notification policies](ref:notification-policies) are an advanced option for handling alert notifications by distinct scopes, such as by team or service—ideal for managing large alerting systems.
Notification policies route alerts to contact points via label matching. Each notification policy consists of a set of label matchers (0 or more) that specify which alert instances (identified by their labels) they handle. Notification policies are defined in a tree structure, where the root of the notification policy tree is the **Default notification policy**, which ensures all alert instances are handled.
Notification policies route alerts to contact points via label matching. They are defined in a tree structure, where the root of the notification policy tree is the **Default notification policy**, which ensures all alert instances are handled.
{{< figure src="/media/docs/alerting/notification-routing.png" max-width="750px" alt="A diagram displaying how the notification policy tree routes alerts" caption="Routing firing alert instances through notification policies" >}}
<br/>
Each notification policy decides where to send the alert (contact point) and when to send the notification (timing options). Additionally, it can [group multiple firing alert instances into a single notification](ref:group-alert-notifications) to reduce alert noise.
Each notification policy decides where to send the alert (contact point) and when to send the notification (timing options).
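As a sketch of how this fits together, a provisioned notification policy tree might look like the following: alerts labeled `team=backend` are routed to an on-call contact point, and everything else falls through to the default policy. The contact point names are placeholders, and the exact provisioning schema can vary by Grafana version:

```yaml
apiVersion: 1
policies:
  - orgId: 1
    receiver: default-email          # default notification policy: handles everything
    group_by: ["alertname"]
    routes:
      - receiver: backend-oncall     # child policy selected by label matching
        object_matchers:
          - ["team", "=", "backend"]
        group_wait: 30s              # timing options for this policy
        repeat_interval: 4h
```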
### Notification grouping
{{< figure src="/media/docs/alerting/alerting-notification-policy-diagram-v5.png" max-width="750px" alt="A diagram of the notification policy component" >}}
To reduce alert noise, Grafana Alerting [groups related firing alerts into a single notification](ref:group-alert-notifications) by default. You can customize this behavior in the alert rule or notification policy settings.
### Silences and mute timings
[Silences](ref:silences) and [mute timings](ref:mute-timings) allow you to pause notifications without interrupting alert rule evaluation. Use a silence to pause notifications on a one-time basis, such as during a maintenance window; and use mute timings to pause notifications at regular intervals, such as evenings and weekends.
### Architecture
Grafana Alerting is built on the Prometheus model of designing alerting systems. Prometheus-based alerting systems have two main components:
- An alert generator that [evaluates alert rules](ref:alert-rule-evaluation) and sends firing and resolved alerts to the alert receiver.
- An alert receiver (also known as Alertmanager) that receives the alerts and is responsible for sending their [notifications](ref:notifications).
## Design your Alerting system
Monitoring complex IT systems and understanding whether everything is up and running correctly is a difficult task. Setting up an effective alert management system is therefore essential to inform you when things are going wrong before they start to impact your business outcomes.
Designing and configuring an alert management set up that works takes time.
Here are some tips on how to create an effective alert management set up for your business:
**Which are the key metrics for your business that you want to monitor and alert on?**
- Find events that are important to know about and not so trivial or frequent that recipients ignore them.
- Alerts should only be created for big events that require immediate attention or intervention.
- Consider quality over quantity.
**How do you want to organize your alerts and notifications?**
- Be selective about who you set to receive alerts. Consider sending them to the right teams, whoever is on call, and the specific channels.
- Think carefully about priority and severity levels.
- Automate as far as possible provisioning Alerting resources with the API or Terraform.
**Which information should you include in notifications?**
- Consider who the alert receivers and responders are.
- Share information that helps responders identify and address potential issues.
- Link alerts to dashboards to guide responders on which data to investigate.
**How can you reduce alert fatigue?**
- Avoid noisy, unnecessary alerts by using silences, mute timings, or pausing alert rule evaluation.
- Continually tune your alert rules to review effectiveness. Remove alert rules to avoid duplication or ineffective alerts.
- Continually review your thresholds and evaluation rules.

@@ -15,16 +15,26 @@ labels:
title: Alert rule evaluation
weight: 108
refs:
alerts-state-health:
evaluation-within-a-group:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/state-and-health/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/evaluation-within-a-group/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/state-and-health/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/evaluation-within-a-group/
nodata-and-error-states:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/
import-ds-rules:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/alerting-migration/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/alerting-migration/
notifications:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notifications/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/notifications/
---
# Alert rule evaluation
@@ -35,80 +45,92 @@ The criteria determining when an alert rule fires are based on three settings:
- [Pending period](#pending-period): how long the condition must be met to start firing.
- [Keep firing for](#keep-firing-for): how long the alert continues to fire after the condition is no longer met.
{{< figure src="/media/docs/alerting/alert-rule-evaluation-2.png" max-width="750px" alt="Set the evaluation behavior of the alert rule in Grafana." caption="Set alert rule evaluation" >}}
{{< figure src="/media/docs/alerting/alert-rule-evaluation-2.png" max-width="750px" alt="Set the evaluation behavior of the alert rule in Grafana." caption="Set alert rule evaluation" >}}
## Evaluation group
These settings affect how alert instances progress through their lifecycle.
Every alert rule and recording rule is assigned to an evaluation group. You can assign the rule to an existing evaluation group or create a new one.
## Alerting lifecycle
Each evaluation group contains an **evaluation interval** that determines how frequently the rule is checked. For instance, the evaluation may occur every `10s`, `30s`, `1m`, `10m`, etc.
Each alert rule can generate one or more alert instances.
An alert instance transitions between these common states based on how long the alert condition remains met or not met.
| State | Description |
| -------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
| **Normal** | The state of an alert when the condition (threshold) is not met. |
| **Pending** | The state of an alert that has breached the threshold but for less than the [pending period](#pending-period). |
| **Alerting** | The state of an alert that has breached the threshold for longer than the [pending period](#pending-period). |
| **Recovering** | The state of a firing alert when the threshold is no longer breached, but for less than the [keep firing for](#keep-firing-for) period. |
{{< figure src="/media/docs/alerting/alert-rule-evaluation-basic-statediagram.png" alt="A diagram of the lifecyle of a firing alert instance." max-width="750px" >}}
If an alert rule changes (except for updates to annotations, the evaluation interval, or other internal fields), its alert instances reset to the **Normal** state, and update accordingly during the next evaluation.
{{< admonition type="note" >}}
**Evaluation strategies**
To learn about additional alert instance states, see [No Data and Error states](ref:nodata-and-error-states).
Rules in different groups can be evaluated simultaneously.
{{< /admonition >}}
- **Grafana-managed** rules within the same group are evaluated concurrently—they are evaluated at different times over the same evaluation interval but display the same evaluation timestamp.
## Notification routing
- **Data source-managed** rules within the same group are evaluated sequentially, one after the other—this is useful to ensure that recording rules are evaluated before alert rules.
Alert instances are routed for [notifications](ref:notifications) in two scenarios:
- **Grafana-managed rules [imported from data source-managed rules](ref:import-ds-rules)** are also evaluated sequentially.
1. When they transition to the **Alerting** state.
2. When they transition to the **Normal** state and are marked as `Resolved`, either from the **Alerting** or **Recovering** state.
## Evaluation group
Every alert rule and recording rule is assigned to an evaluation group.
Each evaluation group contains an **evaluation interval** that determines how frequently the rule is checked. For instance, the evaluation may occur every `10s`, `30s`, `1m`, `10m`, etc.
Rules can be evaluated concurrently or sequentially. For details, see [How rules are evaluated within a group](ref:evaluation-within-a-group).
## Pending period
You can set a pending period to prevent unnecessary alerts from temporary issues.
You can set a **Pending period** to prevent unnecessary notifications caused by temporary issues.
The pending period specifies how long the condition must be met before firing, ensuring the condition is consistently met over a consecutive period.
When the alert condition is met, the alert instance enters the **Pending** state. It remains in this state until the condition has been continuously true for the entire **Pending period**.
You can also set the pending period to zero to skip it and have the alert fire immediately once the condition is met.
This ensures the condition breach is stable before the alert transitions to the **Alerting** state and is routed for notification.
- **Normal** -> **Pending** -> **Alerting**<sup>\*</sup>
You can also set the **Pending period** to zero to skip the **Pending** state entirely and transition to **Alerting** immediately.
## Keep firing for
You can set a period to keep an alert firing after the threshold is no longer breached. This sets the alert to a Recovering state. In a Recovering state, the alert won’t fire again if the threshold is breached. The Keep firing timer is then reset and the alert transitions back to Alerting state.
You can set a **Keep firing for** period to avoid repeated firing-resolving-firing notifications caused by flapping conditions.
When the alert condition is no longer met during the **Alerting** state, the alert instance enters the **Recovering** state.
The Keep firing for period helps reduce repeated firing-resolving-firing notification scenarios caused by flapping alerts.
- **Alerting** → **Recovering** → **Normal (Resolved)**<sup>\*</sup>
- After the **Keep firing for** period elapses, the alert transitions to the **Normal** state and is marked as **Resolved**.
- If the alert condition is met again, the alert transitions back to the **Alerting** state, and no new notifications are sent.
You can also set the **Keep firing for** period to zero to skip the **Recovering** state entirely.
## Evaluation example
Keep in mind:
- One alert rule can generate multiple alert instances - one for each time series produced by the alert rule's query.
- Alert instances from the same alert rule may be in different states. For instance, only one observed machine might start firing.
- Only **Alerting** and **Resolved** alert instances are routed to manage their notifications.
{{< figure src="/media/docs/alerting/alert-rule-evaluation-overview-statediagram-v2.png" alt="A diagram of the alert instance states and when to route their notifications." max-width="750px" >}}
<!--
Remove ///
stateDiagram-v2
direction LR
Normal --///> Pending
note right of Normal
Route "Resolved" alert instances
for notifications
end note
Pending --///> Alerting
Alerting --///> Normal: Resolved
note right of Alerting
Route "Alerting" alert instances
for notifications
end note
-->
- One alert rule can generate multiple alert instances—one for each series or dimension produced by the rule's query. Alert instances from the same alert rule may be in different states.
- Only alert instances in the **Alerting** and **Normal (Resolved)** state are routed for [notifications](ref:notifications).
Consider an alert rule with an **evaluation interval** set at every 30 seconds and a **pending period** of 90 seconds. The evaluation occurs as follows:
| Time | Condition | Alert instance state | Pending counter |
| ------------------------- | --------- | --------------------- | --------------- |
| ------------------------- | --------- | -------------------- | --------------- |
| 00:30 (first evaluation) | Not met | Normal | - |
| 01:00 (second evaluation) | Breached | Pending | 0s |
| 01:30 (third evaluation) | Breached | Pending | 30s |
| 02:00 (fourth evaluation) | Breached | Pending | 60s |
| 02:30 (fifth evaluation) | Breached | Alerting<sup>\*</sup> | 90s |
| 02:30 (fifth evaluation) | Breached | Alerting 📩 | 90s |
An alert instance is resolved when it transitions from the `Firing` to the `Normal` state. For instance, in the previous example:
With a **keep firing for** period of 0 seconds, the alert instance transitions immediately from **Alerting** to **Normal** and is marked as `Resolved`:
| Time | Condition | Alert instance state | Pending counter |
| -------------------------- | --------- | ----------------------------- | --------------- |
| 03:00 (sixth evaluation) | Not met | Normal <sup>Resolved \*</sup> | 120s |
| 03:00 (sixth evaluation) | Not met | Normal <sup>Resolved</sup> 📩 | 120s |
| 03:30 (seventh evaluation) | Not met | Normal | 150s |
To learn more about the state changes of alert rules and alert instances, refer to [State and health of alert rules](ref:alerts-state-health).

@@ -0,0 +1,58 @@
---
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rule-evaluation/alert-rule-state-and-health/
description: Learn how the state and health of an alert rule are determined by the states of its alert instances.
keywords:
- grafana
- alerting
- guide
- state
labels:
products:
- cloud
- enterprise
- oss
title: Alert rule state and health
weight: 130
refs:
example-multi-dimensional-alerts:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/best-practices/multi-dimensional-alerts/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/best-practices/multi-dimensional-alerts/
alert-instance-states:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#alert-instance-states
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#alert-instance-states
---
# Alert rule state and health
Each alert rule can generate one or more alert instances—one alert instance for each series or dimension, as shown in the [multi-dimensional alert example](ref:example-multi-dimensional-alerts).
Each alert instance of the same alert rule represents a different target and can be in a different state; for example, one alert instance may be **Normal** while another is **Alerting**.
{{< figure src="/media/docs/alerting/alert-rule-example-multiple-alert-instances.png" max-width="750px" alt="Multi dimensional alert rule. The alert rule state and alert rule health are determined by the state of the alert instances." >}}
The alert rule state and alert rule health are determined by the [state of the alert instances](ref:alert-instance-states).
## Alert rule states
An alert rule can be in either of the following states:
| State | Description |
| ----------- | ---------------------------------------------------------------------------------------------------- |
| **Normal** | None of the alert instances returned by the evaluation engine is in a `Pending` or `Alerting` state. |
| **Pending** | At least one alert instance returned by the evaluation engine is `Pending`. |
| **Firing** | At least one alert instance returned by the evaluation engine is `Alerting`. |
## Alert rule health
An alert rule can have one of the following health statuses:
| State | Description |
| ---------------------- | -------------------------------------------------------------------------------------------------------- |
| **Ok** | No error when evaluating the alert rule. |
| **Error** | An error occurred when evaluating the alert rule. |
| **No Data** | The alert rule query returns no data. |
| **{status}, KeepLast** | The rule would have received another status but was configured to keep the last state of the alert rule. |

@@ -0,0 +1,37 @@
---
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rule-evaluation/evaluation-within-a-group/
description: Learn how alert rules and recording rules within the same evaluation group are evaluated concurrently or sequentially.
keywords:
- grafana
- alerting
- guide
- state
labels:
products:
- cloud
- enterprise
- oss
title: How rules are evaluated within a group
menuTitle: Evaluation within a group
weight: 150
refs:
import-ds-rules:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/alerting-migration/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/alerting-migration/
---
# How rules are evaluated within a group
Each evaluation group contains an **evaluation interval** that determines how frequently the rule is evaluated. For instance, the evaluation may occur every `10s`, `30s`, `1m`, `10m`, etc.
Rules in different evaluation groups can be evaluated simultaneously.
Rules within the same evaluation group can be evaluated simultaneously or sequentially, depending on the rule type:
- **Grafana-managed** rules within the same group are evaluated concurrently—they are evaluated at different times over the same evaluation interval but display the same evaluation timestamp.
- **Data source-managed** rules within the same group are evaluated sequentially, one after the other—this is useful to ensure that recording rules are evaluated before alert rules, as shown in the example after this list.
- **Grafana-managed rules [imported from data source-managed rules](ref:import-ds-rules)** are also evaluated sequentially.
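To make the sequential case concrete, here is an illustrative Mimir/Prometheus-style rule group (the metric names are made up). Because the rules are evaluated in order, the alert rule always sees the series that the recording rule just computed:

```yaml
groups:
  - name: payments
    interval: 1m
    rules:
      # Evaluated first: pre-compute the error rate as a new series.
      - record: job:payment_errors:rate5m
        expr: sum(rate(payment_errors_total[5m]))
      # Evaluated second, within the same interval, using the recorded series.
      - alert: HighPaymentErrorRate
        expr: job:payment_errors:rate5m > 5
        for: 5m
```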

@@ -1,10 +1,11 @@
---
aliases:
- ../../fundamentals/alert-rule-evaluation/state-and-health/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/state-and-health/
- ../../fundamentals/alert-rules/state-and-health/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/state-and-health/
- ../../fundamentals/state-and-health/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/state-and-health/
- ../../unified-alerting/alerting-rules/state-and-health/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alerting-rules/state-and-health
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rule-evaluation/state-and-health/
description: Learn about the state and health of alert rules to understand several key status indicators about your alerts
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/
description: Grafana Alerting implements the No Data and Error states to handle common scenarios when evaluating alert rules.
keywords:
- grafana
- alerting
@ -16,7 +17,7 @@ labels:
- cloud
- enterprise
- oss
title: State and health of alerts
title: No Data and Error states
weight: 109
refs:
evaluation_timeout:
@ -67,40 +68,41 @@ refs:
destination: /docs/grafana-cloud/alerting-and-irm/alerting/best-practices/missing-data/
---
# State and health of alerts
# No Data and Error states
There are three key components that help you understand how your alerts behave during their evaluation: [alert instance state](#alert-instance-state), [alert rule state](#alert-rule-state), and [alert rule health](#alert-rule-health). Although related, each component conveys subtly different information.
Grafana Alerting implements the **No Data** and **Error** states to handle common scenarios when evaluating alert rules, and you can modify their behavior.
## Alert instance state
An alert instance can transition to these special states:
An alert instance can be in either of the following states:
- [No Data state](#no-data-state) occurs when the alert rule query runs successfully but returns no data points.
- [Error state](#error-state) occurs when the alert rule fails to evaluate its query or queries successfully.
{{< admonition type="note" >}}
No Data and Error states are supported only for Grafana-managed alert rules.
{{< /admonition >}}
{{< admonition type="tip" >}}
For common examples and practical guidance on handling **Error**, **No Data**, and **stale** alert scenarios, refer to the [Handle connectivity errors](ref:guide-connectivity-errors) and [Handle missing data](ref:guide-missing-data) guides.
{{< /admonition >}}
## Alert instance states
A Grafana-managed alert instance can be in any of the following states, depending on the outcome of the alert rule evaluation:
| State | Description |
| ------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Normal** | The state of an alert when the condition (threshold) is not met. |
| **Pending** | The state of an alert that has breached the threshold but for less than the [pending period](ref:pending-period). |
| **Alerting** | The state of an alert that has breached the threshold for longer than the [pending period](ref:pending-period). |
| **Recovering** | The state of an alert that has been configured to keep [firing for a duration after it is triggered](ref:keep-firing). |
| **Recovering** | The state of a firing alert when the threshold is no longer breached, but for less than the [keep firing for](ref:keep-firing) period. |
| **Error<sup>\*</sup>** | The state of an alert when an error or timeout occurred evaluating the alert rule. <br/> You can customize the behavior of the [Error state](#error-state), which by default triggers a different alert. |
| **No Data<sup>\*</sup>** | The state of an alert whose query returns no data or all values are null. <br/> You can customize the behavior of the [No Data state](#no-data-state), which by default triggers a different alert. |
If an alert rule changes (except for updates to annotations, the evaluation interval, or other internal fields), its alert instances reset to the `Normal` state. The alert instance state then updates accordingly during the next evaluation.
{{< figure src="/media/docs/alerting/alert-state-diagram2.png" caption="Alert instance state diagram" alt="A diagram of the distinct alert instance states and transitions." max-width="750px" >}}
{{< admonition type="note" >}}
`No Data` and `Error` states are supported only for Grafana-managed alert rules.
{{< /admonition >}}
### Notification routing
Alert instances are routed for [notifications](ref:notifications) when they are in the `Alerting` state or when they are `Resolved`, that is, when they transition from the `Alerting` state back to `Normal`.
{{< figure src="/media/docs/alerting/alert-rule-evaluation-overview-statediagram-v2.png" alt="A diagram of the alert instance states and when to route their notifications." max-width="750px" >}}
### `Error` state
## `Error` state
The **Error** state is triggered when the alert rule fails to evaluate its query or queries successfully.
@ -108,7 +110,7 @@ This can occur due to evaluation timeouts (default: `30s`) or three repeated fai
When an alert instance enters the **Error** state, Grafana, by default, triggers a new [`DatasourceError` alert](#no-data-and-error-alerts). You can control this behavior based on the desired outcome of your alert rule in [Modify the `No Data` or `Error` state](#modify-the-no-data-or-error-state).
### `No Data` state
## `No Data` state
The **No Data** state occurs when the alert rule query runs successfully but returns no data points at all.
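
If you provision Grafana-managed alert rules from files, the behavior for both states is set per rule. The following trimmed sketch shows the relevant fields; the group, folder, rule UID, and title are placeholders, and the queries and expressions under `data` are omitted:

```yaml
apiVersion: 1
groups:
  - orgId: 1
    name: service-health # evaluation group
    folder: example-folder
    interval: 1m
    rules:
      - uid: example-rule-uid
        title: Service errors
        condition: C # refId of the expression used as the alert condition
        data: [] # queries and expressions omitted for brevity
        for: 5m # pending period
        noDataState: NoData # what to do when the query returns no data
        execErrState: Error # what to do when rule evaluation fails
```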
@ -159,12 +161,8 @@ To minimize the number of **No Data** or **Error** state alerts received, try th
1. To reduce multiple notifications from **Error** alerts, define a [notification policy](ref:notification-policies) to handle all related alerts with `alertname=DatasourceError`, and filter and group errors from the same data source using the `datasource_uid` label.
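
For example, a notification policy like the following file-provisioning sketch routes all `DatasourceError` alerts to a single contact point and groups them by data source. The `orgId` and the contact point names are placeholders:

```yaml
apiVersion: 1
policies:
  - orgId: 1
    receiver: default-contact-point
    routes:
      # Catch all DatasourceError alerts and group them per data source.
      - receiver: infra-oncall
        object_matchers:
          - ['alertname', '=', 'DatasourceError']
        group_by: ['datasource_uid']
```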
{{< admonition type="tip" >}}
For common examples and practical guidance on handling **Error**, **No Data**, and **stale** alert scenarios, see the following related guides:
- [Handling connectivity errors](ref:guide-connectivity-errors)
- [Handling missing data](ref:guide-missing-data)
{{< admonition type="tip" >}}
For common examples and practical guidance on handling **Error**, **No Data**, and **stale** alert scenarios, refer to the [Handle connectivity errors](ref:guide-connectivity-errors) and [Handle missing data](ref:guide-missing-data) guides.
{{< /admonition >}}
## `grafana_state_reason` for troubleshooting
@ -183,26 +181,3 @@ The `grafana_state_reason` annotation is included in these situations, providing
- If "no data" or "error" handling transitions to the `Normal` state, the `grafana_state_reason` annotation is included with the value **No Data** or **Error**, respectively.
- If the alert rule is deleted or paused, the `grafana_state_reason` is set to **Paused** or **RuleDeleted**. For some updates, it is set to **Updated**.
- [Stale alert instances](ref:stale-alert-instances) in the `Normal` state include the `grafana_state_reason` annotation with the value **MissingSeries**.
## Alert rule state
The alert rule state is determined by the “worst case” state of the alert instances produced. For example, if one alert instance is `Alerting`, the alert rule state is `Firing`.
An alert rule can be in either of the following states:
| State | Description |
| ----------- | ---------------------------------------------------------------------------------------------------- |
| **Normal** | None of the alert instances returned by the evaluation engine is in a `Pending` or `Alerting` state. |
| **Pending** | At least one alert instance returned by the evaluation engine is `Pending`.                          |
| **Firing**  | At least one alert instance returned by the evaluation engine is `Alerting`.                         |
## Alert rule health
An alert rule can have one of the following health statuses:
| State | Description |
| ---------------------- | -------------------------------------------------------------------------------------------------------- |
| **Ok** | No error when evaluating an alerting rule. |
| **Error** | An error occurred when evaluating an alerting rule. |
| **No Data** | The absence of data in at least one time series returned during a rule evaluation. |
| **{status}, KeepLast** | The rule would have received another status but was configured to keep the last state of the alert rule. |

@ -12,13 +12,13 @@ labels:
- enterprise
- oss
title: Stale alert instances
weight: 110
weight: 120
refs:
no-data-state:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/state-and-health/#no-data-state
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#no-data-state
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/state-and-health/#no-data-state
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#no-data-state
no-data-and-error-handling:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-grafana-managed-rule/#configure-no-data-and-error-handling

@ -18,11 +18,6 @@ labels:
title: Alert rules
weight: 100
refs:
shared-configure-prometheus-data-source-alerting:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/prometheus/configure/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/connect-externally-hosted/data-sources/prometheus/configure/
queries-and-conditions:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/queries-conditions/#data-source-queries
@ -33,14 +28,11 @@ refs:
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/queries-conditions/#alert-condition
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/queries-conditions/#alert-condition
recorded-queries:
- pattern: /docs/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/recorded-queries/
notification-images:
alert-rule-evaluation:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/configure-notifications/template-notifications/images-in-notifications/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/rule-evaluation/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/configure-notifications/template-notifications/images-in-notifications/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/rule-evaluation/
notifications:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notifications/
@ -51,84 +43,50 @@ refs:
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-recording-rules/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-recording-rules/
expression-queries:
configure-grafana-alerts:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/queries-conditions/#advanced-options-expressions
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-grafana-managed-rule/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/queries-conditions/#advanced-options-expressions
alert-rule-evaluation:
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-grafana-managed-rule/
comparison-ds-grafana-rules:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/rule-evaluation/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-data-source-managed-rule/#comparison-with-grafana-managed-rules
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/rule-evaluation/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-data-source-managed-rule/#comparison-with-grafana-managed-rules
---
# Alert rules
An alert rule is a set of evaluation criteria that determines whether an alert will fire. An alert rule consists of:
- [Queries](ref:queries-and-conditions) that select the dataset to evaluate.
- An [alert condition](ref:alert-condition) (the threshold) that the query must meet or exceed to trigger the alert instance.
- An interval that specifies the frequency of [alert rule evaluation](ref:alert-rule-evaluation) and a duration indicating how long the condition must be met to trigger the alert instance.
- Other customizable options, for example, setting what should happen in the absence of data, notification messages, and more.
1. [Queries](ref:queries-and-conditions) that select the dataset to evaluate.
1. An [alert condition](ref:alert-condition) (the threshold) that the query must meet or exceed to trigger the alert instance.
Grafana supports two different alert rule types: Grafana-managed alert rules and data source-managed alert rules.
{{< figure src="/media/docs/alerting/alerting-query-conditions-default-options.png" max-width="750px" alt="Alert query using the Prometheus query editor and alert condition" >}}
## Grafana-managed alert rules
1. An interval that specifies the frequency of [alert rule evaluation](ref:alert-rule-evaluation) and a duration indicating how long the condition must be met to trigger the alert instance.
1. Other customizable options, including expressions, labels, annotations, error and no data handling, notification routing, and more.
Grafana-managed alert rules are the most flexible alert rule type. They allow you to create alert rules that can act on data from any of the [supported data sources](#supported-data-sources), and use multiple data sources in a single alert rule.
## Alert rule types
{{< figure src="/media/docs/alerting/grafana-managed-alerting-architecture.png" max-width="750px" caption="How Grafana-managed alerting works by default" >}}
Grafana Alerting inherits the Prometheus Alerting model for defining alert rules and supports two alert rule types:
1. Alert rules are created and stored within Grafana.
1. Alert rules can query one or more supported data sources.
1. Alert rules are evaluated by the Alert Rule Evaluation Engine within Grafana.
1. Firing and resolved alert instances are forwarded to [handle their notifications](ref:notifications).
- **Data source-managed alert rules**
### Supported data sources
These alert rules can only query Prometheus-based data sources such as Mimir, Loki, and Prometheus. The rules are stored in the data source.
Grafana-managed alert rules can query backend data sources that declare support for Alerting by specifying `{"backend": true, "alerting": true}` in their `plugin.json` file.
Grafana Alerting supports this alert rule type for horizontal scalability with these data sources.
Find the public data sources supporting Alerting in the [Grafana Plugins directory](/grafana/plugins/data-source-plugins/?features=alerting).
- **Grafana-managed alert rules**
## Data source-managed alert rules
The recommended alert rule type in Grafana Alerting.
Data source-managed alert rules can only be created using Grafana Mimir or Grafana Loki data sources. Both data source backends can provide high availability and fault tolerance, enabling you to scale your alerting setup.
These alert rules can query a wider range of backend data sources, including multiple data sources in a single alert rule. They support expression-based transformations, advanced alert conditions, images in notifications, handling of error and no data states, and [more](ref:comparison-ds-grafana-rules).
{{< figure src="/media/docs/alerting/mimir-managed-alerting-architecture-v2.png" max-width="750px" caption="Mimir-managed alerting architecture" >}}
1. Alert rules are stored within the Mimir or Loki data source.
1. Alert rules can query only their specific data source.
1. Alert rules are evaluated by the Alert Rule Evaluation Engine within the data source.
1. Firing and resolved alert instances are forwarded to [handle their notifications](ref:notifications).
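
For example, a Loki-managed alert rule is defined in the ruler's Prometheus-compatible rule file format, using a LogQL expression. In this sketch, the label selector, threshold, and names are illustrative:

```yaml
groups:
  - name: checkout-logs
    interval: 1m
    rules:
      - alert: HighLogErrorRate
        # Alert when the checkout service logs more than 10 error lines per second.
        expr: sum(rate({app="checkout"} |= "error" [5m])) > 10
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: Error log rate in the checkout service is above 10 lines per second
```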
> Rules from a Prometheus data source appear in the **Data source-managed** section of the **Alert rules** page when [Manage alerts via Alerting UI](ref:shared-configure-prometheus-data-source-alerting) is enabled.
>
> However, Grafana can only create and edit data source-managed rules for Mimir and Loki, not for a Prometheus instance.
[//]: <> ({{< docs/shared lookup="alerts/note-prometheus-ds-rules.md" source="grafana" version="<GRAFANA_VERSION>" >}})
## Comparison between alert rule types
We recommend using Grafana-managed alert rules whenever possible, and opting for data source-managed alert rules when you need to scale your alerting setup.
The table below compares Grafana-managed and data source-managed alert rules.
| <div style="width:200px">Feature</div> | <div style="width:200px">Grafana-managed alert rule</div> | <div style="width:200px">Data source-managed alert rule</div> |
| ----------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------- |
| Create alert rules<wbr /> that query [data sources supporting Alerting](#supported-data-sources) | Yes | Only supports creating rules for Mimir and Loki. |
| Mix and match data sources | Yes | No |
| Add [expressions](ref:expression-queries) to transform<wbr /> your data and set [alert conditions](ref:alert-condition) | Yes | No |
| Use [images in alert notifications](ref:notification-images) | Yes | No |
| Support for [recording rules](#recording-rules) | Yes | Yes |
| Organization | Organize and manage access with folders | Use namespaces |
| Alert rule evaluation and delivery | Alert evaluation is done in Grafana, while delivery can be handled by Grafana or an external Alertmanager. | Alert rule evaluation and alert delivery are distributed. |
| Scaling | Alert rules are stored in the Grafana database, which may experience transient errors. It only scales vertically. | Alert rules are stored within the data source and allow for horizontal scaling. |
You can find the supported public data sources in the [Grafana Plugins directory](/grafana/plugins/data-source-plugins/?features=alerting). For step-by-step instructions, refer to [Configure Grafana-managed alert rules](ref:configure-grafana-alerts).
## Recording rules
Similar to alert rules, recording rules are evaluated periodically. A recording rule pre-computes frequently used or computationally expensive queries, and saves the results as a new time series metric.
The new recording metric can then be used in alert rules and dashboards to optimize their queries.
For more details, refer to [Create recording rules](ref:create-recording-rules).
The new recording metric can then be used in alert rules and dashboards to optimize their queries. For further details, refer to [Create recording rules](ref:create-recording-rules).

@ -74,9 +74,10 @@ Start defining your [contact points](ref:contact-points) to specify how to recei
## How it works at a glance
- Grafana alerting periodically [evaluates your alert rules](ref:alert-rule-evaluation) and triggers notifications for firing and resolved alert instances.
- You can configure the alert rule to send notifications to a contact point or route them via Notification Policies for greater flexibility.
- To minimize alert noise, group similar alerts into a single notification by grouping alert labels and notification timings.
- Grafana alerting periodically [evaluates your alert rules](ref:alert-rule-evaluation).
- It triggers notifications for alert instances that are **firing** or **resolved**.
- You can configure an alert rule to send notifications to a **contact point** or route them through **notification policies** for greater flexibility.
- To reduce the number of notifications, you can **group related alerts** into a single notification by using label grouping and notification timings.
## Fundamentals
@ -130,8 +131,11 @@ Additionally, you can use [silences](ref:silences) and [mute timings](ref:mute-t
## Architecture
Grafana Alerting is based on the Prometheus model for designing alerting systems. Its architecture decouples the alert generator from the alert notification manager (known as the Alertmanager) to enhance scalability and performance.
Grafana Alerting is built on the Prometheus model, which separates two main components for scalability and performance:
- **An alert generator** that evaluates alert rules and sends firing and resolved alerts to the alert receiver.
- **An alert receiver** (also known as Alertmanager) that receives the alerts and is responsible for sending their notifications.
{{< figure src="/media/docs/alerting/alerting-alertmanager-architecture.png" max-width="750px" alt="A diagram with the alert generator and alert manager architecture" >}}
Grafana provides a custom Alertmanager, extending the Prometheus Alertmanager, to manage and deliver alert notifications. If you run a Prometheus or Mimir Alertmanager, you can configure Grafana Alerting to manage them and handle notifications for Grafana-managed alerts. For details, refer to [configure Alertmanagers](ref:configure-alertmanager).
Grafana includes a custom Alertmanager that extends the Prometheus Alertmanager to manage and deliver alert notifications. You can also [configure Grafana Alerting to work with other Alertmanagers](ref:configure-alertmanager).

@ -37,19 +37,19 @@ refs:
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/link-alert-rules-to-panels/
alert-rule-state:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/state-and-health/#alert-rule-state
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/alert-rule-state-and-health/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/state-and-health/#alert-rule-state
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/alert-rule-state-and-health/
alert-instance-state:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/stale-alert-instances/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#alert-instance-states
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/stale-alert-instances/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#alert-instance-states
alert-rule-health:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/state-and-health/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/alert-rule-state-and-health/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/state-and-health/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/alert-rule-state-and-health/
---
# View alert state

@ -5,8 +5,6 @@ labels:
title: 'Set alert rule name'
---
## Set alert rule name
1. Click **Alerts & IRM** -> **Alert rules** -> **+ New alert rule**.
1. Enter a name to identify your alert rule.

@ -5,8 +5,6 @@ labels:
title: 'Configure notification message'
---
## Configure notification message
Use [annotations](ref:shared-annotations) to add information to alert messages that can help you respond to the alert.
Annotations are included by default in notification messages, and can use text or [templates](ref:shared-alert-rule-template) to display dynamic data from queries.
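
For example, when provisioned from a file, a summary annotation that mixes static text with template expressions might look like this sketch; the `instance` label and the query refID `A` are assumptions about the rule's queries:

```yaml
annotations:
  summary: 'CPU usage on {{ $labels.instance }} is {{ $values.A }} percent'
```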

@ -1,10 +0,0 @@
---
labels:
products:
- oss
title: 'Note Prometheus data source-managed rules'
---
> Rules from a Prometheus data source appear in the **Data source-managed** section of the **Alert rules** page when [Manage alerts via Alerting UI](ref:shared-configure-prometheus-data-source-alerting) is enabled.
>
> However, Grafana can only create and edit data source-managed rules for Mimir and Loki, not for a Prometheus instance.