grafana/docs/sources/alerting/fundamentals/alert-rule-evaluation/state-and-health.md


---
aliases:
  - ../../fundamentals/alert-rules/state-and-health/
  - ../../fundamentals/state-and-health/
  - ../../unified-alerting/alerting-rules/state-and-health/
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rule-evaluation/state-and-health/
description: Learn about the state and health of alert rules to understand several key status indicators about your alerts
keywords:
  - grafana
  - alerting
  - keep last state
  - guide
  - state
labels:
  products:
    - cloud
    - enterprise
    - oss
title: State and health of alerts
weight: 109
refs:
  pending-period:
    - pattern: /docs/grafana/
      destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rule-evaluation/#pending-period
    - pattern: /docs/grafana-cloud/
      destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rule-evaluation/#pending-period
  no-data-and-error-handling:
    - pattern: /docs/grafana/
      destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-grafana-managed-rule/#configure-no-data-and-error-handling
    - pattern: /docs/grafana-cloud/
      destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-grafana-managed-rule/#configure-no-data-and-error-handling
  notifications:
    - pattern: /docs/grafana/
      destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notifications/
    - pattern: /docs/grafana-cloud/
      destination: /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/notifications/
---

# State and health of alerts

There are three key components that help you understand how your alerts behave during their evaluation: alert instance state, alert rule state, and alert rule health. Although related, each component conveys subtly different information.

## Alert instance state

An alert instance can be in any of the following states:

| State | Description |
| ----- | ----------- |
| **Normal** | The state of an alert when the condition (threshold) is not met. |
| **Pending** | The state of an alert that has breached the threshold, but for less than the pending period. |
| **Alerting** | The state of an alert that has breached the threshold for longer than the pending period. |
| **No Data\*** | The state of an alert whose query returns no data or all values are null.<br/>An alert in this state generates a new `DatasourceNoData` alert. You can modify the default behavior of the no data state. |
| **Error\*** | The state of an alert when an error or timeout occurred evaluating the alert rule.<br/>An alert in this state generates a new `DatasourceError` alert. You can modify the default behavior of the error state. |

If an alert rule changes (except for updates to annotations, the evaluation interval, or other internal fields), its alert instances reset to the Normal state. The alert instance state then updates accordingly during the next evaluation.

{{< figure src="/media/docs/alerting/alert-instance-states-v3.png" caption="Alert instance state diagram" alt="A diagram of the distinct alert instance states and transitions." max-width="750px" >}}
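The basic transitions among Normal, Pending, and Alerting can be sketched as a small decision function. This is a minimal illustration of the pending-period logic described above, not Grafana's actual implementation; the function name and parameters are hypothetical.

```python
from enum import Enum

class AlertState(Enum):
    NORMAL = "Normal"
    PENDING = "Pending"
    ALERTING = "Alerting"

def next_state(condition_breached: bool,
               breach_duration_s: float,
               pending_period_s: float) -> AlertState:
    """Decide an alert instance's state for one evaluation.

    breach_duration_s: how long the condition has been continuously breached.
    pending_period_s: the rule's configured pending period.
    """
    if not condition_breached:
        return AlertState.NORMAL
    if breach_duration_s < pending_period_s:
        return AlertState.PENDING   # breached, but not for long enough yet
    return AlertState.ALERTING      # breached for at least the pending period
```

For example, with a 5-minute pending period, an instance that has breached the threshold for only 1 minute is Pending, not Alerting.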

{{< admonition type="note" >}}

No Data and Error states are supported only for Grafana-managed alert rules.

{{< /admonition >}}

### Notification routing

Alert instances are routed for notifications when they are in the Alerting state, or when they are Resolved, transitioning from the Alerting state back to Normal.

{{< figure src="/media/docs/alerting/alert-rule-evaluation-overview-statediagram-v2.png" alt="A diagram of the alert instance states and when to route their notifications." max-width="750px" >}}
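The routing rule above can be expressed as a tiny predicate (a hedged sketch for illustration; the function and state strings are assumptions, not Grafana APIs):

```python
def should_notify(previous_state: str, current_state: str) -> bool:
    """True if this evaluation should be routed for notification."""
    firing = current_state == "Alerting"
    # Resolved: the instance transitions from Alerting back to Normal.
    resolved = previous_state == "Alerting" and current_state == "Normal"
    return firing or resolved
```

Note that a transition from Pending back to Normal is not routed: the alert never fired, so there is nothing to resolve.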

### Stale alert instances (MissingSeries)

The No Data state occurs when the alert rule query runs successfully but returns no data points at all.

An alert instance is considered stale if the query returns data but its dimension or series has disappeared for two evaluation intervals. In this case, the alert instance transitions to the Normal (MissingSeries) state as resolved, and is then evicted. The process is as follows:

  1. The alert rule runs and returns data for some label sets.

  2. An alert instance that previously existed is now missing.

  3. Grafana keeps the previous state of the alert instance for two evaluation intervals.

  4. If it remains missing after two evaluation intervals, it transitions to the Normal state and sets `MissingSeries` in the `grafana_state_reason` annotation.

  5. Stale alert instances in the Alerting, No Data, or Error states transition to the Normal state as Resolved, and are routed for notifications like other resolved alerts.

  6. The alert instance is removed from the UI.
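The staleness steps above can be sketched as a bookkeeping function run after each evaluation. This is an illustrative model only (the function, data shapes, and two-interval threshold parameter are assumptions for the sketch):

```python
def prune_stale(states, current_labelsets, missing_counts, threshold=2):
    """Track missing series and evict stale alert instances.

    states: {labelset: state} for instances currently tracked.
    current_labelsets: label sets returned by this evaluation.
    Returns the label sets resolved as Normal (MissingSeries) and evicted.
    """
    evicted = []
    for labels in list(states):
        if labels in current_labelsets:
            missing_counts[labels] = 0            # series is back; reset
            continue
        missing_counts[labels] = missing_counts.get(labels, 0) + 1
        if missing_counts[labels] >= threshold:   # missing for two intervals
            evicted.append(labels)                # resolved, then removed from the UI
            del states[labels]
            missing_counts.pop(labels, None)
    return evicted
```

An instance missing for only one interval keeps its previous state; only after the second consecutive miss is it resolved and evicted.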

## No Data and Error alerts

When an alert rule evaluation results in a No Data or Error state, Grafana Alerting immediately creates a new alert instance (skipping the pending period) with the following additional labels:

- `alertname`: Either `DatasourceNoData` or `DatasourceError`, depending on the state.
- `datasource_uid`: The UID of the data source that caused the state.
- `rulename`: The name of the alert rule that originated the alert.

Note that `DatasourceNoData` and `DatasourceError` alert instances are independent from the original alert instance. They have different labels, which means existing silences, mute timings, and notification policies applied to the original alert may not apply to them.

You can manage these alerts like regular ones by using their labels to apply actions such as adding a silence, routing via notification policies, and more.
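To see why a silence on the original alert may not cover its `DatasourceNoData` counterpart, consider a simplified equality-matcher model (a sketch; real silences also support regex and negative matchers, and the label values here are hypothetical):

```python
def silence_matches(matchers: dict, labels: dict) -> bool:
    """A silence applies only if every matcher matches the alert's labels."""
    return all(labels.get(name) == value for name, value in matchers.items())

original = {"alertname": "HighCPU", "instance": "web-1"}
no_data = {"alertname": "DatasourceNoData",
           "datasource_uid": "abc123",      # hypothetical data source UID
           "rulename": "HighCPU"}

silence = {"alertname": "HighCPU"}          # silences the original alert only
```

Here the silence matches the original instance but not the `DatasourceNoData` one, whose `alertname` differs; to silence the latter you would match on `alertname="DatasourceNoData"` or `rulename` instead.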

If the alert rule is configured to send notifications directly to a selected contact point (instead of using notification policies), the DatasourceNoData and DatasourceError alerts are also sent to that contact point. Any additional notification settings defined in the alert rule, such as muting or grouping, are preserved.

### Modify the No Data or Error state

These states are supported only for Grafana-managed alert rules.

In **Configure no data and error handling**, you can change the default behavior when the evaluation returns no data or an error. You can set the alert instance state to Alerting, Normal, Error, or Keep Last State.

{{< figure src="/media/docs/alerting/alert-rule-configure-no-data-and-error-v2.png" alt="A screenshot of the Configure no data and error handling option in Grafana Alerting." max-width="500px" >}}

{{< docs/shared lookup="alerts/table-configure-no-data-and-error.md" source="grafana" version="<GRAFANA_VERSION>" >}}

Note that when you configure the No Data or Error behavior to Alerting or Normal, Grafana attempts to keep a stable set of fields under notification `Values`. If your query returns no data or an error, Grafana reuses the latest known set of fields in `Values`, substituting `-1` for the measured value.
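That fallback can be pictured with a small helper (an illustrative sketch only; the function and the field name `B` are assumptions, not Grafana's internal representation):

```python
def notification_values(current: dict, last_known: dict) -> dict:
    """Return the fields to show under notification Values.

    If the evaluation produced values, use them; otherwise reuse the
    last known field names with -1 in place of each measured value.
    """
    if current:                              # evaluation returned data
        return current
    return {field: -1 for field in last_known}
```

So a rule whose last successful evaluation produced `{"B": 10.0}` would show `{"B": -1}` in notifications while the query returns no data.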

### Reduce No Data or Error alerts

To minimize the number of No Data or Error state alerts received, try the following.

  1. Use the Keep last state option. For more information, refer to the section below. This option allows the alert to retain its last known state when there is no data available, rather than switching to a No Data state.

  2. For No Data alerts, you can optimize your alert rule by expanding the time range of the query. However, if the time range is too wide, it affects the performance of the query and can lead to errors due to timeouts.

    To minimize timeouts that result in the Error state, reduce the time range so that each evaluation cycle requests less data.

  3. Change the default evaluation timeout. The default is 30 seconds. To increase it, open a support ticket from the Cloud Portal. Note that this should be a last resort, because it may affect the performance of all alert rules and cause missed evaluations if the timeout is too long.

### Keep last state

The "Keep Last State" option helps mitigate temporary data source issues, preventing alerts from unintentionally firing, resolving, and re-firing.

However, in situations where strict monitoring is critical, relying solely on the "Keep Last State" option may not be appropriate. Instead, consider using an alternative or implementing additional alert rules to ensure that issues with prolonged data source disruptions are detected.
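The effect of the no data handling options, including Keep Last State, can be summarized in one mapping (a hedged sketch; the option strings and function are illustrative, not a Grafana API):

```python
def handle_no_data(configured_behavior: str, last_state: str) -> str:
    """Map a No Data evaluation result to an instance state."""
    if configured_behavior == "KeepLast":
        return last_state                    # ride out a temporary data gap
    if configured_behavior in ("Alerting", "Normal", "Error"):
        return configured_behavior           # forced to the configured state
    return "NoData"                          # default: enter the No Data state
```

With Keep Last State, an instance that was Alerting stays Alerting through a data-source outage instead of flapping through No Data and back.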

## `grafana_state_reason` for troubleshooting

Occasionally, an alert instance may be in a state that isn't immediately clear to everyone. For example:

- Stale alert instances in the Alerting state transition to the Normal state when the series disappears.
- "No data" handling is configured to transition to a state other than No Data.
- "Error" handling is configured to transition to a state other than Error.
- When an alert rule is deleted, paused, or, in some cases, updated, its alert instances also transition to the Normal state.

In these situations, the evaluation state may differ from the alert state, and it might be necessary to understand the reason for being in that state when receiving the notification.

The `grafana_state_reason` annotation is included in these situations, providing the reason that explains why the alert instance transitioned to its current state. For example:

- Stale alert instances in the Normal state include the `grafana_state_reason` annotation with the value `MissingSeries`.
- If "no data" or "error" handling transitions to the Normal state, the `grafana_state_reason` annotation is included with the value `No Data` or `Error`, respectively.
- If the alert rule is deleted or paused, `grafana_state_reason` is set to `Paused` or `RuleDeleted`. For some updates, it is set to `Updated`.

## Alert rule state

The alert rule state is determined by the "worst case" state of the alert instances produced. For example, if one alert instance is Alerting, the alert rule state is Firing.

An alert rule can be in any of the following states:

| State | Description |
| ----- | ----------- |
| **Normal** | None of the alert instances returned by the evaluation engine is in a Pending or Alerting state. |
| **Pending** | At least one alert instance returned by the evaluation engine is Pending. |
| **Firing** | At least one alert instance returned by the evaluation engine is Alerting. |
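The "worst case" aggregation can be written as a one-line reduction over instance states (an illustrative sketch; the severity ordering is implied by the table above, and note that an Alerting instance maps to a Firing rule):

```python
SEVERITY = {"Normal": 0, "Pending": 1, "Alerting": 2}

def rule_state(instance_states) -> str:
    """Aggregate instance states into the rule state (worst case wins)."""
    worst = max((SEVERITY[s] for s in instance_states), default=0)
    return {0: "Normal", 1: "Pending", 2: "Firing"}[worst]
```

For example, a rule with instances in Normal, Pending, and Alerting states is Firing, because Alerting is the worst instance state present.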

## Alert rule health

An alert rule can have one of the following health statuses:

| State | Description |
| ----- | ----------- |
| **Ok** | No error occurred when evaluating the alert rule. |
| **Error** | An error occurred when evaluating the alert rule. |
| **No Data** | The absence of data in at least one time series returned during a rule evaluation. |
| **{status}, KeepLast** | The rule would have received another status but was configured to keep the last state of the alert rule. |