# This workflow depends on the ./actionlint-format.txt file. It is MIT licensed (thanks, rhysd!): https://github.com/rhysd/actionlint/blob/2ab3a12c7848f6c15faca9a92612ef4261d0e370/testdata/format/sarif_template.txt
# shellcheck disable=SC2102,SC2016,SC2125 # this is just a string. we _want_ all the bash features to be disabled.
extract_error_message='::error::Extraction failed. Make sure that you have no dynamic translation phrases, such as "t(`preferences.theme.{themeID}`, themeName)" and that no translation key is used twice. Search the output for '[warning]' to find the offending file.'
make i18n-extract || (echo "${extract_error_message}" && false)
- run: |
uncommited_error_message="::error::Translation extraction has not been committed. Please run 'make i18n-extract', commit the changes and push again."
# We need "write" permissions on the PR to be able to add a label.
pull_request_target: # zizmor:ignore[dangerous-triggers] We need this to have labelling permissions. There are no user inputs here, so we should be fine.
types:
- opened
permissions: {}
jobs:
label-if-external:
name: Add 'pr/external' label if the PR is external
@ -17,10 +17,10 @@ Grafana uses the [i18next](https://www.i18next.com/) framework for managing tran
### JSX
1. For JSX children, use the `<Trans />` component from `app/core/internationalization` with the `i18nKey`, ensuring it conforms to the following guidelines, with the default English translation. For example:
1. For JSX children, use the `<Trans />` component from `@grafana/i18n` with the `i18nKey`, ensuring it conforms to the following guidelines, with the default English translation. For example:
```jsx
import { Trans } from 'app/core/internationalization';
import { Trans } from '@grafana/i18n';
const SearchTitle = ({ term }) => <Trans i18nKey="search-page.results-title">Results for {{ term }}</Trans>;
```
@ -32,7 +32,7 @@ There may be cases where you need to interpolate variables inside other componen
If the nested component is displaying the variable only (e.g. to add emphasis or color), the best solution is to create a new wrapping component:
```jsx
import { Trans } from 'app/core/internationalization';
import { Trans } from '@grafana/i18n';
import { Text } from '@grafana/ui';
const SearchTerm = ({ term }) => <Text color="success">{term}</Text>;
However there are also cases where the nested component might be displaying additional text which also needs to be translated. In this case, you can use the `values` prop to explicitly pass variables to the translation, and reference them as templated strings in the markup. For example:
```jsx
import { Trans } from 'app/core/internationalization';
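// A sketch (not taken from the source) of the `values` prop: the variable is passed
// explicitly and referenced as a templated string inside the nested markup, so the
// nested text can be translated together with the rest of the phrase.
// The key and component names here are assumptions for illustration only.
const SearchTitle = ({ term }) => (
  <Trans i18nKey="search-page.results-title" values={{ searchTerm: term }}>
    Showing results for <em>the term {'{{ searchTerm }}'}</em>
  </Trans>
);
```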
Variables must be strings (or, must support calling `.toString()`, which we almost never want). For example:
```jsx
import { Trans } from 'app/core/internationalization';
import { Trans } from '@grafana/i18n';
// This will not work
const userName = <strong>user.name</strong>;
@ -183,7 +184,7 @@ const userName = user.name;
Both HTML tags and React components can be included in a phrase. The `Trans` component handles interpolation for its children.
```js
import { Trans } from "app/core/internationalization"
import { Trans } from "@grafana/i18n"
<Trans i18nKey="page.explainer">
Click <button>here</button> to <a href="https://grafana.com">learn more.</a>
@ -202,7 +203,7 @@ import { Trans } from "app/core/internationalization"
Plurals require special handling to make sure they can be translated according to the rules of each locale (which may be more complex than you think). Use either the `<Trans />` component or the `t` function, with the `count` prop to provide a singular form. For example:
```js
import { Trans } from 'app/core/internationalization';
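// A minimal sketch (not from the source) of plural handling: the `count` prop lets
// i18next select the correct plural form for the active locale. The key
// "inbox.message-count" and the surrounding component are assumptions for illustration.
const InboxHeading = ({ messages }) => {
  const count = messages.length;
  return (
    <Trans i18nKey="inbox.message-count" count={count}>
      You have {{ count }} unread messages
    </Trans>
  );
};
```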
@ -26,3 +26,7 @@ Update organization permissions when you want to enhance or restrict a user's ac
1. In the Organizations section, click **Change role** for the role you want to change
1. Select another role.
1. Click **Save**.
{{< admonition type="note" >}}
For the change to take effect and be reflected in the instance, the affected account must fully sign out and sign back in. Role assignment is evaluated at sign-in, so if a user has not signed back in after their role was adjusted, the instance continues to reflect their previous role.
Grafana provides an internal tool in Alerting which allows you to import Prometheus and Loki alert rules into Grafana-managed alert rules.
Grafana provides an internal tool in Alerting which allows you to import Mimir and Loki alert rules as Grafana-managed alert rules. To import Prometheus rules, use the [API](ref:import-ds-rules-api).
## Before you begin
@ -43,6 +48,10 @@ When data source-managed alert rules are converted to Grafana-managed alert rule
Plugin rules that have the label `__grafana_origin` are not included on alert rule imports.
{{< /admonition >}}
### Evaluation of imported rules
The imported rules are evaluated sequentially within each rule group, mirroring Prometheus behavior. Sequential evaluation applies to rules only while they remain read‑only (displayed as "Provisioned"). If you import rules with the `X-Disable-Provenance: true` header or via the regular provisioning API, they behave like regular Grafana alert rules and are evaluated in parallel.
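As a rough illustration only, an import that opts out of provenance (so the rules become editable and are evaluated in parallel) could look like the following. The endpoint path follows the Mimir-compatible convert API that Mimirtool targets later in this guide; the path, token variable, and file name are assumptions, and only the header name comes from this page.

```bash
# Hypothetical example: push a Prometheus-format rule group through the convert API.
# Verify the exact endpoint against your Grafana version before relying on it.
curl -X POST "<grafana_url>/api/convert/prometheus/config/v1/rules/my-namespace" \
  -H "Authorization: Bearer $GRAFANA_TOKEN" \
  -H "Content-Type: application/yaml" \
  -H "X-Disable-Provenance: true" \
  --data-binary @rules.yaml
```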
## Import alert rules
To convert data source-managed alert rules to Grafana-managed alert rules:
@ -53,11 +62,11 @@ To convert data source-managed alert rules to Grafana managed alerts:
The import alert rules page opens.
1. In the Data source dropdown, select the Loki or Prometheus data source of the alert rules.
1. In the Data source dropdown, select the Loki or Mimir data source of the alert rules.
1. In Additional settings, select a target folder or designate a new folder to import the rules into.
If you import the rules into an existing folder, don't chose a folder with existing alert rules, as they could get overwritten.
If you import the rules into an existing folder, don't choose a folder with existing alert rules, as they could get overwritten.
1. (Optional) Select a Namespace and/or Group to determine which rules are imported.
@ -48,7 +48,7 @@ When data source-managed alert rules are converted to Grafana-managed alert rule
- The newly created rules are given unique UIDs.
If you don't want the UID to be automatically generated, you can specify a specific UID with the `__grafana_alert_rule_uid__` label.
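For illustration, a Prometheus-format rule group that pins the UID might look like the following sketch. The rule name, expression, and UID value are made up; only the label name comes from this page.

```yaml
groups:
  - name: disk-alerts
    rules:
      - alert: DiskAlmostFull
        expr: disk_free_percent < 5
        for: 5m
        labels:
          severity: high
          # Tells the importer which UID to assign to the resulting Grafana-managed rule.
          __grafana_alert_rule_uid__: disk-almost-full-prod
```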
## Import alert rules with Mimirtool or coretextool
## Import alert rules with Mimirtool or cortextool
You can use either [Mimirtool](/docs/mimir/latest/manage/tools/mimirtool/) or [`cortextool`](https://github.com/grafana/cortex-tools) (version `0.11.3` or later) to import your alert rules. For more information about Mimirtool commands, see the [Mimirtool documentation](/docs/mimir/latest/manage/tools/mimirtool/#rules).
@ -58,13 +58,13 @@ To convert your alert rules, use the following command prompt substituting the y
For coretextool, you need to set `--backend=loki` to import Loki alert rules. For example:
For cortextool, you need to set `--backend=loki` to import Loki alert rules. For example:
```bash
CORTEX_ADDRESS=<grafanaurl>/api/convert/ CORTEX_AUTH_TOKEN=<yourtoken> CORTEX_TENANT_ID=1 cortextool rules --backend=loki list
```
Headers can be passed to the `mimirtool` or `coretextool` via `--extra-headers`.
Headers can be passed to the `mimirtool` or `cortextool` via `--extra-headers`.
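For example, here is a sketch of passing the `X-Disable-Provenance` header through cortextool. The `rules load` subcommand and the exact `--extra-headers` syntax are assumptions; check the tool's help output for your version.

```bash
# Illustration only: the environment variables mirror the earlier example,
# and the header key/value syntax may differ between tool versions.
CORTEX_ADDRESS=<grafanaurl>/api/convert/ CORTEX_AUTH_TOKEN=<yourtoken> CORTEX_TENANT_ID=1 \
  cortextool rules --backend=loki load rules.yaml --extra-headers="X-Disable-Provenance=true"
```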
For more information about the rule API endpoints and examples of Mimirtool commands, see the [Mimir HTTP API documentation](/docs/mimir/latest/references/http-api/#ruler-rules:~:text=config/v1/rules-,Get%20rule%20groups%20by%20namespace,DELETE%20%3Cprometheus%2Dhttp%2Dprefix%3E/config/v1/rules/%7Bnamespace%7D,-Delete%20tenant%20configuration).
@ -85,7 +85,11 @@ The rules are stored within the data source. In a distributed architecture, they
We recommend using [Grafana-managed alert rules](ref:configure-grafana-managed-rules) whenever possible and opting for data source-managed alert rules when scaling your alerting setup is necessary.
> Rules from a Prometheus data source appear in the **Data source-managed** section of the **Alert rules** page when [Manage alerts via Alerting UI](ref:shared-configure-prometheus-data-source-alerting) is enabled.
>
> However, Grafana can only create and edit data source-managed rules for Mimir and Loki, not for a Prometheus instance.
Note that if you delete an alert resource created in the UI, you can no longer retrieve it.
To backup and manage alert rules, you can [provision alerting resources](ref:shared-provision-alerting-resources) using options such as configuration files, Terraform, or the Alerting API.
Admin users can delete all of the alert rules within a folder. To delete all the alert rules in a folder, click the menu icon and select **Delete**. Then type "Delete" into the field and click **Delete** to confirm the bulk deletion.
@ -145,7 +145,13 @@ Verify that the data sources you plan to query in the alert rule are [compatible
Only users with **Edit** permissions for the folder storing the rules can edit or delete Grafana-managed alert rules. Only admins can restore deleted Grafana-managed alert rules.
Note that if you delete an alert resource created in the UI, you can no longer retrieve it.
To backup and manage alert rules, you can [provision alerting resources](ref:shared-provision-alerting-resources) using options such as configuration files, Terraform, or the Alerting API.
Use [annotations](ref:shared-annotations) to add information to alert messages that can help respond to the alert.
Annotations are included by default in notification messages, and can use text or [templates](ref:shared-alert-rule-template) to display dynamic data from queries.
Grafana provides several optional annotations.
1. Optional: Add a summary.
Short summary of what happened and why.
1. Optional: Add a description.
Description of what the alert rule does.
1. Optional: Add a Runbook URL.
Webpage where you keep your runbook for the alert.
1. Optional: Add a custom annotation.
Add any additional information that could help address the alert.
1. Optional: **Link dashboard and panel**.
[Link the alert rule to a panel](ref:shared-link-alert-rules-to-panels) to facilitate alert investigation.
Admin users can delete all of the alert rules within a folder. To delete all the alert rules in a folder, click the menu icon and select **Delete**. Then type "Delete" into the field and click **Delete** to confirm the bulk deletion.
## Permanently delete or restore deleted alert rules
To choose the remote-write Prometheus data source individually for each recording rule, also enable the `grafanaManagedRecordingRulesDatasources` feature flag.
When this flag is on, Grafana does not use the `url` defined in the configuration file, and the rule editor shows a dropdown to select the target data source. If a rule does not specify a target, for example it was created before the flag was enabled, Grafana writes to the data source identified by `default_datasource_uid` in the Grafana configuration:
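A sketch of what the relevant configuration might look like follows. The `[recording_rules]` section name and the `enabled` key are assumptions; only `default_datasource_uid` is named above.

```ini
[recording_rules]
enabled = true
# Used only when a rule does not specify its own target data source.
default_datasource_uid = my-prometheus-uid
```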
In this example, the value of the `severity` label is determined by the query value, and the possible options are `critical`, `high`, `medium`, or `low`. You can then use the `severity` label to change their notifications—for instance, sending `critical` alerts immediately or routing `low` alerts to a specific team for further review.
> **Note:** An alert instance is uniquely identified by its set of labels.
>
> - Avoid displaying query values in labels, as this can create numerous alert instances—one for each distinct label set. Instead, use annotations for query values.
> - If a templated label's value changes, it maps to a different alert instance, and the previous instance is considered [stale (MissingSeries)](ref:shared-stale-alert-instances) when its label value is no longer present.
@ -209,7 +214,12 @@ In this example, the `severity` label is determined by the query value:
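A sketch of what such a templated label value might look like, assuming the query's RefID is `A` (the thresholds are illustrative):

```
{{- if gt $values.A.Value 90.0 -}}critical
{{- else if gt $values.A.Value 80.0 -}}high
{{- else if gt $values.A.Value 60.0 -}}medium
{{- else -}}low
{{- end -}}
```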
You can then use the `severity` label to control how alerts are handled. For instance, you could send `critical` alerts immediately, while routing `low` severity alerts to a team for further investigation.
> **Note:** An alert instance is uniquely identified by its set of labels.
>
> - Avoid displaying query values in labels, as this can create numerous alert instances—one for each distinct label set. Instead, use annotations for query values.
> - If a templated label's value changes, it maps to a different alert instance, and the previous instance is considered [stale (MissingSeries)](ref:shared-stale-alert-instances) when its label value is no longer present.
@ -67,7 +67,18 @@ Silences stop notifications from being created for a specified time window but d
Silences are assigned to a [specific Alertmanager](ref:alertmanager-architecture) and only suppress notifications for alerts managed by that Alertmanager.
[Mute timings](ref:shared-mute-timings) and [silences](ref:shared-silences) are distinct methods to suppress notifications. They do not prevent alert rules from being evaluated or stop alert instances from appearing in the user interface; they only prevent notifications from being created.
The following table highlights the key differences between mute timings and silences.
@ -81,9 +92,60 @@ To add a silence, complete the following steps.
1. Optionally, in **Duration**, specify how long the silence is enforced. This automatically updates the end time in the **Silence start and end** field.
1. In the **Label** and **Value** fields, enter one or more _Matching Labels_ to determine which alerts the silence applies to.
Use [labels](ref:shared-alert-labels) and label matchers to link alert rules to [notification policies](ref:shared-notification-policies) and [silences](ref:shared-silences). This allows for a flexible way to manage your alert instances, specify which policy should handle them, and which alerts to silence.
A label matcher consists of three distinct parts: the **label**, the **value**, and the **operator**.
- The **Label** field is the name of the label to match. It must exactly match the label name.
- The **Value** field matches against the corresponding value for the specified **Label** name. How it matches depends on the **Operator** value.
- The **Operator** field is the operator to match against the label value. The available operators are:
| Operator | Description |
| -------- | ----------- |
| `=` | Select labels that are exactly equal to the value. |
| `!=` | Select labels that are not equal to the value. |
| `=~` | Select labels that regex-match the value. |
| `!~` | Select labels that do not regex-match the value. |
{{% admonition type="note" %}}
If you are using multiple label matchers, they are combined using the AND logical operator. This means that all matchers must match in order to link a rule to a policy.
{{% /admonition %}}
**Label matching example**
If you define the following set of labels for your alert:
`{ foo=bar, baz=qux, id=12 }`
then:
- A label matcher defined as `foo=bar` matches this alert rule.
- A label matcher defined as `foo!=bar` does _not_ match this alert rule.
- A label matcher defined as `id=~[0-9]+` matches this alert rule.
- A label matcher defined as `baz!~[0-9]+` matches this alert rule.
- Two label matchers defined as `foo=bar` and `id=~[0-9]+` match this alert rule.
**Exclude labels**
You can also write label matchers to exclude labels.
Here is an example that shows how to exclude the label `Team`. You can choose between any of the values below to exclude labels.
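For instance, either of the following matchers matches only alerts where the `Team` label is absent or empty (the values are assumed for illustration, following standard Prometheus-style matching where an absent label is treated as an empty string):

```
Team=""
Team!~.+
```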
@ -47,7 +47,18 @@ Use mute timings to temporarily pause notifications for a specific recurring per
Mute timings are assigned to a [specific Alertmanager](ref:alertmanager-architecture) and only suppress notifications for alerts managed by that Alertmanager.
[Mute timings](ref:shared-mute-timings) and [silences](ref:shared-silences) are distinct methods to suppress notifications. They do not prevent alert rules from being evaluated or stop alert instances from appearing in the user interface; they only prevent notifications from being created.
The following table highlights the key differences between mute timings and silences.
@ -50,7 +55,7 @@ Rules in different groups can be evaluated simultaneously.
- **Data source-managed** rules within the same group are evaluated sequentially, one after the other—this is useful to ensure that recording rules are evaluated before alert rules.
- **Grafana-managed rules [imported from data source-managed rules](ref:import-ds-rules)** are evaluated sequentially, like data source-managed rules.
- **Grafana-managed rules [imported from data source-managed rules](ref:import-ds-rules)** can be evaluated sequentially or in parallel, depending on how they are imported. For more information, refer to [Evaluation of imported rules](ref:evaluation-of-imported-ds-rules).
> Rules from a Prometheus data source appear in the **Data source-managed** section of the **Alert rules** page when [Manage alerts via Alerting UI](ref:shared-configure-prometheus-data-source-alerting) is enabled.
>
> However, Grafana can only create and edit data source-managed rules for Mimir and Loki, not for a Prometheus instance.
@ -53,13 +68,18 @@ Alerting periodically runs the queries and expressions, evaluating the condition
## Data source queries
Alerting queries are the same as the queries used in Grafana panels, but Grafana-managed alerts are limited to querying [data sources that have Alerting enabled](ref:data-source-alerting).
Alerting queries are the same as the queries used in Grafana panels, but Grafana-managed alerts are limited to querying [data sources that have Alerting enabled](/grafana/plugins/data-source-plugins/?features=alerting).
Queries in Grafana can be applied in various ways, depending on the data source and query language being used. Each data source’s query editor provides a customized user interface to help you write queries that take advantage of its unique capabilities. For details about query editors and syntax in Grafana, refer to [Query and transform data](ref:query-transform-data).
Queries in Grafana can be applied in various ways, depending on the data source and query language being used. Each data source’s query editor provides a customized user interface to help you write queries that take advantage of its unique capabilities.
Alerting can work with two types of data:
For more details about queries in Grafana, refer to [Query and transform data](ref:query-transform-data).
1. **Time series data** — The query returns a collection of time series, where each series must be [reduced](#reduce) to a single numeric value for evaluating the alert condition.
1. **Tabular data** — The query must return data in a table format with only one numeric column. Each row must have a value in that column, used to evaluate the alert condition. See a [tabular data example](ref:table-data-example).
{{< figure src="/media/docs/alerting/alerting-query-conditions-default-options.png" max-width="750px" caption="Define alert query and alert condition" >}}
Each time series or table row is evaluated as a separate [alert instance](ref:alert-instance).
{{< figure src="/media/docs/alerting/alerting-query-conditions-default-options.png" max-width="750px" caption="Alert query using the Prometheus query editor and alert condition" >}}
## Alert condition
@ -89,7 +109,7 @@ Aggregates time series values within the selected time range into a single numbe
Reduce takes one or more time series and transforms each series into a single number, which can then be compared in the alert condition.
The following aggregation functions are included: `Min`, `Max`, `Mean`, `Median`, `Sum`, `Count`, and `Last`.
The following aggregation functions are included: `Min`, `Max`, `Mean`, `Median`, `Sum`, `Count`, and `Last`. For more details, refer to the [Reduce documentation](ref:reduce-operation).
### Math
@ -114,6 +134,8 @@ You can also use a Math expression to define the **alert condition**. For exampl
Realigns a time range to a new set of timestamps. This is useful when comparing time series data from different data sources where the timestamps would otherwise not align.
For more details, refer to the [Resample documentation](ref:resample-operation).
### Threshold
Compares single numbers from previous queries or expressions (e.g., `$A`, `$B`) to a specified condition. It's often used to define the alert condition.
@ -192,63 +214,3 @@ The following aggregation functions are also available to further refine your qu
| `count_non_null` | Displays a count of values in the result set that aren't `null` |
{{</collapse>}}
## Alert on numeric data
With certain data sources, you can alert directly on numeric data that is not time series, or pass it into Server Side Expressions (SSE). This allows more of the processing to happen within the data source, which can improve efficiency and simplify alert rules.
When alerting on numeric data instead of time series data, there is no need to [reduce](#reduce) each labeled time series into a single number; labeled numbers are returned to Grafana directly.
#### Tabular Data
This feature is supported with backend data sources that query tabular data:
- SQL data sources such as MySQL, Postgres, MSSQL, and Oracle.
- The Azure Kusto based services: Azure Monitor (Logs), Azure Monitor (Azure Resource Graph), and Azure Data Explorer.
A query with Grafana-managed alerts or SSE is considered numeric with these data sources if:
- The "Format AS" option is set to "Table" in the data source query.
- The table response returned to Grafana from the query includes only one numeric (e.g. int, double, float) column, and optionally additional string columns.
If there are string columns, those columns become labels. The name of the column becomes the label name, and the value for each row becomes the value of the corresponding label. If multiple rows are returned, each row must be uniquely identified by its labels.
**Example**
For a MySQL table called "DiskSpace":
| Time | Host | Disk | PercentFree |
| ----------- | ---- | ---- | ----------- |
| 2021-June-7 | web1 | /etc | 3 |
| 2021-June-7 | web2 | /var | 4 |
| 2021-June-7 | web3 | /var | 8 |
You can query the data filtering on time, but without returning the time series to Grafana. For example, an alert that would trigger per Host and Disk when there is less than 5% free space:
```sql
SELECT
  Host,
  Disk,
  CASE WHEN PercentFree < 5.0 THEN PercentFree ELSE 0 END AS PercentFree
FROM (
  SELECT
    Host,
    Disk,
    AVG(PercentFree) AS PercentFree
  FROM DiskSpace
  WHERE $__timeFilter(Time)
  GROUP BY
    Host,
    Disk
) AS DiskUsage
```
This query returns the following Table response to Grafana:
| Host | Disk | PercentFree |
| ---- | ---- | ----------- |
| web1 | /etc | 3 |
| web2 | /var | 4 |
| web3 | /var | 0 |
When this query is used as the **condition** in an alert rule, any non-zero value is considered alerting. As a result, three alert instances are produced:
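Based on the table above, those instances would look roughly like this (the exact label formatting is illustrative):

```
{Host="web1", Disk="/etc"}  value=3  (alerting)
{Host="web2", Disk="/var"}  value=4  (alerting)
{Host="web3", Disk="/var"}  value=0  (normal)
```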
@ -71,7 +71,58 @@ Notification policies are _not_ a list, but rather are structured according to a
Each policy consists of a set of label matchers (0 or more) that specify which alerts they are or aren't interested in handling. A matching policy refers to a notification policy with label matchers that match the alert instance’s labels.
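As a rough sketch of that tree structure, a small policy tree could be provisioned like the following. The receiver names and matcher values are made up, and the file-provisioning format shown is an assumption rather than something this page prescribes.

```yaml
apiVersion: 1
policies:
  - orgId: 1
    receiver: default-contact-point # root policy: handles anything not matched below
    routes:
      - receiver: team-a-contact-point
        object_matchers:
          - ['team', '=', 'a'] # only alerts labelled team=a reach this child policy
```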
Use [labels](ref:shared-alert-labels) and label matchers to link alert rules to [notification policies](ref:shared-notification-policies) and [silences](ref:shared-silences). This allows for a flexible way to manage your alert instances, specify which policy should handle them, and which alerts to silence.
A label matcher consists of three distinct parts: the **label**, the **value**, and the **operator**.
- The **Label** field is the name of the label to match. It must exactly match the label name.
- The **Value** field matches against the corresponding value for the specified **Label** name. How it matches depends on the **Operator** value.
- The **Operator** field is the operator to match against the label value. The available operators are:
| Operator | Description |
| -------- | ----------- |
| `=` | Select labels that are exactly equal to the value. |
| `!=` | Select labels that are not equal to the value. |
| `=~` | Select labels that regex-match the value. |
| `!~` | Select labels that do not regex-match the value. |
{{% admonition type="note" %}}
If you are using multiple label matchers, they are combined using the AND logical operator. This means that all matchers must match in order to link a rule to a policy.
{{% /admonition %}}
**Label matching example**
If you define the following set of labels for your alert:
`{ foo=bar, baz=qux, id=12 }`
then:
- A label matcher defined as `foo=bar` matches this alert rule.
- A label matcher defined as `foo!=bar` does _not_ match this alert rule.
- A label matcher defined as `id=~[0-9]+` matches this alert rule.
- A label matcher defined as `baz!~[0-9]+` matches this alert rule.
- Two label matchers defined as `foo=bar` and `id=~[0-9]+` match this alert rule.
**Exclude labels**
You can also write label matchers to exclude labels.
Here is an example that shows how to exclude the label `Team`. You can choose between any of the values below to exclude labels.
{{< figure src="/media/docs/alerting/notification-routing.png" max-width="750px" caption="Matching alert instances with notification policies" alt="Example of a notification policy tree" >}}
description: This section provides practical examples of alert rules for common monitoring scenarios.
keywords:
- grafana
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Examples
title: Grafana Alerting Examples
weight: 1100
---
# Grafana Alerting Examples
This section provides practical examples of alert rules for common monitoring scenarios. Each example focuses on a specific use case, showing how to structure queries, evaluate conditions, and understand how Grafana generates alert instances.
# Example of multi-dimensional alerts on time series data
This example shows how a single alert rule can generate multiple alert instances — one for each label set (or time series). This is called **multi-dimensional alerting**: one alert rule, many alert instances.
In Prometheus, each unique combination of labels defines a distinct time series. Grafana Alerting uses the same model: each label set is evaluated independently, and a separate alert instance is created for each series.
This pattern is common in dynamic environments when monitoring a group of components like multiple CPUs, containers, or per-host availability. Instead of defining individual alert rules or aggregated alerts, you alert on _each dimension_ — so you can detect particular issues and include that level of detail in notifications.
For example, a query returns one series per CPU:
| `cpu` label value | CPU percent usage |
| :---------------- | :---------------- |
| cpu-0 | 95 |
| cpu-1 | 30 |
| cpu-2 | 85 |
With a threshold of `> 80`, this would trigger two alert instances: one for `cpu-0` and one for `cpu-2`.
## Examples overview
Imagine you want to trigger alerts when CPU usage goes above 80%, and you want to track each CPU core independently.
You can use a Prometheus query like this:
```
sum by(cpu) (
rate(node_cpu_seconds_total{mode!="idle"}[1m])
)
```
This query returns the active CPU usage rate per CPU core, averaged over the past minute.
| CPU core | Active usage rate |
| :------- | :---------------- |
| cpu-0 | 95 |
| cpu-1 | 30 |
| cpu-2 | 85 |
This produces one series for each existing CPU.
When Grafana Alerting evaluates the query, it creates an individual alert instance for each returned series.
| Alert instance | Value |
| :------------- | :---- |
| {cpu="cpu-0"} | 95 |
| {cpu="cpu-1"} | 30 |
| {cpu="cpu-2"} | 85 |
With a threshold condition like `$A > 80`, Grafana evaluates each instance separately and fires alerts only where the condition is met:
| Alert instance | Value | State |
| :------------- | :---- | :----- |
| {cpu="cpu-0"} | 95 | Firing |
| {cpu="cpu-1"} | 30 | Normal |
| {cpu="cpu-2"} | 85 | Firing |
Multi-dimensional alerts help you surface issues on individual components—problems that might be missed when alerting on aggregated data (like total CPU usage).
Each alert instance targets a specific component, identified by its unique label set. This makes alerts more specific and actionable. For example, you can set a [`summary` annotation](ref:annotations) in your alert rule that identifies the affected CPU:
```
High CPU usage on {{$labels.cpu}}
```
In the previous example, the two firing alert instances would display summaries indicating the affected CPUs:
- High CPU usage on `cpu-0`
- High CPU usage on `cpu-2`
## Try it with TestData
You can quickly experiment with multi-dimensional alerts using the [**TestData** data source](ref:testdata-data-source), which can generate multiple random time series.
1. Add the **TestData** data source through the **Connections** menu.
1. Go to **Alerting** and create an alert rule.
1. Select **TestData** as the data source.
1. Configure the TestData scenario:
1. Scenario: **Random Walk**
1. Series count: 3
1. Start value: 70, Max: 100
1. Labels: `cpu=cpu-$seriesIndex`
{{< figure src="/media/docs/alerting/testdata-random-series.png" max-width="750px" alt="Generating random time series data using the TestData data source" >}}
## Reduce time series data for comparison
The example returns three time series, as shown above, with values across the selected time range.
To alert on each series, you need to reduce the time series to a single value that the alert condition can evaluate and determine the alert instance state.
Grafana Alerting provides several ways to reduce time series data:
- **Data source query functions**. The earlier example used the Prometheus `sum` function to sum the rate results by `cpu`, producing a single value per CPU core.
- **Reduce expression**. In the query and condition section, Grafana provides the `Reduce` expression to aggregate time series data.
- In **Default mode**, the **When** input selects a reducer (like `last`, `mean`, or `min`), and the threshold compares that reduced value.
- In **Advanced mode**, you can add the [**Reduce** expression](ref:reduce-expression) (e.g., `last()`, `mean()`) before defining the threshold (alert condition).
For demo purposes, this example uses the **Advanced mode** with a **Reduce** expression:
1. Toggle **Advanced mode** in the top right section of the query panel to enable adding additional expressions.
1. Add the **Reduce** expression using a function like `mean()` to reduce each time series to a single value.
1. Define the alert condition using a **Threshold** like `$reducer > 80`.
1. Click **Preview** to evaluate the alert rule.
{{< figure src="/media/docs/alerting/using-expressions-with-multiple-series.png" max-width="750px" caption="The alert condition evaluates the reduced value for each alert instance and shows whether each instance is Firing or Normal." alt="Alert preview using a Reduce expression and a threshold condition" >}}
## Learn more
This example shows how Grafana Alerting implements a multi-dimensional alerting model: one rule, many alert instances. It also shows why reducing time series data to a single value is required for evaluation.
For additional learning resources, check out:
- [Get started with Grafana Alerting – Part 2](https://grafana.com/tutorials/alerting-get-started-pt2/)
- [Example of alerting on tabular data](ref:table-data-example)