* Alerting docs: adds recording rule info
* ran prettier
* Updates with feedback from pepe and removes external reference
* couple of minor edits
* removes reference
* feedback from sonia
* adds links per gilles
* adds correct version link
In Grafana Cloud, you can only create data source-managed recording rules.
In Grafana OSS, you can create both Grafana-managed and data source-managed recording rules if you enable the `grafanaManagedRecordingRules` feature flag.
For more information on enabling feature toggles, refer to [Configure feature toggles](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/feature-toggles/).
{{< /admonition >}}
You can create and manage recording rules for an external Grafana Mimir or Loki instance.
Recording rules calculate frequently needed or computationally expensive expressions in advance and save the result as a new set of time series. Querying this new time series is faster, especially for dashboards, which query the same expression every time they refresh.
For more information on recording rules in Prometheus, refer to [Defining recording rules in Prometheus](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/).
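For example, a Prometheus recording rule that pre-computes a per-job request rate might look like the following sketch. The rule and metric names here are illustrative, not part of any default setup.

```yaml
groups:
  - name: example-recording-rules
    interval: 1m # how often the rules in this group are evaluated
    rules:
      # Save the per-job 5-minute request rate as a new time series.
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
```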
**Note:**
Recording rules are run as instant rules, which means that they run every 10s. To override this configuration, update the `min_interval` setting in your custom configuration file.
[min_interval](ref:configure-grafana) sets the minimum interval to enforce between rule evaluations. The default value is 10s, which equals the scheduler interval. Rules are adjusted if their interval is less than this value or is not a multiple of the scheduler interval (10s). Higher values can help with resource management, as fewer evaluations are scheduled over time.
This setting takes precedence over each individual rule frequency. If a rule frequency is lower than this value, this value is enforced.
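For example, a minimal configuration file sketch that raises the minimum evaluation interval:

```ini
[unified_alerting]
# Minimum interval to enforce between rule evaluations (default 10s).
min_interval = 1m
```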
## Configure data source-managed recording rules
To configure data source-managed recording rules, complete the following steps.
### Before you begin
- Verify that you have write permission to the Prometheus or Loki data source. Otherwise, you cannot create or update data source-managed recording rules.
- **Grafana Mimir** - use the `/prometheus` prefix. The Prometheus data source supports both Grafana Mimir and Prometheus, and Grafana expects that both the [Query API](/docs/mimir/latest/operators-guide/reference-http-api/#querier--query-frontend) and [Ruler API](/docs/mimir/latest/operators-guide/reference-http-api/#ruler) are under the same URL. You cannot provide a separate URL for the Ruler API.
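For example, a provisioned Prometheus data source that points at a Grafana Mimir instance might look like the following sketch. The host and port are placeholders for your own Mimir endpoint.

```yaml
apiVersion: 1

datasources:
  - name: Mimir
    type: prometheus
    access: proxy
    # The query and ruler APIs are both expected under the /prometheus prefix.
    url: http://mimir.example.com:9009/prometheus
```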
### Steps
1. Click **Alerts & IRM** -> **Alerting** -> **Alert rules**.
1. Select **Rule type** -> **Recording**.
1. Scroll to the **Data source-managed section** and click **+New recording rule**.
1. Enter the recording rule name.
The recording rule name must be a Prometheus metric name and contain no whitespace.
1. Define recording rule.
- Select your Loki or Prometheus data source.
- Enter a query.
1. Add namespace and group.
- From the **Namespace** dropdown, select an existing rule namespace or add a new one. Namespaces can contain one or more rule groups and only have an organizational purpose.
- From the **Group** dropdown, select an existing group within the selected namespace or add a new one. Newly created rules are appended to the end of the group. Rules within a group are run sequentially at a regular interval, with the same evaluation time. For an example of how a saved rule group looks in the ruler, see the sketch after these steps.
1. Add labels.
- Add custom labels by selecting existing key-value pairs from the dropdown, or add new labels by entering a new key or value.
1. Click **Save rule** to save the rule or **Save rule and exit** to save the rule and go back to the Alerting page.
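The namespace, group, and rules you define in these steps correspond to Prometheus-compatible rule definitions stored by the Mimir or Loki ruler. For reference, a group with a single Loki recording rule might look like the following sketch; the rule name, label, and LogQL query are illustrative only.

```yaml
# A namespace can contain one or more groups like this one.
groups:
  - name: nginx-recording-rules
    interval: 1m
    rules:
      - record: nginx:requests:rate1m
        expr: sum by (cluster) (rate({container="nginx"}[1m]))
        labels:
          team: platform
```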
## Configure Grafana-managed recording rules
To configure Grafana-managed recording rules, complete the following steps.
### Before you begin
If you are using Grafana OSS, enable the `grafanaManagedRecordingRules` feature flag.
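For example, if you run Grafana OSS with a custom configuration file, a minimal sketch for enabling the flag looks like this:

```ini
# grafana.ini or custom.ini
[feature_toggles]
# Comma-separated list of feature toggles to enable.
enable = grafanaManagedRecordingRules
```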
### Steps
1. Click **Alerts & IRM** -> **Alerting** -> **Alert rules**.
1. Select **Rule type** -> **Recording**.
1. Scroll to the **Grafana-managed section** and click **+New recording rule**.
#### Enter a recording rule and metric name
Enter names to identify your recording rule and metric. The metric name must be a Prometheus metric name and contain no whitespace.
For more information, refer to [Metrics and labels](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
#### Define recording rule
Define a query to get the data you want to measure and a condition that needs to be met before an alert rule fires.
1. Select a data source.
1. From the **Options** dropdown, specify a time range.
{{< admonition type="note" >}}
Grafana Alerting only supports fixed relative time ranges, for example, `now-24hr: now`.
It does not support absolute time ranges: `2021-12-02 00:00:00 to 2021-12-05 23:59:59` or semi-relative time ranges: `now/d to: now`.
{{< /admonition >}}
1. Add a query.
To add multiple queries, click **Add query**.
All alert rules are managed by Grafana by default. If you want to switch to a data source-managed alert rule, click **Switch to data source-managed alert rule**.
2. Add one or more [expressions].
a. For each expression, select either **Classic condition** to create a single alert rule, or choose from the **Math**, **Reduce**, and **Resample** options to generate a separate alert for each series.
{{% admonition type="note" %}}
When using Prometheus, you can use an instant vector and built-in functions, so you don't need to add additional expressions.
{{% /admonition %}}
b. Click **Preview** to verify that the expression is successful.
3. To add a recovery threshold, turn the **Custom recovery threshold** toggle on and fill in a value for when your alert rule should stop firing.
You can only add one recovery threshold in a query and it must be the alert condition.
4. Click **Set as alert condition** on the query or expression you want to set as your alert condition.
#### Set evaluation behavior
Use alert rule evaluation to determine how frequently an alert rule should be evaluated and how quickly it should change its state.
To do this, you need to make sure that your alert rule is in the right evaluation group and set a pending period time that works best for your use case.
1. Select a folder or click **+ New folder**.
1. Select an evaluation group or click **+ New evaluation group**.
If you are creating a new evaluation group, specify the interval for the group.
All rules within the same group are evaluated concurrently over the same time interval.
1. Enter a pending period.
The pending period is the period in which an alert rule can be in breach of the condition until it fires.
Once a condition is met, the alert goes into the **Pending** state. If the condition remains active for the duration specified, the alert transitions to the **Firing** state, else it reverts to the **Normal** state.
1. Turn on pause alert notifications, if required.
{{< admonition type="note" >}}
You can pause alert rule evaluation to prevent noisy alerting while tuning your alerts.
Pausing stops alert rule evaluation and doesn't create any alert instances.
This is different to mute timings, which stop notifications from being delivered, but still allows for alert rule evaluation and the creation of alert instances.
{{< /admonition >}}
#### Add labels
Add labels to your rule for searching, silencing, or routing to a notification policy.
1. Alert rules are evaluated by the Alert Rule Evaluation Engine.
1. Firing and resolved alert instances are forwarded to [handle their notifications](ref:notifications).
## Recording rules
A recording rule allows you to pre-compute frequently needed or computationally expensive expressions and save their result as a new set of time series. This is useful if you want to run alerts on aggregated data or if you have dashboards that query computationally expensive expressions repeatedly.
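For example, the following sketch pairs a recording rule with an alert rule that queries the pre-computed series instead of re-evaluating the expensive expression; the names and threshold are illustrative only.

```yaml
groups:
  - name: example
    rules:
      # Pre-compute the aggregation once per evaluation interval...
      - record: job:http_errors:rate5m
        expr: sum by (job) (rate(http_requests_total{status=~"5.."}[5m]))
      # ...and alert on the cheap, pre-computed series.
      - alert: HighErrorRate
        expr: job:http_errors:rate5m > 10
        for: 5m
```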
| Create alert rules<wbr/> based on data from any of the supported data sources | Yes | No. You can only create alert rules that are based on Prometheus-based data. |
| Mix and match data sources | Yes | No |
| Includes support for recording rules | Yes. Only for Grafana OSS users with the `grafanaManagedRecordingRules` feature flag enabled. | Yes |
| Add expressions to transform<wbr/> your data and set alert conditions | Yes | No |
| Use images in alert notifications | Yes | No |
| Organization | Organize and manage access with folders | Use namespaces |
{{< figure src="/media/docs/alerting/alerting-alertmanager-architecture.png" max-width="750px" alt="A diagram with the alert generator and alert manager architecture" >}}
**Grafana Alertmanager**
Grafana has its own built-in Alertmanager, referred to as "Grafana" in the user interface. It is the default Alertmanager and can only handle Grafana-managed alerts.
**Cloud Alertmanager**
Each Grafana Cloud instance comes preconfigured with an additional Alertmanager (`grafanacloud-STACK_NAME-ngalertmanager`) from the Mimir (Prometheus) instance running in the Grafana Cloud Stack. The Cloud Alertmanager can handle both Grafana-managed and data source-managed alerts.
**Other Alertmanagers**
Grafana Alerting also supports sending alerts to other alertmanagers, such as the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), which can handle Grafana-managed alerts and data source-managed alerts, such as alerts from Loki, Mimir, and Prometheus.
You can use a combination of Alertmanagers. The decision often depends on your alerting setup and where your alerts are being generated. Here are two examples of when you may want to add an Alertmanager and send your alerts there instead of using the built-in Grafana Alertmanager.
1. You may already have Alertmanagers on-premises in your own Cloud infrastructure that you still want to use because you have other alert generators, such as Prometheus.
2. You want to use both Prometheus on-premises and hosted Grafana to send alerts to the same Alertmanager that runs in your Cloud infrastructure.
## Add an Alertmanager
From Grafana, you can configure and administer your own Alertmanager to receive Grafana alerts.
{{% admonition type="note" %}}
Grafana Alerting does not support sending alerts to the AWS Managed Service for Prometheus due to the lack of sigv4 support in Prometheus.
{{< figure src="/media/docs/alerting/alerting-choose-alertmanager.png" max-width="750px" alt="A screenshot choosing an Alertmanager in the notification policies UI" >}}
Alertmanagers should now be configured as data sources using Grafana Configuration from the main Grafana navigation menu. This enables you to manage the contact points and notification policies of external alertmanagers from within Grafana and also encrypts HTTP basic authentication credentials.
To add an Alertmanager, complete the following steps.
1. Click **Connections** in the left-side menu.
2. On the Connections page, search for `Alertmanager`.
3. Click the **Create a new data source** button.
If you don't see this button, you may need to install the plugin, relaunch your Cloud instance, and then repeat steps 1 and 2.
4. Fill out the fields on the page, as required.
If you are provisioning your data source, set the flag `handleGrafanaManagedAlerts` in the `jsonData` field to `true` to send Grafana-managed alerts to this Alertmanager, as shown in the provisioning sketch after these steps.
**Note:** Prometheus, Grafana Mimir, and Cortex implementations of Alertmanager are supported. For Prometheus, contact points and notification policies are read-only in the Grafana Alerting UI.
5. Click **Save & test**.
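If you provision the Alertmanager data source instead of creating it in the UI, the configuration might look like the following sketch. The URL is a placeholder, and `implementation` should match your Alertmanager flavor.

```yaml
apiVersion: 1

datasources:
  - name: External Alertmanager
    type: alertmanager
    access: proxy
    url: http://alertmanager.example.com:9093
    jsonData:
      # Prometheus, Grafana Mimir, and Cortex implementations are supported.
      implementation: prometheus
      # Forward Grafana-managed alerts to this Alertmanager.
      handleGrafanaManagedAlerts: true
```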
{{< admonition type="note" >}}
On the Settings page, you can manage your Alertmanager configurations and configure where Grafana-managed alert instances are forwarded.