mirror of https://github.com/grafana/grafana
Remove legacy alerting docs (#84190)
parent
da327ce807
commit
e33e219a9a
@ -1,54 +0,0 @@ |
||||
--- |
||||
_build: |
||||
list: false |
||||
aliases: |
||||
- ./unified-alerting/difference-old-new/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/difference-old-new/ |
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/difference-old-new/ |
||||
description: Learn about how Grafana Alerting compares to legacy alerting |
||||
keywords: |
||||
- grafana |
||||
- alerting |
||||
- guide |
||||
labels: |
||||
products: |
||||
- cloud |
||||
- enterprise |
||||
- oss |
||||
title: Grafana Alerting vs Legacy dashboard alerting |
||||
weight: 108 |
||||
--- |
||||
|
||||
# Grafana Alerting vs Legacy dashboard alerting |
||||
|
||||
Introduced in Grafana 8.0, and the only alerting system since Grafana 10.0, Grafana Alerting has several enhancements over legacy dashboard alerting.
||||
|
||||
## Multi-dimensional alerting |
||||
|
||||
You can now create alerts that give you system-wide visibility with a single alerting rule. Generate multiple alert instances from a single alert rule. For example, you can create a rule to monitor the disk usage of multiple mount points on a single host. The evaluation engine returns multiple time series from a single query, with each time series identified by its label set. |
||||
|
||||
## Create alerts outside of Dashboards |
||||
|
||||
Unlike legacy dashboard alerts, Grafana alerts allow you to create queries and expressions that combine data from multiple sources in unique ways. You can still link dashboards and panels to alerting rules using their ID and quickly troubleshoot the system under observation. |
||||
|
||||
Since unified alerts are no longer directly tied to panel queries, they do not include images or query values in the notification email. You can use customized notification templates to view query values. |
||||
|
||||
## Create Loki and Grafana Mimir alerting rules |
||||
|
||||
In Grafana Alerting, you can manage Loki and Grafana Mimir alerting rules using the same UI and API as your Grafana managed alerts. |
||||
|
||||
## View and search for alerts from Prometheus compatible data sources |
||||
|
||||
Alerts for Prometheus compatible data sources are now listed under the Grafana alerts section. You can search for labels across multiple data sources to quickly find relevant alerts. |
||||
|
||||
## Special alerts for alert state NoData and Error |
||||
|
||||
Grafana Alerting introduces a new concept of alert states. When the evaluation of an alert rule produces the state NoData or Error, Grafana Alerting generates special alerts that have the following labels:
||||
|
||||
- `alertname` with the value `DatasourceNoData` or `DatasourceError`, depending on the state.
- `rulename` with the name of the alert rule the special alert belongs to.
- `datasource_uid` with the UID of the data source that caused the state.
- all labels and annotations of the original alert rule.
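For example (hypothetical names), a rule called `CPU usage` backed by a data source with UID `a1b2c3` that returns no data would produce an alert instance with `alertname=DatasourceNoData`, `rulename=CPU usage`, and `datasource_uid=a1b2c3`, along with the rule's own labels and annotations.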
||||
|
||||
You can handle these alerts the same way as regular alerts: add a silence, route them to a contact point, and so on.
||||
|
||||
> **Note:** If the rule uses multiple data sources and one or more of them return no data, a special alert is created for each data source that caused the alert state.
||||
@ -1,58 +0,0 @@ |
||||
--- |
||||
aliases: |
||||
- alerting/legacy-alerting-deprecation/ |
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/set-up/migrating-alerts/legacy-alerting-deprecation/ |
||||
description: Learn about legacy alerting deprecation |
||||
keywords: |
||||
- grafana |
||||
- alerting |
||||
labels: |
||||
products: |
||||
- enterprise |
||||
- oss |
||||
title: Legacy alerting deprecation |
||||
weight: 109 |
||||
--- |
||||
|
||||
# Legacy alerting deprecation |
||||
|
||||
Starting with Grafana v9.0.0, legacy alerting is deprecated, meaning that it is no longer actively maintained or supported by Grafana. As of Grafana v10.0.0, we do not contribute or accept external contributions to the codebase apart from CVE fixes. |
||||
|
||||
Legacy alerting refers to the old alerting system that was used prior to the introduction of Grafana Alerting, the new alerting system in Grafana.
||||
|
||||
The decision to deprecate legacy alerting was made to encourage users to migrate to the new alerting system, which offers a more powerful and flexible alerting experience based on Prometheus Alertmanager. |
||||
|
||||
Users who are still using legacy alerting are encouraged to migrate their alerts to the new system as soon as possible to ensure that they continue to receive new features, bug fixes, and support. |
||||
|
||||
However, we will still patch CVEs until legacy alerting is completely removed in Grafana 11, honoring our commitment to building and distributing secure software.
||||
|
||||
We have provided [instructions][migrating-alerts] on how to migrate to the new alerting system, making the process as easy as possible for users. |
||||
|
||||
## Why are we deprecating legacy alerting? |
||||
|
||||
The new Grafana alerting system is more powerful and flexible than the legacy alerting feature. |
||||
|
||||
The new system is based on Prometheus Alertmanager, which offers a more comprehensive set of features for defining and managing alerts. With the new alerting system, users can create alerts based on complex queries, configure alert notifications via various integrations, and set up sophisticated alerting rules with support for conditional expressions, aggregation, and grouping. |
||||
|
||||
Overall, the new alerting system in Grafana is a major improvement over the legacy alerting feature, providing users with a more powerful and flexible alerting experience. |
||||
|
||||
Additionally, legacy alerting still requires Angular to function and we are [planning to remove support for it][angular_deprecation] in Grafana 11. |
||||
|
||||
## When will we remove legacy alerting completely? |
||||
|
||||
Legacy alerting will be removed from the codebase in Grafana 11, following the same timeline as the [Angular deprecation][angular_deprecation].
||||
|
||||
## How do I migrate to the new Grafana alerting? |
||||
|
||||
Refer to our [upgrade instructions][migrating-alerts]. |
||||
|
||||
### Useful links |
||||
|
||||
- [Upgrade Alerting][migrating-alerts] |
||||
- [Angular support deprecation][angular_deprecation] |
||||
|
||||
{{% docs/reference %}} |
||||
[angular_deprecation]: "/docs/ -> /docs/grafana/<GRAFANA_VERSION>/developers/angular_deprecation" |
||||
|
||||
[migrating-alerts]: "/docs/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/migrating-alerts" |
||||
{{% /docs/reference %}} |
||||
@ -1,184 +0,0 @@ |
||||
--- |
||||
aliases: |
||||
- ../../http_api/alerting/ |
||||
canonical: /docs/grafana/latest/developers/http_api/alerting/ |
||||
description: Grafana Alerts HTTP API |
||||
keywords: |
||||
- grafana |
||||
- http |
||||
- documentation |
||||
- api |
||||
- alerting |
||||
- alerts |
||||
labels: |
||||
products: |
||||
- enterprise |
||||
- oss |
||||
title: Legacy Alerting API |
||||
--- |
||||
|
||||
# Legacy Alerting API |
||||
|
||||
{{% admonition type="note" %}} |
||||
Starting with v9.0, the Legacy Alerting HTTP API is deprecated. It will be removed in a future release. |
||||
{{% /admonition %}} |
||||
|
||||
This topic is relevant for the [legacy dashboard alerts](/docs/grafana/v8.5/alerting/old-alerting/) only. |
||||
|
||||
If you are using Grafana Alerting, refer to the [Alerting provisioning API]({{< relref "./alerting_provisioning" >}}).
||||
|
||||
You can find Grafana Alerting API specification details [here](https://editor.swagger.io/?url=https://raw.githubusercontent.com/grafana/grafana/main/pkg/services/ngalert/api/tooling/post.json). Also, refer to [Grafana Alerting alerts documentation][] for details on how to create and manage new alerts. |
||||
|
||||
You can use the Alerting API to get information about legacy dashboard alerts and their states, but this API cannot be used to modify the alerts.
To create new alerts or modify existing ones, you need to update the dashboard JSON that contains the alerts.
||||
|
||||
## Get alerts |
||||
|
||||
`GET /api/alerts/` |
||||
|
||||
**Example Request**: |
||||
|
||||
```http |
||||
GET /api/alerts HTTP/1.1 |
||||
Accept: application/json |
||||
Content-Type: application/json |
||||
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk |
||||
``` |
||||
|
||||
Querystring Parameters: |
||||
|
||||
These parameters are used as querystring parameters. For example: |
||||
|
||||
`/api/alerts?dashboardId=1` |
||||
|
||||
- **dashboardId** – Limit response to alerts in specified dashboard(s). You can specify multiple dashboards, e.g. dashboardId=23&dashboardId=35. |
||||
- **panelId** – Limit response to alerts for a specified panel on a dashboard.
||||
- **query** - Limit response to alerts having a name like this value. |
||||
- **state** - Return alerts with one or more of the following alert states: `ALL`, `no_data`, `paused`, `alerting`, `ok`, `pending`. To specify multiple states, use the following format: `?state=paused&state=alerting`.
||||
- **limit** - Limit response to _X_ number of alerts. |
||||
- **folderId** – Limit response to alerts of dashboards in specified folder(s). You can specify multiple folders, e.g. folderId=23&folderId=35. |
||||
- **dashboardQuery** - Limit response to alerts having a dashboard name like this value. |
||||
- **dashboardTag** - Limit response to alerts of dashboards with specified tags. To do an "AND" filtering with multiple tags, specify the tags parameter multiple times e.g. dashboardTag=tag1&dashboardTag=tag2. |
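For instance, a request that combines several of these filters (the dashboard IDs and token below are placeholders) could look like the following; the response has the same shape as the example response shown next:

```http
GET /api/alerts?dashboardId=23&dashboardId=35&state=alerting&state=pending&limit=10 HTTP/1.1
Accept: application/json
Content-Type: application/json
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk
```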
||||
|
||||
**Example Response**: |
||||
|
||||
```http |
||||
HTTP/1.1 200 |
||||
Content-Type: application/json |
||||
|
||||
[
  {
    "id": 1,
    "dashboardId": 1,
    "dashboardUId": "ABcdEFghij",
    "dashboardSlug": "sensors",
    "panelId": 1,
    "name": "fire place sensor",
    "state": "alerting",
    "newStateDate": "2018-05-14T05:55:20+02:00",
    "evalDate": "0001-01-01T00:00:00Z",
    "evalData": null,
    "executionError": "",
    "url": "http://grafana.com/dashboard/db/sensors"
  }
]
||||
``` |
||||
|
||||
## Get alert by id |
||||
|
||||
`GET /api/alerts/:id` |
||||
|
||||
**Example Request**: |
||||
|
||||
```http |
||||
GET /api/alerts/1 HTTP/1.1 |
||||
Accept: application/json |
||||
Content-Type: application/json |
||||
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk |
||||
``` |
||||
|
||||
**Example Response**: |
||||
|
||||
```http |
||||
HTTP/1.1 200 |
||||
Content-Type: application/json |
||||
|
||||
{
  "id": 1,
  "dashboardId": 1,
  "dashboardUId": "ABcdEFghij",
  "dashboardSlug": "sensors",
  "panelId": 1,
  "name": "fire place sensor",
  "state": "alerting",
  "message": "Someone is trying to break in through the fire place",
  "newStateDate": "2018-05-14T05:55:20+02:00",
  "evalDate": "0001-01-01T00:00:00Z",
  "evalData": {
    "evalMatches": [
      {
        "metric": "movement",
        "tags": {
          "name": "fireplace_chimney"
        },
        "value": 98.765
      }
    ]
  },
  "executionError": "",
  "url": "http://grafana.com/dashboard/db/sensors"
}
||||
``` |
||||
|
||||
**Important Note**: |
||||
"evalMatches" data is cached in the db when and only when the state of the alert changes |
||||
(e.g. transitioning from "ok" to "alerting" state). |
||||
|
||||
If data from one server triggers the alert first and, before that server is seen leaving alerting state, |
||||
a second server also enters a state that would trigger the alert, the second server will not be visible in "evalMatches" data. |
||||
|
||||
## Pause alert by id |
||||
|
||||
`POST /api/alerts/:id/pause` |
||||
|
||||
**Example Request**: |
||||
|
||||
```http |
||||
POST /api/alerts/1/pause HTTP/1.1 |
||||
Accept: application/json |
||||
Content-Type: application/json |
||||
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk |
||||
|
||||
{ |
||||
"paused": true |
||||
} |
||||
``` |
||||
|
||||
The `:id` path parameter is the id of the alert to be paused or unpaused.
||||
|
||||
JSON Body Schema: |
||||
|
||||
- **paused** – Can be `true` or `false`. True to pause an alert. False to unpause an alert. |
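For example, to resume (unpause) the same alert, send the same request with `paused` set to `false` (token is a placeholder):

```http
POST /api/alerts/1/pause HTTP/1.1
Accept: application/json
Content-Type: application/json
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk

{
  "paused": false
}
```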
||||
|
||||
**Example Response**: |
||||
|
||||
```http |
||||
HTTP/1.1 200 |
||||
Content-Type: application/json |
||||
|
||||
{ |
||||
"alertId": 1, |
||||
"state": "Paused", |
||||
"message": "alert paused" |
||||
} |
||||
``` |
||||
|
||||
## Pause all alerts |
||||
|
||||
See [Admin API][]. |
||||
|
||||
{{% docs/reference %}} |
||||
[Admin API]: "/docs/grafana/ -> /docs/grafana/<GRAFANA VERSION>/developers/http_api/admin#pause-all-alerts" |
||||
||||
|
||||
[Grafana Alerting alerts documentation]: "/docs/grafana/ -> /docs/grafana/<GRAFANA VERSION>/alerting" |
||||
[Grafana Alerting alerts documentation]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA VERSION>/alerting" |
||||
{{% /docs/reference %}} |
||||
@ -1,424 +0,0 @@ |
||||
--- |
||||
aliases: |
||||
- ../../http_api/alerting_notification_channels/ |
||||
canonical: /docs/grafana/latest/developers/http_api/alerting_notification_channels/ |
||||
description: Grafana Alerting Notification Channel HTTP API |
||||
keywords: |
||||
- grafana |
||||
- http |
||||
- documentation |
||||
- api |
||||
- alerting |
||||
- alerts |
||||
- notifications |
||||
labels: |
||||
products: |
||||
- enterprise |
||||
- oss |
||||
title: Legacy Alerting Notification Channels API |
||||
--- |
||||
|
||||
# Legacy Alerting Notification Channels API |
||||
|
||||
{{% admonition type="note" %}} |
||||
Starting with v9.0, the Legacy Alerting Notification Channels API is deprecated. It will be removed in a future release. |
||||
{{% /admonition %}} |
||||
|
||||
This page documents the Alerting Notification Channels API. |
||||
|
||||
## Identifier (id) vs unique identifier (uid) |
||||
|
||||
The identifier (id) of a notification channel is an auto-incrementing numeric value and is only unique per Grafana install. |
||||
|
||||
The unique identifier (uid) of a notification channel can be used to uniquely identify a notification channel across multiple Grafana installs. It's automatically generated if not provided when creating a notification channel. The uid allows consistent URLs for accessing notification channels and for syncing notification channels between multiple Grafana installations; refer to [alert notification channel provisioning]({{< relref "/docs/grafana/latest/administration/provisioning#alert-notification-channels" >}}).
||||
|
||||
The uid can have a maximum length of 40 characters. |
||||
|
||||
## Get all notification channels |
||||
|
||||
Returns all notification channels that the authenticated user has permission to view. |
||||
|
||||
`GET /api/alert-notifications` |
||||
|
||||
**Example request**: |
||||
|
||||
```http |
||||
GET /api/alert-notifications HTTP/1.1 |
||||
Accept: application/json |
||||
Content-Type: application/json |
||||
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk |
||||
``` |
||||
|
||||
**Example response**: |
||||
|
||||
```http |
||||
HTTP/1.1 200 |
||||
Content-Type: application/json |
||||
|
||||
[ |
||||
{ |
||||
"id": 1, |
||||
"uid": "team-a-email-notifier", |
||||
"name": "Team A", |
||||
"type": "email", |
||||
"isDefault": false, |
||||
"sendReminder": false, |
||||
"disableResolveMessage": false, |
||||
"settings": { |
||||
"addresses": "dev@grafana.com" |
||||
}, |
||||
"created": "2018-04-23T14:44:09+02:00", |
||||
"updated": "2018-08-20T15:47:49+02:00" |
||||
} |
||||
] |
||||
|
||||
``` |
||||
|
||||
## Get all notification channels (lookup) |
||||
|
||||
Returns all notification channels, but with less detailed information. Accessible to any authenticated user, it is mainly used to populate the list of notification channels in the Grafana UI when configuring an alert rule.
||||
|
||||
`GET /api/alert-notifications/lookup` |
||||
|
||||
**Example request**: |
||||
|
||||
```http |
||||
GET /api/alert-notifications/lookup HTTP/1.1 |
||||
Accept: application/json |
||||
Content-Type: application/json |
||||
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk |
||||
``` |
||||
|
||||
**Example response**: |
||||
|
||||
```http |
||||
HTTP/1.1 200 |
||||
Content-Type: application/json |
||||
|
||||
[ |
||||
{ |
||||
"id": 1, |
||||
"uid": "000000001", |
||||
"name": "Test", |
||||
"type": "email", |
||||
"isDefault": false |
||||
}, |
||||
{ |
||||
"id": 2, |
||||
"uid": "000000002", |
||||
"name": "Slack", |
||||
"type": "slack", |
||||
"isDefault": false |
||||
} |
||||
] |
||||
|
||||
``` |
||||
|
||||
## Get notification channel by uid |
||||
|
||||
`GET /api/alert-notifications/uid/:uid` |
||||
|
||||
Returns the notification channel given the notification channel uid. |
||||
|
||||
**Example request**: |
||||
|
||||
```http |
||||
GET /api/alert-notifications/uid/team-a-email-notifier HTTP/1.1 |
||||
Accept: application/json |
||||
Content-Type: application/json |
||||
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk |
||||
``` |
||||
|
||||
**Example response**: |
||||
|
||||
```http |
||||
HTTP/1.1 200 |
||||
Content-Type: application/json |
||||
|
||||
{ |
||||
"id": 1, |
||||
"uid": "team-a-email-notifier", |
||||
"name": "Team A", |
||||
"type": "email", |
||||
"isDefault": false, |
||||
"sendReminder": false, |
||||
"disableResolveMessage": false, |
||||
"settings": { |
||||
"addresses": "dev@grafana.com" |
||||
}, |
||||
"created": "2018-04-23T14:44:09+02:00", |
||||
"updated": "2018-08-20T15:47:49+02:00" |
||||
} |
||||
``` |
||||
|
||||
## Get notification channel by id |
||||
|
||||
`GET /api/alert-notifications/:id` |
||||
|
||||
Returns the notification channel given the notification channel id. |
||||
|
||||
**Example request**: |
||||
|
||||
```http |
||||
GET /api/alert-notifications/1 HTTP/1.1 |
||||
Accept: application/json |
||||
Content-Type: application/json |
||||
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk |
||||
``` |
||||
|
||||
**Example response**: |
||||
|
||||
```http |
||||
HTTP/1.1 200 |
||||
Content-Type: application/json |
||||
|
||||
{ |
||||
"id": 1, |
||||
"uid": "team-a-email-notifier", |
||||
"name": "Team A", |
||||
"type": "email", |
||||
"isDefault": false, |
||||
"sendReminder": false, |
||||
"disableResolveMessage": false, |
||||
"settings": { |
||||
"addresses": "dev@grafana.com" |
||||
}, |
||||
"created": "2018-04-23T14:44:09+02:00", |
||||
"updated": "2018-08-20T15:47:49+02:00" |
||||
} |
||||
``` |
||||
|
||||
## Create notification channel |
||||
|
||||
You can find the full list of [supported notifiers](/docs/grafana/v8.5/alerting/old-alerting/notifications/) on the alert notifiers page. |
||||
|
||||
`POST /api/alert-notifications` |
||||
|
||||
**Example request**: |
||||
|
||||
```http |
||||
POST /api/alert-notifications HTTP/1.1 |
||||
Accept: application/json |
||||
Content-Type: application/json |
||||
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk |
||||
|
||||
{ |
||||
"uid": "new-alert-notification", // optional |
||||
"name": "new alert notification", //Required |
||||
"type": "email", //Required |
||||
"isDefault": false, |
||||
"sendReminder": false, |
||||
"settings": { |
||||
"addresses": "dev@grafana.com" |
||||
} |
||||
} |
||||
``` |
||||
|
||||
**Example response**: |
||||
|
||||
```http |
||||
HTTP/1.1 200 |
||||
Content-Type: application/json |
||||
|
||||
{ |
||||
"id": 1, |
||||
"uid": "new-alert-notification", |
||||
"name": "new alert notification", |
||||
"type": "email", |
||||
"isDefault": false, |
||||
"sendReminder": false, |
||||
"settings": { |
||||
"addresses": "dev@grafana.com" |
||||
}, |
||||
"created": "2018-04-23T14:44:09+02:00", |
||||
"updated": "2018-08-20T15:47:49+02:00" |
||||
} |
||||
``` |
||||
|
||||
## Update notification channel by uid |
||||
|
||||
`PUT /api/alert-notifications/uid/:uid` |
||||
|
||||
Updates an existing notification channel identified by uid. |
||||
|
||||
**Example request**: |
||||
|
||||
```http |
||||
PUT /api/alert-notifications/uid/cIBgcSjkk HTTP/1.1 |
||||
Accept: application/json |
||||
Content-Type: application/json |
||||
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk |
||||
|
||||
{ |
||||
"uid": "new-alert-notification", // optional |
||||
"name": "new alert notification", //Required |
||||
"type": "email", //Required |
||||
"isDefault": false, |
||||
"sendReminder": true, |
||||
"frequency": "15m", |
||||
"settings": { |
||||
"addresses": "dev@grafana.com" |
||||
} |
||||
} |
||||
``` |
||||
|
||||
**Example response**: |
||||
|
||||
```http |
||||
HTTP/1.1 200 |
||||
Content-Type: application/json |
||||
|
||||
{ |
||||
"id": 1, |
||||
"uid": "new-alert-notification", |
||||
"name": "new alert notification", |
||||
"type": "email", |
||||
"isDefault": false, |
||||
"sendReminder": true, |
||||
"frequency": "15m", |
||||
"settings": { |
||||
"addresses": "dev@grafana.com" |
||||
}, |
||||
"created": "2017-01-01 12:34", |
||||
"updated": "2017-01-01 12:34" |
||||
} |
||||
``` |
||||
|
||||
## Update notification channel by id |
||||
|
||||
`PUT /api/alert-notifications/:id` |
||||
|
||||
Updates an existing notification channel identified by id. |
||||
|
||||
**Example request**: |
||||
|
||||
```http |
||||
PUT /api/alert-notifications/1 HTTP/1.1 |
||||
Accept: application/json |
||||
Content-Type: application/json |
||||
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk |
||||
|
||||
{ |
||||
"id": 1, |
||||
"uid": "new-alert-notification", // optional |
||||
"name": "new alert notification", //Required |
||||
"type": "email", //Required |
||||
"isDefault": false, |
||||
"sendReminder": true, |
||||
"frequency": "15m", |
||||
"settings": { |
||||
"addresses": "dev@grafana.com" |
||||
} |
||||
} |
||||
``` |
||||
|
||||
**Example response**: |
||||
|
||||
```http |
||||
HTTP/1.1 200 |
||||
Content-Type: application/json |
||||
|
||||
{ |
||||
"id": 1, |
||||
"uid": "new-alert-notification", |
||||
"name": "new alert notification", |
||||
"type": "email", |
||||
"isDefault": false, |
||||
"sendReminder": true, |
||||
"frequency": "15m", |
||||
"settings": { |
||||
"addresses": "dev@grafana.com" |
||||
}, |
||||
"created": "2017-01-01 12:34", |
||||
"updated": "2017-01-01 12:34" |
||||
} |
||||
``` |
||||
|
||||
## Delete alert notification by uid |
||||
|
||||
`DELETE /api/alert-notifications/uid/:uid` |
||||
|
||||
Deletes an existing notification channel identified by uid. |
||||
|
||||
**Example request**: |
||||
|
||||
```http |
||||
DELETE /api/alert-notifications/uid/team-a-email-notifier HTTP/1.1 |
||||
Accept: application/json |
||||
Content-Type: application/json |
||||
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk |
||||
``` |
||||
|
||||
**Example response**: |
||||
|
||||
```http |
||||
HTTP/1.1 200 |
||||
Content-Type: application/json |
||||
|
||||
{ |
||||
"message": "Notification deleted" |
||||
} |
||||
``` |
||||
|
||||
## Delete alert notification by id |
||||
|
||||
`DELETE /api/alert-notifications/:id` |
||||
|
||||
Deletes an existing notification channel identified by id. |
||||
|
||||
**Example request**: |
||||
|
||||
```http |
||||
DELETE /api/alert-notifications/1 HTTP/1.1 |
||||
Accept: application/json |
||||
Content-Type: application/json |
||||
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk |
||||
``` |
||||
|
||||
**Example response**: |
||||
|
||||
```http |
||||
HTTP/1.1 200 |
||||
Content-Type: application/json |
||||
|
||||
{ |
||||
"message": "Notification deleted" |
||||
} |
||||
``` |
||||
|
||||
## Test notification channel |
||||
|
||||
Sends a test notification message for the given notification channel type and settings. |
||||
You can find the full list of [supported notifiers](/alerting/notifications/#all-supported-notifier) at the alert notifiers page. |
||||
|
||||
`POST /api/alert-notifications/test` |
||||
|
||||
**Example request**: |
||||
|
||||
```http |
||||
POST /api/alert-notifications/test HTTP/1.1 |
||||
Accept: application/json |
||||
Content-Type: application/json |
||||
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk |
||||
|
||||
{ |
||||
"type": "email", |
||||
"settings": { |
||||
"addresses": "dev@grafana.com" |
||||
} |
||||
} |
||||
``` |
||||
|
||||
**Example response**: |
||||
|
||||
```http |
||||
HTTP/1.1 200 |
||||
Content-Type: application/json |
||||
|
||||
{ |
||||
"message": "Test notification sent" |
||||
} |
||||
``` |
||||
@ -1,33 +0,0 @@ |
||||
--- |
||||
draft: true |
||||
labels: |
||||
products: |
||||
- enterprise |
||||
- oss |
||||
title: Legacy Grafana alerts |
||||
weight: 114 |
||||
--- |
||||
|
||||
# Legacy Grafana alerts |
||||
|
||||
Grafana Alerting is enabled by default for new OSS installations. For older installations, it is still an [opt-in]({{< relref "../alerting/migrating-alerts/opt-in" >}}) feature. |
||||
|
||||
{{% admonition type="note" %}} |
||||
Legacy dashboard alerts are deprecated and will be removed in Grafana 9. We encourage you to migrate to [Grafana Alerting]({{< relref "../alerting/migrating-alerts" >}}) for all existing installations. |
||||
{{% /admonition %}} |
||||
|
||||
Legacy dashboard alerts have two main components: |
||||
|
||||
- Alert rule - When the alert is triggered. Alert rules are defined by one or more conditions that are regularly evaluated by Grafana. |
||||
- Notification channel - How the alert is delivered. When the conditions of an alert rule are met, Grafana notifies the channels configured for that alert.
||||
|
||||
## Alert tasks |
||||
|
||||
You can perform the following tasks for alerts: |
||||
|
||||
- [Create an alert rule]({{< relref "./create-alerts" >}}) |
||||
- [View existing alert rules and their current state]({{< relref "./view-alerts" >}}) |
||||
- [Test alert rules and troubleshoot]({{< relref "./troubleshoot-alerts" >}}) |
||||
- [Add or edit an alert contact point]({{< relref "./notifications" >}}) |
||||
|
||||
{{< docs/shared lookup="alerts/grafana-managed-alerts.md" source="grafana" version="<GRAFANA VERSION>" >}} |
||||
@ -1,38 +0,0 @@ |
||||
--- |
||||
aliases: |
||||
- ../alerting/add-notification-template/ |
||||
draft: true |
||||
keywords: |
||||
- grafana |
||||
- documentation |
||||
- alerting |
||||
- alerts |
||||
- notification |
||||
- templating |
||||
labels: |
||||
products: |
||||
- enterprise |
||||
- oss |
||||
title: Alert notification templating |
||||
weight: 110 |
||||
--- |
||||
|
||||
# Alert notification templating |
||||
|
||||
You can provide detailed information to alert notification recipients by injecting alert query data into an alert notification. This topic explains how you can use alert query labels in alert notifications. |
||||
|
||||
You can use labels generated during an alerting query evaluation to create alert notification messages. For multiple unique values for the same label, the values are comma-separated. |
||||
|
||||
When an alert fires, the alerting data series indicates the violation. For resolved alerts, all data series are included in the resolved notification. |
||||
|
||||
||||
|
||||
## Adding alert label data into your alert notification |
||||
|
||||
1. Navigate to the panel you want to add or edit an alert rule for. |
||||
1. Click on the panel title, and then click **Edit**. |
||||
1. On the Alert tab, click **Create Alert**. If an alert already exists for this panel, then you can edit the alert directly. |
||||
1. Refer to the alert query labels in the alert rule name and/or alert notification message field by using the `${Label}` syntax. |
||||
1. Click **Save** in the upper right corner to save the alert rule and the dashboard. |
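For example, if the alert query returns a label named `instance` (a hypothetical label used here for illustration), an alert notification message such as `Load peaking on ${instance}` is rendered with the label value substituted, for example `Load peaking on server1`.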
||||
|
||||
 |
||||
@ -1,138 +0,0 @@ |
||||
--- |
||||
aliases: |
||||
- ../alerting/create-alerts/ |
||||
description: Configure alert rules |
||||
draft: true |
||||
keywords: |
||||
- grafana |
||||
- alerting |
||||
- guide |
||||
- rules |
||||
labels: |
||||
products: |
||||
- enterprise |
||||
- oss |
||||
title: Create alerts |
||||
weight: 200 |
||||
--- |
||||
|
||||
# Create alerts |
||||
|
||||
Grafana Alerting allows you to attach rules to your dashboard panels. When you save the dashboard, Grafana extracts the alert rules into a separate alert rule storage and schedules them for evaluation. |
||||
|
||||
 |
||||
|
||||
In the Alert tab of the graph panel you can configure how often the alert rule should be evaluated and the conditions that need to be met for the alert to change state and trigger its [notifications]({{< relref "./notifications" >}}). |
||||
|
||||
Currently only the graph panel supports alert rules. |
||||
|
||||
## Add or edit an alert rule |
||||
|
||||
1. Navigate to the panel you want to add or edit an alert rule for, click the title, and then click **Edit**. |
||||
1. On the Alert tab, click **Create Alert**. If an alert already exists for this panel, then you can just edit the fields on the Alert tab. |
||||
1. Fill out the fields. Descriptions are listed below in [Alert rule fields](#alert-rule-fields). |
||||
1. When you have finished writing your rule, click **Save** in the upper right corner to save the alert rule and the dashboard.
||||
1. (Optional but recommended) Click **Test rule** to make sure the rule returns the results you expect. |
||||
|
||||
## Delete an alert |
||||
|
||||
To delete an alert, scroll to the bottom of the alert and then click **Delete**. |
||||
|
||||
## Alert rule fields |
||||
|
||||
This section describes the fields you fill out to create an alert. |
||||
|
||||
### Rule |
||||
|
||||
- **Name -** Enter a descriptive name. The name will be displayed in the Alert Rules list. This field supports [templating]({{< relref "./add-notification-template" >}}). |
||||
- **Evaluate every -** Specify how often the scheduler should evaluate the alert rule. This is referred to as the _evaluation interval_. |
||||
- **For -** Specify how long the query needs to violate the configured thresholds before the alert notification triggers. |
||||
|
||||
You can use the `alerting.min_interval_seconds` configuration option to enforce a minimum time between evaluations. Refer to [Configuration]({{< relref "../setup-grafana/configure-grafana#min_interval_seconds" >}}) for more information.
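For example, a minimal sketch of enforcing a 10-second floor between evaluations in the Grafana configuration file:

```ini
[alerting]
# Minimum interval (in seconds) between evaluations of an alert rule
min_interval_seconds = 10
```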
||||
|
||||
{{% admonition type="caution" %}} |
||||
Do not use `For` with the `If no data or all values are null` setting set to `No Data`. The `No Data` state triggers instantly and does not take `For` into consideration. This may also result in an OK notification not being sent if the alert transitions from `No Data` to `Pending` to `OK`.
||||
{{% /admonition %}} |
||||
|
||||
If an alert rule has a configured `For` and the query violates the configured threshold, then it first goes from `OK` to `Pending`. While going from `OK` to `Pending`, Grafana does not send any notifications. Once the alert rule has been firing for longer than the `For` duration, it changes to `Alerting` and sends alert notifications.
||||
|
||||
It is typically a good idea to use this setting, since it is often worse to get a false positive than to wait a few minutes before the alert notification triggers. In the `Alert list` page or `Alert list` panels, you can see alerts in the pending state.
||||
|
||||
Below you can see an example timeline of an alert using the `For` setting. At ~16:04 the alert state changes to `Pending` and after 4 minutes it changes to `Alerting` which is when alert notifications are sent. Once the series falls back to normal the alert rule goes back to `OK`. |
||||
{{< figure class="float-right" src="/static/img/docs/v54/alerting-for-dark-theme.png" caption="Alerting For" >}} |
||||
|
||||
{{< figure class="float-right" max-width="40%" src="/static/img/docs/v4/alerting_conditions.png" caption="Alerting Conditions" >}} |
||||
|
||||
### Conditions |
||||
|
||||
Currently, the only condition type is a `Query` condition, which allows you to
specify a query letter, a time range, and an aggregation function.
||||
|
||||
#### Query condition example |
||||
|
||||
```sql |
||||
avg() OF query(A, 15m, now) IS BELOW 14 |
||||
``` |
||||
|
||||
- `avg()` Controls how the values for **each** series should be reduced to a value that can be compared against the threshold. Click on the function to change it to another aggregation function. |
||||
- `query(A, 15m, now)` The letter defines what query to execute from the **Metrics** tab. The second two parameters define the time range, `15m, now` means 15 minutes ago to now. You can also do `10m, now-2m` to define a time range that will be 10 minutes ago to 2 minutes ago. This is useful if you want to ignore the last 2 minutes of data. |
||||
- `IS BELOW 14` Defines the type of threshold and the threshold value. You can click on `IS BELOW` to change the type of threshold. |
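As a sketch building on the same query `A`, a condition that ignores the most recent two minutes of data and fires when the 10-minute maximum exceeds a threshold could look like this:

```sql
max() OF query(A, 10m, now-2m) IS ABOVE 90
```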
||||
|
||||
The query used in an alert rule cannot contain any template variables. Currently, we only support `AND` and `OR` operators between conditions, and they are evaluated serially.
For example, with three conditions in the following order:
_condition:A (evaluates to: TRUE) OR condition:B (evaluates to: FALSE) AND condition:C (evaluates to: TRUE)_
the result is calculated as ((TRUE OR FALSE) AND TRUE) = TRUE.
||||
|
||||
We plan to add other condition types in the future, like `Other Alert`, where you can include the state of another alert in your conditions, and `Time Of Day`. |
||||
|
||||
#### Multiple Series |
||||
|
||||
If a query returns multiple series, then the aggregation function and threshold check will be evaluated for each series. What Grafana does not do currently is track alert rule state **per series**. This has implications that are detailed in the scenario below. |
||||
|
||||
- Alert condition with query that returns 2 series: **server1** and **server2** |
||||
- **server1** series causes the alert rule to fire and switch to state `Alerting` |
||||
- Notifications are sent out with message: _load peaking (server1)_ |
||||
- In a subsequent evaluation of the same alert rule, the **server2** series also causes the alert rule to fire |
||||
- No new notifications are sent as the alert rule is already in state `Alerting`. |
||||
|
||||
So, as you can see from the above scenario Grafana will not send out notifications when other series cause the alert to fire if the rule already is in state `Alerting`. To improve support for queries that return multiple series we plan to track state **per series** in a future release. |
||||
|
||||
> Starting with Grafana v5.3 you can configure reminders to be sent for triggered alerts. This will send additional notifications |
||||
> when an alert continues to fire. If other series (like server2 in the example above) also cause the alert rule to fire they will be included in the reminder notification. Depending on what notification channel you're using you may be able to take advantage of this feature for identifying new/existing series causing alert to fire. |
||||
|
||||
### No Data & Error Handling |
||||
|
||||
Below, you can configure how the rule evaluation engine should handle queries that return no data or only null values.
||||
|
||||
| No Data Option | Description | |
||||
| --------------- | ------------------------------------------------------------------------------------------ | |
||||
| No Data | Set alert rule state to `NoData` | |
||||
| Alerting | Set alert rule state to `Alerting` | |
||||
| Keep Last State | Keep the current alert rule state, whatever it is. | |
||||
| Ok | Not sure why you would want to send yourself an alert when things are okay, but you could. | |
||||
|
||||
### Execution errors or timeouts |
||||
|
||||
Tell Grafana how to handle execution or timeout errors. |
||||
|
||||
| Error or timeout option | Description | |
||||
| ----------------------- | -------------------------------------------------- | |
||||
| Alerting | Set alert rule state to `Alerting` | |
||||
| Keep Last State | Keep the current alert rule state, whatever it is. | |
||||
|
||||
If you have an unreliable time series store from which queries sometimes time out or fail randomly, you can set this option to `Keep Last State` in order to basically ignore them.
||||
|
||||
## Notifications |
||||
|
||||
In the Alert tab, you can also specify alert rule notifications along with a detailed message about the alert rule. The message can contain anything: information about how you might solve the issue, a link to a runbook, and so on.
||||
|
||||
The actual notifications are configured and shared between multiple alerts. Read |
||||
[Alert notifications]({{< relref "./notifications" >}}) for information on how to configure and set up notifications. |
||||
|
||||
- **Send to -** Select an alert notification channel if you have one set up. |
||||
- **Message -** Enter a text message to be sent on the notification channel. Some alert notifiers support transforming the text to HTML or other rich formats. This field supports [templating]({{< relref "./add-notification-template" >}}). |
||||
- **Tags -** Specify a list of tags (key/value) to be included in the notification. It is only supported by [some notifiers]({{< relref "./notifications#list-of-supported-notifiers" >}}). |
||||
|
||||
## Alert state history and annotations |
||||
|
||||
Alert state changes are recorded in the internal annotation table in Grafana's database. The state changes are visualized as annotations in the alert rule's graph panel. You can also go into the `State history` submenu in the alert tab to view and clear state history. |
||||
@ -1,303 +0,0 @@ |
||||
--- |
||||
aliases: |
||||
- ../alerting/notifications/ |
||||
description: Alerting notifications guide |
||||
draft: true |
||||
keywords: |
||||
- Grafana |
||||
- alerting |
||||
- guide |
||||
- notifications |
||||
labels: |
||||
products: |
||||
- enterprise |
||||
- oss |
||||
title: Alert notifications |
||||
weight: 100 |
||||
--- |
||||
|
||||
# Alert notifications |
||||
|
||||
When an alert changes state, it sends out notifications. Each alert rule can have
multiple notifications. In order to add a notification to an alert rule, you first need
to add and configure a `notification` channel (which can be email, PagerDuty, or another integration).
||||
|
||||
This is done from the Notification channels page. |
||||
|
||||
{{% admonition type="note" %}} |
||||
Alerting is only available in Grafana v4.0 and above. |
||||
{{% /admonition %}} |
||||
|
||||
## Add a notification channel |
||||
|
||||
1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Notification channels**. |
||||
1. Click **Add channel**. |
||||
1. Fill out the fields or select options described below. |
||||
|
||||
## New notification channel fields |
||||
|
||||
### Default (send on all alerts) |
||||
|
||||
- **Name -** Enter a name for this channel. It will be displayed when users add notifications to alert rules. |
||||
- **Type -** Select the channel type. Refer to the [List of supported notifiers](#list-of-supported-notifiers) for details. |
||||
- **Default (send on all alerts) -** When selected, this option sends a notification on this channel for all alert rules. |
||||
- **Include Image -** See [Enable images in notifications](#enable-images-in-notifications-external-image-store) for details. |
||||
- **Disable Resolve Message -** When selected, this option disables the resolve message [OK] that is sent when the alerting state returns to false. |
||||
- **Send reminders -** When this option is checked additional notifications (reminders) will be sent for triggered alerts. You can specify how often reminders should be sent using number of seconds (s), minutes (m) or hours (h), for example `30s`, `3m`, `5m` or `1h`. |
||||
|
||||
**Important:** Alert reminders are sent after rules are evaluated. Therefore a reminder can never be sent more frequently than a configured alert rule evaluation interval. |
||||
|
||||
These examples show how often and when reminders are sent for a triggered alert. |
||||
|
||||
| Alert rule evaluation interval | Send reminders every | Reminder sent every (after last alert notification) | |
||||
| ------------------------------ | -------------------- | --------------------------------------------------- | |
||||
| `30s` | `15s` | ~30 seconds | |
||||
| `1m` | `5m` | ~5 minutes | |
||||
| `5m` | `15m` | ~15 minutes | |
||||
| `6m` | `20m` | ~24 minutes | |
||||
| `1h` | `15m` | ~1 hour | |
||||
| `1h` | `2h` | ~2 hours | |
||||
|
||||
<div class="clearfix"></div> |
||||
|
||||
## List of supported notifiers |
||||
|
||||
| Name | Type | Supports images | Supports alert rule tags | |
||||
| --------------------------------------------- | ------------------------- | ------------------ | ------------------------ | |
||||
| [DingDing](#dingdingdingtalk) | `dingding` | yes, external only | no | |
||||
| [Discord](#discord) | `discord` | yes | no | |
||||
| [Email](#email) | `email` | yes | no | |
||||
| [Google Hangouts Chat](#google-hangouts-chat) | `googlechat` | yes, external only | no | |
||||
| Hipchat | `hipchat` | yes, external only | no | |
||||
| [Kafka](#kafka) | `kafka` | yes, external only | no | |
||||
| Line | `line` | yes, external only | no | |
||||
| Microsoft Teams | `teams` | yes, external only | no | |
||||
| [Opsgenie](#opsgenie) | `opsgenie` | yes, external only | yes | |
||||
| [Pagerduty](#pagerduty) | `pagerduty` | yes, external only | yes | |
||||
| Prometheus Alertmanager | `prometheus-alertmanager` | yes, external only | yes | |
||||
| [Pushover](#pushover) | `pushover` | yes | no | |
||||
| Sensu | `sensu` | yes, external only | no | |
||||
| [Sensu Go](#sensu-go) | `sensugo` | yes, external only | no | |
||||
| [Slack](#slack) | `slack` | yes | no | |
||||
| Telegram | `telegram` | yes | no | |
||||
| Threema | `threema` | yes, external only | no | |
||||
| VictorOps | `victorops` | yes, external only | yes | |
||||
| [Webhook](#webhook) | `webhook` | yes, external only | yes | |
||||
|
||||
### Email |
||||
|
||||
To enable email notifications, you have to set up [SMTP settings]({{< relref "../setup-grafana/configure-grafana#smtp" >}})
in the Grafana config. Email notifications upload an image of the alert graph to an
external image destination if one is available, or fall back to attaching the image to the email.
Be aware that if you use the `local` image storage, email servers and clients might not be
able to access the image.
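A minimal SMTP configuration sketch in `grafana.ini` (host, addresses, and password are placeholders) could look like this:

```ini
[smtp]
enabled = true
host = smtp.example.com:587
user = alerts@example.com
password = your-password
from_address = alerts@example.com
from_name = Grafana
```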
||||
|
||||
{{% admonition type="note" %}} |
||||
Template variables are not supported in email alerts. |
||||
{{% /admonition %}} |
||||
|
||||
| Setting | Description | |
||||
| ------------ | -------------------------------------------------------------------------------------------- | |
||||
| Single email | Send a single email to all recipients. Disabled by default. |
||||
| Addresses | Email addresses to recipients. You can enter multiple email addresses using a ";" separator. | |
||||
|
||||
### Slack |
||||
|
||||
{{< figure class="float-right" max-width="40%" src="/static/img/docs/v4/slack_notification.png" caption="Alerting Slack Notification" >}} |
||||
|
||||
To set up Slack, you need to configure an incoming Slack webhook URL. You can follow |
||||
[Sending messages using Incoming Webhooks](https://api.slack.com/incoming-webhooks) on how to do that. If you want to include screenshots of the |
||||
firing alerts in the Slack messages you have to configure either the [external image destination](#enable-images-in-notifications-external-image-store) |
||||
in Grafana or a bot integration via Slack Apps. [Follow Slack's guide to set up a bot integration](https://api.slack.com/bot-users) and use the token |
||||
provided, which starts with "xoxb". |
||||
|
||||
| Setting | Description | |
||||
| --------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | |
||||
| Url | Slack incoming webhook URL, or eventually the [chat.postMessage](https://api.slack.com/methods/chat.postMessage) Slack API endpoint. | |
||||
| Username | Set the username for the bot's message. | |
||||
| Recipient | Allows you to override the Slack recipient. You must either provide a channel Slack ID, a user Slack ID, a username reference (@<user>, all lowercase, no whitespace), or a channel reference (#<channel>, all lowercase, no whitespace). If you use the `chat.postMessage` Slack API endpoint, this is required. | |
||||
| Icon emoji | Provide an emoji to use as the icon for the bot's message. Ex :smile: | |
||||
| Icon URL | Provide a URL to an image to use as the icon for the bot's message. | |
||||
| Mention Users | Optionally mention one or more users in the Slack notification sent by Grafana. You have to refer to users, comma-separated, via their corresponding Slack IDs (which you can find by clicking the overflow button on each user's Slack profile). | |
||||
| Mention Groups | Optionally mention one or more groups in the Slack notification sent by Grafana. You have to refer to groups, comma-separated, via their corresponding Slack IDs (which you can get from each group's Slack profile URL). | |
||||
| Mention Channel | Optionally mention either all channel members or just active ones. | |
||||
| Token | If provided, Grafana will upload the generated image via Slack's file.upload API method, not the external image destination. If you use the `chat.postMessage` Slack API endpoint, this is required. | |
||||
|
||||
If you are using the token for a Slack bot, then you have to invite the bot to the channel you want to send notifications to and add the channel to the recipient field.
||||
|
||||
### Opsgenie |
||||
|
||||
To set up Opsgenie, you will need an API key and the Alert API URL. These can be obtained by configuring a new [Grafana Integration](https://docs.opsgenie.com/docs/grafana-integration).
||||
|
||||
| Setting | Description | |
||||
| ------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | |
||||
| Alert API URL | The API URL for your Opsgenie instance. This will normally be either `https://api.opsgenie.com` or, for EU customers, `https://api.eu.opsgenie.com`. | |
||||
| API Key | The API Key as provided by Opsgenie for your configured Grafana integration. | |
||||
| Override priority | Configures the alert priority using the `og_priority` tag. The `og_priority` tag must have one of the following values: `P1`, `P2`, `P3`, `P4`, or `P5`. Default is `False`. | |
||||
| Send notification tags as | Specify how you would like [Notification Tags]({{< relref "./create-alerts#notifications" >}}) delivered to Opsgenie. They can be delivered as `Tags`, `Extra Properties` or both. Default is Tags. See note below for more information. | |
||||
|
||||
{{% admonition type="note" %}} |
||||
When notification tags are sent as `Tags` they are concatenated into a string with a `key:value` format. If you prefer to receive the notifications tags as key/values under Extra Properties in Opsgenie then change the `Send notification tags as` to either `Extra Properties` or `Tags & Extra Properties`. |
||||
{{% /admonition %}} |
||||
|
||||
### PagerDuty |
||||
|
||||
To set up PagerDuty, all you have to do is to provide an integration key. |
||||
|
||||
| Setting | Description | |
||||
| ---------------------- | ----------------------------------------------------------------------------------------------- | |
||||
| Integration Key | Integration key for PagerDuty. | |
||||
| Severity | Level for dynamic notifications, default is `critical` (1) | |
||||
| Auto resolve incidents | Resolve incidents in PagerDuty once the alert goes back to ok | |
||||
| Message in details | Removes the Alert message from the PD summary field and puts it into custom details instead (2) | |
||||
|
||||
> **Note:** The tags `Severity`, `Class`, `Group`, `dedup_key`, and `Component` have special meaning in the [Pagerduty Common Event Format - PD-CEF](https://support.pagerduty.com/docs/pd-cef). If an alert panel defines these tag keys, then they are transposed to the root of the event sent to Pagerduty. This means they will be available within the Pagerduty UI and Filtering tools. A Severity tag set on an alert overrides the global Severity set on the notification channel if it's a valid level. |
||||
|
||||
> Using Message In Details will change the structure of the `custom_details` field in the PagerDuty Event. |
||||
> This might break custom event rules in your PagerDuty rules if you rely on the fields in `payload.custom_details`. |
||||
> Move any existing rules using `custom_details.myMetric` to `custom_details.queries.myMetric`. |
||||
> This behavior will become the default in a future version of Grafana. |
||||
|
||||
> **Note:** The `dedup_key` tag overrides the Grafana-generated `dedup_key` with a custom key. |
||||
|
||||
> **Note:** The `state` tag overrides the current alert state inside the `custom_details` payload. |
||||
|
||||
> **Note:** Grafana uses the `Events API V2` integration. This can be configured for each service. |
||||
|
||||
### VictorOps |
||||
|
||||
To configure VictorOps, provide the URL from the Grafana Integration and substitute `$routing_key` with a valid key. |
||||
|
||||
> **Note:** The tag `Severity` has special meaning in the [VictorOps Incident Fields](https://help.victorops.com/knowledge-base/incident-fields-glossary/). If an alert panel defines this key, then it replaces the `message_type` in the root of the event sent to VictorOps. |
||||
|
||||
### Pushover |
||||
|
||||
To set up Pushover, you must provide a user key and an API token. Refer to [What is Pushover and how do I use it](https://support.pushover.net/i7-what-is-pushover-and-how-do-i-use-it) for instructions on how to generate them. |
||||
|
||||
| Setting | Description | |
||||
| -------------- | ----------------------------------------------------------------------------------------------------------------------------------- | |
||||
| API Token | Application token | |
||||
| User key(s) | A comma-separated list of user keys | |
||||
| Device(s) | A comma-separated list of devices | |
||||
| Priority | The priority with which alerting notifications are sent |
||||
| OK priority | The priority OK notifications are sent; if not set, then OK notifications are sent with the priority set for alerting notifications | |
||||
| Retry | How often (in seconds) the Pushover servers send the same notification to the user. (minimum 30 seconds) | |
||||
| Expire | How many seconds your notification will continue to be retried for (maximum 86400 seconds) | |
||||
| TTL | The number of seconds before a message expires and is deleted automatically. Examples: 10s, 5m30s, 8h. | |
||||
| Alerting sound | The sound for alerting notifications | |
||||
| OK sound | The sound for OK notifications | |
||||
|
||||
### Webhook |
||||
|
||||
The webhook notification is a simple way to send information about a state change over HTTP to a custom endpoint. |
||||
Using this notification you could integrate Grafana into a system of your choosing. |
||||
|
||||
Example json body: |
||||
|
||||
```json |
||||
{ |
||||
"dashboardId": 1, |
||||
"evalMatches": [ |
||||
{ |
||||
"value": 1, |
||||
"metric": "Count", |
||||
"tags": {} |
||||
} |
||||
], |
||||
"imageUrl": "https://grafana.com/static/assets/img/blog/mixed_styles.png", |
||||
"message": "Notification Message", |
||||
"orgId": 1, |
||||
"panelId": 2, |
||||
"ruleId": 1, |
||||
"ruleName": "Panel Title alert", |
||||
"ruleUrl": "http://localhost:3000/d/hZ7BuVbWz/test-dashboard?fullscreen\u0026edit\u0026tab=alert\u0026panelId=2\u0026orgId=1", |
||||
"state": "alerting", |
||||
"tags": { |
||||
"tag name": "tag value" |
||||
}, |
||||
"title": "[Alerting] Panel Title alert" |
||||
} |
||||
``` |
||||
|
||||
- **state** - The possible values for alert state are: `ok`, `paused`, `alerting`, `pending`, `no_data`. |
||||
|
||||
### DingDing/DingTalk |
||||
|
||||
DingTalk supports the following message types: `text`, `link` and `markdown`. However, Grafana only supports the `link` message type. Refer to the [configuration instructions](https://developers.dingtalk.com/document/app/custom-robot-access) (in Chinese).
||||
|
||||
In DingTalk PC Client: |
||||
|
||||
1. Click the "more" icon in the upper right of the panel.

2. Click the "Robot Manage" item in the pop-up menu. A new panel called "Robot Manage" appears.

3. In the "Robot Manage" panel, select "customized: customized robot with Webhook".

4. In the next panel, named "robot detail", click the "Add" button.

5. In the "Add Robot" panel, enter a nickname for the robot and select a "message group" for the robot to join. Click "next".

6. A webhook URL appears in the panel that looks like this: https://oapi.dingtalk.com/robot/send?access_token=xxxxxxxxx. Copy this URL to the Grafana DingTalk settings page and then click "finish".
||||
|
||||
### Discord |
||||
|
||||
To set up Discord, you must create a Discord channel webhook. For instructions on how to create the channel, refer to |
||||
[Intro to Webhooks](https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks). |
||||
|
||||
| Setting | Description | |
||||
| ------------------------------ | ----------------------------------------------------------------------------------------------------- | |
||||
| Webhook URL | Discord webhook URL. | |
||||
| Message Content | Mention a group using @ or a user using <@ID> when notifying in a channel. | |
||||
| Avatar URL | Optionally, provide a URL to an image to use as the avatar for the bot's message. | |
||||
| Use Discord's Webhook Username | Use the username configured in Discord's webhook settings. Otherwise, the username will be 'Grafana.' | |
||||
|
||||
Alternatively, use the [Slack](#slack) notifier by appending `/slack` to a Discord webhook URL.
||||
|
||||
### Kafka |
||||
|
||||
Notifications can be sent to a Kafka topic from Grafana using the [Kafka REST Proxy](https://docs.confluent.io/1.0/kafka-rest/docs/index.html). |
||||
There are a couple of configuration options which need to be set up in Grafana UI under Kafka Settings: |
||||
|
||||
1. Kafka REST Proxy endpoint. |
||||
|
||||
1. Kafka Topic. |
||||
|
||||
Once these two properties are set, you can send the alerts to Kafka for further processing or throttling. |
||||
|
||||
### Google Hangouts Chat |
||||
|
||||
Notifications can be sent by setting up an incoming webhook in Google Hangouts chat. For more information about configuring a webhook, refer to [webhooks](https://developers.google.com/hangouts/chat/how-tos/webhooks). |
||||
|
||||
### Prometheus Alertmanager |
||||
|
||||
Alertmanager handles alerts sent by client applications such as Prometheus server or Grafana. It takes care of deduplicating, grouping, and routing them to the correct receiver. Grafana notifications can be sent to Alertmanager via a simple incoming webhook. Refer to the official [Prometheus Alertmanager documentation](https://prometheus.io/docs/alerting/alertmanager) for configuration information. |
||||
|
||||
{{% admonition type="caution" %}} |
||||
In case of a high-availability setup, do not load balance traffic between Grafana and Alertmanagers to keep coherence between all your Alertmanager instances. Instead, point Grafana to a list of all Alertmanagers, by listing their URLs comma-separated in the notification channel configuration. |
||||
{{% /admonition %}} |
||||
|
||||
### Sensu Go |
||||
|
||||
Grafana alert notifications can be sent to [Sensu](https://sensu.io) Go as events via the API. This operation requires an API key. For information on creating this key, refer to [Sensu Go documentation](https://docs.sensu.io/sensu-go/latest/operations/control-access/use-apikeys/#api-key-authentication). |
||||
|
||||
## Enable images in notifications {#external-image-store} |
||||
|
||||
Grafana can render the panel associated with the alert rule as a PNG image and include that in the notification. Read more about the requirements and how to configure |
||||
[image rendering]({{< relref "../setup-grafana/image-rendering" >}}). |
||||
|
||||
You must configure an [external image storage provider]({{< relref "../setup-grafana/configure-grafana#external_image_storage" >}}) in order to receive images in alert notifications. If your notification channel requires that the image be publicly accessible (e.g. Slack, PagerDuty), configure a provider which uploads the image to a remote image store like Amazon S3, Webdav, Google Cloud Storage, or Azure Blob Storage. Otherwise, the local provider can be used to serve the image directly from Grafana. |
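As a sketch, configuring Amazon S3 as the external image store in `grafana.ini` might look like the following (bucket, region, and keys are placeholders):

```ini
[external_image_storage]
provider = s3

[external_image_storage.s3]
bucket = my-grafana-alert-images
region = us-east-1
access_key = <your access key>
secret_key = <your secret key>
```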
||||
|
||||
Notification services which need public image access are marked as 'external only'. |
||||
|
||||
## Configure the link back to Grafana from alert notifications |
||||
|
||||
All alert notifications contain a link back to the triggered alert in the Grafana instance. |
||||
This URL is based on the [domain]({{< relref "../setup-grafana/configure-grafana#domain" >}}) setting in Grafana. |
||||
|
||||
## Notification templating |
||||
|
||||
{{% admonition type="note" %}} |
||||
Alert notification templating is only available in Grafana v7.4 and above. |
||||
{{% /admonition %}} |
||||
|
||||
The alert notification template feature allows you to take the [label]({{< relref "../fundamentals/timeseries-dimensions#labels" >}}) value from an alert query and [inject that into alert notifications]({{< relref "./add-notification-template" >}}). |
||||
@ -1,26 +0,0 @@ |
||||
--- |
||||
aliases: |
||||
- ../alerting/pause-an-alert-rule/ |
||||
description: Pause an existing alert rule |
||||
draft: true |
||||
keywords: |
||||
- grafana |
||||
- alerting |
||||
- guide |
||||
- rules |
||||
- view |
||||
labels: |
||||
products: |
||||
- enterprise |
||||
- oss |
||||
title: Pause an alert rule |
||||
weight: 400 |
||||
--- |
||||
|
||||
# Pause an alert rule |
||||
|
||||
Pausing the evaluation of an alert rule can sometimes be useful. For example, during a maintenance window, pausing alert rules can avoid triggering a flood of alerts. |
||||
|
||||
1. In the Grafana side bar, hover your cursor over the Alerting (bell) icon and then click **Alert Rules**. All configured alert rules are listed, along with their current state. |
||||
1. Find your alert in the list, and click the **Pause** icon on the right. The **Pause** icon turns into a **Play** icon. |
||||
1. Click the **Play** icon to resume evaluation of your alert. |
||||
@ -1,53 +0,0 @@ |
||||
--- |
||||
aliases: |
||||
- ../alerting/troubleshoot-alerts/ |
||||
description: Troubleshoot alert rules |
||||
draft: true |
||||
keywords: |
||||
- grafana |
||||
- alerting |
||||
- guide |
||||
- rules |
||||
- troubleshoot |
||||
labels: |
||||
products: |
||||
- enterprise |
||||
- oss |
||||
title: Troubleshoot alerts |
||||
weight: 500 |
||||
--- |
||||
|
||||
# Troubleshoot alerts |
||||
|
||||
If alerts are not behaving as you expect, here are some steps you can take to troubleshoot and figure out what is going wrong. |
||||
|
||||
 |
||||
|
||||
The first level of troubleshooting you can do is to click **Test Rule**. You will get a result back that you can expand to the point where you can see the raw data that was returned from your query.
||||
|
||||
Further troubleshooting can also be done by inspecting the grafana-server log. If there is no error, or the log does not say anything useful, you can enable debug logging for some relevant components. This is done in Grafana's ini config file.
||||
|
||||
Example showing loggers that could be relevant when troubleshooting alerting. |
||||
|
||||
```ini |
||||
[log] |
||||
filters = alerting.scheduler:debug \ |
||||
alerting.engine:debug \ |
||||
alerting.resultHandler:debug \ |
||||
alerting.evalHandler:debug \ |
||||
alerting.evalContext:debug \ |
||||
alerting.extractor:debug \ |
||||
alerting.notifier:debug \ |
||||
alerting.notifier.slack:debug \ |
||||
alerting.notifier.pagerduty:debug \ |
||||
alerting.notifier.email:debug \ |
||||
alerting.notifier.webhook:debug \ |
||||
tsdb.graphite:debug \ |
||||
tsdb.prometheus:debug \ |
||||
tsdb.opentsdb:debug \ |
||||
tsdb.influxdb:debug \ |
||||
tsdb.elasticsearch:debug \ |
||||
tsdb.elasticsearch.client:debug
||||
``` |
||||
|
||||
If you want to log the raw query sent to your TSDB and the raw response in the log, you also have to set the grafana.ini option `app_mode` to `development`.
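For example, `app_mode` is a top-level option (outside any section) in `grafana.ini`:

```ini
# Enables development mode; raw TSDB queries and responses are then written to the log
app_mode = development
```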
||||
@ -1,32 +0,0 @@ |
||||
--- |
||||
aliases: |
||||
- ../alerting/view-alerts/ |
||||
description: View existing alert rules |
||||
draft: true |
||||
keywords: |
||||
- grafana |
||||
- alerting |
||||
- guide |
||||
- rules |
||||
- view |
||||
labels: |
||||
products: |
||||
- enterprise |
||||
- oss |
||||
menuTitle: View alerts |
||||
title: View existing alert rules |
||||
weight: 400 |
||||
--- |
||||
|
||||
# View existing alert rules |
||||
|
||||
Grafana stores individual alert rules in the panels where they are defined, but you can also view a list of all existing alert rules and their current state. |
||||
|
||||
In the Grafana side bar, hover your cursor over the Alerting (bell) icon and then click **Alert Rules**. All configured alert rules are listed, along with their current state. |
||||
|
||||
You can do several things while viewing alerts. |
||||
|
||||
- **Filter alerts by name -** Type an alert name in the **Search alerts** field. |
||||
- **Filter alerts by state -** In **States**, select which alert states you want to see. All others will be hidden. |
||||
- **Pause or resume an alert -** Click the **Pause** or **Play** icon next to the alert to pause or resume evaluation. See [Pause an alert rule]({{< relref "./pause-an-alert-rule" >}}) for more information. |
||||
- **Access alert rule settings -** Click the alert name or the **Edit alert rule** (gear) icon. Grafana opens the Alert tab of the panel where the alert rule is defined. This is helpful when an alert is firing but you don't know which panel it is defined in. |
||||