Docs: fixes various links (#70384)

* Docs: fixes various links

* bit more moving

* corrects link

* fixes links

* fixes links from move

* fix links

* fix links

* fixes links

* fixes links

* fix typo

* fixes typo

* fix notif policy link

* fix link
brendamuir committed via GitHub
parent 13e3308959, commit 701c6b6f07
16 changed files (additions + deletions in parentheses):

1. docs/sources/alerting/alerting-rules/_index.md (4)
2. docs/sources/alerting/alerting-rules/create-notification-policy.md (2)
3. docs/sources/alerting/alerting-rules/manage-contact-points/_index.md (2)
4. docs/sources/alerting/alerting-rules/manage-contact-points/configure-integrations.md (0)
5. docs/sources/alerting/fundamentals/alert-rules/_index.md (2)
6. docs/sources/alerting/fundamentals/alertmanager.md (2)
7. docs/sources/alerting/fundamentals/contact-points/index.md (6)
8. docs/sources/alerting/manage-notifications/_index.md (3)
9. docs/sources/alerting/manage-notifications/declare-incident-from-alert.md (2)
10. docs/sources/alerting/manage-notifications/mute-timings.md (2)
11. docs/sources/alerting/manage-notifications/template-notifications/_index.md (2)
12. docs/sources/alerting/manage-notifications/view-alert-rules.md (0)
13. docs/sources/alerting/manage-notifications/view-state-health.md (0)
14. docs/sources/alerting/set-up/_index.md (5)
15. docs/sources/alerting/set-up/migrating-alerts/_index.md (38)
16. docs/sources/alerting/set-up/set-up-cloud/_index.md (2)

@@ -33,8 +33,8 @@ Alert rules for an external Grafana Mimir or Loki instance can be edited or dele
 **Configure contact points**
-For information on how to configure contact points, see [Configure contact points]({{< relref "../manage-notifications/manage-contact-points" >}})
+For information on how to configure contact points, see [Configure contact points]({{< relref "./manage-contact-points/_index.md" >}})
 **Configure notification policies**
-For information on how to configure notification policies, see [Configure notification policies]({{< relref "../manage-notifications/manage-contact-points" >}})
+For information on how to configure notification policies, see [Configure notification policies]({{< relref "./create-notification-policy" >}})
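
These link fixes follow a single pattern: Hugo's `relref` shortcode resolves paths relative to the page that contains it, so pages that now live under `alerting-rules/` have to point at their new siblings rather than at the old `manage-notifications/` locations. As a rough orientation, the relevant part of the tree, reconstructed only from the changed-file list above:

```
docs/sources/alerting/
├── alerting-rules/
│   ├── _index.md
│   ├── create-notification-policy.md        <- {{< relref "./create-notification-policy" >}}
│   └── manage-contact-points/
│       ├── _index.md                        <- {{< relref "./manage-contact-points/_index.md" >}}
│       └── configure-integrations.md
└── manage-notifications/
    └── ...
```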

@ -11,7 +11,7 @@ keywords:
- notification policies - notification policies
- routes - routes
title: Configure notification policies title: Configure notification policies
weight: 300 weight: 420
--- ---
# Configure notification policies # Configure notification policies
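
This hunk and several that follow only bump the `weight` value in the page front matter. In Hugo, which builds the Grafana docs site, a lower weight sorts a page earlier within its section, so these edits restore a sensible ordering after the pages were moved. A sketch of the front matter being edited, reconstructed from the hunk (fields outside the hunk's context are omitted):

```
---
keywords:
  - notification policies
  - routes
title: Configure notification policies
weight: 420   # was 300; lower weights sort earlier in the section menu
---
```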

@ -13,7 +13,7 @@ keywords:
- contact point - contact point
- templating - templating
title: Configure contact points title: Configure contact points
weight: 200 weight: 410
--- ---
# Configure contact points # Configure contact points

@ -5,7 +5,7 @@ keywords:
- alerting - alerting
- rules - rules
title: Alert rules title: Alert rules
weight: 101 weight: 105
--- ---
# Alert rules # Alert rules

@ -5,7 +5,7 @@ aliases:
- ../unified-alerting/fundamentals/alertmanager/ - ../unified-alerting/fundamentals/alertmanager/
description: Intro to the different Alertmanagers description: Intro to the different Alertmanagers
title: Alertmanager title: Alertmanager
weight: 100 weight: 103
--- ---
# Alertmanager # Alertmanager

@ -51,9 +51,3 @@ The following table lists the contact point integrations supported by Grafana.
| Cisco Webex Teams | `webex` | Supported | Supported | | Cisco Webex Teams | `webex` | Supported | Supported |
| WeCom | `wecom` | Supported | N/A | | WeCom | `wecom` | Supported | N/A |
| [Zenduty](https://www.zenduty.com/) | `webhook` | Supported | N/A | | [Zenduty](https://www.zenduty.com/) | `webhook` | Supported | N/A |
## Useful links
[Manage contact points]({{< relref "../../manage-notifications/manage-contact-points" >}})
[Create and edit notification templates]({{< relref "../../manage-notifications/template-notifications/create-notification-templates" >}})

@ -1,4 +1,5 @@
--- ---
menuTitle: Manage
description: Manage alert notifications description: Manage alert notifications
keywords: keywords:
- grafana - grafana
@@ -12,7 +13,7 @@ weight: 160
 Choosing how, when, and where to send your alert notifications is an important part of setting up your alerting system. These decisions will have a direct impact on your ability to resolve issues quickly and not miss anything important.
-As a first step, define your contact points; where to send your alert notifications to. A contact point is a set of one or more [integrations]({{< relref "./manage-contact-points/configure-integrations" >}}) that are used to deliver notifications. Add notification templates to contact points for reuse and consistent messaging in your notifications.
+As a first step, define your contact points; where to send your alert notifications to. A contact point is a set of one or more integrations that are used to deliver notifications. Add notification templates to contact points for reuse and consistent messaging in your notifications.
 Next, create a notification policy which is a set of rules for where, when and how your alerts are routed to contact points. In a notification policy, you define where to send your alert notifications by choosing one of the contact points you created. Add mute timings to your notification policy. A mute timing is a recurring interval of time during which you don’t want any notifications to be sent out.

@ -5,7 +5,7 @@ keywords:
- alert rules - alert rules
- incident - incident
title: Declare incidents from firing alerts title: Declare incidents from firing alerts
weight: 430 weight: 1010
--- ---
# Declare incidents from firing alerts # Declare incidents from firing alerts

@@ -20,7 +20,7 @@ A mute timing is a recurring interval of time when no new notifications for a po
 Similar to silences, mute timings do not prevent alert rules from being evaluated, nor do they stop alert instances from being shown in the user interface. They only prevent notifications from being created.
-You can configure Grafana managed mute timings as well as mute timings for an [external Alertmanager data source]({{< relref "../../datasources/alertmanager" >}}). For more information, refer to [Alertmanager documentation]({{< relref "./alertmanager" >}}).
+You can configure Grafana managed mute timings as well as mute timings for an [external Alertmanager data source]({{< relref "../../datasources/alertmanager" >}}). For more information, refer to [Alertmanager documentation]({{< relref "../fundamentals/alertmanager" >}}).
 ## Mute timings vs silences

@@ -33,8 +33,6 @@ You cannot use notification templates to:
 Learn how to write the content of your notification templates in Go’s templating language.
-[Create notification templates]({{< relref "./create-notification-templates" >}})
 Create reusable notification templates for your contact points.
 [Use notification templates]({{< relref "./use-notification-templates" >}})
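
The page this hunk touches introduces notification templates written in Go's templating language. A minimal sketch of such a template, assuming the Alertmanager-style template data that Grafana exposes (`.Alerts`, `.Status`, `.Labels`, `.Annotations`); the template name `custom.alert_list` and the `summary` annotation are illustrative, not taken from the diff:

```
{{ define "custom.alert_list" }}
{{ range .Alerts }}
- {{ .Labels.alertname }} is {{ .Status }}: {{ .Annotations.summary }}
{{ end }}
{{ end }}
```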

@@ -18,7 +18,7 @@ Set up or upgrade your implementation of Grafana Alerting.
 These are set-up instructions for Grafana Alerting Open Source.
-To set up Grafana Alerting for Cloud, see ({{< relref "./_index.md" >}})
+To set up Grafana Alerting for Cloud, see ({{< relref "./set-up-cloud/_index.md" >}})
 ## Before you begin
@@ -47,8 +47,7 @@ To set up Alerting, you need to:
 - [Optional] Add labels and label matchers to control alert routing
-1. [Optional] Integrate with [Grafana OnCall]
-(/docs/oncall/latest/integrations/grafana-alerting)
+1. [Optional] Integrate with [Grafana OnCall](/docs/oncall/latest/integrations/grafana-alerting)
 ## Advanced set up options

@@ -42,9 +42,9 @@ The following table provides details on the upgrade for Cloud, Enterprise, and O
 You can opt out of Grafana Alerting at any time and switch to using legacy alerting. Alternatively, you can opt out of using alerting in its entirety.
-### Stay on legacy alerting
+## Stay on legacy alerting
-When upgrading to Grafana > 9.0, existing installations that use legacy alerting are automatically upgraded to Grafana Alerting unless you have opted-out of Grafana Alerting before migration takes place. During the upgrade, legacy alerts are migrated to the new alerts type and no alerts or alerting data are lost. To keep using legacy alerting and disable Grafana Alerting:
+When upgrading to Grafana > 9.0, existing installations that use legacy alerting are automatically upgraded to Grafana Alerting unless you have opted-out of Grafana Alerting before migration takes place. During the upgrade, legacy alerts are migrated to the new alerts type and no alerts or alerting data are lost. To keep using legacy alerting and deactivate Grafana Alerting:
 1. Go to your custom configuration file ($WORKING_DIR/conf/custom.ini).
 2. Enter the following in your configuration:
@@ -65,12 +65,12 @@ This topic is only relevant for OSS and Enterprise customers. Contact customer s
 The `ngalert` toggle previously used to enable or disable Grafana Alerting is no longer available.
-### Disable alerting
+## Deactivate alerting
-You can disable both Grafana Alerting and legacy alerting in Grafana.
+You can deactivate both Grafana Alerting and legacy alerting in Grafana.
 1. Go to your custom configuration file ($WORKING_DIR/conf/custom.ini).
-2. Enter the following in your configuration:
+1. Enter the following in your configuration:
 ```
 [alerting]
@@ -120,49 +120,35 @@ enabled = false
 enabled = true
 ```
-## Disable
-To disable alerting in Grafana entirely (including both legacy and Grafana Alerting), enter the following in your configuration:
-```
-[alerting]
-enabled = false
-[unified_alerting]
-enabled = false
-```
-If at any time you want to turn alerting back on, you can opt in.
 ## Differences and limitations
 There are some differences between Grafana Alerting and legacy dashboard alerts, and a number of features that are no
 longer supported.
-### Differences
+**Differences**
 1. When Grafana Alerting is enabled or upgraded to Grafana 9.0 or later, existing legacy dashboard alerts migrate in a format compatible with the Grafana Alerting. In the Alerting page of your Grafana instance, you can view the migrated alerts alongside any new alerts.
 This topic explains how legacy dashboard alerts are migrated and some limitations of the migration.
-2. Read and write access to legacy dashboard alerts and Grafana alerts are governed by the permissions of the folders storing them. During migration, legacy dashboard alert permissions are matched to the new rules permissions as follows:
+1. Read and write access to legacy dashboard alerts and Grafana alerts are governed by the permissions of the folders storing them. During migration, legacy dashboard alert permissions are matched to the new rules permissions as follows:
 - If there are dashboard permissions, a folder named `Migrated {"dashboardUid": "UID", "panelId": 1, "alertId": 1}` is created to match the permissions of the dashboard (including the inherited permissions from the folder).
 - If there are no dashboard permissions and the dashboard is in a folder, then the rule is linked to this folder and inherits its permissions.
 - If there are no dashboard permissions and the dashboard is in the General folder, then the rule is linked to the `General Alerting` folder and the rule inherits the default permissions.
-3. `NoData` and `Error` settings are migrated as is to the corresponding settings in Grafana Alerting, except in two situations:
+1. `NoData` and `Error` settings are migrated as is to the corresponding settings in Grafana Alerting, except in two situations:
 3.1. As there is no `Keep Last State` option for `No Data` in Grafana Alerting, this option becomes `NoData`. The `Keep Last State` option for `Error` is migrated to a new option `Error`. To match the behavior of the `Keep Last State`, in both cases, during the migration Grafana automatically creates a silence for each alert rule with a duration of 1 year.
 3.2. Due to lack of validation, legacy alert rules imported via JSON or provisioned along with dashboards can contain arbitrary values for `NoData` and [`Error`](/docs/sources/alerting/alerting-rules/create-grafana-managed-rule.md#configure-no-data-and-error-handling). In this situation, Grafana will use the default setting: `NoData` for No data, and `Error` for Error.
-4. Notification channels are migrated to an Alertmanager configuration with the appropriate routes and receivers. Default notification channels are added as contact points to the default route. Notification channels not associated with any Dashboard alert go to the `autogen-unlinked-channel-recv` route.
+1. Notification channels are migrated to an Alertmanager configuration with the appropriate routes and receivers. Default notification channels are added as contact points to the default route. Notification channels not associated with any Dashboard alert go to the `autogen-unlinked-channel-recv` route.
-5. Unlike legacy dashboard alerts where images in notifications are enabled per contact point, images in notifications for Grafana Alerting must be enabled in the Grafana configuration, either in the configuration file or environment variables, and are enabled for either all or no contact points.
+1. Unlike legacy dashboard alerts where images in notifications are enabled per contact point, images in notifications for Grafana Alerting must be enabled in the Grafana configuration, either in the configuration file or environment variables, and are enabled for either all or no contact points.
-6. The JSON format for webhook notifications has changed in Grafana Alerting and uses the format from [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/configuration/#webhook_config).
+1. The JSON format for webhook notifications has changed in Grafana Alerting and uses the format from [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/configuration/#webhook_config).
-### Limitations
+**Limitations**
 1. Since `Hipchat` and `Sensu` notification channels are no longer supported, legacy alerts associated with these channels are not automatically migrated to Grafana Alerting. Assign the legacy alerts to a supported notification channel so that you continue to receive notifications for those alerts.
 Silences (expiring after one year) are created for all paused dashboard alerts.

@@ -30,7 +30,7 @@ Grafana Cloud Alerting's Prometheus-style alerts are built by querying directly
 These are set up instructions for Grafana Alerting Cloud.
-To set up Grafana Alerting for Open Source, see ({{< relref "../set-up/" >}})
+To set up Grafana Alerting for Open Source, see ({{< relref "../set-up/_index.md" >}})
 To set up Alerting, you need to:
