Grafana has an active provisioning system that uses configuration files.
This makes GitOps more natural since data sources and dashboards can be defined using files that can be version controlled.
## Configuration file
Refer to [Configuration]({{< relref "../../setup-grafana/configure-grafana/" >}}) for more information on what you can configure in `grafana.ini`.
### Configuration file locations
- Default configuration from `$WORKING_DIR/conf/defaults.ini`
- Custom configuration from `$WORKING_DIR/conf/custom.ini`
- The custom configuration file path can be overridden using the `--config` parameter
{{< admonition type="note" >}}
If you have installed Grafana using the `deb` or `rpm` packages, then your configuration file is located at `/etc/grafana/grafana.ini`. This path is specified in the Grafana `init.d` script using the `--config` file parameter.
{{< /admonition >}}
### Environment variables
You can use environment variable interpolation in all three provisioning configuration types.
The allowed syntax is either `$ENV_VAR_NAME` or `${ENV_VAR_NAME}`, and it can be used only for values, not for keys or larger parts of the configuration.
It's not available in the dashboard definition files, just the dashboard provisioning configuration.
Example:
```yaml
datasources:
  - name: Graphite
    url: http://localhost:8080
    # Only values, not keys, can use interpolation
    password: $PASSWORD
```
You can use `$$` if you have a literal `$` in your value and want to avoid interpolation.
<hr/>
## Configuration management tools
Currently, we don't provide any scripts or manifests for configuring Grafana.
Rather than spending time learning and creating scripts or manifests for each tool, we think our time is better spent making Grafana easier to provision.
Therefore, we heavily rely on the expertise of the community.
## Data sources
{{% admonition type="note" %}}
Available in Grafana v5.0 and higher.
{{% /admonition %}}
You can manage data sources in Grafana by adding YAML configuration files in the [`provisioning/datasources`]({{< relref "../../setup-grafana/configure-grafana#provisioning" >}}) directory.
Each configuration file can contain a list of `datasources` to add or update during startup.
If the data source already exists, Grafana reconfigures it to match the provisioned configuration file.
The configuration file can also list data sources to automatically delete, called `deleteDatasources`.
You can configure Grafana to automatically delete provisioned data sources when they're removed from the provisioning file.
To do so, add `prune: true` to the root of your provisioning file.
With this configuration, Grafana also removes the provisioned data sources if you remove the provisioning file entirely.
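For example, a minimal sketch of a provisioning file with pruning enabled (the Graphite data source shown is only illustrative):

```yaml
apiVersion: 1

# Remove provisioned data sources from the database when they disappear from this file
prune: true

datasources:
  - name: Graphite
    type: graphite
    url: http://localhost:8080
```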
{{< admonition type="note" >}}
The `prune` parameter is available in Grafana v11.1 and higher.
{{< /admonition >}}
### Running multiple Grafana instances
If you run multiple instances of Grafana, add a version number to each data source in the configuration and increase it when you update the configuration.
Grafana updates only data sources with the same or lower version number than specified in the configuration.
This prevents old configurations from overwriting newer ones if you have different versions of the `datasource.yaml` file that don't define version numbers, and then restart instances at the same time.
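For example, a sketch of an entry that carries a version number (the other fields are illustrative):

```yaml
datasources:
  - name: Graphite
    type: graphite
    url: http://localhost:8080
    # Increase this number whenever you change this data source's configuration
    version: 2
```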
### Example data source configuration file
This example provisions a [Graphite data source]({{< relref "../../datasources/graphite" >}}):
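A trimmed-down sketch of such a file is shown below; the full example in the data source documentation includes more fields, and the values here are placeholders:

```yaml
apiVersion: 1

# Data sources to delete from the database before inserting or updating the list below
deleteDatasources:
  - name: Graphite
    orgId: 1

datasources:
  - name: Graphite
    type: graphite
    access: proxy
    orgId: 1
    url: http://localhost:8080
    basicAuth: false
    isDefault: true
    version: 1
    editable: false
```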
For provisioning examples of specific data sources, refer to that [data source's documentation]({{< relref "../../datasources" >}}).
#### JSON data
Not all data sources have the same configuration settings. Only the most common fields are included in examples.
To provision the rest of a data source's settings, include them as a JSON blob in the `jsonData` field.
Common settings in the [built-in core data sources]({{< relref "../../datasources#built-in-core-data-sources" >}}) include:
{{< admonition type="note" >}}
Data sources tagged with _HTTP\*_ communicate using the HTTP protocol, which includes all core data source plugins except MySQL, PostgreSQL, and MSSQL.
{{< /admonition >}}

For examples of specific data sources' JSON data, refer to that [data source's documentation]({{< relref "../../datasources" >}}).
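As a sketch, data-source-specific settings go under `jsonData` in the provisioning file; the `graphiteVersion` and `tlsSkipVerify` keys below are examples of such settings:

```yaml
datasources:
  - name: Graphite
    type: graphite
    url: http://localhost:8080
    jsonData:
      graphiteVersion: '1.1'
      tlsSkipVerify: true
```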
#### Secure JSON data
Secure JSON data is a map of settings that are encrypted with a [secret key]({{< relref "../../setup-grafana/configure-grafana#secret_key" >}}) from the Grafana configuration.
The encryption hides content from the users of the application.
Use it to store TLS certificates and passwords that Grafana appends to the request on the server side.
All of these settings are optional.
{{< admonition type="note" >}}
The _HTTP\*_ tag denotes data sources that communicate using the HTTP protocol, including all core data source plugins except MySQL, PostgreSQL, and MSSQL.
{{< /admonition >}}
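A sketch of how secure settings sit alongside the rest of a data source definition; `basicAuthPassword` and `tlsCACert` are examples of commonly supported secure settings, and the values are placeholders:

```yaml
datasources:
  - name: Graphite
    type: graphite
    url: https://graphite.example.com
    basicAuth: true
    basicAuthUser: grafana
    secureJsonData:
      basicAuthPassword: $GRAPHITE_PASSWORD
      tlsCACert: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
```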
#### Custom HTTP headers for data sources
Data sources managed with provisioning can be configured to add HTTP headers to all requests.
Configure the header name in the `jsonData` field and the header value in `secureJsonData`.
```yaml
apiVersion: 1

datasources:
  - name: Graphite
    type: graphite
    url: http://localhost:8080
    jsonData:
      httpHeaderName1: 'HeaderName'
    secureJsonData:
      httpHeaderValue1: 'HeaderValue'
```
## Plugins
{{% admonition type="note" %}}
Available in Grafana v7.1 and higher.
{{% /admonition %}}
You can manage plugin applications in Grafana by adding one or more YAML configuration files in the [`provisioning/plugins`]({{< relref "../../setup-grafana/configure-grafana#provisioning" >}}) directory.
Each configuration file can contain a list of `apps` that are updated during startup.
Grafana updates each app to match the configuration file.
{{< admonition type="note" >}}
This feature enables you to provision plugin configurations, not the plugins themselves.
The plugins must already be installed on the Grafana instance.
{{< /admonition >}}
### Example plugin configuration file
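A minimal sketch of a plugin provisioning file; the plugin ID and the keys under `jsonData` and `secureJsonData` are placeholders that depend on the app being configured:

```yaml
apiVersion: 1

apps:
  - type: some-app-plugin-id
    org_id: 1
    disabled: false
    jsonData:
      apiUrl: https://api.example.com
    secureJsonData:
      apiToken: $APP_TOKEN
```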
## Dashboards
You can manage dashboards in Grafana by adding one or more YAML configuration files in the [`provisioning/dashboards`]({{< relref "../../setup-grafana/configure-grafana#dashboards" >}}) directory.
Each configuration file can contain a list of `dashboards providers` that load dashboards into Grafana from the local filesystem.
The dashboard provider configuration file looks somewhat like this:
```yaml
apiVersion: 1

providers:
  - name: 'a unique provider name'
    orgId: 1
    folder: ''
    folderUid: ''
    type: file
    disableDeletion: false
    updateIntervalSeconds: 10
    allowUiUpdates: false
    options:
      path: /var/lib/grafana/dashboards
      foldersFromFilesStructure: true
```
When Grafana starts, it updates and inserts all dashboards available in the configured path.
Then later on, Grafana polls that path every **updateIntervalSeconds**, looks for updated JSON files, and updates and inserts those into the database.
{{< admonition type="note" >}}
Dashboards are provisioned to the root level if the `folder` option is missing or empty.
{{< /admonition >}}
### Making changes to a provisioned dashboard
While you can change a provisioned dashboard in the Grafana UI, those changes can't be saved back to the provisioning source.
If `allowUiUpdates` is set to `true` and you make changes to a provisioned dashboard, you can save the dashboard, and the changes persist to the Grafana database.
{{< admonition type="note" >}}
If a provisioned dashboard is saved from the UI and then later updated from the source, the dashboard stored in the database is always overwritten. The `version` property in the JSON file won't affect this, even if it's lower than the version of the existing dashboard.

If a provisioned dashboard is saved from the UI and the source is removed, the dashboard stored in the database is deleted unless the configuration option `disableDeletion` is set to `true`.
{{< /admonition >}}
If `allowUiUpdates` is configured to `false`, you are not able to make changes to a provisioned dashboard. When you click `Save`, Grafana brings up a _Cannot save provisioned dashboard_ dialog. The screenshot below illustrates this behavior.
Grafana offers options to export the JSON definition of a dashboard. Either `Copy JSON to Clipboard` or `Save JSON to file` can help you synchronize your dashboard changes back to the provisioning source.
{{< admonition type="note" >}}
The JSON definition in the input field when using `Copy JSON to Clipboard` or `Save JSON to file` has the `id` field automatically removed to aid the provisioning workflow.
{{< /admonition >}}
If the dashboard in the JSON file contains a [UID]({{< relref "../../dashboards/build-dashboards/view-dashboard-json-model" >}}), Grafana forces an insert or update on that UID.
This lets you migrate dashboards between Grafana instances and provision Grafana from configuration without breaking the given URLs, because the new dashboard URL uses the UID as its identifier.

When Grafana starts, it updates and inserts all dashboards available in the configured folders.
If you modify the file, then the dashboard is also updated.
By default, Grafana deletes dashboards in the database if the file is removed.
You can disable this behavior using the `disableDeletion` setting.
{{< admonition type="note" >}}
Provisioning allows you to overwrite existing dashboards, which leads to problems if you reuse settings that are supposed to be unique.
Be careful not to reuse the same `title` multiple times within a folder or `uid` within the same installation, as this causes unexpected behavior.
{{< /admonition >}}
### Provision folders structure from filesystem to Grafana
For example, to replicate this dashboard structure from the filesystem to Grafana:

```
/etc/dashboards
├── /server
│   ├── /common_dashboard.json
│   └── /network_dashboard.json
└── /application
    ├── /requests_dashboard.json
    └── /resources_dashboard.json
```
You need to specify just this short provisioning configuration file:
```yaml
apiVersion: 1

providers:
  - name: dashboards
    type: file
    updateIntervalSeconds: 30
    options:
      path: /etc/dashboards
      foldersFromFilesStructure: true
```
In this example, `server` and `application` become new folders in the Grafana menu.
{{< admonition type="note" >}}
The `folder` and `folderUid` options should be empty or missing to make `foldersFromFilesStructure` work.
{{< /admonition >}}
{{% admonition type="note" %}}
To provision dashboards to the root level, store them in the root of your `path`.
{{% /admonition %}}
{{< admonition type="note" >}}
This feature doesn't currently allow you to create nested folder structures, that is, folders within folders.
{{< /admonition >}}
## Alerting
For information on provisioning Grafana Alerting, refer to [Provision Grafana Alerting resources]({{< relref "../../alerting/set-up/provision-alerting-resources/" >}}).
### Supported settings
The following sections detail the supported settings and secure settings for each alert notification type. Secure settings are stored encrypted in the database and you add them to `secure_settings` in the YAML file instead of `settings`.
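As a rough sketch of the shape (the notifier type, its `settings` keys, and the webhook URL are placeholders; the supported keys depend on the notification type):

```yaml
notifiers:
  - name: notification-channel-1
    type: slack
    uid: notifier1
    org_id: 1
    is_default: true
    settings:
      recipient: '#alerts'
    # Stored encrypted; use secure_settings instead of settings for secrets
    secure_settings:
      url: https://hooks.slack.com/services/<your-webhook>
```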
| No Data | The default option. Sets alert instance state to `No data`. <br/> The alert rule also creates a new alert instance `DatasourceNoData` with the name and UID of the alert rule, and UID of the datasource that returned no data as labels. |
| Alerting | Sets alert instance state to `Alerting`. It transitions from `Pending` to `Alerting` after the [pending period](ref:pending-period) has finished. |
| Normal | Sets alert instance state to `Normal`. |
| Keep Last State | Maintains the alert instance in its last state. Useful for mitigating temporary issues, refer to [Keep last state](ref:keep-last-state). |
You can also configure the alert instance state when its evaluation returns an error:
| Error | The default option. Sets alert instance state to `Error`. <br/> The alert rule also creates a new alert instance `DatasourceError` with the name and UID of the alert rule, and UID of the datasource that returned the error as labels. |
| Alerting | Sets alert instance state to `Alerting`. It transitions from `Pending` to `Alerting` after the [pending period](ref:pending-period) has finished. |
| Normal | Sets alert instance state to `Normal`. |
| Keep Last State | Maintains the alert instance in its last state. Useful for mitigating temporary issues, refer to [Keep last state](ref:keep-last-state). |
When you configure the No data or Error behavior to `Alerting` or `Normal`, Grafana will attempt to keep a stable set of fields under notification `Values`. If your query returns no data or an error, Grafana re-uses the latest known set of fields in `Values`, but will use `-1` in place of the measured value.
## Create alerts from panels
Create alerts from any panel type. This means you can reuse the queries in the panel and create alerts based on them.
1. Navigate to a dashboard in the **Dashboards** section.
2. In the top right corner of the panel, click on the three dots (ellipses).
3. From the dropdown menu, select **More...** and then choose **New alert rule**.
This opens the alert rule form, allowing you to configure and create your alert based on the current panel's query.
| **Normal** | The state of an alert when the condition (threshold) is not met. |
| **Pending** | The state of an alert that has breached the threshold but for less than the [pending period](ref:pending-period). |
| **Alerting** | The state of an alert that has breached the threshold for longer than the [pending period](ref:pending-period). |
| **NoData** | The state of an alert whose query returns no data or all values are null. You can [change the default behavior of the no data state](#modify-the-no-data-and-error-state). |
| **Error** | The state of an alert when an error or timeout occurred evaluating the alert rule. You can [change the default behavior of the error state](#modify-the-no-data-and-error-state). |
{{< figure src="/media/docs/alerting/alert-instance-states-v3.png" caption="Alert instance state diagram" alt="A diagram of the distinct alert instance states and transitions." max-width="750px" >}}
Alert instances will be routed for [notifications](ref:notifications) when they are in the **Alerting** state or when they are **Resolved**.
An alert instance is considered stale if its dimension or series has disappeared from the query results entirely for two evaluation intervals.
Stale alert instances that are in the **Alerting**, **NoData**, or **Error** states transition to the **Normal** state as **Resolved**. Once transitioned, these resolved alert instances are routed for notifications like other resolved alerts.
### Modify the no data and error state

In [Configure no data and error handling](ref:no-data-and-error-handling), you can change the default behavior when the evaluation returns no data or an error. You can set the alert instance state to `Alerting`, `Normal`, or keep the last state.

{{< figure src="/media/docs/alerting/alert-rule-configure-no-data-and-error.png" alt="A screenshot of the `Configure no data and error handling` option in Grafana Alerting." max-width="500px" >}}

#### Keep last state

The "Keep Last State" option helps mitigate temporary data source issues, preventing alerts from unintentionally firing, resolving, and re-firing.
However, in situations where strict monitoring is critical, relying solely on the "Keep Last State" option may not be appropriate. Instead, consider using an alternative or implementing additional alert rules to ensure that issues with prolonged data source disruptions are detected.
### `grafana_state_reason` annotation
Occasionally, an alert instance may be in a state that isn't immediately clear to everyone. For example:
- Stale alert instances in the `Alerting` state transition to the `Normal` state when the series disappear.
- If "no data" handling is configured to transition to a state other than `NoData`.
- If "error" handling is configured to transition to a state other than `Error`.
- If the alert rule is deleted, paused, or updated in some cases, the alert instance also transitions to the `Normal` state.
In these situations, the evaluation state may differ from the alert state, and it might be necessary to understand the reason for being in that state when receiving the notification.
The `grafana_state_reason` annotation is included in these situations, providing the reason in the notifications that explain why the alert instance transitioned to its current state. For example:
- Stale alert instances in the `Normal` state include the `grafana_state_reason` annotation with the value **MissingSeries**.
- If "no data" or "error" handling transitions to the `Normal` state, the `grafana_state_reason` annotation is included with the value **NoData** or **Error**, respectively.
- If the alert rule is deleted or paused, the `grafana_state_reason` is set to **Paused** or **RuleDeleted**. For some updates, it is set to **Updated**.
### Special alerts for `NoData` and `Error`
When evaluation of an alert rule produces state `NoData` or `Error`, Grafana Alerting generates a new alert instance that has the following additional labels:
These functions are available for **Reduce** and **Classic condition** expressions.
An alert condition is the query or expression that determines whether the alert fires or not depending on the value it yields. There can be only one condition which determines the triggering of the alert.
After you have defined your queries and expressions, choose one of them as the alert rule condition. By default, the last expression added is used as the alert condition.
When the queried data satisfies the defined condition, Grafana triggers the associated alert, which can be configured to send notifications through various channels like email, Slack, or PagerDuty.
You can customize the branding options.
Report branding:
- **Company logo:** Company logo displayed in the report PDF. It can be configured by specifying a URL, or by uploading a file. The maximum file size is 16 MB. Defaults to the Grafana logo.
Email branding:
- **Company logo:** Company logo displayed in the report email. It can be configured by specifying a URL, or by uploading a file. The maximum file size is 16 MB. Defaults to the Grafana logo.
- **Email footer:** Toggle to enable the report email footer. Select **Sent by** or **None**.
- **Footer link text:** Text of the link in the report email footer. Defaults to `Grafana`.
- **Footer link URL:** Link of the report email footer.
_Ad hoc filters_ enable you to add key/value filters that are automatically added to all metric queries that use the specified data source. Unlike other variables, you do not use ad hoc filters in queries. Instead, you use ad hoc filters to write filters for existing queries.
{{% admonition type="note" %}}
Not all data sources support ad hoc filters. Examples of those that do include Prometheus, Loki, InfluxDB, and Elasticsearch.
{{% /admonition %}}
1. [Enter general options](#enter-general-options).
Grafana ships with built-in support for Parca, a continuous profiling OSS database for analysis of CPU and memory usage, down to the line number and throughout time. Add it as a data source, and you are ready to query your profiles in [Explore](ref:explore).
## Supported Parca versions
This data source supports these versions of Parca:
- v0.19+
## Configure the Parca data source
To configure basic settings for the data source, complete the following steps:
The first option to configure is the name of your connection:
- **Default** - Toggle to select as the default name in dashboard panels. When you go to a dashboard panel this will be the default selected data source.
### Connection section
- **Prometheus server URL** - The URL of your Prometheus server. If your Prometheus server is local, use `http://localhost:9090`. If it is on a server within a network, this is the URL with port where you are running Prometheus. Example: `http://prometheus.example.orgname:9090`.
{{< admonition type="note" >}}
If you're running Grafana and Prometheus together in different container environments, each localhost refers to its own container. If the server URL is localhost:9090, that means port 9090 inside the Grafana container, not port 9090 on the host machine.

You should use the IP address of the Prometheus container, or the hostname if you are using Docker Compose. Alternatively, you can consider `http://host.docker.internal:9090`.
{{< /admonition >}}

### Authentication section
There are several authentication methods you can choose in the Authentication section.
Use TLS (Transport Layer Security) for an additional layer of security when working with Prometheus.
- **Value** - The value of the header.
## Additional settings
Following are additional configuration options.
### Advanced HTTP settings
- **Allowed cookies** - Specify cookies by name that should be forwarded to the data source. The Grafana proxy deletes all forwarded cookies by default.
- **Timeout** - The HTTP request timeout. This must be in seconds. The default is 30 seconds.
### Alerting
- **Manage alerts via Alerting UI** - Toggle to enable `Alertmanager` integration for this data source.
### Performance
- **Prometheus type** - The type of your Prometheus server. There are four options: `Prometheus`, `Cortex`, `Mimir`, and `Thanos`.
- **Version** - Select the version you are using. Once the Prometheus type has been selected, a list of versions auto-populates using the Prometheus [buildinfo](https://semver.org/) API. The `Cortex` Prometheus type doesn't support this API, so you will need to manually add the version.
- **Cache level** - The browser caching level for editor queries. There are four options: `Low`, `Medium`, `High`, or `None`.
- **Incremental querying (beta)** - Changes the default behavior of relative queries to always request fresh data from the Prometheus instance. Enable this option to decrease database and network load.
- **Disable recording rules (beta)** - Toggle on to disable the recording rules. Enable this option to improve dashboard performance.
### Other
- **Custom query parameters** - Add custom parameters to the Prometheus query URL. For example `timeout`, `partial_response`, `dedup`, or `max_source_resolution`. Multiple parameters should be concatenated together with an '&'.
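For example, combining two of the parameters mentioned above into a single field value might look like this (the values are illustrative):

```
timeout=60s&max_source_resolution=5m
```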
The Pyroscope data source sets how Grafana connects to your Pyroscope database.

You can configure the data source using either the data source interface in Grafana or using a configuration file.
This page explains how to set up and enable the data source capabilities using Grafana.
If you make any changes, select **Save & test** to preserve those changes.

If you're using your own installation of Grafana, you can provision the Pyroscope data source using a YAML configuration file.
For more information about provisioning and available configuration options, refer to [Provisioning Grafana](ref:provisioning-data-sources).
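A minimal provisioning sketch for reference; the data source name and URL are placeholders, and `grafana-pyroscope-datasource` is the plugin type used by the Pyroscope data source:

```yaml
apiVersion: 1

datasources:
  - name: Pyroscope
    type: grafana-pyroscope-datasource
    url: http://localhost:4040
```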
## Before you begin

To configure a Pyroscope data source, you need administrator rights to your Grafana instance and a Pyroscope instance configured to send data to Grafana.

If you're provisioning a Pyroscope data source, then you also need administrative rights on the server hosting your Grafana instance.

## Add or modify a data source
You can use these procedures to configure a new Pyroscope data source or to edit an existing one.
### Create a new data source

To configure basic settings for the data source, complete the following steps:

1. Select **Connections** in the main menu.
1. Enter `Grafana Pyroscope` in the search bar.
1. Select **Grafana Pyroscope**.
1. Select **Add new data source** in the top-right corner of the page.
1. On the **Settings** tab, complete the **Name**, **Connection**, and **Authentication** sections.

   - Use the **Name** field to specify the name used for the data source in panels, queries, and Explore. Toggle the **Default** switch for the data source to be pre-selected for new panels.
   - Under **Connection**, enter the **URL** of the Pyroscope instance. For example, `https://example.com:4100`.
   - Complete the [**Authentication** section](#authentication).

1. Optional: Use **Additional settings** to configure other options.
1. Select **Save & test**.
### Update an existing data source
To modify an existing Pyroscope data source:

1. Select **Connections** in the main menu.
1. Select **Data sources** to view a list of configured data sources.
1. Select the Pyroscope data source you wish to modify.
1. Optional: Use **Additional settings** to configure or modify other options.
1. After completing your updates, select **Save & test**.
## Authentication
Use this section to select an authentication method to access the data source.

{{< admonition type="note" >}}
Use Transport Layer Security (TLS) for an additional layer of security when working with Pyroscope.
For additional information on setting up TLS encryption with Pyroscope, refer to [Pyroscope configuration](https://grafana.com/docs/pyroscope/<PYROSCOPE_VERSION>/configure-server/reference-configuration-parameters/).
{{< /admonition >}}

[//]: # 'Shared content for authentication section procedure in data sources'
## Additional settings
Use the down arrow to expand the **Additional settings** section to view these options.

### Advanced HTTP settings

The Grafana Proxy deletes forwarded cookies. Use the **Allowed cookies** field to specify cookies that should be forwarded to the data source by name.

The **Timeout** field sets the HTTP request timeout in seconds.

### Querying
**Minimum step** is used for queries returning time-series data. The default value is 15 seconds.
Adjusting this option can help prevent gaps when you zoom in to profiling data.
### Private data source connect
This feature is only available in Grafana Cloud.
This option lets you query data that lives within a secured network without opening the network to inbound traffic from Grafana Cloud.
Use the drop-down box to select a configured private data source.
Select **Manage private data source connect** to configure and manage any private data sources you have configured.
[//]: # 'Shared content for authentication section procedure in data sources'
For more information, refer to [Private data source connect](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/).
The Tempo data source sets how Grafana connects to your Tempo database and lets you configure features and integrations with other telemetry signals.

You can configure the data source using either the data source interface in Grafana or using a configuration file.
This page explains how to set up and enable the data source capabilities using Grafana.
If you're using your own installation of Grafana, you can provision the Tempo data source using a YAML configuration file.
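A minimal provisioning sketch for reference; the name and URL are placeholders, and `3200` is Tempo's default HTTP port:

```yaml
apiVersion: 1

datasources:
  - name: Tempo
    type: tempo
    url: http://localhost:3200
```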
Depending upon your tracing environment, you may have more than one Tempo instance.
Grafana supports multiple Tempo data sources.
## Before you begin
To configure a Tempo data source, you need administrator rights to your Grafana instance and a Tempo instance configured to send tracing data to Grafana.
If you're provisioning a Tempo data source, then you also need administrative rights on the server hosting your Grafana instance.
Refer to [Provision the data source](#provision-the-data-source) for next steps.

## Add or modify a data source
You can use these procedures to configure a new Tempo data source or to edit an existing one.
### Add a new data source
Follow these steps to set up a new Tempo data source:
1. Select **Connections** in the main menu.
1. Enter `Tempo` in the search bar.
1. Select **Tempo**.
1. Select **Add new data source** in the top-right corner of the page.
1. On the **Settings** tab, complete the **Name**, **Connection**, and **Authentication** sections.
- Use the **Name** field to specify the name used for the data source in panels, queries, and Explore. Toggle the **Default** switch for the data source to be pre-selected for new panels.
- Under **Connection**, enter the **URL** of the Tempo instance, for example, `https://example.com:4100`.
- Complete the [**Authentication** section](#authentication).
1. Optional: Configure other sections to add capabilities to your tracing data. Refer to the additional procedures for instructions.
1. Select **Save & test**.
### Update an existing data source
To modify an existing Tempo data source:
1. Select **Connections** in the main menu.
1. Select **Data sources** to view a list of configured data sources.
1. Select the Tempo data source you wish to modify.
1. Configure or update additional sections to add capabilities to your tracing data. Refer to the additional procedures for instructions.
1. After completing your updates, select **Save & test**.
## Authentication

Use this section to select an authentication method to access the data source.
{{< admonition type="note" >}}
Use Transport Layer Security (TLS) for an additional layer of security when working with Tempo.
For additional information on setting up TLS encryption with Tempo, refer to [Configure TLS communication](https://grafana.com/docs/tempo/<TEMPO_VERSION>/configuration/network/tls/) and [Tempo configuration](https://grafana.com/docs/tempo/<TEMPO_VERSION>/configuration/).
{{< /admonition >}}
This video explains how to add data sources, including Loki, Tempo, and Mimir, to Grafana and Grafana Cloud. Tempo data source set up starts at 4:58 in the video.
[//]: # 'Shared content for authentication section procedure in data sources'
<!-- The traceQLStreaming toggle will be deprecated in Grafana 11.2 and removed in 11.3. -->
Streaming enables TraceQL query results to be displayed as they become available.
Without streaming, no results are displayed until all results have returned.
- Run Tempo version 2.2 or newer, or Grafana Enterprise Traces (GET) version 2.2 or newer, or use Grafana Cloud Traces.
- For self-managed Tempo or GET instances: If your Tempo or GET instance is behind a load balancer or proxy that doesn't support gRPC or HTTP2, streaming may not work and should be disabled.
### Activate streaming
You can activate streaming by either setting the `traceQLStreaming` [feature toggle](https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/feature-toggles/) to true or by activating the **Streaming** toggle in the Tempo data source.

If you are using Grafana Cloud, the `traceQLStreaming` feature toggle is already set to `true` by default.
If the Tempo data source is set to allow streaming but the `traceQLStreaming` feature toggle is set to `false` in Grafana, streaming still occurs for that data source.
If the data source has streaming disabled and `traceQLStreaming` is set to `true`, streaming happens for that data source.
When streaming is active, it shows as **Enabled** in **Explore**.
To check the status, select Explore in the menu, select your Tempo data source, and expand the **Options** section.

## Trace to logs
The **Trace to logs** setting configures [trace to logs](ref:explore-trace-integration) that's available when you integrate Grafana with Tempo.
Trace to logs can also be used with other tracing data sources, such as Jaeger and Zipkin.


There are two ways to configure the trace to logs feature:
## Trace to metrics

There are two ways to configure the trace to metrics feature:
- Use a basic configuration with a default query, or
- Configure one or more custom queries where you can use a [template language](ref:variable-syntax) to interpolate variables from the trace or span.
Refer to the [Trace to metrics configuration options](#trace-to-metrics-configuration-options) section to learn about the available options.

### Set up a simple configuration
[//]: # 'Shared content for Trace to profiles in the Tempo data source'
For example, `${__span.name}`.
| **\_\_trace.duration** | The duration of the trace. |
| **\_\_trace.name** | The name of the trace. |
## Additional settings

Use the down arrow to expand the **Additional settings** section to view these options.

### Advanced HTTP settings

The Grafana Proxy deletes forwarded cookies. Use the **Allowed cookies** field to specify cookies by name that should be forwarded to the data source.

The **Timeout** field sets the HTTP request timeout in seconds.

### Service graph

The **Service graph** setting configures the [Service Graph](/docs/tempo/latest/metrics-generator/service_graphs/enable-service-graphs/) data.
Configure the **Data source** setting to define in which Prometheus instance the Service Graph data is stored.
To use the Service Graph, refer to the [Service Graph documentation](#use-the-service-graph).
### Node graph
The **Node graph** setting enables the [node graph visualization](ref:node-graph), which isn't activated by default.

Once activated, Grafana displays the node graph above the trace view.
### Tempo search
The **Search** setting configures [Tempo search](/docs/tempo/latest/configuration/#search).
You can configure the **Hide search** setting to hide the search query option in **Explore** if search is not configured in the Tempo instance.
### TraceID query
The **TraceID query** setting modifies how TraceID queries are run. The time range can be used when there are performance issues or timeouts since it will narrow down the search to the defined range. This setting is disabled by default.
You can configure this setting as follows:
| **Time shift start** | Time shift for start of search. Default: `30m`. |
| **Time shift end** | Time shift for end of search. Default: `30m`. |
### Span bar
The **Span bar** setting helps you display additional information in the span bar row.
You can choose one of three options:
| **Duration** | _(Default)_ Displays the span duration on the span bar row. |
| **Tag** | Displays the span tag on the span bar row. You must also specify which tag key to use to get the tag value, such as `component`. |
### Private data source connect
[//]: # 'Shared content for authentication section procedure in data sources'
Use Explore to query, collect, and analyze data for detailed real-time data analysis.
cards:
title_class: pt-0 lh-1
items:
- title: Get started with Explore
href: ./get-started-with-explore/
description: Get started using Explore to create queries and do real-time analysis on your data.
height: 24
- title: Query management
href: ./query-management/
description: Learn how to manage queries in Explore.
height: 24
- title: Query inspector in Explore
href: ./explore-inspector/
description: Learn how to use the Query inspector to troubleshoot issues with your queries.
height: 24
- title: Logs in Explore
href: ./logs-integration/
description: Learn about working with logs and log data in Explore.
height: 24
- title: Traces in Explore
href: ./trace-integration/
description: Learn about working with traces and tracing data in Explore.
height: 24
- title: Correlations editor in Explore
href: ./correlations-editor-in-explore/
description: Learn how to create and use Correlations.
height: 24
---
# Explore
{{< docs/hero-simple key="hero" >}}
Grafana's dashboard UI is all about building dashboards for visualization. Explore strips away the dashboard and panel options so that you can focus on the query. It helps you iterate until you have a working query and then think about building a dashboard.
> Refer to [Role-based access control]({{< relref "../administration/roles-and-permissions/access-control/" >}}) in Grafana Enterprise to understand how you can control access with role-based permissions.
If you just want to explore your data and do not want to create a dashboard, then Explore makes this much easier. If your data source supports graph and table data, then Explore shows the results both as a graph and a table. This allows you to see trends in the data and more details at the same time. See also:
- [Query management in Explore]({{< relref "query-management/" >}})
- [Logs integration in Explore]({{< relref "logs-integration/" >}})
- [Trace integration in Explore]({{< relref "trace-integration/" >}})
- [Correlations Editor in Explore]({{< relref "correlations-editor-in-explore/" >}})
- [Inspector in Explore]({{< relref "explore-inspector/" >}})
## Start exploring
{{< youtube id="1q3YzX2DDM4" >}}
> Refer to [Role-based access control]({{< relref "../administration/roles-and-permissions/access-control/" >}}) in Grafana Enterprise to understand how you can manage Explore with role-based permissions.
In order to access Explore, you must have an editor or an administrator role, unless the [viewers_can_edit option]({{< relref "../setup-grafana/configure-grafana/#viewers_can_edit" >}}) is enabled. Refer to [About users and permissions]({{< relref "../administration/roles-and-permissions/" >}}) for more information on what each role has access to.
{{% admonition type="note" %}}
If you are using Grafana Cloud, open a [support ticket in the Cloud Portal](/profile/org#support) to enable the `viewers_can_edit` option.
{{% /admonition %}}
To access Explore:
1. Click on the Explore icon on the menu bar.
An empty Explore tab opens.
Alternatively, to start with an existing query in a panel, choose the **Explore** option from the panel menu. This opens an Explore tab with the query from the panel and allows you to tweak or iterate on the query outside of your dashboard.
{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-panel-menu-10.1.png" class="docs-image--no-shadow" max-width="650px" caption="Screenshot of the panel menu including the Explore option" >}}
1. Choose your data source from the drop-down in the top left.
You can also click **Open advanced data source picker** to see more options, including adding a data source (Admins only).
1. Write the query using a query editor provided by the selected data source. Refer to the [data sources documentation]({{< relref "../datasources" >}}) to see how to use various query editors.
1. For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../panels-visualizations/query-transform-data" >}}).
1. Run the query using the button in the top right corner.
## Split and compare
The split view provides an easy way to compare visualizations side-by-side or to look at related data together on one page.
To open the split view:
1. Click the split button to duplicate the current query and split the page into two side-by-side queries.
It is possible to select another data source for the new query, which, for example, allows you to compare the same query for two different servers or to compare the staging environment to the production environment.
{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-explore-split-10.1.png" max-width="950px" caption="Screenshot of Explore screen split" >}}
In split view, timepickers for both panels can be linked (if you change one, the other gets changed as well) by clicking on one of the time-sync buttons attached to the timepickers. Linking of timepickers helps with keeping the start and the end times of the split view queries in sync. It ensures that you’re looking at the same time interval in both split panels.
To close the newly created query, click on the Close Split button.
## Content outline
The content outline is a side navigation bar that keeps track of the queries and visualization panels you created in Explore. It allows you to navigate between them quickly.
The content outline also works in a split view. When you are in split view, the content outline is generated for each pane.
To open the content outline:
1. Click the Outline button in the top left corner of the Explore screen.
You can then click on any panel icon in the content outline to navigate to that panel.
### Filter logs in content outline
When using Explore with logs, you can filter the logs in the content outline. You can filter by log level, which is currently supported for Elasticsearch and Loki data sources. To select multiple filters, press Command-click on a Mac or Ctrl+Click in Windows.
{{% admonition type="note" %}}
Log levels only show if the datasource supports the log volume histogram and contains multiple levels. Additionally, the query to the data source may have to format the log lines to see the levels. For example, in Loki, the `logfmt` parser commonly will display log levels.
{{% /admonition %}}
{{< figure src="/media/docs/grafana/explore/screenshot-explore-content-outline-logs-filtering-11.2.png" max-width="950px" caption="Screenshot of Explore content outline logs filtering" >}}
### Pin logs to content outline
When using Explore with logs, you can pin logs to content outline by hovering over a log in the logs panel and clicking on the _Pin to content outline_ icon in the log row menu.
{{< figure src="/media/docs/grafana/explore/screenshot-explore-content-outline-logs-pinning-11.2.png" max-width="450px" caption="Screenshot of Explore content outline logs pinning" >}}
Clicking on a pinned log opens the [log context modal](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/explore/logs-integration/#log-context), showing the log highlighted in context with other logs. From here, you can also open the log in split mode to preserve the time range in the left pane while having the time range specific to that log in the right pane.
## Share Explore URLs
When using Explore, the URL in the browser address bar updates as you make changes to the queries. You can share or bookmark this URL.
{{% admonition type="note" %}}
Explore may generate relatively long URLs, and some tools, like messaging or videoconferencing apps, may truncate messages to a fixed length. In such cases, Explore displays a warning message and loads a default state. If you encounter issues when sharing Explore links in such apps, you can generate shortened links. See [Share shortened link](#share-shortened-link) for more information.
{{% /admonition %}}
### Generating Explore URLs from external tools
Because Explore URLs have a defined structure, you can build a URL from external tools and open it in Grafana. The URL structure is:
- `schema_version` is the schema version (should be set to the latest version which is `1`)
- `panes` is a url-encoded JSON object of panes, where each key is the pane ID and each value is an object matching the following schema:
```
{
datasource: string; // the pane's root datasource UID, or `-- Mixed --` for mixed datasources
queries: {
refId: string; // an alphanumeric identifier for this query, must be unique within the pane, i.e. "A", "B", "C", etc.
datasource: {
uid: string; // the query's datasource UID ie: "AD7864H6422"
type: string; // the query's datasource type-id, i.e: "loki"
}
// ... any other datasource-specific query parameters
}[]; // array of queries for this pane
range: {
from: string; // the start time, in milliseconds since epoch
to: string; // the end time, in milliseconds since epoch
}
}
```
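For illustration, a single-pane `panes` object matching this schema might look like the following before URL encoding; the pane ID `aaa`, the datasource UID, and the Loki `expr` query are placeholder values:

```json
{
  "aaa": {
    "datasource": "AD7864H6422",
    "queries": [
      {
        "refId": "A",
        "datasource": { "uid": "AD7864H6422", "type": "loki" },
        "expr": "{job=\"app\"}"
      }
    ],
    "range": { "from": "now-1h", "to": "now" }
  }
}
```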
{{% admonition type="note" %}}
The `from` and `to` also accept relative ranges defined in [Time units and relative ranges]({{< relref "../dashboards/use-dashboards/#time-units-and-relative-ranges" >}}).
{{% /admonition %}}
## Share shortened link
{{% admonition type="note" %}}
Available in Grafana 7.3 and later versions.
{{% /admonition %}}
The Share shortened link capability allows you to create smaller and simpler URLs of the format /goto/:uid instead of using longer URLs with query parameters. To create a shortened link to the executed query, click the **Share** option in the Explore toolbar.
---
A shortened link that is not accessed will automatically get deleted after a [configurable period](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/#short_links) (defaulting to seven days). If a link is used at least once, it won't be deleted.
## Explore
Explore is your gateway for querying, analyzing, and aggregating data in Grafana. It allows you to visually explore and iterate until you develop a working query or set of queries for building visualizations and conducting data analysis. If your data source supports graph and table data, there's no need to create a dashboard, as Explore can display the results in both formats. This facilitates quick, detailed, real-time data analysis.
With Explore you can:
- Create visualizations to integrate into your dashboards.
- Create queries using mixed data sources.
- Create multiple queries within a single interface.
- Understand the shape of your data across various data sources.
- Perform real-time data exploration and analysis.
Key features include:
- Query editor, based on specific data source, to create and iterate queries.
- [Query history](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/explore/query-management/) to track and maintain your queries.
- [Query inspector](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/explore/explore-inspector/) to help troubleshoot query performance.
Watch the following video to get started using Explore:
{{<youtubeid="1q3YzX2DDM4">}}
## Before you begin
To access Explore, you must have either the `editor` or `administrator` role, unless the [`viewers_can_edit` option](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/#viewers_can_edit) is enabled. Refer to [Roles and permissions](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/administration/roles-and-permissions/) for more information on what each role can access.
Refer to [Role-based access control (RBAC)](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/administration/roles-and-permissions/access-control/) in Grafana Enterprise to understand how you can manage Explore with role-based permissions.
{{< admonition type="note" >}}
If you are using Grafana Cloud, open a [support ticket in the Cloud Portal](https://grafana.com/auth/sign-in) to enable the `viewers_can_edit` option.
{{< /admonition >}}
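On self-managed instances, a minimal `grafana.ini` sketch for enabling this option (assuming the setting lives in the `[users]` section, as in recent configuration documentation):

```ini
[users]
# Allow users with the Viewer role to use Explore and edit panel queries
viewers_can_edit = true
```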
## Explore elements
Explore consists of a toolbar, an outline, a query editor where you can add multiple queries, a query history, and a query inspector.
- **Outline** - Keeps track of the queries and visualization panels created in Explore. Refer to [Content outline](#content-outline) for more detail.
- **Toolbar** - Provides quick access to frequently used tools and settings.
- **Data source picker** - Select a data source from the drop-down menu.
- **Split** - Click to compare visualizations side by side. Refer to [Split and compare](#split-and-compare) for additional detail.
- **Add** - Click to add your exploration to a dashboard. You can also use this to declare an incident, create a forecast, detect outliers, or run an investigation.
- **Time picker** - Select a time range from the time picker. You can also enter an absolute time range. Refer to [Time picker](#time-picker) for more information.
- **Run query** - Click to run your query.
- **Query editor** - Interface where you construct the query for a specific data source. Query editor elements differ based on data source. In order to run queries across multiple data sources you need to select **Mixed** from the data source picker.
- **+Add query** - Add additional queries.
- **Query history** - Query history contains the list of queries that you created in Explore. Refer to [Query history](/docs/grafana/<GRAFANA_VERSION>/explore/query-management/#query-history) for detailed information on working with your query history.
- **Query inspector** - Provides detailed statistics regarding your query. Inspector functions as a kind of debugging tool that "inspects" your query. It provides query statistics under **Stats**, request response time under **Query**, data frame details under **{} JSON**, and the shape of your data under **Data**. Refer to [Query inspector in Explore](/docs/grafana/latest/explore/explore-inspector/) for additional information.
## Access Explore
To access Explore:
1. Click on **Explore** in the left side menu.
To start with an existing query from a dashboard panel, select the Explore option from the Panel menu in the upper right. This opens an Explore page with the panel's query, enabling you to tweak or iterate the query outside your dashboard.
{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-panel-menu-10.1.png" class="docs-image--no-shadow" caption="Panel menu with Explore option" >}}
1. Select a data source from the drop-down in the upper left.
1. Using the query editor provided for the specific data source, begin writing your query. Each query editor differs based on each data source's unique elements.
Some query editors provide a **Kick start your query** option, which gives you a list of basic pre-written queries. Refer to [Use query editors](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/#use-query-editors) to see how to use various query editors. For general information on querying data sources in Grafana, refer to [Query and transform data](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/).
Depending on the data source, certain query editors allow you to select the label or labels to add to your query. Labels are fields consisting of key/value pairs that represent information in the data. Some data sources also allow selecting fields.
1. Click **Run query** in the upper right to run your query.
## Content outline
The content outline is a side navigation bar that keeps track of the queries and visualizations you created in Explore. It allows you to navigate between them quickly.
The content outline works in a split view, with a separate outline generated for each pane.
To open the content outline:
1. Click the Outline button in the top left corner of the Explore screen.
You can then click on any panel icon in the content outline to navigate to that panel.
## Split and compare
The split view enables easy side-by-side comparison of visualizations or simultaneous viewing of related data on a single page.
To open the split view:
1. Click the split button to duplicate the current query and split the page into two side-by-side queries.
1. Run and re-run queries as often as needed.
You can select a different data source, or different metrics and label filters for the new query, allowing you to compare the same query across two different servers or compare the staging environment with the production environment.
{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-explore-split-10.1.png" max-width="950px" caption="Screenshot of Explore screen split" >}}
You can also link the time pickers for both panels by clicking on one of the time-sync buttons attached to the time pickers. When linked, changing the time in one panel automatically updates the other, keeping the start and end times synchronized. This ensures that both split panels display data for the same time interval.
Click **Close** to quit split view.
## Time picker
Use the time picker to select a time range for your query. The default is **last hour**. You can select a different option from the dropdown or use an absolute time range. You can also change the timezone associated with the query, or use a fiscal year.
1. Click **Change time settings** to change the timezone or apply a fiscal year.
Refer to [Set dashboard time range](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/dashboards/use-dashboards/#set-dashboard-time-range) for more information on absolute and relative time ranges. You can also [control the time range using a URL](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/dashboards/use-dashboards/#control-the-time-range-using-a-url).
## Mixed data source
Select **Mixed** from the data source dropdown to run queries across multiple data sources in the same panel. When you select Mixed, you can select a different data source for each new query that you add.
## Share Explore URLs
When using Explore, the URL in the browser address bar updates as you make changes to the queries. You can share or bookmark this URL.
{{% admonition type="note" %}}
Explore may generate long URLs, which some tools, like messaging or videoconferencing applications, might truncate due to fixed message lengths. In such cases, Explore displays a warning and loads a default state.
If you encounter issues when sharing Explore links in these applications, you can generate shortened links. See [Share shortened link](#share-shortened-link) for more information.
{{% /admonition %}}
### Generate Explore URLs from external tools
Because Explore URLs have a defined structure, you can build a URL from external tools and open it in Grafana. The URL structure is:
- `schema_version` is the schema version (should be set to the latest version, which is `1`)
- `panes` is a URL-encoded JSON object of panes, where each key is the pane ID and each value is an object matching the following schema:
```
{
  datasource: string; // the pane's root datasource UID, or `-- Mixed --` for mixed datasources
  queries: {
    refId: string; // an alphanumeric identifier for this query, unique within the pane, for example "A", "B", "C"
    datasource: {
      uid: string; // the query's datasource UID, for example "AD7864H6422"
      type: string; // the query's datasource type ID, for example "loki"
    }
    // ... any other datasource-specific query parameters
  }[]; // array of queries for this pane
  range: {
    from: string; // the start time, in milliseconds since epoch
    to: string; // the end time, in milliseconds since epoch
  }
}
```
{{< admonition type="note" >}}
The `from` and `to` also accept relative ranges defined in [Time units and relative ranges](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/dashboards/use-dashboards/#time-units-and-relative-ranges).
{{< /admonition >}}
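For illustration, a hedged sketch of what such a URL can look like, assuming the schema version and panes are passed as `schemaVersion` and `panes` query string parameters (host, pane ID, and datasource UID are placeholders, and the `panes` value must be URL-encoded in practice):

```
http://<grafana_url>/explore?schemaVersion=1&panes=<url-encoded panes JSON>

# Example (unencoded) panes value with a single pane "aaa" querying a Loki data source:
{"aaa":{"datasource":"my-loki-uid","queries":[{"refId":"A","datasource":{"uid":"my-loki-uid","type":"loki"},"expr":"{job=\"app\"}"}],"range":{"from":"now-1h","to":"now"}}}
```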
## Share shortened link
{{<admonitiontype="note">}}
Available in Grafana 7.3 and later versions.
{{</admonition>}}
The Share shortened link capability allows you to create smaller and simpler URLs of the format `/goto/:uid` instead of using longer URLs with query parameters. To create a shortened link to the executed query, click the **Share** option in the Explore toolbar.
A shortened link that's not accessed automatically gets deleted after a [configurable period](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/#short_links), which defaults to seven days. However, if the link is accessed at least once, it will not be deleted.
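A sketch of tuning that retention period in `grafana.ini`, assuming the linked `[short_links]` section exposes an `expire_time` option measured in days:

```ini
[short_links]
# Delete short links that are never visited after 14 days instead of the default 7
expire_time = 14
```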
### Share shortened links with absolute time
{{<admonitiontype="note">}}
Available in Grafana 10.3 and later versions.
{{</admonition>}}
Shortened links have two options: relative time (e.g., from two hours ago to now) or absolute time (e.g., from 8am to 10am). By default, sharing a shortened link copies the selected time range, whether it's relative or absolute.
To create a short link with an absolute time:
1. Click the dropdown button next to the share shortened link button.
1. Select one of the options under **Time-Sync URL Links**.
This ensures that anyone receiving the link will see the same data you see, regardless of when they open it. Your selected time range will remain unaffected.
For each generated **Trend** field value, a calculation function can be selected.
> **Note:** This transformation is available in Grafana 9.5+ as an opt-in beta feature. Modify the Grafana [configuration file][] to use it.
### Transpose
Use this transformation to pivot the data frame, converting rows into columns and columns into rows. This transformation is particularly useful when you want to switch the orientation of your data to better suit your visualization needs.
If the fields have multiple data types, the transposed values default to the string type.
**Before Transformation:**
| env | January | February |
| ---- | ------- | -------- |
| prod | 1 | 2 |
| dev | 3 | 4 |
**After applying transpose transformation:**
| Field | prod | dev |
| -------- | ---- | --- |
| January | 1 | 3 |
| February | 2 | 4 |
{{< figure src="/media/docs/grafana/transformations/screenshot-grafana-11-2-transpose-transformation.png" class="docs-image--no-shadow" max-width="1100px" alt="Before and after transpose transformation" >}}
### Regression analysis
Use this transformation to create a new data frame containing values predicted by a statistical model. This is useful for finding a trend in chaotic data. It works by fitting a mathematical function to the data, using either linear or polynomial regression. The data frame can then be used in a visualization to display a trendline.
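As a general sketch of the underlying idea (not specific to Grafana's implementation), simple linear regression fits a line to the points by minimizing the squared error:

```
\hat{y} = \beta_0 + \beta_1 x,
\qquad
(\beta_0, \beta_1) = \arg\min_{b_0,\, b_1} \sum_{i=1}^{n} \left( y_i - b_0 - b_1 x_i \right)^2
```

Polynomial regression works the same way, except the fitted function also includes higher-order powers of `x`.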
How many seconds the OAuth state cookie lives before being deleted. Default is `600` (seconds).
Administrators can increase this if they experience OAuth login state mismatch errors.
### oauth_refresh_token_server_lock_min_wait_ms
Minimum wait time in milliseconds for the server lock retry mechanism. Default is `1000` (milliseconds). The server lock retry mechanism is used to prevent multiple Grafana instances from simultaneously refreshing OAuth tokens. This mechanism waits at least this amount of time before retrying to acquire the server lock.
There are five retries in total, so with the default value, the total wait time (for acquiring the lock) is at least 5 seconds (the wait time between retries is calculated as random(n, n + 500)), which means that the maximum token refresh duration must be less than 5-6 seconds.
If you experience issues with the OAuth token refresh mechanism, you can increase this value to allow more time for the token refresh to complete.
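A minimal `grafana.ini` sketch for this setting, assuming it lives in the `[auth]` section alongside the other OAuth options:

```ini
[auth]
# Give slow identity providers more time before retrying the token-refresh server lock
oauth_refresh_token_server_lock_min_wait_ms = 2000
```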
| `openSearchBackendFlowEnabled` | Enables the backend query flow for Open Search datasource plugin | Yes |
| `cloudWatchRoundUpEndTime` | Round up end time for metric queries to the next minute to avoid missing data | Yes |
## Public preview feature toggles
| Feature toggle name | Description |
| ------------------- | ----------- |
| `onPremToCloudMigrations` | Enable the Grafana Migration Assistant, which helps you easily migrate on-prem dashboards, folders, and data source configurations to your Grafana Cloud stack. |
| `newPDFRendering` | New implementation for the dashboard-to-PDF rendering |
| `ssoSettingsSAML` | Use the new SSO Settings API to configure the SAML connector |
| `accessActionSets` | Introduces action sets for resource permissions. Also ensures that all folder editors and admins can create subfolders without needing any additional permissions. |
| `azureMonitorPrometheusExemplars` | Allows configuration of Azure Monitor as a data source that can provide Prometheus exemplars |
| `expressionParser` | Enable new expression parser |
| `disableNumericMetricsSortingInExpressions` | In server-side expressions, disable the sorting of numeric-kind metrics by their metric name or labels. |
| `queryLibrary` | Enables Query Library feature in Explore |
| `logsExploreTableDefaultVisualization` | Sets the logs table as default visualisation in logs explore |
Experimental features might be changed or removed without prior notice.
| `notificationBanner` | Enables the notification banner UI and API |
| `adhocFilterOneOf` | Exposes a new 'one of' operator for ad-hoc filters. This operator allows users to filter by multiple values in a single filter. |
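As a sketch, feature toggles such as those listed above are typically enabled through the `[feature_toggles]` section of `grafana.ini`; the toggles shown are examples taken from the tables, not a recommendation:

```ini
[feature_toggles]
# Comma-separated list of feature toggles to enable
enable = onPremToCloudMigrations,newPDFRendering
```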
| **Single logout** | The SAML single logout feature enables users to log out from all applications associated with the current IdP session established using SAML SSO. For more information, refer to [SAML single logout documentation]({{< relref "../saml#single-logout" >}}). |
| **Identity provider initiated login** | Enables users to log in to Grafana directly from the SAML IdP. For more information, refer to [IdP initiated login documentation]({{< relref "../saml#idp-initiated-single-sign-on-sso" >}}). |
1. Click **Next: Sign requests**.

### 2. Sign Requests Section

1. In the **Sign requests** field, specify whether you want the outgoing requests to be signed, and, if so, then:

   1. Provide a certificate and a private key that will be used by the service provider (Grafana) and the SAML IdP.

      Use the [PKCS #8](https://en.wikipedia.org/wiki/PKCS_8) format to issue the private key.

      For more information, refer to an [example on how to generate SAML credentials]({{< relref "../saml#generate-private-key-for-saml-authentication" >}}).

   1. Choose which signature algorithm should be used.

      The SAML standard recommends using a digital signature for some types of messages, like authentication or logout requests, to avoid [man-in-the-middle attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack).
The Alerting Provisioning HTTP API can only be used to manage Grafana-managed alerts. To manage data source-managed alert rules, you can use the following tools:
- [cortex-tools](https://github.com/grafana/cortex-tools#cortextool): to interact with the Cortex alertmanager and ruler configuration.
- [lokitool](https://grafana.com/docs/loki/<GRAFANA_VERSION>/alert/#lokitool): to configure the Loki Ruler.
Alternatively, the [Grafana Alerting API](https://editor.swagger.io/?url=https://raw.githubusercontent.com/grafana/grafana/main/pkg/services/ngalert/api/tooling/post.json) can be used to access data from data source-managed alerts. This API is primarily intended for internal usage, with the exception of the `/api/v1/provisioning/` endpoints. It's important to note that internal APIs may undergo changes without prior notice and are not officially supported for user consumption.
For Prometheus, `amtool` can also be used to interact with the [AlertManager API](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/prometheus/alertmanager/main/api/v2/openapi.yaml#/).
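For example, a hedged sketch of calling one of the supported `/api/v1/provisioning/` endpoints from the command line; the host, port, and service account token are placeholders, and the exact path may differ between Grafana versions:

```sh
# List Grafana-managed alert rules through the Alerting provisioning API
curl -s \
  -H "Authorization: Bearer $SERVICE_ACCOUNT_TOKEN" \
  http://localhost:3000/api/v1/provisioning/alert-rules
```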
## Paths
### <span id="route-delete-alert-rule"></span> Delete a specific alert rule by UID. (_RouteDeleteAlertRule_)
[//]: # 'If you make changes to this file, verify that the meaning and content are not changed in any place where the file is included.'
[//]: # 'Any links should be fully qualified and not relative: /docs/grafana/ instead of ../grafana/.'
<!-- Authentication procedure from shared file -->
To set up authentication:
1. Select an authentication method from the drop-down list:
- **Basic authentication**: Authenticates your data source using a username and password
- **Forward OAuth identity**: Forwards the OAuth access token and the OIDC ID token, if available, of the user querying to the data source
- **No authentication**: No authentication is required to access the data source
1. For **Basic authentication** only: Enter the **User** and **Password**.
1. Optional: Complete the **TLS settings** for additional security methods.
**TLS Client Authentication**
: Toggle on to use client authentication. When enabled, it adds the **Server name**, **Client cert**, and **Client key** fields. The client provides a certificate that is validated by the server to establish the client's trusted identity. The client key encrypts the data between client and server. These details are encrypted and stored in the Grafana database.
**Add self-signed certificate**
: Activate this option to use a self-signed TLS certificate. You can add your own Certificate Authority (CA) certificate on top of one generated by a certificate authority as an additional security measure.
**Skip TLS certificate validation**
: When activated, this bypasses TLS certificate verification. This isn't recommended unless absolutely necessary for testing.

1. Optional: Add **HTTP Headers**. You can pass along additional context and metadata about the request and response. Select **Add header** to add **Header** and **Value** fields.
1. Select **Save & test** to preserve your changes.
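Because this guide focuses on file-based provisioning, the following YAML is a hedged sketch of how the same authentication and TLS settings can look for a provisioned data source; field names such as `tlsAuth`, `tlsAuthWithCACert`, and the `secureJsonData` keys follow common data source conventions and can vary by data source type:

```yaml
apiVersion: 1

datasources:
  - name: Example
    type: prometheus
    url: https://prometheus.example.com
    basicAuth: true
    basicAuthUser: $USER
    jsonData:
      tlsAuth: true # TLS client authentication
      tlsAuthWithCACert: true # validate against a self-signed or custom CA certificate
    secureJsonData:
      basicAuthPassword: $PASSWORD
      tlsCACert: $CA_CERT
      tlsClientCert: $CLIENT_CERT
      tlsClientKey: $CLIENT_KEY
```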
[//]: # 'If you make changes to this file, verify that the meaning and content are not changed in any place where the file is included.'
[//]: # 'Any links should be fully qualified and not relative: /docs/grafana/ instead of ../grafana/.'
<!-- Procedure for using private data source connect section in the data sources -->
{{< admonition type="note" >}}
This feature is only available in Grafana Cloud.
{{< /admonition >}}
Use private data source connect (PDC) to connect to and query data within a secure network without opening that network to inbound traffic from Grafana Cloud.
Refer to [Private data source connect](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/) for more information on how PDC works and [Configure Grafana private data source connect (PDC)](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/configure-pdc/#configure-grafana-private-data-source-connect-pdc) for steps on setting up a PDC connection.
Use the drop-down list to select a configured private data source. If you make changes, select **Test & save** to preserve your changes.
Use **Manage private data source connect** to configure and manage any private data sources you have configured.
Using Trace to profiles, you can use Grafana’s ability to correlate different signals by adding the functionality to link between traces and profiles.
**Trace to profiles** lets you link your Grafana Pyroscope data source to tracing data.
When configured, this connection lets you run queries from a trace span into the profile data using **Explore**.
Each span links to your queries. Clicking a link runs the query in a split panel.
If tags are configured, Grafana dynamically inserts the span attribute values into the query.
The query runs over the time range of the (span start time - 60 seconds) to (span end time + 60 seconds).
{{< youtube id="AG8VzfFMLxo" >}}
There are two ways to configure the trace to profiles feature:
- Use a basic configuration with default query, or
- Configure a custom query where you can use a template language to interpolate variables from the trace or span.
{{< admonition type="note" >}}
Trace to profiles requires a Tempo data source with **Trace to profiles** configured and a [Grafana Pyroscope data source](/docs/grafana/<GRAFANA_VERSION>/datasources/grafana-pyroscope/).
As with traces, your application needs to be instrumented to emit profiling data. For more information, refer to [Linking tracing and profiling with span profiles](/docs/pyroscope/<PYROSCOPE_VERSION>/configure-client/trace-span-profiles/).
{{< /admonition >}}
To use trace to profiles, you must have a configured Grafana Pyroscope data source.
For more information, refer to the [Grafana Pyroscope data source](/docs/grafana/<GRAFANA_VERSION>/datasources/grafana-pyroscope/) documentation.
**Embedded flame graphs** are also inserted into each span details section that has a linked profile.
This lets you see resource consumption in a flame graph visualization for each span without having to navigate away from the current view.
Hover over a particular block in the flame graph to see more details about the consumed resources.
## Use a basic configuration
To use a basic configuration, follow these steps:
1. Select one or more profile types to use in the query. Select the drop-down and choose options from the menu.
The profile type or app must be selected for the query to be valid. Grafana doesn't show any data if the profile type or app isn’t selected when a query runs.

1. Select **Save and Test**.
To use a custom query with the configuration, follow these steps:
1. Select a Pyroscope data source in the **Data source** drop-down.
1. Optional: Choose any tags to use in the query. If left blank, the default values of `service.name` and `service.namespace` are used.
These tags can be used in the custom query with the `${__tags}` variable. This variable interpolates the mapped tags as a list in an appropriate syntax for the data source. Only tags present in the span are included. Tags that aren't present are omitted.
You can also configure a name for the tag. Tag names are useful where the tag has dots in the name and the target data source doesn't allow using dots in labels. For example, you can remap `service.name` to `service_name`. If you don’t map any tags here, you can still use any tag in the query, for example: `method="${__span.tags.method}"`. Learn more about [custom query variables](/docs/grafana/<GRAFANA_VERSION>/datasources/tempo/configure-tempo-data-source/#custom-query-variables).
1. Select one or more profile types to use in the query. Select the drop-down and choose options from the menu.
1. Switch on **Use custom query** to enter a custom query.
1. Specify a custom query to be used to query profile data. You can use various variables to make that query relevant for the current span. The link shows only if all the variables are interpolated with non-empty values, to prevent creating an invalid query. You can interpolate the configured tags using the `$__tags` keyword.
1. Select **Save and Test**.
## Configuration options
The following table describes options for configuring your **Trace to profiles** settings:
| Setting name         | Description |
| -------------------- | ----------- |
| **Data source**      | Defines the target data source. You can select a Pyroscope \[profiling\] data source. |
| **Tags**             | Defines the tags to use in the profile query. Default: `cluster`, `hostname`, `namespace`, `pod`, `service.name`, `service.namespace`. You can change the tag name, for example, to remove dots from the name if they're not allowed in the target data source. For example, map `http.status` to `http_status`. |
| **Profile type**     | Defines the profile type that is used in the query. |
| **Use custom query** | Toggles use of a custom query with interpolation. |
| **Query**            | Input to write a custom query. Use variable interpolation to customize it with variables from the span. |
- Receive firing and resolved alert notifications in a public webhook.
<!-- INTERACTIVE ignore START -->
{{< admonition type="tip" >}}
Before you dive in, remember that you can [explore advanced topics like alert instances and notification routing](http://grafana.com/tutorials/alerting-get-started-pt2/) in the second part of this guide.
{{< /admonition >}}
<!-- INTERACTIVE ignore END -->
{{< docs/ignore >}}
> Before you dive in, remember that you can [explore advanced topics like alert instances and notification routing](http://grafana.com/tutorials/alerting-get-started-pt2/) in the second part of this guide.
{{< /docs/ignore >}}
<!-- INTERACTIVE page intro.md END -->
## Learn more

<!-- INTERACTIVE ignore START -->

{{< admonition type="tip" >}}
Advance your skills by exploring [alert instances and notification routing](http://grafana.com/tutorials/alerting-get-started-pt2/) in Part 2 of your learning journey.
{{< /admonition >}}

<!-- INTERACTIVE ignore END -->

{{< docs/ignore >}}

> Advance your skills by exploring [alert instances and notification routing](http://grafana.com/tutorials/alerting-get-started-pt2/) in Part 2 of your learning journey.

{{< /docs/ignore >}}

## Summary

In this tutorial, you have learned how to set up a contact point, create an alert, and send alert notifications to a public webhook. By following these steps, you’ve gained a foundational understanding of how to leverage Grafana Alerting capabilities to monitor and respond to events of interest in your data.

Feel free to experiment with different [contact points](https://grafana.com/docs/grafana/latest/alerting/configure-notifications/manage-contact-points/) to customize your alert notifications and discover the configuration that best suits your needs.

If you run into any problems, you are welcome to post questions in our [Grafana Community forum](https://community.grafana.com/).
{{< figure src="/media/tutorials/grafana-alert-on-dashboard.png" alt="A panel in a Grafana dashboard with alerting and annotations configured" caption="Displaying Grafana Alerts on a dashboard" >}}
<!-- INTERACTIVE ignore START -->
{{< admonition type="tip" >}}
Check out our [advanced alerting tutorial](http://grafana.com/tutorials/alerting-get-started-pt2/) for more insights and tips.
{{< /admonition >}}
<!-- INTERACTIVE ignore END -->
{{< docs/ignore >}}
> Check out our [advanced alerting tutorial](http://grafana.com/tutorials/alerting-get-started-pt2/) for more insights and tips.
{{< /docs/ignore >}}
## Summary
In this tutorial, you learned about fundamental features of Grafana. To do so, we ran several Docker containers on your local machine. When you are ready to clean up this local tutorial environment, run the cleanup command below.
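Assuming the environment was started with Docker Compose, a typical cleanup command looks like this:

```sh
# Stop the tutorial containers and remove their volumes
docker compose down -v
```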
This directory contains two apps: `myorg-componentconsumer-app` and `myorg-componentexposer-app`, which is nested inside `myorg-componentconsumer-app`.
`myorg-componentexposer-app` exposes a simple React component using the [`exposeComponent`](https://grafana.com/developers/plugin-tools/reference/ui-extensions#exposecomponent) API. `myorg-componentconsumer-app`, in turn, consumes this component using the [`usePluginComponent`](https://grafana.com/developers/plugin-tools/reference/ui-extensions#useplugincomponent) hook.
To test this app:
```sh
# start e2e test instance (it will install this plugin)
PORT=3000 ./scripts/grafana-server/start-server
# run Playwright tests using Playwright VSCode extension or with the following script