@@ -179,7 +179,7 @@ The Ruler's Prometheus compatibility further accentuates the marriage between me
### Black box monitoring
We don't always control the source code of applications we run. Load balancers and a myriad of other components, both open source and closed third-party, support our applications while they don't expose the metrics we want. Some don't expose any metrics at all. Loki's alerting and recording rules can produce metrics and alert on the state of the system, bringing the components into our observability stack by using the logs. This is an incredibly powerful way to introduce advanced observability into legacy architectures.
We don't always control the source code of applications we run. Load balancers and a myriad of other components, both open source and closed third-party, support our applications but don't expose the metrics we want. Some don't expose any metrics at all. The Loki alerting and recording rules can produce metrics and alert on the state of the system, bringing these components into our observability stack by using the logs. This is an incredibly powerful way to introduce advanced observability into legacy architectures.
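To make this concrete, here is a hedged sketch of a Loki ruler rule file for such a component; the `job` label, metric names, and thresholds are hypothetical and would need to match your own log labels:

```yaml
groups:
  - name: load-balancer-black-box
    rules:
      # Recording rule: derive a request-rate metric purely from the component's logs.
      - record: loadbalancer:requests:rate1m
        expr: sum by (instance) (rate({job="haproxy"}[1m]))
      # Alerting rule: fire when the logs show a sustained burst of server errors.
      - alert: LoadBalancerServerErrors
        expr: sum(rate({job="haproxy"} |= " 500 " [5m])) > 10
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: Load balancer is logging a high rate of server errors.
```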
@@ -27,7 +27,7 @@ You can find more detailed information about all of the storage options in the [
## Single Store
Single Store refers to using object storage as the storage medium for both Loki's index as well as its data ("chunks"). There are two supported modes:
Single Store refers to using object storage as the storage medium for both the Loki index and its data ("chunks"). There are two supported modes:
### TSDB (recommended)
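As a minimal sketch of a Single Store period using the TSDB index (the date, schema version, and `s3` object store are assumptions to adapt to your setup):

```yaml
schema_config:
  configs:
    - from: 2024-04-01     # first day this schema period applies (assumed date)
      store: tsdb          # TSDB index files, shipped to object storage
      object_store: s3     # chunks and index share the same object store
      schema: v13
      index:
        prefix: index_
        period: 24h
```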
@@ -83,7 +83,7 @@ You may use any substitutable services, such as those that implement the S3 API
### Cassandra (deprecated)
Cassandra is a popular database and one of Loki's possible chunk stores and is production safe.
Cassandra is a popular database and one of the possible chunk stores for Loki. It is production safe.
{{< collapse title="Title of hidden content" >}}
This storage type for chunks is deprecated and may be removed in future major versions of Loki.
@@ -9,7 +9,7 @@ description: Provides an overview of the steps for implementing Grafana Loki to
{{< youtube id="1uk8LtQqsZQ" >}}
Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.
Because all Loki implementations are unique, the installation process is
different for every customer. But there are some steps in the process that
@@ -26,13 +26,13 @@ To collect logs and view your log data generally involves the following steps:
1. Deploy [Grafana Alloy](https://grafana.com/docs/alloy/latest/) to collect logs from your applications.
1. On Kubernetes, deploy Grafana Alloy using the Helm chart. Configure Grafana Alloy to scrape logs from your Kubernetes cluster, and add your Loki endpoint details. See the following section for an example Grafana Alloy configuration file.
1. Add [labels](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/) to your logs following our [best practices](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/bp-labels/). Most Loki users start by adding labels which describe where the logs are coming from (region, cluster, environment, etc.).
1. Deploy [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/) or [Grafana Cloud](https://grafana.com/docs/grafana-cloud/quickstart/) and configure a [Loki datasource](https://grafana.com/docs/grafana/latest/datasources/loki/configure-loki-data-source/).
1. Deploy [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/) or [Grafana Cloud](https://grafana.com/docs/grafana-cloud/quickstart/) and configure a [Loki datasource](https://grafana.com/docs/grafana/latest/datasources/loki/configure-loki-data-source/).
1. Select the [Explore feature](https://grafana.com/docs/grafana/latest/explore/) in the Grafana main menu. To [view logs in Explore](https://grafana.com/docs/grafana/latest/explore/logs-integration/):
1. Pick a time range.
1. Choose the Loki datasource.
1. Choose the Loki datasource.
1. Use [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/) in the [query editor](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/), use the Builder view to explore your labels, or select from sample pre-configured queries using the **Kick start your query** button.
**Next steps:** Learn more about Loki’s query language, [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/).
**Next steps:** Learn more about the Loki query language, [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/).
## Example Grafana Alloy and Agent configuration files to ship Kubernetes Pod logs to Loki
description: Describes the Grafana Loki architecture.
weight: 400
aliases:
- ../architecture/
@@ -10,8 +10,8 @@ aliases:
# Loki architecture
Grafana Loki has a microservices-based architecture and is designed to run as a horizontally scalable, distributed system.
The system has multiple components that can run separately and in parallel.
Grafana Loki's design compiles the code for all components into a single binary or Docker image.
The system has multiple components that can run separately and in parallel. The
Grafana Loki design compiles the code for all components into a single binary or Docker image.
The `-target` command-line flag controls which component(s) that binary will behave as.
To get started easily, run Grafana Loki in "single binary" mode with all components running simultaneously in one process, or in "simple scalable deployment" mode, which groups components into read, write, and backend parts.
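For illustration only, a hypothetical Docker Compose fragment that starts the single binary with every component; the image tag and port mapping are assumptions, and `-target` could instead be set to `read`, `write`, or `backend` for the simple scalable deployment:

```yaml
services:
  loki:
    image: grafana/loki:latest                     # assumed tag; pin a real version in practice
    command:
      - -config.file=/etc/loki/local-config.yaml   # configuration bundled with the image
      - -target=all                                # run every component in one process
    ports:
      - "3100:3100"                                # Loki HTTP API
```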
@@ -20,7 +20,7 @@ Grafana Loki is designed to easily redeploy a cluster under a different mode as
For more information, refer to [Deployment modes]({{< relref "./deployment-modes" >}}) and [Components]({{< relref "./components" >}}).
@@ -123,7 +123,7 @@ Now instead of a regex, we could do this:
Hopefully now you are starting to see the power of labels. By using a single label, you can query many streams. By combining several different labels, you can create very flexible log queries.
Labels are the index to Loki's log data. They are used to find the compressed log content, which is stored separately as chunks. Every unique combination of label and values defines a stream, and logs for a stream are batched up, compressed, and stored as chunks.
Labels are the index to Loki log data. They are used to find the compressed log content, which is stored separately as chunks. Every unique combination of label and values defines a stream, and logs for a stream are batched up, compressed, and stored as chunks.
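As a sketch of how label sets map to streams, consider a hypothetical Promtail scrape configuration (paths and label values are made up); each distinct `{job, env}` combination below produces its own stream, and therefore its own set of chunks:

```yaml
scrape_configs:
  - job_name: app-logs
    static_configs:
      - targets: [localhost]
        labels:
          job: app
          env: prod                        # {job="app", env="prod"} -> one stream
          __path__: /var/log/app/*.log
      - targets: [localhost]
        labels:
          job: app
          env: dev                         # {job="app", env="dev"} -> a different stream
          __path__: /var/log/app-dev/*.log
```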
For Loki to be efficient and cost-effective, we have to use labels responsibly. The next section will explore this in more detail.
@@ -32,7 +32,7 @@ A typical Loki-based logging stack consists of 3 components:
- **Scalability** - Loki is designed for scalability, and can scale from as small as running on a Raspberry Pi to ingesting petabytes a day.
In its most common deployment, “simple scalable mode”, Loki decouples requests into separate read and write paths, so that you can independently scale them, which leads to flexible large-scale installations that can quickly adapt to meet your workload at any given time.
If needed, each of Loki's components can also be run as microservices designed to run natively within Kubernetes.
If needed, each of the Loki components can also be run as microservices designed to run natively within Kubernetes.
- **Multi-tenancy** - Loki allows multiple tenants to share a single Loki instance. With multi-tenancy, the data and requests of each tenant are completely isolated from the others.
Multi-tenancy is [configured]({{< relref "../operations/multi-tenancy" >}}) by assigning a tenant ID in the agent.
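For example, in a Promtail client block the tenant ID might be assigned like this (the URL and tenant name are assumptions); the value is forwarded to Loki as the `X-Scope-OrgID` header:

```yaml
clients:
  - url: http://loki:3100/loki/api/v1/push   # assumed Loki push endpoint
    tenant_id: team-a                        # sent as the X-Scope-OrgID header
```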
@@ -44,7 +44,7 @@ Similarly, the Loki index, because it indexes only the set of labels, is signifi
By leveraging object storage as the only data storage mechanism, Loki inherits the reliability and stability of the underlying object store. It also capitalizes on both the cost efficiency and operational simplicity of object storage over other storage mechanisms like locally attached solid state drives (SSD) and hard disk drives (HDD).
The compressed chunks, smaller index, and use of low-cost object storage make Loki less expensive to operate.
- **LogQL, Loki's query language** - [LogQL]({{< relref "../query" >}}) is the query language for Loki. Users who are already familiar with the Prometheus query language, [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/), will find LogQL familiar and flexible for generating queries against the logs.
- **LogQL, the Loki query language** - [LogQL]({{< relref "../query" >}}) is the query language for Loki. Users who are already familiar with the Prometheus query language, [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/), will find LogQL familiar and flexible for generating queries against the logs.
The language also facilitates the generation of metrics from log data,
a powerful feature that goes well beyond log aggregation.
@@ -97,7 +97,7 @@ Once you have collected logs, you will want to view them. You can view your log
1. Use Grafana to query the Loki data source.
The test environment includes [Grafana](https://grafana.com/docs/grafana/latest/), which you can use to query and observe the sample logs generated by the flog application. You can access the Grafana cluster by navigating to [http://localhost:3000](http://localhost:3000). The Grafana instance provided with this demo has a Loki [datasource](https://grafana.com/docs/grafana/latest/datasources/loki/) already configured.
The test environment includes [Grafana](https://grafana.com/docs/grafana/latest/), which you can use to query and observe the sample logs generated by the flog application. You can access the Grafana cluster by navigating to [http://localhost:3000](http://localhost:3000). The Grafana instance provided with this demo has a Loki [datasource](https://grafana.com/docs/grafana/latest/datasources/loki/) already configured.
@@ -16,7 +16,7 @@ Loki exposes the following observability data about itself:
- **Metrics**: Loki provides a `/metrics` endpoint that exports information about Loki in Prometheus format. These metrics provide an aggregated view of the health of your Loki cluster, allowing you to observe query response times and other health indicators.
- **Logs**: Loki emits a detailed log line `metrics.go` for every query, which shows query duration, number of lines returned, query throughput, the specific LogQL that was executed, chunks searched, and much more. You can use these log lines to improve and optimize your query performance.
You can also scrape Loki's logs and metrics and push them to separate instances of Loki and Mimir to provide information about the health of your Loki system (a process known as "meta-monitoring").
You can also scrape the Loki logs and metrics and push them to separate instances of Loki and Mimir to provide information about the health of your Loki system (a process known as "meta-monitoring").
The Loki [mixin](https://github.com/grafana/loki/blob/main/production/loki-mixin) is an opinionated set of dashboards, alerts and recording rules to monitor your Loki cluster. The mixin provides a comprehensive package for monitoring Loki in production. You can install the mixin into a Grafana instance.
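The mixin dashboards rely on those metrics being scraped somewhere; as a minimal sketch, a Prometheus scrape job that collects the `/metrics` endpoint described above for meta-monitoring (the target address is an assumption):

```yaml
scrape_configs:
  - job_name: loki
    static_configs:
      - targets: ["loki:3100"]   # assumed host:port of the Loki HTTP server
    # metrics_path defaults to /metrics, which is the endpoint Loki exposes
```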
@@ -59,7 +59,7 @@ For an example, see [Collect and forward Prometheus metrics](https://grafana.com
## Configure Grafana
In your Grafana instance, you'll need to [create a Prometheus datasource](https://grafana.com/docs/grafana/latest/datasources/prometheus/configure-prometheus-data-source/) to visualize the metrics scraped from your Loki cluster.
In your Grafana instance, you'll need to [create a Prometheus datasource](https://grafana.com/docs/grafana/latest/datasources/prometheus/configure-prometheus-data-source/) to visualize the metrics scraped from your Loki cluster.
@@ -129,7 +129,7 @@ This validation error is returned when a stream is submitted without any labels.
The `too_far_behind` and `out_of_order` reasons are identical. Loki clusters with `unordered_writes=true` (the default value as of Loki v2.4) use `reason=too_far_behind`. Loki clusters with `unordered_writes=false` use `reason=out_of_order`.
This validation error is returned when a stream is submitted out of order. More details can be found [here](/docs/loki/<LOKI_VERSION>/configuration/#accept-out-of-order-writes) about Loki's ordering constraints.
This validation error is returned when a stream is submitted out of order. More details can be found [here](/docs/loki/<LOKI_VERSION>/configuration/#accept-out-of-order-writes) about the Loki ordering constraints.
The `unordered_writes` config value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file, whereas `max_chunk_age` is a global configuration.
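A hedged sketch of where these two settings live; the values shown are illustrative:

```yaml
limits_config:
  unordered_writes: true   # default since Loki 2.4; can also be overridden per tenant
ingester:
  max_chunk_age: 2h        # global setting; bounds how old an in-memory chunk may grow
```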
@@ -32,7 +32,7 @@ You can use the Prometheus metric `loki_ingester_wal_disk_full_failures_total` t
### Backpressure
The WAL also includes a backpressure mechanism to allow a large WAL to be replayed within a smaller memory bound. This is helpful after bad scenarios (i.e. an outage) when a WAL has grown past the point it may be recovered in memory. In this case, the ingester will track the amount of data being replayed and once it's passed the `ingester.wal-replay-memory-ceiling` threshold, will flush to storage. When this happens, it's likely that Loki's attempt to deduplicate chunks via content addressable storage will suffer. We deemed this efficiency loss an acceptable tradeoff considering how it simplifies operation and that it should not occur during regular operation (rollouts, rescheduling) where the WAL can be replayed without triggering this threshold.
The WAL also includes a backpressure mechanism to allow a large WAL to be replayed within a smaller memory bound. This is helpful after bad scenarios (for example, an outage) when a WAL has grown past the point where it can be recovered in memory. In this case, the ingester will track the amount of data being replayed and, once it passes the `ingester.wal-replay-memory-ceiling` threshold, will flush to storage. When this happens, it's likely that the Loki attempt to deduplicate chunks via content addressable storage will suffer. We deemed this efficiency loss an acceptable tradeoff considering how it simplifies operation and that it should not occur during regular operation (rollouts, rescheduling) where the WAL can be replayed without triggering this threshold.
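A sketch of the corresponding ingester WAL settings (the directory is an assumption; check the key names against your Loki version):

```yaml
ingester:
  wal:
    enabled: true
    dir: /loki/wal               # assumed WAL directory on the ingester's volume
    replay_memory_ceiling: 4GB   # flush to storage once replay exceeds this bound
```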
### Metrics
@@ -106,7 +106,7 @@ Then you may recreate the (updated) StatefulSet and one-by-one start deleting th
#### Scaling Down Using `/flush_shutdown` Endpoint and Lifecycle Hook
1. **StatefulSets for Ordered Scaling Down**: Loki's ingesters should be scaled down one by one, which is efficiently handled by Kubernetes StatefulSets. This ensures an ordered and reliable scaling process, as described in the [Deployment and Scaling Guarantees](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees) documentation.
1. **StatefulSets for Ordered Scaling Down**: The Loki ingesters should be scaled down one by one, which is efficiently handled by Kubernetes StatefulSets. This ensures an ordered and reliable scaling process, as described in the [Deployment and Scaling Guarantees](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees) documentation.
2. **Using PreStop Lifecycle Hook**: During the Pod scaling down process, the PreStop [lifecycle hook](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) triggers the `/flush_shutdown` endpoint on the ingester. This action flushes the chunks and removes the ingester from the ring, allowing it to register as unready and become eligible for deletion.
@@ -114,7 +114,7 @@ Then you may recreate the (updated) StatefulSet and one-by-one start deleting th
4. **Cleaning Persistent Volumes**: Persistent volumes are automatically cleaned up by leveraging the [enableStatefulSetAutoDeletePVC](https://kubernetes.io/blog/2021/12/16/kubernetes-1-23-statefulset-pvc-auto-deletion/) feature in Kubernetes.
By following the above steps, you can ensure a smooth scaling down process for Loki's ingesters while maintaining data integrity and minimizing potential disruptions.
By following the above steps, you can ensure a smooth scaling down process for the Loki ingesters while maintaining data integrity and minimizing potential disruptions.
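As a hedged sketch of step 2 above, the PreStop hook on the ingester container might look like the following StatefulSet fragment; the endpoint path, port, and grace period are assumptions to verify against your Loki version and Helm chart:

```yaml
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 4800       # assumed; must allow time for the flush
      containers:
        - name: ingester
          lifecycle:
            preStop:
              httpGet:
                path: /ingester/flush_shutdown  # the flush-and-shutdown endpoint described above
                port: 3100                      # Loki HTTP port (default 3100)
```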
Loki's zone aware ingesters are used by Grafana Labs in order to allow for easier rollouts of large Loki deployments. You can think of them as three logical zones, however with some extra Kubernetes configuration you could deploy them in separate zones.
The Loki zone-aware ingesters are used by Grafana Labs to allow for easier rollouts of large Loki deployments. You can think of them as three logical zones; however, with some extra Kubernetes configuration you could deploy them in separate zones.
By default, an incoming log stream's logs are replicated to 3 random ingesters. Except when replicas are scaling up or down, a given stream will always be replicated to the same 3 ingesters. This means that if one of those ingesters is restarted, no data is lost. However, two or more ingesters restarting can result in data loss and also impacts the system's ability to ingest logs because of an unhealthy ring status.
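The replication described above corresponds to the replication factor; a minimal sketch of the setting (3 is the value assumed throughout this section):

```yaml
common:
  replication_factor: 3   # each stream is written to 3 ingesters
```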
- **mixins:** Add missing log datasource on loki-deletion ([#13011](https://github.com/grafana/loki/issues/13011)) ([1948899](https://github.com/grafana/loki/commit/1948899999107e7f27f4b9faace64942abcdb41f)).
- **mixins:** Add missing log datasource on loki-deletion ([#13011](https://github.com/grafana/loki/issues/13011)) ([1948899](https://github.com/grafana/loki/commit/1948899999107e7f27f4b9faace64942abcdb41f)).
- **mixins:** Align loki-writes mixins with loki-reads ([#13022](https://github.com/grafana/loki/issues/13022)) ([757b776](https://github.com/grafana/loki/commit/757b776de39bf0fc0c6d1dd74e4a245d7a99023a)).
- **mixins:** Remove unnecessary disk panels for SSD read path ([#13014](https://github.com/grafana/loki/issues/13014)) ([8d9fb68](https://github.com/grafana/loki/commit/8d9fb68ae5d4f26ddc2ae184a1cb6a3b2a2c2127)).
- **mixins:** Upgrade old plugin for the loki-operational dashboard. ([#13016](https://github.com/grafana/loki/issues/13016)) ([d3c9cec](https://github.com/grafana/loki/commit/d3c9cec22891b45ed1cb93a9eacc5dad6a117fc5)).
@@ -278,7 +278,7 @@ For more information on the `otelcol.processor.batch` configuration, see the [Op
### Write OpenTelemetry logs to Loki
Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint.
Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint.
Finally, add the following configuration to the `config.alloy` file:
Alloy natively supports receiving logs in the OpenTelemetry format. This allows you to send logs from applications instrumented with OpenTelemetry to Alloy, which can then be sent to Loki for storage and visualization in Grafana. In this example, we will make use of 3 Alloy components to achieve this:
- **OpenTelemetry Receiver:** This component will receive logs in the OpenTelemetry format via HTTP and gRPC.
- **OpenTelemetry Processor:** This component will accept telemetry data from other `otelcol.*` components and place them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data.
- **OpenTelemetry Exporter:** This component will accept telemetry data from other `otelcol.*` components and write them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint.
- **OpenTelemetry Exporter:** This component will accept telemetry data from other `otelcol.*` components and write them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint.
<!-- INTERACTIVE ignore START -->
@@ -167,7 +167,7 @@ For more information on the `otelcol.processor.batch` configuration, see the [Op
### Export logs to Loki using an OpenTelemetry Exporter
Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other `otelcol` components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint.
Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other `otelcol` components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint.
Now add the following configuration to the `config.alloy` file:
The Docker image `grafana/fluent-plugin-loki:main` contains [default configuration files](https://github.com/grafana/loki/tree/main/clients/cmd/fluentd/docker/conf). By default, fluentd containers use that default configuration. You can instead specify your `fluentd.conf` configuration file with a `FLUENTD_CONF` environment variable.
This image also uses `LOKI_URL`, `LOKI_USERNAME`, and `LOKI_PASSWORD` environment variables to specify the Loki's endpoint, user, and password (you can leave the USERNAME and PASSWORD blank if they're not used).
This image also uses `LOKI_URL`, `LOKI_USERNAME`, and `LOKI_PASSWORD` environment variables to specify the Loki endpoint, user, and password (you can leave the USERNAME and PASSWORD blank if they're not used).
This image starts an instance of Fluentd that forwards incoming logs to the specified Loki URL. As an alternative, containerized applications can also use the [docker driver plugin]({{< relref "../docker-driver" >}}) to ship logs without needing Fluentd.
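For illustration, a hypothetical Docker Compose fragment wiring those environment variables together; the Loki address is an assumption:

```yaml
services:
  fluentd:
    image: grafana/fluent-plugin-loki:main
    environment:
      LOKI_URL: http://loki:3100   # assumed address of your Loki instance
      LOKI_USERNAME: ""            # leave blank if authentication is not used
      LOKI_PASSWORD: ""
    ports:
      - "24224:24224"              # default Fluentd forward port
```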
@@ -103,5 +103,5 @@ Taking the above-ingested log line, let us look at how the querying experience w
## What do you need to do to switch from LokiExporter to native OTel ingestion format?
- Point your OpenTelemetry Collector to Loki's native OTel ingestion endpoint as explained [here](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/otel/#loki-configuration).
- Point your OpenTelemetry Collector to the Loki native OTel ingestion endpoint as explained [here](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/otel/#loki-configuration).
- Rewrite your LogQL queries in various places, including dashboards, alerts, starred queries in Grafana Explore, etc. to query OTel logs as per the new format.
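A minimal OpenTelemetry Collector sketch of the first step in this list; the Loki address is an assumption, and the `otlphttp` exporter appends the standard `/v1/logs` path to the endpoint:

```yaml
receivers:
  otlp:
    protocols:
      http: {}                          # accept OTLP logs over HTTP from your applications
exporters:
  otlphttp:
    endpoint: http://loki:3100/otlp     # assumed address; Loki's native OTLP ingestion endpoint
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlphttp]
```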
@@ -14,7 +14,7 @@ The `limit` stage is a rate-limiting stage that throttles logs based on several
## Limit stage schema
This pipeline stage places limits on the rate or burst quantity of log lines that Promtail pushes to Loki.
The concept of having distinct burst and rate limits mirrors the approach to limits that can be set for Loki's distributor component: `ingestion_rate_mb` and `ingestion_burst_size_mb`, as defined in [limits_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config).
The concept of having distinct burst and rate limits mirrors the approach to limits that can be set for the Loki distributor component: `ingestion_rate_mb` and `ingestion_burst_size_mb`, as defined in [limits_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config).
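A hedged sketch of the stage inside a Promtail pipeline; the rate and burst values are arbitrary examples:

```yaml
pipeline_stages:
  - limit:
      rate: 10     # sustained log lines per second
      burst: 20    # short-term allowance above the sustained rate
      drop: true   # drop lines over the limit instead of applying backpressure
```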
@@ -29,7 +29,7 @@ When you enable istio-injection on the namespace where Loki is running, you need
### Query frontend service
Make the following modifications to the file for Loki's Query Frontend service.
Make the following modifications to the file for the Loki Query Frontend service.
1. Change the name of the `grpc` port to `grpclb`. This is used by the gRPC load balancing strategy, which relies on SRV records. Otherwise, the `querier` will not be able to reach the `query-frontend`. See https://github.com/grafana/loki/blob/0116aa61c86fa983ddcbbd5e30a2141d2e89081a/production/ksonnet/loki/common.libsonnet#L19
and
@@ -67,7 +67,7 @@ spec:
### Querier service
Make the following modifications to the file for Loki's Querier service.
Make the following modifications to the file for the Loki Querier service.
Set the `appProtocol` of the `grpc` port to `tcp`
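For example, a hedged fragment of the Querier Service; the service name is an assumption, and the port numbers are the Loki defaults:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: querier            # assumed service name
spec:
  ports:
    - name: grpc
      port: 9095           # Loki default gRPC port
      targetPort: 9095
      appProtocol: tcp     # tells Istio to treat this port as opaque TCP
    - name: http-metrics
      port: 3100
      targetPort: 3100
```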
@@ -103,7 +103,7 @@ spec:
### Ingester service and Ingester headless service
Make the following modifications to the file for Loki's Query Ingester and Ingester Headless service.
Make the following modifications to the file for the Loki Ingester service and Ingester headless service.
Set the `appProtocol` of the `grpc` port to `tcp`
@@ -137,7 +137,7 @@ spec:
### Distributor service
Make the following modifications to the file for Loki's Distributor service.
Make the following modifications to the file for the Loki Distributor service.
@@ -49,7 +49,7 @@ Revise the YAML contents of `environments/loki/main.jsonnet`, updating these var
- Update the S3 or GCS variable values, depending on your object storage type. See [storage_config](/docs/loki/<LOKI_VERSION>/configuration/#storage_config) for more configuration details.
- Remove from the configuration the S3 or GCS object storage variables that are not part of your setup.
- Update the Promtail configuration `container_root_path` variable's value to reflect your root path for the Docker daemon. Run `docker info | grep "Root Dir"` to acquire your root path.
- Update the `from` value in the Loki `schema_config` section to no more than 14 days prior to the current date. The `from` date represents the first day for which the `schema_config` section is valid. For example, if today is `2021-01-15`, set `from` to `2021-01-01`. This recommendation is based on Loki's default acceptance of log lines up to 14 days in the past. The `reject_old_samples_max_age` configuration variable controls the acceptance range.
- Update the `from` value in the Loki `schema_config` section to no more than 14 days prior to the current date. The `from` date represents the first day for which the `schema_config` section is valid. For example, if today is `2021-01-15`, set `from` to `2021-01-01`. This recommendation is based on the Loki default acceptance of log lines up to 14 days in the past. The `reject_old_samples_max_age` configuration variable controls the acceptance range.
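For reference, the acceptance window mentioned in the last bullet is controlled by these limits; a sketch where `336h` corresponds to the 14 days described above:

```yaml
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 336h   # 14 days, matching the acceptance window above
```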
@@ -504,7 +504,7 @@ only in 2.8 and forward releases does the zero value disable retention.
The metrics.go log line emitted for every query had an entry called `subqueries`, which was intended to represent how much a query was parallelized during execution.
In the current form it only displayed the count of subqueries generated with Loki's split by time logic and did not include counts for shards.
In the current form it only displayed the count of subqueries generated with the Loki split by time logic and did not include counts for shards.
There wasn't a clean way to update subqueries to include sharding information, and there is value in knowing the difference between the subqueries generated when we split by time and those generated by sharding factors, especially now that TSDB can do dynamic sharding.
@@ -33,7 +33,7 @@ Modern Grafana versions after 6.3 have built-in support for Grafana Loki and [Lo
1. To see the logs, click <kbd>Explore</kbd> on the sidebar, select the Loki
data source in the top-left dropdown, and then choose a log stream using the
<kbd>Log labels</kbd> button.
1. Learn more about querying by reading about Loki's query language [LogQL]({{< relref "../query/_index.md" >}}).
1. Learn more about querying by reading about the Loki query language [LogQL]({{< relref "../query/_index.md" >}}).
If you would like to see an example of this live, you can try [Grafana Play's Explore feature](https://play.grafana.org/explore?schemaVersion=1&panes=%7B%22v1d%22:%7B%22datasource%22:%22ac4000ca-1959-45f5-aa45-2bd0898f7026%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22expr%22:%22%7Bagent%3D%5C%22promtail%5C%22%7D%20%7C%3D%20%60%60%22,%22queryType%22:%22range%22,%22datasource%22:%7B%22type%22:%22loki%22,%22uid%22:%22ac4000ca-1959-45f5-aa45-2bd0898f7026%22%7D,%22editorMode%22:%22builder%22%7D%5D,%22range%22:%7B%22from%22:%22now-1h%22,%22to%22:%22now%22%7D%7D%7D&orgId=1)
@@ -43,7 +43,7 @@ search and filter for logs with Loki.
## Using Grafana Dashboards
Because Loki can be used as a built-in data source above, we can use LogQL queries based on that datasource
Because Loki can be used as a built-in data source above, we can use LogQL queries based on that datasource
to build complex visualizations that persist on Grafana dashboards.
{{< docs/play title="Loki Example Grafana Dashboard" url="https://play.grafana.org/d/T512JVH7z/" >}}