Unlike other logging systems, Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels).
Log data itself is then compressed and stored in chunks in object stores such as Amazon Simple Storage Service (S3) or Google Cloud Storage (GCS), or even locally on the filesystem.
A small index and highly compressed chunks simplify operation and significantly lower the cost of Loki.
For more information, see the [Loki overview](get-started/overview/).
> Querying the precomputed result will then often be much faster than executing the original expression every time it is needed. This is especially useful for dashboards, which need to query the same expression repeatedly every time they refresh.
Loki allows you to run [metric queries](../query/metric_queries/) over your logs, which means
that you can derive a numeric aggregation from your logs, like calculating the number of requests over time from your NGINX access log.
### Example
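A minimal sketch of a rule group that records such an aggregation (the group name, rule name, selector, and interval are illustrative, not taken from the original example):

```yaml
groups:
  - name: nginx-rules
    interval: 1m
    rules:
      - record: nginx:requests:rate1m
        # LogQL metric query evaluated on a schedule
        expr: |
          sum(rate({container="nginx"} |= "GET" [1m]))
        labels:
          cluster: us-central1
```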
### Operations
Please refer to the [Recording Rules](../operations/recording-rules/) page.
## Use cases
One option to scale the Ruler is by scaling it horizontally. However, with multiple Ruler instances running they will need to coordinate to determine which instance will evaluate which rule. Similar to the ingesters, the Rulers establish a hash ring to divide up the responsibilities of evaluating rules.
The possible configurations are listed fully in the [configuration documentation](../configure/), but in order to shard rules across multiple Rulers, the rules API must be enabled via flag (`-ruler.enable-api`) or config file parameter. Secondly, the Ruler requires its own ring to be configured. From there the Rulers will shard and handle the division of rules automatically. Unlike ingesters, Rulers do not hand over responsibility: all rules are re-sharded randomly every time a Ruler is added to or removed from the ring.
A full sharding-enabled Ruler example is:
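A sketch of what such a configuration might look like, assuming a Consul-backed ring and GCS rule storage (the Consul address and bucket name are placeholders):

```yaml
ruler:
  enable_api: true
  enable_sharding: true
  ring:
    kvstore:
      store: consul
      consul:
        host: consul.service.consul:8500
  rule_path: /tmp/loki/rules-temp
  storage:
    type: gcs
    gcs:
      bucket_name: my-loki-rules
```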
The Ruler supports the following types of storage: `azure`, `gcs`, `s3`, `swift`, `cos` and `local`. Most kinds of storage work with the sharded Ruler configuration in an obvious way, that is, configure all Rulers to use the same backend.
The local implementation reads the rule files off of the local filesystem. This is a read-only backend that does not support the creation and deletion of rules through the [Ruler API](../reference/api/#ruler). Despite the fact that it reads the local filesystem this method can still be used in a sharded Ruler configuration if the operator takes care to load the same rules to every Ruler. For instance, this could be accomplished by mounting a [Kubernetes ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) onto every Ruler pod.
A typical local configuration might look something like:
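As a sketch, a local backend pointed at a directory of per-tenant rule files might look like this (the path is illustrative):

```yaml
ruler:
  storage:
    type: local
    local:
      # rule files are read from <directory>/<tenant-id>/
      directory: /etc/loki/rules
```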
Google Docs were considered for this, but they are less useful because:
- they would need to be owned by the Grafana Labs organisation, so that they remain viewable even if the author closes their account
- we already have previous [design documents](../../design-documents/) in our documentation and, in a recent ([5th Jan 2023](https://docs.google.com/document/d/1MNjiHQxwFukm2J4NJRWyRgRIiK7VpokYyATzJ5ce-O8/edit#heading=h.78vexgrrtw5a)) community call, the community expressed a preference for this type of approach
Note that the label strings and lengths within the `structuredMetadata` section are stored compressed.
### Block Format
Symbols store references to the actual strings containing label names and values.
### Single Store
Loki stores all data in a single object storage backend. This mode of operation became generally available with Loki 2.0 and is fast, cost-effective, and simple, not to mention where all current and future development lies. This mode uses an adapter called [`boltdb_shipper`](../../operations/storage/boltdb-shipper/) to store the `index` in object storage (the same way we store `chunks`).
### Deprecated: Multi-store
> Unlike the other core components of Loki, the chunk store is not a separate
> service, job, or process, but rather a library embedded in the two services
> that need to access Loki data: the [ingester](../components/#ingester) and [querier](../components/#querier).
The chunk store relies on a unified interface to the
"[NoSQL](https://en.wikipedia.org/wiki/NoSQL)" stores (DynamoDB, Bigtable, and Cassandra) that can be used to back the chunk store index.
Logs from each unique set of labels are built up into "chunks" in memory and then flushed to the backing storage backend.
If an ingester process crashes or exits abruptly, all the data that has not yet
been flushed could be lost. Loki is usually configured with a [Write Ahead Log](../../operations/storage/wal/) which can be _replayed_ on restart as well as with a `replication_factor` (usually 3) of each log to mitigate this risk.
When not configured to accept out-of-order writes, all lines pushed to Loki for a given stream (unique combination of labels) must have a newer timestamp than the line received before it. If an incoming line has the same nanosecond timestamp as the previous line but different content, the log line is accepted. This means it is possible to have two different log lines for the same timestamp.
### Handoff - Deprecated in favor of the [WAL](../../operations/storage/wal/)
By default, when an ingester is shutting down and tries to leave the hash ring, it will wait to see if a new ingester tries to enter before flushing, and will try to initiate a handoff of its in-memory chunks to that new ingester.
Caching of log (filter, regexp) queries is under active development.
## Querier
The **querier** service handles queries using the [LogQL](../../query/) query
language, fetching logs both from the ingesters and from long-term storage.
Queriers query all ingesters for in-memory data before falling back to running the same query against the backend store. Because of the replication factor, it is possible that the querier may receive duplicate data. To resolve this, the querier internally **deduplicates** data that has the same nanosecond timestamp, label set, and log message.
On the read path, the [replication factor](#replication-factor) also plays a role. For example, with a `replication-factor` of `3`, we require that two queries be running.
## Simple Scalable
The simple scalable deployment mode is the preferred way to deploy Loki for most installations. The simple scalable deployment is the default configuration installed by the [Loki Helm Chart](../../setup/install/helm/). This deployment mode is the easiest way to deploy Loki at scale. It strikes a balance between deploying in [monolithic mode](#monolithic-mode) and deploying each component as a [separate microservice](#microservices-mode).
{{% admonition type="note" %}}
This deployment mode is sometimes referred to by the acronym SSD for simple scalable deployment, not to be confused with solid state drives. Loki uses an object store.
If you are familiar with Prometheus, the term used there is series; however, Prometheus has an additional dimension: metric name.
{{% admonition type="note" %}}
Structured metadata does not define a stream, but is metadata attached to a log line.
See [structured metadata](structured-metadata/) for more information.
{{% /admonition %}}
## Format
This regex matches every component of the log line and extracts the value of each component into a capture group. Inside the pipeline code, this data is placed in a temporary data structure that allows using it for several purposes during the processing of that log line (at which point that temp data is discarded). Much more detail about this can be found in the [Promtail pipelines](../../send-data/promtail/pipelines/) documentation.
From that regex, we will be using two of the capture groups to dynamically set two labels based on content from the log line itself:
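A sketch of the pipeline stages being described, assuming the capture groups of interest are named `action` and `status_code` (the regex is abbreviated and illustrative):

```yaml
pipeline_stages:
  - regex:
      expression: '^(?P<ip>\S+) \S+ \S+ \[.*\] "(?P<action>\S+) (?P<path>\S+) \S+" (?P<status_code>\d{3})'
  - labels:
      # promote two of the extracted values to labels
      action:
      status_code:
```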
Loki will effectively keep your static costs as low as possible (index size and memory requirements as well as static log storage) and make the query performance something you can control at runtime with horizontal scaling.
To see how this works, let's look back at our example of querying your access log data for a specific IP address. We don't want to use a label to store the IP address. Instead we use a [filter expression](../../query/log_queries/#line-filter-expression) to query for it:
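For example, something along these lines (the stream selector and address are illustrative):

```logql
{job="apache"} |= "11.11.11.11"
```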
Too many label value combinations lead to too many streams. The penalties for that in Loki are a large index and small chunks in the store, which in turn can actually reduce performance.
To avoid those issues, don't add a label for something until you know you need it! Use filter expressions (`|= "text"`, `|~ "regex"`, …) and brute force those logs. It works -- and it's fast.
If you often parse a label from a log line at query time, the label has high cardinality, and extracting that label is expensive in terms of performance, consider extracting the label on the client side and attaching it as [structured metadata](../structured-metadata/) to log lines.
From early on, we have set a label dynamically using Promtail pipelines for `level`. This seemed intuitive for us as we often wanted to only show logs for `level="error"`; however, we are re-evaluating this now, as writing the query `{app="loki"} |= "level=error"` is proving to be just as fast for many of our applications as `{app="loki",level="error"}`.
## Be aware of dynamic labels applied by clients
Loki has several client options: [Promtail](../../../send-data/promtail/) (which also supports systemd journal ingestion and TCP-based syslog ingestion), [Fluentd](../../../send-data/fluentd/), [Fluent Bit](../../../send-data/fluentbit/), a [Docker plugin](/blog/2019/07/15/lokis-path-to-ga-docker-logging-driver-plugin-support-for-systemd/), and more!
Each of these comes with ways to configure what labels are applied to create log streams. But be aware of what dynamic labels might be applied.
Use the Loki series API to get an idea of what your log streams look like and see if there might be ways to reduce streams and cardinality.
Series information can be queried through the [Series API](../../../reference/api/), or you can use [logcli](../../../query/).
In Loki 1.6.0 and newer the logcli series command added the `--analyze-labels` flag specifically for debugging high cardinality labels:
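A sketch of the invocation (the matcher is illustrative):

```bash
logcli series '{job="apache"}' --analyze-labels
```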
# What is structured metadata
{{% admonition type="warning" %}}
Structured metadata was added to chunk format V4, which is used if the schema version is greater than or equal to `13`. (See [Schema Config](../../../storage/#schema-config) for more details about schema versions.)
{{% /admonition %}}
Selecting proper, low cardinality labels is critical to operating and querying Loki effectively. Some metadata, especially infrastructure-related metadata, can be difficult to embed in log lines, and is too high cardinality to effectively store as indexed labels (doing so would reduce the performance of the index).
It is an antipattern to extract information that already exists in your log lines and put it into structured metadata.
## Attaching structured metadata to log lines
You have the option to attach structured metadata to log lines in the push payload along with each log line and the timestamp.
For more information on how to push logs to Loki via the HTTP endpoint, refer to the [HTTP API documentation](../../../reference/api/#push-log-entries-to-loki).
Alternatively, you can use the Grafana Agent or Promtail to extract and attach structured metadata to your log lines.
See the [Promtail: Structured metadata stage](../../../send-data/promtail/stages/structured_metadata/) for more information.
With Loki version 1.2.0, support for structured metadata has been added to the Logstash output plugin. For more information, see [logstash](../../../send-data/logstash/).
## Querying structured metadata
Structured metadata is extracted automatically for each returned log line and added to the labels returned for the query.
You can use labels of structured metadata to filter log lines using a [label filter expression](../../../query/log_queries/#label-filter-expression).
For example, if you have a label `pod` attached to some of your log lines as structured metadata, you can filter log lines using:
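For instance (the selector and pod name are illustrative):

```logql
{job="example"} | pod="loki-canary-abc-123"
```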
Of course, you can filter by multiple labels of structured metadata at the same time.
Note that since structured metadata is extracted automatically to the results labels, some metric queries might return an error like `maximum of series (50000) reached for a single query`. You can use the [Keep](../../../query/log_queries/#keep-labels-expression) and [Drop](../../../query/log_queries/#drop-labels-expression) stages to filter out labels that you don't need.
A typical Loki-based logging stack consists of 3 components:
- **Agent** - An agent or client, for example Promtail, which is distributed with Loki, or the Grafana Agent. The agent scrapes logs, turns the logs into streams by adding labels, and pushes the streams to Loki through an HTTP API.
- **Loki** - The main server, responsible for ingesting and storing logs and processing queries. It can be deployed in three different configurations; for more information, see [deployment modes](../deployment-modes/).
- **[Grafana](https://github.com/grafana/grafana)** for querying and displaying log data. You can also query logs from the command line, using [LogCLI](../../query/logcli/) or using the Loki API directly.
## Loki features
If needed, each of Loki's components can also be run as microservices designed to run natively within Kubernetes.
- **Multi-tenancy** - Loki allows multiple tenants to share a single Loki instance. With multi-tenancy, the data and requests of each tenant are completely isolated from the others.
Multi-tenancy is [configured](../../operations/multi-tenancy/) by assigning a tenant ID in the agent.
- **Third-party integrations** - Several third-party agents (clients) have support for Loki, via plugins. This lets you keep your existing observability setup while also shipping logs to Loki.
By leveraging object storage as the only data storage mechanism, Loki inherits the reliability and stability of the underlying object store. It also capitalizes on both the cost efficiency and operational simplicity of object storage over other storage mechanisms like locally attached solid state drives (SSD) and hard disk drives (HDD).
The compressed chunks, smaller index, and use of low-cost object storage, make Loki less expensive to operate.
- **LogQL, Loki's query language** - [LogQL](../../query/) is the query language for Loki. Users who are already familiar with the Prometheus query language, [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/), will find LogQL familiar and flexible for generating queries against the logs.
The language also facilitates the generation of metrics from log data,
a powerful feature that goes well beyond log aggregation.
- **Alerting** - Loki includes a component called the [ruler](../../alert/), which can continually evaluate queries against your logs, and perform an action based on the result. This allows you to monitor your logs for anomalies or events. Loki integrates with [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), or the [alert manager](/docs/grafana/latest/alerting) within Grafana.
- **Grafana integration** - Loki integrates with Grafana, Mimir, and Tempo, providing a complete observability stack, and seamless correlation between logs, metrics and traces.
Grafana Loki does not come with any included authentication layer. Operators are
expected to run an authenticating reverse proxy in front of your services.
The simple scalable [deployment mode](../../get-started/deployment-modes/) requires a reverse proxy to be deployed in front of Loki, to direct client API requests to either the read or write nodes. The Loki Helm chart includes a default reverse proxy configuration, using Nginx.
A list of open-source reverse proxies you can use:
Note that when using Loki in multi-tenant mode, Loki requires the HTTP header
`X-Scope-OrgID` to be set to a string identifying the tenant; the responsibility
of populating this value should be handled by the authenticating reverse proxy.
For more information, read the [multi-tenancy](../multi-tenancy/) documentation.
For information on authenticating Promtail, see the Promtail configuration documentation.
The Query frontend has an in-memory queue that can be moved out into a separate process similar to the
[Grafana Mimir query-scheduler](/docs/mimir/latest/operators-guide/architecture/components/query-scheduler/). This allows running multiple query frontends.
To run with the Query Scheduler, the frontend needs to be passed the scheduler's address via `-frontend.scheduler-address`, and the querier processes need to be started with `-querier.scheduler-address` set to the same address. Both options can also be defined via the [configuration file](../../configure/).
It is not valid to start the querier with both a configured frontend and a scheduler address.
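A sketch of the equivalent YAML, assuming the scheduler address is set on both the `frontend` block and the querier's `frontend_worker` block (the address is a placeholder):

```yaml
frontend:
  scheduler_address: query-scheduler.loki.svc.cluster.local:9095

frontend_worker:
  scheduler_address: query-scheduler.loki.svc.cluster.local:9095
```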
When using IBM Cloud Object Storage (COS) as object storage, the IAM `Writer` role is needed.
See the [IBM Cloud Object Storage section](../../storage/#ibm-cloud-object-storage-cos) on the storage page for a detailed setup guide.
A folder is created for every tenant, and all the chunks for one tenant are stored in that folder.
If Loki is run in single-tenant mode, all the chunks are put in a folder named `fake`, which is the synthesized tenant name used for single-tenant mode.
See [multi-tenancy](../../multi-tenancy/) for more information.
Log entries that fall within a specified time window and match an optional line filter expression can be deleted.
Log entry deletion is supported _only_ when the BoltDB Shipper is configured for the index store.
The compactor component exposes REST [endpoints](../../../reference/api/#compactor) that process delete requests.
A request to the endpoint specifies the streams and the time window.
The deletion of the log entries takes place after a configurable cancellation time period expires.
Log entry deletion relies on configuration of the custom logs retention workflow as defined for the [compactor](../retention/#compactor). The compactor looks at unprocessed requests which are past their cancellation period to decide whether a chunk is to be deleted or not.
Retention in Grafana Loki is achieved either through the [Table Manager](#table-manager) or the [Compactor](#compactor).
By default, when the `table_manager.retention_deletes_enabled` or `compactor.retention_enabled` flags are not set, logs sent to Loki live forever.
Retention through the [Table Manager](../table-manager/) is achieved by relying on the object store TTL feature, and will work for both the [boltdb-shipper](../boltdb-shipper/) store and the chunk/index store. However, retention through the [Compactor](../boltdb-shipper/#compactor) is supported only with the [boltdb-shipper](../boltdb-shipper/) and TSDB stores.
The Compactor retention will become the default and have long-term support. It supports more granular retention policies for per-tenant and per-stream use cases.
## Compactor
The [Compactor](../boltdb-shipper/#compactor) can deduplicate index entries. It can also apply granular retention. When applying retention with the Compactor, the [Table Manager](../table-manager/) is unnecessary.
> Run the Compactor as a singleton (a single instance).
The index period must be 24h.
#### Configuring the retention period
Retention period is configured within the [`limits_config`](../../../configure/#limits_config) configuration section.
There are two ways of setting retention policies:
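As a sketch, a global policy combined with a per-stream override might look like this (values and the selector are illustrative):

```yaml
limits_config:
  # applies to everything not matched by a retention_stream selector
  retention_period: 744h
  retention_stream:
    - selector: '{namespace="dev"}'
      priority: 1
      period: 24h
```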
In order to enable the retention support, the Table Manager needs to be configured to enable deletions and a retention period. Please refer to the `table_manager` section of the Loki configuration reference for all available options.
Alternatively, the `table-manager.retention-period` and `table-manager.retention-deletes-enabled` command line flags can be used. The provided retention period needs to be a duration represented as a string that can be parsed using the Prometheus common model [ParseDuration](https://pkg.go.dev/github.com/prometheus/common/model#ParseDuration). Examples: `7d`, `1w`, `168h`.
> **WARNING**: The retention period must be a multiple of the index and chunks table
`period`, configured in the [`period_config`](../../../configure/#period_config)
block. See the [Table Manager](../table-manager/#retention) documentation for
more information.
> **NOTE**: To avoid querying of data beyond the retention period,
`max_look_back_period` config in [`chunk_store_config`](../../../configure/#chunk_store_config) must be set to a value less than or equal to
what is set in `table_manager.retention_period`.
When using S3 or GCS, the bucket storing the chunks needs to have the expiry policy set correctly.
The index will remain intact; you will still be able to see related labels, but you will be unable to retrieve the deleted log content.
For further details on the Table Manager internals, refer to the [Table Manager](../table-manager/) documentation.
Starting with Loki v2.8, TSDB is the Loki index. It is heavily inspired by Prometheus's TSDB [sub-project](https://github.com/prometheus/prometheus/tree/main/tsdb). For a deeper explanation you can read Owen's [blog post](https://lokidex.com/posts/tsdb/). The short version is that this new index is more efficient, faster, and more scalable. It also resides in object storage, like the [boltdb-shipper](../boltdb-shipper/) index which preceded it.
## Example Configuration
We've added a per-tenant limit called `tsdb_max_query_parallelism` in the `limits_config` block.
Previously we would statically shard queries based on the index row shards configured [here](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#period_config).
TSDB does dynamic query sharding based on how much data a query will process.
We additionally store the size (KB) and number of lines for each chunk in the TSDB index, which is then used by the [Query Frontend](../../../get-started/components/#query-frontend) for planning the query.
Based on our experience from operating many Loki clusters, we have configured TSDB to aim for processing 300-600 MB of data per query shard. This means that with TSDB we will be running more, smaller queries.
When scaling down, we must ensure that existing data on the leaving ingesters is flushed to storage to avoid any loss of data.
Consider you have 4 ingesters `ingester-0 ingester-1 ingester-2 ingester-3` and you want to scale down to 2 ingesters. The ingesters that will be shut down according to StatefulSet rules are `ingester-3` and then `ingester-2`.
Hence, before actually scaling down in Kubernetes, port-forward those ingesters and hit the [`/ingester/flush_shutdown`](../../../reference/api/#post-ingesterflush_shutdown) endpoint. This flushes the chunks and removes the ingester from the ring, after which it registers as unready and may be deleted.
After hitting the endpoint for `ingester-2 ingester-3`, scale down the ingesters to 2.
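A sketch of that sequence for one of the ingesters (namespace, pod name, and port are illustrative):

```bash
kubectl --namespace loki port-forward pod/ingester-3 3100:3100 &
curl -X POST http://127.0.0.1:3100/ingester/flush_shutdown
```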
If you have a reverse proxy in front of Loki, that is, between Loki and Grafana, then check any configured timeouts, such as an NGINX proxy read timeout.
- Other causes. To determine if the issue is related to Loki itself or another system such as Grafana or a client-side error,
attempt to run a [LogCLI](../../query/logcli/) query in as direct a manner as you can. For example, if running on virtual machines, run the query on the local machine. If running in a Kubernetes cluster, then port forward the Loki HTTP port, and attempt to run the query there. If you do not get a timeout, then consider these causes:
- Adjust the [Grafana dataproxy timeout](/docs/grafana/latest/administration/configuration/#dataproxy). Configure Grafana with a large enough dataproxy timeout.
- Check timeouts for reverse proxies or load balancers between your client and Grafana. Queries to Grafana are made from your local browser with Grafana serving as a proxy (a dataproxy). Therefore, connections from your client to Grafana must have their timeout configured as well.
Line filter expressions are the fastest way to filter logs once the
log stream selectors have been applied.
Line filter expressions support matching IP addresses. See [Matching IP addresses](../ip/) for details.
### Removing color codes
Using Duration, Number and Bytes will convert the label value prior to comparison.
For instance, `logfmt | duration > 1m and bytes_consumed > 20MB`
If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filter those errors, see the [pipeline errors](../#pipeline-errors) section.
You can chain multiple predicates using `and` and `or`, which respectively express the `and` and `or` binary operations. `and` can be equivalently expressed by a comma, a space or another pipe. Label filters can be placed anywhere in a log pipeline.
> Label filter expressions are the only expression allowed after the unwrap expression. This is mainly to allow filtering errors from the metric extraction.
Label filter expressions support matching IP addresses. See [Matching IP addresses](../ip/) for details.
### Parser expression
Parser expressions can parse and extract labels from the log content. Those extracted labels can then be used for filtering using [label filter expressions](#label-filter-expression) or for [metric aggregations](../metric_queries/).
Extracted label keys are automatically sanitized by all parsers to follow the Prometheus metric name convention. (They can only contain ASCII letters and digits, as well as underscores and colons, and they cannot start with a digit.)
If an extracted label key name already exists in the original log stream, the extracted label key is suffixed with `_extracted` to distinguish between the two.
Loki supports [JSON](#json), [logfmt](#logfmt), [pattern](#pattern), [regexp](#regular-expression) and [unpack](#unpack) parsers.
It's easier to use the predefined parsers `json` and `logfmt` when you can. If you can't, the `pattern` and `regexp` parsers can be used for log lines with an unusual structure. The `pattern` parser is easier and faster to write; it also outperforms the `regexp` parser.
Multiple parsers can be used by a single log pipeline. This is useful for parsing complex logs. There are examples in [Multiple parsers](../query_examples/#examples-that-use-multiple-parsers).
#### JSON
#### unpack
The `unpack` parser parses a JSON log line, unpacking all embedded labels from Promtail's [`pack` stage](../../send-data/promtail/stages/pack/).
**A special property `_entry` will also be used to replace the original log line**.
For example, using `| unpack` with the log line:
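A packed line of this kind might look as follows (the embedded labels are illustrative):

```json
{
  "container": "myapp",
  "pod": "myapp-5d7c8f-abcde",
  "_entry": "original log message"
}
```

Unpacking it restores `original log message` as the log line and exposes `container` and `pod` as labels.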
If we have the following labels `ip=1.1.1.1`, `status=200` and `duration=3000` (ms):
The above query will give us the `line` as `1.1.1.1 200 3`
See [template functions](../template_functions/) to learn about available functions in the template format.
Unwrapped ranges use extracted labels as sample values instead of log lines. However, to select which label will be used within the aggregation, the log query must end with an unwrap expression and optionally a label filter expression to discard [errors](./#pipeline-errors).
The unwrap expression is noted `| unwrap label_identifier` where the label identifier is the label name to use for extracting sample values.
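For example, a query of this shape computes a per-path latency percentile from an extracted `duration` label (the selector and label names are illustrative):

```logql
quantile_over_time(0.99,
  {job="api-server"} | logfmt | __error__="" | unwrap duration [5m]
) by (path)
```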
`without` removes the listed labels from the result vector, while all other labels are preserved in the output. `by` does the opposite and drops labels that are not listed in the `by` clause, even if their label values are identical between all elements of the vector.
See [Unwrap examples](../query_examples/#unwrap-examples) for query examples that use the unwrap expression.
## Built-in aggregation operators
The aggregation operators can either be used to aggregate over all label values or over a set of distinct labels by including a `without` or a `by` clause.
The `without` clause removes the listed labels from the resulting vector, keeping all others.
The `by` clause does the opposite, dropping labels that are not listed in the clause, even if their label values are identical between all elements of the vector.
See [vector aggregation examples](../query_examples/#vector-aggregation-examples) for query examples that use vector aggregation expressions.
set the `X-Scope-OrgID` header to identify the tenant you want to query.
Here is the same example query for the single tenant called `Tenant1`:
```
GET /loki/api/v1/query_range
```
`/loki/api/v1/query_range` is used to do a query over a range of time and
accepts the following query parameters in the URL:
- `query`: The [LogQL](../../query/) query to perform
- `limit`: The max number of entries to return. It defaults to `100`. Only applies to query types which produce a stream(log lines) response.
- `start`: The start time for the query as a nanosecond Unix epoch or another [supported format](#timestamp-formats). Defaults to one hour ago. Loki returns results with timestamp greater or equal to this value.
- `end`: The end time for the query as a nanosecond Unix epoch or another [supported format](#timestamp-formats). Defaults to now. Loki returns results with timestamp lower than this value.
```
GET /loki/api/v1/tail
```
`/loki/api/v1/tail` is a WebSocket endpoint that will stream log messages based on
a query. It accepts the following query parameters in the URL:
- `query`: The [LogQL](../../query/) query to perform
- `delay_for`: The number of seconds to delay retrieving logs to let slow
loggers catch up. Defaults to 0 and cannot be larger than 5.
- `limit`: The max number of entries to return. It defaults to `100`.
JSON post body can be sent in the following format; an example (including structured metadata) is shown below.
You can optionally attach [structured metadata](../../get-started/labels/structured-metadata/) to each log line by adding a JSON object to the end of the log line array.
The JSON object must be a valid JSON object with string keys and string values. The JSON object should not contain any nested object.
The JSON object must be set immediately after the log line. Here is an example of a log entry with some structured metadata attached:
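A sketch of a push payload where a third element on a value entry carries the structured metadata as a string-to-string JSON object (stream labels, timestamp, and metadata are illustrative):

```json
{
  "streams": [
    {
      "stream": { "app": "test" },
      "values": [
        [ "1690000000000000000", "a log line", { "trace_id": "abc123" } ]
      ]
    }
  ]
}
```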
```
GET /metrics
```
`/metrics` returns exposed Prometheus metrics.
URL query parameters:
- `query`: The [LogQL](../../query/) matchers to check (i.e. `{job="foo", env=~".+"}`). This parameter is required.
- `start=<nanosecond Unix epoch>`: Start timestamp. This parameter is required.
- `end=<nanosecond Unix epoch>`: End timestamp. This parameter is required.
- `limit`: How many metric series to return. The parameter is optional, the default is 100.
```
PUT /loki/api/v1/delete
```
Create a new delete request for the authenticated tenant.
The [log entry deletion](../../operations/storage/logs-deletion/) documentation has configuration details.
Log entry deletion is supported _only_ when the BoltDB Shipper is configured for the index store.
```
GET /loki/api/v1/delete
```
List the existing delete requests for the authenticated tenant.
The [log entry deletion](../../operations/storage/logs-deletion/) documentation has configuration details.
Log entry deletion is supported _only_ when the BoltDB Shipper is configured for the index store.
```
DELETE /loki/api/v1/delete
```
Remove a delete request for the authenticated tenant.
The [log entry deletion](../../operations/storage/logs-deletion/) documentation has configuration details.
Loki allows cancellation of delete requests until the requests are picked up for processing. It is controlled by the `delete_request_cancel_period` YAML configuration or the equivalent command line option when invoking Loki. To cancel a delete request that has been picked up for processing or is partially complete, pass the `force=true` query parameter to the API.
## Features and enhancements
* Loki now has the ability to apply [custom retention](../../operations/storage/retention/) based on stream selectors! This will allow much finer control over log retention, all of which is now handled by Loki, no longer requiring the use of object store configs for retention.
* Coming along hand in hand with storing logs for longer durations is the ability to [delete log streams](../../operations/storage/logs-deletion/). The initial implementation lets you submit delete request jobs which will be processed after 24 hours.
* A very exciting new LogQL parser has been introduced: the [pattern parser](../../query/log_queries/#parser-expression). Much simpler and faster than regexp for log lines that have a little bit of structure to them such as the [Common Log Format](https://en.wikipedia.org/wiki/Common_Log_Format). This is now Loki's fastest parser so try it out on any of your log lines!
* Extending on the work of Alerting Rules, Loki now accepts [recording rules](../../alert/#recording-rules). This lets you turn your logs into metrics and push them to Prometheus or any Prometheus compatible remote_write endpoint.
* LogQL can understand [IP addresses](../../query/ip/)! This enables filtering on IP addresses and subnet ranges.
For those of you running Loki as microservices, the following features will significantly improve performance for many operations.
* We created an [index gateway](../../operations/storage/boltdb-shipper/#index-gateway) which takes on the task of downloading the boltdb-shipper index files, allowing you to run your queriers without any local disk requirements. This is really helpful in Kubernetes environments where you can return your queriers from StatefulSets back to Deployments and save a lot of PVC costs and operational headaches.
* Ingester queriers [are now shardable](https://github.com/grafana/loki/pull/3852), this is a significant performance boost for high volume log streams when querying recent data.
* Instant queries can now be [split and sharded](https://github.com/grafana/loki/pull/3984) making them just as fast as range queries.
## Upgrade considerations
The path from 2.2.1 to 2.3.0 should be smooth; as always, read the [Upgrade Guide](../../setup/upgrade/#230) for important upgrade guidance.
* [**Loki no longer requires logs to be sent in perfect chronological order.**](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#accept-out-of-order-writes) Support for out of order logs is one of the most highly requested features for Loki. The strict ordering constraint has been removed.
* Scaling Loki is now easier with a hybrid deployment mode that falls between our single binary and our microservices. The [Simple scalable deployment](../../get-started/deployment-modes/) scales Loki with new `read` and `write` targets. Where previously you would have needed Kubernetes and the microservices approach to start tapping into Loki’s potential, it’s now possible to do this in a simpler way.
* The new [`common` section](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#common) results in a 70% smaller Loki configuration. Pair that with updated defaults and Loki comes out of the box with more appropriate defaults and limits. Check out the [example local configuration](https://github.com/grafana/loki/blob/main/cmd/loki/loki-local-config.yaml) as the new reference for running Loki.
* [**Recording rules**](../../alert/#recording-rules) are no longer an experimental feature. We've given them a more resilient implementation which leverages the existing write ahead log code in Prometheus.
* The new [**Promtail Kafka Consumer**](../../send-data/promtail/scraping/#kafka) can easily get your logs out of Kafka and into Loki.
* There are **nice LogQL enhancements**, thanks to the amazing Loki community. LogQL now has [group_left and group_right](../../query/#many-to-one-and-one-to-many-vector-matches). And, the `label_format` and `line_format` functions now support [working with dates and times](../../query/template_functions/#now).
* Another great community contribution allows Promtail to [**accept ndjson and plaintext log files over HTTP**](../../send-data/promtail/configuration/#loki_push_api).
All in all, about 260 PRs went into Loki 2.4, and we thank everyone for helping us make the best Loki yet.
For a full list of all changes, look at the [CHANGELOG](https://github.com/grafana/loki/blob/main/CHANGELOG.md).
## Upgrade Considerations
Please read the [upgrade guide](../../setup/upgrade/#240) before updating Loki.
We made a lot of changes to Loki’s configuration as part of this release.
We have tried our best to make sure changes are compatible with existing configurations; however, some changes to default limits may impact users who didn't have values explicitly set for these limits in their configuration files.
For a full list of all changes, look at the [CHANGELOG](https://github.com/grafana/loki/blob/main/CHANGELOG.md).
## Upgrade Considerations
As always, please read the [upgrade guide](../../setup/upgrade/#250) before upgrading Loki.
### Changes to the config `split_queries_by_interval`
The most likely impact many people will see is Loki failing to start because of a change in the YAML configuration for `split_queries_by_interval`. It was previously possible to define this value in two places.
Grafana Labs is excited to announce the release of Loki 2.6. Here's a summary of new enhancements and important fixes.
## Features and enhancements
- **Query multiple tenants at once.** We've introduced cross-tenant query federation, which allows you to issue one query to multiple tenants and get a single, consolidated result. This is great for scenarios where you need a global view of logs within your multi-tenant cluster. For more information on how to enable this feature, see [Multi-Tenancy](../../operations/multi-tenancy/).
- **Filter out and delete certain log lines from query results.** This is particularly useful in cases where users may accidentally write sensitive information to Loki that they do not want exposed. Users craft a LogQL query that selects the specific lines they're interested in, and then can choose to either filter out those lines from query results, or permanently delete them from Loki's storage. For more information, see [Logs Deletion](../../operations/storage/logs-deletion/).
- **Improved query performance on instant queries.** Loki now splits instant queries with a large time range (for example, `sum(rate({app="foo"}[6h]))`) into several smaller sub-queries and executes them in parallel. Users don't need to take any action to enjoy this performance improvement; however, they can adjust the number of sub-queries generated by modifying the `split_queries_by_interval` configuration parameter, which currently defaults to `30m`.
- **Support Baidu AI Cloud as a storage backend.** Loki users can now use Baidu Object Storage (BOS) as their storage backend. See [bos_storage_config]({{< relref "../configure/_index.md#bos_storage_config" >}}) for details.
- **Support Baidu AI Cloud as a storage backend.** Loki users can now use Baidu Object Storage (BOS) as their storage backend. See [bos_storage_config](../../configure/#bos_storage_config) for details.
For a full list of all changes, look at the [CHANGELOG](https://github.com/grafana/loki/blob/main/CHANGELOG.md).
## Upgrade Considerations
As always, please read the [upgrade guide]({{< relref "../setup/upgrade#260" >}}) before upgrading Loki.
As always, please read the [upgrade guide](../../setup/upgrade/#260) before upgrading Loki.
## Bug fixes
@ -40,4 +40,4 @@ A summary of some of the more important fixes:
- [PR 6152](https://github.com/grafana/loki/pull/6152) Fixed a scenario where live tailing of logs could cause unbounded ingester memory growth.
- [PR 5685](https://github.com/grafana/loki/pull/5685) Fixed a bug in Loki's push request parser that allowed users to send arbitrary non-string data as a log line. We now test that the pushed values are valid strings and return an error if values are not valid strings.
- [PR 5799](https://github.com/grafana/loki/pull/5799) Fixed incorrect deduplication logic for cases where multiple log entries with the same timestamp exist.
- [PR 5888](https://github.com/grafana/loki/pull/5888) Fixed a bug in the [common configuration]({{< relref "../configure/_index.md#common" >}}) where the `instance_interface_names` setting was getting overwritten by the default ring configuration.
- [PR 5888](https://github.com/grafana/loki/pull/5888) Fixed a bug in the [common configuration](../../configure/#common) where the `instance_interface_names` setting was getting overwritten by the default ring configuration.
@ -14,7 +14,7 @@ Grafana Labs is excited to announce the release of Loki 2.7. Here's a summary of
- **Better Support for Azure Blob Storage** thanks to the ability to use Azure's Service Principal Credentials.
- **Logs can now be pushed from the Loki canary** so you don't have to rely on a scraping service to use the canary.
- **Additional `label_format` fields:** `__timestamp__` and `__line__`.
- **`fifocache` has been renamed.** The in-memory `fifocache` has been renamed to `embedded-cache`. Check the [upgrade guide]({{< relref "../setup/upgrade#270" >}}) for more details.
- **`fifocache` has been renamed.** The in-memory `fifocache` has been renamed to `embedded-cache`. Check the [upgrade guide](../../setup/upgrade/#270) for more details.
- **New HTTP endpoint for Ingester shutdown** that will also delete the ring token.
- **Faster label queries** thanks to new parallelization.
- **Introducing Stream Sharding**, an experimental new feature to help deal with very large streams.
@ -30,7 +30,7 @@ For a full list of all, look at the [CHANGELOG](https://github.com/grafana/loki/
## Upgrade Considerations
As always, please read the [upgrade guide]({{< relref "../setup/upgrade#270" >}}) before upgrading Loki.
As always, please read the [upgrade guide](../../setup/upgrade/#270) before upgrading Loki.
@ -17,10 +17,10 @@ While all clients can be used simultaneously to cover multiple use cases, which
The following clients are developed and supported (for those customers who have purchased a support contract) by Grafana Labs for sending logs to Loki:
- [Grafana Agent](/docs/agent/latest/) - The Grafana Agent is the recommended client for the Grafana stack. It can collect telemetry data for metrics, logs, traces, and continuous profiles and is fully compatible with the Prometheus, OpenTelemetry, and Grafana open source ecosystems.
- [Promtail]({{< relref "./promtail" >}}) - Promtail is the client of choice when you're running Kubernetes, as you can configure it to automatically scrape logs from pods running on the same node that Promtail runs on. Promtail and Prometheus running together in Kubernetes enables powerful debugging: if Prometheus and Promtail use the same labels, users can use tools like Grafana to switch between metrics and logs based on the label set.
- [Promtail](promtail/) - Promtail is the client of choice when you're running Kubernetes, as you can configure it to automatically scrape logs from pods running on the same node that Promtail runs on. Promtail and Prometheus running together in Kubernetes enables powerful debugging: if Prometheus and Promtail use the same labels, users can use tools like Grafana to switch between metrics and logs based on the label set.
Promtail is also the client of choice on bare-metal since it can be configured to tail logs from all files given a host path. It is the easiest way to send logs to Loki from plain-text files (for example, things that log to `/var/log/*.log`).
Lastly, Promtail works well if you want to extract metrics from logs, such as counting the occurrences of a particular message.
- [xk6-loki extension](https://github.com/grafana/xk6-loki) - The k6-loki extension lets you perform [load testing on Loki]({{< relref "./k6" >}}).
- [xk6-loki extension](https://github.com/grafana/xk6-loki) - The k6-loki extension lets you perform [load testing on Loki](k6/).
## Third-party clients
@ -32,14 +32,14 @@ Grafana Labs cannot provide support for third-party clients. Once an issue has b
The following are popular third-party Loki clients:
- [Docker Driver]({{< relref "./docker-driver" >}}) - When using Docker and not Kubernetes, the Docker logging driver for Loki should
- [Docker Driver](docker-driver/) - When using Docker and not Kubernetes, the Docker logging driver for Loki should
be used as it automatically adds labels appropriate to the running container.
- [Fluent Bit]({{< relref "./fluentbit" >}}) - The Fluent Bit plugin is ideal when you already have Fluent Bit deployed
- [Fluent Bit](fluentbit/) - The Fluent Bit plugin is ideal when you already have Fluent Bit deployed
and you already have configured `Parser` and `Filter` plugins.
- [Fluentd]({{< relref "./fluentd" >}}) - The Fluentd plugin is ideal when you already have Fluentd deployed
- [Fluentd](fluentd/) - The Fluentd plugin is ideal when you already have Fluentd deployed
and you already have configured `Parser` and `Filter` plugins. Fluentd also works well for extracting metrics from logs when using its Prometheus plugin.
- [Lambda Promtail]({{< relref "./lambda-promtail" >}}) - This is a workflow combining the Promtail push-api [scrape config]({{< relref "./promtail/configuration#loki_push_api" >}}) and the [lambda-promtail]({{< relref "./lambda-promtail" >}}) AWS Lambda function, which pipes logs from CloudWatch to Loki. This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS Lambda logs in Loki.
- [Logstash]({{< relref "./logstash" >}}) - If you are already using Logstash and/or Beats, this will be the easiest way to start.
- [Lambda Promtail](lambda-promtail/) - This is a workflow combining the Promtail push-api [scrape config](promtail/configuration/#loki_push_api) and the [lambda-promtail](lambda-promtail/) AWS Lambda function, which pipes logs from CloudWatch to Loki. This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS Lambda logs in Loki.
- [Logstash](logstash/) - If you are already using Logstash and/or Beats, this will be the easiest way to start.
By adding our output plugin you can quickly try Loki without making big configuration changes.
These third-party clients also enable sending logs to Loki:
If you have any questions or issues using the Docker plugin, open an issue in
the [Loki repository](https://github.com/grafana/loki/issues).
@ -41,7 +41,7 @@ ID NAME DESCRIPTION ENABLED
ac720b8fcfdb loki Loki Logging Driver true
```
Once you have successfully installed the plugin, you can [configure]({{< relref "./configuration" >}}) it.
Once you have successfully installed the plugin, you can [configure](configuration/) it.
## Upgrade the Docker driver client
@ -73,4 +73,4 @@ The driver keeps all logs in memory and will drop log entries if Loki is not rea
The wait time can be lowered by setting `loki-retries=2`, `loki-max-backoff=800ms`, `loki-timeout=1s` and `keep-file=true`. This way, the daemon will be locked only for a short time and the logs will be persisted locally when the Loki client is unable to reconnect.
To avoid this issue, use the Promtail [Docker target]({{< relref "../../send-data/promtail/configuration#docker" >}}) or [Docker service discovery]({{< relref "../../send-data/promtail/configuration#docker_sd_configs" >}}).
To avoid this issue, use the Promtail [Docker target](../promtail/configuration/#docker) or [Docker service discovery](../promtail/configuration/#docker_sd_configs).
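If you do keep using the driver, one way to apply the retry and timeout settings mentioned above is through a Compose file. A minimal sketch (the service image and Loki URL are placeholders for this example):

```yaml
services:
  app:
    image: grafana/grafana  # placeholder service
    logging:
      driver: loki
      options:
        loki-url: "http://localhost:3100/loki/api/v1/push"
        loki-retries: "2"
        loki-max-backoff: "800ms"
        loki-timeout: "1s"
        keep-file: "true"
```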
@ -13,7 +13,7 @@ each container will use the default driver unless configured otherwise.
## Installation
Before configuring the plugin, [install or upgrade the Grafana Loki Docker Driver Client]({{< relref "../docker-driver" >}}).
Before configuring the plugin, [install or upgrade the Grafana Loki Docker Driver Client](../).
## Change the logging driver for a container
@ -108,7 +108,7 @@ Once deployed, the Grafana service will send its logs to Loki.
## Labels
Loki can receive a set of labels along with each log line. These labels are used to index log entries and query back logs using the [LogQL stream selector]({{< relref "../../query/log_queries#log-stream-selector" >}}).
Loki can receive a set of labels along with each log line. These labels are used to index log entries and query back logs using the [LogQL stream selector](../../../query/log_queries/#log-stream-selector).
By default, the Docker driver will add the following labels to each log line:
@ -211,8 +211,8 @@ To specify additional logging driver options, you can use the --log-opt NAME=VAL
| `loki-min-backoff` | No | `500ms` | The minimum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-max-backoff` | No | `5m` | The maximum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-retries` | No | `10` | The maximum number of retries for a log batch. Setting it to `0` will retry indefinitely. |
| `loki-pipeline-stage-file` | No | | The location of a pipeline stage configuration file ([example](https://github.com/grafana/loki/blob/main/clients/cmd/docker-driver/pipeline-example.yaml)). Pipeline stages allow you to parse log lines to extract more labels; see the [associated documentation]({{< relref "../../send-data/promtail/stages" >}}). |
| `loki-pipeline-stages` | No | | The pipeline stage configuration provided as a string; see [pipeline stages](#pipeline-stages) and the [associated documentation]({{< relref "../../send-data/promtail/stages" >}}). |
| `loki-pipeline-stage-file` | No | | The location of a pipeline stage configuration file ([example](https://github.com/grafana/loki/blob/main/clients/cmd/docker-driver/pipeline-example.yaml)). Pipeline stages allow you to parse log lines to extract more labels; see the [associated documentation](../../promtail/stages/). |
| `loki-pipeline-stages` | No | | The pipeline stage configuration provided as a string; see [pipeline stages](#pipeline-stages) and the [associated documentation](../../promtail/stages/). |
| `loki-relabel-config` | No | | A [Prometheus relabeling configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) allowing you to rename labels; see [relabeling](#relabeling). |
| `loki-tenant-id` | No | | Set the tenant ID (HTTP header `X-Scope-OrgID`) when sending logs to Loki. It can be overridden by a pipeline stage. |
| `loki-tls-ca-file` | No | | Set the path to a custom certificate authority. |
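As a rough illustration of `loki-pipeline-stages`, the stages can be passed inline as a YAML string in a Compose file. The regex and label below are invented for the example, and the Loki URL is a placeholder:

```yaml
services:
  app:
    image: grafana/grafana  # placeholder service
    logging:
      driver: loki
      options:
        loki-url: "http://localhost:3100/loki/api/v1/push"
        loki-pipeline-stages: |
          - regex:
              expression: '(level|lvl|severity)=(?P<level>\w+)'
          - labels:
              level:
```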
@ -122,7 +122,7 @@ If you also want to host your Loki instance inside the cluster install the [offi
### AWS Elastic Container Service (ECS)
You can use the fluent-bit Loki Docker image as a FireLens log router in AWS ECS.
For more information about this, see our [AWS documentation]({{< relref "../promtail/cloud/ecs" >}}).
For more information about this, see our [AWS documentation](../promtail/cloud/ecs/).
### Local
@ -170,7 +170,7 @@ You can also adapt your plugins.conf, removing the need to change the command li
### Labels
Labels are used to [query logs]({{< relref "../../query" >}}), for example `{container_name="nginx", cluster="us-west1"}`. They are usually metadata about the workload producing the log stream (`instance`, `container_name`, `region`, `cluster`, `level`). In Loki, labels are indexed, so you should be cautious when choosing them (high-cardinality label values can have a drastic impact on performance).
Labels are used to [query logs](../../query/), for example `{container_name="nginx", cluster="us-west1"}`. They are usually metadata about the workload producing the log stream (`instance`, `container_name`, `region`, `cluster`, `level`). In Loki, labels are indexed, so you should be cautious when choosing them (high-cardinality label values can have a drastic impact on performance).
You can use `Labels`, `RemoveKeys`, `LabelKeys` and `LabelMapPath` to control how the output plugin performs label extraction.
This image also uses the `LOKI_URL`, `LOKI_USERNAME`, and `LOKI_PASSWORD` environment variables to specify the Loki endpoint, user, and password (you can leave the USERNAME and PASSWORD blank if they're not used).
This image starts an instance of Fluentd that forwards incoming logs to the specified Loki URL. As an alternative, containerized applications can also use the [docker driver plugin]({{< relref "../docker-driver" >}}) to ship logs without needing Fluentd.
This image starts an instance of Fluentd that forwards incoming logs to the specified Loki URL. As an alternative, containerized applications can also use the [docker driver plugin](../docker-driver/) to ship logs without needing Fluentd.
Grafana Loki includes [Terraform](https://www.terraform.io/) and [CloudFormation](https://aws.amazon.com/cloudformation/) for shipping CloudWatch, CloudTrail, VPC Flow Logs and load balancer logs to Loki via a [Lambda function](https://aws.amazon.com/lambda/). This is done via [lambda-promtail](https://github.com/grafana/loki/blob/main/tools/lambda-promtail), which processes CloudWatch events and propagates them to Loki (or a Promtail instance) via the push-api [scrape config]({{< relref "../../send-data/promtail/configuration#loki_push_api" >}}).
Grafana Loki includes [Terraform](https://www.terraform.io/) and [CloudFormation](https://aws.amazon.com/cloudformation/) for shipping CloudWatch, CloudTrail, VPC Flow Logs and load balancer logs to Loki via a [Lambda function](https://aws.amazon.com/lambda/). This is done via [lambda-promtail](https://github.com/grafana/loki/blob/main/tools/lambda-promtail), which processes CloudWatch events and propagates them to Loki (or a Promtail instance) via the push-api [scrape config](../promtail/configuration/#loki_push_api).
## Deployment
@ -89,7 +89,7 @@ To modify an existing CloudFormation stack, use [update-stack](https://docs.aws.
### Ephemeral Jobs
This workflow is intended to be an effective approach for monitoring ephemeral jobs such as those run on AWS Lambda, which are otherwise hard or impossible to monitor via one of the other Loki [clients]({{< relref ".." >}}).
This workflow is intended to be an effective approach for monitoring ephemeral jobs such as those run on AWS Lambda, which are otherwise hard or impossible to monitor via one of the other Loki [clients](../).
Ephemeral jobs can quite easily run afoul of cardinality best practices. During high request load, an AWS Lambda function might balloon in concurrency, creating many log streams in CloudWatch. For this reason, lambda-promtail defaults to **not** keeping the log stream value as a label when propagating the logs to Loki. This is only possible because new versions of Loki no longer have an ingestion ordering constraint on logs within a single stream.
@ -126,7 +126,7 @@ Triggering lambda-promtail through SQS allows handling on-failure recovery of th
## Propagated Labels
Incoming logs can have seven special labels assigned to them which can be used in [relabeling]({{< relref "../../send-data/promtail/configuration#relabel_configs" >}}) or later stages in a Promtail [pipeline]({{< relref "../../send-data/promtail/pipelines" >}}):
Incoming logs can have seven special labels assigned to them which can be used in [relabeling](../promtail/configuration/#relabel_configs) or later stages in a Promtail [pipeline](../promtail/pipelines/):
- `__aws_log_type`: Where this log came from (Cloudwatch, Kinesis or S3).
- `__aws_cloudwatch_log_group`: The associated Cloudwatch Log Group for this log.
@ -38,7 +38,7 @@ Kubernetes API server while `static` usually covers all other use cases.
Just like Prometheus, `promtail` is configured using a `scrape_configs` stanza.
`relabel_configs` allows for fine-grained control of what to ingest, what to
drop, and the final metadata to attach to the log line. Refer to the docs for
[configuring Promtail]({{< relref "./configuration" >}}) for more details.
[configuring Promtail](configuration/) for more details.
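As a sketch of how those pieces fit together (the namespace filter and on-node path layout are assumptions for this example, not requirements):

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Drop logs from a namespace we don't want to ingest.
      - source_labels: ['__meta_kubernetes_namespace']
        regex: kube-system
        action: drop
      # Attach the pod's app label as final metadata on each log line.
      - source_labels: ['__meta_kubernetes_pod_label_app']
        target_label: app
      # Tell Promtail where to find the pod's log files on the node.
      - source_labels: ['__meta_kubernetes_pod_uid', '__meta_kubernetes_pod_container_name']
        separator: /
        target_label: __path__
        replacement: /var/log/pods/*$1/*.log
```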
### Support for compressed files
@ -106,7 +106,7 @@ Important details are:
## Loki Push API
Promtail can also be configured to receive logs from another Promtail or any Loki client by exposing the [Loki Push API]({{< relref "../../reference/api#push-log-entries-to-loki" >}}) with the [loki_push_api]({{< relref "./configuration#loki_push_api" >}}) scrape config.
Promtail can also be configured to receive logs from another Promtail or any Loki client by exposing the [Loki Push API](../../reference/api/#push-log-entries-to-loki) with the [loki_push_api](configuration/#loki_push_api) scrape config.
There are a few instances where this might be helpful:
@ -116,12 +116,12 @@ There are a few instances where this might be helpful:
## Receiving logs From Syslog
When the [Syslog Target]({{< relref "./configuration#syslog" >}}) is being used, logs
When the [Syslog Target](configuration/#syslog) is being used, logs
can be written with the syslog protocol to the configured port.
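A minimal sketch of such a scrape config (the listen port and label values are assumptions for the example):

```yaml
scrape_configs:
  - job_name: syslog
    syslog:
      listen_address: 0.0.0.0:1514
      labels:
        job: syslog
    relabel_configs:
      # Keep the sending host as a queryable label.
      - source_labels: ['__syslog_message_hostname']
        target_label: host
```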
## AWS
If you need to run Promtail on Amazon Web Services EC2 instances, you can use our [detailed tutorial]({{< relref "./cloud/ec2" >}}).
If you need to run Promtail on Amazon Web Services EC2 instances, you can use our [detailed tutorial](cloud/ec2/).
## Labeling and parsing
@ -134,7 +134,7 @@ To allow more sophisticated filtering afterwards, Promtail allows to set labels
not only from service discovery, but also based on the contents of each log
line. The `pipeline_stages` can be used to add or update labels, correct the
timestamp, or re-write log lines entirely. Refer to the documentation for
[pipelines]({{< relref "./pipelines" >}}) for more details.
[pipelines](pipelines/) for more details.
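For instance, a small `pipeline_stages` sketch (the log format, path, and label names are assumptions for the example) that promotes a level to a label and corrects the timestamp could look like this:

```yaml
scrape_configs:
  - job_name: app
    static_configs:
      - targets: [localhost]
        labels:
          job: app
          __path__: /var/log/app/*.log
    pipeline_stages:
      # Extract fields from lines shaped like: "2024-01-02T03:04:05Z INFO something happened"
      - regex:
          expression: '^(?P<ts>\S+) (?P<level>\w+) (?P<msg>.*)$'
      # Promote the extracted level to a label.
      - labels:
          level:
      # Use the extracted timestamp instead of the time of ingestion.
      - timestamp:
          source: ts
          format: RFC3339
```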
## Shipping
@ -160,7 +160,7 @@ This endpoint returns 200 when Promtail is up and running, and there's at least
### `GET /metrics`
This endpoint returns Promtail metrics for Prometheus. Refer to
[Observing Grafana Loki]({{< relref "../../operations/observability" >}}) for the list
[Observing Grafana Loki](../../operations/observability/) for the list
Sending logs from cloud services to Grafana Loki is a little different depending on the AWS service you are using. The following tutorials walk you through configuring cloud services to send logs to Loki.
In this tutorial we're going to set up [Promtail]({{< relref "../../../../send-data/promtail" >}}) on an AWS EC2 instance and configure it to send all its logs to a Grafana Loki instance.
In this tutorial we're going to set up [Promtail](../../) on an AWS EC2 instance and configure it to send all its logs to a Grafana Loki instance.
First, let's make sure we're running as root by using `sudo -s`.
Next, we'll download, install, and give executable permissions to [Promtail]({{< relref "../../../../send-data/promtail" >}}).
Next, we'll download, install, and give executable permissions to [Promtail](../../).
```bash
mkdir /opt/promtail && cd /opt/promtail
@ -91,7 +91,7 @@ unzip "promtail-linux-amd64.zip"
chmod a+x "promtail-linux-amd64"
```
Now we're going to download the [Promtail configuration]({{< relref "../../../../send-data/promtail" >}}) file below and edit it. Don't worry, we will explain what each of these settings means.
Now we're going to download the [Promtail configuration](../../) file below and edit it. Don't worry, we will explain what each of these settings means.
The file is also available as a gist at [cyriltovena/promtail-ec2.yaml][config gist].
```bash
@ -134,11 +134,11 @@ scrape_configs:
target_label: __host__
```
The **server** section tells Promtail to bind its HTTP server to port 3100. Promtail serves HTTP pages for [troubleshooting]({{< relref "../../../../send-data/promtail/troubleshooting" >}}) service discovery and targets.
The **server** section tells Promtail to bind its HTTP server to port 3100. Promtail serves HTTP pages for [troubleshooting](../../troubleshooting/) service discovery and targets.
The **clients** section allows you to target your Loki instance. If you're using Grafana Cloud, simply replace `<user id>` and `<api secret>` with your credentials. Otherwise, just replace the whole URL with the URL of your custom Loki instance (for example, `http://my-loki-instance.my-org.com/loki/api/v1/push`).
[Promtail]({{< relref "../../../../send-data/promtail" >}}) uses the same [Prometheus **scrape_configs**][prometheus scrape config]. This means that if you already run a Prometheus instance, the config will be very similar and easy to grasp.
[Promtail](../../) uses the same [Prometheus **scrape_configs**][prometheus scrape config]. This means that if you already run a Prometheus instance, the config will be very similar and easy to grasp.
Since we're running on AWS EC2, we want to use EC2 service discovery. This allows us to scrape metadata about the current instance (and even your custom tags) and attach it to our logs, which makes managing and querying logs much easier.
You can now verify in Grafana that Loki has correctly received your instance logs by using the [LogQL]({{< relref "../../../../query" >}}) query `{zone="us-east-2"}`.
You can now verify in Grafana that Loki has correctly received your instance logs by using the [LogQL](../../../../query/) query `{zone="us-east-2"}`.
![Grafana Loki logs][ec2 logs]
@ -259,7 +259,7 @@ Note that you can use [relabeling][relabeling] to convert systemd labels to matc
That's it. Save the config, and you can `reboot` the machine (or simply restart the service with `systemctl restart promtail.service`).
Let's head back to Grafana and verify that your Promtail logs are available in Grafana by using the [LogQL]({{< relref "../../../../query" >}}) query `{unit="promtail.service"}` in Explore. Finally, make sure to check out [live tailing][live tailing] to see logs appearing as they are ingested in Loki.
Let's head back to Grafana and verify that your Promtail logs are available in Grafana by using the [LogQL](../../../../query/) query `{unit="promtail.service"}` in Explore. Finally, make sure to check out [live tailing][live tailing] to see logs appearing as they are ingested in Loki.
@ -45,7 +45,7 @@ Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-fd1
## Adding Promtail DaemonSet
To ship all your pods' logs, we're going to set up [Promtail]({{< relref "../../../../send-data/promtail" >}}) as a DaemonSet in our cluster. This means it will run on each node of the cluster. We will then configure it to find the logs of your containers on the host.
To ship all your pods' logs, we're going to set up [Promtail](../../) as a DaemonSet in our cluster. This means it will run on each node of the cluster. We will then configure it to find the logs of your containers on the host.
What's nice about Promtail is that it uses the same [service discovery as Prometheus][prometheus conf], so you should make sure the `scrape_configs` of Promtail matches the Prometheus one. Not only is this simpler to configure, it also means metrics and logs will have the same metadata (labels) attached by the Prometheus service discovery. When querying in Grafana, you will be able to correlate metrics and logs very quickly; you can read more about this in our [blog post][correlate].
@ -236,7 +236,7 @@ We need a service account with the following permissions:
This enables Promtail to read log entries from the pubsub subscription created earlier.
You can find an example Promtail scrape config for `gcplog` [here]({{< relref "../../scraping#gcp-log-scraping" >}}).
You can find an example Promtail scrape config for `gcplog` [here](../../scraping/#gcp-log-scraping).
If you are scraping logs from multiple GCP projects, then this service account should have the above permissions in all the projects you are trying to scrape.
@ -40,8 +40,8 @@ defined by the schema below. Brackets indicate that a parameter is optional. For
non-list parameters the value is set to the specified default.
For more detailed information on configuring how to discover and scrape logs from
targets, see [Scraping]({{< relref "./scraping" >}}). For more information on transforming logs
from scraped targets, see [Pipelines]({{< relref "./pipelines" >}}).
targets, see [Scraping](../scraping/). For more information on transforming logs
from scraped targets, see [Pipelines](../pipelines/).
## Reload at runtime
@ -458,7 +458,7 @@ docker_sd_configs:
### pipeline_stages
[Pipeline]({{< relref "./pipelines" >}}) stages are used to transform log entries and their labels. The pipeline is executed after the discovery process finishes. The `pipeline_stages` object consists of a list of stages which correspond to the items listed below.
[Pipeline](../pipelines/) stages are used to transform log entries and their labels. The pipeline is executed after the discovery process finishes. The `pipeline_stages` object consists of a list of stages which correspond to the items listed below.
In most cases, you extract data from logs with `regex` or `json` stages. The extracted data is transformed into a temporary map object. The data can then be used by Promtail, for example as values for `labels` or as an `output`. Additionally, any other stage aside from `docker` and `cri` can access the extracted data.
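As a small sketch of that flow (the field names are invented for the example): a `json` stage fills the extracted map, a `labels` stage promotes one value, and an `output` stage rewrites the stored line.

```yaml
pipeline_stages:
  # Parse the line as JSON and copy selected fields into the extracted map.
  - json:
      expressions:
        level: level
        message: msg
  # Use an extracted value as a label...
  - labels:
      level:
  # ...and replace the stored log line with just the message field.
  - output:
      source: message
```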
@ -604,7 +604,7 @@ template:
#### match
The match stage conditionally executes a set of stages when a log entry matches
a configurable [LogQL]({{< relref "../../query" >}}) stream selector.
a configurable [LogQL](../../../query/) stream selector.
```yaml
match:
@ -874,8 +874,8 @@ Promtail needs to wait for the next message to catch multi-line messages,
therefore delays between messages can occur.
See recommended output configurations for
[syslog-ng]({{< relref "./scraping#syslog-ng-output-configuration" >}}) and
[rsyslog]({{< relref "./scraping#rsyslog-output-configuration" >}}). Both configurations enable
[syslog-ng](../scraping/#syslog-ng-output-configuration) and
[rsyslog](../scraping/#rsyslog-output-configuration). Both configurations enable
IETF Syslog with octet-counting.
You may need to increase the open files limit for the Promtail process
@ -929,7 +929,7 @@ max_message_length: <int>
### loki_push_api
The `loki_push_api` block configures Promtail to expose a [Loki push API]({{< relref "../../reference/api#push-log-entries-to-loki" >}}) server.
The `loki_push_api` block configures Promtail to expose a [Loki push API](../../../reference/api/#push-log-entries-to-loki) server.
Each job configured with a `loki_push_api` will expose this API and will require a separate port.
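A rough sketch of such a job (the ports and the label are placeholders):

```yaml
scrape_configs:
  - job_name: push1
    loki_push_api:
      server:
        http_listen_port: 3500
        grpc_listen_port: 3600
      labels:
        pushserver: push1
      use_incoming_timestamp: true
```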
@ -1247,7 +1247,7 @@ Each GELF message received will be encoded in JSON as the log line. For example:
{"version":"1.1","host":"example.org","short_message":"A short message","timestamp":1231231123,"level":5,"_some_extra":"extra"}
```
You can leverage [pipeline stages]({{< relref "./stages" >}}) with the GELF target,
You can leverage [pipeline stages](../stages/) with the GELF target,
if, for example, you want to parse the log line and extract more labels or change the log line format.
```yaml
@ -1404,7 +1404,7 @@ All Cloudflare logs are in JSON. Here is an example:
}
```
You can leverage [pipeline stages]({{< relref "./stages" >}}) if, for example, you want to parse the JSON log line and extract more labels or change the log line format.
You can leverage [pipeline stages](../stages/) if, for example, you want to parse the JSON log line and extract more labels or change the log line format.
### heroku_drain
@ -2109,7 +2109,7 @@ The `tracing` block configures tracing for Jaeger. Currently, limited to configu
## Example Docker Config
It's fairly difficult to tail Docker files on a standalone machine because they are in different locations for every OS. We recommend the [Docker logging driver]({{< relref "../../send-data/docker-driver" >}}) for local Docker installs or Docker Compose.
It's fairly difficult to tail Docker files on a standalone machine because they are in different locations for every OS. We recommend the [Docker logging driver](../../docker-driver/) for local Docker installs or Docker Compose.
If running in a Kubernetes environment, you should look at the defined configs which are in [helm](https://github.com/grafana/helm-charts/blob/main/charts/promtail/templates/configmap.yaml) and [jsonnet](https://github.com/grafana/loki/blob/main/production/ksonnet/promtail/scrape_config.libsonnet). These leverage the Prometheus service discovery libraries (and give Promtail its name) for automatically finding and tailing pods. The jsonnet config explains with comments what each section is for.
@ -87,7 +87,7 @@ Here, the `create` mode works as explained in (2) above. The `create` mode is op
### Kubernetes
[Kubernetes Service Discovery in Promtail]({{< relref "../scraping#kubernetes-discovery" >}}) also uses file-based scraping. This means logs from your pods are stored on the nodes, and Promtail scrapes the pod logs from the node's files.
[Kubernetes Service Discovery in Promtail](../scraping/#kubernetes-discovery) also uses file-based scraping. This means logs from your pods are stored on the nodes, and Promtail scrapes the pod logs from the node's files.
You can [configure](https://kubernetes.io/docs/concepts/cluster-administration/logging/#log-rotation) the `kubelet` process running on each node to manage log rotation via two configuration settings.
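For reference, a sketch of those two settings in a `KubeletConfiguration` file (the values shown are only examples):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate a container's log file once it reaches this size...
containerLogMaxSize: 10Mi
# ...and keep at most this many rotated files per container.
containerLogMaxFiles: 5
```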
@ -144,4 +144,4 @@ If neither `kubelet` nor `CRI` is configured for rotating logs, then the `logrot
Promtail uses `polling` to watch for file changes. A `polling` mechanism combined with a [copy and truncate](#copy-and-truncate) log rotation may result in losing some logs. As explained earlier in this topic, this happens when the file is truncated before Promtail reads all the log lines from such a file.
Therefore, for a long-term solution, we strongly recommend changing the log rotation strategy to [rename and create](#rename-and-create). Alternatively, as a workaround in the short term, you can tweak the Promtail client's `batchsize` [config]({{< relref "../configuration#clients" >}}) to set higher values (like 5M or 8M). This gives Promtail more room to read log lines without frequently waiting for push responses from the Loki server.
Therefore, for a long-term solution, we strongly recommend changing the log rotation strategy to [rename and create](#rename-and-create). Alternatively, as a workaround in the short term, you can tweak the Promtail client's `batchsize` [config](../configuration/#clients) to set higher values (like 5M or 8M). This gives Promtail more room to read log lines without frequently waiting for push responses from the Loki server.
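A sketch of that workaround in the Promtail client config (the URL is a placeholder; `batchsize` is expressed in bytes):

```yaml
clients:
  - url: http://loki:3100/loki/api/v1/push
    # Roughly 5M, giving Promtail a bigger buffer between push requests.
    batchsize: 5242880
```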
See [Relabeling](#relabeling) for more information. For details on how to configure the service discovery, see the [Kubernetes Service Discovery configuration]({{< relref "./configuration#kubernetes_sd_config" >}}).
See [Relabeling](#relabeling) for more information. For details on how to configure the service discovery, see the [Kubernetes Service Discovery configuration](../configuration/#kubernetes_sd_config).
## Journal Scraping (Linux Only)
@ -197,9 +197,9 @@ You can relabel default labels via [Relabeling](#relabeling) if required.
Providing a path to a bookmark is mandatory; it will be used to persist the last event processed and allow
resuming the target without skipping logs.
Read the [configuration]({{< relref "./configuration#windows_events" >}}) section for more information.
Read the [configuration](../configuration/#windows_events) section for more information.
See the [eventlogmessage]({{< relref "./stages/eventlogmessage" >}}) stage for extracting
See the [eventlogmessage](../stages/eventlogmessage/) stage for extracting
data from the `message`.
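Putting those pieces together, a sketch (the channel name and bookmark location are assumptions for the example):

```yaml
scrape_configs:
  - job_name: windows-application
    windows_events:
      eventlog_name: Application
      bookmark_path: "./bookmark-app.xml"
      labels:
        job: windows_events
    pipeline_stages:
      # The target emits JSON; pull the message field into the extracted map first...
      - json:
          expressions:
            message: message
      # ...then expand the key/value pairs embedded in the event message.
      - eventlogmessage:
          source: message
```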
## GCP Log scraping
@ -232,7 +232,7 @@ Here `project_id` and `subscription` are the only required fields.
- `project_id` is the GCP project id.
- `subscription` is the GCP pubsub subscription from which Promtail can consume log entries.
Before using the `gcplog` target, GCP should be [configured]({{< relref "./cloud/gcp" >}}) with a pubsub subscription to receive logs from.
Before using the `gcplog` target, GCP should be [configured](../cloud/gcp/) with a pubsub subscription to receive logs from.
It also supports `relabeling` and `pipeline` stages just like other targets.
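A minimal pull-based sketch (the project and subscription names are placeholders):

```yaml
scrape_configs:
  - job_name: gcplog
    gcplog:
      subscription_type: pull
      project_id: my-gcp-project
      subscription: my-loki-subscription
      use_incoming_timestamp: false
      labels:
        job: gcplog
```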
@ -268,7 +268,7 @@ section. This server exposes the single endpoint `POST /gcp/api/v1/push`, respon
For Google's PubSub to be able to send logs, **the Promtail server must be publicly accessible and support HTTPS**. For that, Promtail can be deployed
as part of a larger orchestration service like Kubernetes, which can handle HTTPS traffic through an ingress, or it can be hosted behind
a proxy/gateway, offloading the HTTPS to that component and routing the request to Promtail. Once that's solved, GCP can be [configured]({{< relref "./cloud/gcp" >}})
a proxy/gateway, offloading the HTTPS to that component and routing the request to Promtail. Once that's solved, GCP can be [configured](../cloud/gcp/)
to send logs to Promtail.
It also supports `relabeling` and `pipeline` stages.
@ -378,7 +378,7 @@ Targets can be configured using the `azure_event_hubs` stanza:
```
Only `fully_qualified_namespace`, `connection_string` and `event_hubs` are required fields.
Read the [configuration]({{< relref "./configuration#azure-event-hubs" >}}) section for more information.
Read the [configuration](../configuration/#azure-event-hubs) section for more information.
## Kafka
@ -417,7 +417,7 @@ scrape_configs:
```
Only the `brokers` and `topics` are required.
Read the [configuration]({{< relref "./configuration#kafka" >}}) section for more information.
Read the [configuration](../configuration/#kafka) section for more information.
## GELF
@ -467,7 +467,7 @@ scrape_configs:
```
Only `api_token` and `zone_id` are required.
Refer to the [Cloudflare]({{< relref "./configuration#cloudflare" >}}) configuration section for details.
Refer to the [Cloudflare](../configuration/#cloudflare) configuration section for details.
## Heroku Drain
Promtail supports receiving logs from a Heroku application by using a [Heroku HTTPS Drain](https://devcenter.heroku.com/articles/log-drains#https-drains).
@ -494,7 +494,7 @@ Configuration is specified in a`heroku_drain` block within the Promtail `scrape_
```
Within the `scrape_configs` configuration for a Heroku Drain target, the `job_name` must be a Prometheus-compatible [metric name](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
The [server]({{< relref "./configuration#server" >}}) section configures the HTTP server created for receiving logs.
The [server](../configuration/#server) section configures the HTTP server created for receiving logs.
`labels` defines a static set of label values added to each received log entry. `use_incoming_timestamp` can be used to pass
the timestamp received from Heroku.
@ -598,5 +598,5 @@ clients:
- [ <client_option> ]
```
Refer to [`client_config`]({{< relref "./configuration#clients" >}}) from the Promtail
Refer to [`client_config`](../configuration/#clients) from the Promtail
Configuration reference for all available options.
**NOTE** For `older_than` to work, you must be using the [timestamp]({{< relref "./timestamp" >}}) stage to set the timestamp from the ingested log line _before_ applying the `drop` stage.
**NOTE** For `older_than` to work, you must be using the [timestamp](../timestamp/) stage to set the timestamp from the ingested log line _before_ applying the `drop` stage.
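As a sketch of that ordering (the field name, timestamp format, and threshold are illustrative):

```yaml
pipeline_stages:
  # Set the entry's timestamp from a field in the log line first...
  - json:
      expressions:
        time: time
  - timestamp:
      source: time
      format: RFC3339
  # ...so that older_than compares against the log's own timestamp.
  - drop:
      older_than: 24h
      drop_counter_reason: line_too_old
```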
Promtail can be configured to print log stream entries instead of sending them to Loki.
This can be used in combination with [piping data](#pipe-data-to-promtail) to debug or troubleshoot Promtail log parsing.
In dry run mode, Promtail still supports reading from a [positions]({{< relref "../configuration#positions" >}}) file; however, no updates will be made to the targeted file. This ensures you can easily retry the same set of lines.
In dry run mode, Promtail still supports reading from a [positions](../configuration/#positions) file; however, no updates will be made to the targeted file. This ensures you can easily retry the same set of lines.
To start Promtail in dry run mode, use the `--dry-run` flag as shown in the example below:
@ -79,9 +79,9 @@ This will add labels `k1` and `k2` with respective values `v1` and `v2`.
In pipe mode, Promtail also supports file configuration using `--config.file`; however, note that the positions config is not used and
only **the first scrape config is used**.
[`static_configs:`]({{< relref "../configuration" >}}) can be used to provide static labels, although the targets property is ignored.
[`static_configs:`](../configuration/) can be used to provide static labels, although the targets property is ignored.
If you don't provide any [`scrape_config:`]({{< relref "../configuration#scrape_configs" >}}), a default one is used which automatically adds the following default labels: `{job="stdin",hostname="<detected_hostname>"}`.
If you don't provide any [`scrape_config:`](../configuration/#scrape_configs), a default one is used which automatically adds the following default labels: `{job="stdin",hostname="<detected_hostname>"}`.
For example, you could use the config below to parse and add the label `level` to all your piped logs:
@ -25,7 +25,7 @@ This chart includes dashboards for monitoring Loki. These require the scrape con
## Canary
This chart installs the [canary]({{< relref "../../../operations/loki-canary" >}}) and its alerts by default. This is another tool to verify the Loki deployment is in a healthy state. It can be disabled with `monitoring.lokiCanary.enabled=false`.
This chart installs the [canary](../../../../operations/loki-canary/) and its alerts by default. This is another tool to verify the Loki deployment is in a healthy state. It can be disabled with `monitoring.lokiCanary.enabled=false`.
The [scalable]({{< relref "../install-scalable" >}}) installation requires a managed object store such as AWS S3 or Google Cloud Storage or a self-hosted store such as Minio. The [single binary]({{< relref "../install-monolithic" >}}) installation can only use the filesystem for storage.
The [scalable](../install-scalable/) installation requires a managed object store such as AWS S3 or Google Cloud Storage or a self-hosted store such as Minio. The [single binary](../install-monolithic/) installation can only use the filesystem for storage.
This guide assumes Loki will be installed in one of the modes above and that a `values.yaml` has been created.
@ -38,7 +38,7 @@ This guide assumes Loki will be installed in one of the modes above and that a `
**To grant access to S3 via an IAM role without providing credentials:**
1. Provision an IAM role, policy and S3 bucket as described in [Storage]({{< relref "../../../../storage#aws-deployment-s3-single-store" >}}).
1. Provision an IAM role, policy and S3 bucket as described in [Storage](../../../../storage/#aws-deployment-s3-single-store).
- If the Terraform module was used, note the annotation emitted by `terraform output -raw annotation`.
1. Add the IAM role annotation to the service account in `values.yaml`:
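For example (a sketch only; the role ARN is a placeholder and the exact values layout may differ between chart versions):

```yaml
serviceAccount:
  annotations:
    "eks.amazonaws.com/role-arn": "arn:aws:iam::123456789012:role/loki-s3-access"
```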
This Helm Chart installation runs the Grafana Loki *single binary* within a Kubernetes cluster.
If you set the `singleBinary.replicas` value to 1, this chart configures Loki to run the `all` target in a [monolithic mode]({{< relref "../../../../get-started/deployment-modes#monolithic-mode" >}}), designed to work with filesystem storage. It will also configure meta-monitoring of metrics and logs.
If you set the `singleBinary.replicas` value to 1, this chart configures Loki to run the `all` target in a [monolithic mode](../../../../get-started/deployment-modes/#monolithic-mode), designed to work with filesystem storage. It will also configure meta-monitoring of metrics and logs.
If you set the `singleBinary.replicas` value to 2 or more, this chart configures Loki to run a *single binary* in a replicated, highly available mode. When running replicas of a single binary, you must configure object storage.
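As a sketch of the single-replica, filesystem-backed case (key names follow the chart's documented layout, but verify against your chart version):

```yaml
loki:
  commonConfig:
    replication_factor: 1
  storage:
    type: filesystem
singleBinary:
  replicas: 1
```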
This Helm Chart installation runs the Grafana Loki cluster within a Kubernetes cluster.
If object storage is configured, this chart configures Loki to run `read` and `write` targets in a [scalable mode]({{< relref "../../../../get-started/deployment-modes#simple-scalable" >}}), highly available architecture (3 replicas of each) designed to work with AWS S3 object storage. It will also configure meta-monitoring of metrics and logs.
If object storage is configured, this chart configures Loki to run `read` and `write` targets in a [scalable mode](../../../../get-started/deployment-modes/#simple-scalable), highly available architecture (3 replicas of each) designed to work with AWS S3 object storage. It will also configure meta-monitoring of metrics and logs.
It is not possible to run the scalable mode with the `filesystem` storage.
@ -24,7 +24,7 @@ It is not possible to run the scalable mode with the `filesystem` storage.
- Helm 3 or above. See [Installing Helm](https://helm.sh/docs/intro/install/).
- A running Kubernetes cluster.
- A Prometheus operator installation, if meta-monitoring is to be used.
- Optionally, a Memcached deployment for better performance. Consult the [caching section]({{< relref "../../../../operations/caching" >}}) on how to configure Memcached.
- Optionally, a Memcached deployment for better performance. Consult the [caching section](../../../../operations/caching/) on how to configure Memcached.
**To deploy Loki in scalable mode:**
@ -61,7 +61,7 @@ It is not possible to run the scalable mode with the `filesystem` storage.
insecure: false
```
Consult the [Reference]({{< relref "../reference" >}}) for configuring other storage providers.
Consult the [Reference](../reference/) for configuring other storage providers.
- If you're just trying things out, you can use the following configuration instead, which sets MinIO as storage:
@ -34,7 +34,7 @@ The configuration specifies running Loki as a single binary.
1. Navigate to the [release page](https://github.com/grafana/loki/releases/).
2. Scroll down to the Assets section under the version that you want to install.
3. Download the Loki and Promtail .zip files that correspond to your system.
**Note:** Do not download LogCLI or Loki Canary at this time. `LogCLI` allows you to run Loki queries in a command line interface. [Loki Canary]({{< relref "../../operations/loki-canary" >}}) is a tool to audit Loki performance.
**Note:** Do not download LogCLI or Loki Canary at this time. `LogCLI` allows you to run Loki queries in a command line interface. [Loki Canary](../../../operations/loki-canary/) is a tool to audit Loki performance.
4. Unzip the package contents into the same directory. This is where the two programs will run.
5. In the command line, change directory (`cd` on most systems) to the directory with Loki and Promtail. Copy and paste the commands below into your command line to download generic configuration files.
**Note:** Use the corresponding Git refs that match your downloaded Loki version to get the correct configuration file. For example, if you are using Loki version 2.9.4, you need to use the `https://raw.githubusercontent.com/grafana/loki/v2.9.4/cmd/loki/loki-local-config.yaml` URL to download the configuration file that corresponds to the Loki version you aim to run.
@ -59,7 +59,7 @@ The configuration specifies running Loki as a single binary.
Loki runs and displays Loki logs in your command line and on http://localhost:3100/metrics.
The next step will be running an agent to send logs to Loki.
To do so with Promtail, refer to the [Promtail configuration]({{< relref "../../send-data/promtail" >}}).
To do so with Promtail, refer to the [Promtail configuration](../../../send-data/promtail/).
This section contains instructions for migrating from one Loki implementation to another.
- [Migrate]({{< relref "./migrate-from-distributed" >}}) from the `Loki-distributed` Helm chart to the `loki` Helm chart.
- [Migrate]({{< relref "./migrate-to-three-scalable-targets" >}}) from the two target Helm chart to the three target scalable configuration Helm chart.
- [Migrate](migrate-from-distributed/) from the `Loki-distributed` Helm chart to the `loki` Helm chart.
- [Migrate](migrate-to-three-scalable-targets/) from the two target Helm chart to the three target scalable configuration Helm chart.
[TSDB]({{< relref "../../../operations/storage/tsdb" >}}) is the recommended index type for Loki and is where the current development lies.
If you are running Loki with [boltdb-shipper]({{< relref "../../../operations/storage/boltdb-shipper" >}}) or any of the [legacy index types](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#index-storage) that have been deprecated,
[TSDB](../../../operations/storage/tsdb/) is the recommended index type for Loki and is where the current development lies.
If you are running Loki with [boltdb-shipper](../../../operations/storage/boltdb-shipper/) or any of the [legacy index types](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#index-storage) that have been deprecated,
we strongly recommend migrating to TSDB.
@ -68,7 +68,7 @@ storage_config:
### Run compactor
We strongly recommend running the [compactor]({{< relref "../../../operations/storage/retention#compactor" >}}) when using the TSDB index. It is responsible for running compaction and retention on the TSDB index.
We strongly recommend running the [compactor](../../../operations/storage/retention/#compactor) when using the TSDB index. It is responsible for running compaction and retention on the TSDB index.
Not running index compaction will result in sub-optimal query performance.
Please refer to the [compactor section]({{< relref "../../../operations/storage/retention#compactor" >}}) for more information and configuration examples.
Please refer to the [compactor section](../../../operations/storage/retention/#compactor) for more information and configuration examples.
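A minimal compactor sketch (the working directory is a placeholder; depending on your Loki version, retention may also require pointing the compactor at your object store):

```yaml
compactor:
  working_directory: /loki/compactor
  compaction_interval: 10m
  retention_enabled: true
  # Some versions also require an object store setting here
  # (for example shared_store or delete_request_store).
```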
See the [retention docs]({{< relref "../../operations/storage/retention" >}}) for more info.
See the [retention docs](../../operations/storage/retention/) for more info.
#### Log messages on startup: proto: duplicate proto type registered:
@ -927,7 +927,7 @@ If you happen to have `results_cache.max_freshness` set, use `limits_config.max_
### Promtail config removed
The long-deprecated `entry_parser` config in Promtail has been removed; use [pipeline_stages]({{< relref "../../send-data/promtail/configuration#pipeline_stages" >}}) instead.
The long-deprecated `entry_parser` config in Promtail has been removed; use [pipeline_stages](../../send-data/promtail/configuration/#pipeline_stages) instead.
### Upgrading schema to use boltdb-shipper and/or v11 schema
@ -961,7 +961,7 @@ schema_config:
④ Make sure this matches your existing config (e.g. maybe you were using gcs for your object_store)
⑤ 24h is required for boltdb-shipper
There are more examples on the [Storage description page]({{< relref "../../storage/_index.md#examples" >}}), including the information you need to set up the `storage` section for boltdb-shipper.
There are more examples on the [Storage description page](../../storage/#examples), including the information you need to set up the `storage` section for boltdb-shipper.
In the [cache_config]({{< relref "../../configure#cache_config" >}}), `defaul_validity` has changed to `default_validity`.
In the [cache_config](../../configure/#cache_config), `defaul_validity` has changed to `default_validity`.
If you configured your schema via arguments and not a config file, this is no longer supported. This is not something we had ever provided as an option via docs, and it is unlikely anyone is doing it, but it is worth mentioning.
@ -16,11 +16,11 @@ object storage (or filesystem) for chunk data and NoSQL/Key-Value databases for
Loki 2.0 introduced an index mechanism named 'boltdb-shipper', which is what we now call [Single Store](#single-store).
This type only requires one store, the object store, for both the index and chunks.
More detailed information can be found on the [operations page]({{< relref "../operations/storage/boltdb-shipper.md" >}}).
More detailed information can be found on the [operations page](../operations/storage/boltdb-shipper/).
Loki 2.8 adds TSDB as a new mode for the Single Store and is now the recommended way to persist data in Loki.
Some more storage details can also be found in the [operations section]({{< relref "../operations/storage/_index.md" >}}).
Some more storage details can also be found in the [operations section](../operations/storage/).
## Single Store
@ -28,7 +28,7 @@ Single Store refers to using object storage as the storage medium for both Loki'
### TSDB (recommended)
Starting in Loki 2.8, the [TSDB index store]({{< relref "../operations/storage/tsdb" >}}) improves query performance, reduces TCO, and has feature parity with "boltdb-shipper". TSDB is the recommended index store for Loki 2.8 and newer.
Starting in Loki 2.8, the [TSDB index store](../operations/storage/tsdb/) improves query performance, reduces TCO, and has feature parity with "boltdb-shipper". TSDB is the recommended index store for Loki 2.8 and newer.
### BoltDB (deprecated)
@ -88,7 +88,7 @@ Cassandra is a popular database and one of Loki's possible chunk stores and is p
### Cassandra (deprecated)
Cassandra can also be utilized for the index store and aside from the [boltdb-shipper]({{< relref "../operations/storage/boltdb-shipper" >}}), it's the only non-cloud offering that can be used for the index that's horizontally scalable and has configurable replication. It's a good candidate when you already run Cassandra, are running on-prem, or do not wish to use a managed cloud offering.
Cassandra can also be utilized for the index store and aside from the [boltdb-shipper](../operations/storage/boltdb-shipper/), it's the only non-cloud offering that can be used for the index that's horizontally scalable and has configurable replication. It's a good candidate when you already run Cassandra, are running on-prem, or do not wish to use a managed cloud offering.
> **Note:** This storage type for indexes is deprecated and may be removed in future major versions of Loki.
@ -110,7 +110,7 @@ DynamoDB is susceptible to rate limiting, particularly due to overconsuming what
### BoltDB (deprecated)
BoltDB is an embedded database on disk. It is not replicated and thus cannot be used for high availability or clustered Loki deployments, but is commonly paired with a `filesystem` chunk store for proof of concept deployments, trying out Loki, and development. The [boltdb-shipper]({{< relref "../operations/storage/boltdb-shipper" >}}) aims to support clustered deployments using `boltdb` as an index.
BoltDB is an embedded database on disk. It is not replicated and thus cannot be used for high availability or clustered Loki deployments, but is commonly paired with a `filesystem` chunk store for proof of concept deployments, trying out Loki, and development. The [boltdb-shipper](../operations/storage/boltdb-shipper/) aims to support clustered deployments using `boltdb` as an index.
> **Note:** This storage type for indexes is deprecated and may be removed in future major versions of Loki.
@ -152,7 +152,7 @@ table_manager:
retention_period: 2520h
```
For more information, see the [table manager]({{< relref "../operations/storage/table-manager" >}}) documentation.
For more information, see the [table manager](../operations/storage/table-manager/) documentation.
### Provisioning
@ -171,13 +171,13 @@ table_manager:
inactive_read_throughput: <int> | Default = 300
```
Note that there are a few other DynamoDB provisioning options, including DynamoDB autoscaling and on-demand capacity. See the [provisioning configuration]({{< relref "../configure#table_manager" >}}) in the `table_manager` block documentation for more information.
Note that there are a few other DynamoDB provisioning options, including DynamoDB autoscaling and on-demand capacity. See the [provisioning configuration](../configure/#table_manager) in the `table_manager` block documentation for more information.
## Upgrading Schemas
When a new schema is released and you want to gain the advantages it provides, you can! Loki can transparently query & merge data from across schema boundaries so there is no disruption of service and upgrading is easy.
First, you'll want to create a new [period_config]({{< relref "../configure#period_config" >}}) entry in your [schema_config]({{< relref "../configure#schema_config" >}}). The important thing to remember here is to set this at some point in the _future_ and then roll out the config file changes to Loki. This allows the table manager to create the required table in advance of writes and ensures that existing data isn't queried as if it adheres to the new schema.
First, you'll want to create a new [period_config](../configure/#period_config) entry in your [schema_config](../configure/#schema_config). The important thing to remember here is to set this at some point in the _future_ and then roll out the config file changes to Loki. This allows the table manager to create the required table in advance of writes and ensures that existing data isn't queried as if it adheres to the new schema.
As an example, let's say it's 2020-07-14 and we want to start using the `v11` schema on the 20th:
@ -208,7 +208,7 @@ With the exception of the `filesystem` chunk store, Loki will not delete old chu
We're interested in adding targeted deletion in future Loki releases (think tenant or stream level granularity) and may include other strategies as well.
For more information, see the [retention configuration]({{< relref "../operations/storage/retention" >}}) documentation.
For more information, see the [retention configuration](../operations/storage/retention/) documentation.
@ -33,7 +33,7 @@ Modern Grafana versions after 6.3 have built-in support for Grafana Loki and [Lo
1. To see the logs, click <kbd>Explore</kbd> on the sidebar, select the Loki
data source in the top-left dropdown, and then choose a log stream using the
<kbd>Log labels</kbd> button.
1. Learn more about querying by reading about Loki's query language [LogQL]({{< relref "../query/_index.md" >}}).
1. Learn more about querying by reading about Loki's query language [LogQL](../../query/).
If you would like to see an example of this live, you can try [Grafana Play's Explore feature](https://play.grafana.org/explore?schemaVersion=1&panes=%7B%22v1d%22:%7B%22datasource%22:%22ac4000ca-1959-45f5-aa45-2bd0898f7026%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22expr%22:%22%7Bagent%3D%5C%22promtail%5C%22%7D%20%7C%3D%20%60%60%22,%22queryType%22:%22range%22,%22datasource%22:%7B%22type%22:%22loki%22,%22uid%22:%22ac4000ca-1959-45f5-aa45-2bd0898f7026%22%7D,%22editorMode%22:%22builder%22%7D%5D,%22range%22:%7B%22from%22:%22now-1h%22,%22to%22:%22now%22%7D%7D%7D&orgId=1)