@ -4,6 +4,8 @@ description: Grafana Loki is a set of open source components that can be compose
aliases:
- /docs/loki/
weight: 100
cascade:
GRAFANA_VERSION: latest
hero:
title: Grafana Loki
level: 1
@ -41,7 +43,7 @@ cards:
## Overview
Unlike other logging systems, Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels).
Log data itself is then compressed and stored in chunks in object stores such as Amazon Simple Storage Service (S3) or Google Cloud Storage (GCS), or even locally on the filesystem.
@ -83,7 +83,7 @@ We support [Prometheus-compatible](https://prometheus.io/docs/prometheus/latest/
> Querying the precomputed result will then often be much faster than executing the original expression every time it is needed. This is especially useful for dashboards, which need to query the same expression repeatedly every time they refresh.
Loki allows you to run [metric queries]({{< relref "../query/metric_queries" >}}) over your logs, which means
Loki allows you to run [metric queries](../query/metric_queries/) over your logs, which means
that you can derive a numeric aggregation from your logs, like calculating the number of requests over time from your NGINX access log.
### Example
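For instance, a minimal sketch of such a query, assuming NGINX access logs carry an illustrative `job="nginx"` label:

```logql
# per-second rate of NGINX access log lines, summed over streams
sum(rate({job="nginx"} [5m]))
```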
@ -167,7 +167,7 @@ Further configuration options can be found under [ruler](https://grafana.com/doc
### Operations
Please refer to the [Recording Rules]({{< relref "../operations/recording-rules" >}}) page.
Please refer to the [Recording Rules](../operations/recording-rules/) page.
## Use cases
@ -308,7 +308,7 @@ The [Cortex rules action](https://github.com/grafana/cortex-rules-action) introd
One option to scale the Ruler is by scaling it horizontally. However, with multiple Ruler instances running, they will need to coordinate to determine which instance will evaluate which rule. Similar to the ingesters, the Rulers establish a hash ring to divide up the responsibilities of evaluating rules.
The possible configurations are listed fully in the [configuration documentation]({{< relref "../configure" >}}), but in order to shard rules across multiple Rulers, the rules API must be enabled via flag (`-ruler.enable-api`) or config file parameter. Secondly, the Ruler requires its own ring to be configured. From there the Rulers will shard and handle the division of rules automatically. Unlike ingesters, Rulers do not hand over responsibility: all rules are re-sharded randomly every time a Ruler is added to or removed from the ring.
The possible configurations are listed fully in the [configuration documentation](../configure/), but in order to shard rules across multiple Rulers, the rules API must be enabled via flag (`-ruler.enable-api`) or config file parameter. Secondly, the Ruler requires its own ring to be configured. From there the Rulers will shard and handle the division of rules automatically. Unlike ingesters, Rulers do not hand over responsibility: all rules are re-sharded randomly every time a Ruler is added to or removed from the ring.
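A minimal sketch of both pieces, assuming a memberlist key-value store is already configured elsewhere:

```yaml
ruler:
  enable_api: true        # equivalent to the -ruler.enable-api flag
  ring:
    kvstore:
      store: memberlist   # the Rulers use this ring to shard rule evaluation
```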
@ -53,4 +53,4 @@ Inspired by Python's [PEP](https://peps.python.org/pep-0001/) and Kafka's [KIP](
Google Docs were considered for this, but they are less useful because:
- they would need to be owned by the Grafana Labs organisation, so that they remain viewable even if the author closes their account
- we already have previous [design documents]({{< relref "../design-documents" >}}) in our documentation and, in a recent ([5th Jan 2023](https://docs.google.com/document/d/1MNjiHQxwFukm2J4NJRWyRgRIiK7VpokYyATzJ5ce-O8/edit#heading=h.78vexgrrtw5a)) community call, the community expressed a preference for this type of approach
- we already have previous [design documents](../../design-documents/) in our documentation and, in a recent ([5th Jan 2023](https://docs.google.com/document/d/1MNjiHQxwFukm2J4NJRWyRgRIiK7VpokYyATzJ5ce-O8/edit#heading=h.78vexgrrtw5a)) community call, the community expressed a preference for this type of approach
This document provides instructions for core Grafana Loki maintainers
to publish a new [Grafana Loki](https://github.com/grafana/loki) release.
The general process for releasing a new version of Grafana Loki is to merge the release PR for that version. Every commit to branches matching the pattern `release-[0-9]+.[0-9]+.x` will trigger a [prepare patch release]({{< relref "./prepare-release.md" >}}) workflow. This workflow will build release candidates for that patch, automatically generate release notes based on the commits since the last release, and update the long-running PR for that release. To publish the release, merge the PR.
The general process for releasing a new version of Grafana Loki is to merge the release PR for that version. Every commit to branches matching the pattern `release-[0-9]+.[0-9]+.x` will trigger a [prepare patch release](prepare-release/) workflow. This workflow will build release candidates for that patch, automatically generate release notes based on the commits since the last release, and update the long-running PR for that release. To publish the release, merge the PR.
Every commit to branches matching the pattern `k[0-9]+` will trigger a [prepare minor release]({{< relref "./prepare-release.md" >}}) workflow. This follows the same process as a patch release, but prepares a minor release instead. To publish the minor release, merge the PR.
Every commit to branches matching the pattern `k[0-9]+` will trigger a [prepare minor release](prepare-release/) workflow. This follows the same process as a patch release, but prepares a minor release instead. To publish the minor release, merge the PR.
Releasing a new major version requires a [custom major release workflow]({{< relref "./major-release.md" >}}) to be created to run on the branch we want to release from. Once that workflow is created, the steps for releasing a new major are the same as a minor or patch release.
Releasing a new major version requires a [custom major release workflow](major-release/) to be created to run on the branch we want to release from. Once that workflow is created, the steps for releasing a new major are the same as a minor or patch release.
@ -9,7 +9,7 @@ branch is then used for all the Stable Releases, and all Patch Releases for that
## Before you begin
1. Determine the [VERSION_PREFIX]({{< relref "./concepts/version" >}}).
1. Determine the [VERSION_PREFIX](../concepts/version/).
1. Announce the upcoming release in the `#loki-releases` internal Slack channel.
1. Skip this announcement for a patch release. Create an issue to communicate the beginning of the release process to the community. Example issue [here](https://github.com/grafana/loki/issues/10468).
@ -17,11 +17,11 @@ All the steps are performed on `release-VERSION_PREFIX` branch.
$ OLD_VERSION=X.Y.Z ./tools/diff-config.sh
```
1. Record configurations that have been modified (either renamed or had their default values changed) in the [upgrade guide]({{< relref "./prepare-upgrade-guide" >}}).
1. Record configurations that have been modified (either renamed or had their default values changed) in the [upgrade guide](../prepare-upgrade-guide/).
1. Check if any metrics have changed.
```
$ OLD_VERSION=X.Y.Z ./tools/diff-metrics.sh
```
1. Record metrics whose names have been modified in the [upgrade guide]({{< relref "./prepare-upgrade-guide" >}}).
1. Record metrics whose names have been modified in the [upgrade guide](../prepare-upgrade-guide/).
@ -5,7 +5,7 @@ description: Describes the process to create a workflow for a major release of G
# Prepare Major Release
A major release follows the same process as [minor and patch releases]({{< relref "./prepare-release.md" >}}), but requires a custom workflow to be created to run on the branch we want to release from. The reason for this is that we don't do major releases very often, so it is not economical to keep those workflows running all the time.
A major release follows the same process as [minor and patch releases](../prepare-release/), but requires a custom workflow to be created to run on the branch we want to release from. The reason for this is that we don't do major releases very often, so it is not economical to keep those workflows running all the time.
To create a major release workflow, follow the steps below.
@ -13,4 +13,4 @@ Releasing Grafana Loki consists of merging a long-running release PR. Two workfl
## Major releases
Major releases follow the same process as minor and patch releases, but require a custom workflow to be created to run on the branch we want to release from. The reason for this is that we don't do major releases very often, so it is not economical to keep those workflows running all the time. To create a major release workflow, follow the steps in the [major release workflow]({{< relref "./major-release.md" >}}) document.
Major releases follow the same process as minor and patch releases, but require a custom workflow to be created to run on the branch we want to release from. The reason for this is that we don't do major releases very often, so it is not economical to keep those workflows running all the time. To create a major release workflow, follow the steps in the [major release workflow](../major-release/) document.
@ -13,7 +13,7 @@ Grafana Loki is configured in a YAML file (usually referred to as `loki.yaml`)
which contains information on the Loki server and its individual components,
depending on which mode Loki is launched in.
Configuration examples can be found in the [Configuration Examples]({{< relref "./examples/configuration-examples" >}}) document.
Configuration examples can be found in the [Configuration Examples](examples/configuration-examples/) document.
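For orientation, a minimal single-binary sketch of such a file; paths, ports, and the schema date are placeholders rather than recommendations:

```yaml
auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /tmp/loki
  replication_factor: 1
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2024-01-01      # placeholder schema start date
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h
```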
<!-- The shared `configuration.md` file is generated from `/docs/templates/configuration.template`. To make changes to the included content, modify the template file and run `make doc` from root directory to regenerate the shared file. -->
@ -151,13 +156,13 @@ Once you've deployed these, point your Grafana data source to the new frontend s
The query frontend operates in one of two ways:
- Specify `--frontend.downstream-url` or its YAML equivalent, `frontend.downstream_url`. This proxies requests over HTTP to the specified URL.
- Without `--frontend.downstream-url` or its YAML equivalent `frontend.downstream_url`, the query frontend defaults to a pull service. As a pull service, the frontend instantiates per-tenant queues that downstream queriers pull queries from via gRPC. To act as a pull service, queriers need to specify `-querier.frontend-address` or its YAML equivalent `frontend_worker.frontend_address`.
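In YAML, the two modes look roughly like the following sketch; the addresses are hypothetical:

```yaml
# Mode 1: proxy requests downstream over HTTP.
frontend:
  downstream_url: http://loki-querier:3100

# Mode 2: pull service; queriers dial back to the frontend over gRPC.
frontend_worker:
  frontend_address: loki-query-frontend:9095
```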
Set `ClusterIP=None` for the query frontend pull service.
This causes DNS resolution of each query frontend pod IP address.
It avoids wrongly resolving to the service IP.
Enable `publishNotReadyAddresses=true` on the query frontend pull service.
Doing so eliminates a race condition in which the query frontend address
is needed before the query frontend becomes ready
when at least one querier connects.
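Together, these settings amount to a headless Kubernetes Service along the lines of this sketch; the name, selector, and port are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: loki-query-frontend-headless
spec:
  clusterIP: None                  # resolve pod IPs directly, not the service IP
  publishNotReadyAddresses: true   # expose addresses before readiness
  selector:
    app: loki-query-frontend
  ports:
    - name: grpc
      port: 9095
      targetPort: 9095
```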
@ -18,7 +18,7 @@ To get started easily, run Grafana Loki in "single binary" mode with all compone
Grafana Loki is designed to easily redeploy a cluster under a different mode as your needs change, with no or minimal configuration changes.
For more information, refer to [Deployment modes]({{< relref "./deployment-modes" >}}) and [Components]({{< relref "./components" >}}).
For more information, refer to [Deployment modes](../deployment-modes/) and [Components](../components/).
@ -28,7 +28,7 @@ Loki stores all data in a single object storage backend, such as Amazon Simple S
This mode uses an adapter called **index shipper** (or **shipper** for short) to store index (TSDB or BoltDB) files the same way we store chunk files in object storage.
This mode of operation became generally available with Loki 2.0 and is fast, cost-effective, and simple. It is where all current and future development lies.
Prior to 2.0, Loki had different storage backends for indexes and chunks. For more information, refer to [Legacy storage]({{< relref "../operations/storage/legacy-storage" >}}).
Prior to 2.0, Loki had different storage backends for indexes and chunks. For more information, refer to [Legacy storage](../../operations/storage/legacy-storage/).
### Data format
@ -45,14 +45,14 @@ The diagram above shows the high-level overview of the data that is stored in th
There are two index formats that are currently supported as single store with index shipper:
Time Series Database (TSDB for short) is an [index format](https://github.com/prometheus/prometheus/blob/main/tsdb/docs/format/index.md) originally developed by the maintainers of [Prometheus](https://github.com/prometheus/prometheus) for time series (metric) data.
It is extensible and has many advantages over the deprecated BoltDB index.
New storage features in Loki are solely available when using TSDB.
@ -133,7 +133,7 @@ the hash ring. Each ingester has a state of either `PENDING`, `JOINING`,
another ingester that is `LEAVING`. This only applies for legacy deployment modes.
{{< admonition type="note" >}}
Handoff is a deprecated behavior mainly used in stateless deployments of ingesters, which is discouraged. Instead, it's recommended to use a stateful deployment model together with the [write ahead log]({{< relref "../operations/storage/wal" >}}).
Handoff is a deprecated behavior mainly used in stateless deployments of ingesters, which is discouraged. Instead, it's recommended to use a stateful deployment model together with the [write ahead log](../../operations/storage/wal/).
{{< /admonition >}}
1. `JOINING` is an Ingester's state when it is currently inserting its tokens
@ -190,7 +190,7 @@ Logs from each unique set of labels are built up into "chunks" in memory and
then flushed to the storage backend.
If an ingester process crashes or exits abruptly, all the data that has not yet
been flushed could be lost. Loki is usually configured with a [Write Ahead Log]({{< relref "../operations/storage/wal" >}}) which can be _replayed_ on restart as well as with a `replication_factor` (usually 3) of each log to mitigate this risk.
been flushed could be lost. Loki is usually configured with a [Write Ahead Log](../../operations/storage/wal/) which can be _replayed_ on restart as well as with a `replication_factor` (usually 3) of each log to mitigate this risk.
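A minimal sketch of that mitigation in the Loki configuration; the WAL directory is a placeholder and should live on a persistent volume:

```yaml
ingester:
  wal:
    enabled: true
    dir: /loki/wal          # replayed on restart to recover unflushed data
common:
  replication_factor: 3     # each log line is written to three ingesters
```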
When not configured to accept out-of-order writes,
all lines pushed to Loki for a given stream (unique combination of
@ -209,7 +209,7 @@ nanosecond timestamps:
### Handoff
{{< admonition type="warning" >}}
Handoff is deprecated behavior mainly used in stateless deployments of ingesters, which is discouraged. Instead, it's recommended to use a stateful deployment model together with the [write ahead log]({{< relref "../operations/storage/wal" >}}).
Handoff is deprecated behavior mainly used in stateless deployments of ingesters, which is discouraged. Instead, it's recommended to use a stateful deployment model together with the [write ahead log](../../operations/storage/wal/).
{{< /admonition >}}
By default, when an ingester is shutting down and tries to leave the hash ring,
@ -280,7 +280,7 @@ This cache is only applicable when using single store TSDB.
## Query scheduler
The **query scheduler** is an **optional service** providing more [advanced queuing functionality]({{< relref "../operations/query-fairness" >}}) than the [query frontend](#query-frontend).
The **query scheduler** is an **optional service** providing more [advanced queuing functionality](../../operations/query-fairness/) than the [query frontend](#query-frontend).
When using this component in the Loki deployment, the query frontend pushes split-up queries to the query scheduler, which enqueues them in an internal in-memory queue.
There is a queue for each tenant to guarantee query fairness across all tenants.
The queriers that connect to the query scheduler act as workers that pull their jobs from the queue, execute them, and return them to the query frontend for aggregation. Queriers therefore need to be configured with the query scheduler address (via the `-querier.scheduler-address` CLI flag) in order to allow them to connect to the query scheduler.
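A sketch of that wiring, with a hypothetical scheduler address:

```yaml
frontend:
  scheduler_address: loki-query-scheduler:9095   # frontend enqueues split queries here
frontend_worker:
  scheduler_address: loki-query-scheduler:9095   # YAML equivalent of -querier.scheduler-address
```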
@ -290,7 +290,7 @@ Query schedulers are **stateless**. However, due to the in-memory queue, it's re
## Querier
The **querier** service is responsible for executing [Log Query Language (LogQL)]({{< relref "../query" >}}) queries.
The **querier** service is responsible for executing [Log Query Language (LogQL)](../../query/) queries.
The querier can handle HTTP requests from the client directly (in "single binary" mode, or as part of the read path in "simple scalable deployment")
or pull subqueries from the query frontend or query scheduler (in "microservice" mode).
@ -306,7 +306,7 @@ timestamp, label set, and log message.
The **index gateway** service is responsible for handling and serving metadata queries.
Metadata queries are queries that look up data from the index. The index gateway is only used by "shipper stores",
such as [single store TSDB]({{< relref "../operations/storage/tsdb" >}}) or [single store BoltDB]({{< relref "../operations/storage/boltdb-shipper" >}}).
such as [single store TSDB](../../operations/storage/tsdb/) or [single store BoltDB](../../operations/storage/boltdb-shipper/).
The query frontend queries the index gateway for the log volume of queries so it can decide how to shard the queries.
The queriers query the index gateway for chunk references for a given query so they know which chunks to fetch and query.
@ -317,14 +317,14 @@ In `ring` mode, index gateways use a consistent hash ring to distribute and shar
## Compactor
The **compactor** service is used by "shipper stores", such as [single store TSDB]({{< relref "../operations/storage/tsdb" >}})
or [single store BoltDB]({{< relref "../operations/storage/boltdb-shipper" >}}), to compact the multiple index files produced by the ingesters
The **compactor** service is used by "shipper stores", such as [single store TSDB](../../operations/storage/tsdb/)
or [single store BoltDB](../../operations/storage/boltdb-shipper/), to compact the multiple index files produced by the ingesters
and shipped to object storage into single index files per day and tenant. This makes index lookups more efficient.
To do so, the compactor downloads the files from object storage at regular intervals, merges them into a single one,
uploads the newly created index, and cleans up the old files.
Additionally, the compactor is also responsible for [log retention]({{< relref "../operations/storage/retention" >}}) and [log deletion]({{< relref "../operations/storage/logs-deletion" >}}).
Additionally, the compactor is also responsible for [log retention](../../operations/storage/retention/) and [log deletion](../../operations/storage/logs-deletion/).
In a Loki deployment, the compactor service is usually run as a single instance.
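A minimal compactor sketch; the working directory is a placeholder, and the delete request store is an assumption that should match your object store client:

```yaml
compactor:
  working_directory: /loki/compactor   # scratch space for downloaded index files
  retention_enabled: true              # also required for log retention and deletion
  delete_request_store: filesystem     # assumption: matches the object store in use
```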
@ -30,7 +30,7 @@ Query parallelization is limited by the number of instances and the setting `max
## Simple Scalable
The simple scalable deployment is the default configuration installed by the [Loki Helm Chart]({{< relref "../setup/install/helm" >}}). This deployment mode is the easiest way to deploy Loki at scale. It strikes a balance between deploying in [monolithic mode](#monolithic-mode) or deploying each component as a [separate microservice](#microservices-mode).
The simple scalable deployment is the default configuration installed by the [Loki Helm Chart](../../setup/install/helm/). This deployment mode is the easiest way to deploy Loki at scale. It strikes a balance between deploying in [monolithic mode](#monolithic-mode) or deploying each component as a [separate microservice](#microservices-mode).
{{< admonition type="note" >}}
This deployment mode is sometimes referred to by the acronym SSD for simple scalable deployment, not to be confused with solid state drives. Loki uses an object store.
@ -351,7 +351,7 @@ The two previous examples use statically defined labels with a single value; how
__path__: /var/log/apache.log
```
This regex matches every component of the log line and extracts the value of each component into a capture group. Inside the pipeline code, this data is placed in a temporary data structure that can be used for several purposes during the processing of that log line (after which the temporary data is discarded). Much more detail about this can be found in the [Promtail pipelines]({{< relref "../../send-data/promtail/pipelines" >}}) documentation.
This regex matches every component of the log line and extracts the value of each component into a capture group. Inside the pipeline code, this data is placed in a temporary data structure that can be used for several purposes during the processing of that log line (after which the temporary data is discarded). Much more detail about this can be found in the [Promtail pipelines](../../send-data/promtail/pipelines/) documentation.
From that regex, we will be using two of the capture groups to dynamically set two labels based on content from the log line itself:
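As a sketch, the corresponding pipeline stages might look like this; the regex is deliberately simplified and the capture-group names are illustrative:

```yaml
pipeline_stages:
  - regex:
      # simplified Common Log Format pattern; capture groups become temporary values
      expression: '^(?P<ip>\S+) \S+ \S+ \[.*?\] "(?P<action>\S+) (?P<path>\S+).*" (?P<status_code>\d{3})'
  - labels:
      # promote only these two capture groups to stream labels
      action:
      status_code:
```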
@ -21,7 +21,7 @@ Too many label value combinations leads to too many streams. The penalties for t
To avoid those issues, don't add a label for something until you know you need it! Use filter expressions (`|= "text"`, `|~ "regex"`, …) and brute force those logs. It works -- and it's fast.
If you often parse a label from a log line at query time, the label has a high cardinality, and extracting that label is expensive in terms of performance, consider extracting the label on the client side and
attaching it as [structured metadata]({{< relref "./structured-metadata" >}}) to log lines.
attaching it as [structured metadata](../structured-metadata/) to log lines.
From early on, we have set a label dynamically using Promtail pipelines for `level`. This seemed intuitive for us as we often wanted to only show logs for `level="error"`; however, we are re-evaluating this now, as writing a query like `{app="loki"} |= "level=error"` is proving to be just as fast for many of our applications as `{app="loki",level="error"}`.
@ -54,7 +54,7 @@ Loki has several client options: [Grafana Alloy](https://grafana.com/docs/alloy/
Each of these comes with ways to configure what labels are applied to create log streams. But be aware of what dynamic labels might be applied.
Use the Loki series API to get an idea of what your log streams look like and see if there might be ways to reduce streams and cardinality.
Series information can be queried through the [Series API](https://grafana.com/docs/loki/<LOKI_VERSION>/reference/loki-http-api/), or you can use [logcli]({{< relref "../../query" >}}).
Series information can be queried through the [Series API](https://grafana.com/docs/loki/<LOKI_VERSION>/reference/loki-http-api/), or you can use [logcli](../../../query/).
In Loki 1.6.0 and newer, the `logcli series` command added the `--analyze-labels` flag specifically for debugging high-cardinality labels:
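A sketch of its usage, assuming `logcli` is already pointed at your Loki instance (for example via `LOKI_ADDR`):

```bash
# summarize label names and value counts across all matching series
logcli series '{}' --analyze-labels
```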
@ -24,9 +24,9 @@ A typical Loki-based logging stack consists of 3 components:
- **Agent** - An agent or client, for example Grafana Alloy or Promtail (which is distributed with Loki). The agent scrapes logs, turns the logs into streams by adding labels, and pushes the streams to Loki through an HTTP API.
- **Loki** - The main server, responsible for ingesting and storing logs and processing queries. It can be deployed in three different configurations; for more information, see [deployment modes]({{< relref "../get-started/deployment-modes" >}}).
- **Loki** - The main server, responsible for ingesting and storing logs and processing queries. It can be deployed in three different configurations; for more information, see [deployment modes](../deployment-modes/).
- **[Grafana](https://github.com/grafana/grafana)** for querying and displaying log data. You can also query logs from the command line, using [LogCLI]({{< relref "../query/logcli" >}}) or using the Loki API directly.
- **[Grafana](https://github.com/grafana/grafana)** for querying and displaying log data. You can also query logs from the command line, using [LogCLI](../../query/logcli/) or using the Loki API directly.
## Loki features
@ -35,7 +35,7 @@ In its most common deployment, “simple scalable mode”, Loki decouples reques
If needed, each of the Loki components can also be run as microservices designed to run natively within Kubernetes.
- **Multi-tenancy** - Loki allows multiple tenants to share a single Loki instance. With multi-tenancy, the data and requests of each tenant are completely isolated from the others.
Multi-tenancy is [configured]({{< relref "../operations/multi-tenancy" >}}) by assigning a tenant ID in the agent.
Multi-tenancy is [configured](../../operations/multi-tenancy/) by assigning a tenant ID in the agent.
- **Third-party integrations** - Several third-party agents (clients) have support for Loki, via plugins. This lets you keep your existing observability setup while also shipping logs to Loki.
@ -44,10 +44,10 @@ Similarly, the Loki index, because it indexes only the set of labels, is signifi
By leveraging object storage as the only data storage mechanism, Loki inherits the reliability and stability of the underlying object store. It also capitalizes on both the cost efficiency and operational simplicity of object storage over other storage mechanisms like locally attached solid state drives (SSD) and hard disk drives (HDD).
The compressed chunks, smaller index, and use of low-cost object storage make Loki less expensive to operate.
- **LogQL, the Loki query language** - [LogQL]({{< relref "../query" >}}) is the query language for Loki. Users who are already familiar with the Prometheus query language, [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/), will find LogQL familiar and flexible for generating queries against the logs.
- **LogQL, the Loki query language** - [LogQL](../../query/) is the query language for Loki. Users who are already familiar with the Prometheus query language, [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/), will find LogQL familiar and flexible for generating queries against the logs.
The language also facilitates the generation of metrics from log data,
a powerful feature that goes well beyond log aggregation.
- **Alerting** - Loki includes a component called the [ruler]({{< relref "../alert" >}}), which can continually evaluate queries against your logs, and perform an action based on the result. This allows you to monitor your logs for anomalies or events. Loki integrates with [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), or the [alert manager](/docs/grafana/latest/alerting) within Grafana.
- **Alerting** - Loki includes a component called the [ruler](../../alert/), which can continually evaluate queries against your logs, and perform an action based on the result. This allows you to monitor your logs for anomalies or events. Loki integrates with [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), or the [alert manager](/docs/grafana/latest/alerting) within Grafana.
- **Grafana integration** - Loki integrates with Grafana, Mimir, and Tempo, providing a complete observability stack, and seamless correlation between logs, metrics and traces.
Grafana Loki does not come with any included authentication layer. Operators are
expected to run an authenticating reverse proxy in front of their services.
The simple scalable [deployment mode]({{< relref "../get-started/deployment-modes" >}}) requires a reverse proxy to be deployed in front of Loki, to direct client API requests to either the read or write nodes. The Loki Helm chart includes a default reverse proxy configuration, using Nginx.
The simple scalable [deployment mode](../../get-started/deployment-modes/) requires a reverse proxy to be deployed in front of Loki, to direct client API requests to either the read or write nodes. The Loki Helm chart includes a default reverse proxy configuration, using Nginx.
A list of open-source reverse proxies you can use:
@ -22,7 +22,7 @@ A list of open-source reverse proxies you can use:
When using Loki in multi-tenant mode, Loki requires the HTTP header
`X-Scope-OrgID` to be set to a string identifying the tenant; the responsibility
of populating this value should be handled by the authenticating reverse proxy.
For more information, read the [multi-tenancy]({{< relref "./multi-tenancy" >}}) documentation.{{</admonition>}}
For more information, read the [multi-tenancy](../multi-tenancy/) documentation.
{{< /admonition >}}
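As a sketch, an authenticating NGINX proxy could populate the header like this; the tenant mapping and upstream address are assumptions:

```nginx
location / {
    auth_basic           "loki";
    auth_basic_user_file /etc/nginx/passwords;
    proxy_set_header     X-Scope-OrgID "tenant1";  # static single-tenant mapping
    proxy_pass           http://loki:3100;
}
```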
For information on authenticating Promtail, see the documentation for [how to
@ -24,7 +24,7 @@ The Loki [mixin](https://github.com/grafana/loki/blob/main/production/loki-mixin
- To install meta-monitoring using the Loki Helm Chart and a local Loki stack, follow [these directions](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/monitor-and-alert/with-local-monitoring/).
- To install the Loki mixin, follow [these directions]({{< relref "./mixins" >}}).
- To install the Loki mixin, follow [these directions](mixins/).
You should also plan separately for infrastructure-level monitoring, to monitor the capacity or throughput of your storage provider, for example, or your networking layer.
@ -15,7 +15,7 @@ It is recommended that Loki operators set up alerts or dashboards with these met
### Terminology
- **sample**: a log line with [structured metadata]({{< relref "../get-started/labels/structured-metadata" >}})
- **sample**: a log line with [structured metadata](../../get-started/labels/structured-metadata/)
- **stream**: samples with a unique combination of labels
- **active stream**: streams that are present in the ingesters - these have recently received log lines within the `chunk_idle_period` (default: 30m)
@ -30,7 +30,7 @@ maintenance tasks. It consists of:
{{< admonition type="note" >}}
Unlike the other core components of Loki, the chunk store is not a separate
service, job, or process, but rather a library embedded in the two services
that need to access Loki data: the [ingester]({{< relref "../../get-started/components#ingester" >}}) and [querier]({{< relref "../../get-started/components#querier" >}}).
that need to access Loki data: the [ingester](../../../get-started/components/#ingester) and [querier](../../../get-started/components/#querier).
{{< /admonition >}}
The chunk store relies on a unified interface to the
@ -15,7 +15,7 @@ The compactor component exposes REST [endpoints](https://grafana.com/docs/loki/<
A request to the endpoint specifies the streams and the time window to delete.
The deletion of the log entries takes place after a configurable cancellation time period expires.
Log entry deletion relies on configuration of the custom logs retention workflow as defined for the [compactor]({{< relref "./retention#compactor" >}}). The compactor looks at unprocessed requests which are past their cancellation period to decide whether a chunk is to be deleted or not.
Log entry deletion relies on configuration of the custom logs retention workflow as defined for the [compactor](../retention/#compactor). The compactor looks at unprocessed requests which are past their cancellation period to decide whether a chunk is to be deleted or not.
Starting with Loki v2.8, TSDB is the recommended Loki index. It is heavily inspired by Prometheus's TSDB [sub-project](https://github.com/prometheus/prometheus/tree/main/tsdb). For a deeper explanation, you can read Loki maintainer Owen's [blog post](https://lokidex.com/posts/tsdb/). The short version is that this new index is more efficient, faster, and more scalable. It also resides in object storage like the [boltdb-shipper]({{< relref "./boltdb-shipper" >}}) index which preceded it.
Starting with Loki v2.8, TSDB is the recommended Loki index. It is heavily inspired by Prometheus's TSDB [sub-project](https://github.com/prometheus/prometheus/tree/main/tsdb). For a deeper explanation, you can read Loki maintainer Owen's [blog post](https://lokidex.com/posts/tsdb/). The short version is that this new index is more efficient, faster, and more scalable. It also resides in object storage like the [boltdb-shipper](../boltdb-shipper/) index which preceded it.
## Example Configuration
@ -75,7 +75,7 @@ We've added a user per-tenant limit called `tsdb_max_query_parallelism` in the `
Previously we would statically shard queries based on the index row shards configured [here](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#period_config).
TSDB does Dynamic Query Sharding based on how much data a query is going to be processing.
We additionally store the size (KB) and number of lines for each chunk in the TSDB index, which is then used by the [Query Frontend]({{< relref "../../get-started/components#query-frontend" >}}) for planning the query.
We additionally store the size (KB) and number of lines for each chunk in the TSDB index, which is then used by the [Query Frontend](../../../get-started/components/#query-frontend) for planning the query.
Based on our experience from operating many Loki clusters, we have configured TSDB to aim for processing 300-600 MB of data per query shard.
This means with TSDB we will be running more, smaller queries.
If you have a reverse proxy in front of Loki, that is, between Loki and Grafana, then check any configured timeouts, such as an NGINX proxy read timeout.
- Other causes. To determine if the issue is related to Loki itself or another system such as Grafana or a client-side error,
attempt to run a [LogCLI]({{< relref "../query/logcli" >}}) query in as direct a manner as you can. For example, if running on virtual machines, run the query on the local machine. If running in a Kubernetes cluster, then port forward the Loki HTTP port, and attempt to run the query there. If you do not get a timeout, then consider these causes:
attempt to run a [LogCLI](../../query/logcli/) query in as direct a manner as you can. For example, if running on virtual machines, run the query on the local machine. If running in a Kubernetes cluster, then port forward the Loki HTTP port, and attempt to run the query there. If you do not get a timeout, then consider these causes:
- Adjust the [Grafana dataproxy timeout](/docs/grafana/latest/administration/configuration/#dataproxy). Configure Grafana with a large enough dataproxy timeout.
- Check timeouts for reverse proxies or load balancers between your client and Grafana. Queries to Grafana are made from your local browser with Grafana serving as a proxy (a dataproxy). Therefore, connections from your client to Grafana must have their timeout configured as well.
Line filter expressions are the fastest way to filter logs once the
log stream selectors have been applied.
Line filter expressions support matching IP addresses. See [Matching IP addresses]({{< relref "../ip" >}}) for details.
Line filter expressions support matching IP addresses. See [Matching IP addresses](../ip/) for details.
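For example, a sketch chaining several line filters, including the IP matcher; labels and values are illustrative:

```logql
{job="mysql"} |= "error" != "timeout" |= ip("192.168.4.0/24")
```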
### Removing color codes
@ -240,7 +240,7 @@ Using Duration, Number and Bytes will convert the label value prior to compariso
For instance, `logfmt | duration > 1m and bytes_consumed > 20MB`
If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filter those errors, see the [pipeline errors]({{< relref "..#pipeline-errors" >}}) section.
If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filter those errors, see the [pipeline errors](../#pipeline-errors) section.
You can chain multiple predicates using `and` and `or`, which respectively express the `and` and `or` binary operations. `and` can be equivalently expressed by a comma, a space, or another pipe. Label filters can be placed anywhere in a log pipeline.
@ -271,11 +271,11 @@ To evaluate the logical `and` first, use parenthesis, as in this example:
> Label filter expressions are the only expression allowed after the unwrap expression. This is mainly to allow filtering errors from the metric extraction.
Label filter expressions support matching IP addresses. See [Matching IP addresses]({{< relref "../ip" >}}) for details.
Label filter expressions support matching IP addresses. See [Matching IP addresses](../ip/) for details.
### Parser expression
Parser expressions can parse and extract labels from the log content. Those extracted labels can then be used for filtering using [label filter expressions](#label-filter-expression) or for [metric aggregations]({{< relref "../metric_queries" >}}).
Parser expressions can parse and extract labels from the log content. Those extracted labels can then be used for filtering using [label filter expressions](#label-filter-expression) or for [metric aggregations](../metric_queries/).
Extracted label keys are automatically sanitized by all parsers to follow the Prometheus metric name convention. (They can only contain ASCII letters and digits, as well as underscores and colons. They cannot start with a digit.)
@ -295,7 +295,7 @@ If an extracted label key name already exists in the original log stream, the ex
Loki supports [JSON](#json), [logfmt](#logfmt), [pattern](#pattern), [regexp](#regular-expression) and [unpack](#unpack) parsers.
It's easier to use the predefined parsers `json` and `logfmt` when you can. If you can't, the `pattern` and `regexp` parsers can be used for log lines with an unusual structure. The `pattern` parser is easier and faster to write; it also outperforms the `regexp` parser.
Multiple parsers can be used by a single log pipeline. This is useful for parsing complex logs. There are examples in [Multiple parsers]({{< relref "../query_examples#examples-that-use-multiple-parsers" >}}).
Multiple parsers can be used by a single log pipeline. This is useful for parsing complex logs. There are examples in [Multiple parsers](../query_examples/#examples-that-use-multiple-parsers).
#### JSON
@ -555,7 +555,7 @@ those labels:
#### unpack
The `unpack` parser parses a JSON log line, unpacking all embedded labels from Promtail's [`pack` stage]({{< relref "../../send-data/promtail/stages/pack.md" >}}).
The `unpack` parser parses a JSON log line, unpacking all embedded labels from Promtail's [`pack` stage](../../send-data/promtail/stages/pack/).
**A special property `_entry` will also be used to replace the original log line**.
Unwrapped ranges use extracted labels as sample values instead of log lines. However, to select which label will be used within the aggregation, the log query must end with an unwrap expression and optionally a label filter expression to discard [errors]({{< relref ".#pipeline-errors" >}}).
Unwrapped ranges use extracted labels as sample values instead of log lines. However, to select which label will be used within the aggregation, the log query must end with an unwrap expression and optionally a label filter expression to discard [errors](./#pipeline-errors).
The unwrap expression is noted `| unwrap label_identifier` where the label identifier is the label name to use for extracting sample values.
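For example, a sketch that unwraps a duration extracted by the `json` parser and discards conversion errors; the labels are illustrative:

```logql
quantile_over_time(0.99,
  {job="ingress-nginx"}
    | json
    | unwrap request_time
    | __error__ = "" [1m]) by (path)
```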
@ -104,7 +104,7 @@ Which can be used to aggregate over distinct labels dimensions by including a `w
`without` removes the listed labels from the result vector, while all other labels are preserved in the output. `by` does the opposite and drops labels that are not listed in the `by` clause, even if their label values are identical between all elements of the vector.
See [Unwrap examples]({{< relref "./query_examples#unwrap-examples" >}}) for query examples that use the unwrap expression.
See [Unwrap examples](../query_examples/#unwrap-examples) for query examples that use the unwrap expression.
## Built-in aggregation operators
@ -135,7 +135,7 @@ The aggregation operators can either be used to aggregate over all label values
The `without` clause removes the listed labels from the resulting vector, keeping all others.
The `by` clause does the opposite, dropping labels that are not listed in the clause, even if their label values are identical between all elements of the vector.
See [vector aggregation examples]({{< relref "./query_examples#vector-aggregation-examples" >}}) for query examples that use vector aggregation expressions.
See [vector aggregation examples](../query_examples/#vector-aggregation-examples) for query examples that use vector aggregation expressions.
@ -26,12 +26,12 @@ These endpoints are exposed by the `distributor`, `write`, and `all` components:
- [`POST /loki/api/v1/push`](#ingest-logs)
- [`POST /otlp/v1/logs`](#ingest-logs-using-otlp)
A [list of clients]({{< relref "../send-data" >}}) can be found in the clients documentation.
A [list of clients](../../send-data/) can be found in the clients documentation.
### Query endpoints
{{< admonition type="note" >}}
Requests sent to the query endpoints must use valid LogQL syntax. For more information, see the [LogQL]({{< relref "../query" >}}) section of the documentation.
Requests sent to the query endpoints must use valid LogQL syntax. For more information, see the [LogQL](../../query/) section of the documentation.
{{< /admonition >}}
These HTTP endpoints are exposed by the `querier`, `query-frontend`, `read`, and `all` components:
@ -238,7 +238,7 @@ Alternatively, if the `Content-Type` header is set to `application/json`, a JSON
You can set `Content-Encoding: gzip` request header and post gzipped JSON.
You can optionally attach [structured metadata]({{< relref "../get-started/labels/structured-metadata" >}}) to each log line by adding a JSON object to the end of the log line array.
You can optionally attach [structured metadata](../../get-started/labels/structured-metadata/) to each log line by adding a JSON object to the end of the log line array.
The JSON object must be a valid JSON object with string keys and string values, and it should not contain any nested objects.
The JSON object must be set immediately after the log line. Here is an example of a log entry with some structured metadata attached:
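A sketch of such a push via `curl`, assuming a local Loki; the stream labels, log line, and metadata keys are illustrative:

```bash
curl -s -X POST "http://localhost:3100/loki/api/v1/push" \
  -H "Content-Type: application/json" \
  --data-raw '{
    "streams": [{
      "stream": { "job": "test" },
      "values": [
        [ "1570818238000000000", "fizzbuzz", { "trace_id": "0242ac120002" } ]
      ]
    }]
  }'
```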
@ -290,7 +290,7 @@ This type of query is often referred to as an instant query. Instant queries are
and will return a 400 (Bad Request) if a log type query is provided.
The endpoint accepts the following query parameters in the URL:
- `query`: The [LogQL]({{< relref "../query" >}}) query to perform. Requests that do not use valid LogQL syntax will return errors.
- `query`: The [LogQL](../../query/) query to perform. Requests that do not use valid LogQL syntax will return errors.
- `limit`: The max number of entries to return. It defaults to `100`. Only applies to query types which produce a stream (log lines) response.
- `time`: The evaluation time for the query as a nanosecond Unix epoch or another [supported format](#timestamps). Defaults to now.
- `direction`: Determines the sort order of logs. Supported values are `forward` or `backward`. Defaults to `backward`.
set the `X-Scope-OrgID` header to identify the tenant you want to query.
Here is the same example query for the single tenant called `Tenant1`:
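A sketch of such a request, assuming a local Loki and an illustrative metric query:

```bash
curl -H 'X-Scope-OrgID: Tenant1' \
  -G -s "http://localhost:3100/loki/api/v1/query" \
  --data-urlencode 'query=sum(rate({job="varlogs"}[10m])) by (level)'
```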
@ -465,7 +465,7 @@ GET /loki/api/v1/query_range
This type of query is often referred to as a range query. Range queries are used for both log and metric type LogQL queries.
It accepts the following query parameters in the URL:
- `query`: The [LogQL]({{< relref "../query" >}}) query to perform.
- `query`: The [LogQL](../../query/) query to perform.
- `limit`: The max number of entries to return. It defaults to `100`. Only applies to query types which produce a stream (log lines) response.
- `start`: The start time for the query as a nanosecond Unix epoch or another [supported format](#timestamps). Defaults to one hour ago. Loki returns results with timestamp greater or equal to this value.
- `end`: The end time for the query as a nanosecond Unix epoch or another [supported format](#timestamps). Defaults to now. Loki returns results with timestamp lower than this value.
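A sketch of a range query over an illustrative stream, assuming a local Loki and RFC3339 timestamps:

```bash
curl -G -s "http://localhost:3100/loki/api/v1/query_range" \
  --data-urlencode 'query={job="varlogs"}' \
  --data-urlencode 'start=2024-01-01T10:00:00Z' \
  --data-urlencode 'end=2024-01-01T11:00:00Z' \
  --data-urlencode 'limit=10'
```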
@ -850,7 +850,7 @@ The `/loki/api/v1/index/stats` endpoint can be used to query the index for the n
URL query parameters:
- `query`: The [LogQL]({{< relref "../query" >}}) matchers to check (that is, `{job="foo", env!="dev"}`)
- `query`: The [LogQL](../../query/) matchers to check (that is, `{job="foo", env!="dev"}`)
- `query`: The [LogQL]({{< relref "../query" >}}) matchers to check (that is, `{job="foo", env=~".+"}`). This parameter is required.
- `query`: The [LogQL](../../query/) matchers to check (that is, `{job="foo", env=~".+"}`). This parameter is required.
- `start=<nanosecond Unix epoch>`: Start timestamp. This parameter is required.
- `end=<nanosecond Unix epoch>`: End timestamp. This parameter is required.
- `step=<duration string or float number of seconds>`: Step between samples for occurrences of this pattern. This parameter is optional.
@ -1004,7 +1004,7 @@ gave this response:
```
The result is a list of patterns detected in the logs, with the number of samples for each pattern at each timestamp.
The pattern format is the same as the [LogQL]({{< relref "../query" >}}) pattern filter and parser and can be used in queries for filtering matching logs.
The pattern format is the same as the [LogQL](../../query/) pattern filter and parser and can be used in queries for filtering matching logs.
Each sample is a tuple of timestamp (in seconds) and count.
## Stream logs
@ -1016,7 +1016,7 @@ GET /loki/api/v1/tail
`/loki/api/v1/tail` is a WebSocket endpoint that streams log messages based on a query to the client.
It accepts the following query parameters in the URL:
- `query`: The [LogQL]({{< relref "../query" >}}) query to perform.
- `query`: The [LogQL](../../query/) query to perform.
- `delay_for`: The number of seconds to delay retrieving logs to let slow
loggers catch up. Defaults to 0 and cannot be larger than 5.
- `limit`: The max number of entries to return. It defaults to `100`.
@ -1086,7 +1086,7 @@ GET /metrics
```
`/metrics` returns exposed Prometheus metrics.
In microservices mode, the `/metrics` endpoint is exposed by all components.
@ -1382,7 +1382,7 @@ PUT /loki/api/v1/delete
```
Create a new delete request for the authenticated tenant.
The [log entry deletion]({{< relref "../operations/storage/logs-deletion" >}}) documentation has configuration details.
The [log entry deletion](../../operations/storage/logs-deletion/) documentation has configuration details.
Log entry deletion is supported _only_ when TSDB or BoltDB Shipper is configured for the index store.
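A sketch of creating such a request with the `PUT` endpoint above; the tenant, matcher, and epoch timestamps are illustrative (`-g` stops curl from globbing the braces):

```bash
curl -g -X PUT -H 'X-Scope-OrgID: tenant1' \
  'http://localhost:3100/loki/api/v1/delete?query={job="foo"}&start=1591616227&end=1591619692'
```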
@ -1422,7 +1422,7 @@ GET /loki/api/v1/delete
```
List the existing delete requests for the authenticated tenant.
The [log entry deletion]({{< relref "../operations/storage/logs-deletion" >}}) documentation has configuration details.
The [log entry deletion](../../operations/storage/logs-deletion/) documentation has configuration details.
Log entry deletion is supported _only_ when TSDB or BoltDB Shipper is configured for the index store.
@ -1459,7 +1459,7 @@ DELETE /loki/api/v1/delete
```
Remove a delete request for the authenticated tenant.
The [log entry deletion]({{< relref "../operations/storage/logs-deletion" >}}) documentation has configuration details.
The [log entry deletion](../../operations/storage/logs-deletion/) documentation has configuration details.
Loki allows cancellation of delete requests until the requests are picked up for processing. It is controlled by the `delete_request_cancel_period` YAML configuration or the equivalent command line option when invoking Loki. To cancel a delete request that has been picked up for processing or is partially complete, pass the `force=true` query parameter to the API.
@ -16,15 +16,15 @@ Some parts of the Loki repo will remain Apache-2.0 licensed (mainly clients and
## Features and enhancements
* Loki now has the ability to apply [custom retention]({{< relref "../operations/storage/retention" >}}) based on stream selectors! This will allow much finer control over log retention, all of which is now handled by Loki, no longer requiring the use of object store configs for retention.
* Coming along hand in hand with storing logs for longer durations is the ability to [delete log streams]({{< relref "../operations/storage/logs-deletion" >}}). The initial implementation lets you submit delete request jobs which will be processed after 24 hours.
* A very exciting new LogQL parser has been introduced: the [pattern parser]({{< relref "../query/log_queries#parser-expression" >}}). Much simpler and faster than regexp for log lines that have a little bit of structure to them, such as the [Common Log Format](https://en.wikipedia.org/wiki/Common_Log_Format). This is now Loki's fastest parser, so try it out on any of your log lines!
* Extending on the work of Alerting Rules, Loki now accepts [recording rules]({{< relref "../alert#recording-rules" >}}). This lets you turn your logs into metrics and push them to Prometheus or any Prometheus-compatible remote_write endpoint.
* LogQL can understand [IP addresses]({{< relref "../query/ip" >}})! This enables filtering on IP addresses and subnet ranges.
* Loki now has the ability to apply [custom retention](../../operations/storage/retention/) based on stream selectors! This will allow much finer control over log retention, all of which is now handled by Loki, no longer requiring the use of object store configs for retention.
* Coming along hand in hand with storing logs for longer durations is the ability to [delete log streams](../../operations/storage/logs-deletion/). The initial implementation lets you submit delete request jobs which will be processed after 24 hours.
* A very exciting new LogQL parser has been introduced: the [pattern parser](../../query/log_queries/#parser-expression). Much simpler and faster than regexp for log lines that have a little bit of structure to them, such as the [Common Log Format](https://en.wikipedia.org/wiki/Common_Log_Format). This is now Loki's fastest parser, so try it out on any of your log lines!
* Extending on the work of Alerting Rules, Loki now accepts [recording rules](../../alert/#recording-rules). This lets you turn your logs into metrics and push them to Prometheus or any Prometheus-compatible remote_write endpoint.
* LogQL can understand [IP addresses](../../query/ip/)! This enables filtering on IP addresses and subnet ranges.
For those of you running Loki as microservices, the following features will improve performance significantly for many operations.
* We created an [index gateway]({{< relref "../operations/storage/boltdb-shipper#index-gateway" >}}) which takes on the task of downloading the boltdb-shipper index files, allowing you to run your queriers without any local disk requirements. This is really helpful in Kubernetes environments, where you can move your queriers from StatefulSets back to Deployments and save a lot of PVC costs and operational headaches.
* We created an [index gateway](../../operations/storage/boltdb-shipper/#index-gateway) which takes on the task of downloading the boltdb-shipper index files, allowing you to run your queriers without any local disk requirements. This is really helpful in Kubernetes environments, where you can move your queriers from StatefulSets back to Deployments and save a lot of PVC costs and operational headaches.
* Ingester queries [are now shardable](https://github.com/grafana/loki/pull/3852); this is a significant performance boost for high-volume log streams when querying recent data.
* Instant queries can now be [split and sharded](https://github.com/grafana/loki/pull/3984), making them just as fast as range queries.
@ -42,7 +42,7 @@ Lastly several useful additions to the LogQL query language have been included:
## Upgrade considerations
The path from 2.2.1 to 2.3.0 should be smooth. As always, read the [Upgrade Guide]({{< relref "../setup/upgrade#230" >}}) for important upgrade guidance.
The path from 2.2.1 to 2.3.0 should be smooth. As always, read the [Upgrade Guide](../../setup/upgrade/#230) for important upgrade guidance.
* [**Loki no longer requires logs to be sent in perfect chronological order.**](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#accept-out-of-order-writes) Support for out-of-order logs is one of the most highly requested features for Loki. The strict ordering constraint has been removed.
* Scaling Loki is now easier with a hybrid deployment mode that falls between our single binary and our microservices. The [Simple scalable deployment]({{< relref "../get-started/deployment-modes" >}}) scales Loki with new `read` and `write` targets. Where previously you would have needed Kubernetes and the microservices approach to start tapping into Loki’s potential, it’s now possible to do this in a simpler way.
* Scaling Loki is now easier with a hybrid deployment mode that falls between our single binary and our microservices. The [Simple scalable deployment](../../get-started/deployment-modes/) scales Loki with new `read` and `write` targets. Where previously you would have needed Kubernetes and the microservices approach to start tapping into Loki’s potential, it’s now possible to do this in a simpler way.
* The new [`common` section](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#common) results in a 70% smaller Loki configuration. Pair that with updated defaults and Loki comes out of the box with more appropriate defaults and limits. Check out the [example local configuration](https://github.com/grafana/loki/blob/main/cmd/loki/loki-local-config.yaml) as the new reference for running Loki.
* [**Recording rules**]({{< relref "../alert#recording-rules" >}}) are no longer an experimental feature. We've given them a more resilient implementation which leverages the existing write ahead log code in Prometheus.
* The new [**Promtail Kafka Consumer**]({{< relref "../send-data/promtail/scraping#kafka" >}}) can easily get your logs out of Kafka and into Loki.
* There are **nice LogQL enhancements**, thanks to the amazing Loki community. LogQL now has [group_left and group_right]({{< relref "../query#many-to-one-and-one-to-many-vector-matches" >}}). And, the `label_format` and `line_format` functions now support [working with dates and times]({{< relref "../query/template_functions#now" >}}).
* Another great community contribution allows Promtail to [**accept ndjson and plaintext log files over HTTP**]({{< relref "../send-data/promtail/configuration#loki_push_api" >}}).
* [**Recording rules**](../../alert/#recording-rules) are no longer an experimental feature. We've given them a more resilient implementation which leverages the existing write ahead log code in Prometheus.
* The new [**Promtail Kafka Consumer**](../../send-data/promtail/scraping/#kafka) can easily get your logs out of Kafka and into Loki.
* There are **nice LogQL enhancements**, thanks to the amazing Loki community. LogQL now has [group_left and group_right](../../query/#many-to-one-and-one-to-many-vector-matches). And, the `label_format` and `line_format` functions now support [working with dates and times](../../query/template_functions/#now).
* Another great community contribution allows Promtail to [**accept ndjson and plaintext log files over HTTP**](../../send-data/promtail/configuration/#loki_push_api).
All in all, about 260 PRs went into Loki 2.4, and we thank everyone for helping us make the best Loki yet.
@ -27,7 +27,7 @@ For a full list of all changes, look at the [CHANGELOG](https://github.com/grafa
## Upgrade Considerations
Please read the [upgrade guide]({{< relref "../setup/upgrade#240" >}}) before updating Loki.
Please read the [upgrade guide](../../setup/upgrade/#240) before updating Loki.
We made a lot of changes to Loki’s configuration as part of this release.
We have tried our best to make sure changes are compatible with existing configurations; however, some changes to default limits may impact users who didn't have values explicitly set for these limits in their configuration files.
## Upgrade Considerations
As always, please read the [upgrade guide](../../setup/upgrade/#250) before upgrading Loki.
### Changes to the config `split_queries_by_interval`
The change most likely to impact users is Loki failing to start because of a change in the YAML configuration for `split_queries_by_interval`. It was previously possible to define this value in two places.
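If you previously set this value under `query_range`, it now belongs under `limits_config`. A minimal sketch of the post-2.5 layout (`30m` is the documented default):

```yaml
limits_config:
  split_queries_by_interval: 30m
```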
Grafana Labs is excited to announce the release of Loki 2.6.
## Features and enhancements
- **Query multiple tenants at once.** We've introduced cross-tenant query federation, which allows you to issue one query to multiple tenants and get a single, consolidated result. This is great for scenarios where you need a global view of logs within your multi-tenant cluster. For more information on how to enable this feature, see [Multi-Tenancy](../../operations/multi-tenancy/).
- **Filter out and delete certain log lines from query results.** This is particularly useful in cases where users may accidentally write sensitive information to Loki that they do not want exposed. Users craft a LogQL query that selects the specific lines they're interested in, and then can choose to either filter out those lines from query results, or permanently delete them from Loki's storage. For more information, see [Logs Deletion](../../operations/storage/logs-deletion/).
- **Improved query performance on instant queries.** Loki now splits instant queries with a large time range (for example, `sum(rate({app="foo"}[6h]))`) into several smaller sub-queries and executes them in parallel. Users don't need to take any action to enjoy this performance improvement; however, they can adjust the number of sub-queries generated by modifying the `split_queries_by_interval` configuration parameter, which currently defaults to `30m`.
- **Support Baidu AI Cloud as a storage backend.** Loki users can now use Baidu Object Storage (BOS) as their storage backend. See [bos_storage_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/) for details.
## Upgrade Considerations
As always, please read the [upgrade guide](../../setup/upgrade/#260) before upgrading Loki.
Grafana Labs is excited to announce the release of Loki 2.7.
- **Better Support for Azure Blob Storage** thanks to the ability to use Azure's Service Principal Credentials.
- **Logs can now be pushed from the Loki canary** so you don't have to rely on a scraping service to use the canary.
- **Additional `label_format` fields:** `__timestamp__` and `__line__`.
- **`fifocache` has been renamed** The in-memory `fifocache` has been renamed to `embedded-cache`. Check the [upgrade guide](../../setup/upgrade/#270) for more details.
- **New HTTP endpoint for Ingester shutdown** that will also delete the ring token.
- **Faster label queries** thanks to new parallelization.
- **Introducing Stream Sharding**, an experimental new feature to help deal with very large streams.
## Upgrade Considerations
As always, please read the [upgrade guide](../../setup/upgrade/#270) before upgrading Loki.
## Installation
Before configuring the plugin, [install or upgrade the Grafana Loki Docker Driver Client](../).
## Change the logging driver for a container
## Labels
Loki can receive a set of labels along with each log line. These labels are used to index log entries and query back logs using the [LogQL stream selector](../../../query/log_queries/#log-stream-selector).
By default, the Docker driver will add the following labels to each log line:
To specify additional logging driver options, you can use the `--log-opt NAME=VALUE` flag:
| `loki-min-backoff` | No | `500ms` | The minimum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-max-backoff` | No | `5m` | The maximum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-retries` | No | `10` | The maximum number of retries for a log batch. Setting it to `0` will retry indefinitely. |
| `loki-pipeline-stage-file` | No | | The location of a pipeline stage configuration file ([example](https://github.com/grafana/loki/blob/main/clients/cmd/docker-driver/pipeline-example.yaml)). Pipeline stages allow you to parse log lines to extract more labels; see the [associated documentation](../../promtail/stages/). |
| `loki-pipeline-stages` | No | | The pipeline stage configuration provided as a string; see [pipeline stages](#pipeline-stages) and the [associated documentation](../../promtail/stages/). |
| `loki-relabel-config` | No | | A [Prometheus relabeling configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) allowing you to rename labels; see [relabeling](#relabeling). |
| `loki-tenant-id` | No | | Set the tenant ID (HTTP header `X-Scope-OrgID`) when sending logs to Loki. It can be overridden by a pipeline stage. |
| `loki-tls-ca-file` | No | | Set the path to a custom certificate authority. |
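For example, in a Docker Compose file these options can be set per service. A minimal sketch (the service, image, and URL are placeholders; note that option values must be strings):

```yaml
services:
  app:
    image: nginx   # hypothetical service
    logging:
      driver: loki
      options:
        loki-url: "http://localhost:3100/loki/api/v1/push"
        loki-retries: "5"
        loki-max-backoff: "1m"
```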
This image also uses `LOKI_URL`, `LOKI_USERNAME`, and `LOKI_PASSWORD` environment variables to specify the Loki endpoint, user, and password (you can leave the USERNAME and PASSWORD blank if they're not used).
This image starts an instance of Fluentd that forwards incoming logs to the specified Loki URL. As an alternative, containerized applications can also use the [Docker driver plugin](../docker-driver/) to ship logs without needing Fluentd.
Grafana Loki includes [Terraform](https://www.terraform.io/) and [CloudFormation](https://aws.amazon.com/cloudformation/) for shipping CloudWatch, CloudTrail, VPC Flow Logs, and load balancer logs to Loki via a [lambda function](https://aws.amazon.com/lambda/). This is done via [lambda-promtail](https://github.com/grafana/loki/blob/main/tools/lambda-promtail), which processes CloudWatch events and propagates them to Loki (or a Promtail instance) via the push-api [scrape config](../promtail/configuration/#loki_push_api).
## Deployment
### Ephemeral Jobs
This workflow is intended to be an effective approach for monitoring ephemeral jobs such as those run on AWS Lambda which are otherwise hard/impossible to monitor via one of the other Loki [clients](../).
Ephemeral jobs can quite easily run afoul of cardinality best practices. During high request load, an AWS Lambda function might balloon in concurrency, creating many log streams in CloudWatch. For this reason, lambda-promtail defaults to **not** keeping the log stream value as a label when propagating the logs to Loki. This is only possible because new versions of Loki no longer have an ingestion ordering constraint on logs within a single stream.
Incoming logs can have seven special labels assigned to them which can be used in [relabeling](../promtail/configuration/#relabel_configs) or later stages in a Promtail [pipeline](../promtail/pipelines/):
- `__aws_log_type`: Where this log came from (Cloudwatch, Kinesis or S3).
- `__aws_cloudwatch_log_group`: The associated Cloudwatch Log Group for this log.
#### metadata_fields
An array of fields which will be mapped to [structured metadata](../../get-started/labels/structured-metadata/) and sent to Loki for each log line.
## Loki configuration
When logs are ingested by Loki using an OpenTelemetry protocol (OTLP) ingestion endpoint, some of the data is stored as [Structured Metadata](../../get-started/labels/structured-metadata/).
You must set `allow_structured_metadata` to `true` within your Loki config file. Otherwise, Loki will reject the log payload as malformed. Note that Structured Metadata is enabled by default in Loki 3.0 and later.
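A minimal sketch of that override in the Loki config (it lives under the `limits_config` block):

```yaml
limits_config:
  allow_structured_metadata: true
```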
Since the OpenTelemetry protocol differs from the Loki storage model, here is how data in the OpenTelemetry format will be mapped by default to the Loki data model during ingestion, which can be changed as explained later:
- Index labels: Resource attributes map well to index labels in Loki, since both usually identify the source of the logs. The default list of Resource Attributes to store as Index labels can be configured using `default_resource_attributes_as_index_labels` under [distributor's otlp_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#distributor). By default, the following resource attributes will be stored as index labels, while the remaining attributes are stored as [Structured Metadata](../../get-started/labels/structured-metadata/) with each log entry:
- cloud.availability_zone
- cloud.region
- container.name
- LogLine: `LogRecord.Body` holds the body of the log. However, since Loki only supports Log body in string format, we will stringify non-string values using the [AsString method from the OTel collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353).
- [Structured Metadata](../../get-started/labels/structured-metadata/): Anything which can’t be stored in Index labels and LogLine would be stored as Structured Metadata. Here is a non-exhaustive list of what will be stored in Structured Metadata to give a sense of what it will hold:
- Resource Attributes not stored as Index labels are replicated and stored with each log entry.
- Everything under InstrumentationScope is replicated and stored with each log entry.
- Everything under LogRecord except `LogRecord.Body`, `LogRecord.TimeUnixNano` and sometimes `LogRecord.ObservedTimestamp`.
Sending logs from cloud services to Grafana Loki is a little different depending on the AWS service you are using. The following tutorials walk you through configuring cloud services to send logs to Loki.
In this tutorial we're going to set up [Promtail](../../) on an AWS EC2 instance and configure it to send all its logs to a Grafana Loki instance.
First, let's make sure we're running as root by using `sudo -s`.
Next we'll download, install, and give executable rights to [Promtail](../../).
```bash
mkdir /opt/promtail && cd /opt/promtail
# download promtail-linux-amd64.zip from the Loki releases page, then:
unzip "promtail-linux-amd64.zip"
chmod a+x "promtail-linux-amd64"
```
Now we're going to download the [Promtail configuration](../../) file and edit it. Don't worry, we will explain what each setting means.
The file is also available as a gist at [cyriltovena/promtail-ec2.yaml][config gist].
The **server** section tells Promtail to bind its HTTP server to port 3100. Promtail serves HTTP pages for [troubleshooting](../../troubleshooting/) service discovery and targets.
The **clients** section allows you to target your Loki instance. If you're using Grafana Cloud, simply replace `<user id>` and `<api secret>` with your credentials. Otherwise, replace the whole URL with your custom Loki instance (for example, `http://my-loki-instance.my-org.com/loki/api/v1/push`).
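Putting those two sections together, a minimal sketch (using the example endpoint above; your URL and credentials will differ):

```yaml
server:
  http_listen_port: 3100

clients:
  - url: http://my-loki-instance.my-org.com/loki/api/v1/push
```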
[Promtail](../../) uses the same [Prometheus **scrape_configs**][prometheus scrape config]. This means if you already own a Prometheus instance the config will be very similar and easy to grasp.
Since we're running on AWS EC2 we want to use EC2 service discovery, which will allow us to scrape metadata about the current instance (and even your custom tags) and attach it to our logs. This way, managing and querying logs will be much easier.
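A hedged sketch of such a scrape config, standing in for the elided tutorial example (the region, tag names, and log path are illustrative):

```yaml
scrape_configs:
  - job_name: ec2-logs
    ec2_sd_configs:
      - region: us-east-2
    relabel_configs:
      # keep the instance name and availability zone as labels
      - source_labels: [__meta_ec2_tag_Name]
        target_label: name
      - source_labels: [__meta_ec2_availability_zone]
        target_label: zone
      # tell Promtail which files to tail on this host
      - action: replace
        replacement: /var/log/**.log
        target_label: __path__
      - source_labels: [__meta_ec2_private_dns_name]
        target_label: __host__
```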
You can now verify in Grafana that Loki has correctly received your instance logs by using the [LogQL](../../../../query/) query `{zone="us-east-2"}`.
That's it. Save the config and you can `reboot` the machine (or simply restart the service with `systemctl restart promtail.service`).
Let's head back to Grafana and verify that your Promtail logs are available in Grafana by using the [LogQL](../../../../query/) query `{unit="promtail.service"}` in Explore. Finally, make sure to check out [live tailing][live tailing] to see logs appearing as they are ingested into Loki.
## Adding Promtail DaemonSet
To ship all your pods' logs we're going to set up [Promtail](../../) as a DaemonSet in our cluster. This means it will run on each node of the cluster; we will then configure it to find the logs of your containers on the host.
What's nice about Promtail is that it uses the same [service discovery as Prometheus][prometheus conf]; you should make sure the `scrape_configs` of Promtail matches the Prometheus one. Not only is this simpler to configure, but it also means Metrics and Logs will have the same metadata (labels) attached by the Prometheus service discovery. When querying Grafana, you will be able to correlate metrics and logs very quickly; you can read more about this in our [blogpost][correlate].
This enables Promtail to read log entries from the pubsub subscription created earlier.
You can find an example Promtail scrape config for `gcplog` [here](../../scraping/#gcp-log-scraping).
If you are scraping logs from multiple GCP projects, then this service account should have the above permissions in all the projects you are trying to scrape.
Brackets indicate that a parameter is optional. For non-list parameters the value is set to the specified default.
For more detailed information on configuring how to discover and scrape logs from
targets, see [Scraping](../scraping/). For more information on transforming logs
from scraped targets, see [Pipelines](../pipelines/).
## Reload at runtime
### pipeline_stages
[Pipeline](../pipelines/) stages are used to transform log entries and their labels. The pipeline is executed after the discovery process finishes. The `pipeline_stages` object consists of a list of stages which correspond to the items listed below.
In most cases, you extract data from logs with `regex` or `json` stages. The extracted data is transformed into a temporary map object. The data can then be used by Promtail, for example as values for `labels` or as an `output`. Additionally, any other stage aside from `docker` and `cri` can access the extracted data.
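For instance, a minimal sketch of a pipeline that parses JSON, promotes `level` to a label, and rewrites the log line (the field names are illustrative):

```yaml
pipeline_stages:
  - json:
      expressions:
        level: level
        msg: message
  - labels:
      level:           # promote the extracted value to a label
  - output:
      source: msg      # use the extracted field as the new log line
```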
#### match
The match stage conditionally executes a set of stages when a log entry matches
a configurable [LogQL](../../../query/) stream selector.
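As a rough sketch (the selector and nested stages are illustrative):

```yaml
pipeline_stages:
  - match:
      selector: '{app="nginx"}'   # only matching entries run the nested stages
      stages:
        - regex:
            expression: '^(?P<method>\w+) (?P<path>\S+)'
        - labels:
            method:
```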
Promtail needs to wait for the next message to catch multi-line messages; therefore, delays between messages can occur.
See recommended output configurations for
[syslog-ng](../scraping/#syslog-ng-output-configuration) and
[rsyslog](../scraping/#rsyslog-output-configuration). Both configurations enable
IETF Syslog with octet-counting.
You may need to increase the open files limit for the Promtail process.
Each GELF message received will be encoded in JSON as the log line. For example:

```json
{"version":"1.1","host":"example.org","short_message":"A short message","timestamp":1231231123,"level":5,"_some_extra":"extra"}
```
You can leverage [pipeline stages](../stages/) with the GELF target,
if, for example, you want to parse the log line and extract more labels or change the log line format.
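A hedged sketch of a `gelf` scrape config to pair with such stages (the listen address and labels are illustrative):

```yaml
scrape_configs:
  - job_name: gelf
    gelf:
      listen_address: "0.0.0.0:12201"   # UDP GELF listener
      use_incoming_timestamp: true
      labels:
        job: gelf
```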
All Cloudflare logs are in JSON.
You can leverage [pipeline stages](../stages/) if, for example, you want to parse the JSON log line and extract more labels or change the log line format.
### heroku_drain
## Example Docker Config
It's fairly difficult to tail Docker files on a standalone machine because they are in different locations for every OS. We recommend the [Docker logging driver](../../docker-driver/) for local Docker installs or Docker Compose.
If running in a Kubernetes environment, you should look at the defined configs in [helm](https://github.com/grafana/helm-charts/blob/main/charts/promtail/templates/configmap.yaml) and [jsonnet](https://github.com/grafana/loki/blob/main/production/ksonnet/promtail/scrape_config.libsonnet); these leverage the Prometheus service discovery libraries (and give Promtail its name) to automatically find and tail pods. The jsonnet config explains with comments what each section is for.
### Kubernetes
[Kubernetes Service Discovery in Promtail](../scraping/#kubernetes-discovery) also uses file-based scraping: logs from your pods are stored on the nodes, and Promtail scrapes them from the node's files.
You can [configure](https://kubernetes.io/docs/concepts/cluster-administration/logging/#log-rotation) the `kubelet` process running on each node to manage log rotation via two configuration settings.
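A sketch of those two settings in a kubelet configuration file (the values shown are illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi   # rotate a container log once it reaches this size
containerLogMaxFiles: 5     # keep at most this many rotated files
```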
We recommend using kubelet for log rotation.
Promtail uses `polling` to watch for file changes. A `polling` mechanism combined with a [copy and truncate](#copy-and-truncate) log rotation may result in losing some logs. As explained earlier in this topic, this happens when the file is truncated before Promtail reads all the log lines from such a file.
Therefore, for a long-term solution, we strongly recommend changing the log rotation strategy to [rename and create](#rename-and-create). Alternatively, as a workaround in the short term, you can tweak the Promtail client's `batchsize` [config](../configuration/#clients) to a higher value (like 5M or 8M). This gives Promtail more room to read log lines without frequently waiting for push responses from the Loki server.
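A sketch of that workaround in the client config (the URL is a placeholder; `batchsize` is in bytes):

```yaml
clients:
  - url: http://loki:3100/loki/api/v1/push
    batchsize: 5242880   # ~5M, up from the ~1M default
```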
## Azure Event Hubs

Targets can be configured using the `azure_event_hubs` stanza:
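A hedged sketch of such a stanza (the namespace, connection string, and hub name are placeholders):

```yaml
scrape_configs:
  - job_name: azure_event_hubs
    azure_event_hubs:
      fully_qualified_namespace: my-namespace.servicebus.windows.net:9093
      connection_string: <connection string>
      event_hubs:
        - my-event-hub
      labels:
        job: azure_event_hubs
```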
Only `fully_qualified_namespace`, `connection_string` and `event_hubs` are required fields.
Read the [configuration](../configuration/#azure-event-hubs) section for more information.
## Cloudflare
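A hedged sketch of a `cloudflare` scrape config (the API token and zone ID are placeholders):

```yaml
scrape_configs:
  - job_name: cloudflare
    cloudflare:
      api_token: <api token>
      zone_id: <zone id>
      labels:
        job: cloudflare
```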
Only `api_token` and `zone_id` are required.
Refer to the [Cloudflare](../configuration/#cloudflare) configuration section for details.
## File Target Discovery
See [Relabeling](#relabeling) for more information. For details on how to configure the service discovery itself, see the [Kubernetes Service Discovery configuration](../configuration/#kubernetes_sd_config).
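For reference, a minimal sketch of Kubernetes pod discovery with `__host__` relabeling (the role and job name are illustrative):

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: ['__meta_kubernetes_pod_node_name']
        target_label: '__host__'
```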
## GCP Log scraping
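A hedged sketch of a pull-based `gcplog` scrape config (the project and subscription names are placeholders):

```yaml
scrape_configs:
  - job_name: gcplog
    gcplog:
      subscription_type: pull
      project_id: my-gcp-project
      subscription: my-loki-subscription
      use_incoming_timestamp: false   # rewrite timestamps at ingestion
      labels:
        job: gcplog
```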
Here, `project_id` and `subscription` are the only required fields.
- `project_id` is the GCP project id.
- `subscription` is the GCP pubsub subscription where Promtail can consume log entries from.
Before using the `gcplog` target, GCP should be [configured](../cloud/gcp/) with a pubsub subscription to receive logs from.
It also supports `relabeling` and `pipeline` stages just like other targets.
For Google's PubSub to be able to send logs, **the Promtail server must be publicly accessible and support HTTPS**. For that, Promtail can be deployed
as part of a larger orchestration service like Kubernetes, which can handle HTTPS traffic through an ingress, or it can be hosted behind
a proxy/gateway, offloading the HTTPS to that component and routing the request to Promtail. Once that's solved, GCP can be [configured](../cloud/gcp/)
to send logs to Promtail.
It also supports `relabeling` and `pipeline` stages.
Configuration is specified in a `heroku_drain` block within the Promtail `scrape_configs` section:
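A hedged sketch of such a block (the port and labels are illustrative):

```yaml
scrape_configs:
  - job_name: heroku_drain
    heroku_drain:
      server:
        http_listen_address: 0.0.0.0
        http_listen_port: 8080
      labels:
        job: heroku_drain
      use_incoming_timestamp: true
```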
Within the `scrape_configs` configuration for a Heroku Drain target, the `job_name` must be a Prometheus-compatible [metric name](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
The [server](../configuration/#server) section configures the HTTP server created for receiving logs.
`labels` defines a static set of label values added to each received log entry. `use_incoming_timestamp` can be used to pass
the timestamp received from Heroku.
```yaml
clients:
  - [ <client_option> ]
```
Refer to [`client_config`](../configuration/#clients) from the Promtail
Configuration reference for all available options.
## Journal Scraping (Linux Only)
## Kafka
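A hedged sketch of a `kafka` scrape config (the broker and topic names are placeholders):

```yaml
scrape_configs:
  - job_name: kafka
    kafka:
      brokers:
        - my-kafka-broker:9092
      topics:
        - logs
      labels:
        job: kafka
```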
Only the `brokers` and `topics` are required.
Read the [configuration](../configuration/#kafka) section for more information.
## Relabeling
## Windows Event Log

You can relabel default labels via [Relabeling](#relabeling) if required.
Providing a path to a bookmark is mandatory; it will be used to persist the last event processed and allow resuming the target without skipping logs.
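A hedged sketch of a `windows_events` target (the log name, bookmark path, and labels are illustrative):

```yaml
scrape_configs:
  - job_name: windows
    windows_events:
      eventlog_name: Application
      bookmark_path: ./bookmark-app.xml   # persists the last event processed
      labels:
        job: windows_events
```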
Read the [configuration](../configuration/#windows_events) section for more information.
See the [eventlogmessage](../stages/eventlogmessage/) stage for extracting data from the event's message field.
For `older_than` to work, you must be using the [timestamp](../timestamp/) stage to set the timestamp from the ingested log line _before_ applying the `drop` stage.
This document describes known failure modes of Promtail on edge cases and the adopted trade-offs.
Promtail can be configured to print log stream entries instead of sending them to Loki.
This can be used in combination with [piping data](#pipe-data-to-promtail) to debug or troubleshoot Promtail log parsing.
In dry run mode, Promtail still supports reading from a [positions](../configuration/#positions) file; however, no updates will be made to the targeted file. This ensures you can easily retry the same set of lines.
To start Promtail in dry run mode, use the `--dry-run` flag.
In pipe mode, Promtail also supports file configuration using `--config.file`; however, note that the positions config is not used and only **the first scrape config is used**.
[`static_configs:`](../configuration/) can be used to provide static labels, although the targets property is ignored.
If you don't provide any [`scrape_config:`](../configuration/#scrape_configs), a default one is used, which automatically adds the following default labels: `{job="stdin",hostname="<detected_hostname>"}`.
For example, you could use a config like the one below to parse and add the label `level` to all your piped logs:
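A minimal sketch, assuming JSON-formatted input with a `level` field:

```yaml
scrape_configs:
  - job_name: stdin
    static_configs:
      - targets: [localhost]   # ignored in pipe mode
        labels:
          job: stdin
    pipeline_stages:
      - json:
          expressions:
            level: level
      - labels:
          level:
```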
By default, the chart installs in [Simple Scalable](../install-scalable/) mode. This is the recommended method for most users. To understand the differences between deployment methods, see the [Loki deployment modes](../../../../get-started/deployment-modes/) documentation.
## Monitoring Loki
The Loki Helm chart does not deploy self-monitoring by default. Loki clusters can be monitored using the meta-monitoring stack, which monitors the logs, metrics, and traces of the Loki cluster. There are two deployment options for this stack; see the installation instructions within [Monitoring](../monitor-and-alert/).
{{< admonition type="note" >}}
The meta-monitoring stack replaces the monitoring section of the Loki helm chart which is now **DEPRECATED**. See the [Monitoring](../monitor-and-alert/) section for more information.
{{< /admonition >}}
## Canary
This chart installs the [Loki Canary app](../../../../operations/loki-canary/) by default. This is another tool to verify the Loki deployment is in a healthy state. It can be disabled by setting `lokiCanary.enabled=false`.
## Gateway
## Caching
By default, this chart configures in-memory caching. If that caching does not work for your deployment, you should set up [memcache](../../../../operations/caching/).
The [scalable](../install-scalable/) installation requires a managed object store such as AWS S3 or Google Cloud Storage or a self-hosted store such as Minio. The [single binary](../install-monolithic/) installation can only use the filesystem for storage.
This guide assumes Loki will be installed in one of the modes above and that a `values.yaml` has been created.
This Helm Chart deploys Grafana Loki in [simple scalable mode](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#simple-scalable) within a Kubernetes cluster.
This chart configures Loki to run `read`, `write`, and `backend` targets in a [scalable mode](../../../../get-started/deployment-modes/#simple-scalable). Loki’s simple scalable deployment mode separates execution paths into read, write, and backend targets.
The default Helm chart deploys the following components:
- Read component (3 replicas)
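For example, the chart can deploy MinIO for evaluation deployments (a minimal sketch; see the chart's values for the full set of options):

```yaml
minio:
  enabled: true
```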
To configure other storage providers, refer to the [Helm Chart Reference](../reference/).
## Next Steps
* Configure an agent to [send log data to Loki](/docs/loki/<LOKI_VERSION>/send-data/).
This section contains instructions for migrating from one Loki implementation to another.
- [Migrate](migrate-to-tsdb/) to TSDB index.
- [Migrate](migrate-from-distributed/) from the `Loki-distributed` Helm chart to the `loki` Helm chart.
- [Migrate](migrate-to-three-scalable-targets/) from the two target Helm chart to the three target scalable configuration Helm chart.
- [Migrate](migrate-storage-clients/) from the legacy storage clients to the Thanos object storage client.
[TSDB](../../../operations/storage/tsdb/) is the recommended index type for Loki and is where the current development lies.
If you are running Loki with [boltdb-shipper](../../../operations/storage/boltdb-shipper/) or any of the [legacy index types](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#index-storage) that have been deprecated,
we strongly recommend migrating to TSDB.
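As a rough sketch, the migration adds a new schema period that switches the index to `tsdb` from a chosen future date (the date, object store, and schema version here are illustrative):

```yaml
schema_config:
  configs:
    # existing periods stay as they are; the new period takes effect on `from`
    - from: "2026-03-01"
      store: tsdb
      object_store: s3
      schema: v13
      index:
        prefix: index_
        period: 24h
```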
### Run compactor
We strongly recommend running the [compactor](../../../operations/storage/retention/#compactor) when using the TSDB index. It is responsible for running compaction and retention on the TSDB index.
Not running index compaction will result in sub-optimal query performance.
Please refer to the [compactor section](../../../operations/storage/retention/#compactor) for more information and configuration examples.
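A minimal sketch of enabling the compactor with retention (the paths and delete-request store are illustrative, and available fields vary by Loki version):

```yaml
compactor:
  working_directory: /loki/compactor
  retention_enabled: true
  delete_request_store: s3   # where delete requests are stored
```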