chore: Remove relref shortcodes (#16624)

Robby Milo authored 1 year ago, committed by GitHub
parent 1e1e7a94d5
commit e4c6888ba8
81 changed files (changed lines per file):

1. docs/sources/_index.md (4)
2. docs/sources/alert/_index.md (6)
3. docs/sources/community/_index.md (12)
4. docs/sources/community/design-documents/_index.md (13)
5. docs/sources/community/lids/0001-Introduction.md (2)
6. docs/sources/community/maintaining/release/_index.md (34)
7. docs/sources/community/maintaining/release/backport-commits.md (2)
8. docs/sources/community/maintaining/release/create-release-branch.md (2)
9. docs/sources/community/maintaining/release/document-metrics-configurations-changes.md (4)
10. docs/sources/community/maintaining/release/major-release.md (2)
11. docs/sources/community/maintaining/release/patch-go-version.md (8)
12. docs/sources/community/maintaining/release/patch-vulnerabilities.md (10)
13. docs/sources/community/maintaining/release/prepare-release.md (2)
14. docs/sources/community/maintaining/release/update-version-numbers.md (2)
15. docs/sources/configure/_index.md (2)
16. docs/sources/configure/examples/_index.md (4)
17. docs/sources/configure/examples/query-frontend.md (79)
18. docs/sources/get-started/architecture.md (14)
19. docs/sources/get-started/components.md (20)
20. docs/sources/get-started/deployment-modes.md (2)
21. docs/sources/get-started/labels/_index.md (2)
22. docs/sources/get-started/labels/bp-labels.md (4)
23. docs/sources/get-started/overview.md (10)
24. docs/sources/operations/_index.md (2)
25. docs/sources/operations/authentication.md (6)
26. docs/sources/operations/blocking-queries.md (2)
27. docs/sources/operations/meta-monitoring/_index.md (2)
28. docs/sources/operations/query-fairness/_index.md (9)
29. docs/sources/operations/request-validation-rate-limits.md (2)
30. docs/sources/operations/scalability.md (11)
31. docs/sources/operations/storage/_index.md (2)
32. docs/sources/operations/storage/legacy-storage.md (2)
33. docs/sources/operations/storage/logs-deletion.md (2)
34. docs/sources/operations/storage/table-manager/_index.md (8)
35. docs/sources/operations/storage/tsdb.md (4)
36. docs/sources/operations/troubleshooting.md (2)
37. docs/sources/query/_index.md (4)
38. docs/sources/query/log_queries/_index.md (12)
39. docs/sources/query/metric_queries.md (6)
40. docs/sources/reference/loki-http-api.md (30)
41. docs/sources/release-notes/v2-3.md (14)
42. docs/sources/release-notes/v2-4.md (12)
43. docs/sources/release-notes/v2-5.md (2)
44. docs/sources/release-notes/v2-6.md (6)
45. docs/sources/release-notes/v2-7.md (4)
46. docs/sources/release-notes/v2-8.md (2)
47. docs/sources/send-data/alloy/_index.md (4)
48. docs/sources/send-data/docker-driver/configuration.md (8)
49. docs/sources/send-data/fluentd/_index.md (2)
50. docs/sources/send-data/lambda-promtail/_index.md (6)
51. docs/sources/send-data/logstash/_index.md (2)
52. docs/sources/send-data/otel/_index.md (6)
53. docs/sources/send-data/promtail/_index.md (10)
54. docs/sources/send-data/promtail/cloud/_index.md (8)
55. docs/sources/send-data/promtail/cloud/ec2/_index.md (16)
56. docs/sources/send-data/promtail/cloud/eks/_index.md (2)
57. docs/sources/send-data/promtail/cloud/gcp/_index.md (2)
58. docs/sources/send-data/promtail/configuration.md (18)
59. docs/sources/send-data/promtail/logrotation/_index.md (4)
60. docs/sources/send-data/promtail/pipelines.md (10)
61. docs/sources/send-data/promtail/scraping.md (44)
62. docs/sources/send-data/promtail/stages/_index.md (50)
63. docs/sources/send-data/promtail/stages/drop.md (2)
64. docs/sources/send-data/promtail/stages/eventlogmessage.md (2)
65. docs/sources/send-data/promtail/stages/json.md (2)
66. docs/sources/send-data/promtail/stages/logfmt.md (2)
67. docs/sources/send-data/promtail/stages/match.md (6)
68. docs/sources/send-data/promtail/stages/pack.md (2)
69. docs/sources/send-data/promtail/stages/structured_metadata.md (2)
70. docs/sources/send-data/promtail/stages/tenant.md (2)
71. docs/sources/send-data/promtail/troubleshooting/_index.md (6)
72. docs/sources/setup/_index.md (6)
73. docs/sources/setup/install/_index.md (12)
74. docs/sources/setup/install/helm/concepts.md (16)
75. docs/sources/setup/install/helm/configure-storage/_index.md (2)
76. docs/sources/setup/install/helm/install-microservices/_index.md (2)
77. docs/sources/setup/install/helm/install-scalable/_index.md (4)
78. docs/sources/setup/install/helm/monitor-and-alert/_index.md (4)
79. docs/sources/setup/install/local.md (4)
80. docs/sources/setup/migrate/_index.md (8)
81. docs/sources/setup/migrate/migrate-to-tsdb/_index.md (8)

@@ -4,6 +4,8 @@ description: Grafana Loki is a set of open source components that can be compose
aliases:
- /docs/loki/
weight: 100
+cascade:
+  GRAFANA_VERSION: latest
hero:
  title: Grafana Loki
  level: 1
@@ -41,7 +43,7 @@ cards:
## Overview
Unlike other logging systems, Loki is built around the idea of only indexing metadata about your logs' labels (just like Prometheus labels).
Log data itself is then compressed and stored in chunks in object stores such as Amazon Simple Storage Service (S3) or Google Cloud Storage (GCS), or even locally on the filesystem.
## Explore

@@ -83,7 +83,7 @@ We support [Prometheus-compatible](https://prometheus.io/docs/prometheus/latest/
> Querying the precomputed result will then often be much faster than executing the original expression every time it is needed. This is especially useful for dashboards, which need to query the same expression repeatedly every time they refresh.
-Loki allows you to run [metric queries]({{< relref "../query/metric_queries" >}}) over your logs, which means
+Loki allows you to run [metric queries](../query/metric_queries/) over your logs, which means
that you can derive a numeric aggregation from your logs, like calculating the number of requests over time from your NGINX access log.
### Example
@@ -167,7 +167,7 @@ Further configuration options can be found under [ruler](https://grafana.com/doc
### Operations
-Please refer to the [Recording Rules]({{< relref "../operations/recording-rules" >}}) page.
+Please refer to the [Recording Rules](../operations/recording-rules/) page.
## Use cases
@@ -308,7 +308,7 @@ The [Cortex rules action](https://github.com/grafana/cortex-rules-action) introd
One option to scale the Ruler is by scaling it horizontally. However, with multiple Ruler instances running, they will need to coordinate to determine which instance will evaluate which rule. Similar to the ingesters, the Rulers establish a hash ring to divide up the responsibilities of evaluating rules.
-The possible configurations are listed fully in the [configuration documentation]({{< relref "../configure" >}}), but in order to shard rules across multiple Rulers, the rules API must be enabled via flag (`-ruler.enable-api`) or config file parameter. Secondly, the Ruler requires its own ring to be configured. From there the Rulers will shard and handle the division of rules automatically. Unlike ingesters, Rulers do not hand over responsibility: all rules are re-sharded randomly every time a Ruler is added to or removed from the ring.
+The possible configurations are listed fully in the [configuration documentation](../configure/), but in order to shard rules across multiple Rulers, the rules API must be enabled via flag (`-ruler.enable-api`) or config file parameter. Secondly, the Ruler requires its own ring to be configured. From there the Rulers will shard and handle the division of rules automatically. Unlike ingesters, Rulers do not hand over responsibility: all rules are re-sharded randomly every time a Ruler is added to or removed from the ring.
A full sharding-enabled Ruler example is:
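For illustration only, a minimal sharding-enabled ruler configuration could look like the following sketch (the full example is not reproduced in this diff; the `memberlist` store, the paths, and the `local` rule storage are assumptions):

```yaml
ruler:
  enable_api: true          # expose the rules API so rules can be managed and sharded
  rule_path: /tmp/loki/rules-temp
  storage:
    type: local
    local:
      directory: /loki/rules
  ring:                     # the ruler's own hash ring, used to divide rules between instances
    kvstore:
      store: memberlist
```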

@@ -5,9 +5,9 @@ weight: 1100
---
# Community
-1. [Governance]({{< relref "./governance" >}})
-1. [Getting in Touch]({{< relref "./getting-in-touch" >}})
-1. [Contributing]({{< relref "./contributing" >}})
-1. [Maintaining Loki]({{< relref "./maintaining" >}})
-1. [Loki Improvement Documents]({{< relref "./lids" >}})
-1. [Design documents]({{< relref "./design-documents" >}})
+1. [Governance](governance/)
+1. [Getting in Touch](getting-in-touch/)
+1. [Contributing](contributing/)
+1. [Maintaining Loki](maintaining/)
+1. [Loki Improvement Documents](lids/)
+1. [Design documents](design-documents/)

@@ -1,13 +1,14 @@
---
title: Design documents
description: Loki Design documents
aliases:
- ../design-documents/
weight: 600
---
# Design documents
-- [Labels from Logs]({{< relref "./labels" >}})
-- [Promtail Push API]({{< relref "./2020-02-Promtail-Push-API" >}})
-- [Write-Ahead Logs]({{< relref "./2020-09-Write-Ahead-Log" >}})
-- [Ordering Constraint Removal]({{< relref "./2021-01-Ordering-Constraint-Removal" >}})
+- [Labels from Logs](labels/)
+- [Promtail Push API](2020-02-promtail-push-api/)
+- [Write-Ahead Logs](2020-09-write-ahead-log/)
+- [Ordering Constraint Removal](2021-01-ordering-constraint-removal/)

@@ -53,4 +53,4 @@ Inspired by Python's [PEP](https://peps.python.org/pep-0001/) and Kafka's [KIP](
Google Docs were considered for this, but they are less useful because:
- they would need to be owned by the Grafana Labs organisation, so that they remain viewable even if the author closes their account
-- we already have previous [design documents]({{< relref "../design-documents" >}}) in our documentation and, in a recent ([5th Jan 2023](https://docs.google.com/document/d/1MNjiHQxwFukm2J4NJRWyRgRIiK7VpokYyATzJ5ce-O8/edit#heading=h.78vexgrrtw5a)) community call, the community expressed a preference for this type of approach
+- we already have previous [design documents](../../design-documents/) in our documentation and, in a recent ([5th Jan 2023](https://docs.google.com/document/d/1MNjiHQxwFukm2J4NJRWyRgRIiK7VpokYyATzJ5ce-O8/edit#heading=h.78vexgrrtw5a)) community call, the community expressed a preference for this type of approach

@@ -10,31 +10,31 @@ weight: 500
This document is a series of instructions for core Grafana Loki maintainers to be able
to publish a new [Grafana Loki](https://github.com/grafana/loki) release.
-The general process for releasing a new version of Grafana Loki is to merge the release PR for that version. Every commit to branches matching the pattern `release-[0-9]+.[0-9]+.x` will trigger a [prepare patch release]({{< relref "./prepare-release.md" >}}) workflow. This workflow will build release candidates for that patch, automatically generate release notes based on the commits since the last release, and update the long-running PR for that release. To publish the release, merge the PR.
+The general process for releasing a new version of Grafana Loki is to merge the release PR for that version. Every commit to branches matching the pattern `release-[0-9]+.[0-9]+.x` will trigger a [prepare patch release](prepare-release/) workflow. This workflow will build release candidates for that patch, automatically generate release notes based on the commits since the last release, and update the long-running PR for that release. To publish the release, merge the PR.
-Every commit to branches matching the pattern `k[0-9]+` will trigger a [prepare minor release]({{< relref "./prepare-release.md" >}}) workflow. This follows the same process as a patch release, but prepares a minor release instead. To publish the minor release, merge the PR.
+Every commit to branches matching the pattern `k[0-9]+` will trigger a [prepare minor release](prepare-release/) workflow. This follows the same process as a patch release, but prepares a minor release instead. To publish the minor release, merge the PR.
-Releasing a new major version requires a [custom major release workflow]({{< relref "./major-release.md" >}}) to be created to run on the branch we want to release from. Once that workflow is created, the steps for releasing a new major are the same as a minor or patch release.
+Releasing a new major version requires a [custom major release workflow](major-release/) to be created to run on the branch we want to release from. Once that workflow is created, the steps for releasing a new major are the same as a minor or patch release.
## Release stable version
-1. [Create release branch]({{< relref "./create-release-branch" >}})
-1. [Backport commits]({{< relref "./backport-commits" >}})
-1. [Document Metrics and Configurations changes]({{< relref "./document-metrics-configurations-changes" >}})
-1. [Prepare Upgrade guide]({{< relref "./prepare-upgrade-guide" >}})
-1. [Update version numbers]({{< relref "./update-version-numbers" >}})
+1. [Create release branch](create-release-branch/)
+1. [Backport commits](backport-commits/)
+1. [Document Metrics and Configurations changes](document-metrics-configurations-changes/)
+1. [Prepare Upgrade guide](prepare-upgrade-guide/)
+1. [Update version numbers](update-version-numbers/)
## Release patched version
-1. [Backport commits]({{< relref "./backport-commits" >}})
-1. [Document Metrics and Configurations changes]({{< relref "./document-metrics-configurations-changes" >}})
-1. [Prepare Upgrade guide]({{< relref "./prepare-upgrade-guide" >}})
-1. [Merge Release PR]({{< relref "./merge-release-pr" >}})
-1. [Update version numbers]({{< relref "./update-version-numbers" >}})
+1. [Backport commits](backport-commits/)
+1. [Document Metrics and Configurations changes](document-metrics-configurations-changes/)
+1. [Prepare Upgrade guide](prepare-upgrade-guide/)
+1. [Merge Release PR](merge-release-pr/)
+1. [Update version numbers](update-version-numbers/)
## Release security patched version
-1. [Patch vulnerabilities]({{< relref "./patch-vulnerabilities" >}})
-1. [Backport commits]({{< relref "./backport-commits" >}})
-1. [Merge Release PR]({{< relref "./merge-release-pr" >}})
-1. [Update version numbers]({{< relref "./update-version-numbers" >}})
+1. [Patch vulnerabilities](patch-vulnerabilities/)
+1. [Backport commits](backport-commits/)
+1. [Merge Release PR](merge-release-pr/)
+1. [Update version numbers](update-version-numbers/)

@@ -9,7 +9,7 @@ Any PRs or commits not on the release branch that you want to include in the rel
## Before you begin
-1. Determine the [VERSION_PREFIX]({{< relref "./concepts/version" >}}).
+1. Determine the [VERSION_PREFIX](../concepts/version/).
2. If the release branch already has all the code changes on it, skip this step.

@@ -9,7 +9,7 @@ branch is then used for all the Stable Releases, and all Patch Releases for that
## Before you begin
-1. Determine the [VERSION_PREFIX]({{< relref "./concepts/version" >}}).
+1. Determine the [VERSION_PREFIX](../concepts/version/).
1. Announce the upcoming release in the `#loki-releases` internal Slack channel.
1. Skip this announcement for a patch release. Create an issue to communicate the beginning of the release process with the community. Example issue [here](https://github.com/grafana/loki/issues/10468).

@@ -17,11 +17,11 @@ All the steps are performed on `release-VERSION_PREFIX` branch.
   $ OLD_VERSION=X.Y.Z ./tools/diff-config.sh
   ```
-1. Record configurations that are modified (either renamed or had their default value changed) in the [upgrade guide]({{< relref "./prepare-upgrade-guide" >}}).
+1. Record configurations that are modified (either renamed or had their default value changed) in the [upgrade guide](../prepare-upgrade-guide/).
1. Check if any metrics have changed.
   ```
   $ OLD_VERSION=X.Y.Z ./tools/diff-metrics.sh
   ```
-1. Record metrics whose names have been modified in the [upgrade guide]({{< relref "./prepare-upgrade-guide" >}}).
+1. Record metrics whose names have been modified in the [upgrade guide](../prepare-upgrade-guide/).

@@ -5,7 +5,7 @@ description: Describes the process to create a workflow for a major release of G
# Prepare Major Release
-A major release follows the same process as [minor and patch releases]({{< relref "./prepare-release.md" >}}), but requires a custom workflow to be created to run on the branch we want to release from. The reason for this is that we don't do major releases very often, so it is not economical to keep those workflows running all the time.
+A major release follows the same process as [minor and patch releases](../prepare-release/), but requires a custom workflow to be created to run on the branch we want to release from. The reason for this is that we don't do major releases very often, so it is not economical to keep those workflows running all the time.
To create a major release workflow, follow the steps below.

@@ -8,7 +8,7 @@ Update vulnerable Go version to non-vulnerable Go version to build Grafana Loki
## Before you begin
-1. Determine the [VERSION_PREFIX]({{< relref "./concepts/version" >}}).
+1. Determine the [VERSION_PREFIX](../concepts/version/).
1. You need to sign in to Docker Hub to be able to push the Loki build image.
@@ -18,8 +18,8 @@ Update vulnerable Go version to non-vulnerable Go version to build Grafana Loki
1. Update Go version in the Grafana Loki build image (`loki-build-image/Dockerfile`) on the `main` branch.
-1. [Release a new Loki Build Image]({{< relref "../release-loki-build-image.md" >}})
+1. [Release a new Loki Build Image](../../release-loki-build-image/)
-1. [Backport]({{< relref "./backport-commits" >}}) the Dockerfile change to `release-VERSION_PREFIX` branch.
+1. [Backport](../backport-commits/) the Dockerfile change to `release-VERSION_PREFIX` branch.
-1. [Backport]({{< relref "./backport-commits" >}}) the Loki Build Image version change from `main` to `release-VERSION_PREFIX` branch.
+1. [Backport](../backport-commits/) the Loki Build Image version change from `main` to `release-VERSION_PREFIX` branch.

@@ -8,7 +8,7 @@ This step patches vulnerabilities in Grafana Loki binaries and Docker images.
## Before you begin
-1. Determine the [VERSION_PREFIX]({{< relref "./concepts/version" >}}).
+1. Determine the [VERSION_PREFIX](../concepts/version/).
Vulnerabilities can be from two main sources.
@@ -34,7 +34,7 @@ Before you start patching vulnerabilities, know what you are patching. It can be one
1. Patch it on the `main` branch
-1. [Backport]({{< relref "./backport-commits" >}}) to `release-$VERSION_PREFIX` branch.
+1. [Backport](../backport-commits/) to `release-$VERSION_PREFIX` branch.
1. Patch Go dependencies.
@@ -47,14 +47,14 @@ Before you start patching vulnerabilities, know what you are patching. It can be one
   go mod tidy
   go mod vendor
   ```
-1. [Backport]({{< relref "./backport-commits" >}}) it to `release-$VERSION_PREFIX` branch.
+1. [Backport](../backport-commits/) it to `release-$VERSION_PREFIX` branch.
1. Repeat for each Go dependency.
-1. [Patch Go compiler]({{< relref "./patch-go-version" >}}).
+1. [Patch Go compiler](../patch-go-version/).
1. Patch Grafana Loki Docker dependencies (for example, Alpine Linux base images).
1. Update Docker image version. [Example PR](https://github.com/grafana/loki/pull/10573).
-1. [Backport]({{< relref "./backport-commits" >}}) to `release-$VERSION_PREFIX` branch
+1. [Backport](../backport-commits/) to `release-$VERSION_PREFIX` branch

@@ -13,4 +13,4 @@ Releasing Grafana Loki consists of merging a long-running release PR. Two workfl
## Major releases
-Major releases follow the same process as minor and patch releases, but require a custom workflow to be created to run on the branch we want to release from. The reason for this is that we don't do major releases very often, so it is not economical to keep those workflows running all the time. To create a major release workflow, follow the steps in the [major release workflow]({{< relref "./major-release.md" >}}) document.
+Major releases follow the same process as minor and patch releases, but require a custom workflow to be created to run on the branch we want to release from. The reason for this is that we don't do major releases very often, so it is not economical to keep those workflows running all the time. To create a major release workflow, follow the steps in the [major release workflow](../major-release/) document.

@@ -8,7 +8,7 @@ Upgrade the Loki version to the new release version in documents, examples, json
## Before you begin
-1. Determine the [VERSION_PREFIX]({{< relref "./concepts/version" >}}).
+1. Determine the [VERSION_PREFIX](../concepts/version/).
2. Skip this step if you are doing a patch release on an old release branch.

@@ -13,7 +13,7 @@ Grafana Loki is configured in a YAML file (usually referred to as `loki.yaml`)
which contains information on the Loki server and its individual components,
depending on which mode Loki is launched in.
-Configuration examples can be found in the [Configuration Examples]({{< relref "./examples/configuration-examples" >}}) document.
+Configuration examples can be found in the [Configuration Examples](examples/configuration-examples/) document.
<!-- The shared `configuration.md` file is generated from `/docs/templates/configuration.template`. To make changes to the included content, modify the template file and run `make doc` from root directory to regenerate the shared file. -->
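For orientation, a minimal `loki.yaml` for a single-binary, filesystem-backed setup might look like this sketch (all values are illustrative placeholders, not recommendations):

```yaml
auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /loki
  replication_factor: 1
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2024-04-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h
```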

@@ -9,6 +9,6 @@ weight: 800
The following pages contain examples of how to configure Grafana Loki.
-- [Configuration snippets and ready-to-use configuration examples]({{< relref "./configuration-examples" >}}).
-- [Deploy a query frontend on an existing cluster]({{< relref "./query-frontend" >}}).
+- [Configuration snippets and ready-to-use configuration examples](configuration-examples/).
+- [Deploy a query frontend on an existing cluster](query-frontend/).
- [Configuration examples for using Thanos-based storage clients](./thanos-storage-configs).

@@ -1,9 +1,13 @@
---
title: Query frontend example
menuTitle:
description: Kubernetes query frontend example.
weight: 200
+aliases:
+- ../../configuration/query-frontend/
+- ../../configure/query-frontend/
---
# Query frontend example
## Disclaimer
@@ -68,6 +72,7 @@ data:
```
### Frontend Service
```yaml
apiVersion: v1
kind: Service
@@ -79,10 +84,10 @@ metadata:
  namespace: <namespace>
spec:
  ports:
    - name: query-frontend-http
      port: 3100
      protocol: TCP
      targetPort: 3100
  selector:
    name: query-frontend
  sessionAffinity: None
@@ -113,33 +118,33 @@ spec:
        name: query-frontend
    spec:
      containers:
        - args:
            - -config.file=/etc/loki/config.yaml
            - -log.level=debug
            - -target=query-frontend
          image: grafana/loki:latest
          imagePullPolicy: Always
          name: query-frontend
          ports:
            - containerPort: 3100
              name: http
              protocol: TCP
          resources:
            limits:
              memory: 1200Mi
            requests:
              cpu: '2'
              memory: 600Mi
          volumeMounts:
            - mountPath: /etc/loki
              name: loki-frontend
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            defaultMode: 420
            name: loki-frontend
          name: loki-frontend
```
### Grafana
@@ -151,13 +156,13 @@ Once you've deployed these, point your Grafana data source to the new frontend s
The query frontend operates in one of two ways:
- Specify `--frontend.downstream-url` or its YAML equivalent, `frontend.downstream_url`. This proxies requests over HTTP to the specified URL.
- Without `--frontend.downstream-url` or its YAML equivalent `frontend.downstream_url`, the query frontend defaults to a pull service. As a pull service, the frontend instantiates per-tenant queues that downstream queriers pull queries from via gRPC. To act as a pull service, queriers need to specify `-querier.frontend-address` or its YAML equivalent `frontend_worker.frontend_address`.
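As a sketch, the two modes map to configuration like this (the key names come from the text above; the service hostnames and ports are placeholders):

```yaml
# Mode 1 (proxy): the frontend forwards each request over HTTP to a downstream URL.
frontend:
  downstream_url: http://querier.<namespace>.svc.cluster.local:3100

# Mode 2 (pull): set on the queriers instead, which connect to the frontend
# and pull sub-queries over gRPC.
frontend_worker:
  frontend_address: query-frontend.<namespace>.svc.cluster.local:9095
```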
Set `ClusterIP=None` for the query frontend pull service.
This causes DNS resolution of each query frontend pod IP address.
It avoids wrongly resolving to the service IP.
Enable `publishNotReadyAddresses=true` on the query frontend pull service.
Doing so eliminates a race condition in which the query frontend address
is needed before the query frontend becomes ready
when at least one querier connects.
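A hedged sketch of a headless Service that follows both recommendations (the Service name, namespace, and the 9095 gRPC port are assumptions, not values from the example above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: query-frontend-headless
  namespace: <namespace>
spec:
  clusterIP: None                 # headless: DNS returns the pod IPs, never a service IP
  publishNotReadyAddresses: true  # advertise addresses before readiness to avoid the startup race
  ports:
    - name: grpc
      port: 9095
      protocol: TCP
      targetPort: 9095
  selector:
    name: query-frontend
```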

@@ -18,7 +18,7 @@ To get started easily, run Grafana Loki in "single binary" mode with all compone
Grafana Loki is designed to let you easily redeploy a cluster under a different mode as your needs change, with minimal or no configuration changes.
-For more information, refer to [Deployment modes]({{< relref "./deployment-modes" >}}) and [Components]({{< relref "./components" >}}).
+For more information, refer to [Deployment modes](../deployment-modes/) and [Components](../components/).
![Loki components](../loki_architecture_components.svg "Loki components")
@@ -28,7 +28,7 @@ Loki stores all data in a single object storage backend, such as Amazon Simple S
This mode uses an adapter called **index shipper** (or **shipper** for short) to store index (TSDB or BoltDB) files the same way we store chunk files in object storage.
This mode of operation became generally available with Loki 2.0 and is fast, cost-effective, and simple. It is where all current and future development lies.
-Prior to 2.0, Loki had different storage backends for indexes and chunks. For more information, refer to [Legacy storage]({{< relref "../operations/storage/legacy-storage" >}}).
+Prior to 2.0, Loki had different storage backends for indexes and chunks. For more information, refer to [Legacy storage](../../operations/storage/legacy-storage/).
### Data format
@@ -45,14 +45,14 @@ The diagram above shows the high-level overview of the data that is stored in th
There are two index formats that are currently supported as single store with index shipper:
-- [TSDB]({{< relref "../operations/storage/tsdb" >}}) (recommended)
+- [TSDB](../../operations/storage/tsdb/) (recommended)
Time Series Database (TSDB for short) is an [index format](https://github.com/prometheus/prometheus/blob/main/tsdb/docs/format/index.md) originally developed by the maintainers of [Prometheus](https://github.com/prometheus/prometheus) for time series (metric) data.
It is extensible and has many advantages over the deprecated BoltDB index.
New storage features in Loki are solely available when using TSDB.
-- [BoltDB]({{< relref "../operations/storage/boltdb-shipper" >}}) (deprecated)
+- [BoltDB](../../operations/storage/boltdb-shipper/) (deprecated)
[Bolt](https://github.com/boltdb/bolt) is a low-level, transactional key-value store written in Go.
@@ -106,7 +106,7 @@ The following ASCII diagram describes the chunk format in detail.
respectively.
The `structuredMetadata` section stores non-repeated strings. It is used to store label names and label values from
-[structured metadata]({{< relref "./labels/structured-metadata" >}}).
+[structured metadata](../labels/structured-metadata/).
Note that the label strings and lengths within the `structuredMetadata` section are stored compressed.
#### Block format
@@ -147,7 +147,7 @@ On a high level, the write path in Loki works as follows:
1. The distributor responds with a success (2xx status code) in case it received at least a quorum of acknowledged writes,
or with an error (4xx or 5xx status code) in case write operations failed.
-Refer to [Components]({{< relref "./components" >}}) for a more detailed description of the components involved in the write path.
+Refer to [Components](../components/) for a more detailed description of the components involved in the write path.
## Read path
@@ -164,7 +164,7 @@ On a high level, the read path in Loki works as follows:
1. The query frontend waits for all sub-queries of a query to be finished and returned by the queriers.
1. The query frontend merges the individual results into a final result and returns it to the client.
-Refer to [Components]({{< relref "./components" >}}) for a more detailed description of the components involved in the read path.
+Refer to [Components](../components/) for a more detailed description of the components involved in the read path.
## Multi-tenancy

@@ -12,7 +12,7 @@ aliases:
Loki is a modular system that contains many components that can either be run together (in "single binary" mode with target `all`),
in logical groups (in "simple scalable deployment" mode with targets `read`, `write`, `backend`), or individually (in "microservice" mode).
-For more information see [Deployment modes]({{< relref "./deployment-modes" >}}).
+For more information see [Deployment modes](../deployment-modes/).
| Component | _individual_ | `all` | `read` | `write` | `backend` |
|----------------------------------------------------|--------------| - | - | - | - |
@@ -133,7 +133,7 @@ the hash ring. Each ingester has a state of either `PENDING`, `JOINING`,
another ingester that is `LEAVING`. This only applies for legacy deployment modes.
{{< admonition type="note" >}}
-Handoff is a deprecated behavior mainly used in stateless deployments of ingesters, which is discouraged. Instead, it's recommended to use a stateful deployment model together with the [write ahead log]({{< relref "../operations/storage/wal" >}}).
+Handoff is a deprecated behavior mainly used in stateless deployments of ingesters, which is discouraged. Instead, it's recommended to use a stateful deployment model together with the [write ahead log](../../operations/storage/wal/).
{{< /admonition >}}
1. `JOINING` is an Ingester's state when it is currently inserting its tokens
@@ -190,7 +190,7 @@ Logs from each unique set of labels are built up into "chunks" in memory and
then flushed to the backing storage backend.
If an ingester process crashes or exits abruptly, all the data that has not yet
-been flushed could be lost. Loki is usually configured with a [Write Ahead Log]({{< relref "../operations/storage/wal" >}}) which can be _replayed_ on restart as well as with a `replication_factor` (usually 3) of each log to mitigate this risk.
+been flushed could be lost. Loki is usually configured with a [Write Ahead Log](../../operations/storage/wal/) which can be _replayed_ on restart as well as with a `replication_factor` (usually 3) of each log to mitigate this risk.
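Expressed as configuration, the two mitigations would look roughly like this sketch (directory paths are placeholders; `replication_factor` is shown under the `common` block, which is an assumption about where it is set):

```yaml
ingester:
  wal:
    enabled: true   # write a WAL that can be replayed after a crash or restart
    dir: /loki/wal

common:
  replication_factor: 3  # each stream is written to 3 ingesters
```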
When not configured to accept out-of-order writes,
all lines pushed to Loki for a given stream (unique combination of
@@ -209,7 +209,7 @@ nanosecond timestamps:
### Handoff
{{< admonition type="warning" >}}
-Handoff is deprecated behavior mainly used in stateless deployments of ingesters, which is discouraged. Instead, it's recommended to use a stateful deployment model together with the [write ahead log]({{< relref "../operations/storage/wal" >}}).
+Handoff is deprecated behavior mainly used in stateless deployments of ingesters, which is discouraged. Instead, it's recommended to use a stateful deployment model together with the [write ahead log](../../operations/storage/wal/).
{{< /admonition >}}
By default, when an ingester is shutting down and tries to leave the hash ring,
@@ -280,7 +280,7 @@ This cache is only applicable when using single store TSDB.
## Query scheduler
-The **query scheduler** is an **optional service** providing more [advanced queuing functionality]({{< relref "../operations/query-fairness" >}}) than the [query frontend](#query-frontend).
+The **query scheduler** is an **optional service** providing more [advanced queuing functionality](../../operations/query-fairness/) than the [query frontend](#query-frontend).
When using this component in the Loki deployment, the query frontend pushes split-up queries to the query scheduler, which enqueues them in an internal in-memory queue.
There is a queue for each tenant to guarantee query fairness across all tenants.
The queriers that connect to the query scheduler act as workers that pull their jobs from the queue, execute them, and return them to the query frontend for aggregation. Queriers therefore need to be configured with the query scheduler address (via the `-querier.scheduler-address` CLI flag) in order to allow them to connect to the query scheduler.
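A sketch of that wiring on the querier side (assuming the YAML equivalent of the flag is `frontend_worker.scheduler_address`, with a placeholder DNS name):

```yaml
frontend_worker:
  # YAML counterpart of the -querier.scheduler-address CLI flag.
  scheduler_address: query-scheduler.<namespace>.svc.cluster.local:9095
```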
@@ -290,7 +290,7 @@ Query schedulers are **stateless**. However, due to the in-memory queue, it's re
## Querier
-The **querier** service is responsible for executing [Log Query Language (LogQL)]({{< relref "../query" >}}) queries.
+The **querier** service is responsible for executing [Log Query Language (LogQL)](../../query/) queries.
The querier can handle HTTP requests from the client directly (in "single binary" mode, or as part of the read path in "simple scalable deployment")
or pull subqueries from the query frontend or query scheduler (in "microservice" mode).
@@ -306,7 +306,7 @@ timestamp, label set, and log message.
The **index gateway** service is responsible for handling and serving metadata queries.
Metadata queries are queries that look up data from the index. The index gateway is only used by "shipper stores",
-such as [single store TSDB]({{< relref "../operations/storage/tsdb" >}}) or [single store BoltDB]({{< relref "../operations/storage/boltdb-shipper" >}}).
+such as [single store TSDB](../../operations/storage/tsdb/) or [single store BoltDB](../../operations/storage/boltdb-shipper/).
The query frontend queries the index gateway for the log volume of queries so it can make a decision on how to shard the queries.
The queriers query the index gateway for chunk references for a given query so they know which chunks to fetch and query.
@@ -317,14 +317,14 @@ In `ring` mode, index gateways use a consistent hash ring to distribute and shar
## Compactor
-The **compactor** service is used by "shipper stores", such as [single store TSDB]({{< relref "../operations/storage/tsdb" >}})
-or [single store BoltDB]({{< relref "../operations/storage/boltdb-shipper" >}}), to compact the multiple index files produced by the ingesters
+The **compactor** service is used by "shipper stores", such as [single store TSDB](../../operations/storage/tsdb/)
+or [single store BoltDB](../../operations/storage/boltdb-shipper/), to compact the multiple index files produced by the ingesters
and shipped to object storage into single index files per day and tenant. This makes index lookups more efficient.
To do so, the compactor downloads the files from object storage at a regular interval, merges them into a single one,
uploads the newly created index, and cleans up the old files.
-Additionally, the compactor is also responsible for [log retention]({{< relref "../operations/storage/retention" >}}) and [log deletion]({{< relref "../operations/storage/logs-deletion" >}}).
+Additionally, the compactor is also responsible for [log retention](../../operations/storage/retention/) and [log deletion](../../operations/storage/logs-deletion/).
In a Loki deployment, the compactor service is usually run as a single instance.

@@ -30,7 +30,7 @@ Query parallelization is limited by the number of instances and the setting `max
## Simple Scalable
-The simple scalable deployment is the default configuration installed by the [Loki Helm Chart]({{< relref "../setup/install/helm" >}}). This deployment mode is the easiest way to deploy Loki at scale. It strikes a balance between deploying in [monolithic mode](#monolithic-mode) and deploying each component as a [separate microservice](#microservices-mode).
+The simple scalable deployment is the default configuration installed by the [Loki Helm Chart](../../setup/install/helm/). This deployment mode is the easiest way to deploy Loki at scale. It strikes a balance between deploying in [monolithic mode](#monolithic-mode) and deploying each component as a [separate microservice](#microservices-mode).
{{< admonition type="note" >}}
This deployment mode is sometimes referred to by the acronym SSD, for simple scalable deployment, not to be confused with solid state drives. Loki uses an object store.

@@ -351,7 +351,7 @@ The two previous examples use statically defined labels with a single value; how
      __path__: /var/log/apache.log
```
-This regex matches every component of the log line and extracts the value of each component into a capture group. Inside the pipeline code, this data is placed in a temporary data structure that allows use for several purposes during the processing of that log line (at which point that temp data is discarded). Much more detail about this can be found in the [Promtail pipelines]({{< relref "../../send-data/promtail/pipelines" >}}) documentation.
+This regex matches every component of the log line and extracts the value of each component into a capture group. Inside the pipeline code, this data is placed in a temporary data structure that allows use for several purposes during the processing of that log line (at which point that temp data is discarded). Much more detail about this can be found in the [Promtail pipelines](../../send-data/promtail/pipelines/) documentation.
From that regex, we will be using two of the capture groups to dynamically set two labels based on content from the log line itself:

@@ -21,7 +21,7 @@ Too many label value combinations leads to too many streams. The penalties for t
To avoid those issues, don't add a label for something until you know you need it! Use filter expressions (`|= "text"`, `|~ "regex"`, …) and brute force those logs. It works -- and it's fast.
If you often parse a label from a log line at query time, the label has high cardinality, and extracting that label is expensive in terms of performance, consider extracting the label on the client side and
-attaching it as [structured metadata]({{< relref "./structured-metadata" >}}) to log lines.
+attaching it as [structured metadata](../structured-metadata/) to log lines.
From early on, we have set a label dynamically using Promtail pipelines for `level`. This seemed intuitive for us as we often wanted to only show logs for `level="error"`; however, we are re-evaluating this now as writing a query. `{app="loki"} |= "level=error"` is proving to be just as fast for many of our applications as `{app="loki",level="error"}`.
@@ -54,7 +54,7 @@ Loki has several client options: [Grafana Alloy](https://grafana.com/docs/alloy/
Each of these comes with ways to configure what labels are applied to create log streams. But be aware of what dynamic labels might be applied.
Use the Loki series API to get an idea of what your log streams look like and see if there might be ways to reduce streams and cardinality.
-Series information can be queried through the [Series API](https://grafana.com/docs/loki/<LOKI_VERSION>/reference/loki-http-api/), or you can use [logcli]({{< relref "../../query" >}}).
+Series information can be queried through the [Series API](https://grafana.com/docs/loki/<LOKI_VERSION>/reference/loki-http-api/), or you can use [logcli](../../../query/).
In Loki 1.6.0 and newer, the logcli series command added the `--analyze-labels` flag specifically for debugging high cardinality labels:

@@ -24,9 +24,9 @@ A typical Loki-based logging stack consists of 3 components:
- **Agent** - An agent or client, for example Grafana Alloy or Promtail, which is distributed with Loki. The agent scrapes logs, turns the logs into streams by adding labels, and pushes the streams to Loki through an HTTP API.
-- **Loki** - The main server, responsible for ingesting and storing logs and processing queries. It can be deployed in three different configurations; for more information, see [deployment modes]({{< relref "../get-started/deployment-modes" >}}).
+- **Loki** - The main server, responsible for ingesting and storing logs and processing queries. It can be deployed in three different configurations; for more information, see [deployment modes](../deployment-modes/).
-- **[Grafana](https://github.com/grafana/grafana)** for querying and displaying log data. You can also query logs from the command line, using [LogCLI]({{< relref "../query/logcli" >}}) or using the Loki API directly.
+- **[Grafana](https://github.com/grafana/grafana)** for querying and displaying log data. You can also query logs from the command line, using [LogCLI](../../query/logcli/) or using the Loki API directly.
## Loki features
@@ -35,7 +35,7 @@ In its most common deployment, “simple scalable mode”, Loki decouples reques
If needed, each of the Loki components can also be run as microservices designed to run natively within Kubernetes.
- **Multi-tenancy** - Loki allows multiple tenants to share a single Loki instance. With multi-tenancy, the data and requests of each tenant are completely isolated from the others.
-Multi-tenancy is [configured]({{< relref "../operations/multi-tenancy" >}}) by assigning a tenant ID in the agent.
+Multi-tenancy is [configured](../../operations/multi-tenancy/) by assigning a tenant ID in the agent.
- **Third-party integrations** - Several third-party agents (clients) have support for Loki, via plugins. This lets you keep your existing observability setup while also shipping logs to Loki.
@@ -44,10 +44,10 @@ Similarly, the Loki index, because it indexes only the set of labels, is signifi
By leveraging object storage as the only data storage mechanism, Loki inherits the reliability and stability of the underlying object store. It also capitalizes on both the cost efficiency and operational simplicity of object storage over other storage mechanisms like locally attached solid state drives (SSD) and hard disk drives (HDD).
The compressed chunks, smaller index, and use of low-cost object storage make Loki less expensive to operate.
-- **LogQL, the Loki query language** - [LogQL]({{< relref "../query" >}}) is the query language for Loki. Users who are already familiar with the Prometheus query language, [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/), will find LogQL familiar and flexible for generating queries against the logs.
+- **LogQL, the Loki query language** - [LogQL](../../query/) is the query language for Loki. Users who are already familiar with the Prometheus query language, [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/), will find LogQL familiar and flexible for generating queries against the logs.
The language also facilitates the generation of metrics from log data,
a powerful feature that goes well beyond log aggregation.
-- **Alerting** - Loki includes a component called the [ruler]({{< relref "../alert" >}}), which can continually evaluate queries against your logs, and perform an action based on the result. This allows you to monitor your logs for anomalies or events. Loki integrates with [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), or the [alert manager](/docs/grafana/latest/alerting) within Grafana.
+- **Alerting** - Loki includes a component called the [ruler](../../alert/), which can continually evaluate queries against your logs, and perform an action based on the result. This allows you to monitor your logs for anomalies or events. Loki integrates with [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), or the [alert manager](/docs/grafana/latest/alerting) within Grafana.
- **Grafana integration** - Loki integrates with Grafana, Mimir, and Tempo, providing a complete observability stack, and seamless correlation between logs, metrics and traces.

@@ -13,4 +13,4 @@ This section includes the following topics for managing and tuning Loki:
{{< section >}}
-- [Upgrade Loki]({{< relref "../setup/upgrade" >}})
+- [Upgrade Loki](../setup/upgrade/)

@@ -9,7 +9,7 @@ weight:
Grafana Loki does not come with any included authentication layer. Operators are
expected to run an authenticating reverse proxy in front of their services.
-The simple scalable [deployment mode]({{< relref "../get-started/deployment-modes" >}}) requires a reverse proxy to be deployed in front of Loki, to direct client API requests to either the read or write nodes. The Loki Helm chart includes a default reverse proxy configuration, using Nginx.
+The simple scalable [deployment mode](../../get-started/deployment-modes/) requires a reverse proxy to be deployed in front of Loki, to direct client API requests to either the read or write nodes. The Loki Helm chart includes a default reverse proxy configuration, using Nginx.
A list of open-source reverse proxies you can use:
@@ -22,7 +22,7 @@ A list of open-source reverse proxies you can use:
When using Loki in multi-tenant mode, Loki requires the HTTP header
`X-Scope-OrgID` to be set to a string identifying the tenant; the responsibility
of populating this value should be handled by the authenticating reverse proxy.
-For more information, read the [multi-tenancy]({{< relref "./multi-tenancy" >}}) documentation.{{< /admonition >}}
+For more information, read the [multi-tenancy](../multi-tenancy/) documentation.{{< /admonition >}}
For information on authenticating Promtail, see the documentation for [how to
-configure Promtail]({{< relref "../send-data/promtail/configuration" >}}).
+configure Promtail](../../send-data/promtail/configuration/).

@@ -63,4 +63,4 @@ Blocked queries are logged, as well as counted in the `loki_blocked_queries` met
## Scope
-Queries received via the API and executed as [alerting/recording rules]({{< relref "../alert" >}}) will be blocked.
+Queries received via the API and executed as [alerting/recording rules](../../alert/) will be blocked.

@@ -24,7 +24,7 @@ The Loki [mixin](https://github.com/grafana/loki/blob/main/production/loki-mixin
- To install meta-monitoring using the Loki Helm Chart and a local Loki stack, follow [these directions](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/monitor-and-alert/with-local-monitoring/).
-- To install the Loki mixin, follow [these directions]({{< relref "./mixins" >}}).
+- To install the Loki mixin, follow [these directions](mixins/).
You should also plan separately for infrastructure-level monitoring, to monitor the capacity or throughput of your storage provider, for example, or your networking layer.

@@ -4,9 +4,10 @@ menuTitle: Query fairness
description: Describes methods for guaranteeing query fairness across multiple actors within a single tenant using the scheduler.
weight:
---
# Ensure query fairness within tenants using actors
-Loki uses [shuffle sharding]({{< relref "../shuffle-sharding/_index.md" >}})
+Loki uses [shuffle sharding](../shuffle-sharding/)
to minimize impact across tenants in case of querier failures or misbehaving
neighboring tenants.
@@ -19,7 +20,7 @@ In that case, as an operator, you would also want to ensure some sort of query
fairness across these actors within the tenants. An actor could be a Grafana user,
a CLI user, or an application accessing the API. To achieve that, Loki
introduced hierarchical scheduler queues in version 2.9 based on
-[LID 0003: Query fairness across users within tenants]({{< relref "../../community/lids/0003-QueryFairnessInScheduler.md" >}})
+[LID 0003: Query fairness across users within tenants](../../community/lids/0003-queryfairnessinscheduler/)
and they are enabled by default.
## What are hierarchical queues and how do they work
@@ -33,7 +34,7 @@ Tenant queues are the first level of the queue hierarchy. When a tenant
executes a query without any further controls, all of its sub-queries are
enqueued to the first level queue.
The second level of the queue hierarchy is that the tenant can have sub-queues.
Similar to how shuffle sharding assigns queries at the tenant level, each time
the Loki Scheduler makes a round-robin pick at the second level of the query
@@ -99,7 +100,7 @@ or its respective YAML configuration block:
```yaml
query_scheduler:
  max_queue_hierarchy_levels: 2 # defaults to 3
```
It is advised to keep the number of levels reasonable (ideally 1 to 3 levels),

@@ -15,7 +15,7 @@ It is recommended that Loki operators set up alerts or dashboards with these met
### Terminology
-- **sample**: a log line with [structured metadata]({{< relref "../get-started/labels/structured-metadata" >}})
+- **sample**: a log line with [structured metadata](../../get-started/labels/structured-metadata/)
- **stream**: samples with a unique combination of labels
- **active stream**: streams that are present in the ingesters - these have recently received log lines within the `chunk_idle_period` period (default: 30m)

@@ -2,8 +2,9 @@
title: Manage larger production deployments
menuTitle: Scale Loki
description: Describes strategies for how to scale a Loki deployment when log volume increases.
weight:
---
# Manage larger production deployments
When needing to scale Loki due to increased log volume, operators should consider running several Loki processes
@@ -59,9 +60,9 @@ ruler:
      address: dns:///<query-frontend-service>:<grpc-port>
```
-See [`here`](/configuration/#ruler) for further configuration options.
+See [`here`](/docs/loki/<LOKI_VERSION>/configuration/#ruler) for further configuration options.
When you enable remote rule evaluation, the `ruler` component becomes a gRPC client to the `query-frontend` service;
this will result in far lower `ruler` resource usage because the majority of the work has been externalized.
The LogQL queries coming from the `ruler` will be executed against the given `query-frontend` service.
Requests will be load-balanced across all `query-frontend` IPs if the `dns:///` prefix is used.
@@ -77,8 +78,8 @@ Remote rule evaluation can be tuned with the following options:
- `ruler_remote_evaluation_timeout`: maximum allowable execution time for rule evaluations
- `ruler_remote_evaluation_max_response_size`: maximum allowable response size over a gRPC connection from `query-frontend` to `ruler`
-Both of these can be specified globally in the [`limits_config`](/configuration/#limits_config) section
-or on a [per-tenant basis](/configuration/#runtime-configuration-file).
+Both of these can be specified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) section
+or on a [per-tenant basis](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file).
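For example, set globally (the values shown are placeholders, not recommendations):

```yaml
limits_config:
  ruler_remote_evaluation_timeout: 3m
  ruler_remote_evaluation_max_response_size: 10485760  # 10 MiB, in bytes
```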
Remote rule evaluation exposes a number of metrics:

@@ -51,7 +51,7 @@ For more information:
### ⚠ Supported chunk stores, not typically recommended for production use
-- [Filesystem]({{< relref "./filesystem" >}}) (please read more about the filesystem to understand the pros/cons before using with production data)
+- [Filesystem](filesystem/) (please read more about the filesystem to understand the pros/cons before using with production data)
- S3 API compatible storage, such as [MinIO](https://min.io/)
### ❌ Deprecated chunk stores

@@ -30,7 +30,7 @@ maintenance tasks. It consists of:
{{< admonition type="note" >}}
Unlike the other core components of Loki, the chunk store is not a separate
service, job, or process, but rather a library embedded in the two services
-that need to access Loki data: the [ingester]({{< relref "../../get-started/components#ingester" >}}) and [querier]({{< relref "../../get-started/components#querier" >}}).
+that need to access Loki data: the [ingester](../../../get-started/components/#ingester) and [querier](../../../get-started/components/#querier).
{{< /admonition >}}
The chunk store relies on a unified interface to the

@@ -15,7 +15,7 @@ The compactor component exposes REST [endpoints](https://grafana.com/docs/loki/<
Hitting the endpoint specifies the streams and the time window.
The deletion of the log entries takes place after a configurable cancellation time period expires.
-Log entry deletion relies on configuration of the custom logs retention workflow as defined for the [compactor]({{< relref "./retention#compactor" >}}). The compactor looks at unprocessed requests which are past their cancellation period to decide whether a chunk is to be deleted or not.
+Log entry deletion relies on configuration of the custom logs retention workflow as defined for the [compactor](../retention/#compactor). The compactor looks at unprocessed requests which are past their cancellation period to decide whether a chunk is to be deleted or not.
## Configuration

@@ -30,7 +30,7 @@ time range exceeds the retention period.
The Table Manager supports the following backends:
- **Index store**
-  - [Single Store (boltdb-shipper)]({{< relref "../boltdb-shipper" >}})
+  - [Single Store (boltdb-shipper)](../boltdb-shipper/)
  - [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
  - [Google Bigtable](https://cloud.google.com/bigtable)
  - [Apache Cassandra](https://cassandra.apache.org)
@@ -148,7 +148,7 @@ the expected behavior.
{{< /admonition >}}
For detailed information on configuring the retention, refer to the
-[Loki Storage Retention]({{< relref "../retention" >}})
+[Loki Storage Retention](../retention/)
documentation.
## Active / inactive tables
@@ -205,12 +205,12 @@ The Table Manager can be executed in two ways:
### Monolithic mode
-When Loki runs in [monolithic mode]({{< relref "../../../get-started/deployment-modes" >}}),
+When Loki runs in [monolithic mode](../../../get-started/deployment-modes/),
the Table Manager is also started as a component of the entire stack.
### Microservices mode
-When Loki runs in [microservices mode]({{< relref "../../../get-started/deployment-modes" >}}),
+When Loki runs in [microservices mode](../../../get-started/deployment-modes/),
the Table Manager should be started as a separate service named `table-manager`.
You can check out a production grade deployment example at

@@ -6,7 +6,7 @@ weight: 100
---
# Single Store TSDB (tsdb)
-Starting with Loki v2.8, TSDB is the recommended Loki index. It is heavily inspired by Prometheus's TSDB [sub-project](https://github.com/prometheus/prometheus/tree/main/tsdb). For a deeper explanation you can read Loki maintainer Owen's [blog post](https://lokidex.com/posts/tsdb/). The short version is that this new index is more efficient, faster, and more scalable. It also resides in object storage like the [boltdb-shipper]({{< relref "./boltdb-shipper" >}}) index which preceded it.
+Starting with Loki v2.8, TSDB is the recommended Loki index. It is heavily inspired by Prometheus's TSDB [sub-project](https://github.com/prometheus/prometheus/tree/main/tsdb). For a deeper explanation you can read Loki maintainer Owen's [blog post](https://lokidex.com/posts/tsdb/). The short version is that this new index is more efficient, faster, and more scalable. It also resides in object storage like the [boltdb-shipper](../boltdb-shipper/) index which preceded it.
## Example Configuration
@@ -75,7 +75,7 @@ We've added a per-tenant limit called `tsdb_max_query_parallelism` in the `
Previously we would statically shard queries based on the index row shards configured [here](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#period_config).
TSDB does dynamic query sharding based on how much data a query is going to be processing.
-We additionally store the size (KB) and number of lines for each chunk in the TSDB index, which is then used by the [Query Frontend]({{< relref "../../get-started/components#query-frontend" >}}) for planning the query.
+We additionally store the size (KB) and number of lines for each chunk in the TSDB index, which is then used by the [Query Frontend](../../../get-started/components/#query-frontend) for planning the query.
Based on our experience from operating many Loki clusters, we have configured TSDB to aim for processing 300-600 MBs of data per query shard.
This means with TSDB we will be running more, smaller queries.

@ -62,7 +62,7 @@ can have many possible causes.
If you have a reverse proxy in front of Loki, that is, between Loki and Grafana, then check any configured timeouts, such as an NGINX proxy read timeout.
- Other causes. To determine if the issue is related to Loki itself or another system such as Grafana or a client-side error,
attempt to run a [LogCLI]({{< relref "../query/logcli" >}}) query in as direct a manner as you can. For example, if running on virtual machines, run the query on the local machine. If running in a Kubernetes cluster, then port forward the Loki HTTP port, and attempt to run the query there. If you do not get a timeout, then consider these causes:
attempt to run a [LogCLI](../../query/logcli/) query in as direct a manner as you can. For example, if running on virtual machines, run the query on the local machine. If running in a Kubernetes cluster, then port forward the Loki HTTP port, and attempt to run the query there. If you do not get a timeout, then consider these causes:
- Adjust the [Grafana dataproxy timeout](/docs/grafana/latest/administration/configuration/#dataproxy). Configure Grafana with a large enough dataproxy timeout.
- Check timeouts for reverse proxies or load balancers between your client and Grafana. Queries to Grafana are made from your local browser with Grafana serving as a proxy (a dataproxy). Therefore, connections from your client to Grafana must have their timeout configured as well.

@ -15,8 +15,8 @@ LogQL uses labels and operators for filtering.
There are two types of LogQL queries:
- [Log queries]({{< relref "./log_queries" >}}) return the contents of log lines.
- [Metric queries]({{< relref "./metric_queries" >}}) extend log queries to calculate values
- [Log queries](log_queries/) return the contents of log lines.
- [Metric queries](metric_queries/) extend log queries to calculate values
based on query results.
## Binary operators

@ -200,7 +200,7 @@ will always run faster than
Line filter expressions are the fastest way to filter logs once the
log stream selectors have been applied.
Line filter expressions support matching IP addresses. See [Matching IP addresses]({{< relref "../ip" >}}) for details.
Line filter expressions support matching IP addresses. See [Matching IP addresses](../ip/) for details.
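For example, a minimal sketch that keeps only lines containing an address in a given subnet:

```logql
{job="mysql"} |= ip("192.168.4.5/16")
```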
### Removing color codes
@ -240,7 +240,7 @@ Using Duration, Number and Bytes will convert the label value prior to compariso
For instance, `logfmt | duration > 1m and bytes_consumed > 20MB`
If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filter those errors, see the [pipeline errors]({{< relref "..#pipeline-errors" >}}) section.
If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filter those errors, see the [pipeline errors](../#pipeline-errors) section.
You can chain multiple predicates using `and` and `or` which respectively express the `and` and `or` binary operations. `and` can be equivalently expressed by a comma, a space or another pipe. Label filters can be placed anywhere in a log pipeline.
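For example, assuming `duration` and `size` labels extracted by `logfmt`, these three queries are equivalent:

```logql
{job="mysql"} | logfmt | duration > 10s and size > 20MB
{job="mysql"} | logfmt | duration > 10s, size > 20MB
{job="mysql"} | logfmt | duration > 10s | size > 20MB
```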
@ -271,11 +271,11 @@ To evaluate the logical `and` first, use parenthesis, as in this example:
> Label filter expressions are the only expression allowed after the unwrap expression. This is mainly to allow filtering errors from the metric extraction.
Label filter expressions support matching IP addresses. See [Matching IP addresses]({{< relref "../ip" >}}) for details.
Label filter expressions support matching IP addresses. See [Matching IP addresses](../ip/) for details.
### Parser expression
Parser expressions can parse and extract labels from the log content. Those extracted labels can then be used for filtering using [label filter expressions](#label-filter-expression) or for [metric aggregations]({{< relref "../metric_queries" >}}).
Parser expressions can parse and extract labels from the log content. Those extracted labels can then be used for filtering using [label filter expressions](#label-filter-expression) or for [metric aggregations](../metric_queries/).
Extracted label keys are automatically sanitized by all parsers, to follow the Prometheus metric name convention. (They can only contain ASCII letters and digits, as well as underscores and colons. They cannot start with a digit.)
@ -295,7 +295,7 @@ If an extracted label key name already exists in the original log stream, the ex
Loki supports [JSON](#json), [logfmt](#logfmt), [pattern](#pattern), [regexp](#regular-expression) and [unpack](#unpack) parsers.
It's easier to use the predefined parsers `json` and `logfmt` when you can. If you can't, the `pattern` and `regexp` parsers can be used for log lines with an unusual structure. The `pattern` parser is easier and faster to write; it also outperforms the `regexp` parser.
Multiple parsers can be used by a single log pipeline. This is useful for parsing complex logs. There are examples in [Multiple parsers]({{< relref "../query_examples#examples-that-use-multiple-parsers" >}}).
Multiple parsers can be used by a single log pipeline. This is useful for parsing complex logs. There are examples in [Multiple parsers](../query_examples/#examples-that-use-multiple-parsers).
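As a sketch, a hypothetical pipeline could first parse a JSON log line, promote one field to the log line with `line_format`, and then parse that field's logfmt content (the `inner_log` field is illustrative):

```logql
{job="ingress"} | json | line_format "{{.inner_log}}" | logfmt
```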
#### JSON
@ -555,7 +555,7 @@ those labels:
#### unpack
The `unpack` parser parses a JSON log line, unpacking all embedded labels from Promtail's [`pack` stage]({{< relref "../../send-data/promtail/stages/pack.md" >}}).
The `unpack` parser parses a JSON log line, unpacking all embedded labels from Promtail's [`pack` stage](../../send-data/promtail/stages/pack/).
**A special property `_entry` will also be used to replace the original log line**.
For example, using `| unpack` with the log line:
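```json
{ "container": "myapp", "pod": "pod-3223f", "_entry": "original log message" }
```

(The values here are illustrative.) With `| unpack`, `container` and `pod` become extracted labels, and the log line is replaced with `original log message`.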

@ -68,7 +68,7 @@ count_over_time({job="mysql"}[5m]) offset 5m // INVALID
### Unwrapped range aggregations
Unwrapped ranges use extracted labels as sample values instead of log lines. However, to select which label will be used within the aggregation, the log query must end with an unwrap expression and optionally a label filter expression to discard [errors]({{< relref ".#pipeline-errors" >}}).
Unwrapped ranges use extracted labels as sample values instead of log lines. However, to select which label will be used within the aggregation, the log query must end with an unwrap expression and optionally a label filter expression to discard [errors](./#pipeline-errors).
The unwrap expression is written as `| unwrap label_identifier`, where the label identifier is the name of the label to use for extracting sample values.
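For example, the following sketch (assuming a `duration` label extracted by `logfmt`) computes a per-path p99 while discarding unwrap errors:

```logql
quantile_over_time(0.99,
  {job="api"} | logfmt | __error__ = "" | unwrap duration [1m]
) by (path)
```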
@ -104,7 +104,7 @@ Which can be used to aggregate over distinct labels dimensions by including a `w
`without` removes the listed labels from the result vector, while all other labels are preserved in the output. `by` does the opposite and drops labels that are not listed in the `by` clause, even if their label values are identical between all elements of the vector.
See [Unwrap examples]({{< relref "./query_examples#unwrap-examples" >}}) for query examples that use the unwrap expression.
See [Unwrap examples](../query_examples/#unwrap-examples) for query examples that use the unwrap expression.
## Built-in aggregation operators
@ -135,7 +135,7 @@ The aggregation operators can either be used to aggregate over all label values
The `without` clause removes the listed labels from the resulting vector, keeping all others.
The `by` clause does the opposite, dropping labels that are not listed in the clause, even if their label values are identical between all elements of the vector.
See [vector aggregation examples]({{< relref "./query_examples#vector-aggregation-examples" >}}) for query examples that use vector aggregation expressions.
See [vector aggregation examples](../query_examples/#vector-aggregation-examples) for query examples that use vector aggregation expressions.
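For example, a sketch that sums per-status request rates, assuming a `status` label extracted by the `json` parser:

```logql
sum by (status) (rate({job="nginx"} | json [5m]))
```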
## Functions

@ -26,12 +26,12 @@ These endpoints are exposed by the `distributor`, `write`, and `all` components:
- [`POST /loki/api/v1/push`](#ingest-logs)
- [`POST /otlp/v1/logs`](#ingest-logs-using-otlp)
A [list of clients]({{< relref "../send-data" >}}) can be found in the clients documentation.
A [list of clients](../../send-data/) can be found in the clients documentation.
### Query endpoints
{{< admonition type="note" >}}
Requests sent to the query endpoints must use valid LogQL syntax. For more information, see the [LogQL]({{< relref "../query" >}}) section of the documentation.
Requests sent to the query endpoints must use valid LogQL syntax. For more information, see the [LogQL](../../query/) section of the documentation.
{{< /admonition >}}
These HTTP endpoints are exposed by the `querier`, `query-frontend`, `read`, and `all` components:
@ -238,7 +238,7 @@ Alternatively, if the `Content-Type` header is set to `application/json`, a JSON
You can set the `Content-Encoding: gzip` request header and post gzipped JSON.
You can optionally attach [structured metadata]({{< relref "../get-started/labels/structured-metadata" >}}) to each log line by adding a JSON object to the end of the log line array.
You can optionally attach [structured metadata](../../get-started/labels/structured-metadata/) to each log line by adding a JSON object to the end of the log line array.
The JSON object must be a valid JSON object with string keys and string values. It should not contain any nested objects.
The JSON object must be set immediately after the log line. Here is an example of a log entry with some structured metadata attached:
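```json
[ "1737981600000000000", "GET /api/v1/query 200", { "trace_id": "0242ac120002" } ]
```

(The timestamp, log line, and `trace_id` key above are illustrative.)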
@ -290,7 +290,7 @@ This type of query is often referred to as an instant query. Instant queries are
and will return a 400 (Bad Request) if a log type query is provided.
The endpoint accepts the following query parameters in the URL:
- `query`: The [LogQL]({{< relref "../query" >}}) query to perform. Requests that do not use valid LogQL syntax will return errors.
- `query`: The [LogQL](../../query/) query to perform. Requests that do not use valid LogQL syntax will return errors.
- `limit`: The max number of entries to return. It defaults to `100`. Only applies to query types which produce a stream (log lines) response.
- `time`: The evaluation time for the query as a nanosecond Unix epoch or another [supported format](#timestamps). Defaults to now.
- `direction`: Determines the sort order of logs. Supported values are `forward` or `backward`. Defaults to `backward`.
@ -416,7 +416,7 @@ gave this response:
```
If your cluster has
[Grafana Loki Multi-Tenancy]({{< relref "../operations/multi-tenancy" >}}) enabled,
[Grafana Loki Multi-Tenancy](../../operations/multi-tenancy/) enabled,
set the `X-Scope-OrgID` header to identify the tenant you want to query.
Here is the same example query for the single tenant called `Tenant1`:
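A sketch of such a request (host, port, and query are illustrative):

```bash
curl -H "X-Scope-OrgID: Tenant1" \
  -G -s "http://localhost:3100/loki/api/v1/query" \
  --data-urlencode 'query=sum(rate({job="varlogs"}[10m])) by (level)'
```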
@ -465,7 +465,7 @@ GET /loki/api/v1/query_range
This type of query is often referred to as a range query. Range queries are used for both log and metric type LogQL queries.
It accepts the following query parameters in the URL:
- `query`: The [LogQL]({{< relref "../query" >}}) query to perform.
- `query`: The [LogQL](../../query/) query to perform.
- `limit`: The max number of entries to return. It defaults to `100`. Only applies to query types which produce a stream (log lines) response.
- `start`: The start time for the query as a nanosecond Unix epoch or another [supported format](#timestamps). Defaults to one hour ago. Loki returns results with timestamp greater than or equal to this value.
- `end`: The end time for the query as a nanosecond Unix epoch or another [supported format](#timestamps). Defaults to now. Loki returns results with timestamp less than this value.
@ -850,7 +850,7 @@ The `/loki/api/v1/index/stats` endpoint can be used to query the index for the n
URL query parameters:
- `query`: The [LogQL]({{< relref "../query" >}}) matchers to check (that is, `{job="foo", env!="dev"}`)
- `query`: The [LogQL](../../query/) matchers to check (that is, `{job="foo", env!="dev"}`)
- `start=<nanosecond Unix epoch>`: Start timestamp.
- `end=<nanosecond Unix epoch>`: End timestamp.
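A sketch of calling this endpoint (host and timestamps are illustrative):

```bash
curl -G -s "http://localhost:3100/loki/api/v1/index/stats" \
  --data-urlencode 'query={job="foo"}' \
  --data-urlencode 'start=1731158400000000000' \
  --data-urlencode 'end=1731244800000000000'
```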
@ -897,7 +897,7 @@ The other way to change aggregations is with the `aggregateBy` parameter. The de
URL query parameters:
- `query`: The [LogQL]({{< relref "../query" >}}) matchers to check (that is, `{job="foo", env=~".+"}`). This parameter is required.
- `query`: The [LogQL](../../query/) matchers to check (that is, `{job="foo", env=~".+"}`). This parameter is required.
- `start=<nanosecond Unix epoch>`: Start timestamp. This parameter is required.
- `end=<nanosecond Unix epoch>`: End timestamp. This parameter is required.
- `limit`: How many metric series to return. The parameter is optional, the default is `100`.
@ -945,7 +945,7 @@ ts=<_> caller=grpc_logging.go:66 level=info method=/cortex.Ingester/Push duratio
URL query parameters:
- `query`: The [LogQL]({{< relref "../query" >}}) matchers to check (that is, `{job="foo", env=~".+"}`). This parameter is required.
- `query`: The [LogQL](../../query/) matchers to check (that is, `{job="foo", env=~".+"}`). This parameter is required.
- `start=<nanosecond Unix epoch>`: Start timestamp. This parameter is required.
- `end=<nanosecond Unix epoch>`: End timestamp. This parameter is required.
- `step=<duration string or float number of seconds>`: Step between samples for occurrences of this pattern. This parameter is optional.
@ -1004,7 +1004,7 @@ gave this response:
```
The result is a list of patterns detected in the logs, with the number of samples for each pattern at each timestamp.
The pattern format is the same as the [LogQL]({{< relref "../query" >}}) pattern filter and parser and can be used in queries for filtering matching logs.
The pattern format is the same as the [LogQL](../../query/) pattern filter and parser and can be used in queries for filtering matching logs.
Each sample is a tuple of timestamp (in seconds) and count.
## Stream logs
@ -1016,7 +1016,7 @@ GET /loki/api/v1/tail
`/loki/api/v1/tail` is a WebSocket endpoint that streams log messages based on a query to the client.
It accepts the following query parameters in the URL:
- `query`: The [LogQL]({{< relref "../query" >}}) query to perform.
- `query`: The [LogQL](../../query/) query to perform.
- `delay_for`: The number of seconds to delay retrieving logs to let slow
loggers catch up. Defaults to 0 and cannot be larger than 5.
- `limit`: The max number of entries to return. It defaults to `100`.
@ -1086,7 +1086,7 @@ GET /metrics
```
`/metrics` returns exposed Prometheus metrics. See
[Observing Loki]({{< relref "../operations/meta-monitoring" >}})
[Observing Loki](../../operations/meta-monitoring/)
for a list of exported metrics.
In microservices mode, the `/metrics` endpoint is exposed by all components.
@ -1382,7 +1382,7 @@ PUT /loki/api/v1/delete
```
Create a new delete request for the authenticated tenant.
The [log entry deletion]({{< relref "../operations/storage/logs-deletion" >}}) documentation has configuration details.
The [log entry deletion](../../operations/storage/logs-deletion/) documentation has configuration details.
Log entry deletion is supported _only_ when TSDB or BoltDB Shipper is configured for the index store.
@ -1422,7 +1422,7 @@ GET /loki/api/v1/delete
```
List the existing delete requests for the authenticated tenant.
The [log entry deletion]({{< relref "../operations/storage/logs-deletion" >}}) documentation has configuration details.
The [log entry deletion](../../operations/storage/logs-deletion/) documentation has configuration details.
Log entry deletion is supported _only_ when TSDB or BoltDB Shipper is configured for the index store.
@ -1459,7 +1459,7 @@ DELETE /loki/api/v1/delete
```
Remove a delete request for the authenticated tenant.
The [log entry deletion]({{< relref "../operations/storage/logs-deletion" >}}) documentation has configuration details.
The [log entry deletion](../../operations/storage/logs-deletion/) documentation has configuration details.
Loki allows cancellation of delete requests until the requests are picked up for processing. It is controlled by the `delete_request_cancel_period` YAML configuration or the equivalent command line option when invoking Loki. To cancel a delete request that has been picked up for processing or is partially complete, pass the `force=true` query parameter to the API.
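For example, a sketch of force-cancelling a partially processed request (the `request_id` and tenant are illustrative):

```bash
curl -X DELETE \
  -H "X-Scope-OrgID: Tenant1" \
  "http://localhost:3100/loki/api/v1/delete?request_id=1234&force=true"
```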

@ -16,15 +16,15 @@ Some parts of the Loki repo will remain Apache-2.0 licensed (mainly clients and
## Features and enhancements
* Loki now has the ability to apply [custom retention]({{< relref "../operations/storage/retention" >}}) based on stream selectors! This will allow much finer control over log retention, all of which is now handled by Loki, no longer requiring the use of object store configs for retention.
* Coming along hand in hand with storing logs for longer durations is the ability to [delete log streams]({{< relref "../operations/storage/logs-deletion" >}}). The initial implementation lets you submit delete request jobs which will be processed after 24 hours.
* A very exciting new LogQL parser has been introduced: the [pattern parser]({{< relref "../query/log_queries#parser-expression" >}}). Much simpler and faster than regexp for log lines that have a little bit of structure to them, such as the [Common Log Format](https://en.wikipedia.org/wiki/Common_Log_Format). This is now Loki's fastest parser, so try it out on any of your log lines!
* Extending on the work of Alerting Rules, Loki now accepts [recording rules]({{< relref "../alert#recording-rules" >}}). This lets you turn your logs into metrics and push them to Prometheus or any Prometheus-compatible remote_write endpoint.
* LogQL can understand [IP addresses]({{< relref "../query/ip" >}})! This enables filtering on IP addresses and subnet ranges.
* Loki now has the ability to apply [custom retention](../../operations/storage/retention/) based on stream selectors! This will allow much finer control over log retention, all of which is now handled by Loki, no longer requiring the use of object store configs for retention.
* Coming along hand in hand with storing logs for longer durations is the ability to [delete log streams](../../operations/storage/logs-deletion/). The initial implementation lets you submit delete request jobs which will be processed after 24 hours.
* A very exciting new LogQL parser has been introduced: the [pattern parser](../../query/log_queries/#parser-expression). Much simpler and faster than regexp for log lines that have a little bit of structure to them, such as the [Common Log Format](https://en.wikipedia.org/wiki/Common_Log_Format). This is now Loki's fastest parser, so try it out on any of your log lines! (See the example after this list.)
* Extending on the work of Alerting Rules, Loki now accepts [recording rules](../../alert/#recording-rules). This lets you turn your logs into metrics and push them to Prometheus or any Prometheus-compatible remote_write endpoint.
* LogQL can understand [IP addresses](../../query/ip/)! This enables filtering on IP addresses and subnet ranges.
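As a sketch, a pattern-parser query over Common Log Format lines might look like this (label names are illustrative):

```logql
{job="nginx"} | pattern `<ip> - <user> [<_>] "<method> <uri> <_>" <status> <size>`
```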
For those of you running Loki as microservices, the following features will improve performance significantly for many operations.
* We created an [index gateway]({{< relref "../operations/storage/boltdb-shipper#index-gateway" >}}) which takes on the task of downloading the boltdb-shipper index files, allowing you to run your queriers without any local disk requirements. This is really helpful in Kubernetes environments, where you can return your queriers from StatefulSets back to Deployments and save a lot of PVC costs and operational headaches.
* We created an [index gateway](../../operations/storage/boltdb-shipper/#index-gateway) which takes on the task of downloading the boltdb-shipper index files, allowing you to run your queriers without any local disk requirements. This is really helpful in Kubernetes environments, where you can return your queriers from StatefulSets back to Deployments and save a lot of PVC costs and operational headaches.
* Ingester queries [are now shardable](https://github.com/grafana/loki/pull/3852), which is a significant performance boost for high-volume log streams when querying recent data.
* Instant queries can now be [split and sharded](https://github.com/grafana/loki/pull/3984) making them just as fast as range queries.
@ -42,7 +42,7 @@ Lastly several useful additions to the LogQL query language have been included:
## Upgrade considerations
The path from 2.2.1 to 2.3.0 should be smooth; as always, read the [Upgrade Guide]({{< relref "../setup/upgrade#230" >}}) for important upgrade guidance.
The path from 2.2.1 to 2.3.0 should be smooth; as always, read the [Upgrade Guide](../../setup/upgrade/#230) for important upgrade guidance.
One change we consider noteworthy however is:

@ -14,12 +14,12 @@ Loki 2.4 focuses on two items:
## Features and enhancements
* [**Loki no longer requires logs to be sent in perfect chronological order.**](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#accept-out-of-order-writes) Support for out of order logs is one of the most highly requested features for Loki. The strict ordering constraint has been removed.
* Scaling Loki is now easier with a hybrid deployment mode that falls between our single binary and our microservices. The [Simple scalable deployment]({{< relref "../get-started/deployment-modes" >}}) scales Loki with new `read` and `write` targets. Where previously you would have needed Kubernetes and the microservices approach to start tapping into Loki’s potential, it’s now possible to do this in a simpler way.
* Scaling Loki is now easier with a hybrid deployment mode that falls between our single binary and our microservices. The [Simple scalable deployment](../../get-started/deployment-modes/) scales Loki with new `read` and `write` targets. Where previously you would have needed Kubernetes and the microservices approach to start tapping into Loki’s potential, it’s now possible to do this in a simpler way.
* The new [`common` section](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#common) results in a 70% smaller Loki configuration. Pair that with updated defaults and Loki comes out of the box with more appropriate defaults and limits. Check out the [example local configuration](https://github.com/grafana/loki/blob/main/cmd/loki/loki-local-config.yaml) as the new reference for running Loki.
* [**Recording rules**]({{< relref "../alert#recording-rules" >}}) are no longer an experimental feature. We've given them a more resilient implementation which leverages the existing write ahead log code in Prometheus.
* The new [**Promtail Kafka Consumer**]({{< relref "../send-data/promtail/scraping#kafka" >}}) can easily get your logs out of Kafka and into Loki.
* There are **nice LogQL enhancements**, thanks to the amazing Loki community. LogQL now has [group_left and group_right]({{< relref "../query#many-to-one-and-one-to-many-vector-matches" >}}). And, the `label_format` and `line_format` functions now support [working with dates and times]({{< relref "../query/template_functions#now" >}}).
* Another great community contribution allows Promtail to [**accept ndjson and plaintext log files over HTTP**]({{< relref "../send-data/promtail/configuration#loki_push_api" >}}).
* [**Recording rules**](../../alert/#recording-rules) are no longer an experimental feature. We've given them a more resilient implementation which leverages the existing write ahead log code in Prometheus.
* The new [**Promtail Kafka Consumer**](../../send-data/promtail/scraping/#kafka) can easily get your logs out of Kafka and into Loki.
* There are **nice LogQL enhancements**, thanks to the amazing Loki community. LogQL now has [group_left and group_right](../../query/#many-to-one-and-one-to-many-vector-matches). And, the `label_format` and `line_format` functions now support [working with dates and times](../../query/template_functions/#now).
* Another great community contribution allows Promtail to [**accept ndjson and plaintext log files over HTTP**](../../send-data/promtail/configuration/#loki_push_api).
All in all, about 260 PRs went into Loki 2.4, and we thank everyone for helping us make the best Loki yet.
@ -27,7 +27,7 @@ For a full list of all changes, look at the [CHANGELOG](https://github.com/grafa
## Upgrade Considerations
Please read the [upgrade guide]({{< relref "../setup/upgrade#240" >}}) before updating Loki.
Please read the [upgrade guide](../../setup/upgrade/#240) before updating Loki.
We made a lot of changes to Loki’s configuration as part of this release.
We have tried our best to make sure changes are compatible with existing configurations; however, some changes to default limits may impact users who didn't have values explicitly set for these limits in their configuration files.

@ -25,7 +25,7 @@ For a full list of all changes, look at the [CHANGELOG](https://github.com/grafa
## Upgrade Considerations
As always, please read the [upgrade guide]({{< relref "../setup/upgrade#250" >}}) before upgrading Loki.
As always, please read the [upgrade guide](../../setup/upgrade/#250) before upgrading Loki.
### Changes to the config `split_queries_by_interval`
The most likely impact many people will see is Loki failing to start because of a change in the YAML configuration for `split_queries_by_interval`. It was previously possible to define this value in two places.

@ -10,8 +10,8 @@ Grafana Labs is excited to announce the release of Loki 2.6. Here's a summary of
## Features and enhancements
- **Query multiple tenants at once.** We've introduced cross-tenant query federation, which allows you to issue one query to multiple tenants and get a single, consolidated result. This is great for scenarios where you need a global view of logs within your multi-tenant cluster. For more information on how to enable this feature, see [Multi-Tenancy]({{< relref "../operations/multi-tenancy.md" >}}).
- **Filter out and delete certain log lines from query results.** This is particularly useful in cases where users may accidentally write sensitive information to Loki that they do not want exposed. Users craft a LogQL query that selects the specific lines they're interested in, and then can choose to either filter out those lines from query results, or permanently delete them from Loki's storage. For more information, see [Logs Deletion]({{< relref "../operations/storage/logs-deletion.md" >}}).
- **Query multiple tenants at once.** We've introduced cross-tenant query federation, which allows you to issue one query to multiple tenants and get a single, consolidated result. This is great for scenarios where you need a global view of logs within your multi-tenant cluster. For more information on how to enable this feature, see [Multi-Tenancy](../../operations/multi-tenancy/).
- **Filter out and delete certain log lines from query results.** This is particularly useful in cases where users may accidentally write sensitive information to Loki that they do not want exposed. Users craft a LogQL query that selects the specific lines they're interested in, and then can choose to either filter out those lines from query results, or permanently delete them from Loki's storage. For more information, see [Logs Deletion](../../operations/storage/logs-deletion/).
- **Improved query performance on instant queries.** Loki now splits instant queries with a large time range (for example, `sum(rate({app="foo"}[6h]))`) into several smaller sub-queries and executes them in parallel. Users don't need to take any action to enjoy this performance improvement; however, they can adjust the number of sub-queries generated by modifying the `split_queries_by_interval` configuration parameter, which currently defaults to `30m`.
- **Support Baidu AI Cloud as a storage backend.** Loki users can now use Baidu Object Storage (BOS) as their storage backend. See [bos_storage_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/) for details.
@ -19,7 +19,7 @@ For a full list of all changes, look at the [CHANGELOG](https://github.com/grafa
## Upgrade Considerations
As always, please read the [upgrade guide]({{< relref "../setup/upgrade#260" >}}) before upgrading Loki.
As always, please read the [upgrade guide](../../setup/upgrade/#260) before upgrading Loki.
## Bug fixes

@ -14,7 +14,7 @@ Grafana Labs is excited to announce the release of Loki 2.7. Here's a summary of
- **Better Support for Azure Blob Storage** thanks to the ability to use Azure's Service Principal Credentials.
- **Logs can now be pushed from the Loki canary** so you don't have to rely on a scraping service to use the canary.
- **Additional `label_format` fields** `__timestamp__` and `__line__`.
- **`fifocache` has been renamed** The in-memory `fifocache` has been renamed to `embedded-cache`. Check the [upgrade guide]({{< relref "../setup/upgrade#270" >}}) for more details.
- **`fifocache` has been renamed** The in-memory `fifocache` has been renamed to `embedded-cache`. Check the [upgrade guide](../../setup/upgrade/#270) for more details.
- **New HTTP endpoint for Ingester shutdown** that will also delete the ring token.
- **Faster label queries** thanks to new parallelization.
- **Introducing Stream Sharding**, an experimental new feature to help deal with very large streams.
@ -30,7 +30,7 @@ For a full list of all, look at the [CHANGELOG](https://github.com/grafana/loki/
## Upgrade Considerations
As always, please read the [upgrade guide]({{< relref "../setup/upgrade#270" >}}) before upgrading Loki.
As always, please read the [upgrade guide](../../setup/upgrade/#270) before upgrading Loki.
## Bug fixes

@ -18,7 +18,7 @@ For a full list of all changes, look at the [CHANGELOG](https://github.com/grafa
## Upgrade Considerations
As always, please read the [upgrade guide]({{< relref "../setup/upgrade#270" >}}) before upgrading Loki.
As always, please read the [upgrade guide](../../setup/upgrade/#270) before upgrading Loki.
## Bug fixes

@ -58,7 +58,7 @@ Here is a non-exhaustive list of components that can be used to build a log pipe
To learn more about how to configure Alloy to send logs to Loki within different scenarios, follow these interactive tutorials:
- [Sending OpenTelemetry logs to Loki using Alloy]({{< relref "./examples/alloy-otel-logs" >}})
- [Sending logs over Kafka to Loki using Alloy]({{< relref "./examples/alloy-kafka-logs" >}})
- [Sending OpenTelemetry logs to Loki using Alloy](examples/alloy-otel-logs/)
- [Sending logs over Kafka to Loki using Alloy](examples/alloy-kafka-logs/)

@ -13,7 +13,7 @@ each container will use the default driver unless configured otherwise.
## Installation
Before configuring the plugin, [install or upgrade the Grafana Loki Docker Driver Client]({{< relref "../docker-driver" >}}).
Before configuring the plugin, [install or upgrade the Grafana Loki Docker Driver Client](../).
## Change the logging driver for a container
@ -110,7 +110,7 @@ Stack name and service name for each swarm service and project name and service
## Labels
Loki can receive a set of labels along with each log line. These labels are used to index log entries and query back logs using the [LogQL stream selector]({{< relref "../../query/log_queries#log-stream-selector" >}}).
Loki can receive a set of labels along with each log line. These labels are used to index log entries and query back logs using the [LogQL stream selector](../../../query/log_queries/#log-stream-selector).
By default, the Docker driver will add the following labels to each log line:
@ -215,8 +215,8 @@ To specify additional logging driver options, you can use the --log-opt NAME=VAL
| `loki-min-backoff` | No | `500ms` | The minimum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-max-backoff` | No | `5m` | The maximum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-retries` | No | `10` | The maximum amount of retries for a log batch. Setting it to `0` will retry indefinitely. |
| `loki-pipeline-stage-file` | No | | The location of a pipeline stage configuration file ([example](https://github.com/grafana/loki/blob/main/clients/cmd/docker-driver/pipeline-example.yaml)). Pipeline stages allow you to parse log lines to extract more labels; [see the associated documentation]({{< relref "../../send-data/promtail/stages" >}}). |
| `loki-pipeline-stages` | No | | The pipeline stage configuration provided as a string; [see pipeline stages](#pipeline-stages) and the [associated documentation]({{< relref "../../send-data/promtail/stages" >}}). |
| `loki-pipeline-stage-file` | No | | The location of a pipeline stage configuration file ([example](https://github.com/grafana/loki/blob/main/clients/cmd/docker-driver/pipeline-example.yaml)). Pipeline stages allow you to parse log lines to extract more labels; [see the associated documentation](../../promtail/stages/). |
| `loki-pipeline-stages` | No | | The pipeline stage configuration provided as a string; [see pipeline stages](#pipeline-stages) and the [associated documentation](../../promtail/stages/). |
| `loki-relabel-config` | No | | A [Prometheus relabeling configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) allowing you to rename labels [see relabeling](#relabeling). |
| `loki-tenant-id` | No | | Set the tenant id (HTTP header `X-Scope-OrgID`) when sending logs to Loki. It can be overridden by a pipeline stage. |
| `loki-tls-ca-file` | No | | Set the path to a custom certificate authority. |

@ -32,7 +32,7 @@ The Docker image `grafana/fluent-plugin-loki:main` contains [default configurati
This image also uses `LOKI_URL`, `LOKI_USERNAME`, and `LOKI_PASSWORD` environment variables to specify the Loki endpoint, user, and password (you can leave the USERNAME and PASSWORD blank if they're not used).
This image starts an instance of Fluentd that forwards incoming logs to the specified Loki URL. As an alternative, containerized applications can also use the [Docker driver plugin]({{< relref "../docker-driver" >}}) to ship logs without needing Fluentd.
This image starts an instance of Fluentd that forwards incoming logs to the specified Loki URL. As an alternative, containerized applications can also use the [Docker driver plugin](../docker-driver/) to ship logs without needing Fluentd.
### Example

@ -9,7 +9,7 @@ weight: 700
# Lambda Promtail client
Grafana Loki includes [Terraform](https://www.terraform.io/) and [CloudFormation](https://aws.amazon.com/cloudformation/) for shipping CloudWatch, CloudTrail, VPC Flow Logs, and load balancer logs to Loki via a [lambda function](https://aws.amazon.com/lambda/). This is done via [lambda-promtail](https://github.com/grafana/loki/blob/main/tools/lambda-promtail) which processes CloudWatch events and propagates them to Loki (or a Promtail instance) via the push-api [scrape config]({{< relref "../../send-data/promtail/configuration#loki_push_api" >}}).
Grafana Loki includes [Terraform](https://www.terraform.io/) and [CloudFormation](https://aws.amazon.com/cloudformation/) for shipping CloudWatch, CloudTrail, VPC Flow Logs, and load balancer logs to Loki via a [lambda function](https://aws.amazon.com/lambda/). This is done via [lambda-promtail](https://github.com/grafana/loki/blob/main/tools/lambda-promtail) which processes CloudWatch events and propagates them to Loki (or a Promtail instance) via the push-api [scrape config](../promtail/configuration/#loki_push_api).
## Deployment
@ -91,7 +91,7 @@ If using CloudFormation to write your infrastructure code, you should consider t
### Ephemeral Jobs
This workflow is intended to be an effective approach for monitoring ephemeral jobs such as those run on AWS Lambda, which are otherwise hard or impossible to monitor via one of the other Loki [clients]({{< relref ".." >}}).
This workflow is intended to be an effective approach for monitoring ephemeral jobs such as those run on AWS Lambda, which are otherwise hard or impossible to monitor via one of the other Loki [clients](../).
Ephemeral jobs can quite easily run afoul of cardinality best practices. During high request load, an AWS Lambda function might balloon in concurrency, creating many log streams in CloudWatch. For this reason, lambda-promtail defaults to **not** keeping the log stream value as a label when propagating the logs to Loki. This is only possible because new versions of Loki no longer have an ingestion ordering constraint on logs within a single stream.
@ -151,7 +151,7 @@ aws cloudformation create-stack \
## Propagated Labels
Incoming logs can have seven special labels assigned to them which can be used in [relabeling]({{< relref "../../send-data/promtail/configuration#relabel_configs" >}}) or later stages in a Promtail [pipeline]({{< relref "../../send-data/promtail/pipelines" >}}):
Incoming logs can have seven special labels assigned to them which can be used in [relabeling](../promtail/configuration/#relabel_configs) or later stages in a Promtail [pipeline](../promtail/pipelines/):
- `__aws_log_type`: Where this log came from (CloudWatch, Kinesis, or S3).
- `__aws_cloudwatch_log_group`: The associated CloudWatch Log Group for this log.

@ -239,7 +239,7 @@ An array of fields which will be mapped to labels and sent to Loki, when this li
#### metadata_fields
An array of fields which will be mapped to [structured metadata]({{< relref "../../get-started/labels/structured-metadata.md" >}}) and sent to Loki for each log line.
An array of fields which will be mapped to [structured metadata](../../get-started/labels/structured-metadata/) and sent to Loki for each log line.
#### batch_wait

@ -16,7 +16,7 @@ For ingesting logs to Loki using the OpenTelemetry Collector, you must use the [
## Loki configuration
When logs are ingested by Loki using an OpenTelemetry protocol (OTLP) ingestion endpoint, some of the data is stored as [Structured Metadata]({{< relref "../../get-started/labels/structured-metadata" >}}).
When logs are ingested by Loki using an OpenTelemetry protocol (OTLP) ingestion endpoint, some of the data is stored as [Structured Metadata](../../get-started/labels/structured-metadata/).
You must set `allow_structured_metadata` to `true` within your Loki config file. Otherwise, Loki will reject the log payload as malformed. Note that Structured Metadata is enabled by default in Loki 3.0 and later.
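A minimal sketch of that setting in the Loki config:

```yaml
limits_config:
  allow_structured_metadata: true
```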
@ -74,7 +74,7 @@ service:
Since the OpenTelemetry protocol differs from the Loki storage model, here is how data in the OpenTelemetry format will be mapped by default to the Loki data model during ingestion, which can be changed as explained later:
- Index labels: Resource attributes map well to index labels in Loki, since both usually identify the source of the logs. The default list of Resource Attributes to store as Index labels can be configured using `default_resource_attributes_as_index_labels` under [distributor's otlp_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#distributor). By default, the following resource attributes will be stored as index labels, while the remaining attributes are stored as [Structured Metadata]({{< relref "../../get-started/labels/structured-metadata" >}}) with each log entry:
- Index labels: Resource attributes map well to index labels in Loki, since both usually identify the source of the logs. The default list of Resource Attributes to store as Index labels can be configured using `default_resource_attributes_as_index_labels` under [distributor's otlp_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#distributor). By default, the following resource attributes will be stored as index labels, while the remaining attributes are stored as [Structured Metadata](../../get-started/labels/structured-metadata/) with each log entry:
- cloud.availability_zone
- cloud.region
- container.name
@ -101,7 +101,7 @@ Since the OpenTelemetry protocol differs from the Loki storage model, here is ho
- LogLine: `LogRecord.Body` holds the body of the log. However, since Loki only supports Log body in string format, we will stringify non-string values using the [AsString method from the OTel collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353).
- [Structured Metadata]({{< relref "../../get-started/labels/structured-metadata" >}}): Anything which can’t be stored in Index labels and LogLine would be stored as Structured Metadata. Here is a non-exhaustive list of what will be stored in Structured Metadata to give a sense of what it will hold:
- [Structured Metadata](../../get-started/labels/structured-metadata/): Anything which can’t be stored in Index labels and LogLine would be stored as Structured Metadata. Here is a non-exhaustive list of what will be stored in Structured Metadata to give a sense of what it will hold:
- Resource Attributes not stored as Index labels are replicated and stored with each log entry.
- Everything under InstrumentationScope is replicated and stored with each log entry.
- Everything under LogRecord except `LogRecord.Body`, `LogRecord.TimeUnixNano` and sometimes `LogRecord.ObservedTimestamp`.

@ -44,7 +44,7 @@ Kubernetes API server while `static` usually covers all other use cases.
Just like Prometheus, `promtail` is configured using a `scrape_configs` stanza.
`relabel_configs` allows for fine-grained control of what to ingest, what to
drop, and the final metadata to attach to the log line. Refer to the docs for
[configuring Promtail]({{< relref "./configuration" >}}) for more details.
[configuring Promtail](configuration/) for more details.
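As a minimal sketch (path and labels are illustrative), a static file scrape config looks like this:

```yaml
scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          # Glob of files to tail on the local host.
          __path__: /var/log/*.log
```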
### Support for compressed files
@ -118,12 +118,12 @@ There are a few instances where this might be helpful:
## Receiving logs From Syslog
When the [Syslog Target]({{< relref "./configuration#syslog" >}}) is being used, logs
When the [Syslog Target](configuration/#syslog) is being used, logs
can be written with the syslog protocol to the configured port.
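A sketch of a syslog scrape config (the listen address and label mapping are illustrative):

```yaml
scrape_configs:
  - job_name: syslog
    syslog:
      listen_address: 0.0.0.0:1514
      labels:
        job: syslog
    relabel_configs:
      # Promote the syslog hostname into a `host` label.
      - source_labels: ['__syslog_message_hostname']
        target_label: host
```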
## AWS
If you need to run Promtail on Amazon Web Services EC2 instances, you can use our [detailed tutorial]({{< relref "./cloud/ec2" >}}).
If you need to run Promtail on Amazon Web Services EC2 instances, you can use our [detailed tutorial](cloud/ec2/).
## Labeling and parsing
@ -136,7 +136,7 @@ To allow more sophisticated filtering afterwards, Promtail allows to set labels
not only from service discovery, but also based on the contents of each log
line. The `pipeline_stages` can be used to add or update labels, correct the
timestamp, or re-write log lines entirely. Refer to the documentation for
[pipelines]({{< relref "./pipelines" >}}) for more details.
[pipelines](pipelines/) for more details.
## Shipping
@ -162,7 +162,7 @@ This endpoint returns 200 when Promtail is up and running, and there's at least
### `GET /metrics`
This endpoint returns Promtail metrics for Prometheus. Refer to
[Observing Grafana Loki]({{< relref "../../operations/meta-monitoring" >}}) for the list
[Observing Grafana Loki](../../operations/meta-monitoring/) for the list
of exported metrics.
### Promtail web server config

@ -12,8 +12,8 @@ weight: 300
Sending logs from cloud services to Grafana Loki is a little different depending on the AWS service you are using. The following tutorials walk you through configuring cloud services to send logs to Loki.
- [Amazon Elastic Compute Cloud (EC2)]({{< relref "./ec2" >}})
- [Amazon Elastic Container Service (ECS)]({{< relref "./ecs" >}})
- [Amazon Elastic Kubernetes Service (EKS)]({{< relref "./eks" >}})
- [Google Cloud Platform (GCP)]({{< relref "./gcp" >}})
- [Amazon Elastic Compute Cloud (EC2)](ec2/)
- [Amazon Elastic Container Service (ECS)](ecs/)
- [Amazon Elastic Kubernetes Service (EKS)](eks/)
- [Google Cloud Platform (GCP)](gcp/)

@ -9,7 +9,7 @@ weight: 100
# Run the Promtail client on AWS EC2
In this tutorial we're going to set up [Promtail]({{< relref "../../../../send-data/promtail" >}}) on an AWS EC2 instance and configure it to send all its logs to a Grafana Loki instance.
In this tutorial we're going to set up [Promtail](../../) on an AWS EC2 instance and configure it to send all its logs to a Grafana Loki instance.
{{< docs/shared source="loki" lookup="promtail-deprecation.md" version="<LOKI_VERSION>" >}}
@ -44,7 +44,7 @@ aws ec2 create-security-group --group-name promtail-ec2 --description "promtail
}
```
Now let's authorize inbound access for SSH and the [Promtail]({{< relref "../../../../send-data/promtail" >}}) server:
Now let's authorize inbound access for SSH and the [Promtail](../../) server:
```bash
aws ec2 authorize-security-group-ingress --group-id sg-02c489bbdeffdca1d --protocol tcp --port 22 --cidr 0.0.0.0/0
@ -88,7 +88,7 @@ ssh ec2-user@ec2-13-59-62-37.us-east-2.compute.amazonaws.com
## Setting up Promtail
First, let's make sure we're running as root by using `sudo -s`.
Next, we'll download, install, and give execute permissions to [Promtail]({{< relref "../../../../send-data/promtail" >}}).
Next, we'll download, install, and give execute permissions to [Promtail](../../).
```bash
mkdir /opt/promtail && cd /opt/promtail
@ -97,7 +97,7 @@ unzip "promtail-linux-amd64.zip"
chmod a+x "promtail-linux-amd64"
```
Now we're going to download the [Promtail configuration]({{< relref "../../../../send-data/promtail" >}}) file below and edit it. Don't worry, we will explain what those settings mean.
Now we're going to download the [Promtail configuration](../../) file below and edit it. Don't worry, we will explain what those settings mean.
The file is also available as a gist at [cyriltovena/promtail-ec2.yaml][config gist].
```bash
@ -140,11 +140,11 @@ scrape_configs:
target_label: __host__
```
The **server** section tells Promtail to bind its HTTP server to port 3100. Promtail serves HTTP pages for [troubleshooting]({{< relref "../../../../send-data/promtail/troubleshooting" >}}) service discovery and targets.
The **server** section tells Promtail to bind its HTTP server to port 3100. Promtail serves HTTP pages for [troubleshooting](../../troubleshooting/) service discovery and targets.
The **clients** section allows you to target your Loki instance. If you're using Grafana Cloud, simply replace `<user id>` and `<api secret>` with your credentials. Otherwise, just replace the whole URL with your custom Loki instance (for example, `http://my-loki-instance.my-org.com/loki/api/v1/push`).
[Promtail]({{< relref "../../../../send-data/promtail" >}}) uses the same [Prometheus **scrape_configs**][prometheus scrape config]. This means if you already own a Prometheus instance the config will be very similar and easy to grasp.
[Promtail](../../) uses the same [Prometheus **scrape_configs**][prometheus scrape config]. This means if you already own a Prometheus instance the config will be very similar and easy to grasp.
Since we're running on AWS EC2, we want to use EC2 service discovery. This will allow us to scrape metadata about the current instance (and even your custom tags) and attach it to our logs. This way, managing and querying logs will be much easier.
@ -236,7 +236,7 @@ Jul 08 15:48:57 ip-172-31-45-69.us-east-2.compute.internal promtail-linux-amd64[
Jul 08 15:48:57 ip-172-31-45-69.us-east-2.compute.internal promtail-linux-amd64[2732]: level=info ts=2020-07-08T15:48:57.56029474Z caller=main.go:67 msg="Starting Promtail" version="(version=1.6.0, branch=HEAD, revision=12c7eab8)"
```
You can now verify in Grafana that Loki has correctly received your instance logs by using the [LogQL]({{< relref "../../../../query" >}}) query `{zone="us-east-2"}`.
You can now verify in Grafana that Loki has correctly received your instance logs by using the [LogQL](../../../../query/) query `{zone="us-east-2"}`.
{{< figure alt="Grafana Loki logs" align="center" src="./promtail-ec2-logs.png" >}}
@ -267,7 +267,7 @@ You can download the final config example from our [GitHub repository][final con
That's it! Save the config and you can `reboot` the machine (or simply restart the service with `systemctl restart promtail.service`).
Let's head back to Grafana and verify that your Promtail logs are available by using the [LogQL]({{< relref "../../../../query" >}}) query `{unit="promtail.service"}` in Explore. Finally, make sure to check out [live tailing][live tailing] to see logs appear as they are ingested into Loki.
Let's head back to Grafana and verify that your Promtail logs are available by using the [LogQL](../../../../query/) query `{unit="promtail.service"}` in Explore. Finally, make sure to check out [live tailing][live tailing] to see logs appear as they are ingested into Loki.
[promtail]: ../../promtail/README
[aws cli]: https://aws.amazon.com/cli/

@ -52,7 +52,7 @@ Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-fd1
## Adding Promtail DaemonSet
To ship all your pods' logs, we're going to set up [Promtail]({{< relref "../../../../send-data/promtail" >}}) as a DaemonSet in our cluster. This means it will run on each node of the cluster; we will then configure it to find the logs of your containers on the host.
To ship all your pods' logs, we're going to set up [Promtail](../../) as a DaemonSet in our cluster. This means it will run on each node of the cluster; we will then configure it to find the logs of your containers on the host.
What's nice about Promtail is that it uses the same [service discovery as Prometheus][prometheus conf]; you should make sure Promtail's `scrape_configs` matches the Prometheus one. Not only is this simpler to configure, but it also means metrics and logs will have the same metadata (labels) attached by the Prometheus service discovery. When querying in Grafana, you will be able to correlate metrics and logs very quickly; you can read more about this in our [blog post][correlate].

@ -238,7 +238,7 @@ We need a service account with the following permissions:
This enables Promtail to read log entries from the pubsub subscription created before.
You can find an example Promtail scrape config for `gcplog` [here]({{< relref "../../scraping#gcp-log-scraping" >}}).
You can find an example Promtail scrape config for `gcplog` [here](../../scraping/#gcp-log-scraping).
If you are scraping logs from multiple GCP projects, then this service account should have the above permissions in all the projects you are trying to scrape.

@ -42,8 +42,8 @@ defined by the schema below. Brackets indicate that a parameter is optional. For
non-list parameters the value is set to the specified default.
For more detailed information on configuring how to discover and scrape logs from
targets, see [Scraping]({{< relref "./scraping" >}}). For more information on transforming logs
from scraped targets, see [Pipelines]({{< relref "./pipelines" >}}).
targets, see [Scraping](../scraping/). For more information on transforming logs
from scraped targets, see [Pipelines](../pipelines/).
## Reload at runtime
@ -462,7 +462,7 @@ docker_sd_configs:
### pipeline_stages
[Pipeline]({{< relref "./pipelines" >}}) stages are used to transform log entries and their labels. The pipeline is executed after the discovery process finishes. The `pipeline_stages` object consists of a list of stages which correspond to the items listed below.
[Pipeline](../pipelines/) stages are used to transform log entries and their labels. The pipeline is executed after the discovery process finishes. The `pipeline_stages` object consists of a list of stages which correspond to the items listed below.
In most cases, you extract data from logs with `regex` or `json` stages. The extracted data is transformed into a temporary map object. The data can then be used by Promtail, for example, as values for `labels` or as an `output`. Additionally, any other stage aside from `docker` and `cri` can access the extracted data.
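For example, a sketch that extracts a `level` value with a `regex` stage and promotes it to a label (the expression is illustrative):

```yaml
pipeline_stages:
  - regex:
      # Named capture groups become entries in the extracted data map.
      expression: '^(?P<time>\S+) (?P<level>\S+) (?P<msg>.*)$'
  - labels:
      level:
```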
@ -608,7 +608,7 @@ template:
#### match
The match stage conditionally executes a set of stages when a log entry matches
a configurable [LogQL]({{< relref "../../query" >}}) stream selector.
a configurable [LogQL](../../../query/) stream selector.
```yaml
match:
@ -922,8 +922,8 @@ Promtail needs to wait for the next message to catch multi-line messages,
therefore delays between messages can occur.
See recommended output configurations for
[syslog-ng]({{< relref "./scraping#syslog-ng-output-configuration" >}}) and
[rsyslog]({{< relref "./scraping#rsyslog-output-configuration" >}}). Both configurations enable
[syslog-ng](../scraping/#syslog-ng-output-configuration) and
[rsyslog](../scraping/#rsyslog-output-configuration). Both configurations enable
IETF Syslog with octet-counting.
You may need to increase the open files limit for the Promtail process
@ -1302,7 +1302,7 @@ Each GELF message received will be encoded in JSON as the log line. For example:
{"version":"1.1","host":"example.org","short_message":"A short message","timestamp":1231231123,"level":5,"_some_extra":"extra"}
```
You can leverage [pipeline stages]({{< relref "./stages" >}}) with the GELF target,
You can leverage [pipeline stages](../stages/) with the GELF target,
if for example, you want to parse the log line and extract more labels or change the log line format.
```yaml
@ -1468,7 +1468,7 @@ All Cloudflare logs are in JSON. Here is an example:
}
```
You can leverage [pipeline stages]({{< relref "./stages" >}}) if, for example, you want to parse the JSON log line and extract more labels or change the log line format.
You can leverage [pipeline stages](../stages/) if, for example, you want to parse the JSON log line and extract more labels or change the log line format.
### heroku_drain
@ -2177,7 +2177,7 @@ The `tracing` block configures tracing for Jaeger. Currently, limited to configu
## Example Docker Config
It's fairly difficult to tail Docker files on a standalone machine because they are in different locations for every OS. We recommend the [Docker logging driver]({{< relref "../../send-data/docker-driver" >}}) for local Docker installs or Docker Compose.
It's fairly difficult to tail Docker files on a standalone machine because they are in different locations for every OS. We recommend the [Docker logging driver](../../docker-driver/) for local Docker installs or Docker Compose.
If running in a Kubernetes environment, you should look at the defined configs which are in [helm](https://github.com/grafana/helm-charts/blob/main/charts/promtail/templates/configmap.yaml) and [jsonnet](https://github.com/grafana/loki/blob/main/production/ksonnet/promtail/scrape_config.libsonnet); these leverage the Prometheus service discovery libraries (and give Promtail its name) for automatically finding and tailing pods. The jsonnet config explains with comments what each section is for.

@ -96,7 +96,7 @@ Here, the `create` mode works as explained in (2) above. The `create` mode is op
### Kubernetes
[Kubernetes Service Discovery in Promtail]({{< relref "../scraping#kubernetes-discovery" >}}) also uses file-based scraping. Meaning, logs from your pods are stored on the nodes and Promtail scrapes the pod logs from the node files.
[Kubernetes Service Discovery in Promtail](../scraping/#kubernetes-discovery) also uses file-based scraping. Meaning, logs from your pods are stored on the nodes and Promtail scrapes the pod logs from the node files.
You can [configure](https://kubernetes.io/docs/concepts/cluster-administration/logging/#log-rotation) the `kubelet` process running on each node to manage log rotation via two configuration settings.
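For example, a sketch of those settings in a kubelet configuration file (values are illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi   # rotate when a container log reaches this size
containerLogMaxFiles: 5     # keep at most this many rotated files
```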
@ -160,4 +160,4 @@ We recommend using kubelet for log rotation.
Promtail uses `polling` to watch for file changes. A `polling` mechanism combined with a [copy and truncate](#copy-and-truncate) log rotation may result in losing some logs. As explained earlier in this topic, this happens when the file is truncated before Promtail reads all the log lines from such a file.
Therefore, for a long-term solution, we strongly recommend changing the log rotation strategy to [rename and create](#rename-and-create). Alternatively, as a workaround in the short term, you can tweak the Promtail client's `batchsize` [config]({{< relref "../configuration#clients" >}}) to set higher values (like 5M or 8M). This gives Promtail more room to read log lines without frequently waiting for push responses from the Loki server.
Therefore, for a long-term solution, we strongly recommend changing the log rotation strategy to [rename and create](#rename-and-create). Alternatively, as a workaround in the short term, you can tweak the Promtail client's `batchsize` [config](../configuration/#clients) to set higher values (like 5M or 8M). This gives Promtail more room to read log lines without frequently waiting for push responses from the Loki server.
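A sketch of such an override in the Promtail `clients` block (the URL is illustrative):

```yaml
clients:
  - url: http://loki.example.com:3100/loki/api/v1/push
    batchsize: 5242880  # ~5 MB; larger than the 1 MB (1048576 bytes) default
```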

@ -33,13 +33,13 @@ stages:
condition.
Typical pipelines will start with a parsing stage (such as a
[regex]({{< relref "./stages/regex" >}}) or [json]({{< relref "./stages/json" >}}) stage) to extract data
[regex](../stages/regex/) or [json](../stages/json/) stage) to extract data
from the log line. Then, a series of action stages will be present to do
something with that extracted data. The most common action stage will be a
[labels]({{< relref "./stages/labels" >}}) stage to turn extracted data into a label.
[labels](../stages/labels/) stage to turn extracted data into a label.
A common stage will also be the [match]({{< relref "./stages/match" >}}) stage to selectively
apply stages or drop entries based on a [LogQL stream selector and filter expressions]({{< relref "../../query" >}}).
A common stage will also be the [match](../stages/match/) stage to selectively
apply stages or drop entries based on [LogQL stream selectors and filter expressions](../../../query/).
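As a rough sketch of that shape (parse, then act, then match), assuming JSON logs with a `level` field:

```yaml
pipeline_stages:
  # Parsing stage: extract fields from the JSON log line.
  - json:
      expressions:
        level: level
  # Action stage: promote the extracted value to a label.
  - labels:
      level:
  # Filtering stage: drop entries whose level label is "debug".
  - match:
      selector: '{level="debug"}'
      action: drop
```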
Note that pipelines cannot currently be used to deduplicate logs; Grafana Loki will
receive the same log line multiple times if, for example:
@ -205,5 +205,5 @@ given log entry.
## Stages
Refer to the [Promtail Stages Configuration Reference]({{< relref "./stages/_index.md#promtail-pipeline-stages" >}}) for the
Refer to the [Promtail Stages Configuration Reference](../stages/#promtail-pipeline-stages) for the
schema for the various supported stages.

@ -13,18 +13,18 @@ weight: 400
Promtail currently supports scraping from the following sources:
- [Azure event hubs]({{< relref "#azure-event-hubs" >}})
- [Cloudflare]({{< relref "#cloudflare" >}})
- [File target discovery]({{< relref "#file-target-discovery" >}})
- [GCP Logs]({{< relref "#gcp-log-scraping" >}})
- [GELF]({{< relref "#gelf" >}})
- [Heroku Drain]({{< relref "#gcp-log-scraping" >}})
- [HTTP client]({{< relref "#http-client" >}})
- [journal scraping]({{< relref "#journal-scraping-linux-only" >}})
- [Kafka]({{< relref "#kafka" >}})
- [Relabeling]({{< relref "#relabeling" >}})
- [Syslog]({{< relref "#syslog-receiver" >}})
- [Windows]({{< relref "#windows-event-log" >}})
- [Azure event hubs](#azure-event-hubs)
- [Cloudflare](#cloudflare)
- [File target discovery](#file-target-discovery)
- [GCP Logs](#gcp-log-scraping)
- [GELF](#gelf)
- [Heroku Drain](#heroku-drain)
- [HTTP client](#http-client)
- [journal scraping](#journal-scraping-linux-only)
- [Kafka](#kafka)
- [Relabeling](#relabeling)
- [Syslog](#syslog-receiver)
- [Windows](#windows-event-log)
## Azure Event Hubs
@ -49,7 +49,7 @@ Targets can be configured using the `azure_event_hubs` stanza:
```
Only `fully_qualified_namespace`, `connection_string` and `event_hubs` are required fields.
Read the [configuration]({{< relref "./configuration#azure-event-hubs" >}}) section for more information.
Read the [configuration](../configuration/#azure-event-hubs) section for more information.
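For orientation, a minimal stanza using only the required fields plus a static label; the namespace, connection string, and hub name are placeholders:

```yaml
scrape_configs:
  - job_name: azure_event_hubs
    azure_event_hubs:
      fully_qualified_namespace: my-namespace.servicebus.windows.net:9093
      connection_string: my_connection_string
      event_hubs:
        - event-hub-name
      labels:
        job: azure_event_hubs
```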
## Cloudflare
@ -68,7 +68,7 @@ scrape_configs:
```
Only `api_token` and `zone_id` are required.
Refer to the [Cloudfare]({{< relref "./configuration#cloudflare" >}}) configuration section for details.
Refer to the [Cloudflare](../configuration/#cloudflare) configuration section for details.
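A minimal sketch with just those required fields; the token and zone ID are placeholders:

```yaml
scrape_configs:
  - job_name: cloudflare
    cloudflare:
      api_token: REDACTED_API_TOKEN
      zone_id: REDACTED_ZONE_ID
      labels:
        job: cloudflare
```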
## File Target Discovery
@ -180,7 +180,7 @@ relabel_configs:
target_label: '__host__'
```
See [Relabeling](#relabeling) for more information. For more information on how to configure the service discovery see the [Kubernetes Service Discovery configuration]({{< relref "./configuration#kubernetes_sd_config" >}}).
See [Relabeling](#relabeling) for more information. To configure the service discovery itself, see the [Kubernetes Service Discovery configuration](../configuration/#kubernetes_sd_config).
## GCP Log scraping
@ -212,7 +212,7 @@ Here `project_id` and `subscription` are the only required fields.
- `project_id` is the GCP project id.
- `subscription` is the GCP Pub/Sub subscription from which Promtail can consume log entries.
Before using `gcplog` target, GCP should be [configured]({{< relref "./cloud/gcp" >}}) with pubsub subscription to receive logs from.
Before using the `gcplog` target, GCP should be [configured](../cloud/gcp/) with a Pub/Sub subscription from which to receive logs.
It also supports `relabeling` and `pipeline` stages just like other targets.
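A minimal pull-based sketch, with placeholder project and subscription names:

```yaml
scrape_configs:
  - job_name: gcplog
    gcplog:
      # Pull log entries from the Pub/Sub subscription.
      subscription_type: pull
      project_id: my-gcp-project
      subscription: my-pubsub-subscription
      use_incoming_timestamp: false
      labels:
        job: gcplog
```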
@ -248,7 +248,7 @@ section. This server exposes the single endpoint `POST /gcp/api/v1/push`, respon
For Google's Pub/Sub to be able to send logs, **the Promtail server must be publicly accessible and support HTTPS**. For that, Promtail can be deployed
as part of a larger orchestration service like Kubernetes, which can handle HTTPS traffic through an ingress, or it can be hosted behind
a proxy/gateway, offloading the HTTPS to that component and routing the request to Promtail. Once that's solved, GCP can be [configured]({{< relref "./cloud/gcp" >}})
a proxy/gateway, offloading the HTTPS to that component and routing the request to Promtail. Once that's solved, GCP can be [configured](../cloud/gcp/)
to send logs to Promtail.
It also supports `relabeling` and `pipeline` stages.
@ -320,7 +320,7 @@ Configuration is specified in a`heroku_drain` block within the Promtail `scrape_
```
Within the `scrape_configs` configuration for a Heroku Drain target, the `job_name` must be a Prometheus-compatible [metric name](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
The [server]({{< relref "./configuration#server" >}}) section configures the HTTP server created for receiving logs.
The [server](../configuration/#server) section configures the HTTP server created for receiving logs.
`labels` defines a static set of label values added to each received log entry. `use_incoming_timestamp` can be used to pass
the timestamp received from Heroku.
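Putting those pieces together, a hedged sketch; the listen address, port, and label values are illustrative:

```yaml
scrape_configs:
  - job_name: heroku_drain
    heroku_drain:
      # HTTP server that receives the drained Heroku logs.
      server:
        http_listen_address: 0.0.0.0
        http_listen_port: 8080
      # Keep the timestamp Heroku attached to each entry.
      use_incoming_timestamp: true
      labels:
        job: heroku
```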
@ -371,7 +371,7 @@ clients:
- [ <client_option> ]
```
Refer to [`client_config`]({{< relref "./configuration#clients" >}}) from the Promtail
Refer to [`client_config`](../configuration/#clients) from the Promtail
Configuration reference for all available options.
## Journal Scraping (Linux Only)
@ -490,7 +490,7 @@ scrape_configs:
```
Only `brokers` and `topics` are required.
Read the [configuration]({{< relref "./configuration#kafka" >}}) section for more information.
Read the [configuration](../configuration/#kafka) section for more information.
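A minimal sketch with just those required fields; the broker address and topic name are placeholders:

```yaml
scrape_configs:
  - job_name: kafka
    kafka:
      brokers:
        - my-kafka-broker:9092
      topics:
        - logs
      labels:
        job: kafka
```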
## Relabeling
@ -641,7 +641,7 @@ You can relabel default labels via [Relabeling](#relabeling) if required.
Providing a path to a bookmark is mandatory; it is used to persist the last event processed and allows
resuming the target without skipping logs.
Read the [configuration]({{< relref "./configuration#windows_events" >}}) section for more information.
Read the [configuration](../configuration/#windows_events) section for more information.
See the [eventlogmessage]({{< relref "./stages/eventlogmessage" >}}) stage for extracting
See the [eventlogmessage](../stages/eventlogmessage/) stage for extracting
data from the `message`.
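For example, a minimal sketch that tails the Application event log and persists its position to a bookmark file; the paths and labels are illustrative:

```yaml
scrape_configs:
  - job_name: windows
    windows_events:
      eventlog_name: "Application"
      # Mandatory: persists the last event processed so the target can resume.
      bookmark_path: "C:\\promtail\\bookmark.xml"
      use_incoming_timestamp: true
      labels:
        job: windows_events
```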

@ -12,40 +12,40 @@ weight: 700
{{< docs/shared source="loki" lookup="promtail-deprecation.md" version="<LOKI_VERSION>" >}}
This section is a collection of all stages Promtail supports in a
[Pipeline]({{< relref "../pipelines" >}}).
[Pipeline](../pipelines/).
Parsing stages:
- [docker]({{< relref "./docker" >}}): Extract data by parsing the log line using the standard Docker format.
- [cri]({{< relref "./cri" >}}): Extract data by parsing the log line using the standard CRI format.
- [regex]({{< relref "./regex" >}}): Extract data using a regular expression.
- [json]({{< relref "./json" >}}): Extract data by parsing the log line as JSON.
- [logfmt]({{< relref "./logfmt" >}}): Extract data by parsing the log line as logfmt.
- [replace]({{< relref "./replace" >}}): Replace data using a regular expression.
- [multiline]({{< relref "./multiline" >}}): Merge multiple lines into a multiline block.
- [geoip]({{< relref "./geoip" >}}): Extract geoip data from extracted labels.
- [docker](docker/): Extract data by parsing the log line using the standard Docker format.
- [cri](cri/): Extract data by parsing the log line using the standard CRI format.
- [regex](regex/): Extract data using a regular expression.
- [json](json/): Extract data by parsing the log line as JSON.
- [logfmt](logfmt/): Extract data by parsing the log line as logfmt.
- [replace](replace/): Replace data using a regular expression.
- [multiline](multiline/): Merge multiple lines into a multiline block.
- [geoip](geoip/): Extract geoip data from extracted labels.
Transform stages:
- [template]({{< relref "./template" >}}): Use Go templates to modify extracted data.
- [pack]({{< relref "./pack" >}}): Packs a log line in a JSON object allowing extracted values and labels to be placed inside the log line.
- [decolorize]({{< relref "./decolorize" >}}): Strips ANSI color sequences from the log line.
- [template](template/): Use Go templates to modify extracted data.
- [pack](pack/): Pack a log line in a JSON object, allowing extracted values and labels to be placed inside the log line.
- [decolorize](decolorize/): Strip ANSI color sequences from the log line.
Action stages:
- [timestamp]({{< relref "./timestamp" >}}): Set the timestamp value for the log entry.
- [output]({{< relref "./output" >}}): Set the log line text.
- [labeldrop]({{< relref "./labeldrop" >}}): Drop label set for the log entry.
- [labelallow]({{< relref "./labelallow" >}}): Allow label set for the log entry.
- [labels]({{< relref "./labels" >}}): Update the label set for the log entry.
- [limit]({{< relref "./limit" >}}): Limit the rate lines will be sent to Loki.
- [sampling]({{< relref "./sampling" >}}): Sampling the lines will be sent to Loki.
- [static_labels]({{< relref "./static_labels" >}}): Add static-labels to the log entry.
- [metrics]({{< relref "./metrics" >}}): Calculate metrics based on extracted data.
- [tenant]({{< relref "./tenant" >}}): Set the tenant ID value to use for the log entry.
- [structured_metadata]({{< relref "./structured_metadata" >}}): Add structured metadata to the log entry.
- [timestamp](timestamp/): Set the timestamp value for the log entry.
- [output](output/): Set the log line text.
- [labeldrop](labeldrop/): Drop labels from the label set for the log entry.
- [labelallow](labelallow/): Allow only a given set of labels for the log entry.
- [labels](labels/): Update the label set for the log entry.
- [limit](limit/): Limit the rate at which lines are sent to Loki.
- [sampling](sampling/): Sample the lines sent to Loki.
- [static_labels](static_labels/): Add static labels to the log entry.
- [metrics](metrics/): Calculate metrics based on extracted data.
- [tenant](tenant/): Set the tenant ID value to use for the log entry.
- [structured_metadata](structured_metadata/): Add structured metadata to the log entry.
Filtering stages:
- [match]({{< relref "./match" >}}): Conditionally run stages based on the label set.
- [drop]({{< relref "./drop" >}}): Conditionally drop log lines based on several options.
- [match](match/): Conditionally run stages based on the label set.
- [drop](drop/): Conditionally drop log lines based on several options.

@ -129,7 +129,7 @@ Would drop this log line:
#### Drop old log lines
{{< admonition type="note" >}}
For `older_than` to work, you must be using the [timestamp]({{< relref "./timestamp" >}}) stage to set the timestamp from the ingested log line _before_ applying the `drop` stage.
For `older_than` to work, you must be using the [timestamp](../timestamp/) stage to set the timestamp from the ingested log line _before_ applying the `drop` stage.
{{< /admonition >}}
Given the pipeline:

@ -38,7 +38,7 @@ eventlogmessage:
The extracted data can hold non-string values and this stage does not do any
type conversions; downstream stages will need to perform correct type
conversion of these values as necessary. Please refer to the
[the `template` stage]({{< relref "./template" >}}) for how to do this.
[`template` stage](../template/) for how to do this.
## Example combined with json

@ -40,7 +40,7 @@ This stage uses the Go JSON unmarshaler, which means non-string types like
numbers or booleans will be unmarshaled into those types. The extracted data
can hold non-string values and this stage does not do any type conversions;
downstream stages will need to perform correct type conversion of these values
as necessary. Please refer to the [the `template` stage]({{< relref "./template" >}}) for how
as necessary. Please refer to the [`template` stage](../template/) for how
to do this.
If the value extracted is a complex type, such as an array or a JSON object, it

@ -31,7 +31,7 @@ This stage uses the [go-logfmt](https://github.com/go-logfmt/logfmt) unmarshaler
numbers or booleans will be unmarshaled into those types. The extracted data
can hold non-string values, and this stage does not do any type conversions;
downstream stages will need to perform correct type conversion of these values
as necessary. Please refer to the [`template` stage]({{< relref "./template" >}}) for how
as necessary. Please refer to the [`template` stage](../template/) for how
to do this.
If the value extracted is a complex type, its value is extracted as a string.
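A small sketch of that pattern, assuming a hypothetical logfmt field named `retries`: rendering the extracted value through a template re-emits it as a string.

```yaml
pipeline_stages:
  - logfmt:
      mapping:
        retries: retries
  # Rendering the value through a Go template converts it to a string.
  - template:
      source: retries
      template: '{{ .Value }}'
```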

@ -13,8 +13,8 @@ weight:
The match stage is a filtering stage that conditionally applies a set of stages
or drops entries when a log entry matches a configurable LogQL
[stream selector]({{< relref "../../../query/log_queries#log-stream-selector" >}}) and
[filter expressions]({{< relref "../../../query/log_queries#line-filter-expression" >}}).
[stream selector](../../../../query/log_queries/#log-stream-selector) and
[filter expressions](../../../../query/log_queries/#line-filter-expression).
{{< admonition type="note" >}}
The filters do not include label filter expressions such as `| label == "foobar"`.
@ -50,7 +50,7 @@ match:
[<stages>...]
```
Refer to the [Promtail Stages Configuration Reference]({{< relref "./_index.md#promtail-pipeline-stages" >}}) for the
Refer to the [Promtail Stages Configuration Reference](./#promtail-pipeline-stages) for the
schema for the various stages supported here.
### Example

@ -65,7 +65,7 @@ This would create a log line
}
```
**Loki 2.2 also includes a new [`unpack` parser]({{< relref "../../../query/log_queries#unpack" >}}) to work with the pack stage.**
**Loki 2.2 also includes a new [`unpack` parser](../../../../query/log_queries/#unpack) to work with the pack stage.**
For example:

@ -8,7 +8,7 @@ description: The 'structured_metadata' Promtail pipeline stage
{{< docs/shared source="loki" lookup="promtail-deprecation.md" version="<LOKI_VERSION>" >}}
The `structured_metadata` stage is an action stage that takes data from the extracted map and
modifies the [structured metadata]({{< relref "../../../get-started/labels/structured-metadata" >}}) that is sent to Loki with the log entry.
modifies the [structured metadata](../../../../get-started/labels/structured-metadata/) that is sent to Loki with the log entry.
{{< admonition type="warning" >}}
Structured metadata will be rejected by Loki unless you enable the `allow_structured_metadata` per tenant configuration (in the `limits_config`).

@ -13,7 +13,7 @@ weight:
The tenant stage is an action stage that sets the tenant ID for the log entry,
picking it from a field in the extracted data map. If the field is missing, the
default promtail client [`tenant_id`]({{< relref "../configuration#clients" >}}) will
default Promtail client [`tenant_id`](../../configuration/#clients) will
be used.
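A minimal sketch, assuming JSON logs with a hypothetical `customer_id` field used as the tenant ID:

```yaml
pipeline_stages:
  - json:
      expressions:
        customer_id: customer_id
  # Route the log entry to the tenant named in the extracted field.
  - tenant:
      source: customer_id
```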

@ -17,7 +17,7 @@ This document describes known failure modes of Promtail on edge cases and the ad
Promtail can be configured to print log stream entries instead of sending them to Loki.
This can be used in combination with [piping data](#pipe-data-to-promtail) to debug or troubleshoot Promtail log parsing.
In dry run mode, Promtail still support reading from a [positions]({{< relref "../configuration#positions" >}}) file however no update will be made to the targeted file, this is to ensure you can easily retry the same set of lines.
In dry run mode, Promtail still supports reading from a [positions](../configuration/#positions) file; however, no updates are made to the targeted file. This ensures you can easily retry the same set of lines.
To start Promtail in dry run mode, use the flag `--dry-run`, as shown in the example below:
@ -80,9 +80,9 @@ This will add labels `k1` and `k2` with respective values `v1` and `v2`.
In pipe mode, Promtail also supports file configuration using `--config.file`; however, note that the positions config is not used and
only **the first scrape config is used**.
[`static_configs:`]({{< relref "../configuration" >}}) can be used to provide static labels, although the targets property is ignored.
[`static_configs:`](../configuration/) can be used to provide static labels, although the targets property is ignored.
If you don't provide any [`scrape_config:`]({{< relref "../configuration#scrape_configs" >}}) a default one is used which will automatically adds the following default labels: `{job="stdin",hostname="<detected_hostname>"}`.
If you don't provide any [`scrape_config:`](../configuration/#scrape_configs), a default one is used, which automatically adds the following default labels: `{job="stdin",hostname="<detected_hostname>"}`.
For example, you could use the config below to parse and add the label `level` to all your piped logs:

@ -7,6 +7,6 @@ weight: 300
# Setup Loki
- [Install]({{< relref "./install" >}}) Loki.
- [Migrate]({{< relref "./migrate" >}}) from one Loki implementation to another.
- [Upgrade]({{< relref "./upgrade" >}}) from one Loki version to a newer version.
- [Install](install/) Loki.
- [Migrate](migrate/) from one Loki implementation to another.
- [Upgrade](upgrade/) from one Loki version to a newer version.

@ -11,15 +11,15 @@ weight: 200
There are several methods of installing Loki:
- [Install using Helm (recommended)]({{< relref "./helm" >}})
- [Install using Tanka]({{< relref "./tanka" >}})
- [Install using Docker or Docker Compose]({{< relref "./docker" >}})
- [Install and run locally]({{< relref "./local" >}})
- [Install from source]({{< relref "./install-from-source" >}})
- [Install using Helm (recommended)](helm/)
- [Install using Tanka](tanka/)
- [Install using Docker or Docker Compose](docker/)
- [Install and run locally](local/)
- [Install from source](install-from-source/)
Alloy:
- [Install Alloy](https://grafana.com/docs/alloy/latest/set-up/install/)
- [Ingest Logs with Alloy]({{< relref "../../send-data/alloy" >}})
- [Ingest Logs with Alloy](../../send-data/alloy/)
## General process

@ -18,24 +18,24 @@ This section describes the components installed by the Helm Chart.
## 3 methods of deployment
The Loki chart supports three methods of deployment:
- [Monolithic]({{< relref "./install-monolithic" >}})
- [Simple Scalable]({{< relref "./install-scalable" >}})
- [Microservice]({{< relref "./install-microservices" >}})
- [Monolithic](../install-monolithic/)
- [Simple Scalable](../install-scalable/)
- [Microservice](../install-microservices/)
By default, the chart installs in [Simple Scalable]({{< relref "./install-scalable" >}}) mode. This is the recommended method for most users. To understand the differences between deployment methods, see the [Loki deployment modes]({{< relref "../../../get-started/deployment-modes" >}}) documentation.
By default, the chart installs in [Simple Scalable](../install-scalable/) mode. This is the recommended method for most users. To understand the differences between deployment methods, see the [Loki deployment modes](../../../../get-started/deployment-modes/) documentation.
## Monitoring Loki
The Loki Helm chart does not deploy self-monitoring by default. Loki clusters can be monitored using the meta-monitoring stack, which monitors the logs, metrics, and traces of the Loki cluster. There are two deployment options for this stack, see the installation instructions within [Monitoring]({{< relref "./monitor-and-alert" >}}).
The Loki Helm chart does not deploy self-monitoring by default. Loki clusters can be monitored using the meta-monitoring stack, which monitors the logs, metrics, and traces of the Loki cluster. There are two deployment options for this stack; see the installation instructions within [Monitoring](../monitor-and-alert/).
{{< admonition type="note" >}}
The meta-monitoring stack replaces the monitoring section of the Loki helm chart which is now **DEPRECATED**. See the [Monitoring]({{< relref "./monitor-and-alert" >}}) section for more information.
The meta-monitoring stack replaces the monitoring section of the Loki helm chart which is now **DEPRECATED**. See the [Monitoring](../monitor-and-alert/) section for more information.
{{< /admonition >}}
## Canary
This chart installs the [Loki Canary app]({{< relref "../../../operations/loki-canary" >}}) by default. This is another tool to verify the Loki deployment is in a healthy state. It can be disabled by setting `lokiCanary.enabled=false`.
This chart installs the [Loki Canary app](../../../../operations/loki-canary/) by default. This is another tool to verify the Loki deployment is in a healthy state. It can be disabled by setting `lokiCanary.enabled=false`.
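For instance, disabling it comes down to a one-line values override:

```yaml
lokiCanary:
  # Skip deploying the Loki Canary app.
  enabled: false
```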
## Gateway
@ -48,4 +48,4 @@ If NetworkPolicies are enabled, they are more restrictive if the gateway is enab
## Caching
By default, this chart configures in-memory caching. If that caching does not work for your deployment, you should setup [memcache]({{< relref "../../../operations/caching" >}}).
By default, this chart configures in-memory caching. If that caching does not work for your deployment, you should set up [memcache](../../../../operations/caching/).

@ -14,7 +14,7 @@ keywords:
# Configure storage
The [scalable]({{< relref "../install-scalable" >}}) installation requires a managed object store such as AWS S3 or Google Cloud Storage or a self-hosted store such as Minio. The [single binary]({{< relref "../install-monolithic" >}}) installation can only use the filesystem for storage.
The [scalable](../install-scalable/) installation requires a managed object store such as AWS S3 or Google Cloud Storage or a self-hosted store such as Minio. The [single binary](../install-monolithic/) installation can only use the filesystem for storage.
This guide assumes Loki will be installed in one of the modes above and that a `values.yaml` file has been created.
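As a rough sketch of the shape such a `values.yaml` takes for an S3-backed install — the bucket names and region are placeholders; check the Helm Chart Reference for the authoritative keys:

```yaml
loki:
  storage:
    type: s3
    bucketNames:
      chunks: my-loki-chunks
      ruler: my-loki-ruler
      admin: my-loki-admin
    s3:
      region: us-east-1
```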

@ -369,7 +369,7 @@ singleBinary:
```
{{< /collapse >}}
To configure other storage providers, refer to the [Helm Chart Reference]({{< relref "../reference" >}}).
To configure other storage providers, refer to the [Helm Chart Reference](../reference/).
## Deploying the Loki Helm chart to a Production Environment

@ -13,7 +13,7 @@ keywords:
This Helm Chart deploys Grafana Loki in [simple scalable mode](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#simple-scalable) within a Kubernetes cluster.
This chart configures Loki to run `read`, `write`, and `backend` targets in a [scalable mode]({{< relref "../../../../get-started/deployment-modes#simple-scalable" >}}). Loki’s simple scalable deployment mode separates execution paths into read, write, and backend targets.
This chart configures Loki to run `read`, `write`, and `backend` targets in a [scalable mode](../../../../get-started/deployment-modes/#simple-scalable). Loki’s simple scalable deployment mode separates execution paths into read, write, and backend targets.
The default Helm chart deploys the following components:
- Read component (3 replicas)
@ -242,7 +242,7 @@ minio:
```
{{< /collapse >}}
To configure other storage providers, refer to the [Helm Chart Reference]({{< relref "../reference" >}}).
To configure other storage providers, refer to the [Helm Chart Reference](../reference/).
## Next Steps
* Configure an agent to [send log data to Loki](/docs/loki/<LOKI_VERSION>/send-data/).

@ -15,5 +15,5 @@ keywords:
There are two common ways to monitor Loki:
- [Monitor using Grafana Cloud (recommended)]({{< relref "./with-grafana-cloud" >}})
- [Monitor using Local Monitoring]({{< relref "./with-local-monitoring" >}})
- [Monitor using Grafana Cloud (recommended)](with-grafana-cloud/)
- [Monitor using Local Monitoring](with-local-monitoring/)

@ -39,7 +39,7 @@ The configuration runs Loki as a single binary.
Don't download LogCLI or Loki Canary at this time.
LogCLI allows you to run Loki queries in a command line interface.
[Loki Canary]({{< relref "../../operations/loki-canary" >}}) is a tool to audit Loki performance.
[Loki Canary](../../../operations/loki-canary/) is a tool to audit Loki performance.
1. Extract the package contents into the same directory. This is where the two programs will run.
1. In the command line, change directory (`cd` on most systems) to the directory with Loki and Promtail.
@ -71,7 +71,7 @@ The configuration runs Loki as a single binary.
Loki runs and displays Loki logs in your command line and on http://localhost:3100/metrics.
The next step is running an agent to send logs to Loki.
To do so with Promtail, refer to the [Promtail configuration]({{< relref "../../send-data/promtail" >}}).
To do so with Promtail, refer to the [Promtail configuration](../../../send-data/promtail/).
## Release binaries - openSUSE Linux only

@ -9,7 +9,7 @@ weight: 300
This section contains instructions for migrating from one Loki implementation to another.
- [Migrate]({{< relref "./migrate-to-tsdb" >}}) to TSDB index.
- [Migrate]({{< relref "./migrate-from-distributed" >}}) from the `Loki-distributed` Helm chart to the `loki` Helm chart.
- [Migrate]({{< relref "./migrate-to-three-scalable-targets" >}}) from the two target Helm chart to the three target scalable configuration Helm chart.
- [Migrate]({{< relref "./migrate-storage-clients" >}}) from the legacy storage clients to the Thanos object storage client.
- [Migrate](migrate-to-tsdb/) to TSDB index.
- [Migrate](migrate-from-distributed/) from the `Loki-distributed` Helm chart to the `loki` Helm chart.
- [Migrate](migrate-to-three-scalable-targets/) from the two target Helm chart to the three target scalable configuration Helm chart.
- [Migrate](migrate-storage-clients/) from the legacy storage clients to the Thanos object storage client.

@ -10,8 +10,8 @@ keywords:
# Migrate to TSDB
[TSDB]({{< relref "../../../operations/storage/tsdb" >}}) is the recommended index type for Loki and is where the current development lies.
If you are running Loki with [boltb-shipper]({{< relref "../../../operations/storage/boltdb-shipper" >}}) or any of the [legacy index types](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#index-storage) that have been deprecated,
[TSDB](../../../operations/storage/tsdb/) is the recommended index type for Loki and is where the current development lies.
If you are running Loki with [boltdb-shipper](../../../operations/storage/boltdb-shipper/) or any of the [legacy index types](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#index-storage) that have been deprecated,
we strongly recommend migrating to TSDB.
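Migration boils down to adding a new period to `schema_config` that switches the store to `tsdb` from a future date, roughly like this sketch (the date and object store are placeholders):

```yaml
schema_config:
  configs:
    # Existing (deprecated) index period entries stay above this one.
    - from: "2024-04-01"
      store: tsdb
      object_store: s3
      schema: v13
      index:
        prefix: index_
        period: 24h
```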
@ -68,7 +68,7 @@ storage_config:
### Run compactor
We strongly recommended running the [compactor]({{< relref "../../../operations/storage/retention#compactor" >}}) when using TSDB index. It is responsible for running compaction and retention on TSDB index.
We strongly recommend running the [compactor](../../../operations/storage/retention/#compactor) when using the TSDB index. It is responsible for running compaction and retention on the TSDB index.
Not running index compaction will result in sub-optimal query performance.
Please refer to the [compactor section]({{< relref "../../../operations/storage/retention#compactor" >}}) for more information and configuration examples.
Please refer to the [compactor section](../../../operations/storage/retention/#compactor) for more information and configuration examples.
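A minimal compactor sketch with retention enabled; the working directory is illustrative, and recent Loki versions also require a `delete_request_store` when retention is on:

```yaml
compactor:
  working_directory: /loki/compactor
  retention_enabled: true
  # Object store used to persist delete requests.
  delete_request_store: s3
```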
