Fix documentation linter errors (#8229)

pull/8238/head
Travis Patterson 3 years ago committed by GitHub
parent 3e5fb36f30
commit 1c954166b8
  1. docs/sources/best-practices/_index.md (13 changed lines)
  2. docs/sources/clients/_index.md (19 changed lines)
  3. docs/sources/clients/aws/_index.md (7 changed lines)
  4. docs/sources/clients/aws/ec2/_index.md (21 changed lines)
  5. docs/sources/clients/aws/ecs/_index.md (7 changed lines)
  6. docs/sources/clients/aws/eks/_index.md (7 changed lines)
  7. docs/sources/clients/docker-driver/_index.md (13 changed lines)
  8. docs/sources/clients/docker-driver/configuration.md (61 changed lines)
  9. docs/sources/clients/fluentbit/_index.md (9 changed lines)
  10. docs/sources/clients/fluentd/_index.md (11 changed lines)
  11. docs/sources/clients/k6/_index.md (15 changed lines)
  12. docs/sources/clients/k6/log-generation.md (1 changed line)
  13. docs/sources/clients/k6/query-scenario.md (1 changed line)
  14. docs/sources/clients/k6/write-scenario.md (3 changed lines)
  15. docs/sources/clients/lambda-promtail/_index.md (11 changed lines)
  16. docs/sources/clients/logstash/_index.md (9 changed lines)
  17. docs/sources/clients/promtail/_index.md (19 changed lines)
  18. docs/sources/clients/promtail/configuration.md (27 changed lines)
  19. docs/sources/clients/promtail/gcplog-cloud.md (7 changed lines)
  20. docs/sources/clients/promtail/installation.md (3 changed lines)
  21. docs/sources/clients/promtail/logrotation/_index.md (11 changed lines)
  22. docs/sources/clients/promtail/logrotation/logrotation-components.png (0 changed lines)
  23. docs/sources/clients/promtail/logrotation/logrotation-copy-and-truncate.png (0 changed lines)
  24. docs/sources/clients/promtail/logrotation/logrotation-rename-and-create.png (0 changed lines)
  25. docs/sources/clients/promtail/pipelines.md (37 changed lines)
  26. docs/sources/clients/promtail/scraping.md (9 changed lines)
  27. docs/sources/clients/promtail/stages/_index.md (45 changed lines)
  28. docs/sources/clients/promtail/stages/cri.md (3 changed lines)
  29. docs/sources/clients/promtail/stages/decolorize.md (3 changed lines)
  30. docs/sources/clients/promtail/stages/docker.md (3 changed lines)
  31. docs/sources/clients/promtail/stages/drop.md (5 changed lines)
  32. docs/sources/clients/promtail/stages/json.md (5 changed lines)
  33. docs/sources/clients/promtail/stages/labelallow.md (3 changed lines)
  34. docs/sources/clients/promtail/stages/labeldrop.md (3 changed lines)
  35. docs/sources/clients/promtail/stages/labels.md (3 changed lines)
  36. docs/sources/clients/promtail/stages/limit.md (7 changed lines)
  37. docs/sources/clients/promtail/stages/logfmt.md (6 changed lines)
  38. docs/sources/clients/promtail/stages/match.md (7 changed lines)
  39. docs/sources/clients/promtail/stages/metrics.md (3 changed lines)
  40. docs/sources/clients/promtail/stages/multiline.md (5 changed lines)
  41. docs/sources/clients/promtail/stages/output.md (3 changed lines)
  42. docs/sources/clients/promtail/stages/pack.md (5 changed lines)
  43. docs/sources/clients/promtail/stages/regex.md (3 changed lines)
  44. docs/sources/clients/promtail/stages/replace.md (3 changed lines)
  45. docs/sources/clients/promtail/stages/static_labels.md (3 changed lines)
  46. docs/sources/clients/promtail/stages/template.md (3 changed lines)
  47. docs/sources/clients/promtail/stages/tenant.md (5 changed lines)
  48. docs/sources/clients/promtail/stages/timestamp.md (3 changed lines)
  49. docs/sources/clients/promtail/troubleshooting/_index.md (13 changed lines)
  50. docs/sources/clients/promtail/troubleshooting/inspect.png (0 changed lines)
  51. docs/sources/community/_index.md (7 changed lines)
  52. docs/sources/community/contributing.md (1 changed line)
  53. docs/sources/community/getting-in-touch.md (5 changed lines)
  54. docs/sources/community/governance.md (27 changed lines)
  55. docs/sources/configuration/query-frontend.md (3 changed lines)
  56. docs/sources/design-documents/2020-02-Promtail-Push-API.md (3 changed lines)
  57. docs/sources/design-documents/2020-09-Write-Ahead-Log.md (3 changed lines)
  58. docs/sources/design-documents/2021-01-Ordering-Constraint-Removal.md (3 changed lines)
  59. docs/sources/design-documents/_index.md (9 changed lines)
  60. docs/sources/design-documents/labels.md (3 changed lines)
  61. docs/sources/fundamentals/_index.md (3 changed lines)
  62. docs/sources/fundamentals/architecture/_index.md (9 changed lines)
  63. docs/sources/fundamentals/architecture/components/_index.md (13 changed lines)
  64. docs/sources/fundamentals/architecture/components/loki_architecture_components.svg (0 changed lines)
  65. docs/sources/fundamentals/architecture/deployment-modes/_index.md (11 changed lines)
  66. docs/sources/fundamentals/architecture/deployment-modes/microservices-mode.png (0 changed lines)
  67. docs/sources/fundamentals/architecture/deployment-modes/monolithic-mode.png (0 changed lines)
  68. docs/sources/fundamentals/architecture/deployment-modes/simple-scalable.png (0 changed lines)
  69. docs/sources/fundamentals/architecture/rings/_index.md (5 changed lines)
  70. docs/sources/fundamentals/architecture/rings/ring-overview.png (0 changed lines)
  71. docs/sources/fundamentals/labels.md (7 changed lines)
  72. docs/sources/fundamentals/overview/_index.md (5 changed lines)
  73. docs/sources/getting-started/_index.md (12 changed lines)
  74. docs/sources/installation/_index.md (13 changed lines)
  75. docs/sources/installation/docker.md (3 changed lines)
  76. docs/sources/installation/helm/_index.md (4 changed lines)
  77. docs/sources/installation/helm/concepts.md (4 changed lines)
  78. docs/sources/installation/helm/configure-storage/index.md (6 changed lines)
  79. docs/sources/installation/helm/install-monolithic/index.md (4 changed lines)
  80. docs/sources/installation/helm/install-scalable/index.md (4 changed lines)
  81. docs/sources/installation/helm/migrate-from-distributed/index.md (2 changed lines)
  82. docs/sources/installation/helm/migrate-to-three-scalable-targets/index.md (2 changed lines)
  83. docs/sources/installation/install-from-source.md (1 changed line)
  84. docs/sources/installation/istio.md (5 changed lines)
  85. docs/sources/installation/local.md (7 changed lines)
  86. docs/sources/installation/sizing/index.md (2 changed lines)
  87. docs/sources/installation/tanka.md (5 changed lines)
  88. docs/sources/lids/0001-Introduction.md (5 changed lines)
  89. docs/sources/lids/_index.md (3 changed lines)
  90. docs/sources/lids/template.md (5 changed lines)
  91. docs/sources/logql/_index.md (7 changed lines)
  92. docs/sources/logql/analyzer.md (4 changed lines)
  93. docs/sources/logql/ip.md (1 changed line)
  94. docs/sources/logql/log_queries/_index.md (17 changed lines)
  95. docs/sources/logql/log_queries/query_components.png (0 changed lines)
  96. docs/sources/logql/metric_queries.md (7 changed lines)
  97. docs/sources/logql/template_functions.md (3 changed lines)
  98. docs/sources/maintaining/_index.md (3 changed lines)
  99. docs/sources/maintaining/release-loki-build-image.md (3 changed lines)
  100. docs/sources/maintaining/release.md (3 changed lines)
  Some files were not shown because too many files have changed in this diff.

@@ -1,8 +1,9 @@
---
title: Best practices
description: Grafana Loki label best practices
weight: 400
---
# Grafana Loki label best practices
# Best practices
Grafana Loki is under active development, and we are constantly working to improve performance. But here are some of the most current best practices for labels that will give you the best experience with Loki.
@@ -22,7 +23,7 @@ This may seem surprising, but if applications have medium to low volume, that la
Above, we mentioned not to add labels until you _need_ them, so when would you _need_ labels? A little farther down is a section on `chunk_target_size`. If you set this to 1MB (which is reasonable), this will try to cut chunks at 1MB compressed size, which is about 5MB-ish of uncompressed logs (might be as much as 10MB depending on compression). If your logs have sufficient volume to write 5MB in less time than `max_chunk_age`, or **many** chunks in that timeframe, you might want to consider splitting it into separate streams with a dynamic label.
What you want to avoid is splitting a log file into streams, which results in chunks getting flushed because the stream is idle or hits the max age before being full. As of [Loki 1.4.0](https://grafana.com/blog/2020/04/01/loki-v1.4.0-released-with-query-statistics-and-up-to-300x-regex-optimization/), there is a metric which can help you understand why chunks are flushed: `sum by (reason) (rate(loki_ingester_chunks_flushed_total{cluster="dev"}[1m]))`.
What you want to avoid is splitting a log file into streams, which results in chunks getting flushed because the stream is idle or hits the max age before being full. As of [Loki 1.4.0](/blog/2020/04/01/loki-v1.4.0-released-with-query-statistics-and-up-to-300x-regex-optimization/), there is a metric which can help you understand why chunks are flushed: `sum by (reason) (rate(loki_ingester_chunks_flushed_total{cluster="dev"}[1m]))`.
It’s not critical that every chunk be full when flushed, but it will improve many aspects of operation. As such, our current guidance here is to avoid dynamic labels as much as possible and instead favor filter expressions. For example, don’t add a `level` dynamic label, just `|= "level=debug"` instead.
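For reference, the settings discussed above live in the ingester configuration. This is a minimal, illustrative sketch; the values are examples rather than recommendations, and the exact placement of these keys can vary between Loki versions, so check the configuration reference for your release:

```yaml
# Illustrative ingester settings related to chunk cutting and flushing.
ingester:
  chunk_target_size: 1048576   # try to cut chunks at ~1 MB compressed
  max_chunk_age: 2h            # flush a chunk once it reaches this age, full or not
  chunk_idle_period: 30m       # flush a chunk that stops receiving new lines
```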
@@ -34,11 +35,11 @@ Try to keep values bounded to as small a set as possible. We don't have perfect
## Be aware of dynamic labels applied by clients
Loki has several client options: [Promtail](https://github.com/grafana/loki/tree/master/docs/sources/clients/promtail) (which also supports systemd journal ingestion and TCP-based syslog ingestion), [Fluentd](https://github.com/grafana/loki/tree/main/clients/cmd/fluentd), [Fluent Bit](https://github.com/grafana/loki/tree/main/clients/cmd/fluent-bit), a [Docker plugin](https://grafana.com/blog/2019/07/15/lokis-path-to-ga-docker-logging-driver-plugin-support-for-systemd/), and more!
Loki has several client options: [Promtail](/grafana/loki/tree/master/docs/sources/clients/promtail) (which also supports systemd journal ingestion and TCP-based syslog ingestion), [Fluentd](https://github.com/grafana/loki/tree/main/clients/cmd/fluentd), [Fluent Bit](https://github.com/grafana/loki/tree/main/clients/cmd/fluent-bit), a [Docker plugin](/blog/2019/07/15/lokis-path-to-ga-docker-logging-driver-plugin-support-for-systemd/), and more!
Each of these come with ways to configure what labels are applied to create log streams. But be aware of what dynamic labels might be applied.
Use the Loki series API to get an idea of what your log streams look like and see if there might be ways to reduce streams and cardinality.
Series information can be queried through the [Series API](https://grafana.com/docs/loki/latest/api/#series), or you can use [logcli](https://grafana.com/docs/loki/latest/getting-started/logcli/).
Series information can be queried through the [Series API](/docs/loki/latest/api/#series), or you can use [logcli](/docs/loki/latest/getting-started/logcli/).
In Loki 1.6.0 and newer the logcli series command added the `--analyze-labels` flag specifically for debugging high cardinality labels:
@@ -69,7 +70,7 @@ Loki can cache data at many levels, which can drastically improve performance. D
## Time ordering of logs
Loki [accepts out-of-order writes](../configuration/#accept-out-of-order-writes) _by default_.
Loki [accepts out-of-order writes]({{< relref "../configuration/#accept-out-of-order-writes" >}}) _by default_.
This section identifies best practices when Loki is _not_ configured to accept out-of-order writes.
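For orientation, whether Loki accepts out-of-order writes is controlled per tenant. A hedged sketch of the relevant override is shown below; the `unordered_writes` limit exists in recent Loki versions, but confirm the key against your version's configuration reference:

```yaml
# Sketch only: reject out-of-order writes for all tenants.
limits_config:
  unordered_writes: false   # true (the default in recent releases) accepts out-of-order writes
```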
One issue many people have with Loki is their client receiving errors for out of order log entries. This happens because of this hard and fast rule within Loki:
@@ -101,7 +102,7 @@ What can we do about this? What if this was because the sources of these logs we
{job="syslog", instance="host2"} 00:00:02 i'm a syslog! <- Accepted, still in order for stream 2
```
But what if the application itself generated logs that were out of order? Well, I'm afraid this is a problem. If you are extracting the timestamp from the log line with something like [the Promtail pipeline stage](https://grafana.com/docs/loki/latest/clients/promtail/stages/timestamp/), you could instead _not_ do this and let Promtail assign a timestamp to the log lines. Or you can hopefully fix it in the application itself.
But what if the application itself generated logs that were out of order? Well, I'm afraid this is a problem. If you are extracting the timestamp from the log line with something like [the Promtail pipeline stage](/docs/loki/latest/clients/promtail/stages/timestamp/), you could instead _not_ do this and let Promtail assign a timestamp to the log lines. Or you can hopefully fix it in the application itself.
It's also worth noting that the batching nature of the Loki push API can lead to some instances of out-of-order errors being received which are really false positives. (Perhaps a batch partially succeeded and was already present; on retry, anything that had previously succeeded would return an out-of-order error, while anything new would be accepted.)

@@ -1,21 +1,22 @@
---
title: Clients
description: Grafana Loki clients
weight: 600
---
# Grafana Loki clients
# Clients
Grafana Loki supports the following official clients for sending logs:
- [Promtail](promtail/)
- [Docker Driver](docker-driver/)
- [Fluentd](fluentd/)
- [Fluent Bit](fluentbit/)
- [Logstash](logstash/)
- [Lambda Promtail](lambda-promtail/)
- [Promtail]({{<relref "promtail">}})
- [Docker Driver]({{<relref "docker-driver">}})
- [Fluentd]({{<relref "fluentd">}})
- [Fluent Bit]({{<relref "fluentbit">}})
- [Logstash]({{<relref "logstash">}})
- [Lambda Promtail]({{<relref "lambda-promtail">}})
There are also a number of third-party clients, see [Unofficial clients](#unofficial-clients).
The [xk6-loki extension](https://github.com/grafana/xk6-loki) permits [load testing Loki](k6/).
The [xk6-loki extension](https://github.com/grafana/xk6-loki) permits [load testing Loki]({{<relref "k6">}}).
## Picking a client
@@ -58,7 +59,7 @@ By adding our output plugin you can quickly try Loki without doing big configura
### Lambda Promtail
This is a workflow combining the Promtail push-api [scrape config](promtail/configuration#loki_push_api_config) and the [lambda-promtail](lambda-promtail/) AWS Lambda function which pipes logs from Cloudwatch to Loki.
This is a workflow combining the Promtail push-api [scrape config]({{<relref "promtail/configuration#loki_push_api">}}) and the [lambda-promtail]({{<relref "lambda-promtail">}}) AWS Lambda function which pipes logs from Cloudwatch to Loki.
This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS lambda logs in Loki.

@@ -1,10 +1,11 @@
---
title: AWS
description: AWS Clients
weight: 30
---
Sending logs from AWS services to Grafana Loki is a little different depending on what AWS service you are using:
* [Elastic Compute Cloud (EC2)](ec2/)
* [Elastic Container Service (ECS)](ecs/)
* [Elastic Kubernetes Service (EKS)](eks/)
* [Elastic Compute Cloud (EC2)]({{<relref "./ec2/_index.md">}})
* [Elastic Container Service (ECS)]({{<relref "./ecs/_index.md">}})
* [Elastic Kubernetes Service (EKS)]({{<relref "./eks/_index.md">}})

@@ -1,13 +1,14 @@
---
title: EC2
description: Running Promtail on AWS EC2
---
# Running Promtail on AWS EC2
# EC2
In this tutorial we're going to set up [Promtail](../../promtail/) on an AWS EC2 instance and configure it to send all its logs to a Grafana Loki instance.
In this tutorial we're going to set up [Promtail]({{< relref "../../promtail/" >}}) on an AWS EC2 instance and configure it to send all its logs to a Grafana Loki instance.
<!-- TOC -->
- [Running Promtail on AWS EC2](#running-promtail-on-aws-ec2)
- [Running Promtail on AWS EC2](#ec2)
- [Requirements](#requirements)
- [Creating an EC2 instance](#creating-an-ec2-instance)
- [Setting up Promtail](#setting-up-promtail)
@@ -47,7 +48,7 @@ aws ec2 create-security-group --group-name promtail-ec2 --description "promtail
}
```
Now let's authorize inbound access for SSH and [Promtail](../../promtail/) server:
Now let's authorize inbound access for SSH and [Promtail]({{< relref "../../promtail/" >}}) server:
```bash
aws ec2 authorize-security-group-ingress --group-id sg-02c489bbdeffdca1d --protocol tcp --port 22 --cidr 0.0.0.0/0
@@ -87,7 +88,7 @@ ssh ec2-user@ec2-13-59-62-37.us-east-2.compute.amazonaws.com
## Setting up Promtail
First let's make sure we're running as root by using `sudo -s`.
Next we'll download, install, and grant execute permission to [Promtail](../../promtail/).
Next we'll download, install, and grant execute permission to [Promtail]({{< relref "../../promtail/" >}}).
```bash
mkdir /opt/promtail && cd /opt/promtail
@@ -96,7 +97,7 @@ unzip "promtail-linux-amd64.zip"
chmod a+x "promtail-linux-amd64"
```
Now we're going to download the [Promtail configuration](../../promtail/) file below and edit it. Don't worry, we will explain what the settings mean.
Now we're going to download the [Promtail configuration]({{< relref "../../promtail/" >}}) file below and edit it. Don't worry, we will explain what the settings mean.
The file is also available as a gist at [cyriltovena/promtail-ec2.yaml][config gist].
```bash
@@ -139,11 +140,11 @@ scrape_configs:
target_label: __host__
```
The **server** section tells Promtail to bind its HTTP server to port 3100. Promtail serves HTTP pages for [troubleshooting](../../promtail/troubleshooting) service discovery and targets.
The **server** section tells Promtail to bind its HTTP server to port 3100. Promtail serves HTTP pages for [troubleshooting]({{< relref "../../promtail/troubleshooting" >}}) service discovery and targets.
The **clients** section allows you to target your Loki instance. If you're using GrafanaCloud, simply replace `<user id>` and `<api secret>` with your credentials. Otherwise just replace the whole URL with your custom Loki instance (e.g. `http://my-loki-instance.my-org.com/loki/api/v1/push`).
[Promtail](../../promtail/) uses the same [Prometheus **scrape_configs**][prometheus scrape config]. This means if you already own a Prometheus instance the config will be very similar and easy to grasp.
[Promtail]({{< relref "../../promtail/" >}}) uses the same [Prometheus **scrape_configs**][prometheus scrape config]. This means if you already own a Prometheus instance the config will be very similar and easy to grasp.
Since we're running on AWS EC2, we want to use EC2 service discovery, which allows us to scrape metadata about the current instance (and even your custom tags) and attach it to our logs. This way, managing and querying logs will be much easier.
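As a rough sketch of what that looks like, here is a condensed EC2 service discovery job; the region, labels, and path below are illustrative, and the full configuration is the gist mentioned above:

```yaml
# Condensed sketch of an EC2 service discovery job for Promtail.
scrape_configs:
  - job_name: ec2-logs
    ec2_sd_configs:
      - region: us-east-2        # discover instances in this region
    relabel_configs:
      - source_labels: [__meta_ec2_instance_id]
        target_label: instance   # keep the instance id as a label
      - source_labels: [__meta_ec2_availability_zone]
        target_label: zone
      - replacement: /var/log/**.log
        target_label: __path__   # which files to tail on the host
```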
@@ -234,7 +235,7 @@ Jul 08 15:48:57 ip-172-31-45-69.us-east-2.compute.internal promtail-linux-amd64[
Jul 08 15:48:57 ip-172-31-45-69.us-east-2.compute.internal promtail-linux-amd64[2732]: level=info ts=2020-07-08T15:48:57.56029474Z caller=main.go:67 msg="Starting Promtail" version="(version=1.6.0, branch=HEAD, revision=12c7eab8)"
```
You can now verify in Grafana that Loki has correctly received your instance logs by using the [LogQL](../../../logql/) query `{zone="us-east-2"}`.
You can now verify in Grafana that Loki has correctly received your instance logs by using the [LogQL]({{< relref "../../../logql/" >}}) query `{zone="us-east-2"}`.
![Grafana Loki logs][ec2 logs]
@@ -263,7 +264,7 @@ Note that you can use [relabeling][relabeling] to convert systemd labels to matc
That's it, save the config and you can `reboot` the machine (or simply restart the service `systemctl restart promtail.service`).
Let's head back to Grafana and verify that your Promtail logs are available in Grafana by using the [LogQL](../../../logql/) query `{unit="promtail.service"}` in Explore. Finally make sure to checkout [live tailing][live tailing] to see logs appearing as they are ingested in Loki.
Let's head back to Grafana and verify that your Promtail logs are available in Grafana by using the [LogQL]({{< relref "../../../logql/" >}}) query `{unit="promtail.service"}` in Explore. Finally make sure to checkout [live tailing][live tailing] to see logs appearing as they are ingested in Loki.
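For completeness, a hedged sketch of the journal scrape job this step relies on; the label names follow the journal target documentation, and the values are illustrative:

```yaml
# Illustrative systemd journal job for Promtail.
scrape_configs:
  - job_name: journal
    journal:
      max_age: 12h               # ignore journal entries older than this
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: unit       # lets you query {unit="promtail.service"}
```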
[promtail]: ../../promtail/README
[aws cli]: https://aws.amazon.com/cli/

@@ -1,7 +1,8 @@
---
title: ECS
description: Sending Logs From AWS Elastic Container Service (ECS)
---
# Sending Logs From AWS Elastic Container Service (ECS)
# ECS
[ECS][ECS] is the fully managed container orchestration service by Amazon. Combined with [Fargate][Fargate], you can run your container workload without the need to provision your own compute resources. In this tutorial we will see how you can leverage [Firelens][Firelens], an AWS log router, to forward all your logs and your workload metadata to a Grafana Loki instance.
@@ -9,7 +10,7 @@ After this tutorial you will able to query all your logs in one place using Graf
<!-- TOC -->
- [Sending Logs From AWS Elastic Container Service (ECS)](#sending-logs-from-aws-elastic-container-service-ecs)
- [Sending Logs From AWS Elastic Container Service (ECS)](#ecs)
- [Requirements](#requirements)
- [Setting up the ECS cluster](#setting-up-the-ecs-cluster)
- [Creating your task definition](#creating-your-task-definition)
@@ -73,7 +74,7 @@ aws iam create-role --role-name ecsTaskExecutionRole --assume-role-policy-docum
Note down the [ARN][arn] of this new role; we'll use it later to create an ECS task.
Finally we'll give the [ECS task execution policy][ecs iam](`AmazonECSTaskExecutionRolePolicy`) to the created role; this allows us to manage logs with [Firelens][Firelens]:
Finally we'll give the [ECS task execution policy][ecs iam] `AmazonECSTaskExecutionRolePolicy` to the created role; this allows us to manage logs with [Firelens][Firelens]:
```bash
aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"

@@ -1,7 +1,8 @@
---
title: EKS
description: Sending logs from EKS with Promtail
---
# Sending logs from EKS with Promtail
# EKS
In this tutorial we'll see how to set up Promtail on [EKS][eks]. Amazon Elastic Kubernetes Service (Amazon [EKS][eks]) is a fully managed Kubernetes service; using Promtail, we'll get full visibility into our cluster logs. We'll start by forwarding pod logs, then node services, and finally Kubernetes events.
@@ -9,7 +10,7 @@ After this tutorial you will able to query all your logs in one place using Graf
<!-- TOC -->
- [Sending logs from EKS with Promtail](#sending-logs-from-eks-with-promtail)
- [Sending logs from EKS with Promtail](#eks)
- [Requirements](#requirements)
- [Setting up the cluster](#setting-up-the-cluster)
- [Adding Promtail DaemonSet](#adding-promtail-daemonset)
@@ -51,7 +52,7 @@ Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-fd1
## Adding Promtail DaemonSet
To ship all your pod logs, we're going to set up [Promtail](../../promtail/) as a DaemonSet in our cluster. This means it will run on each node of the cluster; we will then configure it to find the logs of your containers on the host.
To ship all your pod logs, we're going to set up [Promtail]({{< relref "../../promtail/" >}}) as a DaemonSet in our cluster. This means it will run on each node of the cluster; we will then configure it to find the logs of your containers on the host.
What's nice about Promtail is that it uses the same [service discovery as Prometheus][prometheus conf]; you should make sure the `scrape_configs` of Promtail matches the Prometheus one. Not only is this simpler to configure, but it also means Metrics and Logs will have the same metadata (labels) attached by the Prometheus service discovery. When querying Grafana you will be able to correlate metrics and logs very quickly; you can read more about this on our [blogpost][correlate].

@@ -1,18 +1,19 @@
---
title: Docker driver
description: Docker driver client
weight: 40
---
# Docker Driver Client
# Docker driver
Grafana Loki officially supports a Docker plugin that will read logs from Docker
containers and ship them to Loki. The plugin can be configured to send the logs
to a private Loki instance or [Grafana Cloud](https://grafana.com/oss/loki).
to a private Loki instance or [Grafana Cloud](/oss/loki).
> Docker plugins are not yet supported on Windows; see the
> [Docker Engine managed plugin system](https://docs.docker.com/engine/extend) documentation for more information.
Documentation on configuring the Loki Docker Driver can be found on the
[configuration page](./configuration).
[configuration page]({{<relref "configuration.md">}}).
If you have any questions or issues using the Docker plugin feel free to open an issue in this [repository](https://github.com/grafana/loki/issues).
@ -36,7 +37,7 @@ ID NAME DESCRIPTION ENABLED
ac720b8fcfdb loki Loki Logging Driver true
```
Once the plugin is installed it can be [configured](./configuration).
Once the plugin is installed it can be [configured]({{<relref "configuration.md">}}).
## Upgrading
@@ -59,8 +60,8 @@ docker plugin disable loki --force
docker plugin rm loki
```
# Know Issues
## Known Issues
The driver keeps all logs in memory and will drop log entries if Loki is not reachable and if the quantity of `max_retries` has been exceeded. To avoid the dropping of log entries, setting `max_retries` to zero allows unlimited retries; the driver will continue trying forever until Loki is again reachable. Trying forever may have undesired consequences, because the Docker daemon will wait for the Loki driver to process all logs of a container until the container is removed. Thus, the Docker daemon might wait forever if the container is stuck.
Use Promtail's [Docker target](../promtail/configuration/#docker) or [Docker service discovery](../promtail/configuration/#docker_sd_config) to avoid this issue.
Use Promtail's [Docker target]({{<relref "../promtail/configuration/#docker">}}) or [Docker service discovery]({{<relref "../promtail/configuration/#docker_sd_config">}}) to avoid this issue.
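For illustration, a hedged sketch of the Docker service discovery alternative mentioned above; the socket path and relabeling are examples, and the full schema is in the Promtail configuration reference:

```yaml
# Illustrative docker_sd_configs job: discover containers via the Docker socket.
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: container
```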

@@ -1,14 +1,15 @@
---
title: Configuration
description: Configuring the Docker Driver
---
# Configuring the Docker Driver
# Configuration
The Docker daemon on each machine has a default logging driver and
each container will use the default driver unless configured otherwise.
## Installation
Before configuring the plugin, [install or upgrade the Grafana Loki Docker Driver Client](../../docker-driver/)
Before configuring the plugin, [install or upgrade the Grafana Loki Docker Driver Client]({{<relref "_index.md">}})
## Change the logging driver for a container
@@ -103,7 +104,7 @@ Once deployed, the Grafana service will send its logs to Loki.
## Labels
Loki can receive a set of labels along with each log line. These labels are used to index log entries and query back logs using [LogQL stream selector](../../../logql/#log-stream-selector).
Loki can receive a set of labels along with each log line. These labels are used to index log entries and query back logs using [LogQL stream selector]({{<relref "../../logql/log_queries/#log-stream-selector">}}).
By default, the Docker driver will add the following labels to each log line:
@@ -196,33 +197,33 @@ services:
To specify additional logging driver options, you can use the --log-opt NAME=VALUE flag.
| Option | Required? | Default Value | Description |
|---------------------------------|:---------:|:--------------------------:|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `loki-url` | Yes | | Loki HTTP push endpoint. |
| `loki-external-labels` | No | `container_name={{.Name}}` | Additional label value pair separated by `,` to send with logs. The value is expanded with the [Docker tag template format](https://docs.docker.com/config/containers/logging/log_tags/). (eg: `container_name={{.ID}}.{{.Name}},cluster=prod`) |
| `loki-timeout` | No | `10s` | The timeout to use when sending logs to the Loki instance. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-batch-wait` | No | `1s` | The amount of time to wait before sending a log batch complete or not. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-batch-size` | No | `1048576` | The maximum size of a log batch to send. |
| `loki-min-backoff` | No | `500ms` | The minimum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-max-backoff` | No | `5m` | The maximum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-retries` | No | `10` | The maximum amount of retries for a log batch. Setting it to `0` will retry indefinitely. |
| `loki-pipeline-stage-file` | No | | The location of a pipeline stage configuration file ([example](https://github.com/grafana/loki/blob/main/clients/cmd/docker-driver/pipeline-example.yaml)). Pipeline stages allows to parse log lines to extract more labels, [see associated documentation](../../promtail/stages/). |
| `loki-pipeline-stages` | No | | The pipeline stage configuration provided as a string [see pipeline stages](#pipeline-stages) and [associated documentation](../../promtail/stages/). |
| `loki-relabel-config` | No | | A [Prometheus relabeling configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) allowing you to rename labels [see relabeling](#relabeling). |
| `loki-tenant-id` | No | | Set the tenant id (http header`X-Scope-OrgID`) when sending logs to Loki. It can be overridden by a pipeline stage. |
| `loki-tls-ca-file` | No | | Set the path to a custom certificate authority. |
| `loki-tls-cert-file` | No | | Set the path to a client certificate file. |
| `loki-tls-key-file` | No | | Set the path to a client key. |
| `loki-tls-server-name` | No | | Name used to validate the server certificate. |
| `loki-tls-insecure-skip-verify` | No | `false` | Allow to skip tls verification. |
| `loki-proxy-url` | No | | Proxy URL use to connect to Loki. |
| `no-file` | No | `false` | This indicates the driver to not create log files on disk, however this means you won't be able to use `docker logs` on the container anymore. You can use this if you don't need to use `docker logs` and you run with limited disk space. (By default files are created) |
| `keep-file` | No | `false` | This indicates the driver to keep json log files once the container is stopped. By default files are removed, this means you won't be able to use `docker logs` once the container is stopped. |
| `max-size` | No | -1 | The maximum size of the log before it is rolled. A positive integer plus a modifier representing the unit of measure (k, m, or g). Defaults to -1 (unlimited). This is used by json-log required to keep the `docker log` command working. |
| `max-file` | No | 1 | The maximum number of log files that can be present. If rolling the logs creates excess files, the oldest file is removed. Only effective when max-size is also set. A positive integer. Defaults to 1. |
| `labels` | No | | Comma-separated list of keys of labels, which should be included in message, if these labels are specified for container. |
| `env` | No | | Comma-separated list of keys of environment variables to be included in message if they specified for a container. |
| `env-regex` | No | | A regular expression to match logging-related environment variables. Used for advanced log label options. If there is collision between the label and env keys, the value of the env takes precedence. Both options add additional fields to the labels of a logging message. |
| Option | Required? | Default Value | Description |
|---------------------------------|:---------:|:--------------------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `loki-url` | Yes | | Loki HTTP push endpoint. |
| `loki-external-labels` | No | `container_name={{.Name}}` | Additional label value pair separated by `,` to send with logs. The value is expanded with the [Docker tag template format](https://docs.docker.com/config/containers/logging/log_tags/). (eg: `container_name={{.ID}}.{{.Name}},cluster=prod`) |
| `loki-timeout` | No | `10s` | The timeout to use when sending logs to the Loki instance. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-batch-wait` | No | `1s` | The amount of time to wait before sending a log batch complete or not. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-batch-size` | No | `1048576` | The maximum size of a log batch to send. |
| `loki-min-backoff` | No | `500ms` | The minimum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-max-backoff` | No | `5m` | The maximum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-retries` | No | `10` | The maximum amount of retries for a log batch. Setting it to `0` will retry indefinitely. |
| `loki-pipeline-stage-file` | No | | The location of a pipeline stage configuration file ([example](/grafana/loki/blob/main/clients/cmd/docker-driver/pipeline-example.yaml)). Pipeline stages allows to parse log lines to extract more labels, [see associated documentation]({{<relref "../promtail/stages/_index.md">}}). |
| `loki-pipeline-stages` | No | | The pipeline stage configuration provided as a string [see pipeline stages](#pipeline-stages) and [associated documentation]({{<relref "../promtail/stages/_index.md">}}). |
| `loki-relabel-config` | No | | A [Prometheus relabeling configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) allowing you to rename labels [see relabeling](#relabeling). |
| `loki-tenant-id` | No | | Set the tenant id (http header`X-Scope-OrgID`) when sending logs to Loki. It can be overridden by a pipeline stage. |
| `loki-tls-ca-file` | No | | Set the path to a custom certificate authority. |
| `loki-tls-cert-file` | No | | Set the path to a client certificate file. |
| `loki-tls-key-file` | No | | Set the path to a client key. |
| `loki-tls-server-name` | No | | Name used to validate the server certificate. |
| `loki-tls-insecure-skip-verify` | No | `false` | Allow to skip tls verification. |
| `loki-proxy-url` | No | | Proxy URL use to connect to Loki. |
| `no-file` | No | `false` | This indicates the driver to not create log files on disk, however this means you won't be able to use `docker logs` on the container anymore. You can use this if you don't need to use `docker logs` and you run with limited disk space. (By default files are created) |
| `keep-file` | No | `false` | This indicates the driver to keep json log files once the container is stopped. By default files are removed, this means you won't be able to use `docker logs` once the container is stopped. |
| `max-size` | No | -1 | The maximum size of the log before it is rolled. A positive integer plus a modifier representing the unit of measure (k, m, or g). Defaults to -1 (unlimited). This is used by json-log required to keep the `docker log` command working. |
| `max-file` | No | 1 | The maximum number of log files that can be present. If rolling the logs creates excess files, the oldest file is removed. Only effective when max-size is also set. A positive integer. Defaults to 1. |
| `labels` | No | | Comma-separated list of keys of labels, which should be included in message, if these labels are specified for container. |
| `env` | No | | Comma-separated list of keys of environment variables to be included in message if they specified for a container. |
| `env-regex` | No | | A regular expression to match logging-related environment variables. Used for advanced log label options. If there is collision between the label and env keys, the value of the env takes precedence. Both options add additional fields to the labels of a logging message. |
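To tie a few of these options together, here is a hedged Docker Compose sketch; the URL, image, and label values are placeholders for your own environment:

```yaml
# Illustrative Compose service using some of the Loki logging driver options above.
version: "3"
services:
  app:
    image: nginx:latest
    logging:
      driver: loki
      options:
        loki-url: "http://localhost:3100/loki/api/v1/push"
        loki-retries: "5"
        loki-batch-size: "102400"
        loki-external-labels: "job=docker,cluster=dev"
```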
## Troubleshooting

@@ -1,8 +1,9 @@
---
title: Fluent Bit
description: Fluent Bit Loki Output
weight: 50
---
# Fluent Bit Loki Output
# Fluent Bit
[Fluent Bit](https://fluentbit.io/) is a fast and lightweight logs and metrics processor and forwarder that can be configured with the [Grafana Loki output plugin](https://docs.fluentbit.io/manual/pipeline/outputs/loki) to ship logs to Loki. You can define which log files you want to collect using the [`Tail`](https://docs.fluentbit.io/manual/pipeline/inputs/tail) or [`Stdin`](https://docs.fluentbit.io/manual/pipeline/inputs/standard-input) data pipeline input. Additionally, Fluent Bit supports multiple `Filter` and `Parser` plugins (`Kubernetes`, `JSON`, etc.) to structure and alter log lines.
@@ -43,7 +44,7 @@ helm upgrade --install loki-stack grafana/loki-stack \
### AWS Elastic Container Service (ECS)
You can use fluent-bit Loki Docker image as a Firelens log router in AWS ECS.
For more information about this see our [AWS documentation](../aws/ecs)
For more information about this see our [AWS documentation]({{<relref "../aws/ecs">}})
### Local
@@ -91,7 +92,7 @@ You can also adapt your plugins.conf, removing the need to change the command li
### Labels
Labels are used to [query logs](../../logql) `{container_name="nginx", cluster="us-west1"}`; they are usually metadata about the workload producing the log stream (`instance`, `container_name`, `region`, `cluster`, `level`). In Loki, labels are indexed; consequently, you should be cautious when choosing them (high-cardinality label values can have a drastic performance impact).
Labels are used to [query logs]({{<relref "../../logql">}}) `{container_name="nginx", cluster="us-west1"}`; they are usually metadata about the workload producing the log stream (`instance`, `container_name`, `region`, `cluster`, `level`). In Loki, labels are indexed; consequently, you should be cautious when choosing them (high-cardinality label values can have a drastic performance impact).
You can use `Labels`, `RemoveKeys`, `LabelKeys`, and `LabelMapPath` to control how the output plugin performs label extraction.
@@ -150,7 +151,7 @@ Buffering refers to the ability to store the records somewhere, and while they a
The blocking state with some of the input plugins is not acceptable, because it can have an undesirable side effect on the part that generates the logs. Fluent Bit implements a buffering mechanism that is based on parallel processing. Therefore, it cannot send logs in order. There are two ways of handling the out-of-order logs:
- Configure Loki to [accept out-of-order writes](../../configuration/#accept-out-of-order-writes).
- Configure Loki to [accept out-of-order writes]({{<relref "../../configuration/#accept-out-of-order-writes">}}).
- Configure the Loki output plugin to use the buffering mechanism based on [`dque`](https://github.com/joncrlsn/dque), which is compatible with the Loki server strict time ordering:

@@ -1,12 +1,13 @@
---
title: Fluentd
description: Fluentd Loki Output Plugin
weight: 60
---
# Fluentd Loki Output Plugin
# Fluentd
Grafana Loki has a [Fluentd](https://www.fluentd.org/) output plugin called
`fluent-plugin-grafana-loki` that enables shipping logs to a private Loki
instance or [Grafana Cloud](https://grafana.com/products/cloud/).
instance or [Grafana Cloud](/products/cloud/).
The plugin source code is in the [fluentd directory of the repository](https://github.com/grafana/loki/tree/main/clients/cmd/fluentd).
@@ -26,7 +27,7 @@ The Docker image `grafana/fluent-plugin-loki:master` contains [default configura
This image also uses `LOKI_URL`, `LOKI_USERNAME`, and `LOKI_PASSWORD` environment variables to specify the Loki's endpoint, user, and password (you can leave the USERNAME and PASSWORD blank if they're not used).
This image will start an instance of Fluentd to forward incoming logs to the specified Loki URL. As an alternative, containerized applications can also use the [docker driver plugin](../docker-driver/) to ship logs without needing Fluentd.
This image will start an instance of Fluentd to forward incoming logs to the specified Loki URL. As an alternative, containerized applications can also use the [docker driver plugin]({{<relref "../docker-driver/">}}) to ship logs without needing Fluentd.
### Example
@@ -146,7 +147,7 @@ Use with the `remove_keys kubernetes` option to eliminate metadata from the log.
### Multi-worker usage
Out-of-order inserts are enabled by default in Loki; refer to [accept out-of-order writes](../../configuration/#accept-out-of-order-writes).
Out-of-order inserts are enabled by default in Loki; refer to [accept out-of-order writes]({{<relref "../../configuration/#accept-out-of-order-writes">}}).
If out-of-order inserts are _disabled_, attempting to insert a log entry with an earlier timestamp after a log entry with identical labels but a later timestamp will fail with `HTTP status code: 500, message: rpc error: code = Unknown desc = Entry out of order`. Therefore, in order to use this plugin in a multi-worker Fluentd setup, you'll need to include the worker ID in the labels or otherwise [ensure log streams are always sent to the same worker](https://docs.fluentd.org/deployment/multi-process-workers#less-than-worker-n-greater-than-directive).
For example, using [fluent-plugin-record-modifier](https://github.com/repeatedly/fluent-plugin-record-modifier):
@@ -182,7 +183,7 @@ This plugin automatically adds a `fluentd_thread` label with the name of the buf
### `url`
The URL of the Loki server to send logs to. When sending data, the publish path (`../api/loki/v1/push`) will automatically be appended.
By default the url is set to `https://logs-prod-us-central1.grafana.net`, the url of the Grafana Labs [hosted Loki](https://grafana.com/products/cloud/) service.
By default the url is set to `https://logs-prod-us-central1.grafana.net`, the url of the Grafana Labs [hosted Loki](/products/cloud/) service.
#### Proxy Support

@@ -1,9 +1,10 @@
---
title: k6 load testing
description: k6 Loki extension load testing
weight: 90
---
# k6 Loki extension load testing
# k6 load testing
Grafana [k6](https://k6.io) is a modern load-testing tool.
Its clean and approachable scripting [API](https://k6.io/docs/javascript-api/)
@@ -80,12 +81,12 @@ The `Client` class exposes the following instance methods:
| method | description |
| ------ | ----------- |
| `push()` | shortcut for `pushParameterized(5, 800*1024, 1024*1024)` |
| `pushParameterized(streams, minSize, maxSize)` | execute push request ([POST /loki/api/v1/push]({{< relref "../../api/_index.md#post-lokiapiv1push" >}})) |
| `instantQuery(query, limit)` | execute instant query ([GET /loki/api/v1/query]({{< relref "../../api/_index.md#get-lokiapiv1query" >}})) |
| `client.rangeQuery(query, duration, limit)` | execute range query ([GET /loki/api/v1/query_range]({{< relref "../../api/_index.md#get-lokiapiv1query_range" >}})) |
| `client.labelsQuery(duration)` | execute labels query ([GET /loki/api/v1/labels]({{< relref "../../api/_index.md#get-lokiapiv1labels" >}})) |
| `client.labelValuesQuery(label, duration)` | execute label values query ([GET /loki/api/v1/label/\<name\>/values]({{< relref "../../api/_index.md#get-lokiapiv1labelnamevalues" >}})) |
| `client.seriesQuery(matchers, duration)` | execute series query ([GET /loki/api/v1/series]({{< relref "../../api/_index.md#series" >}})) |
| `pushParameterized(streams, minSize, maxSize)` | execute push request ([POST /loki/api/v1/push]({{< relref "../../api/#push-log-entries-to-loki" >}})) |
| `instantQuery(query, limit)` | execute instant query ([GET /loki/api/v1/query]({{< relref "../../api/#query-loki" >}})) |
| `client.rangeQuery(query, duration, limit)` | execute range query ([GET /loki/api/v1/query_range]({{< relref "../../api/#query-loki-over-a-range-of-time" >}})) |
| `client.labelsQuery(duration)` | execute labels query ([GET /loki/api/v1/labels]({{< relref "../../api/#list-labels-within-a-range-of-time" >}})) |
| `client.labelValuesQuery(label, duration)` | execute label values query ([GET /loki/api/v1/label/\<name\>/values]({{< relref "../../api/#list-label-values-within-a-range-of-time" >}})) |
| `client.seriesQuery(matchers, duration)` | execute series query ([GET /loki/api/v1/series]({{< relref "../../api/#list-series" >}})) |
**Javascript load test example:**

@@ -1,5 +1,6 @@
---
title: Log generation
description: Log generation with K6
weight: 10
---
# Log generation

@@ -1,5 +1,6 @@
---
title: Query testing
description: Query testing with K6
weight: 30
---
# Query testing

@@ -1,8 +1,9 @@
---
title: Write path testing
description: Write path testing with K6
weight: 20
---
# Write path load testing
# Write path testing
There are multiple considerations when
load testing a Loki cluster's write path.

@@ -1,10 +1,11 @@
---
title: Lambda Promtail
description: Lambda Promtail
weight: 20
---
# Lambda Promtail
Grafana Loki includes [Terraform](https://www.terraform.io/) and [CloudFormation](https://aws.amazon.com/cloudformation/) for shipping Cloudwatch and loadbalancer logs to Loki via a [lambda function](https://aws.amazon.com/lambda/). This is done via [lambda-promtail](https://github.com/grafana/loki/tree/master/tools/lambda-promtail) which processes cloudwatch events and propagates them to Loki (or a Promtail instance) via the push-api [scrape config](../promtail/configuration#loki_push_api_config).
Grafana Loki includes [Terraform](https://www.terraform.io/) and [CloudFormation](https://aws.amazon.com/cloudformation/) for shipping Cloudwatch and loadbalancer logs to Loki via a [lambda function](https://aws.amazon.com/lambda/). This is done via [lambda-promtail](https://github.com/grafana/loki/tree/master/tools/lambda-promtail) which processes cloudwatch events and propagates them to Loki (or a Promtail instance) via the push-api [scrape config]({{<relref "../promtail/configuration#loki_push_api">}}).
## Deployment
@@ -55,7 +56,7 @@ To add tenant id add `-var "tenant_id=value"`.
Note that the creation of a subscription filter on Cloudwatch in the provided Terraform file only accepts an array of log group names.
It does **not** accept strings for regex filtering on the logs contents via the subscription filters. We suggest extending the Terraform file to do so.
Or, have lambda-promtail write to Promtail and use [pipeline stages](https://grafana.com/docs/loki/latest/clients/promtail/stages/drop/).
Or, have lambda-promtail write to Promtail and use [pipeline stages](/docs/loki/latest/clients/promtail/stages/drop/).
CloudFormation:
```
@@ -84,7 +85,7 @@ To modify an existing CloudFormation stack, use [update-stack](https://docs.aws.
### Ephemeral Jobs
This workflow is intended to be an effective approach for monitoring ephemeral jobs such as those run on AWS Lambda which are otherwise hard/impossible to monitor via one of the other Loki [clients](../).
This workflow is intended to be an effective approach for monitoring ephemeral jobs such as those run on AWS Lambda which are otherwise hard/impossible to monitor via one of the other Loki [clients]({{<relref "../">}}).
Ephemeral jobs can quite easily run afoul of cardinality best practices. During high request load, an AWS lambda function might balloon in concurrency, creating many log streams in Cloudwatch. For this reason lambda-promtail defaults to **not** keeping the log stream value as a label when propagating the logs to Loki. This is only possible because new versions of Loki no longer have an ingestion ordering constraint on logs within a single stream.
@@ -110,7 +111,7 @@ Cloudfront [real-time logs](https://docs.aws.amazon.com/AmazonCloudFront/latest/
## Propagated Labels
Incoming logs can have seven special labels assigned to them which can be used in [relabeling](../promtail/configuration/#relabel_config) or later stages in a Promtail [pipeline](../promtail/pipelines/):
Incoming logs can have seven special labels assigned to them which can be used in [relabeling]({{<relref "../promtail/configuration#relabel_configs">}}) or later stages in a Promtail [pipeline]({{<relref "../promtail/pipelines/">}}):
- `__aws_log_type`: Where this log came from (Cloudwatch, Kinesis or S3).
- `__aws_cloudwatch_log_group`: The associated Cloudwatch Log Group for this log.
@@ -196,4 +197,4 @@ Instead we can pipeline Cloudwatch logs to a set of Promtails, which can mitigat
1) Using Promtail's push api along with the `use_incoming_timestamp: false` config, we let Promtail determine the timestamp based on when it ingests the logs, not the timestamp assigned by cloudwatch. Obviously, this means that we lose the origin timestamp because Promtail now assigns it, but this is a relatively small difference in a real time ingestion system like this.
2) In conjunction with (1), Promtail can coalesce logs across Cloudwatch log streams because it's no longer susceptible to out-of-order errors when combining multiple sources (lambda invocations).
One important aspect to keep in mind when running with a set of Promtails behind a load balancer is that we're effectively moving the cardinality problems from the number of log streams -> number of Promtails. If you have not configured Loki to [accept out-of-order writes](../../configuration#accept-out-of-order-writes), you'll need to assign a Promtail-specific label on each Promtail so that you don't run into out-of-order errors when the Promtails send data for the same log groups to Loki. This can easily be done via a configuration like `--client.external-labels=promtail=${HOSTNAME}` passed to Promtail.
One important aspect to keep in mind when running with a set of Promtails behind a load balancer is that we're effectively moving the cardinality problems from the number of log streams -> number of Promtails. If you have not configured Loki to [accept out-of-order writes]({{<relref "../../configuration#accept-out-of-order-writes">}}), you'll need to assign a Promtail-specific label on each Promtail so that you don't run into out-of-order errors when the Promtails send data for the same log groups to Loki. This can easily be done via a configuration like `--client.external-labels=promtail=${HOSTNAME}` passed to Promtail.

@@ -1,12 +1,13 @@
---
title: Logstash
description: Logstash
weight: 70
---
# Logstash
Grafana Loki has a [Logstash](https://www.elastic.co/logstash) output plugin called
`logstash-output-loki` that enables shipping logs to a Loki
instance or [Grafana Cloud](https://grafana.com/products/cloud/).
instance or [Grafana Cloud](/products/cloud/).
## Installation
@@ -105,7 +106,7 @@ Contains a `message` and `@timestamp` fields, which are respectively used to for
> You can use a different property for the log line by using the configuration property [`message_field`](#message_field). If you also need to change the timestamp value use the Logstash `date` filter to change the `@timestamp` field.
All other fields (except nested fields) will form the label set (key value pairs) attached to the log line. [This means you're responsible for mutating and dropping high cardinality labels](https://grafana.com/blog/2020/04/21/how-labels-in-loki-can-make-log-queries-faster-and-easier/) such as client IPs.
All other fields (except nested fields) will form the label set (key value pairs) attached to the log line. [This means you're responsible for mutating and dropping high cardinality labels](/blog/2020/04/21/how-labels-in-loki-can-make-log-queries-faster-and-easier/) such as client IPs.
You can usually do so by using a [`mutate`](https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html) filter.
**Note:** In version 1.1.0 and greater of this plugin you can also specify a list of labels to allowlist via the `include_fields` configuration.
@@ -197,12 +198,12 @@ filter {
The url of the Loki server to send logs to.
When sending data, the push path also needs to be provided, e.g. `http://localhost:3100/loki/api/v1/push`.
If you want to send to [GrafanaCloud](https://grafana.com/products/cloud/) you would use `https://logs-prod-us-central1.grafana.net/loki/api/v1/push`.
If you want to send to [GrafanaCloud](/products/cloud/) you would use `https://logs-prod-us-central1.grafana.net/loki/api/v1/push`.
#### username / password
Specify a username and password if the Loki server requires basic authentication.
If using [Grafana Labs' hosted Loki](https://grafana.com/products/cloud/), the username needs to be set to your instance/user id and the password should be a Grafana.com API key.
If using [Grafana Labs' hosted Loki](/products/cloud/), the username needs to be set to your instance/user id and the password should be a Grafana.com API key.
#### message_field

@@ -1,11 +1,12 @@
---
title: Promtail
description: Promtail
weight: 10
---
# Promtail
Promtail is an agent which ships the contents of local logs to a private Grafana Loki
instance or [Grafana Cloud](https://grafana.com/oss/loki). It is usually
instance or [Grafana Cloud](/oss/loki). It is usually
deployed to every machine that runs applications which need to be monitored.
It primarily:
@@ -34,7 +35,7 @@ Kubernetes API server while `static` usually covers all other use cases.
Just like Prometheus, `promtail` is configured using a `scrape_configs` stanza.
`relabel_configs` allows for fine-grained control of what to ingest, what to
drop, and the final metadata to attach to the log line. Refer to the docs for
[configuring Promtail](configuration/) for more details.
[configuring Promtail]({{<relref "configuration.md">}}) for more details.
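As a minimal illustration of that stanza, here is a sketch; the paths and labels are placeholders, not recommendations:

```yaml
# Tail local files and attach static labels to the resulting stream.
scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log   # glob of files for this target to tail
```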
### Support for compressed files
@@ -67,8 +68,8 @@ parsed data to Loki. Important details are:
to resume work from the last scraped line and process the rest of the remaining 55%.
* Since decompression and pushing can be very fast, depending on the size
of your compressed file Loki will rate-limit your ingestion. In that case you
might configure Promtail's [`limits` stage](https://grafana.com/docs/loki/latest/clients/promtail/stages/limit/) to slow the pace or increase
[ingestion limits on Loki](https://grafana.com/docs/loki/latest/configuration/#limits_config).
might configure Promtail's [`limits` stage](/docs/loki/latest/clients/promtail/stages/limit/) to slow the pace or increase
[ingestion limits on Loki](/docs/loki/latest/configuration/#limits_config); a sketch of the `limits` stage follows this list.
* Log rotations **aren't supported as of now**, mostly because it requires us modifying Promtail to
rely on file inodes instead of file names. If you'd like to see support for it, please create a new
issue on Github asking for it and explaining your use case.
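On the rate-limiting point above, a hedged sketch of a `limits` pipeline stage; the parameter names follow the limit stage documentation, and the values are illustrative:

```yaml
# Throttle this scrape job instead of tripping Loki's ingestion limits.
pipeline_stages:
  - limit:
      rate: 10      # lines per second
      burst: 20
      drop: false   # back off rather than drop lines when over the limit
```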
@@ -78,7 +79,7 @@ parsed data to Loki. Important details are:
## Loki Push API
Promtail can also be configured to receive logs from another Promtail or any Loki client by exposing the [Loki Push API](../../api#post-lokiapiv1push) with the [loki_push_api](configuration#loki_push_api_config) scrape config.
Promtail can also be configured to receive logs from another Promtail or any Loki client by exposing the [Loki Push API]({{<relref "../../api#push-log-entries-to-loki">}}) with the [loki_push_api]({{<relref "configuration#loki_push_api">}}) scrape config.
There are a few instances where this might be helpful:
@@ -88,12 +89,12 @@ There are a few instances where this might be helpful:
## Receiving logs From Syslog
When the [Syslog Target](configuration#syslog_config) is being used, logs
When the [Syslog Target]({{<relref "configuration#syslog">}}) is being used, logs
can be written with the syslog protocol to the configured port.
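A hedged sketch of such a syslog listener is shown below; the port and labels are illustrative:

```yaml
# Illustrative syslog target: receive syslog messages on TCP port 1514.
scrape_configs:
  - job_name: syslog
    syslog:
      listen_address: 0.0.0.0:1514
      labels:
        job: syslog
    relabel_configs:
      - source_labels: ['__syslog_message_hostname']
        target_label: host
```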
## AWS
If you need to run Promtail on Amazon Web Services EC2 instances, you can use our [detailed tutorial](../aws/ec2/).
If you need to run Promtail on Amazon Web Services EC2 instances, you can use our [detailed tutorial]({{<relref "../aws/ec2/">}}).
## Labeling and parsing
@@ -106,7 +107,7 @@ To allow more sophisticated filtering afterwards, Promtail allows to set labels
not only from service discovery, but also based on the contents of each log
line. The `pipeline_stages` can be used to add or update labels, correct the
timestamp, or re-write log lines entirely. Refer to the documentation for
[pipelines](pipelines/) for more details.
[pipelines]({{<relref "pipelines">}}) for more details.
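As a small, hedged illustration of such a pipeline (the log format implied by the regular expression is made up):

```yaml
# Parse a line, promote one field to a label, and use another as the timestamp.
pipeline_stages:
  - regex:
      expression: '^(?P<time>\S+) (?P<level>\w+) (?P<msg>.*)$'
  - labels:
      level:                  # becomes a Loki label
  - timestamp:
      source: time
      format: RFC3339
```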
## Shipping
@@ -132,7 +133,7 @@ This endpoint returns 200 when Promtail is up and running, and there's at least
### `GET /metrics`
This endpoint returns Promtail metrics for Prometheus. Refer to
[Observing Grafana Loki](../../operations/observability/) for the list
[Observing Grafana Loki]({{<relref "../../operations/observability/">}}) for the list
of exported metrics.
### Promtail web server config

@@ -1,7 +1,8 @@
---
title: Configuration
description: Configuring Promtail
---
# Configuring Promtail
# Configuration
Promtail is configured in a YAML file (usually referred to as `config.yaml`)
which contains information on the Promtail server, where positions are stored,
@@ -34,8 +35,8 @@ defined by the schema below. Brackets indicate that a parameter is optional. For
non-list parameters the value is set to the specified default.
For more detailed information on configuring how to discover and scrape logs from
targets, see [Scraping](../scraping/). For more information on transforming logs
from scraped targets, see [Pipelines](../pipelines/).
targets, see [Scraping]({{<relref "scraping">}}). For more information on transforming logs
from scraped targets, see [Pipelines]({{<relref "pipelines">}}).
### Use environment variables in the configuration
@ -394,7 +395,7 @@ docker_sd_configs:
### pipeline_stages
[Pipeline](../pipelines/) stages are used to transform log entries and their labels. The pipeline is executed after the discovery process finishes. The `pipeline_stages` object consists of a list of stages which correspond to the items listed below.
[Pipeline]({{<relref "pipelines">}}) stages are used to transform log entries and their labels. The pipeline is executed after the discovery process finishes. The `pipeline_stages` object consists of a list of stages which correspond to the items listed below.
In most cases, you extract data from logs with `regex` or `json` stages. The extracted data is transformed into a temporary map object. The data can then be used by Promtail, for example, as values for `labels` or as an `output`. Additionally, any other stage aside from `docker` and `cri` can access the extracted data.
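For example, assuming log lines are JSON objects with hypothetical `level` and `msg` fields, the extracted data can feed both a label and the output line:

```yaml
pipeline_stages:
  # Extract fields from the JSON log line into the temporary map.
  - json:
      expressions:
        level: level
        message: msg
  # Promote the extracted level to a label.
  - labels:
      level:
  # Replace the log line with the extracted message.
  - output:
      source: message
```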
@ -540,7 +541,7 @@ template:
#### match
The match stage conditionally executes a set of stages when a log entry matches
a configurable [LogQL](../../../logql/) stream selector.
a configurable [LogQL]({{<relref "../../logql/">}}) stream selector.
```yaml
match:
@ -806,8 +807,8 @@ Promtail needs to wait for the next message to catch multi-line messages,
therefore delays between messages can occur.
See recommended output configurations for
[syslog-ng](../scraping#syslog-ng-output-configuration) and
[rsyslog](../scraping#rsyslog-output-configuration). Both configurations enable
[syslog-ng]({{<relref "scraping#syslog-ng-output-configuration">}}) and
[rsyslog]({{<relref "scraping#rsyslog-output-configuration">}}). Both configurations enable
IETF Syslog with octet-counting.
You may need to increase the open files limit for the Promtail process
@ -861,7 +862,7 @@ max_message_length: <int>
### loki_push_api
The `loki_push_api` block configures Promtail to expose a [Loki push API](../../../api#post-lokiapiv1push) server.
The `loki_push_api` block configures Promtail to expose a [Loki push API]({{<relref "../../api#push-log-entries-to-loki">}}) server.
Each job configured with a `loki_push_api` will expose this API and will require a separate port.
@ -990,7 +991,7 @@ labels:
### Available Labels
When Promtail receives GCP logs, various internal labels are made available for [relabeling](#relabeling). This depends on the subscription type chosen.
When Promtail receives GCP logs, various internal labels are made available for [relabeling](#relabel_configs). This depends on the subscription type chosen.
**Internal labels available for pull**
@ -1120,7 +1121,7 @@ Each GELF message received will be encoded in JSON as the log line. For example:
{"version":"1.1","host":"example.org","short_message":"A short message","timestamp":1231231123,"level":5,"_some_extra":"extra"}
```
You can leverage [pipeline stages](pipeline_stages) with the GELF target,
You can leverage [pipeline stages]({{<relref "./stages">}}) with the GELF target,
if, for example, you want to parse the log line and extract more labels or change the log line format.
```yaml
@ -1276,7 +1277,7 @@ All Cloudflare logs are in JSON. Here is an example:
}
```
You can leverage [pipeline stages](pipeline_stages) if, for example, you want to parse the JSON log line and extract more labels or change the log line format.
You can leverage [pipeline stages]({{<relref "./stages">}}) if, for example, you want to parse the JSON log line and extract more labels or change the log line format.
### heroku_drain
@ -1455,7 +1456,7 @@ As a fallback, the file contents are also re-read periodically at the specified
refresh interval.
Each target has a meta label `__meta_filepath` during the
[relabeling phase](#relabel_config). Its value is set to the
[relabeling phase](#relabel_configs). Its value is set to the
filepath from which the target was extracted.
```yaml
@ -1978,7 +1979,7 @@ The `tracing` block configures tracing for Jaeger. Currently, limited to configu
## Example Docker Config
It's fairly difficult to tail Docker files on a standalone machine because they are in different locations for every OS. We recommend the [Docker logging driver](../../docker-driver/) for local Docker installs or Docker Compose.
It's fairly difficult to tail Docker files on a standalone machine because they are in different locations for every OS. We recommend the [Docker logging driver]({{<relref "../docker-driver/">}}) for local Docker installs or Docker Compose.
If running in a Kubernetes environment, you should look at the defined configs which are in [helm](https://github.com/grafana/helm-charts/blob/main/charts/promtail/templates/configmap.yaml) and [jsonnet](https://github.com/grafana/loki/tree/master/production/ksonnet/promtail/scrape_config.libsonnet). These leverage the Prometheus service discovery libraries (which give Promtail its name) to automatically find and tail pods. The jsonnet config explains with comments what each section is for.

@ -1,7 +1,8 @@
---
title: Cloud setup GCP Logs
description: Cloud setup GCP logs
---
# Cloud setup GCP logs
# Cloud setup GCP Logs
This document explains how to set up Google Cloud Platform to forward its cloud resource logs from a particular GCP project into a Google Pub/Sub topic so that they are available for Promtail to consume.
@ -123,7 +124,7 @@ gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--role='roles/iam.serviceAccountTokenCreator'
```
Having configured Promtail with the [GCP Logs Push target](./#push), hosted in an internet-facing and HTTPS enabled deployment, we can continue with creating
Having configured Promtail with the [GCP Logs Push target](#push), hosted in an internet-facing and HTTPS enabled deployment, we can continue with creating
the push subscription.
```bash
@ -230,7 +231,7 @@ We need a service account with following permissions.
This enables Promtail to read log entries from the pubsub subscription created before.
you can find example for Promtail scrape config for `gcplog` [here](../scraping/#gcplog-scraping)
You can find an example Promtail scrape config for `gcplog` [here]({{<relref "scraping/#gcp-log-scraping">}}).
If you are scraping logs from multiple GCP projects, then this service account should have the above permissions in all the projects you are trying to scrape.

@ -1,7 +1,8 @@
---
title: Installation
description: Install Promtail
---
# Install Promtail
# Installation
Promtail is distributed as a binary, as a Docker container image,
and as a Helm chart for installing it in a Kubernetes cluster.

@ -1,12 +1,13 @@
---
title: Promtail and Log Rotation
description: Promtail and Log Rotation
---
# Promtail and Log Rotation
## Why does log rotation matter?
At any point in time, there may be three processes working on a log file as shown in the image below.
![block_diagram](../logrotation-components.png)
![block_diagram](./logrotation-components.png)
1. Appender - A writer that keeps appending to a log file. This can be your application or a system daemon such as syslog, the Docker log driver, or the kubelet.
2. Tailer - A reader that reads log lines as they are appended, for example, agents like Promtail.
@ -28,10 +29,10 @@ In both cases, after log rotation, all new log lines are written to the original
These two methods of log rotation are shown in the following images.
### Copy and Truncate
![block_diagram](../logrotation-copy-and-truncate.png)
![block_diagram](./logrotation-copy-and-truncate.png)
### Rename and Create
![block_diagram](../logrotation-rename-and-create.png)
![block_diagram](./logrotation-rename-and-create.png)
Both types of log rotation seem to give the same result. However, there are some subtle differences.
@ -81,7 +82,7 @@ Here `create` mode works like (2) explained above. The `create` mode is optional
### Kubernetes
[Kubernetes Service Discovery in Promtail]({{<relref "./scraping.md">}}#kubernetes-discovery) also uses file-based scraping. Meaning, logs from your pods are stored on the nodes and Promtail scrapes the pod logs from the node files.
[Kubernetes Service Discovery in Promtail]({{<relref "../scraping#kubernetes-discovery">}}) also uses file-based scraping: logs from your pods are stored on the nodes, and Promtail scrapes the pod logs from the node files.
You can [configure](https://kubernetes.io/docs/concepts/cluster-administration/logging/#log-rotation) the `kubelet` process running on each node to manage log rotation via two configuration settings.
@ -138,4 +139,4 @@ If neither `kubelet` nor `CRI` is configured for rotating logs, then the `logrot
Promtail uses `polling` to watch for file changes. A `polling` mechanism combined with a [copy and truncate](#copy-and-truncate) log rotation may result in losing some logs. As explained earlier in this topic, this happens when the file is truncated before Promtail reads all the log lines from such a file.
Therefore, for a long-term solution, we strongly recommend changing the log rotation strategy to [rename and create](#rename-and-create). Alternatively, as a workaround in the short term, you can tweak the promtail client's `batchsize` [config]({{<relref "./configuration.md">}}/#clients) to set higher values (like 5M or 8M). This gives Promtail more room to read loglines without frequently waiting for push responses from the Loki server.
Therefore, for a long-term solution, we strongly recommend changing the log rotation strategy to [rename and create](#rename-and-create). Alternatively, as a workaround in the short term, you can tweak the Promtail client's `batchsize` [config]({{<relref "../configuration#clients">}}) to set higher values (like 5M or 8M). This gives Promtail more room to read log lines without frequently waiting for push responses from the Loki server.
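A sketch of such a client config, with an illustrative URL and a batch size of roughly 8M:

```yaml
clients:
  - url: http://loki.example.com:3100/loki/api/v1/push
    # Raised from the default of ~1M to give Promtail more room per push.
    batchsize: 8388608
```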

@ -1,5 +1,6 @@
---
title: Pipelines
description: Pipelines
---
# Pipelines
@ -25,13 +26,13 @@ stages:
condition.
Typical pipelines will start with a parsing stage (such as a
[regex](../stages/regex/) or [json](../stages/json/) stage) to extract data
[regex]({{<relref "stages/regex/">}}) or [json]({{<relref "stages/json/">}}) stage) to extract data
from the log line. Then, a series of action stages will be present to do
something with that extracted data. The most common action stage will be a
[labels](../stages/labels/) stage to turn extracted data into a label.
[labels]({{<relref "stages/labels/">}}) stage to turn extracted data into a label.
A common stage will also be the [match](../stages/match/) stage to selectively
apply stages or drop entries based on a [LogQL stream selector and filter expressions](../../../logql/).
A common stage will also be the [match]({{<relref "stages/match/">}}) stage to selectively
apply stages or drop entries based on a [LogQL stream selector and filter expressions]({{<relref "../../logql/">}}).
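Assuming a hypothetical `<level>: <message>` log format, such a pipeline could look like the following sketch, which drops debug lines after extracting a `level` label:

```yaml
pipeline_stages:
  # Parse the hypothetical "<level>: <message>" format.
  - regex:
      expression: '^(?P<level>\w+): (?P<msg>.*)$'
  # Promote the extracted level to a label.
  - labels:
      level:
  # Drop entries whose label set matches the selector.
  - match:
      selector: '{level="debug"}'
      action: drop
```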
Note that pipelines cannot currently be used to deduplicate logs; Grafana Loki will
receive the same log line multiple times if, for example:
@ -199,26 +200,26 @@ given log entry.
Parsing stages:
- [docker](../stages/docker/): Extract data by parsing the log line using the standard Docker format.
- [cri](../stages/cri/): Extract data by parsing the log line using the standard CRI format.
- [regex](../stages/regex/): Extract data using a regular expression.
- [json](../stages/json/): Extract data by parsing the log line as JSON.
- [docker]({{<relref "stages/docker/">}}): Extract data by parsing the log line using the standard Docker format.
- [cri]({{<relref "stages/cri/">}}): Extract data by parsing the log line using the standard CRI format.
- [regex]({{<relref "stages/regex/">}}): Extract data using a regular expression.
- [json]({{<relref "stages/json/">}}): Extract data by parsing the log line as JSON.
Transform stages:
- [multiline](../stages/multiline/): Merges multiple lines, e.g. stack traces, into multiline blocks.
- [template](../stages/template/): Use Go templates to modify extracted data.
- [multiline]({{<relref "stages/multiline/">}}): Merges multiple lines, e.g. stack traces, into multiline blocks.
- [template]({{<relref "stages/template/">}}): Use Go templates to modify extracted data.
Action stages:
- [timestamp](../stages/timestamp/): Set the timestamp value for the log entry.
- [output](../stages/output/): Set the log line text.
- [labels](../stages/labels/): Update the label set for the log entry.
- [metrics](../stages/metrics/): Calculate metrics based on extracted data.
- [tenant](../stages/tenant/): Set the tenant ID value to use for the log entry.
- [timestamp]({{<relref "stages/timestamp/">}}): Set the timestamp value for the log entry.
- [output]({{<relref "stages/output/">}}): Set the log line text.
- [labels]({{<relref "stages/labels/">}}): Update the label set for the log entry.
- [metrics]({{<relref "stages/metrics/">}}): Calculate metrics based on extracted data.
- [tenant]({{<relref "stages/tenant/">}}): Set the tenant ID value to use for the log entry.
Filtering stages:
- [match](../stages/match/): Conditionally run stages based on the label set.
- [drop](../stages/drop/): Conditionally drop log lines based on several options.
- [limit](../stages/limit/): Conditionally rate limit log lines based on several options.
- [match]({{<relref "stages/match/">}}): Conditionally run stages based on the label set.
- [drop]({{<relref "stages/drop/">}}): Conditionally drop log lines based on several options.
- [limit]({{<relref "stages/limit/">}}): Conditionally rate limit log lines based on several options.

@ -1,7 +1,8 @@
---
title: Scraping
description: Promtail Scraping (Service Discovery)
---
# Promtail Scraping (Service Discovery)
# Scraping
## File Target Discovery
@ -222,7 +223,7 @@ Here `project_id` and `subscription` are the only required fields.
- `project_id` is the GCP project id.
- `subscription` is the GCP pubsub subscription where Promtail can consume log entries from.
Before using `gcplog` target, GCP should be [configured](../gcplog-cloud) with pubsub subscription to receive logs from.
Before using the `gcplog` target, GCP should be [configured]({{<relref "gcplog-cloud">}}) with a Pub/Sub subscription from which Promtail can receive logs.
It also supports `relabeling` and `pipeline` stages just like other targets.
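A sketch of a `gcplog` scrape config, with hypothetical project and subscription names:

```yaml
scrape_configs:
  - job_name: gcplog
    gcplog:
      project_id: my-gcp-project            # hypothetical project
      subscription: my-pubsub-subscription  # hypothetical subscription
      use_incoming_timestamp: false
      labels:
        job: gcplog
```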
@ -256,7 +257,7 @@ section. This server exposes the single endpoint `POST /gcp/api/v1/push`, respon
For Google's PubSub to be able to send logs, **the Promtail server must be publicly accessible and support HTTPS**. For that, Promtail can be deployed
as part of a larger orchestration service like Kubernetes, which can handle HTTPS traffic through an ingress, or it can be hosted behind
a proxy/gateway, offloading the HTTPS to that component and routing the request to Promtail. Once that's solved, GCP can be [configured](../gcplog-cloud)
a proxy/gateway, offloading the HTTPS to that component and routing the request to Promtail. Once that's solved, GCP can be [configured]({{<relref "gcplog-cloud">}})
to send logs to Promtail.
It also supports `relabeling` and `pipeline` stages.
@ -558,5 +559,5 @@ clients:
- [ <client_option> ]
```
Refer to [`client_config`]({{< relref "configuration#client_config" >}}) from the Promtail
Refer to [`client_config`]({{< relref "configuration#clients" >}}) from the Promtail
Configuration reference for all available options.

@ -1,40 +1,41 @@
---
title: Stages
description: Stages
---
# Stages
This section is a collection of all stages Promtail supports in a
[Pipeline](../pipelines/).
[Pipeline]({{<relref "../pipelines/">}}).
Parsing stages:
- [docker](docker/): Extract data by parsing the log line using the standard Docker format.
- [cri](cri/): Extract data by parsing the log line using the standard CRI format.
- [regex](regex/): Extract data using a regular expression.
- [json](json/): Extract data by parsing the log line as JSON.
- [logfmt](logfmt/): Extract data by parsing the log line as logfmt.
- [replace](replace/): Replace data using a regular expression.
- [multiline](multiline/): Merge multiple lines into a multiline block.
- [docker]({{<relref "docker">}}): Extract data by parsing the log line using the standard Docker format.
- [cri]({{<relref "cri">}}): Extract data by parsing the log line using the standard CRI format.
- [regex]({{<relref "regex">}}): Extract data using a regular expression.
- [json]({{<relref "json">}}): Extract data by parsing the log line as JSON.
- [logfmt]({{<relref "logfmt">}}): Extract data by parsing the log line as logfmt.
- [replace]({{<relref "replace">}}): Replace data using a regular expression.
- [multiline]({{<relref "multiline">}}): Merge multiple lines into a multiline block.
Transform stages:
- [template](template/): Use Go templates to modify extracted data.
- [pack](pack/): Packs a log line in a JSON object allowing extracted values and labels to be placed inside the log line.
- [decolorize](decolorize/): Strips ANSI color sequences from the log line.
- [template]({{<relref "template">}}): Use Go templates to modify extracted data.
- [pack]({{<relref "pack">}}): Packs a log line in a JSON object allowing extracted values and labels to be placed inside the log line.
- [decolorize]({{<relref "decolorize">}}): Strips ANSI color sequences from the log line.
Action stages:
- [timestamp](timestamp/): Set the timestamp value for the log entry.
- [output](output/): Set the log line text.
- [labeldrop](labeldrop/): Drop label set for the log entry.
- [labelallow](labelallow/): Allow label set for the log entry.
- [labels](labels/): Update the label set for the log entry.
- [limit](limit/): Limit the rate lines will be sent to Loki.
- [static_labels](static_labels/): Add static-labels to the log entry.
- [metrics](metrics/): Calculate metrics based on extracted data.
- [tenant](tenant/): Set the tenant ID value to use for the log entry.
- [timestamp]({{<relref "timestamp">}}): Set the timestamp value for the log entry.
- [output]({{<relref "output">}}): Set the log line text.
  - [labeldrop]({{<relref "labeldrop">}}): Drop labels from the label set for the log entry.
  - [labelallow]({{<relref "labelallow">}}): Allow only the listed labels in the label set for the log entry.
- [labels]({{<relref "labels">}}): Update the label set for the log entry.
- [limit]({{<relref "limit">}}): Limit the rate lines will be sent to Loki.
- [static_labels]({{<relref "static_labels">}}): Add static-labels to the log entry.
- [metrics]({{<relref "metrics">}}): Calculate metrics based on extracted data.
- [tenant]({{<relref "tenant">}}): Set the tenant ID value to use for the log entry.
Filtering stages:
- [match](match/): Conditionally run stages based on the label set.
- [drop](drop/): Conditionally drop log lines based on several options.
- [match]({{<relref "match">}}): Conditionally run stages based on the label set.
- [drop]({{<relref "drop">}}): Conditionally drop log lines based on several options.

@ -1,7 +1,8 @@
---
title: cri
description: cri stage
---
# `cri` stage
# cri
The `cri` stage is a parsing stage that reads the log line using the standard CRI logging format.

@ -1,7 +1,8 @@
---
title: decolorize
description: decolorize stage
---
# `decolorize` stage
# decolorize
The `decolorize` stage is a transform stage that lets you strip
ANSI color codes from the log line, thus making it easier to

@ -1,7 +1,8 @@
---
title: docker
description: docker stage
---
# `docker` stage
# docker
The `docker` stage is a parsing stage that reads log lines in the standard
format of Docker log files.

@ -1,7 +1,8 @@
---
title: drop
description: drop stage
---
# `drop` stage
# drop
The `drop` stage is a filtering stage that lets you drop logs based on several options.
@ -106,7 +107,7 @@ Would drop this log line:
#### Drop old log lines
**NOTE** For `older_than` to work, you must be using the [timestamp](../timestamp) stage to set the timestamp from the ingested log line _before_ applying the `drop` stage.
**NOTE** For `older_than` to work, you must be using the [timestamp]({{<relref "timestamp">}}) stage to set the timestamp from the ingested log line _before_ applying the `drop` stage.
Given the pipeline:

@ -1,7 +1,8 @@
---
title: json
description: json stage
---
# `json` stage
# json
The `json` stage is a parsing stage that reads the log line as JSON and accepts
[JMESPath](http://jmespath.org/) expressions to extract data.
@ -32,7 +33,7 @@ This stage uses the Go JSON unmarshaler, which means non-string types like
numbers or booleans will be unmarshaled into those types. The extracted data
can hold non-string values and this stage does not do any type conversions;
downstream stages will need to perform correct type conversion of these values
as necessary. Please refer to the [the `template` stage](../template/) for how
as necessary. Please refer to the [`template` stage]({{<relref "template">}}) for how
to do this.
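As a sketch, assuming a JSON payload with a numeric `status` field, re-rendering the value through a `template` stage turns it into a string that downstream stages (such as `labels`) can consume:

```yaml
pipeline_stages:
  - json:
      expressions:
        status: status          # unmarshaled as a number
  - template:
      source: status
      template: '{{ .Value }}'  # template output is always a string
```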
If the value extracted is a complex type, such as an array or a JSON object, it

@ -1,7 +1,8 @@
---
title: labelallow
description: labelallow stage
---
# `labelallow` stage
# labelallow
The labelallow stage is an action stage that allows only the provided labels
to be included in the label set that is sent to Loki with the log entry.

@ -1,7 +1,8 @@
---
title: labeldrop
description: labeldrop stage
---
# `labeldrop` stage
# labeldrop
The labeldrop stage is an action stage that drops labels from
the label set that is sent to Loki with the log entry.

@ -1,7 +1,8 @@
---
title: labels
description: labels stage
---
# `labels` stage
# labels
The labels stage is an action stage that takes data from the extracted map and
modifies the label set that is sent to Loki with the log entry.

@ -1,14 +1,15 @@
---
title: limit
description: limit stage
---
# `limit` stage
# limit
The `limit` stage is a rate-limiting stage that throttles logs based on several options.
## Limit stage schema
This pipeline stage places limits on the rate or burst quantity of log lines that Promtail pushes to Loki.
The concept of having distinct burst and rate limits mirrors the approach to limits that can be set for Loki's distributor component: `ingestion_rate_mb` and `ingestion_burst_size_mb`, as defined in [limits_config](../../../../configuration/#limits_config).
The concept of having distinct burst and rate limits mirrors the approach to limits that can be set for Loki's distributor component: `ingestion_rate_mb` and `ingestion_burst_size_mb`, as defined in [limits_config]({{<relref "../../../configuration/#limits_config">}}).
```yaml
limit:
@ -77,4 +78,4 @@ Given the pipeline:
```
Would rate limit messages originating from each namespace independently.
Any message without namespace label will not be ratelimited.
Any message without a namespace label will not be rate limited.

@ -3,7 +3,7 @@ title: logfmt
menuTitle: logfmt
description: The logfmt parsing stage reads logfmt log lines and extracts the data into labels.
---
# `logfmt` stage
# logfmt
The `logfmt` stage is a parsing stage that reads the log line as [logfmt](https://brandur.org/logfmt) and allows extraction of data into labels.
@ -25,7 +25,7 @@ This stage uses the [go-logfmt](https://github.com/go-logfmt/logfmt) unmarshaler
numbers or booleans will be unmarshaled into those types. The extracted data
can hold non-string values, and this stage does not do any type conversions;
downstream stages will need to perform correct type conversion of these values
as necessary. Please refer to the [`template` stage](../template/) for how
as necessary. Please refer to the [`template` stage]({{<relref "template">}}) for how
to do this.
If the value extracted is a complex type, its value is extracted as a string.
@ -85,4 +85,4 @@ extracted data:
The second stage will parse the value of `extra` from the extracted data as logfmt
and append the following key-value pairs to the set of extracted data:
- `user`: `foo`
- `user`: `foo`

@ -1,10 +1,11 @@
---
title: match
description: match stage
---
# `match` stage
# match
The match stage is a filtering stage that conditionally applies a set of stages
or drop entries when a log entry matches a configurable [LogQL](../../../../logql/)
or drop entries when a log entry matches a configurable [LogQL]({{<relref "../../../logql">}})
stream selector and filter expressions.
## Schema
@ -47,7 +48,7 @@ match:
]
```
Refer to the [Promtail Configuration Reference](../../configuration/) for the
Refer to the [Promtail Configuration Reference]({{<relref "../configuration">}}) for the
schema on the various other stages referenced here.
### Example

@ -1,7 +1,8 @@
---
title: metrics
description: metrics stage
---
# `metrics` stage
# metrics
The `metrics` stage is an action stage that allows for defining and updating
metrics based on data from the extracted map. Note that created metrics are not

@ -1,8 +1,9 @@
---
title: multiline
title: multiline
description: multiline stage
---
# `multiline` stage
# multiline
The `multiline` stage merges multiple lines into a multiline block before passing it on to the next stage in the pipeline.

@ -1,7 +1,8 @@
---
title: output
description: output stage
---
# `output` stage
# output
The `output` stage is an action stage that takes data from the extracted map and
changes the log line that will be sent to Loki.

@ -1,7 +1,8 @@
---
title: pack
description: pack stage
---
# `pack` stage
# pack
The `pack` stage is a transform stage which lets you embed extracted values and labels into the log line by packing the log line and labels inside a JSON object.
@ -57,7 +58,7 @@ This would create a log line
}
```
**Loki 2.2 also includes a new [`unpack` parser]({{< relref "../../../logql/log_queries.md#unpack" >}}) to work with the pack stage.**
**Loki 2.2 also includes a new [`unpack` parser]({{< relref "../../../logql/log_queries/#unpack" >}}) to work with the pack stage.**
For example:

@ -1,7 +1,8 @@
---
title: regex
description: regex stage
---
# `regex` stage
# regex
The `regex` stage is a parsing stage that parses a log line using a regular
expression. Named capture groups in the regex support adding data into the

@ -1,7 +1,8 @@
---
title: replace
description: replace stage
---
# `replace` stage
# replace
The `replace` stage is a parsing stage that parses a log line using a regular
expression and replaces the log line. Named capture groups in the regex support adding data into the

@ -1,7 +1,8 @@
---
title: static_labels
description: static_labels stage
---
# `static_labels` stage
# static_labels
The static_labels stage is an action stage that adds static labels to the label set that is sent to Loki with the log entry.

@ -1,7 +1,8 @@
---
title: template
description: template stage
---
# `template` stage
# template
The `template` stage is a transform stage that lets you manipulate the values in
the extracted map using [Go's template

@ -1,11 +1,12 @@
---
title: tenant
description: tenant stage
---
# `tenant` stage
# tenant
The tenant stage is an action stage that sets the tenant ID for the log entry,
picking it from a field in the extracted data map. If the field is missing, the
default promtail client [`tenant_id`](../../configuration#client_config) will
default Promtail client [`tenant_id`]({{<relref "../configuration#clients">}}) will
be used.
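For example, a sketch that reads a hypothetical `customer_id` field from a JSON log line and uses it as the tenant ID:

```yaml
pipeline_stages:
  - json:
      expressions:
        customer_id: customer_id
  - tenant:
      source: customer_id
```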

@ -1,7 +1,8 @@
---
title: timestamp
description: timestamp stage
---
# `timestamp` stage
# timestamp
The `timestamp` stage is an action stage that can change the timestamp of a log
line before it is sent to Loki. When a `timestamp` stage is not present, the

@ -1,7 +1,8 @@
---
title: Troubleshooting
description: Troubleshooting Promtail
---
# Troubleshooting Promtail
# Troubleshooting
This document describes known failure modes of Promtail on edge cases and the
adopted trade-offs.
@ -11,7 +12,7 @@ adopted trade-offs.
Promtail can be configured to print log stream entries instead of sending them to Loki.
This can be used in combination with [piping data](#pipe-data-to-promtail) to debug or troubleshoot Promtail log parsing.
In dry run mode, Promtail still support reading from a [positions](../configuration#position_config) file however no update will be made to the targeted file, this is to ensure you can easily retry the same set of lines.
In dry run mode, Promtail still supports reading from a [positions]({{<relref "../configuration#positions">}}) file; however, no update will be made to the targeted file. This ensures you can easily retry the same set of lines.
To start Promtail in dry run mode, use the flag `--dry-run` as shown in the example below:
@ -45,7 +46,7 @@ Enable the inspection output using the `--inspect` command-line option. The `--i
cat my.log | promtail --stdin --dry-run --inspect --client.url http://127.0.0.1:3100/loki/api/v1/push
```
![screenshot](../inspect.png)
![screenshot](./inspect.png)
The output uses color to highlight changes. Additions are in green, modifications in yellow, and removals in red.
@ -74,9 +75,9 @@ This will add labels `k1` and `k2` with respective values `v1` and `v2`.
In pipe mode Promtail also supports file configuration using `--config.file`. However, note that the positions config is not used and
only **the first scrape config is used**.
[`static_configs:`](../configuration) can be used to provide static labels, although the targets property is ignored.
[`static_configs:`]({{<relref "../configuration">}}) can be used to provide static labels, although the targets property is ignored.
If you don't provide any [`scrape_config:`](../configuration#scrape_config) a default one is used which will automatically adds the following default labels: `{job="stdin",hostname="<detected_hostname>"}`.
If you don't provide any [`scrape_config:`]({{<relref "../configuration#scrape_configs">}}), a default one is used which automatically adds the following default labels: `{job="stdin",hostname="<detected_hostname>"}`.
For example, you could use the config below to parse and add the label `level` to all your piped logs:
@ -196,7 +197,7 @@ from there. This means that if new log entries have been read and pushed to the
ingester between the last sync period and the crash, these log entries will be
sent again to the ingester on Promtail restart.
If Loki is not configured to [accept out-of-order writes](../../../configuration/#accept-out-of-order-writes), Loki will reject all log lines received in
If Loki is not configured to [accept out-of-order writes]({{<relref "../../../configuration/#accept-out-of-order-writes">}}), Loki will reject all log lines received in
what it perceives to be out of order. If Promtail happens to
crash, it may re-send log lines that were sent prior to the crash. The default


@ -1,9 +1,10 @@
---
title: Community
description: Community
weight: 1100
---
# Community
1. [Governance](governance/)
1. [Getting in Touch](getting-in-touch/)
1. [Contributing](contributing/)
1. [Governance]({{<relref "governance">}})
1. [Getting in Touch]({{<relref "getting-in-touch">}})
1. [Contributing]({{<relref "contributing">}})

@ -1,5 +1,6 @@
---
title: Contributing to Loki
description: Contributing to Loki
---
# Contributing to Loki

@ -1,13 +1,14 @@
---
title: Contacting the Loki Team
description: Contacting the Loki Team
---
# Contacting the Loki Team
For questions regarding Loki:
- Open source Loki users are welcome to post technical questions on the Grafana Labs Community Forums under the Grafana Loki category at [community.grafana.com](https://community.grafana.com). Please be mindful that this is a community-driven support channel moderated by Grafana Labs staff where Loki maintainers and community members answer questions when bandwidth allows. Be sure to review the [Community Guidelines](https://community.grafana.com/guidelines) before posting.
- Users deploying Loki via [Grafana Cloud](https://grafana.com/products/cloud/) can submit support tickets via the [Grafana.com Account Portal](https://grafana.com/login).
- For questions regarding Enterprise support for Loki, you can get in touch with the Grafana Labs team [here](https://grafana.com/contact?pg=docs).
- Users deploying Loki via [Grafana Cloud](/products/cloud/) can submit support tickets via the [Grafana.com Account Portal](/login).
- For questions regarding Enterprise support for Loki, you can get in touch with the Grafana Labs team [here](/contact?pg=docs).
Your feedback is always welcome! To submit feedback or report a potential bug:

@ -1,5 +1,6 @@
---
title: Governance
description: Governance
---
# Governance
@ -50,24 +51,24 @@ In case a member leaves, the [offboarding](#offboarding) procedure is applied.
The current team members are:
- Aditya C S - [adityacs](https://github.com/adityacs)
- Cyril Tovena - [cyriltovena](https://github.com/cyriltovena) ([Grafana Labs](https://grafana.com/))
- Danny Kopping - [dannykopping](https://github.com/dannykopping) ([Grafana Labs](https://grafana.com/))
- David Kaltschmidt - [davkal](https://github.com/davkal) ([Grafana Labs](https://grafana.com/))
- Edward Welch - [slim-bean](https://github.com/slim-bean) ([Grafana Labs](https://grafana.com/))
- Goutham Veeramachaneni - [gouthamve](https://github.com/gouthamve) ([Grafana Labs](https://grafana.com/))
- Joe Elliott - [joe-elliott](https://github.com/joe-elliott) ([Grafana Labs](https://grafana.com/))
- Karsten Jeschkies - [jeschkies](https://github.com/jeschkies) ([Grafana Labs](https://grafana.com/))
- Kaviraj Kanagaraj - [kavirajk](https://github.com/kavirajk) ([Grafana Labs](https://grafana.com/))
- Cyril Tovena - [cyriltovena](https://github.com/cyriltovena) ([Grafana Labs](/))
- Danny Kopping - [dannykopping](https://github.com/dannykopping) ([Grafana Labs](/))
- David Kaltschmidt - [davkal](https://github.com/davkal) ([Grafana Labs](/))
- Edward Welch - [slim-bean](https://github.com/slim-bean) ([Grafana Labs](/))
- Goutham Veeramachaneni - [gouthamve](https://github.com/gouthamve) ([Grafana Labs](/))
- Joe Elliott - [joe-elliott](https://github.com/joe-elliott) ([Grafana Labs](/))
- Karsten Jeschkies - [jeschkies](https://github.com/jeschkies) ([Grafana Labs](/))
- Kaviraj Kanagaraj - [kavirajk](https://github.com/kavirajk) ([Grafana Labs](/))
- Li Guozhong - [liguozhong](https://github.com/liguozhong) ([Alibaba Cloud](https://alibabacloud.com/))
- Owen Diehl - [owen-d](https://github.com/owen-d) ([Grafana Labs](https://grafana.com/))
- Owen Diehl - [owen-d](https://github.com/owen-d) ([Grafana Labs](/))
- Periklis Tsirakidis - [periklis](https://github.com/periklis) ([Red Hat](https://www.redhat.com/))
- Sandeep Sukhani - [sandeepsukhani](https://github.com/sandeepsukhani) ([Grafana Labs](https://grafana.com/))
- Tom Braack - [sh0rez](https://github.com/sh0rez) ([Grafana Labs](https://grafana.com/))
- Tom Wilkie - [tomwilkie](https://github.com/tomwilkie) ([Grafana Labs](https://grafana.com/))
- Sandeep Sukhani - [sandeepsukhani](https://github.com/sandeepsukhani) ([Grafana Labs](/))
- Tom Braack - [sh0rez](https://github.com/sh0rez) ([Grafana Labs](/))
- Tom Wilkie - [tomwilkie](https://github.com/tomwilkie) ([Grafana Labs](/))
The current Loki SIG Operator team members are:
- Brett Jones - [blockloop](https://github.com/blockloop/) ([InVision](https://www.invisionapp.com/))
- Cyril Tovena - [cyriltovena](https://github.com/cyriltovena) ([Grafana Labs](https://grafana.com/))
- Cyril Tovena - [cyriltovena](https://github.com/cyriltovena) ([Grafana Labs](/))
- Gerard Vanloo - [Red-GV](https://github.com/Red-GV) ([IBM](https://www.ibm.com))
- Periklis Tsirakidis - [periklis](https://github.com/periklis) ([Red Hat](https://www.redhat.com))
- Sashank Agrawal - [sasagarw](https://github.com/sasagarw/) ([Red Hat](https://www.redhat.com))

@ -1,7 +1,8 @@
---
title: Query Frontend
description: Kubernetes Query Frontend Example
---
# Kubernetes Query Frontend Example
# Query Frontend
## Disclaimer

@ -1,5 +1,6 @@
---
title: Promtail Push API
description: Promtail Push API
weight: 20
---
# Promtail Push API
@ -63,7 +64,7 @@ rejected pushes. Users are recommended to do one of the following:
## Implementation
As discussed in this document, this feature will be implemented by copying the
existing [Loki Push API](https://grafana.com/docs/loki/latest/api/#post-lokiapiv1push)
existing [Loki Push API](/docs/loki/latest/api/#post-lokiapiv1push)
and exposing it via Promtail.
## Considered Alternatives

@ -1,10 +1,11 @@
---
title: Write-Ahead Logs
description: Write-Ahead Logs
weight: 30
---
## Write-Ahead Logs
Author: Owen Diehl - [owen-d](https://github.com/owen-d) ([Grafana Labs](https://grafana.com/))
Author: Owen Diehl - [owen-d](https://github.com/owen-d) ([Grafana Labs](/))
Date: 30/09/2020

@ -1,10 +1,11 @@
---
title: Ordering Constraint Removal
description: Ordering Constraint Removal
weight: 40
---
## Ordering Constraint Removal
Author: Owen Diehl - [owen-d](https://github.com/owen-d) ([Grafana Labs](https://grafana.com/))
Author: Owen Diehl - [owen-d](https://github.com/owen-d) ([Grafana Labs](/))
Date: 28/01/2021

@ -1,10 +1,11 @@
---
title: Design documents
description: Design documents
weight: 1300
---
# Design documents
- [Labels from Logs](labels/)
- [Promtail Push API](2020-02-promtail-push-api/)
- [Write-Ahead Logs](2020-09-write-ahead-log/)
- [Ordering Constraint Removal](2021-01-ordering-constraint-removal/)
- [Labels from Logs]({{<relref "labels">}})
- [Promtail Push API]({{<relref "2020-02-Promtail-Push-API">}})
- [Write-Ahead Logs]({{<relref "2020-09-Write-Ahead-Log">}})
- [Ordering Constraint Removal]({{<relref "2021-01-Ordering-Constraint-Removal">}})

@ -1,8 +1,9 @@
---
title: Labels
description: Labels from Logs
weight: 10
---
# Labels from Logs
# Labels
Author: Ed Welch
Date: February 2019

@ -1,8 +1,9 @@
---
title: Fundamentals
description: Grafana Loki Fundamentals
weight: 150
---
# Grafana Loki Fundamentals
# Fundamentals
This section explains fundamental concepts about Grafana Loki:

@ -1,10 +1,11 @@
---
title: Architecture
description: Grafana Loki's Architecture
weight: 200
aliases:
- /docs/loki/latest/architecture/
---
# Grafana Loki's Architecture
# Architecture
## Multi-tenancy
@ -74,7 +75,7 @@ bytes of the log entry.
### Single Store
Loki stores all data in a single object storage backend. This mode of operation became generally available with Loki 2.0 and is fast, cost-effective, and simple, not to mention where all current and future development lies. This mode uses an adapter called [`boltdb_shipper`](../../operations/storage/boltdb-shipper) to store the `index` in object storage (the same way we store `chunks`).
Loki stores all data in a single object storage backend. This mode of operation became generally available with Loki 2.0 and is fast, cost-effective, and simple, not to mention where all current and future development lies. This mode uses an adapter called [`boltdb_shipper`]({{<relref "../../operations/storage/boltdb-shipper">}}) to store the `index` in object storage (the same way we store `chunks`).
### Deprecated: Multi-store
@ -95,7 +96,7 @@ maintenance tasks. It consists of:
> Unlike the other core components of Loki, the chunk store is not a separate
> service, job, or process, but rather a library embedded in the two services
> that need to access Loki data: the [ingester](#ingester) and [querier](#querier).
> that need to access Loki data: the [ingester]({{<relref "components#ingester">}}) and [querier]({{<relref "components#querier">}}).
The chunk store relies on a unified interface to the
"[NoSQL](https://en.wikipedia.org/wiki/NoSQL)" stores (DynamoDB, Bigtable, and
@ -135,7 +136,7 @@ To summarize, the read path works as follows:
## Write Path
![chunk_diagram](chunks_diagram.png)
![chunk_diagram](./chunks_diagram.png)
To summarize, the write path works as follows:

@ -1,10 +1,11 @@
---
title: Components
description: Components
weight: 30
---
# Components
![components_diagram](../loki_architecture_components.svg)
![components_diagram](./loki_architecture_components.svg)
## Distributor
@ -31,7 +32,7 @@ Currently the only way the distributor mutates incoming data is by normalizing l
The distributor can also rate limit incoming logs based on the maximum per-tenant bitrate. It does this by checking a per tenant limit and dividing it by the current number of distributors. This allows the rate limit to be specified per tenant at the cluster level and enables us to scale the distributors up or down and have the per-distributor limit adjust accordingly. For instance, say we have 10 distributors and tenant A has a 10MB rate limit. Each distributor will allow up to 1MB/second before limiting. Now, say another large tenant joins the cluster and we need to spin up 10 more distributors. The now 20 distributors will adjust their rate limits for tenant A to `(10MB / 20 distributors) = 500KB/s`! This is how global limits allow much simpler and safer operation of the Loki cluster.
**Note: The distributor uses the `ring` component under the hood to register itself amongst it's peers and get the total number of active distributors. This is a different "key" than the ingesters use in the ring and comes from the distributor's own [ring configuration](../../../configuration#distributor_config).**
**Note: The distributor uses the `ring` component under the hood to register itself amongst its peers and get the total number of active distributors. This is a different "key" than the ingesters use in the ring and comes from the distributor's own [ring configuration]({{<relref "../../../configuration#distributor">}}).**
### Forwarding
@ -138,7 +139,7 @@ deduplicated.
### Timestamp Ordering
Loki can be configured to [accept out-of-order writes](../../configuration/#accept-out-of-order-writes).
Loki can be configured to [accept out-of-order writes]({{<relref "../../../configuration/#accept-out-of-order-writes">}}).
When not configured to accept out-of-order writes, the ingester validates that ingested log lines are in order. When an
ingester receives a log line that doesn't follow the expected order, the line
@ -153,7 +154,7 @@ Logs from each unique set of labels are built up into "chunks" in memory and
then flushed to the backing storage backend.
If an ingester process crashes or exits abruptly, all the data that has not yet
been flushed could be lost. Loki is usually configured with a [Write Ahead Log](../../operations/storage/wal) which can be _replayed_ on restart as well as with a `replication_factor` (usually 3) of each log to mitigate this risk.
been flushed could be lost. Loki is usually configured with a [Write Ahead Log]({{<relref "../../../operations/storage/wal">}}) which can be _replayed_ on restart as well as with a `replication_factor` (usually 3) of each log to mitigate this risk.
When not configured to accept out-of-order writes,
all lines pushed to Loki for a given stream (unique combination of
@ -169,7 +170,7 @@ nanosecond timestamps:
different content, the log line is accepted. This means it is possible to
have two different log lines for the same timestamp.
### Handoff - Deprecated in favor of the [WAL](../../operations/storage/wal)
### Handoff - Deprecated in favor of the [WAL]({{<relref "../../../operations/storage/wal">}})
By default, when an ingester is shutting down and tries to leave the hash ring,
it will wait to see if a new ingester tries to enter before flushing and will
@ -223,7 +224,7 @@ Caching log (filter, regexp) queries are under active development.
## Querier
The **querier** service handles queries using the [LogQL](../../logql/) query
The **querier** service handles queries using the [LogQL]({{<relref "../../../logql/">}}) query
language, fetching logs both from the ingesters and from long-term storage.
Queriers query all ingesters for in-memory data before falling back to

@ -1,5 +1,6 @@
---
title: Deployment modes
description: Deployment modes
weight: 20
---
# Deployment modes
@ -28,14 +29,14 @@ This is monolithic mode;
it runs all of Loki’s microservice components inside a single process
as a single binary or Docker image.
![monolithic mode diagram](../monolithic-mode.png)
![monolithic mode diagram](./monolithic-mode.png)
Monolithic mode is useful for getting started quickly to experiment with Loki,
as well as for small read/write volumes of up to approximately 100GB per day.
Horizontally scale up a monolithic mode deployment to more instances
by using a shared object store, and by configuring the
[`memberlist_config` section](../../../configuration/#memberlist_config)
[`ring` section]({{<relref "../../../configuration#common">}})
to share state between all instances.
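A rough sketch of the relevant configuration, assuming hypothetical peer addresses:

```yaml
memberlist:
  join_members:
    - loki-1:7946   # hypothetical peer addresses
    - loki-2:7946
common:
  ring:
    kvstore:
      store: memberlist
```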
High availability can be configured by running two Loki instances
@ -54,11 +55,11 @@ Loki provides the simple scalable deployment mode.
This deployment mode can scale to several TBs of logs per day and more.
Consider the microservices mode approach for very large Loki installations.
![simple scalable deployment mode diagram](../simple-scalable.png)
![simple scalable deployment mode diagram](./simple-scalable.png)
In this mode the component microservices of Loki are bundled into two targets:
`-target=read` and `-target=write`.
The BoltDB [compactor](../../../operations/storage/boltdb-shipper/#compactor)
The BoltDB [compactor]({{<relref "../../../operations/storage/boltdb-shipper#compactor">}})
service will run as part of the read target.
There are advantages to separating the read and write paths:
@ -89,7 +90,7 @@ Each process is invoked specifying its `target`:
* ruler
* compactor
![microservices mode diagram](../microservices-mode.png)
![microservices mode diagram](./microservices-mode.png)
Running components as individual microservices allows scaling up
by increasing the quantity of microservices.

@ -1,5 +1,6 @@
---
title: Consistent Hash Rings
description: Consistent Hash Rings
weight: 40
---
# Consistent Hash Rings
@ -34,7 +35,7 @@ These components can optionally be connected into a hash ring:
In an architecture that has three distributors and three ingesters defined,
the hash rings for these components connect the instances of same-type components.
![distributor and ingester rings](../ring-overview.png)
![distributor and ingester rings](./ring-overview.png)
Each node in the ring represents an instance of a component.
Each node has a key-value store that holds communication information
@ -49,7 +50,7 @@ For each node, the key-value store holds:
## Configuring rings
Define [ring configuration](../../../configuration/#ring_config) within the `common.ring_config` block.
Define [ring configuration]({{<relref "../../../configuration/#common">}}) within the `common.ring_config` block.
Use the default `memberlist` key-value store type unless there is
a compelling reason to use a different key-value store type.

@ -1,5 +1,6 @@
---
title: Labels
description: Labels
weight: 300
aliases:
- /docs/loki/latest/getting-started/labels/
@ -8,7 +9,7 @@ aliases:
Labels are key-value pairs and can be defined as anything! We like to refer to them as metadata to describe a log stream. If you are familiar with Prometheus, there are a few labels you are used to seeing like `job` and `instance`, and I will use those in the coming examples.
The scrape configs we provide with Grafana Loki define these labels, too. If you are using Prometheus, having consistent labels between Loki and Prometheus is one of Loki's superpowers, making it incredibly [easy to correlate your application metrics with your log data](https://grafana.com/blog/2019/05/06/how-loki-correlates-metrics-and-logs--and-saves-you-money/).
The scrape configs we provide with Grafana Loki define these labels, too. If you are using Prometheus, having consistent labels between Loki and Prometheus is one of Loki's superpowers, making it incredibly [easy to correlate your application metrics with your log data](/blog/2019/05/06/how-loki-correlates-metrics-and-logs--and-saves-you-money/).
## How Loki uses labels
@ -145,7 +146,7 @@ The two previous examples use statically defined labels with a single value; how
__path__: /var/log/apache.log
```
This regex matches every component of the log line and extracts the value of each component into a capture group. Inside the pipeline code, this data is placed in a temporary data structure that allows using it for several purposes during the processing of that log line (at which point that temp data is discarded). Much more detail about this can be found in the [Promtail pipelines](../../clients/promtail/pipelines/) documentation.
This regex matches every component of the log line and extracts the value of each component into a capture group. Inside the pipeline code, this data is placed in a temporary data structure that allows using it for several purposes during the processing of that log line (at which point that temp data is discarded). Much more detail about this can be found in the [Promtail pipelines]({{<relref "../clients/promtail/pipelines">}}) documentation.
From that regex, we will be using two of the capture groups to dynamically set two labels based on content from the log line itself:
@ -201,7 +202,7 @@ Now let's talk about Loki, where the index is typically an order of magnitude sm
Loki will effectively keep your static costs as low as possible (index size and memory requirements as well as static log storage) and make the query performance something you can control at runtime with horizontal scaling.
To see how this works, let's look back at our example of querying your access log data for a specific IP address. We don't want to use a label to store the IP address. Instead we use a [filter expression](../../logql/log_queries#line-filter-expression) to query for it:
To see how this works, let's look back at our example of querying your access log data for a specific IP address. We don't want to use a label to store the IP address. Instead we use a [filter expression]({{<relref "../logql/log_queries#line-filter-expression">}}) to query for it:
```
{job="apache"} |= "11.11.11.11"

@ -1,5 +1,6 @@
---
title: Overview
description: Overview
weight: 100
aliases:
- /docs/loki/latest/overview/
@ -21,7 +22,7 @@ An agent (also called a client) acquires logs,
turns the logs into streams,
and pushes the streams to Loki through an HTTP API.
The Promtail agent is designed for Loki installations,
but many other [Agents](../../clients/) seamlessly integrate with Loki.
but many other [Agents]({{<relref "../../clients">}}) seamlessly integrate with Loki.
![Loki agent interaction](loki-overview-2.png)
@ -30,7 +31,7 @@ Each stream identifies a set of logs associated with a unique set of labels.
A quality set of labels is key to the creation of an index that is both compact
and allows for efficient query execution.
[LogQL](../../logql) is the query language for Loki.
[LogQL]({{<relref "../../logql">}}) is the query language for Loki.
## Loki features

@ -6,7 +6,7 @@ aliases:
- /docs/loki/latest/getting-started/get-logs-into-loki/
---
# Getting started with Grafana Loki
# Getting started
This guide assists the reader to create and use a simple Loki cluster.
The cluster is intended for testing, development, and evaluation;
@ -22,7 +22,7 @@ Grafana provides a way to pose queries against the logs stored in Loki and visua
The test environment uses Docker Compose to instantiate these parts, each in its own container:
- One [single scalable deployment](../fundamentals/architecture/deployment-modes/) mode **Loki** instance has:
- One [single scalable deployment]({{<relref "../fundamentals/architecture/deployment-modes">}}) mode **Loki** instance has:
- One Loki read component
- One Loki write component
- **Minio** is Loki's storage back end in the test environment.
@ -62,10 +62,10 @@ The write component returns `ready` when you point a web browser at http://local
## Use Grafana and the test environment
Use [Grafana](https://grafana.com/docs/grafana/latest/) to query and observe the log lines captured in the Loki cluster by navigating a browser to http://localhost:3000.
The Grafana instance has Loki configured as a [datasource](https://grafana.com/docs/grafana/latest/datasources/loki/).
Use [Grafana](/docs/grafana/latest/) to query and observe the log lines captured in the Loki cluster by navigating a browser to http://localhost:3000.
The Grafana instance has Loki configured as a [datasource](/docs/grafana/latest/datasources/loki/).
Click on the Grafana instance's [Explore](https://grafana.com/docs/grafana/latest/explore/) icon to bring up the explore pane.
Click on the Grafana instance's [Explore](/docs/grafana/latest/explore/) icon to bring up the explore pane.
Use the Explore dropdown menu to choose the Loki datasource and bring up the Loki query browser.
@ -97,7 +97,7 @@ To see every log line other than those that contain the value 401:
{container="evaluate-loki_flog_1"} != "401"
```
Refer to [query examples](../logql/query_examples/) for more examples.
Refer to [query examples]({{<relref "../logql/query_examples">}}) for more examples.
## Stop and clean up the test environment

@ -1,5 +1,6 @@
---
title: Installation
description: Installation
weight: 200
---
@ -7,13 +8,13 @@ weight: 200
There are several methods of installing Loki and Promtail:
- [Install using Tanka (recommended)](tanka/)
- [Install using Helm](helm/)
- [Install through Docker or Docker Compose](docker/)
- [Install and run locally](local/)
- [Install from source](install-from-source/)
- [Install using Tanka (recommended)]({{<relref "tanka">}})
- [Install using Helm]({{<relref "helm">}})
- [Install through Docker or Docker Compose]({{<relref "docker">}})
- [Install and run locally]({{<relref "local">}})
- [Install from source]({{<relref "install-from-source">}})
The [Sizing Tool](sizing/) can be used to determine the proper cluster sizing
The [Sizing Tool]({{<relref "sizing">}}) can be used to determine the proper cluster sizing
given an expected ingestion rate and query performance. It targets the Helm
installation on Kubernetes.

@ -1,5 +1,6 @@
---
title: Docker
title: Install Grafana Loki with Docker or Docker Compose
description: Docker
weight: 30
---
# Install Grafana Loki with Docker or Docker Compose

@ -12,7 +12,7 @@ keywords:
- installation
---
# Install Loki using Helm
# Install Grafana Loki with Helm
The [Helm](https://helm.sh/) chart allows you to configure, install, and upgrade Grafana Loki within a Kubernetes cluster.
@ -22,4 +22,4 @@ This guide references the Loki Helm chart version 3.0 or greater and contains th
## Reference
[Values reference](reference)
[Values reference]({{<relref "reference">}})

@ -11,7 +11,7 @@ keywords:
- caching
---
# Components
# Helm Chart Components
This section describes the components installed by the Helm Chart.
@ -25,7 +25,7 @@ This chart includes dashboards for monitoring Loki. These require the scrape con
## Canary
This chart installs the [canary](../../../operations/loki-canary) and its alerts by default. This is another tool to verify the Loki deployment is in a healthy state. It can be disabled with `monitoring.lokiCanary.enabled=false`.
This chart installs the [canary]({{<relref "../../operations/loki-canary">}}) and its alerts by default. This is another tool to verify the Loki deployment is in a healthy state. It can be disabled with `monitoring.lokiCanary.enabled=false`.
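In `values.yaml`, that override looks like this:

```yaml
monitoring:
  lokiCanary:
    enabled: false
```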
## Gateway

@ -11,9 +11,9 @@ keywords:
- minio
---
# Configure Loki's storage
# Configure storage
The [scalable](../install-scalable/) installation requires a managed object store such as AWS S3 or Google Cloud Storage or a self-hosted store such as Minio. The [single binary](../install-monolithic/) installation can only use the filesystem for storage.
The [scalable]({{<relref "../install-scalable">}}) installation requires a managed object store such as AWS S3 or Google Cloud Storage or a self-hosted store such as Minio. The [single binary]({{<relref "../install-monolithic">}}) installation can only use the filesystem for storage.
This guide assumes Loki will be installed in one of the modes above and that a `values.yaml` file has been created.
@ -37,7 +37,7 @@ This guide assumes Loki will be installed in on of the modes above and that a `v
**To grant access to S3 via an IAM role without providing credentials:**
1. Provision an IAM role, policy and S3 bucket as described in [Storage](../../../storage/#aws-deployment-s3-single-store).
1. Provision an IAM role, policy and S3 bucket as described in [Storage]({{<relref "../../../storage/#aws-deployment-s3-single-store">}}).
- If the Terraform module was used note the annotation emitted by `terraform output -raw annotation`.
2. Add the IAM role annotation to the service account in `values.yaml`:

@ -8,11 +8,11 @@ weight: 100
keywords: []
---
# Install the single binary Helm Chart
# Install the Single Binary Helm Chart
This Helm Chart installation runs the Grafana Loki *single binary* within a Kubernetes cluster.
If the storage type is set to `filesystem`, this chart configures Loki to run the `all` target in a [monolithic mode](../../../../fundamentals/architecture/deployment-modes/#monolithic-mode), designed to work with a filesystem storage. It will also configure meta-monitoring of metrics and logs.
If the storage type is set to `filesystem`, this chart configures Loki to run the `all` target in a [monolithic mode]({{<relref "../../../fundamentals/architecture/deployment-modes#monolithic-mode">}}), designed to work with a filesystem storage. It will also configure meta-monitoring of metrics and logs.
It is not possible to install the single binary with a different storage type.

@ -14,7 +14,7 @@ keywords: []
This Helm Chart installation runs the Grafana Loki cluster within a Kubernetes cluster.
If object storage is configured, this chart configures Loki to run `read` and `write` targets in a [scalable mode](../../../../fundamentals/architecture/deployment-modes/#simple-scalable-deployment-mode), highly available architecture (3 replicas of each) designed to work with AWS S3 object storage. It will also configure meta-monitoring of metrics and logs.
If object storage is configured, this chart configures Loki to run `read` and `write` targets in a [scalable mode]({{<relref "../../../fundamentals/architecture/deployment-modes#simple-scalable-deployment-mode">}}), highly available architecture (3 replicas of each) designed to work with AWS S3 object storage. It will also configure meta-monitoring of metrics and logs.
It is not possible to run the scalable mode with the `filesystem` storage.
@ -58,7 +58,7 @@ It is not possible to run the scalable mode with the `filesystem` storage.
insecure: false
```
Consult the [Reference](../reference) for configuring other storage providers.
Consult the [Reference]({{<relref "../reference">}}) for configuring other storage providers.
- Define the AWS S3 credentials in the file.

@ -11,7 +11,7 @@ keywords:
- distributed
---
# Migrating from `loki-distributed`
# Migrate from `loki-distributed` Helm Chart
This guide will walk you through migrating to the `loki` Helm Chart, v3.0 or higher, from the `loki-distributed` Helm Chart (v0.63.2 at time of writing). The process consists of deploying the new `loki` Helm Chart alongside the existing `loki-distributed` installation. By joining the new cluster to the existing cluster's ring, you will create one large cluster. This will allow you to manually bring down the `loki-distributed` components in a safe way to avoid any data loss.

@ -12,7 +12,7 @@ keywords:
- simple
---
# Migrating to Three Scalable Targets
# Migrate To Three Scalable Targets
This guide will walk you through migrating from the old, two-target, scalable configuration to the new, three-target, scalable configuration. This new configuration introduces a `backend` component and reduces the `read` component to running just a `Querier` and `QueryFrontend`, allowing it to be run as a Kubernetes `Deployment` rather than a `StatefulSet`.

@ -1,5 +1,6 @@
---
title: Build from source
description: Build from source
weight: 50
---
# Build from source

@ -1,3 +1,8 @@
---
title: Installation instructions for Istio
description: Installation instructions for Istio
weight: 50
---
# Installation instructions for Istio
The ingester, querier, etc. might start, but if those changes are not made, you will see logs like

@ -1,8 +1,9 @@
---
title: Local
description: Install and run Grafana Loki locally
weight: 40
---
# Install and run Grafana Loki locally
# Local
In order to log events with Grafana Loki, download and install both Promtail and Loki.
- Loki is the logging engine.
@ -15,7 +16,7 @@ The configuration specifies running Loki as a single binary.
1. Navigate to the [release page](https://github.com/grafana/loki/releases/).
2. Scroll down to the Assets section under the version that you want to install.
3. Download the Loki and Promtail .zip files that correspond to your system.
**Note:** Do not download LogCLI or Loki Canary at this time. [LogCLI](../../getting-started/logcli/) allows you to run Loki queries in a command line interface. [Loki Canary](../../operations/loki-canary/) is a tool to audit Loki performance.
**Note:** Do not download LogCLI or Loki Canary at this time. `LogCLI` allows you to run Loki queries in a command line interface. [Loki Canary]({{<relref "../operations/loki-canary">}}) is a tool to audit Loki performance.
4. Unzip the package contents into the same directory. This is where the two programs will run.
5. In the command line, change directory (`cd` on most systems) to the directory with Loki and Promtail. Copy and paste the commands below into your command line to download generic configuration files.
**Note:** Use the corresponding Git refs that match your downloaded Loki version to get the correct configuration file. For example, if you are using Loki version 2.6.1, you need to use the `https://raw.githubusercontent.com/grafana/loki/v2.6.1/cmd/loki/loki-local-config.yaml` URL to download the configuration file that corresponds to the Loki version you aim to run.
@ -40,7 +41,7 @@ The configuration specifies running Loki as a single binary.
Loki runs and displays Loki logs in your command line and on http://localhost:3100/metrics.
The next step will be running an agent to send logs to Loki.
To do so with Promtail, refer to [get logs into Loki](../../getting-started/get-logs-into-loki/).
To do so with Promtail, refer to the [Promtail configuration]({{<relref "../clients/promtail">}}).
## Release binaries - openSUSE Linux only

@ -16,7 +16,7 @@ keywords: []
This tool helps to generate a Helm Chart's `values.yaml` file based on the specified
expected ingestion rate, retention, and node type. It will always configure a
[scalable](../../fundamentals/architecture/deployment-modes/#simple-scalable-deployment-mode) deployment. The storage needs to be configured after generation.
[scalable]({{<relref "../../fundamentals/architecture/deployment-modes#simple-scalable-deployment-mode">}}) deployment. The storage needs to be configured after generation.
<div id="app">

@ -1,8 +1,9 @@
---
title: Tanka
description: Install Grafana Loki with Tanka
weight: 10
---
# Install Grafana Loki with Tanka
# Tanka
[Tanka](https://tanka.dev) is a reimplementation of
[Ksonnet](https://ksonnet.io) that Grafana Labs created after Ksonnet was
@ -42,7 +43,7 @@ jb install github.com/grafana/loki/production/ksonnet/promtail@main
Revise the YAML contents of `environments/loki/main.jsonnet`, updating these variables:
- Update the `username`, `password`, and the relevant `htpasswd` variable values.
- Update the S3 or GCS variable values, depending on your object storage type. See [storage_config](https://grafana.com/docs/loki/latest/configuration/#storage_config) for more configuration details.
- Update the S3 or GCS variable values, depending on your object storage type. See [storage_config](/docs/loki/latest/configuration/#storage_config) for more configuration details.
- Remove from the configuration the S3 or GCS object storage variables that are not part of your setup.
- Update the value of `boltdb_shipper_shared_store` to the type of object storage you are using. Options are `gcs`, `s3`, `azure`, or `filesystem`. Update the `object_store` variable under the `schema_config` section to the same value.
- Update the Promtail configuration `container_root_path` variable's value to reflect your root path for the Docker daemon. Run `docker info | grep "Root Dir"` to acquire your root path.

@ -1,8 +1,9 @@
---
title: "0001: Introducing LIDs"
description: "0001: Introducing LIDs"
---
# Introduction of LIDs
# 0001: Introducing LIDs
**Author:** Danny Kopping (danny.kopping@grafana.com)
@ -50,4 +51,4 @@ Inspired by Python's [PEP](https://peps.python.org/pep-0001/) and Kafka's [KIP](
Google Docs were considered for this, but they are less useful because:
- they would need to be owned by the Grafana Labs organisation, so that they remain viewable even if the author closes their account
- we already have previous [design documents](../design-documents) in our documentation and, in a recent ([5th Jan 2023](https://docs.google.com/document/d/1MNjiHQxwFukm2J4NJRWyRgRIiK7VpokYyATzJ5ce-O8/edit#heading=h.78vexgrrtw5a)) community call, the community expressed a preference for this type of approach
- we already have previous [design documents]({{<relref "../design-documents">}}) in our documentation and, in a recent ([5th Jan 2023](https://docs.google.com/document/d/1MNjiHQxwFukm2J4NJRWyRgRIiK7VpokYyATzJ5ce-O8/edit#heading=h.78vexgrrtw5a)) community call, the community expressed a preference for this type of approach

@ -1,5 +1,6 @@
---
title: Loki Improvement Documents (LIDs)
description: Loki Improvement Documents (LIDs)
weight: 1400
---
@ -36,4 +37,4 @@ Once a PR is submitted, it will be reviewed by the sponsor, as well as intereste
- A LID is considered completed once it is either rejected or the improvement has been included in a release.
- `CHANGELOG` entries should reference LIDs where applicable.
- Significant changes to the LID process should be proposed [with a LID](https://www.google.com/search?q=recursion).
- LIDs should be shared with the community on the [`#loki` channel on Slack](https://slack.grafana.com) for comment, and the sponsor should wait **at least 2 weeks** before accepting a proposal.
- LIDs should be shared with the community on the [`#loki` channel on Slack](https://slack.grafana.com) for comment, and the sponsor should wait **at least 2 weeks** before accepting a proposal.

@ -1,8 +1,9 @@
---
title: "XXXX: Template"
description: "Template"
---
# Title
# XXXX: Template
> _NOTE: the file should be named `_DRAFT_<your-title>.md` and be placed in the `docs/sources/lids` directory.
Once accepted, it will be assigned a LID number and the file will be renamed by the sponsor.<br>
@ -56,4 +57,4 @@ _Describe the first proposal, what are the benefits and trade-offs that this app
_Describe the nth proposal(s), what are the benefits and trade-offs that these approaches have?_
## Other Notes
## Other Notes

@ -1,5 +1,6 @@
---
title: LogQL
title: "LogQL: Log query language"
description: "LogQL: Log query language"
weight: 700
---
# LogQL: Log query language
@ -10,8 +11,8 @@ LogQL uses labels and operators for filtering.
There are two types of LogQL queries:
- [Log queries](log_queries/) return the contents of log lines.
- [Metric queries](metric_queries/) extend log queries to calculate values
- [Log queries]({{<relref "log_queries">}}) return the contents of log lines.
- [Metric queries]({{<relref "metric_queries">}}) extend log queries to calculate values
based on query results.
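
As a brief illustration, the first line below is a log query and the second is a metric query built on top of it; the `container="app"` selector and the filter text are placeholders, a minimal sketch rather than a recommended query:

```logql
{container="app"} |= "error"
count_over_time({container="app"} |= "error" [5m])
```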
## Binary operators

@ -1,5 +1,5 @@
---
title: LoqQL Analyzer
title: LogQL Analyzer
menuTitle: LogQL Analyzer
description: The LogQL Analyzer is an inline educational tool for experimenting with writing LogQL queries.
weight: 60
@ -18,7 +18,7 @@ A set of example log lines are included for each format.
Use the provided example log lines, or copy and paste your own log lines into the example log lines box.
Use the provided example query, or enter your own query.
The [log stream selector](../log_queries/#log-stream-selector) remains fixed for all possible example queries.
The [log stream selector]({{<relref "./log_queries/#log-stream-selector">}}) remains fixed for all possible example queries.
Modify the remainder of the log line and click on the **Run query** button
to run the entered query against the example log lines.

@ -1,5 +1,6 @@
---
title: Matching IP addresses
description: Matching IP addresses
weight: 40
---

@ -1,12 +1,13 @@
---
title: Log queries
description: Log queries
weight: 10
---
# Log queries
All LogQL queries contain a **log stream selector**.
![parts of a query](../query_components.png)
![parts of a query](./query_components.png)
Optionally, the log stream selector can be followed by a **log pipeline**. A log pipeline is a set of stage expressions that are chained together and applied to the selected log streams. Each expression can filter out, parse, or mutate log lines and their respective labels.
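
For example, a minimal sketch of a stream selector followed by a pipeline; the label names, values, and filter expressions are placeholders:

```logql
{container="query-frontend", namespace="loki-dev"} |= "metrics.go" | logfmt | duration > 10s
```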
@ -194,7 +195,7 @@ will always run faster than
Line filter expressions are the fastest way to filter logs once the
log stream selectors have been applied.
Line filter expressions support matching IP addresses. See [Matching IP addresses](../ip/) for details.
Line filter expressions support matching IP addresses. See [Matching IP addresses]({{<relref "../ip">}}) for details.
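
A minimal sketch of a line filter that matches IP addresses; the `job` value and the CIDR range are placeholders:

```logql
{job="myapp"} |= ip("192.168.4.0/24")
```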
### Removing color codes
@ -234,7 +235,7 @@ Using Duration, Number and Bytes will convert the label value prior to comparisi
For instance, `logfmt | duration > 1m and bytes_consumed > 20MB`
If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filter those errors, see the [pipeline errors](../#pipeline-errors) section.
If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filter those errors, see the [pipeline errors]({{<relref "../#pipeline-errors">}}) section.
You can chain multiple predicates using `and` and `or`, which respectively express the `and` and `or` binary operations. `and` can be equivalently expressed by a comma, a space, or another pipe. Label filters can be placed anywhere in a log pipeline.
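
Putting this together, a sketch of a complete query that chains two label filter predicates; the stream selector and label names are placeholders:

```logql
{container="frontend"} | logfmt | duration > 1m and bytes_consumed > 20MB
```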
@ -265,11 +266,11 @@ To evaluate the logical `and` first, use parenthesis, as in this example:
> Label filter expressions are the only expression allowed after the unwrap expression. This is mainly to allow filtering errors from the metric extraction.
Label filter expressions support matching IP addresses. See [Matching IP addresses](../ip/) for details.
Label filter expressions support matching IP addresses. See [Matching IP addresses]({{<relref "../ip">}}) for details.
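
A minimal sketch of a label filter on an IP-valued label; the `remote_addr` label and the address range are assumptions for illustration:

```logql
{job="myapp"} | logfmt | remote_addr = ip("192.168.4.5-192.168.4.20")
```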
### Parser expression
Parser expressions can parse and extract labels from the log content. Those extracted labels can then be used for filtering using [label filter expressions](#label-filter-expression) or for [metric aggregations](../metric_queries).
Parser expressions can parse and extract labels from the log content. Those extracted labels can then be used for filtering using [label filter expressions](#label-filter-expression) or for [metric aggregations]({{<relref "../metric_queries">}}).
Extracted label keys are automatically sanitized by all parsers to follow the Prometheus metric name convention. (They can only contain ASCII letters and digits, as well as underscores and colons. They cannot start with a digit.)
@ -289,7 +290,7 @@ If an extracted label key name already exists in the original log stream, the ex
Loki supports [JSON](#json), [logfmt](#logfmt), [pattern](#pattern), [regexp](#regular-expression) and [unpack](#unpack) parsers.
It's easier to use the predefined parsers `json` and `logfmt` when you can. If you can't, the `pattern` and `regexp` parsers can be used for log lines with an unusual structure. The `pattern` parser is easier and faster to write; it also outperforms the `regexp` parser.
Multiple parsers can be used by a single log pipeline. This is useful for parsing complex logs. There are examples in [Multiple parsers](#multiple-parsers).
Multiple parsers can be used by a single log pipeline. This is useful for parsing complex logs. There are examples in [Multiple parsers]({{<relref "../query_examples#examples-that-use-multiple-parsers">}}).
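
As a hedged sketch of chaining parsers, assuming logfmt-formatted lines whose `msg` field itself has the shape `<method> <path> (<status>) <duration>`; the selector and field names are placeholders:

```logql
{job="myapp"} | logfmt | line_format "{{.msg}}" | pattern `<method> <path> (<status>) <duration>`
```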
#### JSON
@ -499,7 +500,7 @@ those labels:
#### unpack
The `unpack` parser parses a JSON log line, unpacking all embedded labels from Promtail's [`pack` stage]({{< relref "../clients/promtail/stages/pack.md" >}}).
The `unpack` parser parses a JSON log line, unpacking all embedded labels from Promtail's [`pack` stage]({{< relref "../../clients/promtail/stages/pack.md" >}}).
**A special property `_entry` will also be used to replace the original log line**.
For example, using `| unpack` with the log line:
@ -541,7 +542,7 @@ If we have the following labels `ip=1.1.1.1`, `status=200` and `duration=3000`(m
The above query will give us the `line` as `1.1.1.1 200 3`
See [template functions](../template_functions/) to learn about available functions in the template format.
See [template functions]({{<relref "../template_functions/">}}) to learn about available functions in the template format.
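
A sketch consistent with the description above, assuming the extracted labels `ip`, `status`, and `duration` (in milliseconds) and an arbitrary stream selector:

```logql
{container="frontend"} | logfmt | line_format "{{.ip}} {{.status}} {{div .duration 1000}}"
```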
### Labels format expression


@ -1,5 +1,6 @@
---
title: Metric queries
description: Metric queries
weight: 20
---
@ -55,7 +56,7 @@ Examples:
### Unwrapped range aggregations
Unwrapped ranges use extracted labels as sample values instead of log lines. However, to select which label will be used within the aggregation, the log query must end with an unwrap expression and optionally a label filter expression to discard [errors](../#pipeline-errors).
Unwrapped ranges use extracted labels as sample values instead of log lines. However, to select which label will be used within the aggregation, the log query must end with an unwrap expression and optionally a label filter expression to discard [errors]({{<relref "./#pipeline-errors">}}).
The unwrap expression is noted `| unwrap label_identifier` where the label identifier is the label name to use for extracting sample values.
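
A hedged example of an unwrapped range aggregation, assuming logfmt lines with a `duration_ms` field and discarding conversion errors; the selector and field name are placeholders:

```logql
quantile_over_time(0.99, {job="myapp"} | logfmt | unwrap duration_ms | __error__="" [5m])
```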
@ -91,7 +92,7 @@ Which can be used to aggregate over distinct labels dimensions by including a `w
`without` removes the listed labels from the result vector, while all other labels are preserved in the output. `by` does the opposite and drops labels that are not listed in the `by` clause, even if their label values are identical between all elements of the vector.
See [Unwrap examples](../query_examples/#unwrap-examples) for query examples that use the unwrap expression.
See [Unwrap examples]({{<relref "query_examples/#unwrap-examples">}}) for query examples that use the unwrap expression.
## Built-in aggregation operators
@ -122,7 +123,7 @@ The aggregation operators can either be used to aggregate over all label values
The `without` clause removes the listed labels from the resulting vector, keeping all others.
The `by` clause does the opposite, dropping labels that are not listed in the clause, even if their label values are identical between all elements of the vector.
See [vector aggregation examples](../query_examples/#vector-aggregation-examples) for query examples that use vector aggregation expressions.
See [vector aggregation examples]({{<relref "query_examples/#vector-aggregation-examples">}}) for query examples that use vector aggregation expressions.
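
For instance, a minimal sketch that sums error rates per host; the selector, filter text, and grouping label are placeholders:

```logql
sum by (host) (rate({job="mysql"} |= "error" [1m]))
```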
## Functions

@ -1,5 +1,6 @@
---
title: Template functions
description: Template functions
weight: 30
---
@ -714,4 +715,4 @@ Examples:
Example of a query to print how many times XYZ occurs in a line:
```logql
{job="xyzlog"} | line_format `{{ __line__ | count "XYZ"}}`
```
```

@ -1,8 +1,9 @@
---
title: Maintaining
description: Grafana Loki Maintainers' Guide
weight: 1200
---
# Grafana Loki Maintainers' Guide
# Maintaining
This section details information for maintainers of Grafana Loki.

@ -1,7 +1,8 @@
---
title: Releasing Loki Build Image
description: Releasing Loki Build Image
---
# Releasing `loki-build-image`
# Releasing Loki Build Image
The [`loki-build-image`](https://github.com/grafana/loki/tree/master/loki-build-image)
is the Docker image used to run tests and build Grafana Loki binaries in CI.

@ -1,5 +1,6 @@
---
title: Releasing Loki
title: Releasing Grafana Loki
description: Releasing Grafana Loki
---
# Releasing Grafana Loki

Some files were not shown because too many files have changed in this diff.