[docs] Create top level Send data section, part 2 (#10247)

Part of the database information architecture epic #8710.

Which issue(s) this PR fixes:
Second half of the work for issue #8741, which was started in PR #10192.
When closed, fixes #8741.

Moves the following files under the new Send data section:
./sources/clients/promtail/_index.md
./sources/clients/promtail/configuration.md
./sources/clients/promtail/gcplog-cloud.md
./sources/clients/promtail/installation.md
./sources/clients/promtail/logrotation/_index.md
./sources/clients/promtail/pipelines.md
./sources/clients/promtail/scraping.md
./sources/clients/promtail/stages/_index.md
./sources/clients/promtail/stages/cri.md
./sources/clients/promtail/stages/decolorize.md
./sources/clients/promtail/stages/docker.md
./sources/clients/promtail/stages/drop.md
./sources/clients/promtail/stages/json.md
./sources/clients/promtail/stages/labelallow.md
./sources/clients/promtail/stages/labeldrop.md
./sources/clients/promtail/stages/labels.md
./sources/clients/promtail/stages/limit.md
./sources/clients/promtail/stages/logfmt.md
./sources/clients/promtail/stages/match.md
./sources/clients/promtail/stages/metrics.md
./sources/clients/promtail/stages/multiline.md
./sources/clients/promtail/stages/output.md
./sources/clients/promtail/stages/pack.md
./sources/clients/promtail/stages/regex.md
./sources/clients/promtail/stages/replace.md
./sources/clients/promtail/stages/static_labels.md
./sources/clients/promtail/stages/template.md
./sources/clients/promtail/stages/tenant.md
./sources/clients/promtail/stages/timestamp.md
./sources/clients/promtail/troubleshooting/_index.md

This PR also:

- Revises the Clients landing page to clarify supported clients.
- Updates the metadata (descriptions, weights).
- Adds aliases to redirect from the old URLs (see the front matter example after this list).
- Updates cross-references broken by the move and renaming.
- Makes a few other small fixes (headings, typos, and so on).
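
For reference, the aliases are added in each moved page's front matter. For example, the Promtail configuration page (shown in the diff below) now carries an alias pointing back at its old `clients/` path so existing links keep working:

```yaml
---
title: Configure Promtail
menuTitle: Configuration reference
description: Configuration parameters for the Promtail agent.
aliases:
  - ../../clients/promtail/configuration/
weight: 200
---
```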


**Special notes for your reviewer**:

Please review the updates to the Clients landing page (now called Send
data), as I've made extensive edits to clarify the recommended and
supported clients.
The file is `docs/sources/send-data/_index.md`.

---------

Co-authored-by: Michel Hollands <42814411+MichelHollands@users.noreply.github.com>
83 changed files (number of changed lines in parentheses):

1. docs/sources/_index.md (1)
2. docs/sources/alert/_index.md (2)
3. docs/sources/clients/_index.md (83)
4. docs/sources/configure/_index.md (4)
5. docs/sources/configure/bp-configure.md (2)
6. docs/sources/configure/index.template (4)
7. docs/sources/get-started/bp-labels.md (2)
8. docs/sources/get-started/labels.md (2)
9. docs/sources/get-started/overview.md (2)
10. docs/sources/operations/_index.md (2)
11. docs/sources/operations/authentication.md (2)
12. docs/sources/operations/observability.md (2)
13. docs/sources/operations/request-validation-rate-limits.md (4)
14. docs/sources/query/_index.md (2)
15. docs/sources/query/log_queries/_index.md (2)
16. docs/sources/reference/_index.md (2)
17. docs/sources/reference/api.md (2)
18. docs/sources/release-notes/v2-4.md (4)
19. docs/sources/send-data/_index.md (101)
20. docs/sources/send-data/aws/_index.md (16)
21. docs/sources/send-data/docker-driver/_index.md (2)
22. docs/sources/send-data/docker-driver/configuration.md (6)
23. docs/sources/send-data/fluentbit/_index.md (2)
24. docs/sources/send-data/lambda-promtail/_index.md (6)
25. docs/sources/send-data/logstash/_index.md (2)
26. docs/sources/send-data/promtail/_index.md (13)
27. docs/sources/send-data/promtail/cloud/_index.md (17)
28. docs/sources/send-data/promtail/cloud/ec2/_index.md (19)
29. docs/sources/send-data/promtail/cloud/ec2/promtail-ec2-discovery.png (0)
30. docs/sources/send-data/promtail/cloud/ec2/promtail-ec2-final.yaml (0)
31. docs/sources/send-data/promtail/cloud/ec2/promtail-ec2-logs.png (0)
32. docs/sources/send-data/promtail/cloud/ec2/promtail-ec2.yaml (0)
33. docs/sources/send-data/promtail/cloud/ec2/promtail.service (0)
34. docs/sources/send-data/promtail/cloud/ecs/_index.md (7)
35. docs/sources/send-data/promtail/cloud/ecs/ecs-grafana.png (0)
36. docs/sources/send-data/promtail/cloud/ecs/ecs-role.json (0)
37. docs/sources/send-data/promtail/cloud/ecs/ecs-task.json (0)
38. docs/sources/send-data/promtail/cloud/eks/_index.md (7)
39. docs/sources/send-data/promtail/cloud/eks/eventrouter.yaml (0)
40. docs/sources/send-data/promtail/cloud/eks/namespace-grafana.png (0)
41. docs/sources/send-data/promtail/cloud/eks/values.yaml (0)
42. docs/sources/send-data/promtail/cloud/gcp/_index.md (15)
43. docs/sources/send-data/promtail/cloud/gcp/gcp-logs-diagram.png (0)
44. docs/sources/send-data/promtail/configuration.md (11)
45. docs/sources/send-data/promtail/installation.md (21)
46. docs/sources/send-data/promtail/logrotation/_index.md (7)
47. docs/sources/send-data/promtail/logrotation/logrotation-components.png (0)
48. docs/sources/send-data/promtail/logrotation/logrotation-copy-and-truncate.png (0)
49. docs/sources/send-data/promtail/logrotation/logrotation-rename-and-create.png (0)
50. docs/sources/send-data/promtail/pipelines.md (7)
51. docs/sources/send-data/promtail/scraping.md (15)
52. docs/sources/send-data/promtail/stages/_index.md (11)
53. docs/sources/send-data/promtail/stages/cri.md (7)
54. docs/sources/send-data/promtail/stages/decolorize.md (7)
55. docs/sources/send-data/promtail/stages/docker.md (7)
56. docs/sources/send-data/promtail/stages/drop.md (7)
57. docs/sources/send-data/promtail/stages/eventlogmessage.md (7)
58. docs/sources/send-data/promtail/stages/geoip.md (11)
59. docs/sources/send-data/promtail/stages/json.md (7)
60. docs/sources/send-data/promtail/stages/labelallow.md (7)
61. docs/sources/send-data/promtail/stages/labeldrop.md (7)
62. docs/sources/send-data/promtail/stages/labels.md (7)
63. docs/sources/send-data/promtail/stages/limit.md (7)
64. docs/sources/send-data/promtail/stages/logfmt.md (8)
65. docs/sources/send-data/promtail/stages/match.md (7)
66. docs/sources/send-data/promtail/stages/metrics.md (7)
67. docs/sources/send-data/promtail/stages/multiline.md (6)
68. docs/sources/send-data/promtail/stages/output.md (7)
69. docs/sources/send-data/promtail/stages/pack.md (7)
70. docs/sources/send-data/promtail/stages/regex.md (7)
71. docs/sources/send-data/promtail/stages/replace.md (7)
72. docs/sources/send-data/promtail/stages/sampling.md (7)
73. docs/sources/send-data/promtail/stages/static_labels.md (7)
74. docs/sources/send-data/promtail/stages/template.md (7)
75. docs/sources/send-data/promtail/stages/tenant.md (7)
76. docs/sources/send-data/promtail/stages/timestamp.md (7)
77. docs/sources/send-data/promtail/troubleshooting/_index.md (10)
78. docs/sources/send-data/promtail/troubleshooting/inspect.png (0)
79. docs/sources/setup/_index.md (2)
80. docs/sources/setup/install/local.md (2)
81. docs/sources/setup/upgrade/_index.md (4)
82. docs/sources/storage/_index.md (2)
83. docs/sources/visualize/grafana.md (2)

@ -3,6 +3,7 @@ title: Grafana Loki documentation
description: "Technical documentation for Grafana Loki"
aliases:
- /docs/loki/
weight: 100
---
# Grafana Loki documentation

@ -5,7 +5,7 @@ description: Learn how the rule evaluates queries for alerting.
aliases:
- ./rules/
- ./alerting/
weight: 850
weight: 800
keywords:
- loki
- alert

@ -1,83 +0,0 @@
---
title: Clients
description: Grafana Loki clients
weight: 600
---
# Clients
Grafana Loki works with the following clients for sending logs:
- [Promtail]({{< relref "./promtail" >}})
- [Docker driver]({{< relref "../send-data/docker-driver" >}}).
- [Fluentd]({{< relref "../send-data/fluentd" >}})
- [Fluent Bit]({{< relref "../send-data/fluentbit" >}})
- [Logstash]({{< relref "../send-data/logstash" >}})
- [Lambda Promtail]({{< relref "../send-data/lambda-promtail" >}})
There are also a number of third-party clients, see [Unofficial clients](#unofficial-clients).
The [xk6-loki extension](https://github.com/grafana/xk6-loki) permits [load testing Loki]({{< relref "../send-data/k6" >}}).
## Picking a client
While all clients can be used simultaneously to cover multiple use cases, which
client is initially picked to send logs depends on your use case.
### Promtail
Promtail is the client of choice when you're running Kubernetes, as you can
configure it to automatically scrape logs from pods running on the same node
that Promtail runs on. Promtail and Prometheus running together in Kubernetes
enables powerful debugging: if Prometheus and Promtail use the same labels,
users can use tools like Grafana to switch between metrics and logs based on the
label set.
Promtail is also the client of choice on bare-metal since it can be configured
to tail logs from all files given a host path. It is the easiest way to send
logs to Loki from plain-text files (e.g., things that log to `/var/log/*.log`).
Lastly, Promtail works well if you want to extract metrics from logs such as
counting the occurrences of a particular message.
### Docker Logging Driver
When using Docker and not Kubernetes, the Docker logging driver for Loki should
be used as it automatically adds labels appropriate to the running container.
### Fluentd and Fluent Bit
The Fluentd and Fluent Bit plugins are ideal when you already have Fluentd deployed
and you already have configured `Parser` and `Filter` plugins.
Fluentd also works well for extracting metrics from logs when using its
Prometheus plugin.
### Logstash
If you are already using logstash and/or beats, this will be the easiest way to start.
By adding our output plugin you can quickly try Loki without doing big configuration changes.
### Lambda Promtail
This is a workflow combining the Promtail push-api [scrape config]({{< relref "./promtail/configuration#loki_push_api" >}}) and the [lambda-promtail]({{< relref "../send-data/lambda-promtail" >}}) AWS Lambda function which pipes logs from Cloudwatch to Loki.
This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS lambda logs in Loki.
## Unofficial clients
Note that the Loki API is not stable yet, so breaking changes might occur
when using or writing a third-party client.
- [promtail-client](https://github.com/afiskon/promtail-client) (Go)
- [push-to-loki.py](https://github.com/sleleko/devops-kb/blob/master/python/push-to-loki.py) (Python 3)
- [python-logging-loki](https://pypi.org/project/python-logging-loki/) (Python 3)
- [Serilog-Sinks-Loki](https://github.com/JosephWoodward/Serilog-Sinks-Loki) (C#)
- [NLog-Targets-Loki](https://github.com/corentinaltepe/nlog.loki) (C#)
- [loki-logback-appender](https://github.com/loki4j/loki-logback-appender) (Java)
- [Log4j2 appender for Loki](https://github.com/tkowalcz/tjahzi) (Java)
- [mjaron-tinyloki-java](https://github.com/mjfryc/mjaron-tinyloki-java) (Java)
- [LokiLogger.jl](https://github.com/JuliaLogging/LokiLogger.jl) (Julia)
- [winston-loki](https://github.com/JaniAnttonen/winston-loki) (JS)
- [ilogtail](https://github.com/alibaba/ilogtail) (Go)
- [Vector Loki Sink](https://vector.dev/docs/reference/configuration/sinks/loki/)
- [Cribl Loki Destination](https://docs.cribl.io/stream/destinations-loki)

@ -1,11 +1,11 @@
---
title: Grafana Loki configuration parameters
menuTitle: Configuration parameters
menuTitle: Configure
description: Configuration reference for the parameters used to configure Grafana Loki.
aliases:
- ../configuration
- ../configure
weight: 500
weight: 400
---
# Grafana Loki configuration parameters

@ -46,7 +46,7 @@ What can we do about this? What if this was because the sources of these logs we
{job="syslog", instance="host2"} 00:00:02 i'm a syslog! <- Accepted, still in order for stream 2
```
But what if the application itself generated logs that were out of order? Well, I'm afraid this is a problem. If you are extracting the timestamp from the log line with something like [the Promtail pipeline stage](/docs/loki/latest/clients/promtail/stages/timestamp/), you could instead _not_ do this and let Promtail assign a timestamp to the log lines. Or you can hopefully fix it in the application itself.
But what if the application itself generated logs that were out of order? Well, I'm afraid this is a problem. If you are extracting the timestamp from the log line with something like [the Promtail pipeline stage](/docs/loki/latest/send-data/promtail/stages/timestamp/), you could instead _not_ do this and let Promtail assign a timestamp to the log lines. Or you can hopefully fix it in the application itself.
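
As an illustrative sketch (the field name and regex below are assumptions, not from this document), a pipeline that parses a timestamp out of the line looks like the following; removing the `timestamp` stage is the alternative described above, letting Promtail assign the scrape time instead:

```yaml
scrape_configs:
  - job_name: app
    pipeline_stages:
      # Pull a named capture group out of the log line.
      - regex:
          expression: '^(?P<time>\S+) (?P<message>.*)$'
      # Use the captured value as the entry's timestamp.
      # Omitting this stage lets Promtail assign the ingestion time.
      - timestamp:
          source: time
          format: RFC3339
    static_configs:
      - targets: [localhost]
        labels:
          job: app
          __path__: /var/log/app.log
```
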
It's also worth noting that the batching nature of the Loki push API can lead to some instances of out of order errors being received which are really false positives. (Perhaps a batch partially succeeded and was present; or anything that previously succeeded would return an out of order entry; or anything new would be accepted.)

@ -1,11 +1,11 @@
---
title: Grafana Loki configuration parameters
menuTitle: Configuration parameters
menuTitle: Configure
description: Configuration reference for the parameters used to configure Grafana Loki.
aliases:
- ../configuration
- ../configure
weight: 500
weight: 400
---
# Grafana Loki configuration parameters

@ -38,7 +38,7 @@ Try to keep values bounded to as small a set as possible. We don't have perfect
## Be aware of dynamic labels applied by clients
Loki has several client options: [Promtail](/grafana/loki/blob/main/docs/sources/clients/promtail) (which also supports systemd journal ingestion and TCP-based syslog ingestion), [Fluentd](https://github.com/grafana/loki/tree/main/clients/cmd/fluentd), [Fluent Bit](https://github.com/grafana/loki/tree/main/clients/cmd/fluent-bit), a [Docker plugin](/blog/2019/07/15/lokis-path-to-ga-docker-logging-driver-plugin-support-for-systemd/), and more!
Loki has several client options: [Promtail](/grafana/loki/blob/main/docs/sources/send-data/promtail) (which also supports systemd journal ingestion and TCP-based syslog ingestion), [Fluentd](https://github.com/grafana/loki/tree/main/send-data/cmd/fluentd), [Fluent Bit](https://github.com/grafana/loki/tree/main/send-data/cmd/fluent-bit), a [Docker plugin](/blog/2019/07/15/lokis-path-to-ga-docker-logging-driver-plugin-support-for-systemd/), and more!
Each of these come with ways to configure what labels are applied to create log streams. But be aware of what dynamic labels might be applied.
Use the Loki series API to get an idea of what your log streams look like and see if there might be ways to reduce streams and cardinality.

@ -148,7 +148,7 @@ The two previous examples use statically defined labels with a single value; how
__path__: /var/log/apache.log
```
This regex matches every component of the log line and extracts the value of each component into a capture group. Inside the pipeline code, this data is placed in a temporary data structure that allows using it for several purposes during the processing of that log line (at which point that temp data is discarded). Much more detail about this can be found in the [Promtail pipelines]({{< relref "../clients/promtail/pipelines" >}}) documentation.
This regex matches every component of the log line and extracts the value of each component into a capture group. Inside the pipeline code, this data is placed in a temporary data structure that allows using it for several purposes during the processing of that log line (at which point that temp data is discarded). Much more detail about this can be found in the [Promtail pipelines]({{< relref "../send-data/promtail/pipelines" >}}) documentation.
From that regex, we will be using two of the capture groups to dynamically set two labels based on content from the log line itself:

@ -24,7 +24,7 @@ An agent (also called a client) acquires logs,
turns the logs into streams,
and pushes the streams to Loki through an HTTP API.
The Promtail agent is designed for Loki installations,
but many other [Agents]({{< relref "../clients" >}}) seamlessly integrate with Loki.
but many other [Agents]({{< relref "../send-data" >}}) seamlessly integrate with Loki.
![Loki agent interaction](../loki-overview-2.png "Loki agent interaction")

@ -1,7 +1,7 @@
---
title: Operations
description: Operations
weight: 800
weight: 900
---
# Operations

@ -20,4 +20,4 @@ of populating this value should be handled by the authenticating reverse proxy.
Read the [multi-tenancy]({{< relref "./multi-tenancy" >}}) documentation for more information.
For information on authenticating Promtail, please see the docs for [how to
configure Promtail]({{< relref "../clients/promtail/configuration" >}}).
configure Promtail]({{< relref "../send-data/promtail/configuration" >}}).

@ -91,7 +91,7 @@ Most of these metrics are counters and should continuously increase during norma
If Promtail uses any pipelines with metrics stages, those metrics will also be
exposed by Promtail at its `/metrics` endpoint. See Promtail's documentation on
[Pipelines]({{< relref "../clients/promtail/pipelines" >}}) for more information.
[Pipelines]({{< relref "../send-data/promtail/pipelines" >}}) for more information.
An example Grafana dashboard was built by the community and is available as
dashboard [10004](/dashboards/10004).

@ -31,7 +31,7 @@ One solution if you're seeing samples dropped due to `rate_limited` is simply to
Note that you'll want to make sure your Loki cluster has sufficient resources provisioned to be able to accommodate these higher limits. Otherwise your cluster may experience performance degradation as it tries to handle this higher volume of log lines to ingest.
Another option to address samples being dropped due to `rate_limits` is simply to decrease the rate of log lines being sent to your Loki cluster. Consider collecting logs from fewer targets or setting up `drop` stages in Promtail to filter out certain log lines. Promtail's [limits configuration](/docs/loki/latest/clients/promtail/configuration/#limits_config) also gives you the ability to control the volume of logs Promtail remote writes to your Loki cluster.
Another option to address samples being dropped due to `rate_limits` is simply to decrease the rate of log lines being sent to your Loki cluster. Consider collecting logs from fewer targets or setting up `drop` stages in Promtail to filter out certain log lines. Promtail's [limits configuration](/docs/loki/latest/send-data/promtail/configuration/#limits_config) also gives you the ability to control the volume of logs Promtail remote writes to your Loki cluster.
| Property | Value |
@ -51,7 +51,7 @@ Each stream has a rate-limit applied to it to prevent individual streams from ov
This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. The config options to adjust are `per_stream_rate_limit` and `per_stream_rate_limit_burst`.
Another option you could consider to decrease the rate of samples dropped due to `per_stream_rate_limit` is to split the stream that is getting rate limited into several smaller streams. A third option is to use Promtail's [limit stage](/docs/loki/latest/clients/promtail/stages/limit/#limit-stage) to limit the rate of samples sent to the stream hitting the `per_stream_rate_limit`.
Another option you could consider to decrease the rate of samples dropped due to `per_stream_rate_limit` is to split the stream that is getting rate limited into several smaller streams. A third option is to use Promtail's [limit stage](/docs/loki/latest/send-data/promtail/stages/limit/#limit-stage) to limit the rate of samples sent to the stream hitting the `per_stream_rate_limit`.
We typically recommend setting `per_stream_rate_limit` no higher than 5MB, and `per_stream_rate_limit_burst` no higher than 20MB.
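
As a sketch, the recommended ceilings above map to the following `limits_config` values (set globally here; per-tenant values go in the runtime overrides file instead):

```yaml
limits_config:
  # Per-stream rate limit and burst, per the recommendation above.
  per_stream_rate_limit: 5MB
  per_stream_rate_limit_burst: 20MB
```
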

@ -4,7 +4,7 @@ menuTItle: Query
description: LogQL, Loki's query language for logs.
aliases:
- ./logql
weight: 700
weight: 600
---
# LogQL: Log query language

@ -568,7 +568,7 @@ those labels:
#### unpack
The `unpack` parser parses a JSON log line, unpacking all embedded labels from Promtail's [`pack` stage]({{< relref "../../clients/promtail/stages/pack.md" >}}).
The `unpack` parser parses a JSON log line, unpacking all embedded labels from Promtail's [`pack` stage]({{< relref "../../send-data/promtail/stages/pack.md" >}}).
**A special property `_entry` will also be used to replace the original log line**.
For example, using `| unpack` with the log line:

@ -2,7 +2,7 @@
title: Loki reference topics
menuTitle: Reference
description: Reference topics for Loki.
weight: 1100
weight: 1000
---
# Loki reference topics

@ -84,7 +84,7 @@ These endpoints are exposed by the compactor:
- [`GET /loki/api/v1/delete`](#list-log-deletion-requests)
- [`DELETE /loki/api/v1/delete`](#request-cancellation-of-a-delete-request)
A [list of clients]({{< relref "../clients" >}}) can be found in the clients documentation.
A [list of clients]({{< relref "../send-data" >}}) can be found in the clients documentation.
## Matrix, vector, and streams

@ -17,9 +17,9 @@ Loki 2.4 focuses on two items:
* Scaling Loki is now easier with a hybrid deployment mode that falls between our single binary and our microservices. The [Simple scalable deployment]({{< relref "../get-started/deployment-modes" >}}) scales Loki with new `read` and `write` targets. Where previously you would have needed Kubernetes and the microservices approach to start tapping into Loki’s potential, it’s now possible to do this in a simpler way.
* The new [`common` section]({{< relref "../configure#common" >}}) results in a 70% smaller Loki configuration. Pair that with updated defaults and Loki comes out of the box with more appropriate defaults and limits. Check out the [example local configuration](https://github.com/grafana/loki/blob/main/cmd/loki/loki-local-config.yaml) as the new reference for running Loki.
* [**Recording rules**]({{< relref "../alert#recording-rules" >}}) are no longer an experimental feature. We've given them a more resilient implementation which leverages the existing write ahead log code in Prometheus.
* The new [**Promtail Kafka Consumer**]({{< relref "../clients/promtail/scraping#kafka" >}}) can easily get your logs out of Kafka and into Loki.
* The new [**Promtail Kafka Consumer**]({{< relref "../send-data/promtail/scraping#kafka" >}}) can easily get your logs out of Kafka and into Loki.
* There are **nice LogQL enhancements**, thanks to the amazing Loki community. LogQL now has [group_left and group_right]({{< relref "../query#many-to-one-and-one-to-many-vector-matches" >}}). And, the `label_format` and `line_format` functions now support [working with dates and times]({{< relref "../query/template_functions#now" >}}).
* Another great community contribution allows Promtail to [**accept ndjson and plaintext log files over HTTP**]({{< relref "../clients/promtail/configuration#loki_push_api" >}}).
* Another great community contribution allows Promtail to [**accept ndjson and plaintext log files over HTTP**]({{< relref "../send-data/promtail/configuration#loki_push_api" >}}).
All in all, about 260 PR’s went into Loki 2.4, and we thank everyone for helping us make the best Loki yet.

@ -1,88 +1,59 @@
---
menuTitle: Send data
title: Send log data to Loki
description: Grafana Loki clients
weight: 600
menuTitle: Send data
description: List of clients that can be used to send log data to Loki.
aliases:
- ./clients/
weight: 500
---
# Send log data to Loki
You can use the following clients to send logs to Grafana Loki:
- [Grafana Agent](/docs/agent/latest/)
- [Promtail]({{< relref "../clients/promtail" >}})
-- [Promtail on AWS EC2]({{< relref "./aws/ec2" >}})
-- [Promtail on AWS ECS]({{< relref "./aws/ecs" >}})
-- [Promtail on AWS EKS]({{< relref "./aws/eks" >}})
- [Docker Driver]({{< relref "./docker-driver" >}})
- [Fluentd]({{< relref "./fluentd" >}})
- [Fluent Bit]({{< relref "./fluentbit" >}})
- [Logstash]({{< relref "./logstash" >}})
- [Lambda Promtail]({{< relref "./lambda-promtail" >}})
There are also a number of third-party clients, for a list see [Unofficial clients](#unofficial-clients).
# Send log data to Loki
The [xk6-loki extension](https://github.com/grafana/xk6-loki) permits [load testing Loki]({{< relref "./k6" >}}).
There are a number of different clients available to send log data to Loki.
While all clients can be used simultaneously to cover multiple use cases, which client is initially picked to send logs depends on your use case.
## Picking a client
## Grafana Clients
While all clients can be used simultaneously to cover multiple use cases, which
client is initially picked to send logs depends on your use case.
The following clients are developed and supported (for those customers who have purchased a support contract) by Grafana Labs for sending logs to Loki:
### Promtail
- [Grafana Agent](/docs/agent/latest/) - The Grafana Agent is the recommended client for the Grafana stack. It can collect telemetry data for metrics, logs, traces, and continuous profiles and is fully compatible with the Prometheus, OpenTelemetry, and Grafana open source ecosystems.
- [Promtail]({{< relref "./promtail" >}}) - Promtail is the client of choice when you're running Kubernetes, as you can configure it to automatically scrape logs from pods running on the same node that Promtail runs on. Promtail and Prometheus running together in Kubernetes enables powerful debugging: if Prometheus and Promtail use the same labels, users can use tools like Grafana to switch between metrics and logs based on the label set.
Promtail is also the client of choice on bare-metal since it can be configured to tail logs from all files given a host path. It is the easiest way to send logs to Loki from plain-text files (for example, things that log to `/var/log/*.log`).
Lastly, Promtail works well if you want to extract metrics from logs such as counting the occurrences of a particular message.
- [xk6-loki extension](https://github.com/grafana/xk6-loki) - The k6-loki extension lets you perform [load testing on Loki]({{< relref "./k6" >}}).
Promtail is the client of choice when you're running Kubernetes, as you can
configure it to automatically scrape logs from Pods running on the same node
that Promtail runs on. Promtail and Prometheus running together in Kubernetes
enables powerful debugging: if Prometheus and Promtail use the same labels,
users can use tools like Grafana to switch between metrics and logs based on the
label set.
## Third-party clients
Promtail is also the client of choice on bare-metal since it can be configured
to tail logs from all files given a host path. It is the easiest way to send
logs to Loki from plain-text files (for example, things that log to `/var/log/*.log`).
The following clients have been developed by the Loki community or other third-parties and can be used to send log data to Loki.
Lastly, Promtail works well if you want to extract metrics from logs such as
counting the occurrences of a particular message.
{{% admonition type="note" %}}
Grafana Labs cannot provide support for third-party clients. Once an issue has been determined to be with the client and not Loki, it is the responsibility of the customer to work with the associated vendor or project for bug fixes to these clients.
{{% /admonition %}}
### Docker Logging Driver
The following are popular third-party Loki clients:
When using Docker and not Kubernetes, the Docker logging driver for Loki should
- [Docker Driver]({{< relref "./docker-driver" >}}) - When using Docker and not Kubernetes, the Docker logging driver for Loki should
be used as it automatically adds labels appropriate to the running container.
### Fluentd and Fluent Bit
The Fluentd and Fluent Bit plugins are ideal when you already have Fluentd deployed
- [Fluent Bit]({{< relref "./fluentbit" >}}) - The Fluent Bit plugin is ideal when you already have Fluentd deployed
and you already have configured `Parser` and `Filter` plugins.
Fluentd also works well for extracting metrics from logs when using its
Prometheus plugin.
### Logstash
If you are already using logstash and/or beats, this will be the easiest way to start.
- [Fluentd]({{< relref "./fluentd" >}}) - The Fluentd plugin is ideal when you already have Fluentd deployed
and you already have configured `Parser` and `Filter` plugins. Fluentd also works well for extracting metrics from logs when using its Prometheus plugin.
- [Lambda Promtail]({{< relref "./lambda-promtail" >}}) - This is a workflow combining the Promtail push-api [scrape config]({{< relref "./promtail/configuration#loki_push_api" >}}) and the [lambda-promtail]({{< relref "./lambda-promtail" >}}) AWS Lambda function which pipes logs from Cloudwatch to Loki. This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS lambda logs in Loki
- [Logstash]({{< relref "./logstash" >}}) - If you are already using logstash and/or beats, this will be the easiest way to start.
By adding our output plugin you can quickly try Loki without doing big configuration changes.
### Lambda Promtail
This is a workflow combining the Promtail push-api [scrape config]({{< relref "../clients/promtail/configuration#loki_push_api" >}}) and the [lambda-promtail]({{< relref "./lambda-promtail" >}}) AWS Lambda function which pipes logs from Cloudwatch to Loki.
This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS lambda logs in Loki.
## Unofficial clients
Note that the Loki API is not stable yet, so breaking changes might occur
when using or writing a third-party client.
These third-party clients also enable sending logs to Loki:
- [Cribl Loki Destination](https://docs.cribl.io/stream/destinations-loki)
- [ilogtail](https://github.com/alibaba/ilogtail) (Go)
- [Log4j2 appender for Loki](https://github.com/tkowalcz/tjahzi) (Java)
- [loki-logback-appender](https://github.com/loki4j/loki-logback-appender) (Java)
- [LokiLogger.jl](https://github.com/JuliaLogging/LokiLogger.jl) (Julia)
- [mjaron-tinyloki-java](https://github.com/mjfryc/mjaron-tinyloki-java) (Java)
- [NLog-Targets-Loki](https://github.com/corentinaltepe/nlog.loki) (C#)
- [promtail-client](https://github.com/afiskon/promtail-client) (Go)
- [push-to-loki.py](https://github.com/sleleko/devops-kb/blob/master/python/push-to-loki.py) (Python 3)
- [python-logging-loki](https://pypi.org/project/python-logging-loki/) (Python 3)
- [Serilog-Sinks-Loki](https://github.com/JosephWoodward/Serilog-Sinks-Loki) (C#)
- [NLog-Targets-Loki](https://github.com/corentinaltepe/nlog.loki) (C#)
- [loki-logback-appender](https://github.com/loki4j/loki-logback-appender) (Java)
- [Log4j2 appender for Loki](https://github.com/tkowalcz/tjahzi) (Java)
- [mjaron-tinyloki-java](https://github.com/mjfryc/mjaron-tinyloki-java) (Java)
- [LokiLogger.jl](https://github.com/JuliaLogging/LokiLogger.jl) (Julia)
- [winston-loki](https://github.com/JaniAnttonen/winston-loki) (JS)
- [ilogtail](https://github.com/alibaba/ilogtail) (Go)
- [Vector Loki Sink](https://vector.dev/docs/reference/configuration/sinks/loki/)
- [Cribl Loki Destination](https://docs.cribl.io/stream/destinations-loki)
- [winston-loki](https://github.com/JaniAnttonen/winston-loki) (JS)

@ -1,16 +0,0 @@
---
title: Sending logs from Amazon Web Services
menuTitle: Promtail on AWS
description: Tutorials for sending logs from Amazon Web Services to Loki
aliases:
- ../clients/aws/
weight: 300
---
# Sending logs from Amazon Web Services
Sending logs from AWS services to Grafana Loki is a little different depending on the AWS service you are using:
* [Elastic Compute Cloud (EC2)]({{< relref "./ec2" >}})
* [Elastic Container Service (ECS)]({{< relref "./ecs" >}})
* [Elastic Kubernetes Service (EKS)]({{< relref "./eks" >}})

@ -73,4 +73,4 @@ The driver keeps all logs in memory and will drop log entries if Loki is not rea
The wait time can be lowered by setting `loki-retries=2`, `loki-max-backoff_800ms`, `loki-timeout=1s` and `keep-file=true`. This way the daemon will be locked only for a short time and the logs will be persisted locally when the Loki client is unable to re-connect.
To avoid this issue, use the Promtail [Docker target]({{< relref "../../clients/promtail/configuration#docker" >}}) or [Docker service discovery]({{< relref "../../clients/promtail/configuration#docker_sd_config" >}}).
To avoid this issue, use the Promtail [Docker target]({{< relref "../../send-data/promtail/configuration#docker" >}}) or [Docker service discovery]({{< relref "../../send-data/promtail/configuration#docker_sd_config" >}}).
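
A hedged sketch of those settings in a Docker Compose file (the service name and image are placeholders; the option names are the driver options mentioned above):

```yaml
services:
  app:
    image: nginx
    logging:
      driver: loki
      options:
        loki-url: "http://loki:3100/loki/api/v1/push"
        loki-retries: "2"
        loki-max-backoff: "800ms"
        loki-timeout: "1s"
        keep-file: "true"
```
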

@ -1,6 +1,6 @@
---
title: Docker driver client configuration
menuTitle: Configuration
menuTitle: Configure Docker driver
description: Configuring the Docker driver client
aliases:
- ../../clients/docker-driver/configuration/
@ -211,8 +211,8 @@ To specify additional logging driver options, you can use the --log-opt NAME=VAL
| `loki-min-backoff` | No | `500ms` | The minimum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-max-backoff` | No | `5m` | The maximum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-retries` | No | `10` | The maximum amount of retries for a log batch. Setting it to `0` will retry indefinitely. |
| `loki-pipeline-stage-file` | No | | The location of a pipeline stage configuration file ([example](https://github.com/grafana/loki/blob/main/clients/cmd/docker-driver/pipeline-example.yaml)). Pipeline stages allows to parse log lines to extract more labels, [see associated documentation]({{< relref "../../clients/promtail/stages" >}}). |
| `loki-pipeline-stages` | No | | The pipeline stage configuration provided as a string [see pipeline stages](#pipeline-stages) and [associated documentation]({{< relref "../../clients/promtail/stages" >}}). |
| `loki-pipeline-stage-file` | No | | The location of a pipeline stage configuration file ([example](https://github.com/grafana/loki/blob/main/clients/cmd/docker-driver/pipeline-example.yaml)). Pipeline stages allows to parse log lines to extract more labels, [see associated documentation]({{< relref "../../send-data/promtail/stages" >}}). |
| `loki-pipeline-stages` | No | | The pipeline stage configuration provided as a string [see pipeline stages](#pipeline-stages) and [associated documentation]({{< relref "../../send-data/promtail/stages" >}}). |
| `loki-relabel-config` | No | | A [Prometheus relabeling configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) allowing you to rename labels [see relabeling](#relabeling). |
| `loki-tenant-id` | No | | Set the tenant id (http header`X-Scope-OrgID`) when sending logs to Loki. It can be overridden by a pipeline stage. |
| `loki-tls-ca-file` | No | | Set the path to a custom certificate authority. |

@ -47,7 +47,7 @@ helm upgrade --install loki-stack grafana/loki-stack \
### AWS Elastic Container Service (ECS)
You can use fluent-bit Loki Docker image as a Firelens log router in AWS ECS.
For more information about this see our [AWS documentation]({{< relref "../aws/ecs" >}})
For more information about this see our [AWS documentation]({{< relref "../promtail/cloud/ecs" >}})
### Local

@ -9,7 +9,7 @@ weight: 700
# Lambda Promtail client
Grafana Loki includes [Terraform](https://www.terraform.io/) and [CloudFormation](https://aws.amazon.com/cloudformation/) for shipping Cloudwatch, Cloudtrail, VPC Flow Logs and loadbalancer logs to Loki via a [lambda function](https://aws.amazon.com/lambda/). This is done via [lambda-promtail](https://github.com/grafana/loki/blob/main/tools/lambda-promtail) which processes cloudwatch events and propagates them to Loki (or a Promtail instance) via the push-api [scrape config]({{< relref "../../clients/promtail/configuration#loki_push_api" >}}).
Grafana Loki includes [Terraform](https://www.terraform.io/) and [CloudFormation](https://aws.amazon.com/cloudformation/) for shipping Cloudwatch, Cloudtrail, VPC Flow Logs and loadbalancer logs to Loki via a [lambda function](https://aws.amazon.com/lambda/). This is done via [lambda-promtail](https://github.com/grafana/loki/blob/main/tools/lambda-promtail) which processes cloudwatch events and propagates them to Loki (or a Promtail instance) via the push-api [scrape config]({{< relref "../../send-data/promtail/configuration#loki_push_api" >}}).
## Deployment
@ -60,7 +60,7 @@ To add tenant id add `-var "tenant_id=value"`.
Note that the creation of a subscription filter on Cloudwatch in the provided Terraform file only accepts an array of log group names.
It does **not** accept strings for regex filtering on the logs contents via the subscription filters. We suggest extending the Terraform file to do so.
Or, have lambda-promtail write to Promtail and use [pipeline stages](/docs/loki/latest/clients/promtail/stages/drop/).
Or, have lambda-promtail write to Promtail and use [pipeline stages](/docs/loki/latest/send-data/promtail/stages/drop/).
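
For example, a minimal Promtail pipeline that filters lines before they are written to Loki might look like this (the DEBUG pattern is only an illustration):

```yaml
pipeline_stages:
  # Drop any line matching the expression so it never reaches Loki.
  - drop:
      expression: ".*DEBUG.*"
```
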
CloudFormation:
```
@ -126,7 +126,7 @@ Triggering lambda-promtail through SQS allows handling on-failure recovery of th
## Propagated Labels
Incoming logs can have seven special labels assigned to them which can be used in [relabeling]({{< relref "../../clients/promtail/configuration#relabel_configs" >}}) or later stages in a Promtail [pipeline]({{< relref "../../clients/promtail/pipelines" >}}):
Incoming logs can have seven special labels assigned to them which can be used in [relabeling]({{< relref "../../send-data/promtail/configuration#relabel_configs" >}}) or later stages in a Promtail [pipeline]({{< relref "../../send-data/promtail/pipelines" >}}):
- `__aws_log_type`: Where this log came from (Cloudwatch, Kinesis or S3).
- `__aws_cloudwatch_log_group`: The associated Cloudwatch Log Group for this log.

@ -3,7 +3,7 @@ title: Logstash plugin
menuTitle:
description: Instructions to install, configure, and use the Logstash plugin to send logs to Loki.
aliases:
- ../clients/logstash/
- ../send-data/a/logstash/
weight: 800
---
# Logstash plugin

@ -1,9 +1,12 @@
---
title: Promtail
description: Promtail
weight: 10
title: Promtail agent
menuTitle: Promtail
description: How to use the Promtail agent to ship logs to Loki
aliases:
- ../clients/promtail/
weight: 200
---
# Promtail
# Promtail agent
Promtail is an agent which ships the contents of local logs to a private Grafana Loki
instance or [Grafana Cloud](/oss/loki). It is usually
@ -118,7 +121,7 @@ can be written with the syslog protocol to the configured port.
## AWS
If you need to run Promtail on Amazon Web Services EC2 instances, you can use our [detailed tutorial]({{< relref "../../send-data/aws/ec2" >}}).
If you need to run Promtail on Amazon Web Services EC2 instances, you can use our [detailed tutorial]({{< relref "./cloud/ec2" >}}).
## Labeling and parsing

@ -0,0 +1,17 @@
---
title: Sending logs from the cloud
menuTitle: Configure for cloud
description: Tutorials for sending logs from cloud services with Promtail.
aliases: []
weight: 300
---
# Sending logs from the cloud
Sending logs from cloud services to Grafana Loki is a little different depending on the cloud service you are using. The following tutorials walk you through configuring cloud services to send logs to Loki.
- [Amazon Elastic Compute Cloud (EC2)]({{< relref "./ec2" >}})
- [Amazon Elastic Container Service (ECS)]({{< relref "./ecs" >}})
- [Amazon Elastic Kubernetes Service (EKS)]({{< relref "./eks" >}})
- [Google Cloud Platform (GCP)]({{< relref "./gcp" >}})

@ -3,12 +3,13 @@ title: Run the Promtail client on AWS EC2
menuTitle: Promtail on EC2
description: Tutorial for running Promtail client on AWS EC2
aliases:
- ../../clients/aws/ec2/
- ../../../clients/aws/ec2/
weight: 100
---
# Run the Promtail client on AWS EC2
In this tutorial we're going to setup [Promtail]({{< relref "../../../clients/promtail" >}}) on an AWS EC2 instance and configure it to sends all its logs to a Grafana Loki instance.
In this tutorial we're going to setup [Promtail]({{< relref "../../../../send-data/promtail" >}}) on an AWS EC2 instance and configure it to sends all its logs to a Grafana Loki instance.
## Requirements
@ -41,7 +42,7 @@ aws ec2 create-security-group --group-name promtail-ec2 --description "promtail
}
```
Now let's authorize inbound access for SSH and [Promtail]({{< relref "../../../clients/promtail" >}}) server:
Now let's authorize inbound access for SSH and [Promtail]({{< relref "../../../../send-data/promtail" >}}) server:
```bash
aws ec2 authorize-security-group-ingress --group-id sg-02c489bbdeffdca1d --protocol tcp --port 22 --cidr 0.0.0.0/0
@ -81,7 +82,7 @@ ssh ec2-user@ec2-13-59-62-37.us-east-2.compute.amazonaws.com
## Setting up Promtail
First let's make sure we're running as root by using `sudo -s`.
Next we'll download, install and give executable right to [Promtail]({{< relref "../../../clients/promtail" >}}).
Next we'll download, install and give executable right to [Promtail]({{< relref "../../../../send-data/promtail" >}}).
```bash
mkdir /opt/promtail && cd /opt/promtail
@ -90,7 +91,7 @@ unzip "promtail-linux-amd64.zip"
chmod a+x "promtail-linux-amd64"
```
Now we're going to download the [Promtail configuration]({{< relref "../../../clients/promtail" >}}) file below and edit it, don't worry we will explain what those means.
Now we're going to download the [Promtail configuration]({{< relref "../../../../send-data/promtail" >}}) file below and edit it, don't worry we will explain what those means.
The file is also available as a gist at [cyriltovena/promtail-ec2.yaml][config gist].
```bash
@ -133,11 +134,11 @@ scrape_configs:
target_label: __host__
```
The **server** section indicates Promtail to bind his http server to 3100. Promtail serves HTTP pages for [troubleshooting]({{< relref "../../../clients/promtail/troubleshooting" >}}) service discovery and targets.
The **server** section indicates Promtail to bind his http server to 3100. Promtail serves HTTP pages for [troubleshooting]({{< relref "../../../../send-data/promtail/troubleshooting" >}}) service discovery and targets.
The **clients** section allow you to target your loki instance, if you're using GrafanaCloud simply replace `<user id>` and `<api secret>` with your credentials. Otherwise just replace the whole URL with your custom Loki instance.(e.g `http://my-loki-instance.my-org.com/loki/api/v1/push`)
[Promtail]({{< relref "../../../clients/promtail" >}}) uses the same [Prometheus **scrape_configs**][prometheus scrape config]. This means if you already own a Prometheus instance the config will be very similar and easy to grasp.
[Promtail]({{< relref "../../../../send-data/promtail" >}}) uses the same [Prometheus **scrape_configs**][prometheus scrape config]. This means if you already own a Prometheus instance the config will be very similar and easy to grasp.
Since we're running on AWS EC2 we want to uses EC2 service discovery, this will allows us to scrape metadata about the current instance (and even your custom tags) and attach those to our logs. This way managing and querying on logs will be much easier.
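
For orientation, a condensed sketch of such a configuration (the full file is the promtail-ec2.yaml gist referenced above; the URL and region are placeholders):

```yaml
server:
  http_listen_port: 3100

clients:
  - url: http://<my-loki-instance>/loki/api/v1/push

scrape_configs:
  - job_name: ec2-logs
    ec2_sd_configs:
      - region: us-east-2
    relabel_configs:
      # Expose the instance Name tag as a label.
      - source_labels: [__meta_ec2_tag_Name]
        target_label: name
      # Tail log files on the discovered instance.
      - source_labels: [__meta_ec2_private_dns_name]
        target_label: __host__
      - replacement: /var/log/**.log
        target_label: __path__
```
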
@ -229,7 +230,7 @@ Jul 08 15:48:57 ip-172-31-45-69.us-east-2.compute.internal promtail-linux-amd64[
Jul 08 15:48:57 ip-172-31-45-69.us-east-2.compute.internal promtail-linux-amd64[2732]: level=info ts=2020-07-08T15:48:57.56029474Z caller=main.go:67 msg="Starting Promtail" version="(version=1.6.0, branch=HEAD, revision=12c7eab8)"
```
You can now verify in Grafana that Loki has correctly received your instance logs by using the [LogQL]({{< relref "../../../query" >}}) query `{zone="us-east-2"}`.
You can now verify in Grafana that Loki has correctly received your instance logs by using the [LogQL]({{< relref "../../../../query" >}}) query `{zone="us-east-2"}`.
![Grafana Loki logs][ec2 logs]
@ -258,7 +259,7 @@ Note that you can use [relabeling][relabeling] to convert systemd labels to matc
That's it, save the config and you can `reboot` the machine (or simply restart the service `systemctl restart promtail.service`).
Let's head back to Grafana and verify that your Promtail logs are available in Grafana by using the [LogQL]({{< relref "../../../query" >}}) query `{unit="promtail.service"}` in Explore. Finally make sure to checkout [live tailing][live tailing] to see logs appearing as they are ingested in Loki.
Let's head back to Grafana and verify that your Promtail logs are available in Grafana by using the [LogQL]({{< relref "../../../../query" >}}) query `{unit="promtail.service"}` in Explore. Finally make sure to checkout [live tailing][live tailing] to see logs appearing as they are ingested in Loki.
[promtail]: ../../promtail/README
[aws cli]: https://aws.amazon.com/cli/

@ -3,9 +3,10 @@ title: Run the Promtail client on AWS ECS
menuTitle: Promtail on ECS
description: Tutorial for running Promtail client on AWS Elastic Container Service (ECS)
aliases:
- ../../clients/aws/ecs/
- ../../../clients/aws/ecs/
weight: 100
---
# Run the Promtail client on AWS ECS
[ECS][ECS] is the fully managed container orchestration service by Amazon. Combined with [Fargate][Fargate] you can run your container workload without the need to provision your own compute resources. In this tutorial we will see how you can leverage [Firelens][Firelens] an AWS log router to forward all your logs and your workload metadata to a Grafana Loki instance.
@ -225,8 +226,8 @@ That's it ! Make sure to checkout LogQL to learn more about Loki powerful query
[ecs iam]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html
[arn]: https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
[task]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html
[fluentd loki]: https://grafana.com/docs/loki/latest/clients/fluentd/
[fluentbit loki]: https://grafana.com/docs/loki/latest/clients/fluentbit/
[fluentd loki]: https://grafana.com/docs/loki/latest/send-data/fluentd/
[fluentbit loki]: https://grafana.com/docs/loki/latest/send-data/fluentbit/
[fluentbit]: https://fluentbit.io/
[fluentd]: https://www.fluentd.org/
[fluentbit loki image]: https://hub.docker.com/r/grafana/fluent-bit-plugin-loki


@ -3,9 +3,10 @@ title: Run the Promtail client on AWS EKS
menuTitle: Promtail on EKS
description: Tutorial for running Promtail client on AWS EKS
aliases:
- ../../clients/aws/eks/
- ../../../clients/aws/eks/
weight: 100
---
# Run the Promtail client on AWS EKS
In this tutorial we'll see how to set up Promtail on [EKS][eks]. Amazon Elastic Kubernetes Service (Amazon [EKS][eks]) is a fully managed Kubernetes service, using Promtail we'll get full visibility into our cluster logs. We'll start by forwarding pods logs then nodes services and finally Kubernetes events.
@ -44,7 +45,7 @@ Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-fd1
## Adding Promtail DaemonSet
To ship all your pods logs we're going to set up [Promtail]({{< relref "../../../clients/promtail" >}}) as a DaemonSet in our cluster. This means it will run on each nodes of the cluster, we will then configure it to find the logs of your containers on the host.
To ship all your pods logs we're going to set up [Promtail]({{< relref "../../../../send-data/promtail" >}}) as a DaemonSet in our cluster. This means it will run on each nodes of the cluster, we will then configure it to find the logs of your containers on the host.
What's nice about Promtail is that it uses the same [service discovery as Prometheus][prometheus conf], you should make sure the `scrape_configs` of Promtail matches the Prometheus one. Not only this is simpler to configure, but this also means Metrics and Logs will have the same metadata (labels) attached by the Prometheus service discovery. When querying Grafana you will be able to correlate metrics and logs very quickly, you can read more about this on our [blogpost][correlate].
@ -243,7 +244,7 @@ If you want to push this further you can check out [Joe's blog post][blog annota
[kubelet]: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=The%20kubelet%20works%20in%20terms,PodSpecs%20are%20running%20and%20healthy.
[blog events]: https://grafana.com/blog/2019/08/21/how-grafana-labs-effectively-pairs-loki-and-kubernetes-events/
[labels post]: https://grafana.com/blog/2020/04/21/how-labels-in-loki-can-make-log-queries-faster-and-easier/
[pipeline]: https://grafana.com/docs/loki/latest/clients/promtail/pipelines/
[pipeline]: https://grafana.com/docs/loki/latest/send-data/promtail/pipelines/
[final config]: values.yaml
[blog annotations]: https://grafana.com/blog/2019/12/09/how-to-do-automatic-annotations-with-grafana-and-loki/
[kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/

@ -1,8 +1,13 @@
---
title: Cloud setup GCP Logs
description: Cloud setup GCP logs
title: Run the Promtail client on Google Cloud Platform
menuTitle: Promtail on GCP
description: Tutorial for running Promtail client on Google Cloud Platform
aliases:
- ../../../clients/promtail/gcplog-cloud/
weight:
---
# Cloud setup GCP Logs
# Run the Promtail client on Google Cloud Platform
This document explains how one can setup Google Cloud Platform to forward its cloud resource logs from a particular GCP project into Google Pubsub topic so that is available for Promtail to consume.
@ -14,7 +19,7 @@ There are two flavours of how to configure this:
Overall, the setup between GCP, Promtail and Loki will look like the following:
<img src="../gcp-logs-diagram.png" width="1200px"/>
<img src="./gcp-logs-diagram.png" width="1200px"/>
## Roles and Permission
@ -231,7 +236,7 @@ We need a service account with the following permissions:
This enables Promtail to read log entries from the pubsub subscription created before.
You can find an example for Promtail scrape config for `gcplog` [here]({{< relref "./scraping#gcp-log-scraping" >}})
You can find an example for Promtail scrape config for `gcplog` [here]({{< relref "../../scraping#gcp-log-scraping" >}})
If you are scraping logs from multiple GCP projects, then this serviceaccount should have above permissions in all the projects you are tyring to scrape.

@ -1,8 +1,13 @@
---
title: Configuration
description: Configuring Promtail
title: Configure Promtail
menuTitle: Configuration reference
description: Configuration parameters for the Promtail agent.
aliases:
- ../../clients/promtail/configuration/
weight: 200
---
# Configuration
# Configure Promtail
Promtail is configured in a YAML file (usually referred to as `config.yaml`)
which contains information on the Promtail server, where positions are stored,

@ -1,25 +1,30 @@
---
title: Installation
description: Install Promtail
title: Install Promtail
menuTitle: Install Promtail
description: Installation instructions for the Promtail client.
aliases:
- ../../clients/promtail/installation/
weight: 100
---
# Installation
# Install Promtail
Promtail is distributed as a binary, in a Docker container,
or there is a Helm chart to install it in a Kubernetes cluster.
## Binary
## Install the binary
Every Grafana Loki release includes binaries for Promtail which can be found on the
[Releases page](https://github.com/grafana/loki/releases) as part of the release assets.
## Docker
## Install using Docker
```bash
# modify tag to most recent version
docker pull grafana/promtail:2.0.0
```
## Helm
## Install using Helm
Make sure that Helm is installed.
See [Installing Helm](https://helm.sh/docs/intro/install/).
@ -41,9 +46,7 @@ Finally, Promtail can be deployed with:
helm upgrade --install promtail grafana/promtail
```
## Kubernetes
### DaemonSet (recommended)
## Install as Kubernetes DaemonSet (recommended)
A `DaemonSet` will deploy Promtail on every node within a Kubernetes cluster.

@ -1,7 +1,12 @@
---
title: Promtail and Log Rotation
title: Promtail and Log Rotation
menuTitle: Configure log rotation
description: Promtail and Log Rotation
aliases:
- ../../clients/promtail/logrotation/
weight: 500
---
# Promtail and Log Rotation
## Why does log rotation matter?

@ -1,7 +1,12 @@
---
title: Pipelines
description: Pipelines
menuTitle:
description: How to use Promtail pipelines to transform single log lines, labels, and timestamps.
aliases:
- ../../clients/promtail/pipelines/
weight: 600
---
# Pipelines
A detailed look at how to set up Promtail to process your log lines, including

@ -1,8 +1,13 @@
---
title: Scraping
description: Promtail Scraping (Service Discovery)
title: Configuring Promtail for service discovery
menuTitle: Configure service discovery
description: Configuring Promtail for service discovery
aliases:
- ../../clients/promtail/scraping/
weight: 400
---
# Scraping
# Configuring Promtail for service discovery
## File Target Discovery
@ -227,7 +232,7 @@ Here `project_id` and `subscription` are the only required fields.
- `project_id` is the GCP project id.
- `subscription` is the GCP pubsub subscription where Promtail can consume log entries from.
Before using `gcplog` target, GCP should be [configured]({{< relref "./gcplog-cloud" >}}) with pubsub subscription to receive logs from.
Before using `gcplog` target, GCP should be [configured]({{< relref "./cloud/gcp" >}}) with pubsub subscription to receive logs from.
It also supports `relabeling` and `pipeline` stages just like other targets.
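
An illustrative pull-based `gcplog` scrape config with the two required fields (the project and subscription names are placeholders):

```yaml
scrape_configs:
  - job_name: gcplog
    gcplog:
      subscription_type: pull
      project_id: my-gcp-project
      subscription: promtail-gcplog-subscription
      use_incoming_timestamp: false
      labels:
        job: gcplog
```
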
@ -263,7 +268,7 @@ section. This server exposes the single endpoint `POST /gcp/api/v1/push`, respon
For Google's PubSub to be able to send logs, **Promtail server must be publicly accessible, and support HTTPS**. For that, Promtail can be deployed
as part of a larger orchestration service like Kubernetes, which can handle HTTPS traffic through an ingress, or it can be hosted behind
a proxy/gateway, offloading the HTTPS to that component and routing the request to Promtail. Once that's solved, GCP can be [configured]({{< relref "./gcplog-cloud" >}})
a proxy/gateway, offloading the HTTPS to that component and routing the request to Promtail. Once that's solved, GCP can be [configured]({{< relref "./cloud/gcp" >}})
to send logs to Promtail.
It also supports `relabeling` and `pipeline` stages.

@ -1,8 +1,13 @@
---
title: Stages
description: Stages
title: Promtail pipeline stages
menuTitle: Pipeline stages
description: Overview of the Promtail pipeline stages.
aliases:
- ../../clients/promtail/stages/
weight: 700
---
# Stages
# Promtail pipeline stages
This section is a collection of all stages Promtail supports in a
[Pipeline]({{< relref "../pipelines" >}}).

@ -1,7 +1,12 @@
---
title: cri
description: cri stage
menuTitle:
description: The 'cri' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/cri/
weight:
---
# cri
The `cri` stage is a parsing stage that reads the log line using the standard CRI logging format.
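
A CRI-formatted line looks roughly like `2019-01-01T01:00:00.000000001Z stderr P some log message` (timestamp, stream, flags, content), and the stage itself takes no options; a minimal sketch:

```yaml
pipeline_stages:
  # Parse timestamp, stream, and flags from the CRI wire format.
  - cri: {}
```
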

@ -1,7 +1,12 @@
---
title: decolorize
description: decolorize stage
menuTitle:
description: The 'decolorize' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/decolorize/
weight:
---
# decolorize
The `decolorize` stage is a transform stage that lets you strip

@ -1,7 +1,12 @@
---
title: docker
description: docker stage
menuTitle:
description: The 'docker' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/docker/
weight:
---
# docker
The `docker` stage is a parsing stage that reads log lines in the standard

@ -1,7 +1,12 @@
---
title: drop
description: drop stage
menuTitle:
description: The 'drop' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/drop/
weight:
---
# drop
The `drop` stage is a filtering stage that lets you drop logs based on several options.

@ -1,7 +1,12 @@
---
title: eventlogmessage
description: eventlogmessage stage
menuTitle:
description: The 'eventlogmessage' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/eventlogmessage/
weight:
---
# eventlogmessage
The `eventlogmessage` stage is a parsing stage that extracts data from the Message string that appears in the Windows Event Log.

@ -1,12 +1,15 @@
---
title: geoip
description: geoip stage
menuTitle:
description: The 'geoip' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/geoip/
weight:
---
# geoip
The `geoip` stage is a parsing stage that reads an ip address and
populates the labelset with geoip fields. [Maxmind's GeoIP2 database](https://www.maxmind.com/en/home) is used for the lookup.
The `geoip` stage is a parsing stage that reads an ip address and populates the labelset with geoip fields. [Maxmind's GeoIP2 database](https://www.maxmind.com/en/home) is used for the lookup.
Populated fields for City db:

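A sketch of a pipeline that captures an address and feeds it to `geoip`; the database path is a placeholder and the option names are assumptions based on the stage reference:

```yaml
pipeline_stages:
  - regex:
      expression: '^(?P<ip>\S+) '              # capture the client address first
  - geoip:
      db: "/etc/promtail/GeoLite2-City.mmdb"   # placeholder path to a MaxMind City database
      source: ip
      db_type: "city"
```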
@ -1,7 +1,12 @@
---
title: json
description: json stage
menuTitle:
description: The 'json' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/json/
weight:
---
# json
The `json` stage is a parsing stage that reads the log line as JSON and accepts

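For example, a minimal sketch mapping extracted keys to JMESPath expressions (key and field names are illustrative):

```yaml
pipeline_stages:
  - json:
      expressions:
        level: level                # extracted key <- JMESPath expression
        request_path: request.path
```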
@ -1,7 +1,12 @@
---
title: labelallow
description: labelallow stage
menuTitle:
description: The 'labelallow' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/labelallow/
weight:
---
# labelallow
The labelallow stage is an action stage that allows only the provided labels

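A short sketch, assuming the stage takes a plain list of label names to keep:

```yaml
pipeline_stages:
  - labelallow:
      - kubernetes_pod_name   # only these labels survive on the entry
      - kubernetes_namespace
```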
@ -1,7 +1,12 @@
---
title: labeldrop
description: labeldrop stage
menuTitle:
description: The 'labeldrop' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/labeldrop/
weight:
---
# labeldrop
The labeldrop stage is an action stage that drops labels from

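The mirror-image sketch for dropping labels, again assuming a plain list:

```yaml
pipeline_stages:
  - labeldrop:
      - filename   # remove this label before the entry is shipped to Loki
```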
@ -1,7 +1,12 @@
---
title: labels
description: labels stage
menuTitle:
description: The 'labels' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/labels/
weight:
---
# labels
The labels stage is an action stage that takes data from the extracted map and

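A sketch showing the mapping form with illustrative names; an empty value reuses the extracted key name, while `app: app_name` renames it:

```yaml
pipeline_stages:
  - json:
      expressions:
        level: level
        app_name: app
  - labels:
      level:           # label "level" from extracted key "level"
      app: app_name    # label "app" from extracted key "app_name"
```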
@ -1,7 +1,12 @@
---
title: limit
description: limit stage
menuTitle:
description: The 'limit' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/limit/
weight:
---
# limit
The `limit` stage is a rate-limiting stage that throttles logs based on several options.
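A hedged sketch; `rate`, `burst`, and `drop` are the options I'd expect from the stage reference, with illustrative values:

```yaml
pipeline_stages:
  - limit:
      rate: 10     # sustained lines per second
      burst: 20    # short-term burst allowance
      drop: true   # drop excess lines instead of waiting
```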

@ -1,8 +1,12 @@
---
title: logfmt
menuTitle: logfmt
description: The logfmt parsing stage reads logfmt log lines and extracts the data into labels.
menuTitle:
description: The 'logfmt' Promtail pipeline stage. The logfmt parsing stage reads logfmt log lines and extracts the data into labels.
aliases:
- ../../../clients/promtail/stages/logfmt/
weight:
---
# logfmt
The `logfmt` stage is a parsing stage that reads the log line as [logfmt](https://brandur.org/logfmt) and allows extraction of data into labels.
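A minimal sketch of the mapping form (keys are illustrative): an empty value reuses the logfmt key, and `latency: duration` renames it:

```yaml
pipeline_stages:
  - logfmt:
      mapping:
        level:             # extracted key "level" from logfmt key "level"
        latency: duration  # extracted key "latency" from logfmt key "duration"
```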

@ -1,7 +1,12 @@
---
title: match
description: match stage
menuTitle:
description: The 'match' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/match/
weight:
---
# match
The match stage is a filtering stage that conditionally applies a set of stages

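A sketch of two common uses with illustrative selectors; the nested stages run only on entries the selector matches, and the second block assumes `action: drop` is allowed without nested stages:

```yaml
pipeline_stages:
  - match:
      selector: '{app="nginx"}'      # LogQL selector gating the nested stages
      stages:
        - regex:
            expression: '^(?P<remote_addr>\S+) '
  - match:
      selector: '{app="noisy"} |= "healthcheck"'
      action: drop                   # drop matching entries entirely
```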
@ -1,7 +1,12 @@
---
title: metrics
description: metrics stage
menuTitle:
description: The 'metrics' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/metrics/
weight:
---
# metrics
The `metrics` stage is an action stage that allows for defining and updating

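A sketch of a counter definition; the metric name, prefix, and description are placeholders, and the field layout follows the metrics stage reference as I recall it:

```yaml
pipeline_stages:
  - metrics:
      lines_total:                    # metric name (prefix is prepended)
        type: Counter
        description: "total lines seen by this pipeline"
        prefix: my_promtail_custom_
        config:
          match_all: true             # count every line that reaches this stage
          action: inc
```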
@ -1,6 +1,10 @@
---
title: multiline
description: multiline stage
menuTitle:
description: The 'multiline' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/multiline/
weight:
---
# multiline

@ -1,7 +1,12 @@
---
title: output
description: output stage
menuTitle:
description: The 'output' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/output/
weight:
---
# output
The `output` stage is an action stage that takes data from the extracted map and

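A small sketch; after it runs, the line shipped to Loki is the value of the chosen extracted key (names illustrative):

```yaml
pipeline_stages:
  - json:
      expressions:
        msg: message
  - output:
      source: msg   # the shipped log line becomes the extracted "msg" value
```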
@ -1,7 +1,12 @@
---
title: pack
description: pack stage
menuTitle:
description: The 'pack' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/pack/
weight:
---
# pack
The `pack` stage is a transform stage which lets you embed extracted values and labels into the log line by packing the log line and labels inside a JSON object.
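A sketch assuming the stage's `labels` and `ingest_timestamp` options; the label names are illustrative:

```yaml
pipeline_stages:
  - pack:
      labels:
        - pod                   # folded into the JSON body alongside the line
        - namespace
      ingest_timestamp: false   # keep the original entry timestamp
```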

@ -1,7 +1,12 @@
---
title: regex
description: regex stage
menuTitle:
description: The 'regex' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/regex/
weight:
---
# regex
The `regex` stage is a parsing stage that parses a log line using a regular

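For instance, a minimal sketch where each named capture group becomes an extracted key (the pattern is illustrative):

```yaml
pipeline_stages:
  - regex:
      expression: '^(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\]'
```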
@ -1,7 +1,12 @@
---
title: replace
description: replace stage
menuTitle:
description: The 'replace' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/replace/
weight:
---
# replace
The `replace` stage is a parsing stage that parses a log line using a regular

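A small sketch, assuming matched capture groups are rewritten to the `replace` value; the pattern is illustrative:

```yaml
pipeline_stages:
  - replace:
      expression: 'password=(\S+)'   # each capture group match is rewritten
      replace: '****'
```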
@ -1,7 +1,12 @@
---
title: sampling
description: sampling stage
menuTitle:
description: The 'sampling' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/sampling/
weight:
---
# sampling
The `sampling` stage is a stage that samples logs.
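A tentative sketch, assuming the stage's `rate` option is the fraction of lines to keep:

```yaml
pipeline_stages:
  - sampling:
      rate: 0.25   # keep roughly a quarter of the log lines
```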

@ -1,7 +1,12 @@
---
title: static_labels
description: static_labels stage
menuTitle:
description: The 'static_labels' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/static_labels/
weight:
---
# static_labels
The static_labels stage is an action stage that adds static-labels to the label set that is sent to Loki with the log entry.
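For example, a minimal sketch with placeholder key/value pairs:

```yaml
pipeline_stages:
  - static_labels:
      environment: production   # fixed labels added to every entry
      team: backend
```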

@ -1,7 +1,12 @@
---
title: template
description: template stage
menuTitle:
description: The 'template' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/template/
weight:
---
# template
The `template` stage is a transform stage that lets you manipulate the values in

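A sketch using the stage's `source` and `template` options; `ToUpper` is one of the helper functions I'd expect to be available, and the key name is illustrative:

```yaml
pipeline_stages:
  - template:
      source: level                      # extracted key to rewrite (created if absent)
      template: '{{ ToUpper .Value }}'   # Go template; .Value is the current value
```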
@ -1,7 +1,12 @@
---
title: tenant
description: tenant stage
menuTitle:
description: The 'tenant' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/tenant/
weight:
---
# tenant
The tenant stage is an action stage that sets the tenant ID for the log entry

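A sketch of two ways the tenant might be set, assuming the stage's `source` and `value` options; the key and tenant names are placeholders:

```yaml
pipeline_stages:
  - tenant:
      source: customer_id   # take the tenant ID from this extracted key
  # or, as a fixed value:
  # - tenant:
  #     value: team-a
```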
@ -1,7 +1,12 @@
---
title: timestamp
description: timestamp stage
menuTitle:
description: The 'timestamp' Promtail pipeline stage.
aliases:
- ../../../clients/promtail/stages/timestamp/
weight:
---
# timestamp
The `timestamp` stage is an action stage that can change the timestamp of a log

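A sketch that parses a field and promotes it to the entry timestamp; the field name is illustrative, and `RFC3339Nano` is one of the named Go reference layouts the stage accepts:

```yaml
pipeline_stages:
  - json:
      expressions:
        time: time
  - timestamp:
      source: time
      format: RFC3339Nano   # custom Go layouts also work here
```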
@ -1,8 +1,12 @@
---
title: Troubleshooting
description: Troubleshooting Promtail
title: Troubleshooting Promtail
menuTitle: Troubleshooting
description: Troubleshooting the Promtail agent
aliases:
- ../../clients/promtail/troubleshooting/
weight: 800
---
# Troubleshooting
# Troubleshooting Promtail
This document describes known failure modes of Promtail on edge cases and the
adopted trade-offs.

@ -1,6 +1,6 @@
---
title: Setup Loki
menuTitle: Setup
menuTitle: Set up
description: How to install and upgrade Loki, and how to migrate configurations.
weight: 300
---

@ -59,7 +59,7 @@ The configuration specifies running Loki as a single binary.
Loki runs and displays Loki logs in your command line and on http://localhost:3100/metrics.
The next step will be running an agent to send logs to Loki.
To do so with Promtail, refer to the [Promtail configuration]({{< relref "../../clients/promtail" >}}).
To do so with Promtail, refer to the [Promtail configuration]({{< relref "../../send-data/promtail" >}}).
## Release binaries - openSUSE Linux only

@ -411,7 +411,7 @@ This histogram reports the distribution of log line sizes by file. It has 8 buck
This creates a lot of series, and we don't think this metric has enough value to offset the amount of series generated, so we are removing it.
While this isn't a direct replacement, two metrics we find more useful are size and line counters configured via pipeline stages; an example of how to configure these metrics can be found in the [metrics pipeline stage docs](/docs/loki/latest/clients/promtail/stages/metrics/#counter)
While this isn't a direct replacement, two metrics we find more useful are size and line counters configured via pipeline stages; an example of how to configure these metrics can be found in the [metrics pipeline stage docs](/docs/loki/latest/send-data/promtail/stages/metrics/#counter)
#### `added Docker target` log message has been demoted from level=error to level=info
@ -927,7 +927,7 @@ If you happen to have `results_cache.max_freshness` set, use `limits_config.max_
### Promtail config removed
The long deprecated `entry_parser` config in Promtail has been removed, use [pipeline_stages]({{< relref "../../clients/promtail/configuration#pipeline_stages" >}}) instead.
The long deprecated `entry_parser` config in Promtail has been removed, use [pipeline_stages]({{< relref "../../send-data/promtail/configuration#pipeline_stages" >}}) instead.
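As a rough sketch of the replacement, assuming a simple file-based scrape config; the job name and path are placeholders:

```yaml
scrape_configs:
  - job_name: system
    pipeline_stages:
      - docker: {}               # replaces the removed entry_parser setting
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log
```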
### Upgrading schema to use boltdb-shipper and/or v11 schema

@ -1,7 +1,7 @@
---
title: Storage
description: Storage
weight: 1010
weight: 475
---
# Storage

@ -5,7 +5,7 @@ description: Visualize your log data with Grafana
aliases:
- ../getting-started/grafana/
- ../operations/grafana/
weight: 825
weight: 725
keywords:
- visualize
- grafana
