Too many label value combinations leads to too many streams. The penalties for that in Loki are a large index and small chunks in the store, which in turn can actually reduce performance.
To avoid those issues, don't add a label for something until you know you need it! Use filter expressions (`|= "text"`, `|~ "regex"`, …) and brute force those logs. It works -- and it's fast.
From early on, we have set a label dynamically using Promtail pipelines for `level`. This seemed intuitive for us as we often wanted to only show logs for `level="error"`; however, we are re-evaluating this now, as writing a query like `{app="loki"} |= "level=error"` is proving to be just as fast for many of our applications as `{app="loki",level="error"}`.
This may seem surprising, but if applications have medium to low volume, that label causes one application's logs to be split into up to five streams, which means 5x chunks being stored. And loading chunks has an overhead associated with it. Imagine now if that query were `{app="loki",level!="debug"}`. That would have to load **way** more chunks than `{app="loki"} != "level=debug"`.
What can we do about this? What if this was because the sources of these logs were different systems? We can solve this with an additional label which is unique per system:
{job="syslog", instance="host2"} 00:00:02 i'm a syslog! <- Accepted, still in order for stream 2
```
But what if the application itself generated logs that were out of order? Well, I'm afraid this is a problem. If you are extracting the timestamp from the log line with something like [the Promtail pipeline stage](https://grafana.com/docs/loki/latest/clients/promtail/stages/timestamp/), you could instead _not_ do this and let Promtail assign a timestamp to the log lines. Or you can hopefully fix it in the application itself.
But I want Loki to fix this! Why can’t you buffer streams and re-order them for me?! To be honest, because this would add a lot of memory overhead and complication to Loki, and as has been a common thread in this post, we want Loki to be simple and cost-effective. Ideally we would want to improve our clients to do some basic buffering and sorting as this seems a better place to solve this problem.
By adding our output plugin you can quickly try Loki without doing big configuration changes.
### Lambda Promtail
This is a workflow combining the Promtail push-api [scrape config](promtail/configuration#loki_push_api_config) and the [lambda-promtail](lambda-promtail/) AWS Lambda function which pipes logs from Cloudwatch to Loki.
This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS Lambda logs in Loki.
Now we're going to download the [Promtail configuration](../../promtail/) file below and edit it. Don't worry, we will explain what each part means.
The file is also available as a gist at [cyriltovena/promtail-ec2.yaml][config gist].
The **server** section tells Promtail to bind its HTTP server to port 3100. Promtail serves HTTP pages for [troubleshooting](../../promtail/troubleshooting) service discovery and targets.
The **clients** section allows you to target your Loki instance. If you're using Grafana Cloud, simply replace `<user id>` and `<api secret>` with your credentials. Otherwise, replace the whole URL with that of your custom Loki instance (e.g., `http://my-loki-instance.my-org.com/loki/api/v1/push`).
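For reference, the `clients` section is just a list of endpoints to push to. A minimal sketch, reusing the placeholder URL from the text above:

```yaml
clients:
  # Replace with your Grafana Cloud URL and credentials, or your own Loki endpoint.
  - url: http://my-loki-instance.my-org.com/loki/api/v1/push
```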
Finally, the [`relabel_configs`][relabel] section has three purposes:
1. Selecting which discovered labels you want to attach to your targets. In our case here, we're keeping `instance_id` as `instance`, the tag `Name` as `name`, and the `zone` of the instance. Make sure to check out the Prometheus [`ec2_sd_config`][ec2_sd_config] documentation for the full list of available labels.
2. Choosing where Promtail should find the log files to tail. In our example, we want to include all log files that exist in `/var/log` using the glob `/var/log/**.log`. If you need multiple globs, you can simply add another job to your `scrape_configs`.
3. Ensuring discovered targets are only for the machine Promtail currently runs on. This is achieved by adding the label `__host__` using the incoming metadata `__meta_ec2_private_dns_name`. If it doesn't match the current `HOSTNAME` environment variable, the target will be dropped. (A sketch combining these steps follows this list.)
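Putting the three purposes together, here is a sketch of what such a `scrape_configs` section could look like. The job name and region are illustrative assumptions, not values from the gist:

```yaml
scrape_configs:
  - job_name: ec2-logs           # illustrative name
    ec2_sd_configs:
      - region: us-east-1        # assumed region; use your own
    relabel_configs:
      # 1. Keep selected discovered labels.
      - source_labels: [__meta_ec2_instance_id]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Name]
        target_label: name
      - source_labels: [__meta_ec2_availability_zone]
        target_label: zone
      # 2. Tell Promtail which files to tail.
      - replacement: /var/log/**.log
        target_label: __path__
      # 3. Keep only targets matching the machine Promtail runs on.
      - source_labels: [__meta_ec2_private_dns_name]
        target_label: __host__
```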
Alright, we should be ready to fire up Promtail. We're going to run it using the flag `--dry-run`, which is perfect for ensuring everything is configured correctly, especially when you're still playing around with the configuration. Don't worry: in this mode, Promtail won't send any logs and won't remember any file positions.
Loki includes an [AWS SAM](https://aws.amazon.com/serverless/sam/) package template for shipping Cloudwatch logs to Loki via a [set of Promtails](https://github.com/grafana/loki/tree/master/tools/lambda-promtail). This is done via an intermediary [Lambda function](https://aws.amazon.com/lambda/) which processes Cloudwatch events and propagates them to a Promtail instance (or a set of instances behind a load balancer) via the push-api [scrape config](../promtail/configuration#loki_push_api_config).
## Uses
This workflow is intended to be an effective approach for monitoring ephemeral jobs such as those run on AWS Lambda, which are otherwise hard to monitor via one of the other Loki clients.
Ephemeral jobs can quite easily run afoul of cardinality best practices. During high request load, an AWS lambda function might balloon in concurrency, creating many log streams in Cloudwatch. However, these may only be active for a very short while. This creates a problem for combining these short-lived log streams in Loki because timestamps may not strictly increase across multiple log streams. The other obvious route is creating labels based on log streams, which is also undesirable because it leads to cardinality problems via many low-throughput log streams.
Instead we can pipeline Cloudwatch logs to a set of Promtails, which can mitigate these problems in two ways:
1) Using Promtail's push API along with the `use_incoming_timestamp: false` config, we let Promtail determine the timestamp based on when it ingests the logs, not the timestamp assigned by Cloudwatch. Obviously, this means that we lose the origin timestamp because Promtail now assigns it, but this is a relatively small difference in a real-time ingestion system like this.
2) In conjunction with (1), Promtail can coalesce logs across Cloudwatch log streams because it's no longer susceptible to `out-of-order` errors when combining multiple sources (lambda invocations).
One important aspect to keep in mind when running a set of Promtails behind a load balancer is that we're effectively moving the cardinality problem from `number_of_log_streams` -> `number_of_promtails`. You'll need to assign a Promtail-specific label on each Promtail so that you don't run into `out-of-order` errors when the Promtails send data for the same log groups to Loki. This can easily be done via a config like `--client.external-labels=promtail=${HOSTNAME}` passed to Promtail.
### Proof of concept Loki deployments
For those using Cloudwatch and wishing to test out Loki in a low-risk way, this workflow allows piping Cloudwatch logs to Loki regardless of the event source (EC2, Kubernetes, Lambda, ECS, etc) without setting up a set of Promtail daemons across their infrastructure. However, running Promtail as a daemon on your infrastructure is the best-practice deployment strategy in the long term for flexibility, reliability, performance, and cost.
Note: Propagating logs from Cloudwatch to Loki means you'll still need to _pay_ for Cloudwatch.
## Propagated Labels
Incoming logs will have three special labels assigned to them which can be used in [relabeling](../promtail/configuration/#relabel_config) or later stages in a Promtail [pipeline](../promtail/pipelines/):
- `__aws_cloudwatch_log_group`: The associated Cloudwatch Log Group for this log.
- `__aws_cloudwatch_log_stream`: The associated Cloudwatch Log Stream for this log.
- `__aws_cloudwatch_owner`: The AWS ID of the owner of this log event.
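For example, to keep the log group as a queryable label, a relabel rule along these lines could be used:

```yaml
relabel_configs:
  # Turn the internal Cloudwatch label into a regular stream label.
  - source_labels: ['__aws_cloudwatch_log_group']
    target_label: 'log_group'
```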
### Promtail labels
As stated earlier, this workflow moves the worst-case stream cardinality from `number_of_log_streams` -> `number_of_log_groups` * `number_of_promtails`. For this reason, each Promtail must have a unique label attached to the logs it processes (ideally via something like `--client.external-labels=promtail=${HOSTNAME}`), and it's advised to run a small number of Promtails behind a load balancer according to your throughput and redundancy needs.
This trade-off is very effective when you have a large number of log streams but want to aggregate them by the log group. This is very common in AWS Lambda, where log groups are the "application" and log streams are the individual application containers which are spun up and down at a whim, possibly just for a single function invocation.
#### Availability
For availability concerns, run a set of Promtails behind a load balancer.
#### Batching
Since Promtail batches writes to Loki for performance, it's possible that Promtail will receive a log, issue a successful `204` HTTP status code for the write, then be killed at a later time before it writes upstream to Loki. This should be rare, but it is a downside this workflow has.
### Templating
The current SAM template is rudimentary. If you need to add VPC configs, extra log groups to monitor, subnet declarations, etc., you'll need to edit the template manually.
## Example Promtail Config
Note: this should be run in conjunction with a Promtail-specific label attached, ideally via a flag argument like `--client.external-labels=promtail=${HOSTNAME}`. It will receive writes via the push-api on ports `3500` (HTTP) and `3600` (gRPC).
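The full example isn't reproduced here, but a sketch consistent with that description might look like the following. The client URL is a placeholder, and using `${HOSTNAME}` inside the YAML assumes Promtail runs with `-config.expand-env=true` (the flag form above works without it):

```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

clients:
  - url: http://<loki-host>:3100/loki/api/v1/push  # placeholder
    external_labels:
      promtail: ${HOSTNAME}  # the Promtail-specific label discussed above

scrape_configs:
  - job_name: push
    loki_push_api:
      server:
        http_listen_port: 3500
        grpc_listen_port: 3600
      use_incoming_timestamp: false
```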
Where `default_value` is the value to use if the environment variable is undefined.
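For instance, assuming a `LOKI_HOST` environment variable:

```yaml
clients:
  # Falls back to "localhost" when LOKI_HOST is not set.
  - url: http://${LOKI_HOST:-localhost}:3100/loki/api/v1/push
```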
```yaml
# WARNING: If one of the remote Loki servers fails to respond or responds
# with any error which is retryable, this will impact sending logs to any
# other configured remote Loki servers. Sending is done on a single thread!
# It is generally recommended to run multiple Promtail clients in parallel
# if you want to send to multiple remote Loki instances.
clients:
  - [<client_config>]
```
The `server` block configures Promtail's behavior as an HTTP server:

```yaml
# Base path to serve all API routes from (e.g., /v1/).
[http_path_prefix: <string>]
# Target managers check flag for Promtail readiness; if set to false, the check is ignored.
[health_check_target: <bool> | default = true]
```
[Pipeline](../pipelines/) stages are used to transform log entries and their labels. The pipeline is executed after the discovery process finishes. The `pipeline_stages` object consists of a list of stages which correspond to the items listed below.
In most cases, you extract data from logs with `regex` or `json` stages. The extracted data is transformed into a temporary map object. The data can then be used by Promtail, e.g., as values for `labels` or as an `output`. Additionally, any other stage aside from `docker` and `cri` can access the extracted data.
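As an illustration, here is a sketch of a pipeline that extracts a value with a `regex` stage and then promotes it to a label; the `level=` log format is an assumption:

```yaml
pipeline_stages:
  # Extract a named capture group into the temporary extracted map.
  - regex:
      expression: 'level=(?P<level>\w+)'
  # Promote the extracted value to a stream label.
  - labels:
      level:
```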
### syslog
The `syslog` block configures a syslog listener allowing users to push
logs to Promtail with the syslog protocol.
Currently supported is [IETF Syslog (RFC5424)](https://tools.ietf.org/html/rfc5424)
with and without octet counting.
The recommended deployment is to have a dedicated syslog forwarder like **syslog-ng** or **rsyslog**
in front of Promtail. The forwarder can take care of the various specifications
and transports that exist (UDP, BSD syslog, ...).
[Octet counting](https://tools.ietf.org/html/rfc6587#section-3.4.1) is recommended as the
message framing method. In a stream with [non-transparent framing](https://tools.ietf.org/html/rfc6587#section-3.4.2),
Promtail needs to wait for the next message to catch multi-line messages,
therefore delays between messages can occur.
See recommended output configurations for
[syslog-ng](../scraping#syslog-ng-output-configuration) and
[rsyslog](../scraping#rsyslog-output-configuration). Both configurations enable
IETF Syslog with octet-counting.
You may need to increase the open files limit for the Promtail process
if many clients are connected. (`ulimit -Sn`)
```yaml
# Whether to translate syslog structured data to labels.
label_structured_data: <bool>
labels:
[ <labelname>: <labelvalue> ... ]
# Whether Promtail should pass on the timestamp from the incoming syslog message.
# When false, or if no timestamp is present on the syslog message, Promtail will assign the current timestamp to the log when it was processed.
# Default is false
use_incoming_timestamp: <bool>
```

Note the `server` configuration is the same as [server](#server).

```yaml
labels:
[ <labelname>: <labelvalue> ... ]
# If Promtail should pass on the timestamp from the incoming log or not.
# When false, Promtail will assign the current timestamp to the log when it was processed.
use_incoming_timestamp: <bool>
```
It's fairly difficult to tail Docker files on a standalone machine because they are in different locations for every OS. We recommend the [Docker logging driver](../../docker-driver/) for local Docker installs or Docker Compose.
If running in a Kubernetes environment, you should look at the defined configs which are in [helm](https://github.com/grafana/helm-charts/blob/main/charts/promtail/templates/configmap.yaml) and [jsonnet](https://github.com/grafana/loki/tree/master/production/ksonnet/promtail/scrape_config.libsonnet); these leverage the Prometheus service discovery libraries (and give Promtail its name) for automatically finding and tailing pods. The jsonnet config explains with comments what each section is for.
## Example Static Config
While Promtail may have been named for the Prometheus service discovery code, that same code works very well for tailing logs without containers or container environments, directly on virtual machines or bare metal.
```yaml
server:
  grpc_listen_port: 0

positions:
  filename: /var/log/positions.yaml # This location needs to be writeable by Promtail.
```

If you are rotating logs, be careful when using a wildcard pattern like `*.log`, and make sure it doesn't match the rotated log file.
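The `scrape_configs` part of the example was not carried over; a typical static config for tailing files like the `*.log` pattern just mentioned would be along these lines:

```yaml
scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*.log
```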
## Example Static Config without targets
While Promtail may have been named for the Prometheus service discovery code, that same code works very well for tailing logs without containers or container environments, directly on virtual machines or bare metal.
```yaml
server:
  grpc_listen_port: 0

positions:
  filename: /var/log/positions.yaml # This location needs to be writeable by Promtail.
```
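Again, the `scrape_configs` part is missing above. Without targets, the same idea reduces to a labels-only static config, e.g.:

```yaml
scrape_configs:
  - job_name: system
    static_configs:
      - labels:
          job: varlogs
          __path__: /var/log/*.log
```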
Please note the `job_name` must be provided and must be unique between multiple `loki_push_api` scrape configs; it will be used to register metrics.
A new server instance is created, so the `http_listen_port` and `grpc_listen_port` must be different from the Promtail `server` config section (unless it's disabled).
You can set `grpc_listen_port` to `0` to have a random port assigned if not using httpgrpc.
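A sketch of such a scrape config, with an illustrative job name and ports chosen not to collide with the main `server` section:

```yaml
scrape_configs:
  - job_name: push1  # must be unique across loki_push_api jobs
    loki_push_api:
      server:
        http_listen_port: 3500
        grpc_listen_port: 3600  # or 0 for a random port if not using httpgrpc
      labels:
        pushserver: push1
```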
This document explains how to set up Google Cloud Platform to forward its cloud resource logs from a particular GCP project into a Google Pubsub topic so that they are available for Promtail to consume.
This document assumes that the reader has `gcloud` installed and the required permissions (as mentioned in the [Roles and Permission] section).
The user should have the following roles to complete the setup.
## Setup Pubsub Topic
The Google Pubsub topic will act as the queue to persist log messages, which can then be read by Promtail.
```bash
$ gcloud pubsub topics create $TOPIC_ID
```

We cover more advanced `log-filter` usage [below](#Advanced-Log-filter).
## Create Pubsub subscription for Loki
We create a subscription for the Pubsub topic created above; Promtail uses this subscription to consume log messages.
For more fine-grained options, refer to `gcloud pubsub subscriptions --help`.
We need a service account with the following permissions:
- pubsub.subscriber
This enables Promtail to read log entries from the pubsub subscription created before.
You can find an example Promtail scrape config for `gcplog` [here](../scraping/#gcplog-scraping).
If you are scraping logs from multiple GCP projects, this service account should have the above permissions in all the projects you are trying to scrape.
Sometimes you may wish to clear the pending pubsub queue containing logs.
These messages stay in the Pubsub subscription until they're acknowledged. The following command removes log messages without their having to be consumed via Promtail or any other Pubsub consumer.
There are different types of labels present in Promtail:

- See the Prometheus [`kubernetes_sd_config`](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config) documentation for the full list of Kubernetes meta labels.
- The `__path__` label is a special label which Promtail uses after discovery to
figure out where the file to read is located. Wildcards are allowed, for example `/var/log/*.log` to get all files with a `log` extension in the specified directory, and `/var/log/**/*.log` for matching files and directories recursively. For a full list of options check out the docs for the [library](https://github.com/bmatcuk/doublestar) Promtail uses.
- The label `filename` is added for every file found in `__path__` to ensure the
uniqueness of the streams. It is set to the absolute path of the file the line was read from.
Configs are set in the `gcplog` section of `scrape_config`.
Here `project_id` and `subscription` are the only required fields.
- `project_id` is the GCP project id.
- `subscription` is the GCP Pubsub subscription from which Promtail can consume log entries.
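A sketch of a `gcplog` scrape config using these fields, with placeholder project and subscription names, including the `__project_id` relabeling referenced below:

```yaml
scrape_configs:
  - job_name: gcplog
    gcplog:
      project_id: "your-gcp-project"  # placeholder
      subscription: "cloud-logs"      # placeholder
      use_incoming_timestamp: false   # let Promtail assign timestamps
      labels:
        job: "gcplog"
    relabel_configs:
      - source_labels: ['__project_id']
        target_label: 'project'
```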
Before using the `gcplog` target, GCP should be [configured](../gcplog-cloud) with a Pubsub subscription to receive logs from.
It also supports `relabeling` and `pipeline` stages, just like other targets.
When Promtail receives GCP logs the labels that are set on the GCP resources are available as internal labels. Like in the example above, the `__project_id` label from a GCP resource was transformed into a label called `project` through `relabel_configs`. See [Relabeling](#relabeling) for more information.
Log entries scraped by `gcplog` will add an additional label called `promtail_instance`. This label uniquely identifies each Promtail instance trying to scrape gcplog (from a single `subscription_id`).
We need this unique identifier to avoid out-of-order errors from Loki servers.
For example, two Promtail instances rewriting timestamps of log entries (with the same label set) at the same time may reach the Loki servers at different times, causing Loki to reject the later-arriving entries.
This document describes known failure modes of Promtail on edge cases and the
adopted trade-offs.
## Dry running
Promtail can be configured to print log stream entries instead of sending them to Loki.
This can be used in combination with [piping data](#pipe-data-to-promtail) to debug or troubleshoot Promtail log parsing.
In dry run mode, Promtail still supports reading from a [positions](../configuration#position_config) file; however, no updates will be made to the targeted file. This ensures you can easily retry the same set of lines.
In order to receive and process syslog messages in Promtail, the following changes will be necessary:
* Review the [Promtail syslog-receiver configuration documentation](/docs/clients/promtail/scraping.md#syslog-receiver)
* Configure the Promtail helm chart with the syslog configuration added to the `extraScrapeConfigs` section and an associated service definition to listen for syslog messages. For example (a reconstructed sketch; the `listen_address` and service type are assumed values):
```yaml
extraScrapeConfigs:
  - job_name: syslog
    syslog:
      listen_address: 0.0.0.0:1514
      labels:
        job: syslog
syslogService:
  type: LoadBalancer
  port: 1514
```
## Run Promtail with systemd-journal support
In order to receive and process journal messages in Promtail, the following changes will be necessary:
* Review the [Promtail systemd-journal configuration documentation](/docs/clients/promtail/scraping.md#journal-scraping-linux-only)
* Configure the Promtail helm chart with the systemd-journal configuration added to the `extraScrapeConfigs` section and volume mounts for the Promtail pods to access the log files. For example:
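The example itself was not carried over, so here is a sketch of what it could look like, assuming the default journal path `/var/log/journal`; the `extraVolumes`/`extraVolumeMounts` keys follow the Promtail helm chart's conventions:

```yaml
extraScrapeConfigs:
  - job_name: journal
    journal:
      path: /var/log/journal
      labels:
        job: systemd-journal
    relabel_configs:
      # Expose the systemd unit as a label.
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'

# Mount the host journal into the Promtail pods.
extraVolumes:
  - name: journal
    hostPath:
      path: /var/log/journal

extraVolumeMounts:
  - name: journal
    mountPath: /var/log/journal
    readOnly: true
```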