- We now give readers versioned documentation, so they don't need to know which version a feature appeared in.
- I couldn't resist fixing some grammar, formatting, and phrasing in the sections that needed changes.
@@ -107,20 +107,20 @@ It's also worth noting that the batching nature of the Loki push API can lead to
## Use `chunk_target_size`
This was added earlier in the [Loki v1.3.0](https://grafana.com/blog/2020/01/22/loki-1.3.0-released/) release, and we've been experimenting with it for several months. We have `chunk_target_size: 1536000` in all our environments now. This instructs Loki to try to fill all chunks to a target _compressed_ size of 1.5MB. These larger chunks are more efficient for Loki to process.
Using `chunk_target_size` instructs Loki to try to fill all chunks to a target _compressed_ size of 1.5MB. These larger chunks are more efficient for Loki to process.
A couple other config variables affect how full a chunk can get. Loki has a default `max_chunk_age` of 1h and `chunk_idle_period` of 30m to limit the amount of memory used as well as the exposure of lost logs if the process crashes.
Other configuration variables affect how full a chunk can get. Loki has a default `max_chunk_age` of 1h and `chunk_idle_period` of 30m to limit the amount of memory used as well as the exposure of lost logs if the process crashes.
Depending on the compression used (we have been using snappy, which has lower compressibility but faster performance), you need 5-10x, or 7.5-10MB, of raw log data to fill a 1.5MB chunk. Remember that a chunk is created per stream: the more streams you break your log files into, the more chunks sit in memory, and the greater the likelihood that they are flushed by hitting one of the timeouts mentioned above before they are filled.
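For reference, these knobs all live in the `ingester` block of the Loki configuration. A minimal sketch using the values discussed above (tune them to your own environment):

```yaml
ingester:
  chunk_target_size: 1536000   # aim for ~1.5MB compressed chunks
  chunk_idle_period: 30m       # flush a chunk that has received no new logs for 30m
  max_chunk_age: 1h            # flush a chunk once it is 1h old, even if not full
  chunk_encoding: snappy       # faster, but compresses less than gzip
```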
Lots of small, unfilled chunks are currently kryptonite for Loki. We are always working to improve this and may consider a compactor to improve this in some situations. But, in general, the guidance should stay about the same: Try your best to fill chunks!
Lots of small, unfilled chunks negatively affect Loki. We are always working to improve this and may consider a compactor to improve this in some situations. But, in general, the guidance should stay about the same: try your best to fill chunks.
If you have an application that can log fast enough to fill these chunks quickly (in much less time than `max_chunk_age`), then it becomes more reasonable to use dynamic labels to break its output into separate streams.
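For illustration only, here is a hypothetical Promtail scrape config that promotes a parsed log level to a label, splitting one high-volume file into a stream per level. The job name, path, and regex are assumptions, not part of this guide:

```yaml
scrape_configs:
  - job_name: busy-app                       # hypothetical job name
    static_configs:
      - targets: [localhost]
        labels:
          job: busy-app
          __path__: /var/log/busy-app/*.log  # hypothetical path
    pipeline_stages:
      - regex:
          expression: '^(?P<level>\w+):'     # assumes lines start with "INFO:", "ERROR:", ...
      - labels:
          level:                             # promote the captured group to a stream label
```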
## Use `-print-config-stderr` or `-log-config-reverse-order`
Starting in version 1.6.0 Loki and Promtail have flags which will dump the entire config object to stderr, or the log file, when they start.
Loki and Promtail have flags which will dump the entire config object to stderr or the log file when they start.
`-print-config-stderr`is nice when running loki directly e.g. `./loki ` as you can get a quick output of the entire Loki config.
`-print-config-stderr` works well when invoking Loki from the command line, as you can get a quick output of the entire Loki configuration.
`-log-config-reverse-order` is the flag we run Loki with in all our environments, the config entries are reversed so that the order of configs reads correctly top to bottom when viewed in Grafana's Explore.
`-log-config-reverse-order` is the flag we run Loki with in all our environments. The configuration entries are reversed, so that the configuration reads correctly from top to bottom when viewed in Grafana's Explore.
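As one illustration (not from the original docs), the flag simply goes wherever Loki's command-line arguments are defined; a hypothetical Kubernetes container snippet might look like this:

```yaml
containers:
  - name: loki
    image: grafana/loki:2.0.0          # example tag; use the version you deploy
    args:
      - -config.file=/etc/loki/loki.yaml
      - -log-config-reverse-order      # log the config at startup, reversed for Explore
```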
@@ -28,7 +28,7 @@ When finished, `loki-config.yaml` and `promtail-config.yaml` are downloaded in t
Navigate to http://localhost:3100/metrics to view the metrics and http://localhost:3100/ready for readiness.
As of v1.6.0, image is configured to run by default as user loki with UID `10001` and GID `10001`. You can use a different user, specially if you are using bind mounts, by specifying the UID with a `docker run` command and using `--user=UID` with numeric UID suited to your needs.
The image is configured to run by default as user loki with UID `10001` and GID `10001`. You can use a different user, especially if you are using bind mounts, by passing `--user=UID` to the `docker run` command with a numeric UID suited to your needs.
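For example, with docker-compose the `user:` field is the equivalent of `--user` on `docker run`. A minimal sketch, assuming a bind-mounted data directory owned by UID 1000 (the paths and UID are examples only):

```yaml
version: "3"
services:
  loki:
    image: grafana/loki:2.0.0                           # example tag
    user: "1000:1000"                                   # overrides the default 10001:10001
    ports:
      - "3100:3100"
    volumes:
      - ./loki-config.yaml:/etc/loki/local-config.yaml  # your Loki config
      - ./loki-data:/loki                               # must be writable by UID 1000; match your storage paths
    command: -config.file=/etc/loki/local-config.yaml
```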
@@ -9,9 +9,6 @@ Make sure you have Helm [installed](https://helm.sh/docs/using_helm/#installing-
Add [Loki's chart repository](https://github.com/grafana/helm-charts) to Helm:
> **PLEASE NOTE** On 2020/12/11 Loki's Helm charts were moved from their initial location within the
Loki repo and hosted at https://grafana.github.io/loki/charts to their new location at https://github.com/grafana/helm-charts which are hosted at https://grafana.github.io/helm-charts
@@ -87,11 +84,11 @@ output above. Then follow the [instructions for adding the Loki Data Source](../
## Run Loki behind HTTPS ingress
If Loki and Promtail are deployed on different clusters you can add an Ingress
in front of Loki. By adding a certificate you create an HTTPS endpoint. For
extra security you can also enable Basic Authentication on the Ingress.
If Loki and Promtail are deployed on different clusters, you can add an Ingress
in front of Loki. By adding a certificate, you create an HTTPS endpoint. For
extra security, you can also enable Basic Authentication on the Ingress.
In Promtail, set the following values to communicate using HTTPS and basic authentication:
In the Promtail configuration, set the following values to communicate using HTTPS and basic authentication:
```yaml
loki:
@@ -127,11 +124,11 @@ spec:
## Run Promtail with syslog support
In order to receive and process syslog message into Promtail, the following changes will be necessary:
To receive and process syslog messages in Promtail, the following changes are necessary:
* Review the [Promtail syslog-receiver configuration documentation](/docs/clients/promtail/scraping.md#syslog-receiver)
* Configure the Promtail helm chart with the syslog configuration added to the `extraScrapeConfigs` section and associated service definition to listen for syslog messages. For example:
* Configure the Promtail Helm chart with the syslog configuration added to the `extraScrapeConfigs` section and associated service definition to listen for syslog messages. For example:
```yaml
extraScrapeConfigs:
@@ -155,7 +152,7 @@ In order to receive and process syslog message into Promtail, the following chan
* Review the [Promtail systemd-journal configuration documentation](/docs/clients/promtail/scraping.md#journal-scraping-linux-only)
* Configure the Promtail helm chart with the systemd-journal configuration added to the `extraScrapeConfigs` section and volume mounts for the Promtail pods to access the log files. For example:
* Configure the Promtail Helm chart with the systemd-journal configuration added to the `extraScrapeConfigs` section and volume mounts for the Promtail pods to access the log files. For example:
@@ -15,26 +15,28 @@ In order to log events with Loki, you must download and install both Promtail an
**Note:** Do not download LogCLI or Loki Canary at this time. [LogCLI](../../getting-started/logcli/) allows you to run Loki queries in a command line interface. [Loki Canary](../../operations/loki-canary/) is a tool to audit Loki performance.
4. Unzip the package contents into the same directory. This is where the two programs will run.
5. In the command line, change directory (`cd` on most systems) to the directory with Loki and Promtail. Copy and paste the commands below into your command line to download generic configuration files:
Loki runs and displays its logs in your command line, and serves metrics at http://localhost:3100/metrics.
Congratulations, Loki is installed and running! Next, you might want edit the Promtail config file to [get logs into Loki](../../getting-started/get-logs-into-loki/).
The next step will be running an agent to send logs to Loki.
To do so with Promtail, refer to [get logs into Loki](../../getting-started/get-logs-into-loki/).