@@ -54,7 +54,7 @@ By adding our output plugin you can quickly try Loki without doing big configura
### Lambda Promtail
-This is a workflow combining the promtail push-api [scrape config](./promtail/configuration#loki_push_api_config) and the [lambda-promtail](../../tools/lambda-promtail/) AWS Lambda function which pipes logs from Cloudwatch to Loki.
+This is a workflow combining the promtail push-api [scrape config](promtail/configuration#loki_push_api_config) and the [lambda-promtail](lambda-promtail/) AWS Lambda function, which pipes logs from Cloudwatch to Loki.
This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS lambda logs in Loki.
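For illustration, the promtail side of this workflow is a `loki_push_api` scrape config; a minimal sketch follows, in which the port numbers and the `source` label are assumptions to adapt, not required defaults:

```yaml
scrape_configs:
  - job_name: lambda-push
    loki_push_api:
      server:
        http_listen_port: 3500  # lambda-promtail pushes log entries to this port (assumed)
        grpc_listen_port: 3600
      labels:
        source: lambda-promtail  # static label attached to every pushed entry
```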
@@ -99,7 +99,7 @@ Once deployed, the Grafana service will send its logs to Loki.
## Labels
-Loki can received a set of labels along with log line. These labels are used to index log entries and query back logs using [LogQL stream selector](../../logql#log-stream-selector).
+Loki can receive a set of labels along with each log line. These labels are used to index log entries and query back logs using a [LogQL stream selector](../../../logql/#log-stream-selector).
By default, the Docker driver will add the following labels to each log line:
@@ -198,9 +198,9 @@ To specify additional logging driver options, you can use the --log-opt NAME=VAL
| `loki-min-backoff` | No | `100ms` | The minimum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-max-backoff` | No | `10s` | The maximum amount of time to wait before retrying a batch. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `loki-retries` | No | `10` | The maximum number of retries for a log batch. |
-| `loki-pipeline-stage-file` | No | | The location of a pipeline stage configuration file ([example](https://github.com/grafana/loki/blob/master/cmd/docker-driver/pipeline-example.yaml)). Pipeline stages allows to parse log lines to extract more labels. [see documentation](../promtail/stages/) |
-| `loki-pipeline-stages` | No | | The pipeline stage configuration provided as a string [see](#pipeline-stages) and [see documentation](../promtail/stages/) |
-| `loki-relabel-config` | No | | A [Prometheus relabeling configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) allowing you to rename labels [see](#relabeling) |
+| `loki-pipeline-stage-file` | No | | The location of a pipeline stage configuration file ([example](https://github.com/grafana/loki/blob/master/cmd/docker-driver/pipeline-example.yaml)). Pipeline stages allow parsing log lines to extract more labels, [see the associated documentation](../../promtail/stages/). |
+| `loki-pipeline-stages` | No | | The pipeline stage configuration provided as a string, [see pipeline stages](#pipeline-stages) and the [associated documentation](../../promtail/stages/). |
+| `loki-relabel-config` | No | | A [Prometheus relabeling configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) allowing you to rename labels, [see relabeling](#relabeling). |
| `loki-tenant-id` | No | | Set the tenant ID (HTTP header `X-Scope-OrgID`) when sending logs to Loki. It can be overridden by a pipeline stage. |
| `loki-tls-ca-file` | No | | Set the path to a custom certificate authority. |
| `loki-tls-cert-file` | No | | Set the path to a client certificate file. |
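For example, a hedged sketch of how several of these options combine in a `docker-compose.yaml` service definition (the Loki URL assumes a locally running instance):

```yaml
version: "3"
services:
  app:
    image: nginx  # any image whose logs you want to ship
    logging:
      driver: loki
      options:
        loki-url: "http://localhost:3100/loki/api/v1/push"  # assumed local Loki endpoint
        loki-retries: "5"
        loki-max-backoff: "5s"
        loki-tenant-id: "dev"  # optional; sets the X-Scope-OrgID header
```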
@@ -21,7 +21,7 @@ docker run -v /var/log:/var/log \
You can run Fluent Bit as a [Daemonset](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) to collect all your Kubernetes workload logs.
-To do so you can use our [Fluent Bit helm chart](../../production/helm/fluent-bit/README):
+To do so, you can use our [Fluent Bit helm chart](https://github.com/grafana/loki/tree/master/production/helm/fluent-bit):
> Make sure [tiller](https://helm.sh/docs/install/) is installed correctly in your cluster
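As a rough sketch, assuming the chart exposes the Loki endpoint under a `loki` key (check the chart's own `values.yaml` for the authoritative names), an override file might look like:

```yaml
loki:
  serviceName: loki  # assumed name of the Loki service in your cluster
  servicePort: 3100
```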
@@ -103,7 +103,7 @@ If set to true, it will add all Kubernetes labels to Loki labels automatically a
### LabelMapPath
When using the `Parser` and `Filter` plugins, Fluent Bit can extract and add data to the current record/log data. While Loki labels are key-value pairs, record data can be nested structures.
-You can pass a json file that defines how to extract [labels](../../docs/sources/overview#overview-of-loki) from each record. Each json key from the file will be matched with the log record to find label values. Values from the configuration are used as label names.
+You can pass a JSON file that defines how to extract [labels](../../getting-started/labels/) from each record. Each JSON key from the file will be matched with the log record to find label values. Values from the configuration are used as label names.
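A hypothetical label map file is shown below: each leaf value (`container`, `pod`, `namespace`, `stream`) becomes a Loki label name, populated from the matching key in the record:

```json
{
  "kubernetes": {
    "container_name": "container",
    "pod_name": "pod",
    "namespace_name": "namespace"
  },
  "stream": "stream"
}
```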
-Loki includes an [AWS SAM](https://aws.amazon.com/serverless/sam/) package template for shipping Cloudwatch logs to Loki via a set of promtails [here](../../../tools/lambda-promtail/). This is done via an intermediary [lambda function](https://aws.amazon.com/lambda/) which processes cloudwatch events and propagates them to a promtail instance (or set of instances behind a load balancer) via the push-api [scrape config](../promtail/configuration#loki_push_api_config).
+Loki includes an [AWS SAM](https://aws.amazon.com/serverless/sam/) package template for shipping Cloudwatch logs to Loki via a set of promtails [here](https://github.com/grafana/loki/tree/master/tools/lambda-promtail). This is done via an intermediary [lambda function](https://aws.amazon.com/lambda/) which processes cloudwatch events and propagates them to a promtail instance (or a set of instances behind a load balancer) via the push-api [scrape config](../promtail/configuration#loki_push_api_config).
## Uses
@@ -25,7 +25,7 @@ Note: Propagating logs from Cloudwatch to Loki means you'll still need to _pay_
## Propagated Labels
-Incoming logs will have three special labels assigned to them which can be used in [relabeling](../promtail#relabel_config) or later stages in a promtail [pipeline](../promtail/pipelines):
+Incoming logs will have three special labels assigned to them which can be used in [relabeling](../promtail/configuration/#relabel_config) or later stages in a promtail [pipeline](../promtail/pipelines/) (see the sketch after this list):
- `__aws_cloudwatch_log_group`: The associated Cloudwatch Log Group for this log.
- `__aws_cloudwatch_log_stream`: The associated Cloudwatch Log Stream for this log.
- `__aws_cloudwatch_owner`: The AWS ID of the owner of this event.
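For example, a hedged relabeling sketch that promotes the Cloudwatch log group to a queryable `log_group` label (the job name and port are assumptions carried over from a push-api config):

```yaml
scrape_configs:
  - job_name: lambda-push
    loki_push_api:
      server:
        http_listen_port: 3500  # assumed push-api port
    relabel_configs:
      # __-prefixed labels are internal and dropped unless relabeled
      - source_labels: [__aws_cloudwatch_log_group]
        target_label: log_group
```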
This will automatically scrape all pod logs in the cluster and send them to Loki with Kubernetes metadata attached as labels.
-You can use the [`values.yaml`](../../../production/helm/loki-stack/values.yaml) file as a starting point for your own configuration.
+You can use the [`values.yaml`](https://github.com/grafana/loki/blob/master/production/helm/loki-stack/values.yaml) file as a starting point for your own configuration.
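As a sketch, assuming the usual component toggles in that file, a minimal override that ships logs with promtail but skips the bundled Grafana could look like:

```yaml
loki:
  enabled: true
promtail:
  enabled: true
grafana:
  enabled: false  # set to true to also deploy a bundled Grafana
```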
## Usage and Configuration
@@ -100,7 +100,7 @@ A logstash event as shown below.
It contains `message` and `@timestamp` fields, which are respectively used to form the Loki entry's log line and timestamp.
-> You can use a different property for the log line by using the configuration property [`message_field`](message_field). If you also need to change the timestamp value use the Logstash `date` filter to change the `@timestamp` field.
+> You can use a different property for the log line by using the configuration property [`message_field`](#message_field). If you also need to change the timestamp value, use the Logstash `date` filter to change the `@timestamp` field.
All other fields (except nested fields) will form the label set (key value pairs) attached to the log line. [This means you're responsible for mutating and dropping high cardinality labels](https://grafana.com/blog/2020/04/21/how-labels-in-loki-can-make-log-queries-faster-and-easier/) such as client IPs.
You can usually do so by using a [`mutate`](https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html) filter.
Loki runs and displays Loki logs in your command line and on http://localhost:3100/metrics.
-Congratulations, Loki is installed and running! Next, you might want edit the Promtail config file to [get logs into Loki](../getting-started/get-logs-into-loki/).
+Congratulations, Loki is installed and running! Next, you might want to edit the Promtail config file to [get logs into Loki](../../getting-started/get-logs-into-loki/).
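For example, a minimal Promtail configuration that tails syslog files; the paths and labels here are illustrative:

```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml  # where Promtail stores read offsets

clients:
  - url: http://localhost:3100/loki/api/v1/push  # assumed local Loki

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*log  # glob of files to tail
```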
@@ -91,7 +91,7 @@ ttl can be configured using `cache_ttl` config.
### Write Deduplication disabled
-Loki does write deduplication of chunks and index using Chunks and WriteDedupe cache respectively, configured with [ChunkStoreConfig](../../configuration/README#chunk_store_config).
+Loki does write deduplication of chunks and index using the Chunks and WriteDedupe caches respectively, configured with [ChunkStoreConfig](../../../configuration/#chunk_store_config).
The problem with write deduplication when using `boltdb-shipper`, though, is that ingesters only upload boltdb files periodically to make them available to all the other services, which means there would be a brief period where some of the services would not have received the updated index yet.
The consequence is that if the ingester which first wrote the chunks and index goes down, and all the other ingesters which were part of the replication scheme skipped writing those chunks and index due to deduplication, we would end up missing those logs from query responses, since the only ingester which had the index went down.
This problem would occur even during rollouts, which are quite common.
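For reference, when not using `boltdb-shipper`, the WriteDedupe cache that drives this deduplication is configured roughly as in the sketch below; the memcached host and service names are assumptions:

```yaml
chunk_store_config:
  write_dedupe_cache_config:
    memcached_client:
      host: memcached-index-writes.loki.svc.cluster.local  # assumed service address
      service: memcached-client                            # assumed port name
```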
@@ -57,7 +57,7 @@ You may use any substitutable services, such as those that implement the S3 API l
### Cassandra
-Cassandra can also be utilized for the index store and asides from the experimental [boltdb-shipper](./storage/boltdb-shipper), it's the only non-cloud offering that can be used for the index that's horizontally scalable and has configurable replication. It's a good candidate when you already run Cassandra, are running on-prem, or do not wish to use a managed cloud offering.
+Cassandra can also be utilized for the index store and aside from the experimental [boltdb-shipper](../operations/storage/boltdb-shipper/), it's the only non-cloud offering that can be used for the index that's horizontally scalable and has configurable replication. It's a good candidate when you already run Cassandra, are running on-prem, or do not wish to use a managed cloud offering.
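A hedged `storage_config` sketch for a Cassandra-backed index; the addresses, keyspace, and consistency level are assumptions to adapt:

```yaml
storage_config:
  cassandra:
    addresses: cassandra-1.example.com,cassandra-2.example.com
    keyspace: loki
    consistency: "ONE"
    replication_factor: 3
```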
### BigTable
@@ -69,11 +69,11 @@ DynamoDB is a cloud database offered by AWS. It is a good candidate for a manage
#### Rate Limiting
-DynamoDB is susceptible to rate limiting, particularly due to overconsuming what is called [provisioned capacity](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html). This can be controlled via the [provisioning](#Provisioning) configs in the table manager.
+DynamoDB is susceptible to rate limiting, particularly due to overconsuming what is called [provisioned capacity](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html). This can be controlled via the [provisioning](#provisioning) configs in the table manager.
### BoltDB
-BoltDB is an embedded database on disk. It is not replicated and thus cannot be used for high availability or clustered Loki deployments, but is commonly paired with a `filesystem` chunk store for proof of concept deployments, trying out Loki, and development. There is also an experimental mode, the [boltdb-shipper](./operations/storage/boltdb-shipper), which aims to support clustered deployments using `boltdb` as an index.
+BoltDB is an embedded database on disk. It is not replicated and thus cannot be used for high availability or clustered Loki deployments, but is commonly paired with a `filesystem` chunk store for proof of concept deployments, trying out Loki, and development. There is also an experimental mode, the [boltdb-shipper](../operations/storage/boltdb-shipper/), which aims to support clustered deployments using `boltdb` as an index.
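For a single-node proof of concept, that pairing is typically configured like the sketch below; the directories are illustrative:

```yaml
storage_config:
  boltdb:
    directory: /tmp/loki/index   # BoltDB index files
  filesystem:
    directory: /tmp/loki/chunks  # chunk objects on local disk
```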
## Period Configs
@@ -113,7 +113,7 @@ table_manager:
retention_period: 2520h
```
-For more information, see the table manager [doc](./operations/storage/table-manager).
+For more information, see the table manager [doc](../operations/storage/table-manager/).
### Provisioning
@@ -132,7 +132,7 @@ table_manager:
inactive_read_throughput: <int> | Default = 300
```
-Note, there are a few other DynamoDB provisioning options including DynamoDB autoscaling and on-demand capacity. See the [docs](./configuration/README#provision_config) for more information.
+Note that there are a few other DynamoDB provisioning options, including DynamoDB autoscaling and on-demand capacity. See the [docs](../configuration/#provision_config) for more information.
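For example, a hedged provisioning sketch using the fields above; the throughput numbers are illustrative and `index_tables_provisioning` is the assumed location of this block:

```yaml
table_manager:
  index_tables_provisioning:
    provisioned_write_throughput: 1000
    provisioned_read_throughput: 300
    inactive_write_throughput: 1
    inactive_read_throughput: 300
```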
## Upgrading Schemas
@@ -168,7 +168,7 @@ With the exception of the `filesystem` chunk store, Loki will not delete old chu
We're interested in adding targeted deletion in future Loki releases (think tenant or stream level granularity) and may include other strategies as well.
-For more information, see the configuration [docs](./operations/storage/retention).
+For more information, see the configuration [docs](../operations/storage/retention/).
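As a sketch consistent with the `retention_period` example above, table-manager-based retention is enabled with two settings:

```yaml
table_manager:
  retention_deletes_enabled: true  # actually delete expired tables
  retention_period: 2520h          # must line up with your index table period
```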