docs: proper fix for #12510 (#12516)

J Stickler 1 year ago committed by GitHub
parent 0925f3a1a6
commit 1c5a736641
  1. docs/sources/community/design-documents/2020-02-Promtail-Push-API.md (2 changed lines)
  2. docs/sources/configure/bp-configure.md (2 changed lines)
  3. docs/sources/configure/storage.md (10 changed lines)
  4. docs/sources/get-started/_index.md (16 changed lines)
  5. docs/sources/get-started/labels/structured-metadata.md (12 changed lines)
  6. docs/sources/get-started/quick-start.md (12 changed lines)
  7. docs/sources/operations/query-acceleration-blooms.md (14 changed lines)
  8. docs/sources/operations/request-validation-rate-limits.md (26 changed lines)
  9. docs/sources/operations/storage/_index.md (18 changed lines)
  10. docs/sources/operations/storage/retention.md (20 changed lines)
  11. docs/sources/operations/troubleshooting.md (2 changed lines)
  12. docs/sources/operations/upgrade.md (4 changed lines)
  13. docs/sources/query/logcli.md (2 changed lines)
  14. docs/sources/release-notes/v2-3.md (4 changed lines)
  15. docs/sources/release-notes/v2-5.md (2 changed lines)
  16. docs/sources/release-notes/v2-9.md (4 changed lines)
  17. docs/sources/send-data/fluentbit/_index.md (4 changed lines)
  18. docs/sources/send-data/lambda-promtail/_index.md (2 changed lines)
  19. docs/sources/send-data/otel/_index.md (2 changed lines)
  20. docs/sources/send-data/promtail/cloud/ecs/_index.md (6 changed lines)
  21. docs/sources/send-data/promtail/cloud/eks/_index.md (2 changed lines)
  22. docs/sources/setup/install/helm/configure-storage/_index.md (2 changed lines)
  23. docs/sources/setup/install/helm/install-scalable/_index.md (2 changed lines)
  24. docs/sources/setup/install/tanka.md (2 changed lines)
  25. docs/sources/setup/migrate/migrate-to-tsdb/_index.md (4 changed lines)
  26. docs/sources/setup/upgrade/_index.md (40 changed lines)

@@ -66,7 +66,7 @@ rejected pushes. Users are recommended to do one of the following:
## Implementation
As discussed in this document, this feature will be implemented by copying the
existing [Loki Push API](/docs/loki/latest/api/#post-lokiapiv1push)
existing [Loki Push API](/docs/loki/<LOKI_VERSION>/api/#post-lokiapiv1push)
and exposing it via Promtail.
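As an illustrative sketch only (port numbers and label names are assumptions, not part of this design document), the resulting Promtail-side configuration could look similar to Promtail's `loki_push_api` scrape config:
```yaml
scrape_configs:
  - job_name: push
    loki_push_api:
      server:
        http_listen_port: 3500   # example port Promtail listens on for pushed logs
        grpc_listen_port: 3600
      labels:
        pushserver: promtail-push  # example static label added to every pushed stream
```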
## Considered Alternatives

@@ -46,7 +46,7 @@ What can we do about this? What if this was because the sources of these logs we
{job="syslog", instance="host2"} 00:00:02 i'm a syslog! <- Accepted, still in order for stream 2
```
But what if the application itself generated logs that were out of order? Well, I'm afraid this is a problem. If you are extracting the timestamp from the log line with something like [the Promtail pipeline stage](/docs/loki/latest/send-data/promtail/stages/timestamp/), you could instead _not_ do this and let Promtail assign a timestamp to the log lines. Or you can hopefully fix it in the application itself.
But what if the application itself generated logs that were out of order? Well, I'm afraid this is a problem. If you are extracting the timestamp from the log line with something like [the Promtail pipeline stage](/docs/loki/<LOKI_VERSION>/send-data/promtail/stages/timestamp/), you could instead _not_ do this and let Promtail assign a timestamp to the log lines. Or you can hopefully fix it in the application itself.
It's also worth noting that the batching nature of the Loki push API can lead to some instances of out of order errors being received which are really false positives. (Perhaps a batch partially succeeded and was present; or anything that previously succeeded would return an out of order entry; or anything new would be accepted.)
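For illustration, a hedged Promtail scrape config sketch (the path and regex are assumptions): leaving the `timestamp` stage out (or commented) means Promtail assigns its own timestamps instead of using the possibly out-of-order timestamps embedded in the log lines.
```yaml
scrape_configs:
  - job_name: app
    static_configs:
      - targets: [localhost]
        labels:
          job: app
          __path__: /var/log/app/*.log   # assumed log path
    pipeline_stages:
      - regex:
          expression: '^(?P<ts>\S+) (?P<msg>.*)$'   # extract a candidate timestamp
      # Keeping the timestamp stage commented out lets Promtail assign the scrape time,
      # which avoids out-of-order rejections caused by the application's own timestamps:
      # - timestamp:
      #     source: ts
      #     format: RFC3339
```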

@@ -12,16 +12,16 @@ even locally on the filesystem. A small index and highly compressed chunks
simplifies the operation and significantly lowers the cost of Loki.
Loki 2.8 introduced TSDB as a new mode for the Single Store and is now the recommended way to persist data in Loki.
More detailed information about TSDB can be found under the [manage section](https://grafana.com/docs/loki/latest/operations/storage/tsdb/).
More detailed information about TSDB can be found under the [manage section](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/).
Loki 2.0 introduced an index mechanism named 'boltdb-shipper' and is what we now call [Single Store](#single-store).
This type only requires one store, the object store, for both the index and chunks.
More detailed information about 'boltdb-shipper' can be found under the [manage section](https://grafana.com/docs/loki/latest/operations/storage/boltdb-shipper/).
More detailed information about 'boltdb-shipper' can be found under the [manage section](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/).
Prior to Loki 2.0, chunks and index data were stored in separate backends:
object storage (or filesystem) for chunk data and NoSQL/Key-Value databases for index data. These "multistore" backends have been deprecated, as noted below.
You can find more detailed information about all of the storage options in the [manage section](https://grafana.com/docs/loki/latest/operations/storage/).
You can find more detailed information about all of the storage options in the [manage section](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/).
## Single Store
@@ -29,7 +29,7 @@ Single Store refers to using object storage as the storage medium for both Loki'
### TSDB (recommended)
Starting in Loki 2.8, the [TSDB index store](https://grafana.com/docs/loki/latest/operations/storage/tsdb/) improves query performance, reduces TCO and has the same feature parity as "boltdb-shipper".
Starting in Loki 2.8, the [TSDB index store](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/) improves query performance, reduces TCO and has the same feature parity as "boltdb-shipper".
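As a rough sketch (the date, object store, and paths below are placeholders), a TSDB single-store period plus shipper configuration might look like:
```yaml
schema_config:
  configs:
    - from: "2024-04-01"        # placeholder start date for the TSDB period
      store: tsdb
      object_store: s3
      schema: v13
      index:
        prefix: index_
        period: 24h
storage_config:
  tsdb_shipper:
    active_index_directory: /loki/tsdb-index   # local staging directory (placeholder)
    cache_location: /loki/tsdb-cache
```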
### BoltDB (deprecated)
@@ -91,7 +91,7 @@ This storage type for chunks is deprecated and may be removed in future major ve
### Cassandra (deprecated)
Cassandra can also be utilized for the index store and aside from the [boltdb-shipper](https://grafana.com/docs/loki/latest/operations/storage/boltdb-shipper/), it's the only non-cloud offering that can be used for the index that's horizontally scalable and has configurable replication. It's a good candidate when you already run Cassandra, are running on-prem, or do not wish to use a managed cloud offering.
Cassandra can also be utilized for the index store and aside from the [boltdb-shipper](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/), it's the only non-cloud offering that can be used for the index that's horizontally scalable and has configurable replication. It's a good candidate when you already run Cassandra, are running on-prem, or do not wish to use a managed cloud offering.
{{< collapse title="Title of hidden content" >}}
This storage type for indexes is deprecated and may be removed in future major versions of Loki.

@@ -17,26 +17,26 @@ To collect logs and view your log data generally involves the following steps:
![Loki implementation steps](loki-install.png)
1. Install Loki on Kubernetes in simple scalable mode, using the recommended [Helm chart](https://grafana.com/docs/loki/latest/setup/install/helm/install-scalable/). Supply the Helm chart with your object storage authentication details.
- [Storage options](https://grafana.com/docs/loki/latest/operations/storage/)
- [Configuration reference](https://grafana.com/docs/loki/latest/configure/)
- There are [examples](https://grafana.com/docs/loki/latest/configure/examples/) for specific Object Storage providers that you can modify.
1. Install Loki on Kubernetes in simple scalable mode, using the recommended [Helm chart](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/install-scalable/). Supply the Helm chart with your object storage authentication details.
- [Storage options](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/)
- [Configuration reference](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/)
- There are [examples](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/examples/) for specific Object Storage providers that you can modify.
1. Deploy the [Grafana Agent](https://grafana.com/docs/agent/latest/flow/) to collect logs from your applications.
1. On Kubernetes, deploy the Grafana Agent using the Helm chart. Configure Grafana Agent to scrape logs from your Kubernetes cluster, and add your Loki endpoint details. See the following section for an example Grafana Agent Flow configuration file.
1. Add [labels](https://grafana.com/docs/loki/latest/get-started/labels/) to your logs following our [best practices](https://grafana.com/docs/loki/latest/get-started/labels/bp-labels/). Most Loki users start by adding labels which describe where the logs are coming from (region, cluster, environment, etc.).
1. Add [labels](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/) to your logs following our [best practices](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/bp-labels/). Most Loki users start by adding labels which describe where the logs are coming from (region, cluster, environment, etc.).
1. Deploy [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/) or [Grafana Cloud](https://grafana.com/docs/grafana-cloud/quickstart/) and configure a [Loki datasource](https://grafana.com/docs/grafana/latest/datasources/loki/configure-loki-data-source/).
1. Select the [Explore feature](https://grafana.com/docs/grafana/latest/explore/) in the Grafana main menu. To [view logs in Explore](https://grafana.com/docs/grafana/latest/explore/logs-integration/):
1. Pick a time range.
1. Choose the Loki datasource.
1. Use [LogQL](https://grafana.com/docs/loki/latest/query/) in the [query editor](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/), use the Builder view to explore your labels, or select from sample pre-configured queries using the **Kick start your query** button.
1. Use [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/) in the [query editor](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/), use the Builder view to explore your labels, or select from sample pre-configured queries using the **Kick start your query** button.
**Next steps:** Learn more about Loki’s query language, [LogQL](https://grafana.com/docs/loki/latest/query/).
**Next steps:** Learn more about Loki’s query language, [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/).
## Example Grafana Agent configuration file to ship Kubernetes Pod logs to Loki
To deploy Grafana Agent to collect Pod logs from your Kubernetes cluster and ship them to Loki, you can use the Grafana Agent Helm chart and a `values.yaml` file.
1. Install Loki with the [Helm chart](https://grafana.com/docs/loki/latest/setup/install/helm/install-scalable/).
1. Install Loki with the [Helm chart](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/install-scalable/).
1. Deploy the Grafana Agent, using the [Grafana Agent Helm chart](https://grafana.com/docs/agent/latest/flow/setup/install/kubernetes/) and this example `values.yaml` file updating the value for `forward_to = [loki.write.endpoint.receiver]`:
```yaml

@@ -6,7 +6,7 @@ description: Describes how to enable structure metadata for logs and how to quer
# What is structured metadata
{{% admonition type="warning" %}}
Structured metadata was added to chunk format V4 which is used if the schema version is greater or equal to `13`. See [Schema Config](https://grafana.com/docs/loki/latest/configure/storage/#schema-config) for more details about schema versions.
Structured metadata was added to chunk format V4 which is used if the schema version is greater or equal to `13`. See [Schema Config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#schema-config) for more details about schema versions.
{{% /admonition %}}
Selecting proper, low cardinality labels is critical to operating and querying Loki effectively. Some metadata, especially infrastructure related metadata, can be difficult to embed in log lines, and is too high cardinality to effectively store as indexed labels (and therefore reducing performance of the index).
@@ -29,12 +29,12 @@ It is an antipattern to extract information that already exists in your log line
## Attaching structured metadata to log lines
You have the option to attach structured metadata to log lines in the push payload along with each log line and the timestamp.
For more information on how to push logs to Loki via the HTTP endpoint, refer to the [HTTP API documentation](https://grafana.com/docs/loki/latest/reference/api/#ingest-logs).
For more information on how to push logs to Loki via the HTTP endpoint, refer to the [HTTP API documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/reference/api/#ingest-logs).
Alternatively, you can use the Grafana Agent or Promtail to extract and attach structured metadata to your log lines.
See the [Promtail: Structured metadata stage](https://grafana.com/docs/loki/latest/send-data/promtail/stages/structured_metadata/) for more information.
See the [Promtail: Structured metadata stage](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/stages/structured_metadata/) for more information.
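As an illustrative sketch only (the key names `trace_id` and `pod` are assumptions), a Promtail pipeline could extract values from the log line and attach them as structured metadata rather than as indexed labels:
```yaml
pipeline_stages:
  - logfmt:
      mapping:
        trace_id:      # extract trace_id from the logfmt-formatted line
        pod:
  - structured_metadata:
      trace_id:        # attach the extracted values as structured metadata
      pod:
```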
With Loki version 1.2.0, support for structured metadata has been added to the Logstash output plugin. For more information, see [logstash](https://grafana.com/docs/loki/latest/send-data/logstash/).
With Loki version 1.2.0, support for structured metadata has been added to the Logstash output plugin. For more information, see [logstash](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/logstash/).
{{% admonition type="warning" %}}
There are defaults for how much structured metadata can be attached per log line.
@@ -52,7 +52,7 @@ There are defaults for how much structured metadata can be attached per log line
## Querying structured metadata
Structured metadata is extracted automatically for each returned log line and added to the labels returned for the query.
You can use labels of structured metadata to filter log lines using a [label filter expression](https://grafana.com/docs/loki/latest/query/log_queries/#label-filter-expression).
You can use labels of structured metadata to filter log lines using a [label filter expression](https://grafana.com/docs/loki/<LOKI_VERSION>/query/log_queries/#label-filter-expression).
For example, if you have a label `pod` attached to some of your log lines as structured metadata, you can filter log lines using:
@@ -66,7 +66,7 @@ Of course, you can filter by multiple labels of structured metadata at the same
{job="example"} | pod="myservice-abc1234-56789" | trace_id="0242ac120002"
```
Note that since structured metadata is extracted automatically to the results labels, some metric queries might return an error like `maximum of series (50000) reached for a single query`. You can use the [Keep](https://grafana.com/docs/loki/latest/query/log_queries/#keep-labels-expression) and [Drop](https://grafana.com/docs/loki/latest/query/log_queries/#drop-labels-expression) stages to filter out labels that you don't need.
Note that since structured metadata is extracted automatically to the results labels, some metric queries might return an error like `maximum of series (50000) reached for a single query`. You can use the [Keep](https://grafana.com/docs/loki/<LOKI_VERSION>/query/log_queries/#keep-labels-expression) and [Drop](https://grafana.com/docs/loki/<LOKI_VERSION>/query/log_queries/#drop-labels-expression) stages to filter out labels that you don't need.
For example:
```logql

@@ -7,7 +7,7 @@ description: How to create and use a simple local Loki cluster for testing and e
# Quickstart to run Loki locally
If you want to experiment with Loki, you can run Loki locally using the Docker Compose file that ships with Loki. It runs Loki in a [monolithic deployment](https://grafana.com/docs/loki/latest/get-started/deployment-modes/#monolithic-mode) mode and includes a sample application to generate logs.
If you want to experiment with Loki, you can run Loki locally using the Docker Compose file that ships with Loki. It runs Loki in a [monolithic deployment](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#monolithic-mode) mode and includes a sample application to generate logs.
The Docker Compose configuration instantiates the following components, each in its own container:
@@ -76,7 +76,7 @@ This quickstart assumes you are running Linux.
## Viewing your logs in Grafana
Once you have collected logs, you will want to view them. You can view your logs using the command line interface, [LogCLI](/docs/loki/latest/query/logcli/), but the easiest way to view your logs is with Grafana.
Once you have collected logs, you will want to view them. You can view your logs using the command line interface, [LogCLI](/docs/loki/<LOKI_VERSION>/query/logcli/), but the easiest way to view your logs is with Grafana.
1. Use Grafana to query the Loki data source.
@@ -86,7 +86,7 @@ Once you have collected logs, you will want to view them. You can view your log
1. From the Grafana main menu, click the **Explore** icon (1) to launch the Explore tab. To learn more about Explore, refer the [Explore](https://grafana.com/docs/grafana/latest/explore/) documentation.
1. From the menu in the dashboard header, select the Loki data source (2). This displays the Loki query editor. In the query editor you use the Loki query language, [LogQL](https://grafana.com/docs/loki/latest/query/), to query your logs.
1. From the menu in the dashboard header, select the Loki data source (2). This displays the Loki query editor. In the query editor you use the Loki query language, [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/), to query your logs.
To learn more about the query editor, refer to the [query editor documentation](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/).
1. The Loki query editor has two modes (3):
@@ -106,7 +106,7 @@ Once you have collected logs, you will want to view them. You can view your log
{container="evaluate-loki-flog-1"}
```
In Loki, this is called a log stream. Loki uses [labels](https://grafana.com/docs/loki/latest/get-started/labels/) as metadata to describe log streams. Loki queries always start with a label selector. In the query above, the label selector is `container`.
In Loki, this is called a log stream. Loki uses [labels](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/) as metadata to describe log streams. Loki queries always start with a label selector. In the query above, the label selector is `container`.
1. To view all the log lines which have the container label "grafana":
@@ -140,7 +140,7 @@ Once you have collected logs, you will want to view them. You can view your log
1. Select the first choice, **Parse log lines with logfmt parser**, by clicking **Use this query**.
1. On the Explore tab, click **Label browser**, in the dialog select a container and click **Show logs**.
For a thorough introduction to LogQL, refer to the [LogQL reference](https://grafana.com/docs/loki/latest/query/).
For a thorough introduction to LogQL, refer to the [LogQL reference](https://grafana.com/docs/loki/<LOKI_VERSION>/query/).
## Sample queries (code view)
@@ -178,7 +178,7 @@ To see every log line that does not contain the value 401:
{container="evaluate-loki-flog-1"} != "401"
```
For more examples, refer to the [query documentation](https://grafana.com/docs/loki/latest/query/query_examples/).
For more examples, refer to the [query documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/query/query_examples/).
## Complete metrics, logs, traces, and profiling example

@@ -196,7 +196,7 @@ Loki will check blooms for any log filtering expression within a query that sati
whereas `|~ "f.*oo"` would not be simplifiable.
- The filtering expression is a match (`|=`) or regex match (`|~`) filter. We don’t use blooms for not equal (`!=`) or not regex (`!~`) expressions.
- For example, `|= "level=error"` would use blooms but `!= "level=error"` would not.
- The filtering expression is placed before a [line format expression](https://grafana.com/docs/loki/latest/query/log_queries/#line-format-expression).
- The filtering expression is placed before a [line format expression](https://grafana.com/docs/loki/<LOKI_VERSION>/query/log_queries/#line-format-expression).
- For example, with `|= "level=error" | logfmt | line_format "ERROR {{.err}}" |= "traceID=3ksn8d4jj3"`,
the first filter (`|= "level=error"`) will benefit from blooms but the second one (`|= "traceID=3ksn8d4jj3"`) will not.
@@ -213,9 +213,9 @@ Query acceleration introduces a new sharding strategy: `bounded`, which uses blo
processed right away during the planning phase in the query frontend,
as well as evenly distributes the amount of chunks each sharded query will need to process.
[ring]: https://grafana.com/docs/loki/latest/get-started/hash-rings/
[tenant-limits]: https://grafana.com/docs/loki/latest/configure/#limits_config
[gateway-cfg]: https://grafana.com/docs/loki/latest/configure/#bloom_gateway
[compactor-cfg]: https://grafana.com/docs/loki/latest/configure/#bloom_compactor
[microservices]: https://grafana.com/docs/loki/latest/get-started/deployment-modes/#microservices-mode
[ssd]: https://grafana.com/docs/loki/latest/get-started/deployment-modes/#simple-scalable
[ring]: https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/hash-rings/
[tenant-limits]: https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config
[gateway-cfg]: https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#bloom_gateway
[compactor-cfg]: https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#bloom_compactor
[microservices]: https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#microservices-mode
[ssd]: https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#simple-scalable
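A hedged configuration sketch for enabling the bloom components discussed above; the option names are taken from the configuration sections linked in these references and should be verified against your Loki version:
```yaml
bloom_compactor:
  enabled: true          # build bloom filters from chunks
bloom_gateway:
  enabled: true          # serve bloom filter lookups at query time
limits_config:
  bloom_gateway_enable_filtering: true       # per-tenant switch (assumed option name)
  bloom_compactor_enable_compaction: true    # per-tenant switch (assumed option name)
```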

@@ -28,11 +28,11 @@ Rate-limits are enforced when Loki cannot handle more requests from a tenant.
This rate-limit is enforced when a tenant has exceeded their configured log ingestion rate-limit.
One solution if you're seeing samples dropped due to `rate_limited` is simply to increase the rate limits on your Loki cluster. These limits can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. The config options to use are `ingestion_rate_mb` and `ingestion_burst_size_mb`.
One solution if you're seeing samples dropped due to `rate_limited` is simply to increase the rate limits on your Loki cluster. These limits can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. The config options to use are `ingestion_rate_mb` and `ingestion_burst_size_mb`.
Note that you'll want to make sure your Loki cluster has sufficient resources provisioned to be able to accommodate these higher limits. Otherwise your cluster may experience performance degradation as it tries to handle this higher volume of log lines to ingest.
Another option to address samples being dropped due to `rate_limits` is simply to decrease the rate of log lines being sent to your Loki cluster. Consider collecting logs from fewer targets or setting up `drop` stages in Promtail to filter out certain log lines. Promtail's [limits configuration](/docs/loki/latest/send-data/promtail/configuration/#limits_config) also gives you the ability to control the volume of logs Promtail remote writes to your Loki cluster.
Another option to address samples being dropped due to `rate_limits` is simply to decrease the rate of log lines being sent to your Loki cluster. Consider collecting logs from fewer targets or setting up `drop` stages in Promtail to filter out certain log lines. Promtail's [limits configuration](/docs/loki/<LOKI_VERSION>/send-data/promtail/configuration/#limits_config) also gives you the ability to control the volume of logs Promtail remote writes to your Loki cluster.
| Property | Value |
@@ -50,9 +50,9 @@ This limit is enforced when a single stream reaches its rate-limit.
Each stream has a rate-limit applied to it to prevent individual streams from overwhelming the set of ingesters it is distributed to (the size of that set is equal to the `replication_factor` value).
This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. The config options to adjust are `per_stream_rate_limit` and `per_stream_rate_limit_burst`.
This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. The config options to adjust are `per_stream_rate_limit` and `per_stream_rate_limit_burst`.
Another option you could consider to decrease the rate of samples dropped due to `per_stream_rate_limit` is to split the stream that is getting rate limited into several smaller streams. A third option is to use Promtail's [limit stage](/docs/loki/latest/send-data/promtail/stages/limit/#limit-stage) to limit the rate of samples sent to the stream hitting the `per_stream_rate_limit`.
Another option you could consider to decrease the rate of samples dropped due to `per_stream_rate_limit` is to split the stream that is getting rate limited into several smaller streams. A third option is to use Promtail's [limit stage](/docs/loki/<LOKI_VERSION>/send-data/promtail/stages/limit/#limit-stage) to limit the rate of samples sent to the stream hitting the `per_stream_rate_limit`.
We typically recommend setting `per_stream_rate_limit` no higher than 5MB, and `per_stream_rate_limit_burst` no higher than 20MB.
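For example, a hedged `limits_config` sketch combining the options mentioned above (the ingestion values are illustrative, not recommendations beyond those stated):
```yaml
limits_config:
  ingestion_rate_mb: 10              # per-tenant ingestion rate in MB/s (example value)
  ingestion_burst_size_mb: 20
  per_stream_rate_limit: 5MB         # per-stream limit, at or below the suggested 5MB
  per_stream_rate_limit_burst: 20MB
```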
@@ -71,7 +71,7 @@ This limit is enforced when a tenant reaches their maximum number of active stre
Active streams are held in memory buffers in the ingesters, and if this value becomes sufficiently large then it will cause the ingesters to run out of memory.
This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. To increase the allowable active streams, adjust `max_global_streams_per_user`. Alternatively, the number of active streams can be reduced by removing extraneous labels or removing excessive unique label values.
This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. To increase the allowable active streams, adjust `max_global_streams_per_user`. Alternatively, the number of active streams can be reduced by removing extraneous labels or removing excessive unique label values.
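The per-tenant runtime overrides file referenced throughout this page follows this general shape (the tenant ID and values are placeholders):
```yaml
overrides:
  tenant-a:                            # placeholder tenant ID
    max_global_streams_per_user: 10000
    ingestion_rate_mb: 20
```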
| Property | Value |
|-------------------------|-------------------------|
@@ -90,7 +90,7 @@ Validation errors occur when a request violates a validation rule defined by Lok
This error occurs when a log line exceeds the maximum allowable length in bytes. The HTTP response will include the stream to which the offending log line belongs as well as its size in bytes.
This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. To increase the maximum line size, adjust `max_line_size`. We recommend that you do not increase this value above 256kb for performance reasons. Alternatively, Loki can be configured to ingest truncated versions of log lines over the length limit by using the `max_line_size_truncate` option.
This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. To increase the maximum line size, adjust `max_line_size`. We recommend that you do not increase this value above 256kb for performance reasons. Alternatively, Loki can be configured to ingest truncated versions of log lines over the length limit by using the `max_line_size_truncate` option.
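A short sketch of the two options mentioned above (values are illustrative):
```yaml
limits_config:
  max_line_size: 256KB            # reject lines larger than this
  max_line_size_truncate: true    # or truncate over-length lines instead of rejecting them
```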
| Property | Value |
|-------------------------|------------------|
@@ -129,9 +129,9 @@ This validation error is returned when a stream is submitted without any labels.
The `too_far_behind` and `out_of_order` reasons are identical. Loki clusters with `unordered_writes=true` (the default value as of Loki v2.4) use `reason=too_far_behind`. Loki clusters with `unordered_writes=false` use `reason=out_of_order`.
This validation error is returned when a stream is submitted out of order. More details can be found [here](/docs/loki/latest/configuration/#accept-out-of-order-writes) about Loki's ordering constraints.
This validation error is returned when a stream is submitted out of order. More details can be found [here](/docs/loki/<LOKI_VERSION>/configuration/#accept-out-of-order-writes) about Loki's ordering constraints.
The `unordered_writes` config value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file, whereas `max_chunk_age` is a global configuration.
The `unordered_writes` config value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file, whereas `max_chunk_age` is a global configuration.
This problem can be solved by ensuring that log delivery is configured correctly, or by increasing the `max_chunk_age` value.
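A hedged sketch of the relevant settings (the `max_chunk_age` value is an example only; the surrounding text advises leaving its default alone):
```yaml
limits_config:
  unordered_writes: true    # per-tenant; the default since Loki 2.4
ingester:
  max_chunk_age: 2h         # global; example value only
```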
@@ -148,7 +148,7 @@ It is recommended to resist modifying the default value of `max_chunk_age` as th
If the `reject_old_samples` config option is set to `true` (it is by default), then samples will be rejected with `reason=greater_than_max_sample_age` if they are older than the `reject_old_samples_max_age` value. You should not see samples rejected for `reason=greater_than_max_sample_age` if `reject_old_samples=false`.
This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `reject_old_samples_max_age` value, or investigating why log delivery is delayed for this particular stream. The stream in question will be returned in the body of the HTTP response.
This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `reject_old_samples_max_age` value, or investigating why log delivery is delayed for this particular stream. The stream in question will be returned in the body of the HTTP response.
| Property | Value |
|-------------------------|-------------------|
@@ -163,7 +163,7 @@ This value can be modified globally in the [`limits_config`](/docs/loki/latest/c
If a sample's timestamp is greater than the current timestamp, Loki allows for a certain grace period during which samples will be accepted. If the grace period is exceeded, the error will occur.
This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `creation_grace_period` value, or investigating why this particular stream has a timestamp too far into the future. The stream in question will be returned in the body of the HTTP response.
This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `creation_grace_period` value, or investigating why this particular stream has a timestamp too far into the future. The stream in question will be returned in the body of the HTTP response.
| Property | Value |
|-------------------------|-------------------|
@@ -178,7 +178,7 @@ This value can be modified globally in the [`limits_config`](/docs/loki/latest/c
If a sample is submitted with more labels than Loki has been configured to allow, it will be rejected with the `max_label_names_per_series` reason. Note that 'series' is the same thing as a 'stream' in Loki - the 'series' term is a legacy name.
This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `max_label_names_per_series` value. The stream to which the offending sample (i.e. the one with too many label names) belongs will be returned in the body of the HTTP response.
This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `max_label_names_per_series` value. The stream to which the offending sample (i.e. the one with too many label names) belongs will be returned in the body of the HTTP response.
| Property | Value |
|-------------------------|-------------------|
@@ -193,7 +193,7 @@ This value can be modified globally in the [`limits_config`](/docs/loki/latest/c
If a sample is sent with a label name that has a length in bytes greater than Loki has been configured to allow, it will be rejected with the `label_name_too_long` reason.
This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `max_label_name_length` value, though we do not recommend raising it significantly above the default value of `1024` for performance reasons. The offending stream will be returned in the body of the HTTP response.
This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `max_label_name_length` value, though we do not recommend raising it significantly above the default value of `1024` for performance reasons. The offending stream will be returned in the body of the HTTP response.
| Property | Value |
|-------------------------|-------------------|
@@ -208,7 +208,7 @@ This value can be modified globally in the [`limits_config`](/docs/loki/latest/c
If a sample has a label value with a length in bytes greater than Loki has been configured to allow, it will be rejected for the `label_value_too_long` reason.
This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `max_label_value_length` value. The offending stream will be returned in the body of the HTTP response.
This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `max_label_value_length` value. The offending stream will be returned in the body of the HTTP response.
| Property | Value |
|-------------------------|-------------------|

@@ -6,7 +6,7 @@ weight:
---
# Manage storage
You can read a high level overview of Loki storage [here](https://grafana.com/docs/loki/latest/configure/storage/)
You can read a high level overview of Loki storage [here](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/)
Grafana Loki needs to store two different types of data: **chunks** and **indexes**.
@@ -18,21 +18,21 @@ format](#chunk-format) for how chunks are stored internally.
The **index** stores each stream's label set and links them to the individual
chunks.
Refer to Loki's [configuration](https://grafana.com/docs/loki/latest/configure/) for details on
Refer to Loki's [configuration](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/) for details on
how to configure the storage and the index.
For more information:
- [Table Manager](https://grafana.com/docs/loki/latest/operations/storage/table-manager/)
- [Retention](https://grafana.com/docs/loki/latest/operations/storage/retention/)
- [Logs Deletion](https://grafana.com/docs/loki/latest/operations/storage/logs-deletion/)
- [Table Manager](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/table-manager/)
- [Retention](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/retention/)
- [Logs Deletion](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/logs-deletion/)
## Supported Stores
The following are supported for the index:
- [TSDB](https://grafana.com/docs/loki/latest/operations/storage/tsdb/) index store which stores TSDB index files in the object store. This is the recommended index store for Loki 2.8 and newer.
- [Single Store (boltdb-shipper)](https://grafana.com/docs/loki/latest/operations/storage/boltdb-shipper/) index store which stores boltdb index files in the object store.
- [TSDB](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/) index store which stores TSDB index files in the object store. This is the recommended index store for Loki 2.8 and newer.
- [Single Store (boltdb-shipper)](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/) index store which stores boltdb index files in the object store.
- [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
- [Google Bigtable](https://cloud.google.com/bigtable)
- [Apache Cassandra](https://cassandra.apache.org)
@@ -76,7 +76,7 @@ When using S3 as object storage, the following permissions are needed:
Resources: `arn:aws:s3:::<bucket_name>`, `arn:aws:s3:::<bucket_name>/*`
See the [AWS deployment section](https://grafana.com/docs/loki/latest/configure/storage/#aws-deployment-s3-single-store) on the storage page for a detailed setup guide.
See the [AWS deployment section](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#aws-deployment-s3-single-store) on the storage page for a detailed setup guide.
### DynamoDB
@@ -134,7 +134,7 @@ Resources: `arn:aws:iam::<aws_account_id>:role/<role_name>`
When using IBM Cloud Object Storage (COS) as object storage, IAM `Writer` role is needed.
See the [IBM Cloud Object Storage section](https://grafana.com/docs/loki/latest/configure/storage/#ibm-deployment-cos-single-store) on the storage page for a detailed setup guide.
See the [IBM Cloud Object Storage section](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#ibm-deployment-cos-single-store) on the storage page for a detailed setup guide.
## Chunk Format

@@ -16,7 +16,7 @@ If you have a lifecycle policy configured on the object store, please ensure tha
Granular retention policies to apply retention at per tenant or per stream level are also supported by the Compactor.
{{% admonition type="note" %}}
The Compactor does not support retention on [legacy index types](https://grafana.com/docs/loki/latest/configure/storage/#index-storage). Please use the [Table Manager](https://grafana.com/docs/loki/latest/operations/storage/table-manager/) when using legacy index types.
The Compactor does not support retention on [legacy index types](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#index-storage). Please use the [Table Manager](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/table-manager/) when using legacy index types.
Both the Table manager and legacy index types are deprecated and may be removed in future major versions of Loki.
{{% /admonition %}}
@@ -100,7 +100,7 @@ Retention is only available if the index period is 24h. Single store TSDB and si
#### Configuring the retention period
Retention period is configured within the [`limits_config`](https://grafana.com/docs/loki/latest/configure/#limits_config) configuration section.
Retention period is configured within the [`limits_config`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config) configuration section.
There are two ways of setting retention policies:
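Both live under `limits_config`, roughly like this hedged sketch (the selector, priority, and periods are placeholders):
```yaml
limits_config:
  retention_period: 744h              # global default retention (~31 days)
  retention_stream:
    - selector: '{namespace="dev"}'   # label matchers only; arbitrary LogQL is not supported
      priority: 1
      period: 24h
```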
@@ -129,7 +129,7 @@ limits_config:
You can only use label matchers in the `selector` field of a `retention_stream` definition. Arbitrary LogQL expressions are not supported.
{{% /admonition %}}
Per tenant retention can be defined by configuring [runtime overrides](https://grafana.com/docs/loki/latest/configure/#runtime-configuration-file). For example:
Per tenant retention can be defined by configuring [runtime overrides](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#runtime-configuration-file). For example:
```yaml
overrides:
@@ -181,13 +181,13 @@ The example configurations defined above will result in the following retention
## Table Manager (deprecated)
Retention through the [Table Manager](https://grafana.com/docs/loki/latest/operations/storage/table-manager/) is
Retention through the [Table Manager](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/table-manager/) is
achieved by relying on the object store TTL feature, and will work for both
[boltdb-shipper](https://grafana.com/docs/loki/latest/operations/storage/boltdb-shipper/) store and chunk/index stores.
[boltdb-shipper](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/) store and chunk/index stores.
In order to enable the retention support, the Table Manager needs to be
configured to enable deletions and a retention period. Please refer to the
[`table_manager`](https://grafana.com/docs/loki/latest/configure/#table_manager)
[`table_manager`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#table_manager)
section of the Loki configuration reference for all available options.
Alternatively, the `table-manager.retention-period` and
`table-manager.retention-deletes-enabled` command line flags can be used. The
@@ -196,13 +196,13 @@ can be parsed using the Prometheus common model [ParseDuration](https://pkg.go.d
{{% admonition type="warning" %}}
The retention period must be a multiple of the index and chunks table
`period`, configured in the [`period_config`](https://grafana.com/docs/loki/latest/configure/#period_config) block.
See the [Table Manager](https://grafana.com/docs/loki/latest/operations/storage/table-manager/#retention) documentation for
`period`, configured in the [`period_config`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#period_config) block.
See the [Table Manager](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/table-manager/#retention) documentation for
more information.
{{% /admonition %}}
{{% admonition type="note" %}}
To avoid querying of data beyond the retention period, `max_query_lookback` config in [`limits_config`](https://grafana.com/docs/loki/latest/configure/#limits_config) must be set to a value less than or equal to what is set in `table_manager.retention_period`.
To avoid querying of data beyond the retention period, `max_query_lookback` config in [`limits_config`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config) must be set to a value less than or equal to what is set in `table_manager.retention_period`.
{{% /admonition %}}
When using S3 or GCS, the bucket storing the chunks needs to have the expiry
@@ -223,7 +223,7 @@ intact; you will still be able to see related labels but will be unable to
retrieve the deleted log content.
For further details on the Table Manager internals, refer to the
[Table Manager](https://grafana.com/docs/loki/latest/operations/storage/table-manager/) documentation.
[Table Manager](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/table-manager/) documentation.
## Example Configuration

@@ -81,7 +81,7 @@ Loki cache generation number errors(Loki >= 2.6)
- Check the metric `loki_delete_cache_gen_load_failures_total` on `/metrics`, which is an indicator for the occurrence of the problem. If the value is greater than 1, it means that there is a problem with that component.
- Try an HTTP GET request to the route: /loki/api/v1/cache/generation_numbers
- If the response is `"deletion is not available for this tenant"`, the deletion API is not enabled for the tenant. To enable this API, set `allow_deletes: true` for this tenant via the configuration settings. For more details, see: /docs/loki/latest/operations/storage/logs-deletion/
- If the response is `"deletion is not available for this tenant"`, the deletion API is not enabled for the tenant. To enable this API, set `allow_deletes: true` for this tenant via the configuration settings. For more details, see: /docs/loki/<LOKI_VERSION>/operations/storage/logs-deletion/
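If tenant settings are managed through the runtime overrides file, enabling it could look roughly like this sketch (the tenant ID is a placeholder; the exact option name can vary between Loki versions):
```yaml
overrides:
  tenant-a:                # placeholder tenant ID
    allow_deletes: true    # enable the deletion API for this tenant, as described above
```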
## Troubleshooting targets

@@ -6,6 +6,6 @@ weight:
# Upgrade
- [Upgrade](https://grafana.com/docs/loki/latest/setup/upgrade/) from one Loki version to a newer version.
- [Upgrade](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/) from one Loki version to a newer version.
- [Upgrade Helm](https://grafana.com/docs/loki/latest/setup/upgrade/) from Helm v2.x to Helm v3.x.
- [Upgrade Helm](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/) from Helm v2.x to Helm v3.x.

@@ -229,7 +229,7 @@ Commands:
For more information about log queries and metric queries, refer to the
LogQL documentation:
https://grafana.com/docs/loki/latest/logql/
https://grafana.com/docs/loki/<LOKI_VERSION>/logql/
labels [<flags>] [<label>]
Find values for a given label.

@@ -36,7 +36,7 @@ Without revisiting the decisions and discussions around the somewhat controversi
Lastly several useful additions to the LogQL query language have been included:
* More text/template functions are included for `label_format` and `line_format` with PR [3515](https://github.com/grafana/loki/pull/3515), for more information, see the [documentation for template functions](/docs/loki/latest/logql/template_functions/).
* More text/template functions are included for `label_format` and `line_format` with PR [3515](https://github.com/grafana/loki/pull/3515), for more information, see the [documentation for template functions](/docs/loki/<LOKI_VERSION>/logql/template_functions/).
* Also support for math functions within `label_format` and `line_format` was included with [3434](https://github.com/grafana/loki/pull/3434).
* Two additional metric functions with some interesting use cases `first_over_time` and `last_over_time` were added in PR [3050](https://github.com/grafana/loki/pull/3050). These can be useful for some down sampling approaches where instead of taking an average, max, or min of samples over a range in a metrics query, you can select the first or last log line to use from that range.
@@ -88,4 +88,4 @@ Lists of bug fixes for 2.3.x.
### 2.3.0 bug fixes
* An important fix for leaking resources was patched with [3733](https://github.com/grafana/loki/pull/3733), when queries were canceled a goroutine was left running which would hold memory resources creating a memory leak.
* [3686](https://github.com/grafana/loki/pull/3686) fixes a panic with the frontend when used with a downstream URL. **Note** we recommend using the [gRPC Pull Model](/docs/loki/latest/configuration/query-frontend/#grpc-mode-pull-model); better performance and fair scheduling between tenants can be obtained with the gRPC Pull Model.
* [3686](https://github.com/grafana/loki/pull/3686) fixes a panic with the frontend when used with a downstream URL. **Note** we recommend using the [gRPC Pull Model](/docs/loki/<LOKI_VERSION>/configuration/query-frontend/#grpc-mode-pull-model); better performance and fair scheduling between tenants can be obtained with the gRPC Pull Model.

@@ -64,7 +64,7 @@ Usage reporting helps provide anonymous information on how people use Loki and w
If possible, we ask you to leave the usage reporting feature enabled and help us understand more about Loki! We are also working to figure out how we can share this info with the community so everyone can watch Loki grow.
If you would rather not participate in usage stats reporting, [the feature can be disabled in config](/docs/loki/latest/configuration/#analytics)
If you would rather not participate in usage stats reporting, [the feature can be disabled in config](/docs/loki/<LOKI_VERSION>/configuration/#analytics)
```
analytics:

@@ -9,7 +9,7 @@ Grafana Labs is excited to announce the release of Loki 2.9.0 Here's a summary o
## Features and enhancements
- **Structured metadata**: The [Structured Metadata](https://grafana.com/docs/loki/latest/get-started/labels/structured-metadata/) feature, which was introduced as experimental in release 2.9.0, is generally available as of release 2.9.4.
- **Structured metadata**: The [Structured Metadata](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/structured-metadata/) feature, which was introduced as experimental in release 2.9.0, is generally available as of release 2.9.4.
- **Query Language Improvements**: Several improvements to the query language that speed up line parsing and regex matching. [PR #8646](https://github.com/grafana/loki/pull/8646), [PR #8659](https://github.com/grafana/loki/pull/8659), [PR #8724](https://github.com/grafana/loki/pull/8724), [PR #8734](https://github.com/grafana/loki/pull/8734), [PR #8739](https://github.com/grafana/loki/pull/8739), [PR #8763](https://github.com/grafana/loki/pull/8763), [PR #8890](https://github.com/grafana/loki/pull/8890), [PR #8914](https://github.com/grafana/loki/pull/8914)
@@ -26,7 +26,7 @@ Grafana Labs is excited to announce the release of Loki 2.9.0 Here's a summary o
- **logfmt stage improvements**: logfmt parser now performs non-strict parsing by default which helps scan semi-structured log lines. [PR #9626](https://github.com/grafana/loki/pull/9626)
- **Deprecations**
- Legacy index and chunk stores that are not "single store" (such as `tsdb`, `boltdb-shipper`) are deprecated. These storage backends are Cassandra (`cassandra`), DynamoDB (`aws`, `aws-dynamo`), BigTable (`bigtable`, `bigtable-hashed`), GCP (`gcp`, `gcp-columnkey`), and gRPC (`grpc`). See https://grafana.com/docs/loki/latest/configure/storage.md for more information.
- Legacy index and chunk stores that are not "single store" (such as `tsdb`, `boltdb-shipper`) are deprecated. These storage backends are Cassandra (`cassandra`), DynamoDB (`aws`, `aws-dynamo`), BigTable (`bigtable`, `bigtable-hashed`), GCP (`gcp`, `gcp-columnkey`), and gRPC (`grpc`). See https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage.md for more information.
- The `table-manager` target is deprecated, because it is not used by "single store" implementations.
- The `-boltdb.shipper.compactor.*` CLI flags are deprecated in favor of `-compactor.*`.
- The `-ingester.unordered-writes` CLI flag is deprecated and will always default to `true` in the next major release.

@@ -101,7 +101,7 @@ config:
Match kube.*
Url ${FLUENT_LOKI_URL}
Labels {job="fluent-bit"}
LabelKeys level,app # this sets the values for actual Loki streams and the other labels are converted to structured_metadata https://grafana.com/docs/loki/latest/get-started/labels/structured-metadata/
LabelKeys level,app # this sets the values for actual Loki streams and the other labels are converted to structured_metadata https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/structured-metadata/
BatchWait 1
BatchSize 1001024
LineFormat json
@@ -117,7 +117,7 @@ helm install fluent-bit fluent/fluent-bit -f values.yaml
By default it will collect all container logs and extract labels from the Kubernetes API (`container_name`, `namespace`, etc.).
If you also want to host your Loki instance inside the cluster, install the [official Loki helm chart](https://grafana.com/docs/loki/latest/setup/install/helm/).
If you also want to host your Loki instance inside the cluster, install the [official Loki helm chart](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/).
### AWS Elastic Container Service (ECS)

@@ -60,7 +60,7 @@ To add tenant id add `-var "tenant_id=value"`.
Note that the creation of a subscription filter on Cloudwatch in the provided Terraform file only accepts an array of log group names.
It does **not** accept strings for regex filtering on the logs contents via the subscription filters. We suggest extending the Terraform file to do so.
Or, have lambda-promtail write to Promtail and use [pipeline stages](/docs/loki/latest/send-data/promtail/stages/drop/).
Or, have lambda-promtail write to Promtail and use [pipeline stages](/docs/loki/<LOKI_VERSION>/send-data/promtail/stages/drop/).
CloudFormation:
```

@@ -105,7 +105,7 @@ Things to note before ingesting OpenTelemetry logs to Loki:
- Flattening of nested Attributes
While converting Attributes in OTLP to Index labels or Structured Metadata, any nested attribute values are flattened out using `_` as a separator.
It is done in a similar way to the [LogQL json parser](/docs/loki/latest/query/log_queries/#json).
It is done in a similar way to the [LogQL json parser](/docs/loki/<LOKI_VERSION>/query/log_queries/#json).
- Stringification of non-string Attribute values

@@ -231,12 +231,12 @@ That's it ! Make sure to checkout LogQL to learn more about Loki powerful query
[ecs iam]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html
[arn]: https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
[task]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html
[fluentd loki]: https://grafana.com/docs/loki/latest/send-data/fluentd/
[fluentbit loki]: https://grafana.com/docs/loki/latest/send-data/fluentbit/
[fluentd loki]: https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/fluentd/
[fluentbit loki]: https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/fluentbit/
[fluentbit]: https://fluentbit.io/
[fluentd]: https://www.fluentd.org/
[fluentbit loki image]: https://hub.docker.com/r/grafana/fluent-bit-plugin-loki
[logql]: https://grafana.com/docs/loki/latest/logql/
[logql]: https://grafana.com/docs/loki/<LOKI_VERSION>/logql/
[alpine]:https://hub.docker.com/_/alpine
[fluentbit output]: https://fluentbit.io/documentation/0.14/output/
[routing]: https://fluentbit.io/documentation/0.13/getting_started/routing.html

@@ -270,7 +270,7 @@ If you want to push this further you can check out [Joe's blog post][blog annota
[kubelet]: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=The%20kubelet%20works%20in%20terms,PodSpecs%20are%20running%20and%20healthy.
[blog events]: https://grafana.com/blog/2019/08/21/how-grafana-labs-effectively-pairs-loki-and-kubernetes-events/
[labels post]: https://grafana.com/blog/2020/04/21/how-labels-in-loki-can-make-log-queries-faster-and-easier/
[pipeline]: https://grafana.com/docs/loki/latest/send-data/promtail/pipelines/
[pipeline]: https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/pipelines/
[final config]: values.yaml
[blog annotations]: https://grafana.com/blog/2019/12/09/how-to-do-automatic-annotations-with-grafana-and-loki/
[kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/

@@ -38,7 +38,7 @@ This guide assumes Loki will be installed in one of the modes above and that a `
**To grant access to S3 via an IAM role without providing credentials:**
1. Provision an IAM role, policy and S3 bucket as described in [Storage](https://grafana.com/docs/loki/latest/configure/storage/#aws-deployment-s3-single-store).
1. Provision an IAM role, policy and S3 bucket as described in [Storage](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#aws-deployment-s3-single-store).
- If the Terraform module was used note the annotation emitted by `terraform output -raw annotation`.
1. Add the IAM role annotation to the service account in `values.yaml`:
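
   For example, a minimal `values.yaml` snippet might look like the following sketch; the role ARN is a placeholder, and the `eks.amazonaws.com/role-arn` annotation assumes you are using IAM Roles for Service Accounts (IRSA) on EKS:

   ```yaml
   # Hypothetical example: replace the account ID and role name with the
   # annotation emitted by your Terraform module (terraform output -raw annotation).
   serviceAccount:
     annotations:
       "eks.amazonaws.com/role-arn": "arn:aws:iam::<ACCOUNT_ID>:role/<LOKI_S3_ROLE>"
   ```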

@ -92,4 +92,4 @@ It is not recommended to run scalable mode with `filesystem` storage.
```
## Next Steps
Configure an agent to [send log data to Loki](/docs/loki/latest/send-data/).
Configure an agent to [send log data to Loki](/docs/loki/<LOKI_VERSION>/send-data/).

@ -46,7 +46,7 @@ jb install github.com/grafana/loki/production/ksonnet/promtail@main
Revise the YAML contents of `environments/loki/main.jsonnet`, updating these variables:
- Update the `username`, `password`, and the relevant `htpasswd` variable values.
- Update the S3 or GCS variable values, depending on your object storage type. See [storage_config](/docs/loki/latest/configuration/#storage_config) for more configuration details.
- Update the S3 or GCS variable values, depending on your object storage type. See [storage_config](/docs/loki/<LOKI_VERSION>/configuration/#storage_config) for more configuration details.
- Remove from the configuration the S3 or GCS object storage variables that are not part of your setup.
- Update the Promtail configuration `container_root_path` variable's value to reflect your root path for the Docker daemon. Run `docker info | grep "Root Dir"` to acquire your root path.
- Update the `from` value in the Loki `schema_config` section to no more than 14 days prior to the current date. The `from` date represents the first day for which the `schema_config` section is valid. For example, if today is `2021-01-15`, set `from` to `2021-01-01`. This recommendation is based on Loki's default acceptance of log lines up to 14 days in the past. The `reject_old_samples_max_age` configuration variable controls the acceptance range.
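
As an illustration (not part of the original guide), a `schema_config` entry with an adjusted `from` date might look like the sketch below; the store, schema, and object store values are placeholders that should match your own setup:

```yaml
schema_config:
  configs:
    - from: 2021-01-01        # no more than 14 days before the current date
      store: boltdb-shipper   # placeholder: use your existing index store
      object_store: gcs       # placeholder: use your existing object store
      schema: v11
      index:
        prefix: index_
        period: 24h
```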

@ -11,14 +11,14 @@ keywords:
# Migrate to TSDB
[TSDB]({{< relref "../../../operations/storage/tsdb" >}}) is the recommended index type for Loki and is where current development is focused.
If you are running Loki with [boltdb-shipper]({{< relref "../../../operations/storage/boltdb-shipper" >}}) or any of the [legacy index types](https://grafana.com/docs/loki/latest/configure/storage/#index-storage) that have been deprecated,
If you are running Loki with [boltdb-shipper]({{< relref "../../../operations/storage/boltdb-shipper" >}}) or any of the [legacy index types](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#index-storage) that have been deprecated,
we strongly recommend migrating to TSDB.
### Configure TSDB index for an upcoming period
To begin the migration, add a new [period_config]({{< relref "../../../configure#period_config" >}}) entry in your [schema_config]({{< relref "../../../configure#schema_config" >}}).
You can read more about schema config [here](https://grafana.com/docs/loki/latest/configure/storage/#schema-config).
You can read more about schema config [here](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#schema-config).
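
A sketch of what the new entry might look like, assuming an existing boltdb-shipper period and S3 object storage (dates, schema versions, and store names are illustrative):

```yaml
schema_config:
  configs:
    - from: 2023-01-01          # existing period, left unchanged
      store: boltdb-shipper
      object_store: s3
      schema: v12
      index:
        prefix: index_
        period: 24h
    - from: 2024-06-01          # a future date; the TSDB index is used from this day onward
      store: tsdb
      object_store: s3
      schema: v13
      index:
        prefix: index_
        period: 24h
```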
{{% admonition type="note" %}}
You must roll out the new `period_config` change to all Loki components in order for it to take effect.

@ -67,20 +67,20 @@ If you introduce a new schema_config entry it may cause additional validation er
{{< /admonition >}}
{{< admonition type="tip" >}}
If you configure `path_prefix` in the `common` config section this can help save a lot of configuration. Refer to the [Common Config Docs](https://grafana.com/docs/loki/latest/configure/#common).
If you configure `path_prefix` in the `common` config section this can help save a lot of configuration. Refer to the [Common Config Docs](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#common).
{{< /admonition >}}
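
For illustration, a minimal sketch of such a `common` section (the path is a placeholder):

```yaml
common:
  path_prefix: /loki   # placeholder: per-component paths such as the WAL and compactor working directory are derived from this
```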
The **Helm chart** has gone through some significant changes and has a separate upgrade guide: [Upgrading to Helm 6.x](https://grafana.com/docs/loki/latest/setup/upgrade/upgrade-to-6x/).
The **Helm chart** has gone through some significant changes and has a separate upgrade guide: [Upgrading to Helm 6.x](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/upgrade-to-6x/).
### Loki
#### Structured Metadata, Open Telemetry, Schemas and Indexes
A flagship feature of Loki 3.0 is native support for the Open Telemetry Protocol (OTLP). This is made possible by a new feature in Loki called [Structured Metadata](https://grafana.com/docs/loki/latest/get-started/labels/structured-metadata/), a place for metadata which doesn't belong in labels or log lines. OTel resources and attributes are often a great example of data which doesn't belong in the index nor in the log line.
A flagship feature of Loki 3.0 is native support for the Open Telemetry Protocol (OTLP). This is made possible by a new feature in Loki called [Structured Metadata](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/structured-metadata/), a place for metadata which doesn't belong in labels or log lines. OTel resources and attributes are often a great example of data which doesn't belong in the index nor in the log line.
Structured Metadata is enabled by default in Loki 3.0; however, it requires that your active schema use both the `tsdb` index type AND the `v13` storage schema. If you are not using both of these, you have two options:
* Upgrade your index version and schema version before updating to 3.0, see [schema config upgrade](https://grafana.com/docs/loki/latest/operations/storage/schema/).
* Upgrade your index version and schema version before updating to 3.0, see [schema config upgrade](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/schema/).
* Disable Structured Metadata (and therefore OTLP support), upgrade to 3.0, and perform the schema migration afterwards. This can be done by setting `allow_structured_metadata: false` in the `limits_config` section or by setting the command line argument `-validation.allow-structured-metadata=false`.
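
For example, the YAML form of the second option looks like this (the surrounding configuration is omitted):

```yaml
limits_config:
  allow_structured_metadata: false   # disables Structured Metadata and therefore OTLP ingestion
```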
#### `service_name` label
@ -112,7 +112,7 @@ The following CLI flags and the corresponding YAML settings to configure shared
- `-boltdb.shipper.shared-store`
- `-tsdb.shipper.shared-store`
Going forward the `object_store` setting in the [period_config](/docs/loki/latest/configure/#period_config) will be used to configure the store for the index.
Going forward the `object_store` setting in the [period_config](/docs/loki/<LOKI_VERSION>/configure/#period_config) will be used to configure the store for the index.
This ensures that chunks and index files for a given period reside together in the same storage bucket.
We are removing the shared store setting in an effort to simplify storage configuration and reduce the possibility for misconfiguration.
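
A sketch of a period with `object_store` set; the other field values are illustrative:

```yaml
schema_config:
  configs:
    - from: 2024-04-01
      store: tsdb
      object_store: s3   # chunks and index files for this period are stored here
      schema: v13
      index:
        prefix: index_
        period: 24h
```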
@ -137,7 +137,7 @@ The following CLI flags and the corresponding YAML settings to configure a path
- `-boltdb.shipper.shared-store.key-prefix`
- `-tsdb.shipper.shared-store.key-prefix`
The path prefix for storing the index can now be configured by setting `path_prefix` under the `index` key in [period_config](/docs/loki/latest/configure/#period_config).
The path prefix for storing the index can now be configured by setting `path_prefix` under the `index` key in [period_config](/docs/loki/<LOKI_VERSION>/configure/#period_config).
This enables users to change the path prefix by adding a new period config.
```
period_config:
```
@ -147,7 +147,7 @@ period_config:
{{% admonition type="note" %}}
`path_prefix` only applies to TSDB and BoltDB indexes. This setting has no effect on [legacy indexes](https://grafana.com/docs/loki/latest/configure/storage/#index-storage).
`path_prefix` only applies to TSDB and BoltDB indexes. This setting has no effect on [legacy indexes](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#index-storage).
{{% /admonition %}}
`path_prefix` defaults to `index/`, which is the same as the default value of the removed configurations.
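
A sketch of how a period entry could override the prefix (the value shown is illustrative):

```yaml
period_config:
  index:
    path_prefix: loki_index/   # defaults to index/ when not set
    period: 24h
```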
@ -162,7 +162,7 @@ The following CLI flags and the corresponding YAML settings to configure the sha
- `-boltdb.shipper.compactor.shared-store`
- `-boltdb.shipper.compactor.shared-store.key-prefix`
Going forward compactor will run compaction and retention on all the object stores configured in [period configs](/docs/loki/latest/configure/#period_config) where the index type is either `tsdb` or `boltdb-shipper`.
Going forward compactor will run compaction and retention on all the object stores configured in [period configs](/docs/loki/<LOKI_VERSION>/configure/#period_config) where the index type is either `tsdb` or `boltdb-shipper`.
#### `delete_request_store` should be explicitly configured
@ -189,7 +189,7 @@ It was used to allow transferring chunks to new ingesters when the old ingester
Alternatives to this setting are:
- **A. (Preferred)** Enable the WAL and rely on the new ingester to replay the WAL.
- Optionally, you can enable `flush_on_shutdown` (`-ingester.flush-on-shutdown`) to flush to long-term storage on shutdowns.
- **B.** Manually flush during shutdowns via [the ingester `/shutdown?flush=true` endpoint](https://grafana.com/docs/loki/latest/reference/api/#flush-in-memory-chunks-and-shut-down).
- **B.** Manually flush during shutdowns via [the ingester `/shutdown?flush=true` endpoint](https://grafana.com/docs/loki/<LOKI_VERSION>/reference/api/#flush-in-memory-chunks-and-shut-down).
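
A minimal sketch of option A, assuming the `flush_on_shutdown` setting lives in the ingester WAL block:

```yaml
ingester:
  wal:
    enabled: true
    dir: /loki/wal            # placeholder: must be persistent across restarts
    flush_on_shutdown: true   # optional: also flush chunks to long-term storage on shutdown
```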
#### Removed the `default` section of the runtime overrides config file.
@ -208,18 +208,18 @@ The previous default value `false` is applied.
1. Removed the already deprecated `store.max-look-back-period` CLI flag and the corresponding YAML settings. Use the `querier.max-query-lookback` config instead.
1. Removed the already deprecated `-querier.engine.timeout` CLI flag and the corresponding YAML setting.
1. Also removes the `query_timeout` from the querier YAML section. Instead of configuring `query_timeout` under `querier`, you now configure it in [Limits Config](/docs/loki/latest/configuration/#limits_config).
1. Also removes the `query_timeout` from the querier YAML section. Instead of configuring `query_timeout` under `querier`, you now configure it in [Limits Config](/docs/loki/<LOKI_VERSION>/configuration/#limits_config).
1. `s3.sse-encryption` is removed. AWS now defaults encryption of all buckets to SSE-S3. Use `sse.type` to set SSE type.
1. `ruler.wal-cleaer.period` is removed. Use `ruler.wal-cleaner.period` instead.
1. `experimental.ruler.enable-api` is removed. Use `ruler.enable-api` instead.
1. `split_queries_by_interval` is removed from `query_range` YAML section. You can instead configure it in [Limits Config](/docs/loki/latest/configuration/#limits_config).
1. `split_queries_by_interval` is removed from `query_range` YAML section. You can instead configure it in [Limits Config](/docs/loki/<LOKI_VERSION>/configuration/#limits_config).
1. `frontend.forward-headers-list` CLI flag and its corresponding YAML setting are removed.
1. `frontend.cache-split-interval` CLI flag is removed. Results caching interval is now determined by `querier.split-queries-by-interval`.
1. `querier.worker-parallelism` CLI flag and its corresponding YAML setting are now removed as it does not offer additional value over the already existing `querier.max-concurrent`.
We recommend configuring `querier.max-concurrent` to limit the maximum number of concurrent requests processed by the queriers.
1. `ruler.evaluation-delay-duration` CLI flag and the corresponding YAML setting are removed.
1. `validation.enforce-metric-name` CLI flag and the corresponding YAML setting are removed.
1. `boltdb.shipper.compactor.deletion-mode` CLI flag and the corresponding YAML setting are removed. You can instead configure the `compactor.deletion-mode` CLI flag or `deletion_mode` YAML setting in [Limits Config](/docs/loki/latest/configuration/#limits_config).
1. `boltdb.shipper.compactor.deletion-mode` CLI flag and the corresponding YAML setting are removed. You can instead configure the `compactor.deletion-mode` CLI flag or `deletion_mode` YAML setting in [Limits Config](/docs/loki/<LOKI_VERSION>/configuration/#limits_config).
1. Compactor CLI flags that use the prefix `boltdb.shipper.compactor.` are removed. You can instead use CLI flags with the `compactor.` prefix.
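
Taken together, several of the removed options now live under `limits_config`; a sketch with illustrative values:

```yaml
limits_config:
  query_timeout: 5m                  # was querier.query_timeout / engine.timeout
  split_queries_by_interval: 30m     # was query_range.split_queries_by_interval
  deletion_mode: filter-and-delete   # was boltdb.shipper.compactor.deletion-mode
```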
@ -254,7 +254,7 @@ This new metric will provide a more clear signal that there is an issue with ing
#### Automatic stream sharding is enabled by default
Automatic stream sharding helps keep the write load of high-volume streams balanced across ingesters and helps to avoid hot-spotting. Check out the [operations page](https://grafana.com/docs/loki/latest/operations/automatic-stream-sharding/) for more information.
Automatic stream sharding helps keep the write load of high-volume streams balanced across ingesters and helps to avoid hot-spotting. Check out the [operations page](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/automatic-stream-sharding/) for more information.
#### More results caching is enabled by default
@ -266,7 +266,7 @@ All of these are cached to the `results_cache` which is configured in the `query
#### Write dedupe cache is deprecated
Write dedupe cache is deprecated because it is not required by the newer single store indexes ([TSDB]({{< relref "../../operations/storage/tsdb" >}}) and [boltdb-shipper]({{< relref "../../operations/storage/boltdb-shipper" >}})).
If you are using a [legacy index type](https://grafana.com/docs/loki/latest/configure/storage/#index-storage), consider migrating to TSDB (recommended).
If you are using a [legacy index type](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#index-storage), consider migrating to TSDB (recommended).
#### Embedded cache metric changes
@ -520,7 +520,7 @@ ruler:
#### query-frontend Kubernetes headless service changed to load balanced service
{{% admonition type="note" %}}
This is relevant only if you are using [jsonnet for deploying Loki in Kubernetes](/docs/loki/latest/installation/tanka/).
This is relevant only if you are using [jsonnet for deploying Loki in Kubernetes](/docs/loki/<LOKI_VERSION>/installation/tanka/).
{{% /admonition %}}
The `query-frontend` Kubernetes service was previously headless and was used for two purposes:
@ -560,14 +560,14 @@ These statistics are also displayed when using `--stats` with LogCLI.
### Loki Canary Permission
The new `push` mode to [Loki canary](/docs/loki/latest/operations/loki-canary/) can push logs that are generated by a Loki canary directly to a given Loki URL. Previously, it only wrote to a local file and you needed some agent, such as promtail, to scrape and push it to Loki.
The new `push` mode to [Loki canary](/docs/loki/<LOKI_VERSION>/operations/loki-canary/) can push logs that are generated by a Loki canary directly to a given Loki URL. Previously, it only wrote to a local file and you needed some agent, such as promtail, to scrape and push it to Loki.
So if you run Loki behind a proxy with different authorization policies for reading and writing, the auth credentials you pass to the Loki canary now need to have both `READ` and `WRITE` permissions.
### `engine.timeout` and `querier.query_timeout` are deprecated
Previously, we had two configurations to define a query timeout: `engine.timeout` and `querier.query-timeout`.
As they were conflicting and `engine.timeout` isn't as expressive as `querier.query-timeout`,
we're deprecating it and moving it to [Limits Config](/docs/loki/latest/configuration/#limits_config) `limits_config.query_timeout` with the same default values.
we're deprecating it and moving it to [Limits Config](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) `limits_config.query_timeout` with the same default values.
#### `fifocache` has been renamed
@ -1002,10 +1002,10 @@ cortex_chunks_store* -> loki_chunks_store*
Previously, samples generated by recording rules would only be buffered in memory before being remote-written to Prometheus; from this
version, the `ruler` now writes these samples to a per-tenant Write-Ahead Log for durability. More details about the
per-tenant WAL can be found [here](/docs/loki/latest/operations/recording-rules/).
per-tenant WAL can be found [here](/docs/loki/<LOKI_VERSION>/operations/recording-rules/).
The `ruler` now requires persistent storage - see the
[Operations](/docs/loki/latest/operations/recording-rules/#deployment) page for more details about deployment.
[Operations](/docs/loki/<LOKI_VERSION>/operations/recording-rules/#deployment) page for more details about deployment.
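
As an illustration, a ruler WAL directory might be configured like this (the path is a placeholder and must point at persistent storage):

```yaml
ruler:
  wal:
    dir: /loki/ruler-wal   # per-tenant WAL for samples generated by recording rules
```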
### Promtail
@ -1308,7 +1308,7 @@ schema_config:
④ Make sure this matches your existing config (e.g. maybe you were using gcs for your object_store)
⑤ 24h is required for boltdb-shipper
There are more examples on the [Storage description page](https://grafana.com/docs/loki/latest/configure/storage/#examples), including the information you need to set up the `storage` section for boltdb-shipper.
There are more examples on the [Storage description page](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#examples), including the information you need to set up the `storage` section for boltdb-shipper.
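
For reference, a representative boltdb-shipper entry matching notes ④ and ⑤ above might look like this sketch (the dates and object store are illustrative):

```yaml
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: gcs    # ④ match your existing object store
      schema: v11
      index:
        prefix: index_
        period: 24h        # ⑤ 24h is required for boltdb-shipper
```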
## 1.6.0
