@@ -202,18 +202,18 @@ Another great use case is alerting on high cardinality sources. These are things
Creating these alerts in LogQL is attractive because these metrics can be extracted at _query time_, meaning we don't suffer the cardinality explosion in our metrics store.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
As an example, we can use LogQL v2 to help Loki monitor _itself_, alerting us when specific tenants have queries that take longer than 10s to complete! To do so, we'd use the following query: `sum by (org_id) (rate({job="loki-prod/query-frontend"} |= "metrics.go" | logfmt | duration > 10s [1m]))`.
{{% /admonition %}}
{{< /admonition >}}
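For reference, a ruler rule file that turns the query above into an alert might look like the following sketch (the group name, alert name, `for` duration, and severity label are illustrative):

```yaml
groups:
  - name: loki-tenant-latency
    rules:
      - alert: TenantSlowQueries
        # The LogQL metric query from the note above, firing when any tenant
        # has queries taking longer than 10s to complete.
        expr: |
          sum by (org_id) (rate({job="loki-prod/query-frontend"} |= "metrics.go" | logfmt | duration > 10s [1m])) > 0
        for: 5m
        labels:
          severity: warning
```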
## Interacting with the Ruler
### Lokitool
Because the rule files are identical to Prometheus rule files, we can interact with the Loki Ruler via `lokitool`.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
lokitool is intended to run against multi-tenant Loki. The commands need an `--id=` flag set to the Loki instance ID or set the environment variable `LOKI_TENANT_ID`. If Loki is running in single tenant mode, the required ID is `fake`.
@@ -129,9 +129,9 @@ The performance losses against the current approach includes:
Loki regularly combines multiple blocks into a chunk and "flushes" it to storage. In order to ensure that reads over flushed chunks remain as performant as possible, we will re-order a possibly-overlapping set of blocks into a set of blocks that maintain monotonically increasing order between them. From the perspective of the rest of Loki’s components (queriers/rulers fetching chunks from storage), nothing has changed.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
**In the case that data for a stream is ingested in order, this is effectively a no-op, making it well optimized for in-order writes (which is both the requirement and default in Loki currently). Thus, this should have little performance impact on ordered data while enabling Loki to ingest unordered data.**
{{% /admonition %}}
{{< /admonition >}}
#### Chunk Durations
@@ -153,9 +153,9 @@ The second is simple to implement and an effective way to ensure Loki can ingest
We also cut chunks according to the `sync_period`. The first timestamp ingested past this bound will trigger a cut. This process aids in increasing chunk determinism and therefore our deduplication ratio in object storage because chunks are [content addressed](https://en.wikipedia.org/wiki/Content-addressable_storage). With the removal of our ordering constraint, it's possible that in some cases the synchronization method will not be as effective, such as during concurrent writes to the same stream across this bound.
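As a rough sketch, the synchronization bound described above is controlled by ingester settings along these lines (the values are illustrative, and the field names assume the current ingester configuration):

```yaml
ingester:
  # Cut a chunk when the first entry past this period boundary is ingested,
  # making chunk boundaries (and therefore chunk content hashes) more deterministic.
  sync_period: 1h
  # Only synchronize chunks that are at least this full, to avoid cutting tiny chunks.
  sync_min_utilization: 0.3
```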
{{% admonition type="note" %}}
{{< admonition type="note" >}}
**It's important to mention that this is possible today with the current ordering constraint, but we'll be increasing the likelihood by removing it.**
@@ -21,18 +21,18 @@ branch is then used for all the Stable Releases, and all Patch Releases for that
The name of the release branch should be `release-VERSION_PREFIX`, such as `release-2.9.x`.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Branches are only made for VERSION_PREFIX; do not create branches for the full VERSION such as `release-2.9.1`.
{{% /admonition %}}
{{< /admonition >}}
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Don't create any other branches that are prefixed with `release` when creating PRs or those branches will collide with our automated release build publish rules.
{{% /admonition %}}
{{< /admonition >}}
1. Create a label to make backporting PRs to this branch easy.
The name of the label should be `backport release-VERSION_PREFIX`, such as `backport release-2.9.x`.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Note there is a space in the label name. The label name must follow this naming convention to trigger CI related jobs.
@@ -32,9 +32,9 @@ Query parallelization is limited by the number of instances and the setting `max
The simple scalable deployment is the default configuration installed by the [Loki Helm Chart]({{< relref "../setup/install/helm" >}}). This deployment mode is the easiest way to deploy Loki at scale. It strikes a balance between deploying in [monolithic mode](#monolithic-mode) or deploying each component as a [separate microservice](#microservices-mode).
{{% admonition type="note" %}}
{{< admonition type="note" >}}
This deployment mode is sometimes referred to by the acronym SSD for simple scalable deployment, not to be confused with solid state drives. Loki uses an object store.
{{% /admonition %}}
{{< /admonition >}}
Loki’s simple scalable deployment mode separates execution paths into read, write, and backend targets. These targets can be scaled independently, letting you customize your Loki deployment to meet your business needs for log ingestion and log query so that your infrastructure costs better match how you use Loki.
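For example, each execution path can be scaled independently through the Helm chart values; a sketch (assuming a recent chart version, where these keys may differ) might look like:

```yaml
# values.yaml sketch for the simple scalable deployment mode
deploymentMode: SimpleScalable
write:
  replicas: 3   # log ingestion path
read:
  replicas: 3   # log query path
backend:
  replicas: 2   # compactor, ruler, and other backend components
```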
@@ -79,13 +79,13 @@ For release 3.2 the components are:
- Ruler
- Table Manager (deprecated)
{{% admonition type="tip" %}}
{{< admonition type="tip" >}}
You can see the complete list of targets for your version of Loki by running Loki with the flag `-list-targets`, for example:
```bash
docker run docker.io/grafana/loki:3.2.1 -config.file=/etc/loki/local-config.yaml -list-targets
@@ -19,10 +19,10 @@ Labels in Loki perform a very important task: They define a stream. More specifi
If you are familiar with Prometheus, the term used there is series; however, Prometheus has an additional dimension: metric name. Loki simplifies this in that there are no metric names, just labels, and we decided to use streams instead of series.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Structured metadata does not define a stream; it is metadata attached to a log line.
See [structured metadata]({{< relref "./structured-metadata" >}}) for more information.
@@ -5,9 +5,9 @@ description: Describes how to enable structure metadata for logs and how to quer
---
# What is structured metadata
{{% admonition type="warning" %}}
{{< admonition type="warning" >}}
Structured metadata was added to chunk format V4, which is used if the schema version is greater than or equal to `13`. See [Schema Config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#schema-config) for more details about schema versions.
{{% /admonition %}}
{{< /admonition >}}
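For example, a `schema_config` entry that meets this requirement might look like the following sketch (the date and object store are illustrative):

```yaml
schema_config:
  configs:
    - from: 2024-04-01   # start of the period that uses schema v13
      store: tsdb
      object_store: s3
      schema: v13
      index:
        prefix: index_
        period: 24h
```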
Selecting proper, low cardinality labels is critical to operating and querying Loki effectively. Some metadata, especially infrastructure related metadata, can be difficult to embed in log lines, and is too high cardinality to effectively store as indexed labels (doing so would reduce the performance of the index).
@@ -36,7 +36,7 @@ See the [Promtail: Structured metadata stage](https://grafana.com/docs/loki/<LOK
With Loki version 1.2.0, support for structured metadata has been added to the Logstash output plugin. For more information, see [logstash](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/logstash/).
{{% admonition type="warning" %}}
{{< admonition type="warning" >}}
Structured metadata size is taken into account while asserting ingestion rate limiting.
Along with that, there are separate limits on how much structured metadata can be attached per log line.
```
@@ -48,7 +48,7 @@ Along with that, there are separate limits on how much structured metadata can b
- hash: 2943214005 # hash of {stream="stdout",pod="loki-canary-9w49x"}
types: filter,limited
```
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Changes to these configurations **do not require a restart**; they are defined in the [runtime configuration file](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#runtime-configuration-file).
{{% /admonition %}}
{{< /admonition >}}
The available query types are:
@@ -53,9 +53,9 @@ is logged with every query request in the `query-frontend` and `querier` logs, f
This feature is an [experimental feature](/docs/release-life-cycle/). Engineering and on-call support is not available. No SLA is provided.
{{% /admonition %}}
{{< /admonition >}}
Loki leverages [bloom filters](https://en.wikipedia.org/wiki/Bloom_filter) to speed up queries by reducing the amount of data Loki needs to load from the store and iterate through.
Loki is often used to run "needle in a haystack" queries; these are queries where a large number of log lines are searched, but only a few log lines match the query.
Single store BoltDB Shipper is a legacy storage option recommended for Loki 2.0 through 2.7.x and is not recommended for new deployments. The [TSDB](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/) is the recommended index for Loki 2.8 and newer.
{{% /admonition %}}
{{< /admonition >}}
BoltDB Shipper lets you run Grafana Loki without any dependency on NoSQL stores for storing the index.
It instead stores the index locally in BoltDB files and keeps shipping those files to a shared object store, that is, the same object store that is used for storing chunks.
It also keeps syncing BoltDB files from the shared object store to a configured local directory, to pick up index entries created by other services of the same Loki cluster.
This helps run Loki with one less dependency and also saves storage costs, since object stores are likely to be much cheaper than a hosted NoSQL store or a self-hosted Cassandra instance.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
BoltDB shipper works best with 24h periodic index files. It is a requirement to have the index period set to 24h for either active or upcoming usage of boltdb-shipper.
If boltdb-shipper has already created index files with a 7-day period, and you want to retain previous data, add a new schema config using boltdb-shipper with a future date and the index files period set to 24h.
{{% /admonition %}}
{{< /admonition >}}
## Example Configuration
@@ -76,9 +76,9 @@ they both having shipped files for day `18371` and `18372` with prefix `loki_ind
...
```
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Loki also adds a timestamp to the file names, to avoid overwriting files when Ingesters run with the same name and without persistent storage. The timestamps are not shown here for simplicity.
{{% /admonition %}}
{{< /admonition >}}
Let us talk in more depth about how both Ingesters and Queriers work when running them with BoltDB Shipper.
@@ -89,9 +89,9 @@ and the BoltDB Shipper looks for new and updated files in that directory at 1 mi
When running Loki in microservices mode, there could be multiple ingesters serving write requests.
Each ingester generates BoltDB files locally.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
To avoid any loss of index when an ingester crashes, we recommend running ingesters as a StatefulSet (when using Kubernetes) with a persistent storage for storing index files.
{{% /admonition %}}
{{< /admonition >}}
When chunks are flushed, they are available for reads in the object store instantly. The index is not available instantly, since we upload every 15 minutes with the BoltDB shipper.
Ingesters expose a new RPC that lets queriers query the ingester's local index for chunks which were recently flushed but whose index may not yet be available to queriers.
@@ -137,9 +137,9 @@ While using `boltdb-shipper` avoid configuring WriteDedupe cache since it is use
Compactor is a BoltDB Shipper specific service that reduces the index size by deduping the index and merging all the files to a single file per table.
We recommend running a Compactor since a single Ingester creates 96 files per day, which include a lot of duplicate index entries, and querying multiple files per table adds to the overall query latency.
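A minimal Compactor block might look like the following sketch (the directory and interval are illustrative; store-specific settings are omitted):

```yaml
compactor:
  # Scratch space used while downloading, merging, and deduping index files.
  working_directory: /loki/compactor
  # How often to check for tables that need compaction.
  compaction_interval: 10m
```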
{{% admonition type="note" %}}
{{< admonition type="note" >}}
There should be only one compactor instance running at a time; otherwise, it could create problems and may lead to data loss.
Unlike the other core components of Loki, the chunk store is not a separate
service, job, or process, but rather a library embedded in the two services
that need to access Loki data: the [ingester]({{< relref "../../get-started/components#ingester" >}}) and [querier]({{< relref "../../get-started/components#querier" >}}).
{{% /admonition %}}
{{< /admonition >}}
The chunk store relies on a unified interface to the
"[NoSQL](https://en.wikipedia.org/wiki/NoSQL)" stores (DynamoDB, Bigtable, and
@@ -22,9 +22,9 @@ Log entry deletion relies on configuration of the custom logs retention workflow
Enable log entry deletion by setting `retention_enabled` to true in the compactor's configuration and setting `deletion_mode` to `filter-only` or `filter-and-delete` in the runtime config.
`delete_request_store` also needs to be configured when retention is enabled to process delete requests; this setting determines the storage bucket that stores the delete requests.
{{% admonition type="warning" %}}
{{< admonition type="warning" >}}
Be very careful when enabling retention. It is strongly recommended that you also enable versioning on your objects in object storage to allow you to recover from accidental misconfiguration of a retention setting. If you want to enable deletion but do not want to enforce retention, configure the `retention_period` setting with a value of `0s`.
{{% /admonition %}}
{{< /admonition >}}
Because it is a runtime configuration, `deletion_mode` can be set per-tenant, if desired.
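Put together, a sketch of the relevant settings (the store name and mode are illustrative) could look like:

```yaml
compactor:
  retention_enabled: true
  # Object store used to persist delete requests.
  delete_request_store: s3

limits_config:
  # Can also be overridden per tenant in the runtime configuration file.
  deletion_mode: filter-and-delete
```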
Retention in Grafana Loki is achieved through the [Compactor](#compactor).
By default the `compactor.retention-enabled` flag is not set, so the logs sent to Loki live forever.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
If you have a lifecycle policy configured on the object store, please ensure that it is longer than the retention period.
{{% /admonition %}}
{{< /admonition >}}
Granular retention policies to apply retention at per tenant or per stream level are also supported by the Compactor.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
The Compactor does not support retention on [legacy index types](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#index-storage). Please use the [Table Manager](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/table-manager/) when using legacy index types.
Both the Table manager and legacy index types are deprecated and may be removed in future major versions of Loki.
{{% /admonition %}}
{{< /admonition >}}
## Compactor
The Compactor is responsible for compaction of index files and applying log retention.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Run the Compactor as a singleton (a single instance).
{{% /admonition %}}
{{< /admonition >}}
The Compactor loops to apply compaction and retention at every `compactor.compaction-interval`, or as soon as possible if running behind.
Both compaction and retention are idempotent. If the Compactor restarts, it will continue from where it left off.
@@ -49,9 +49,9 @@ Chunks cannot be deleted immediately for the following reasons:
- It provides a short window of time in which to cancel chunk deletion in the case of a configuration mistake.
Marker files should be stored on a persistent disk to ensure that the chunks pending for deletion are processed even if the Compactor process restarts.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Grafana Labs recommends running Compactor as a stateful deployment (StatefulSet when using Kubernetes) with a persistent storage for storing marker files.
{{% /admonition %}}
{{< /admonition >}}
### Retention Configuration
@@ -82,9 +82,9 @@ storage_config:
bucket_name: loki
```
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Retention is only available if the index period is 24h. Single store TSDB and single store BoltDB require a 24h index period.
{{% /admonition %}}
{{< /admonition >}}
`retention_enabled` should be set to true. Without this, the Compactor will only compact tables.
@@ -107,9 +107,9 @@ There are two ways of setting retention policies:
- `retention_period` which is applied globally for all log streams.
- `retention_stream` which is only applied to log streams matching the selector.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
The minimum retention period is 24h.
{{% /admonition %}}
{{< /admonition >}}
This example configures global retention that applies to all tenants (unless overridden by configuring per-tenant overrides):
@@ -125,9 +125,9 @@ limits_config:
...
```
{{% admonition type="note" %}}
{{< admonition type="note" >}}
You can only use label matchers in the `selector` field of a `retention_stream` definition. Arbitrary LogQL expressions are not supported.
{{% /admonition %}}
{{< /admonition >}}
Per tenant retention can be defined by configuring [runtime overrides](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#runtime-configuration-file). For example:
@@ -156,9 +156,9 @@ Retention period for a given stream is decided based on the first match in this
4. The global `retention_period` will be applied if none of the above match.
5. If no global `retention_period` is specified, the default value of `744h` (31 days) retention is used.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
The larger the priority value, the higher the priority.
{{% /admonition %}}
{{< /admonition >}}
Stream matching uses the same syntax as Prometheus label matching:
@@ -194,16 +194,16 @@ Alternatively, the `table-manager.retention-period` and
provided retention period needs to be a duration represented as a string that
can be parsed using the Prometheus common model [ParseDuration](https://pkg.go.dev/github.com/prometheus/common/model#ParseDuration). Examples: `7d`, `1w`, `168h`.
{{% admonition type="warning" %}}
{{< admonition type="warning" >}}
The retention period must be a multiple of the index and chunks table
`period`, configured in the [`period_config`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#period_config) block.
See the [Table Manager](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/table-manager/#retention) documentation for
more information.
{{% /admonition %}}
{{< /admonition >}}
{{% admonition type="note" %}}
{{< admonition type="note" >}}
To avoid querying of data beyond the retention period, the `max_query_lookback` config in [`limits_config`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config) must be set to a value less than or equal to what is set in `table_manager.retention_period`.
{{% /admonition %}}
{{< /admonition >}}
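For example, assuming a 168h (1 week) table period, a consistent sketch of both settings might be (the values are illustrative):

```yaml
table_manager:
  retention_deletes_enabled: true
  # Must be a multiple of the 168h index/chunks table period.
  retention_period: 1344h

limits_config:
  # Keep queries from looking back beyond the retention period.
  max_query_lookback: 1344h
```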
When using S3 or GCS, the bucket storing the chunks needs to have the expiry
Table Manager is only needed if you are using a multi-store [backend](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/). If you are using either TSDB (recommended) or BoltDB (deprecated), you do not need the Table Manager.
{{% /admonition %}}
{{< /admonition >}}
Grafana Loki supports storing indexes and chunks in table-based data storages. When
such a storage type is used, multiple tables are created over time: each
The `| keep` expression will keep only the specified labels in the pipeline and drop all the other labels.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
The keep stage will not drop the `__error__` or `__error_details__` labels added by Loki at query time. To drop these labels, refer to the [drop](#drop-labels-expression) stage.
Query acceleration using blooms is an [experimental feature](/docs/release-life-cycle/). Engineering and on-call support is not available. No SLA is provided.
{{% /admonition %}}
{{< /admonition >}}
If [bloom filters][] are enabled, you can write LogQL queries using [structured metadata][] to benefit from query acceleration.
Loki exposes an HTTP API for pushing, querying, and tailing log data, as well
as for viewing and managing cluster information.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Note that authorization is not part of the Loki API.
Authorization needs to be done separately, for example, using an open-source load-balancer such as NGINX.
{{% /admonition %}}
{{< /admonition >}}
## Endpoints
@@ -30,9 +30,9 @@ A [list of clients]({{< relref "../send-data" >}}) can be found in the clients d
### Query endpoints
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Requests sent to the query endpoints must use valid LogQL syntax. For more information, see the [LogQL]({{< relref "../query" >}}) section of the documentation.
{{% /admonition %}}
{{< /admonition >}}
These HTTP endpoints are exposed by the `querier`, `query-frontend`, `read`, and `all` components:
@@ -112,10 +112,10 @@ These HTTP endpoints are exposed by all individual components:
### Deprecated endpoints
{{% admonition type="note" %}}
{{< admonition type="note" >}}
The following endpoints are deprecated. While they still exist and work, they should not be used for new deployments.
Existing deployments should upgrade to use the supported endpoints.
{{% /admonition %}}
{{< /admonition >}}
| Deprecated | Replacement |
| ---------- | ----------- |
@@ -154,9 +154,9 @@ The API accepts several formats for timestamps:
- A floating point number is a Unix timestamp with fractions of a second.
- A string in `RFC3339` and `RFC3339Nano` format, as supported by Go's [time](https://pkg.go.dev/time) package.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
When using `/api/v1/push`, you must send the timestamp as a string and not a number, otherwise the endpoint will return a 400 error.
{{% /admonition %}}
{{< /admonition >}}
### Statistics
@@ -1440,9 +1440,9 @@ Query parameters:
- `request_id=<request_id>`: Identifies the delete request to cancel; IDs are found using the `delete` endpoint.
- `force=<boolean>`: When the `force` query parameter is true, partially completed delete requests will be canceled.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Some data from the request may still be deleted, and the delete request will be listed as 'processed'.
- `MINOR` (roughly once a quarter): these releases include new features which generally do not break backwards-compatibility, but from time to time we might introduce _minor_ breaking changes, and we will specify these in our upgrade docs.
- `PATCH` (roughly once or twice a month): these releases include bug and security fixes which do not break backwards-compatibility.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
While our naming scheme resembles [Semantic Versioning](https://semver.org/), at this time we do not strictly follow its
guidelines to the letter. Our goal is to provide regular releases that are as stable as possible, and we take backwards-compatibility
seriously. As with any software, always read the [release notes](https://grafana.com/docs/loki/<LOKI_VERSION>/release-notes/) and the [upgrade guide](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/) whenever
choosing a new version of Loki to install.
{{% /admonition %}}
{{< /admonition >}}
New releases are based on a [weekly release](#weekly-releases) which we have vetted for stability over a number of weeks.
@@ -78,8 +78,8 @@ List of security fixes for 2.3.x.
* [4020](https://github.com/grafana/loki/pull/4020) **simonswine**: Restrict path segments in TenantIDs (CVE-2021-36156 CVE-2021-36157).
{{% admonition type="note" %}}
Exploitation of this vulnerability requires the ability for an attacker to craft and send directly to Loki an `X-Scope-OrgID` header, end users should not have the ability to create and send this header directly to Loki as it controls access to tenants and is important to control setting of this header for proper tenant isolation and security. We always recommend having a proxy or gateway be responsible for setting the `X-Scope-OrgID`.{{% /admonition %}}
{{< admonition type="note" >}}
Exploitation of this vulnerability requires the ability for an attacker to craft and send an `X-Scope-OrgID` header directly to Loki. End users should not have the ability to create and send this header directly to Loki, as it controls access to tenants, and it is important to control the setting of this header for proper tenant isolation and security. We always recommend having a proxy or gateway be responsible for setting the `X-Scope-OrgID`.{{< /admonition >}}
@@ -37,9 +37,9 @@ For more information, see [Ingesting logs to Loki using OpenTelemetry Collector]
The following clients have been developed by the Loki community or other third-parties and can be used to send log data to Loki.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Grafana Labs cannot provide support for third-party clients. Once an issue has been determined to be with the client and not Loki, it is the responsibility of the customer to work with the associated vendor or project for bug fixes to these clients.
{{% /admonition %}}
{{< /admonition >}}
The following are popular third-party Loki clients:
@@ -12,9 +12,9 @@ Grafana Loki officially supports a Docker plugin that will read logs from Docker
containers and ship them to Loki. The plugin can be configured to send the logs
to a private Loki instance or [Grafana Cloud](/oss/loki).
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Docker plugins are not supported on Windows; see the [Docker Engine managed plugin system](https://docs.docker.com/engine/extend) documentation for more information.
{{% /admonition %}}
{{< /admonition >}}
Documentation on configuring the Loki Docker Driver can be found on the
The Loki logging driver still uses the json-log driver in combination with sending logs to Loki; this is mainly useful to keep the `docker logs` command working.
You can adjust file size and rotation using the respective log options `max-size` and `max-file`. Keep in mind that default values for these options are not taken from the json-log configuration.
You can deactivate this behavior by setting the log option `no-file` to true.
{{% /admonition %}}
{{< /admonition >}}
## Change the default logging driver
@@ -65,11 +65,11 @@ Options for the logging driver can also be configured with `log-opts` in the
}
}
```
{{% admonition type="note" %}}
{{< admonition type="note" >}}
log-opt configuration options in daemon.json must be provided as
strings. Boolean and numeric values (such as the value for loki-batch-size in
the example above) must therefore be enclosed in quotes (`"`).
{{% /admonition %}}
{{< /admonition >}}
After changing `daemon.json`, restart the Docker daemon for the changes to take
effect. All **newly created** containers from that host will then send logs to Loki via the driver.
@@ -104,9 +104,9 @@ docker-compose -f docker-compose.yaml up
Once deployed, the Grafana service will send its logs to Loki.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Stack name and service name for each Swarm service, and project name and service name for each Compose service, are automatically discovered and sent as Loki labels; this way, you can filter by them in Grafana.
{{% /admonition %}}
{{< /admonition >}}
## Labels
@@ -150,9 +150,9 @@ services:
- "3000:3000"
```
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Note the `loki-pipeline-stages: |` syntax, which lets you keep the indentation correct.
{{% /admonition %}}
{{< /admonition >}}
When using `docker run` you can also pass the value via a string parameter, like so:
@@ -99,9 +99,9 @@ Ephemeral jobs can quite easily run afoul of cardinality best practices. During
For those using Cloudwatch and wishing to test out Loki in a low-risk way, this workflow allows piping Cloudwatch logs to Loki regardless of the event source (EC2, Kubernetes, Lambda, ECS, etc) without setting up a set of Promtail daemons across their infrastructure. However, running Promtail as a daemon on your infrastructure is the best-practice deployment strategy in the long term for flexibility, reliability, performance, and cost.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Propagating logs from Cloudwatch to Loki means you'll still need to _pay_ for Cloudwatch.
{{% /admonition %}}
{{< /admonition >}}
### VPC Flow logs
@@ -165,9 +165,9 @@ Incoming logs can have seven special labels assigned to them which can be used i
### Promtail labels
{{% admonition type="note" %}}
{{< admonition type="note" >}}
This section is relevant if you are running Promtail between lambda-promtail and the end Loki deployment; it was used to circumvent `out of order` problems prior to the Loki v2.4 release, which removed the ordering constraint.
{{% /admonition %}}
{{< /admonition >}}
As stated earlier, this workflow moves the worst case stream cardinality from `number_of_log_streams` -> `number_of_log_groups` * `number_of_promtails`. For this reason, each Promtail must have a unique label attached to logs it processes (ideally via something like `--client.external-labels=promtail=${HOSTNAME}`) and it's advised to run a small number of Promtails behind a load balancer according to your throughput and redundancy needs.
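The same label can be set in the Promtail client configuration instead of on the command line; a sketch (the URL is illustrative, and `${HOSTNAME}` expansion assumes Promtail runs with `-config.expand-env=true`):

```yaml
clients:
  - url: http://loki:3100/loki/api/v1/push
    external_labels:
      # Unique per-Promtail label, to keep streams from different Promtails distinct.
      promtail: ${HOSTNAME}
```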
@@ -195,9 +195,9 @@ The provided Terraform and CloudFormation files are meant to cover the default u
## Example Promtail Config
{{% admonition type="note" %}}
{{< admonition type="note" >}}
This should be run in conjunction with a Promtail-specific label attached, ideally via a flag argument like `--client.external-labels=promtail=${HOSTNAME}`. It will receive writes via the push-api on ports `3500` (http) and `3600` (grpc).
@@ -93,9 +93,9 @@ Since the OpenTelemetry protocol differs from the Loki storage model, here is ho
- service.name
- service.namespace
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Because Loki has a default limit of 15 index labels, we recommend storing only select resource attributes as index labels. Although the default config selects more than 15 Resource Attributes, it should be fine since a few are mutually exclusive.
{{% /admonition %}}
{{< /admonition >}}
- Timestamp: One of `LogRecord.TimeUnixNano` or `LogRecord.ObservedTimestamp`, based on which one is set. If both are not set, the ingestion timestamp will be used.
For `older_than` to work, you must be using the [timestamp]({{< relref "./timestamp" >}}) stage to set the timestamp from the ingested log line _before_ applying the `drop` stage.
@@ -8,11 +8,11 @@ description: The 'structured_metadata' Promtail pipeline stage
The `structured_metadata` stage is an action stage that takes data from the extracted map and
modifies the [structured metadata]({{< relref "../../../get-started/labels/structured-metadata" >}}) that is sent to Loki with the log entry.
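A minimal sketch of such a pipeline (the `trace_id` field name is illustrative) might look like:

```yaml
pipeline_stages:
  - json:
      expressions:
        trace_id: trace_id
  - structured_metadata:
      # Attach the extracted trace_id value as structured metadata instead of a label.
      trace_id:
```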
{{% admonition type="warning" %}}
{{< admonition type="warning" >}}
Structured metadata will be rejected by Loki unless you enable the `allow_structured_metadata` per tenant configuration (in the `limits_config`).
Structured metadata was added to chunk format V4, which is used if the schema version is greater than or equal to **13**. (See Schema Config for more details about schema versions.)
@@ -20,9 +20,9 @@ we strongly recommend migrating to TSDB.
To begin the migration, add a new [period_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#period_config) entry in your [schema_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#schema_config).
You can read more about schema config [here](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#schema-config).
{{% admonition type="note" %}}
{{< admonition type="note" >}}
You must roll out the new `period_config` change to all Loki components in order for it to take effect.
{{% /admonition %}}
{{< /admonition >}}
This example adds a new `period_config` which configures Loki to start using the TSDB index for the data ingested starting from `2023-10-20`.
@@ -48,10 +48,10 @@ Loki changes the default value of `-ruler.alertmanager-use-v2` from `false` to `
### Experimental Bloom Filters
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Experimental features are subject to rapid change and/or removal, which can introduce breaking changes even between minor versions.
They also don't follow the deprecation lifecycle of regular features.
{{% /admonition %}}
{{< /admonition >}}
The bloom compactor component, which builds bloom filter blocks for query acceleration, has been removed in favor of two new components: bloom planner and bloom builder.
Please consult the [Query Acceleration with Blooms](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/query-acceleration-blooms/) docs for more information.
@@ -78,11 +78,11 @@ All other CLI arguments (and their YAML counterparts) prefixed with `-bloom-comp
## 3.0.0
{{% admonition type="note" %}}
{{< admonition type="note" >}}
If you have questions about upgrading to Loki 3.0, please join us on the [community Slack](https://slack.grafana.com/) in the `#loki-3` channel.
Or leave a comment on this [Github Issue](https://github.com/grafana/loki/issues/12506).
{{% /admonition %}}
{{< /admonition >}}
{{< admonition type="tip" >}}
If you have not yet [migrated to TSDB](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/migrate/migrate-to-tsdb/), do so before you upgrade to Loki 3.0.
@@ -165,11 +165,11 @@ This enforces chunks and index files to reside together in the same storage buck
We are removing the shared store setting in an effort to simplify storage configuration and reduce the possibility for misconfiguration.
{{% admonition type="warning" %}}
{{< admonition type="warning" >}}
With this change Loki no longer allows storing chunks and indexes for a given period in different storage buckets.
This is a breaking change for setups that store chunks and indexes in different storage buckets by setting `-boltdb.shipper.shared-store` or `-tsdb.shipper.shared-store` to a value
different from `object_store` in `period_config`.
{{% /admonition %}}
{{< /admonition >}}
- If you have not configured `-boltdb.shipper.shared-store`,`-tsdb.shipper.shared-store` or their corresponding YAML setting before, no changes are required as part of the upgrade.
- If you have configured `-boltdb.shipper.shared-store` or its YAML setting:
@@ -194,9 +194,9 @@ period_config:
period: 24h
```
{{% admonition type="note" %}}
{{< admonition type="note" >}}
`path_prefix` only applies to TSDB and BoltDB indexes. This setting has no effect on [legacy indexes](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#index-storage).
{{% /admonition %}}
{{< /admonition >}}
`path_prefix` defaults to `index/`, which is the same as the default value of the removed configurations.
@@ -290,7 +290,7 @@ This new metric will provide a more clear signal that there is an issue with ing
#### Changes to default configuration values in 3.0
{{% responsive-table %}}
{{< responsive-table >}}
| configuration | new default | old default | notes |
| `legacy-read-mode` | false | true | Deprecated. It will be removed in the next minor release. |
{{% /responsive-table %}}
{{< /responsive-table >}}
#### Automatic stream sharding is enabled by default
@@ -529,10 +529,10 @@ limits_config:
retention_period: 744h
```
{{% admonition type="note" %}}
{{< admonition type="note" >}}
In previous versions, the zero value of `0` or `0s` results in **immediate deletion of all logs**;
only in 2.8 and later releases does the zero value disable retention.
{{% /admonition %}}
{{< /admonition >}}
#### metrics.go log line `subqueries` replaced with `splits` and `shards`
@@ -546,9 +546,9 @@ In 2.8 we no longer include `subqueries` in metrics.go, it does still exist in t
Instead, you can now use `splits` to see how many split-by-time intervals were created and `shards` to see the total number of shards created for a query.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Currently, not every query can be sharded, and a shards value of zero is a good indicator that the query could not be sharded.
{{% /admonition %}}
{{< /admonition >}}
### Promtail
@@ -579,9 +579,9 @@ ruler:
#### query-frontend Kubernetes headless service changed to load balanced service
{{% admonition type="note" %}}
{{< admonition type="note" >}}
This is relevant only if you are using [jsonnet for deploying Loki in Kubernetes](/docs/loki/<LOKI_VERSION>/installation/tanka/).
{{% /admonition %}}
{{< /admonition >}}
The `query-frontend` Kubernetes service was previously headless and was used for two purposes:
* Distributing the Loki query requests amongst all the available Query Frontend pods.
@@ -1114,9 +1114,9 @@ In Loki 2.2 we changed the internal version of our chunk format from v2 to v3, t
This makes it important to first upgrade to 2.0, 2.0.1, or 2.1 **before** upgrading to 2.2 so that if you need to rollback for any reason you can do so easily.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
2.0 and 2.0.1 are identical in every aspect except that 2.0.1 contains the code necessary to read the v3 chunk format. Therefore, if you are on 2.0, upgrade to 2.2, and then want to roll back, you must roll back to 2.0.1.
{{% /admonition %}}
{{< /admonition >}}
### Loki Config
@@ -1260,14 +1260,14 @@ This likely only affects a small portion of tanka users because the default sche
}
```
{{% admonition type="note" %}}
{{< admonition type="note" >}}
If you had set `index_period_hours` to a value other than 168h (the previous default), you must update the `period:` in the above config to match what you chose.
{{% /admonition %}}
{{< /admonition >}}
{{% admonition type="note" %}}
{{< admonition type="note" >}}
We have changed the default index store to `boltdb-shipper`. It's important to add `using_boltdb_shipper: false,` until you are ready to change (if you want to change).
{{% /admonition %}}
{{< /admonition >}}
Changing the jsonnet config to use the `boltdb-shipper` type is the same as [below](#upgrading-schema-to-use-boltdb-shipper-andor-v11-schema) where you need to add a new schema section.
@@ -1309,9 +1309,9 @@ _THIS BEING SAID_ we are not expecting problems, our testing so far has not unco
Report any problems via GitHub issues or reach us on the #loki slack channel.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
If you are using boltdb-shipper and were running with high availability and separate filesystems: this was a poorly documented and even more experimental mode we toyed with using boltdb-shipper. For now, we have removed the documentation and also any kind of support for this mode.
{{% /admonition %}}
{{< /admonition >}}
To use boltdb-shipper in 2.0 you need a shared storage (S3, GCS, etc), the mode of running with separate filesystem stores in HA using a ring is not officially supported.
@@ -1454,9 +1454,9 @@ schema_config:
```
If you are not on `schema: v11` this would be a good opportunity to make that change _in the new schema config_ also.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
If the current time in your timezone is after midnight UTC already, set the date one additional day forward.
{{% /admonition %}}
{{< /admonition >}}
There was also a significant overhaul of the boltdb-shipper internals. This should not be visible to a user, but as this
feature is experimental and under development, bugs are possible!
@@ -1515,9 +1515,9 @@ Defaulting to `gcs,bigtable` was confusing for anyone using ksonnet with other s
## 1.5.0
{{% admonition type="note" %}}
{{< admonition type="note" >}}
The required upgrade path outlined for version 1.4.0 below is still true for moving to 1.5.0 from any release older than 1.4.0 (e.g. 1.3.0 -> 1.5.0 needs to also look at the 1.4.0 upgrade requirements).
{{% /admonition %}}
{{< /admonition >}}
### Breaking config changes!
@@ -1571,9 +1571,9 @@ Not every environment will allow this capability however, it's possible to restr
#### Filesystem
{{% admonition type="note" %}}
{{< admonition type="note" >}}
The location where Loki looks for the provided config file in the Docker image has changed.
{{% /admonition %}}
{{< /admonition >}}
In 1.4.0 and earlier the included config file in the docker container was using directories: