title: Manage bloom filter building and querying (Experimental)
menuTitle: Bloom filters
description: Describes how to enable and configure query acceleration with bloom filters.
weight:
keywords:
aliases:
- ./query-acceleration-blooms
---
# Manage bloom filter building and querying (Experimental)
{{< admonition type="warning" >}}
In Loki and Grafana Enterprise Logs (GEL), query acceleration using bloom filters is an [experimental feature](/docs/release-life-cycle/). Engineering and on-call support is not available. No SLA is provided. Note that this feature is intended for users who are ingesting more than 75TB of logs a month, as it is designed to accelerate queries against large volumes of logs.
In Grafana Cloud, query acceleration using bloom filters is enabled as a [public preview](/docs/release-life-cycle/) for select large-scale customers that are ingesting more than 75TB of logs a month. Limited support and no SLA are provided.
{{< /admonition >}}
Loki leverages [bloom filters](https://en.wikipedia.org/wiki/Bloom_filter) to speed up queries by reducing the amount of data Loki needs to load from the store and iterate through.
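As a rough sketch of what enabling this looks like, assuming the `bloom_build` and `bloom_gateway` blocks and the bloom-related `limits_config` options from the Loki 3.x configuration reference (key names change between releases, so verify them against the configuration documentation for your version):

```yaml
# Sketch: enable bloom building and bloom filtering at query time.
# Key names follow the Loki 3.x configuration reference; verify against your release.
bloom_build:
  enabled: true

bloom_gateway:
  enabled: true
  client:
    # Assumed service discovery address for the bloom gateways; adjust to your deployment.
    addresses: dnssrvnoa+_grpc._tcp.bloom-gateway-headless.default.svc.cluster.local

limits_config:
  bloom_creation_enabled: true          # build blooms for this tenant's streams
  bloom_gateway_enable_filtering: true  # use blooms to filter chunks at query time
```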
title: Audit data propagation latency and correctness using Loki Canary
menuTitle: Loki Canary
description: Describes how to use Loki Canary to audit the log-capturing performance of a Grafana Loki cluster to ensure Loki is ingesting logs without data loss.
weight:
---
# Audit data propagation latency and correctness using Loki Canary
Loki Canary is a standalone app that audits the log-capturing performance of a Grafana Loki cluster.
This component emits logs and periodically queries for them, making sure that Loki is ingesting logs without any data loss.
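As an illustration, here is a trimmed Kubernetes container spec for running the canary against a Loki endpoint. The `-addr`, `-labelname`, and `-labelvalue` flags come from the Loki Canary documentation; the service address is an assumption for your environment:

```yaml
# Sketch: run loki-canary as a DaemonSet/sidecar container.
containers:
  - name: loki-canary
    image: grafana/loki-canary:latest
    args:
      - -addr=loki-gateway.loki.svc.cluster.local:80  # assumed Loki endpoint
      - -labelname=pod                                # stream label the canary queries back
      - -labelvalue=$(POD_NAME)                       # unique value per canary instance
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
```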
title: Install dashboards, alerts, and recording rules
menuTitle: Mixins
description: Describes the Loki mixins, how to configure and install the dashboards, alerts, and recording rules.
weight: 100
---
# Install dashboards, alerts, and recording rules
Loki is instrumented to expose metrics about itself via the `/metrics` endpoint, which is designed to be scraped by Prometheus. Each Loki release includes a mixin. The Loki mixin provides a set of Grafana dashboards, Prometheus recording rules, and alerts for monitoring Loki.
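For example, a minimal Prometheus scrape job for that endpoint might look like the following; the target address is a placeholder for your deployment:

```yaml
# Sketch: scrape Loki's /metrics endpoint so the mixin dashboards,
# recording rules, and alerts have data to work with.
scrape_configs:
  - job_name: loki
    static_configs:
      - targets: ['loki:3100']  # assumed host:port of a Loki instance
```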
title: Monitor tenant limits using the Overrides Exporter
menuTitle: Overrides Exporter
description: Describes how the Overrides Exporter exposes tenant limits as Prometheus metrics.
weight:
---
# Monitor tenant limits using the Overrides Exporter
Loki is a multi-tenant system that supports applying limits to each tenant as a mechanism for resource management. The `overrides-exporter` module exposes these limits as Prometheus metrics to help operators better understand tenant behavior.
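As a sketch, you can run the module as its own Loki target (`-target=overrides-exporter`) and scrape it like any other component; the job name and target address below are placeholders for your deployment:

```yaml
# Sketch: scrape a Loki instance started with `-target=overrides-exporter`.
scrape_configs:
  - job_name: loki-overrides-exporter
    static_configs:
      - targets: ['loki-overrides-exporter:3100']  # assumed host:port
```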
title: Enforce rate limits and push request validation
menuTitle: Rate limits
description: Describes the different rate limits and push request validation and their error handling.
weight:
---
# Enforce rate limits and push request validation
Loki will reject requests if they exceed a usage threshold (rate limit error) or if they are invalid (validation error).
All occurrences of these errors can be observed using the `loki_discarded_samples_total` and `loki_discarded_bytes_total` metrics. The sections below describe the various possible reasons specified in the `reason` label of these metrics.
It is recommended that Loki operators set up alerts or dashboards with these metrics to detect when rate limit or validation errors occur.
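A sketch of such an alert, assuming the metric carries `tenant` and `reason` labels (label names can differ between Loki versions):

```yaml
# Sketch of a Prometheus alerting rule on discarded samples.
# The `tenant` and `reason` label names are assumptions; verify against your metrics.
groups:
  - name: loki-discarded-samples
    rules:
      - alert: LokiDiscardedSamples
        expr: sum by (tenant, reason) (rate(loki_discarded_samples_total[5m])) > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: 'Tenant {{ $labels.tenant }} is dropping samples: {{ $labels.reason }}'
```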
### Terminology
Rate limits are enforced when Loki cannot handle more requests from a tenant.
### `rate_limited`
This rate limit is enforced when a tenant has exceeded their configured log ingestion rate limit.
One solution if you're seeing samples dropped due to `rate_limited` is simply to increase the rate limits on your Loki cluster. These limits can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. The config options to use are `ingestion_rate_mb` and `ingestion_burst_size_mb`.
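For example, raised global limits could look like this (the values are illustrative, not recommendations):

```yaml
# Sketch: raise the global ingestion limits. Values are illustrative.
limits_config:
  ingestion_rate_mb: 10        # sustained ingestion rate per tenant, in MB/s
  ingestion_burst_size_mb: 20  # short bursts allowed above the sustained rate
```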
Note that you'll want to make sure your Loki cluster has sufficient resources provisioned.
### `per_stream_rate_limit`
This limit is enforced when a single stream reaches its rate limit.
Each stream has a rate limit applied to it to prevent individual streams from overwhelming the set of ingesters it is distributed to (the size of that set is equal to the `replication_factor` value).
This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. The config options to adjust are `per_stream_rate_limit` and `per_stream_rate_limit_burst`.
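For example (values illustrative):

```yaml
# Sketch: raise the per-stream limits. Values are illustrative.
limits_config:
  per_stream_rate_limit: 5MB         # sustained rate per stream
  per_stream_rate_limit_burst: 20MB  # burst allowance per stream
```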
title: Isolate tenant workloads using shuffle sharding
menuTitle: Shuffle sharding
description: Describes how to isolate tenant workloads from other tenant workloads using shuffle sharding to provide a better sharing of resources.
weight:
---
# Isolate tenant workloads using shuffle sharding
Shuffle sharding is a resource-management technique used to isolate tenant workloads from other tenant workloads, to give each tenant more of a single-tenant experience when running in a shared cluster.
This technique is explained by AWS in their article [Workload isolation using shuffle-sharding](https://aws.amazon.com/builders-library/workload-isolation-using-shuffle-sharding/).
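A minimal sketch of what enabling shuffle sharding on the query path can look like, assuming the `max_queriers_per_tenant` limit (where `0` means no shuffle sharding; check the configuration reference for your version):

```yaml
# Sketch: limit each tenant to a subset of queriers.
# 0 disables shuffle sharding (tenants can use all queriers).
limits_config:
  max_queriers_per_tenant: 10
```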
title: Speed up ingester rollout using zone awareness
menuTitle: Zone aware ingesters
description: Describes how to migrate from a single ingester StatefulSet to three zone aware ingester StatefulSets.
weight:
---
# Speed up ingester rollout using zone awareness
The Loki zone aware ingesters are used by Grafana Labs to make rollouts of large Loki deployments easier. You can think of them as three logical zones, although with some extra Kubernetes configuration you could deploy them in separate zones.
These instructions assume you are using the zone aware ingester jsonnet deployment.
1. clean up any remaining temporary config from the migration, for example `multi_zone_ingester_migration_enabled: true` is no longer needed.
1. ensure that all the old default ingester PVC/PV are removed.