From 2cccc1bc6893c0b129bf2c81dbcd449ff837a53d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Erik=20Sj=C3=B6lund?= Date: Fri, 31 Mar 2023 19:15:54 +0200 Subject: [PATCH] Docs: Fix typos and grammar (#8966) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit **What this PR does / why we need it**: Fixes typos and grammar. Most of them were occurrences of `it's` that should be `its`. **Which issue(s) this PR fixes**: None **Special notes for your reviewer**: **Checklist** - [x] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [ ] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` Signed-off-by: Erik Sjölund --- docs/sources/alert/_index.md | 4 ++-- docs/sources/clients/k6/write-scenario.md | 2 +- docs/sources/clients/promtail/configuration.md | 6 +++--- docs/sources/fundamentals/architecture/components/_index.md | 6 +++--- .../installation/helm/migrate-from-distributed/index.md | 2 +- docs/sources/installation/helm/reference.md | 2 +- docs/sources/operations/storage/wal.md | 2 +- docs/sources/storage/_index.md | 4 ++-- docs/sources/upgrading/_index.md | 2 +- production/helm/loki/values.yaml | 6 +++--- 10 files changed, 18 insertions(+), 18 deletions(-) diff --git a/docs/sources/alert/_index.md b/docs/sources/alert/_index.md index 4df0343cb4..7eeb0962d7 100644 --- a/docs/sources/alert/_index.md +++ b/docs/sources/alert/_index.md @@ -164,7 +164,7 @@ Sometimes you want to know whether _any_ instance of something has occurred. Ale ### Alerting on high-cardinality sources -Another great use case is alerting on high cardinality sources. These are things which are difficult/expensive to record as metrics because the potential label set is huge. A great example of this is per-tenant alerting in multi-tenanted systems like Loki. It's a common balancing act between the desire to have per-tenant metrics and the cardinality explosion that ensues (adding a single _tenant_ label to an existing Prometheus metric would increase it's cardinality by the number of tenants). +Another great use case is alerting on high cardinality sources. These are things which are difficult/expensive to record as metrics because the potential label set is huge. A great example of this is per-tenant alerting in multi-tenanted systems like Loki. It's a common balancing act between the desire to have per-tenant metrics and the cardinality explosion that ensues (adding a single _tenant_ label to an existing Prometheus metric would increase its cardinality by the number of tenants). Creating these alerts in LogQL is attractive because these metrics can be extracted at _query time_, meaning we don't suffer the cardinality explosion in our metrics store. @@ -241,7 +241,7 @@ jobs: One option to scale the Ruler is by scaling it horizontally. However, with multiple Ruler instances running they will need to coordinate to determine which instance will evaluate which rule. Similar to the ingesters, the Rulers establish a hash ring to divide up the responsibilities of evaluating rules. -The possible configurations are listed fully in the [configuration documentation]({{}}), but in order to shard rules across multiple Rulers, the rules API must be enabled via flag (`-ruler.enable-api`) or config file parameter. Secondly, the Ruler requires it's own ring be configured. 
From there the Rulers will shard and handle the division of rules automatically. Unlike ingesters, Rulers do not hand over responsibility: all rules are re-sharded randomly every time a Ruler is added to or removed from the ring. +The possible configurations are listed fully in the [configuration documentation]({{}}), but in order to shard rules across multiple Rulers, the rules API must be enabled via flag (`-ruler.enable-api`) or config file parameter. Secondly, the Ruler requires its own ring to be configured. From there the Rulers will shard and handle the division of rules automatically. Unlike ingesters, Rulers do not hand over responsibility: all rules are re-sharded randomly every time a Ruler is added to or removed from the ring. A full sharding-enabled Ruler example is: diff --git a/docs/sources/clients/k6/write-scenario.md b/docs/sources/clients/k6/write-scenario.md index 0ecc0b4180..9edd007cb0 100644 --- a/docs/sources/clients/k6/write-scenario.md +++ b/docs/sources/clients/k6/write-scenario.md @@ -42,7 +42,7 @@ These parameters can be adjusted in the load test: * The number of virtual users (VUs) VUs can be used to control the amount of parallelism with which logs should - be pushed. Every VU runs it's own loop of iterations. + be pushed. Every VU runs its own loop of iterations. Therfore, the number of VUs has the most impact on the generated log throughput. Since generating logs is CPU-intensive, there is a threshold above which diff --git a/docs/sources/clients/promtail/configuration.md b/docs/sources/clients/promtail/configuration.md index be9f306560..27b41da845 100644 --- a/docs/sources/clients/promtail/configuration.md +++ b/docs/sources/clients/promtail/configuration.md @@ -903,14 +903,14 @@ See [Example Push Config](#example-push-config) The `windows_events` block configures Promtail to scrape windows event logs and send them to Loki. -To subcribe to a specific events stream you need to provide either an `eventlog_name` or an `xpath_query`. +To subscribe to a specific events stream you need to provide either an `eventlog_name` or an `xpath_query`. Events are scraped periodically every 3 seconds by default but can be changed using `poll_interval`. A bookmark path `bookmark_path` is mandatory and will be used as a position file where Promtail will keep record of the last event processed. This file persists across Promtail restarts. -You can set `use_incoming_timestamp` if you want to keep incomming event timestamps. By default Promtail will use the timestamp when +You can set `use_incoming_timestamp` if you want to keep incoming event timestamps. By default Promtail will use the timestamp when the event was read from the event log. Promtail will serialize JSON windows events, adding `channel` and `computer` labels from the event received. @@ -2046,7 +2046,7 @@ The `tracing` block configures tracing for Jaeger. Currently, limited to configu It's fairly difficult to tail Docker files on a standalone machine because they are in different locations for every OS. We recommend the [Docker logging driver]({{}}) for local Docker installs or Docker Compose. 
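As a companion to the recommendation just above, here is a hedged sketch of wiring the Loki Docker logging driver into a Docker Compose service. It assumes the driver plugin is already installed under the alias `loki`; the image name and the Loki URL are illustrative values, not anything prescribed by these docs.

```yaml
services:
  app:
    image: nginx:latest            # any container whose logs should go to Loki
    logging:
      driver: loki                 # assumes the Loki Docker driver plugin is installed
      options:
        # Push endpoint of your Loki instance; adjust host and port to your setup.
        loki-url: "http://localhost:3100/loki/api/v1/push"
```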
-If running in a Kubernetes environment, you should look at the defined configs which are in [helm](https://github.com/grafana/helm-charts/blob/main/charts/promtail/templates/configmap.yaml) and [jsonnet](https://github.com/grafana/loki/tree/master/production/ksonnet/promtail/scrape_config.libsonnet), these leverage the prometheus service discovery libraries (and give Promtail it's name) for automatically finding and tailing pods. The jsonnet config explains with comments what each section is for. +If running in a Kubernetes environment, you should look at the defined configs which are in [helm](https://github.com/grafana/helm-charts/blob/main/charts/promtail/templates/configmap.yaml) and [jsonnet](https://github.com/grafana/loki/tree/master/production/ksonnet/promtail/scrape_config.libsonnet), these leverage the prometheus service discovery libraries (and give Promtail its name) for automatically finding and tailing pods. The jsonnet config explains with comments what each section is for. ## Example Static Config diff --git a/docs/sources/fundamentals/architecture/components/_index.md b/docs/sources/fundamentals/architecture/components/_index.md index 6aca2dcf14..80ec7f86e1 100644 --- a/docs/sources/fundamentals/architecture/components/_index.md +++ b/docs/sources/fundamentals/architecture/components/_index.md @@ -32,11 +32,11 @@ Currently the only way the distributor mutates incoming data is by normalizing l The distributor can also rate limit incoming logs based on the maximum per-tenant bitrate. It does this by checking a per tenant limit and dividing it by the current number of distributors. This allows the rate limit to be specified per tenant at the cluster level and enables us to scale the distributors up or down and have the per-distributor limit adjust accordingly. For instance, say we have 10 distributors and tenant A has a 10MB rate limit. Each distributor will allow up to 1MB/second before limiting. Now, say another large tenant joins the cluster and we need to spin up 10 more distributors. The now 20 distributors will adjust their rate limits for tenant A to `(10MB / 20 distributors) = 500KB/s`! This is how global limits allow much simpler and safer operation of the Loki cluster. -**Note: The distributor uses the `ring` component under the hood to register itself amongst it's peers and get the total number of active distributors. This is a different "key" than the ingesters use in the ring and comes from the distributor's own [ring configuration]({{}}).** +**Note: The distributor uses the `ring` component under the hood to register itself amongst its peers and get the total number of active distributors. This is a different "key" than the ingesters use in the ring and comes from the distributor's own [ring configuration]({{}}).** ### Forwarding -Once the distributor has performed all of it's validation duties, it forwards data to the ingester component which is ultimately responsible for acknowledging the write. +Once the distributor has performed all of its validation duties, it forwards data to the ingester component which is ultimately responsible for acknowledging the write. 
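To make the rate-limit arithmetic in the distributor hunk above concrete (a 10MB per-tenant limit shared by 10 or 20 distributors), here is a hedged sketch of the per-tenant limit being divided. The option names `ingestion_rate_strategy` and `ingestion_rate_mb` come from Loki's `limits_config`; treat the snippet as an illustration and verify the names against the configuration reference for your version.

```yaml
limits_config:
  # With the global strategy each distributor enforces limit / distributor count:
  # 10 MB/s across 10 distributors is ~1 MB/s each, across 20 it is ~500 KB/s each.
  ingestion_rate_strategy: global
  ingestion_rate_mb: 10
```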
#### Replication factor @@ -44,7 +44,7 @@ In order to mitigate the chance of _losing_ data on any single ingester, the dis **Caveat: There's also an edge case where we acknowledge a write if 2 of the three ingesters do which means that in the case where 2 writes succeed, we can only lose one ingester before suffering data loss.** -Replication factor isn't the only thing that prevents data loss, though, and arguably these days it's main purpose is to allow writes to continue uninterrupted during rollouts & restarts. The `ingester` component now includes a [write ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) which persists incoming writes to disk to ensure they're not lost as long as the disk isn't corrupted. The complementary nature of replication factor and WAL ensures data isn't lost unless there are significant failures in both mechanisms (i.e. multiple ingesters die and lose/corrupt their disks). +Replication factor isn't the only thing that prevents data loss, though, and arguably these days its main purpose is to allow writes to continue uninterrupted during rollouts & restarts. The `ingester` component now includes a [write ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) which persists incoming writes to disk to ensure they're not lost as long as the disk isn't corrupted. The complementary nature of replication factor and WAL ensures data isn't lost unless there are significant failures in both mechanisms (i.e. multiple ingesters die and lose/corrupt their disks). ### Hashing diff --git a/docs/sources/installation/helm/migrate-from-distributed/index.md b/docs/sources/installation/helm/migrate-from-distributed/index.md index cb106c58e3..635aac0f53 100644 --- a/docs/sources/installation/helm/migrate-from-distributed/index.md +++ b/docs/sources/installation/helm/migrate-from-distributed/index.md @@ -19,7 +19,7 @@ This guide will walk you through migrating to the `loki` Helm Chart, v3.0 or hig We recommend having a Grafana instance available to monitor both the existing and new clusters, to make sure there is no data loss during the migration process. The `loki` chart ships with self-monitoring features, including dashboards. These are useful for monitoring the health of the new cluster as it spins up. -Start by updating your existing Grafana Agent or Promtail config (whatever is scraping logs from your environment) to _exclude_ the new deployment. The new `loki` chart ships with it's own self-monitoring mechanisms, and we want to make sure it's not scraped twice, which would produce duplicate logs. The best way to do this is via a relabel config that will drop logs from the new deployment, for example something like: +Start by updating your existing Grafana Agent or Promtail config (whatever is scraping logs from your environment) to _exclude_ the new deployment. The new `loki` chart ships with its own self-monitoring mechanisms, and we want to make sure it's not scraped twice, which would produce duplicate logs. 
The best way to do this is via a relabel config that will drop logs from the new deployment, for example something like:

 ```yaml
     - source_labels:
diff --git a/docs/sources/installation/helm/reference.md b/docs/sources/installation/helm/reference.md
index c5b642b81b..b7fece552e 100644
--- a/docs/sources/installation/helm/reference.md
+++ b/docs/sources/installation/helm/reference.md
@@ -1993,7 +1993,7 @@ false
 migrate.fromDistributed.memberlistService
 string
-If migrating from a distributed service, provide the distributed deployment's memberlist service DNS so the new deployment can join it's ring.
+If migrating from a distributed service, provide the distributed deployment's memberlist service DNS so the new deployment can join its ring.
 ""
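For readers of the `migrate.fromDistributed.memberlistService` entry above, a hedged sketch of how the corresponding Helm values might look during a migration; the key structure matches the `values.yaml` hunk later in this patch, while the service DNS name is purely an illustrative assumption.

```yaml
migrate:
  fromDistributed:
    # -- Set to true if migrating from a distributed helm chart
    enabled: true
    # -- DNS of the existing deployment's memberlist service so the new
    #    deployment can join its ring (example value, not a chart default)
    memberlistService: "loki-distributed-memberlist.loki.svc.cluster.local"
```

Setting this lets the new deployment gossip with the old distributed ring during the cutover, which is the point of the migration guide referenced above.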
diff --git a/docs/sources/operations/storage/wal.md b/docs/sources/operations/storage/wal.md index 0244da9160..d8e3705fd8 100644 --- a/docs/sources/operations/storage/wal.md +++ b/docs/sources/operations/storage/wal.md @@ -17,7 +17,7 @@ The Write Ahead Log in Loki takes a few particular tradeoffs compared to other W 1) Corruption/Deletion of the WAL prior to replaying it -In the event the WAL is corrupted/partially deleted, Loki will not be able to recover all of it's data. In this case, Loki will attempt to recover any data it can, but will not prevent Loki from starting. +In the event the WAL is corrupted/partially deleted, Loki will not be able to recover all of its data. In this case, Loki will attempt to recover any data it can, but will not prevent Loki from starting. Note: the Prometheus metric `loki_ingester_wal_corruptions_total` can be used to track and alert when this happens. diff --git a/docs/sources/storage/_index.md b/docs/sources/storage/_index.md index 7d0f27656d..959addd6ae 100644 --- a/docs/sources/storage/_index.md +++ b/docs/sources/storage/_index.md @@ -71,7 +71,7 @@ Cassandra can also be utilized for the index store and aside from the [boltdb-sh ### BigTable -Bigtable is a cloud database offered by Google. It is a good candidate for a managed index store if you're already using it (due to it's heavy fixed costs) or wish to run in GCP. +Bigtable is a cloud database offered by Google. It is a good candidate for a managed index store if you're already using it (due to its heavy fixed costs) or wish to run in GCP. ### DynamoDB @@ -112,7 +112,7 @@ schema_config: period: 168h ``` -For all data ingested before 2020-07-01, Loki used the v10 schema and then switched after that point to the more effective v11. This dramatically simplifies upgrading, ensuring it's simple to take advantages of new storage optimizations. These configs should be immutable for as long as you care about retention. +For all data ingested before 2020-07-01, Loki used the v10 schema and then switched after that point to the more effective v11. This dramatically simplifies upgrading, ensuring it's simple to take advantage of new storage optimizations. These configs should be immutable for as long as you care about retention. ## Table Manager diff --git a/docs/sources/upgrading/_index.md b/docs/sources/upgrading/_index.md index 899f3974e1..80c4511625 100644 --- a/docs/sources/upgrading/_index.md +++ b/docs/sources/upgrading/_index.md @@ -489,7 +489,7 @@ We decided the default would be better to disable this sleep behavior but anyone * [4624](https://github.com/grafana/loki/pull/4624) **chaudum**: Disable chunk transfers in jsonnet lib This changes a few default values, resulting in the ingester WAL now being on by default, -and chunk transfer retries are disabled by default. Note, this now means Loki will depend on local disk by default for it's WAL (write ahead log) directory. This defaults to `wal` but can be overridden via the `--ingester.wal-dir` or via `path_prefix` in the common configuration section. Below are config snippets with the previous defaults, and another with the new values. +and chunk transfer retries are disabled by default. Note, this now means Loki will depend on local disk by default for its WAL (write ahead log) directory. This defaults to `wal` but can be overridden via the `--ingester.wal-dir` or via `path_prefix` in the common configuration section. Below are config snippets with the previous defaults, and another with the new values. 
Previous defaults: ```yaml diff --git a/production/helm/loki/values.yaml b/production/helm/loki/values.yaml index be257d2a8e..2d113a5207 100644 --- a/production/helm/loki/values.yaml +++ b/production/helm/loki/values.yaml @@ -418,7 +418,7 @@ migrate: # -- Set to true if migrating from a distributed helm chart enabled: false # -- If migrating from a distributed service, provide the distributed deployment's - # memberlist service DNS so the new deployment can join it's ring. + # memberlist service DNS so the new deployment can join its ring. memberlistService: "" serviceAccount: # -- Specifies whether a ServiceAccount should be created @@ -529,11 +529,11 @@ monitoring: labels: {} # -- If defined a MetricsInstance will be created to remote write metrics. remoteWrite: null - # Self monitoring determines whether Loki should scrape it's own logs. + # Self monitoring determines whether Loki should scrape its own logs. # This feature currently relies on the Grafana Agent Operator being installed, # which is installed by default using the grafana-agent-operator sub-chart. # It will create custom resources for GrafanaAgent, LogsInstance, and PodLogs to configure - # scrape configs to scrape it's own logs with the labels expected by the included dashboards. + # scrape configs to scrape its own logs with the labels expected by the included dashboards. selfMonitoring: enabled: true # -- Tenant to use for self monitoring
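Closing with the self-monitoring values touched by the last hunk, a hedged sketch of how they might be overridden; `monitoring.selfMonitoring.enabled` and the tenant comment are taken from the hunk above, while the tenant value shown is a hypothetical choice rather than the chart default.

```yaml
monitoring:
  # Self monitoring determines whether Loki should scrape its own logs.
  # It relies on the Grafana Agent Operator being installed.
  selfMonitoring:
    enabled: true
    # -- Tenant to use for self monitoring
    tenant: "self-monitoring"   # hypothetical tenant name
```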