docs: fix some typos (#12163)

Signed-off-by: wellweek <xiezitai@outlook.com>
pull/12165/head
wellweek 2 years ago committed by GitHub
parent 5fd5e06cfc
commit e71964cca4
8 changed files:
- docs/sources/query/log_queries/_index.md (4)
- docs/sources/query/template_functions.md (4)
- docs/sources/reference/api.md (2)
- docs/sources/send-data/promtail/cloud/ecs/_index.md (4)
- docs/sources/setup/install/helm/monitor-and-alert/with-local-monitoring.md (2)
- docs/sources/setup/migrate/migrate-from-distributed/index.md (2)
- docs/sources/setup/migrate/migrate-to-tsdb/_index.md (2)
- docs/sources/storage/_index.md (2)

@@ -230,7 +230,7 @@ String type work exactly like Prometheus label matchers use in [log stream selec
> The string type is the only one that can filter out a log line with a label `__error__`.
-Using Duration, Number and Bytes will convert the label value prior to comparision and support the following comparators:
+Using Duration, Number and Bytes will convert the label value prior to comparison and support the following comparators:
- `==` or `=` for equality.
- `!=` for inequality.
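For context, the Duration comparison described in this hunk looks like the following LogQL sketch (the stream selector and the `duration` label are illustrative, not from the patched docs):
```
{job="loki"} | logfmt | duration == 5s
```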
@@ -626,7 +626,7 @@ the result will be
{host="grafana.net", path="/", status="200"} {"level": "info", "method": "GET", "path": "/", "host": "grafana.net", "status": "200"}
```
-Similary, this expression can be used to drop `__error__` labels as well. For example, for the query `{job="varlogs"}|json|drop __error__`, with below log line
+Similarly, this expression can be used to drop `__error__` labels as well. For example, for the query `{job="varlogs"}|json|drop __error__`, with below log line
```
INFO GET / loki.net 200

@@ -367,7 +367,7 @@ Example:
## mul
-Mulitply numbers. Supports multiple numbers.
+Multiply numbers. Supports multiple numbers.
Signature: `func(a interface{}, v ...interface{}) int64`
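As a hypothetical usage sketch of `mul` inside a `line_format` stage (the selector and the `.count` label are illustrative):
```
{job="app"} | logfmt | line_format "{{ mul .count 2 }}"
```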
@@ -415,7 +415,7 @@ Example:
## mulf
-Mulitply numbers. Supports multiple numbers
+Multiply numbers. Supports multiple numbers
Signature: `func(a interface{}, v ...interface{}) float64`
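Similarly, a hedged sketch of `mulf` for floating-point multiplication (the `.ratio` label is illustrative):
```
{job="app"} | logfmt | line_format "{{ mulf .ratio 1.5 }}"
```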

@@ -829,7 +829,7 @@ The `/loki/api/v1/index/volume` and `/loki/api/v1/index/volume_range` endpoints
The `query` should be a valid LogQL stream selector, for example `{job="foo", env=~".+"}`. By default, these endpoints will aggregate into series consisting of all matches for labels included in the query. For example, assuming you have the streams `{job="foo", env="prod", team="alpha"}`, `{job="bar", env="prod", team="beta"}`, `{job="foo", env="dev", team="alpha"}`, and `{job="bar", env="dev", team="beta"}` in your system. The query `{job="foo", env=~".+"}` would return the two metric series `{job="foo", env="dev"}` and `{job="foo", env="prod"}`, each with datapoints representing the accumulate values of chunks for the streams matching that selector, which in this case would be the streams `{job="foo", env="dev", team="alpha"}` and `{job="foo", env="prod", team="alpha"}`, respectively.
-There are two parameters which can affect the aggregation strategy. First, a comma-seperated list of `targetLabels` can be provided, allowing volumes to be aggregated by the speficied `targetLabels` only. This is useful for negations. For example, if you said `{team="alpha", env!="dev"}`, the default behavior would include `env` in the aggregation set. However, maybe you're looking for all non-dev jobs for team alpha, and you don't care which env those are in (other than caring that they're not dev jobs). To achieve this, you could specify `targetLabels=team,job`, resulting in a single metric series (in this case) of `{team="alpha", job="foo}`.
+There are two parameters which can affect the aggregation strategy. First, a comma-separated list of `targetLabels` can be provided, allowing volumes to be aggregated by the speficied `targetLabels` only. This is useful for negations. For example, if you said `{team="alpha", env!="dev"}`, the default behavior would include `env` in the aggregation set. However, maybe you're looking for all non-dev jobs for team alpha, and you don't care which env those are in (other than caring that they're not dev jobs). To achieve this, you could specify `targetLabels=team,job`, resulting in a single metric series (in this case) of `{team="alpha", job="foo}`.
The other way to change aggregations is with the `aggregateBy` parameter. The default value for this is `series`, which aggregates into combinations of matching key-value pairs. Alternately this can be specified as `labels`, which will aggregate into labels only. In this case, the response will have a metric series with a label name matching each label, and a label value of `""`. This is useful for exploring logs at a high level. For example, if you wanted to know what percentage of your logs had a `team` label, you could query your logs with `aggregateBy=labels` and a query with either an exact or regex match on `team`, or by including `team` in the list of `targetLabels`.
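Putting the parameters above together, a hypothetical request against the volume endpoint might look like this (the query values are taken from the examples in the hunk; the exact URL encoding is omitted for readability):
```
GET /loki/api/v1/index/volume_range?query={team="alpha", env!="dev"}&targetLabels=team,job
```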

@@ -153,7 +153,7 @@ Go ahead and replace the `Host` and `HTTP_User` property with your [GrafanaCloud
We include plain text credentials in `options` for simplicity. However, this exposes credentials in your ECS task definition and in any version-controlled configuration. Mitigate this issue by using a secret store such as [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html), combined with the `secretOptions` configuration option for [injecting sensitive data in a log configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-secrets.html#secrets-logconfig).
-All `options` of the `logConfiguration` will be automatically translated into [fluentbit ouput][fluentbit ouput]. For example, the above options will produce this fluent bit `OUTPUT` config section:
+All `options` of the `logConfiguration` will be automatically translated into [fluentbit output][fluentbit output]. For example, the above options will produce this fluent bit `OUTPUT` config section:
```conf
[OUTPUT]
@@ -238,7 +238,7 @@ That's it ! Make sure to checkout LogQL to learn more about Loki powerful query
[fluentbit loki image]: https://hub.docker.com/r/grafana/fluent-bit-plugin-loki
[logql]: https://grafana.com/docs/loki/latest/logql/
[alpine]:https://hub.docker.com/_/alpine
-[fluentbit ouput]: https://fluentbit.io/documentation/0.14/output/
+[fluentbit output]: https://fluentbit.io/documentation/0.14/output/
[routing]: https://fluentbit.io/documentation/0.13/getting_started/routing.html
[grafanacloud account]: https://grafana.com/login
[grafana logs firelens]: ./ecs-grafana.png

@@ -17,7 +17,7 @@ By default this Helm Chart configures meta-monitoring of metrics (service monito
The `ServiceMonitor` resource works with either the Prometheus Operator or the Grafana Agent Operator, and defines how Loki's metrics should be scraped. Scraping this Loki cluster using the scrape config defined in the `SerivceMonitor` resource is required for the included dashboards to work. A `MetricsInstance` can be configured to write the metrics to a remote Prometheus instance such as Grafana Cloud Metrics.
-_Self monitoring_ is enabled by default. This will deploy a `GrafanaAgent`, `LogsInstance`, and `PodLogs` resource which will instruct the Grafana Agent Operator (installed seperately) on how to scrape this Loki cluster's logs and send them back to itself. Scraping this Loki cluster using the scrape config defined in the `PodLogs` resource is required for the included dashboards to work.
+_Self monitoring_ is enabled by default. This will deploy a `GrafanaAgent`, `LogsInstance`, and `PodLogs` resource which will instruct the Grafana Agent Operator (installed separately) on how to scrape this Loki cluster's logs and send them back to itself. Scraping this Loki cluster using the scrape config defined in the `PodLogs` resource is required for the included dashboards to work.
Rules and alerts are automatically deployed.

@@ -13,7 +13,7 @@ keywords:
# Migrate from `loki-distributed` Helm chart
-This guide will walk you through migrating to the `loki` Helm Chart, v3.0 or higher, from the `loki-distributed` Helm Chart (v0.63.2 at time of writing). The process consists of deploying the new `loki` Helm Chart alongside the existing `loki-distributed` installation. By joining the new cluster to the exsiting cluster's ring, you will create one large cluster. This will allow you to manually bring down the `loki-distributed` components in a safe way to avoid any data loss.
+This guide will walk you through migrating to the `loki` Helm Chart, v3.0 or higher, from the `loki-distributed` Helm Chart (v0.63.2 at time of writing). The process consists of deploying the new `loki` Helm Chart alongside the existing `loki-distributed` installation. By joining the new cluster to the existing cluster's ring, you will create one large cluster. This will allow you to manually bring down the `loki-distributed` components in a safe way to avoid any data loss.
**Before you begin:**

@@ -10,7 +10,7 @@ keywords:
# Migrate to TSDB
-[TSDB]({{< relref "../../../operations/storage/tsdb" >}}) is the recommeneded index type for Loki and is where the current development lies.
+[TSDB]({{< relref "../../../operations/storage/tsdb" >}}) is the recommended index type for Loki and is where the current development lies.
If you are running Loki with [boltb-shipper]({{< relref "../../../operations/storage/boltdb-shipper" >}}) or any of the [legacy index types]({{< relref "../../../storage#index-storage" >}}) that have been deprecated,
we strongly recommend migrating to TSDB.

@@ -326,7 +326,7 @@ This guide assumes a provisioned EKS cluster.
export AWS_REGION=<region of EKS cluster>
```
-4. Save the OIDC provider in an enviroment variable:
+4. Save the OIDC provider in an environment variable:
```
oidc_provider=$(aws eks describe-cluster --name <EKS cluster> --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
