Docs: fix broken links (#4770)

* Docs: fix broken links

* Docs: fix another link name
pull/4784/head
Karen Miller 4 years ago committed by GitHub
parent 5ece938f3c
commit 35ebe967ea
1. docs/sources/api/_index.md (8)
2. docs/sources/clients/aws/ecs/_index.md (4)
3. docs/sources/clients/aws/eks/_index.md (13)
4. docs/sources/clients/fluentbit/_index.md (2)
5. docs/sources/clients/lambda-promtail/_index.md (2)
6. docs/sources/clients/promtail/_index.md (4)
7. docs/sources/logql/_index.md (2)
8. docs/sources/logql/log_queries.md (12)
9. docs/sources/logql/metric_queries.md (2)
10. docs/sources/operations/recording-rules.md (7)
11. docs/sources/operations/storage/retention.md (2)
12. docs/sources/operations/storage/table-manager.md (6)
13. docs/sources/rules/_index.md (6)

@@ -6,7 +6,7 @@ weight: 900
 # Grafana Loki HTTP API
 Grafana Loki exposes an HTTP API for pushing, querying, and tailing log data.
-Note that [authenticating](../operations/authentication/) against the API is
+Note that authenticating against the API is
 out of scope for Loki.
 ## Microservices mode
@@ -157,7 +157,7 @@ And `<stream value>` is:
 }
 ```
-See [statistics](#Statistics) for information about the statistics returned by Loki.
+See [statistics](#statistics) for information about the statistics returned by Loki.
 ### Examples
@@ -302,7 +302,7 @@ And `<stream value>` is:
 }
 ```
-See [statistics](#Statistics) for information about the statistics returned by Loki.
+See [statistics](#statistics) for information about the statistics returned by Loki.
 ### Examples
@@ -635,7 +635,7 @@ Response:
 }
 ```
-See [statistics](#Statistics) for information about the statistics returned by Loki.
+See [statistics](#statistics) for information about the statistics returned by Loki.
 ### Examples

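The `query_range` hunks above all point at the same statistics anchor; for orientation, a minimal client-side sketch of how a request to Loki's `/loki/api/v1/query_range` endpoint is assembled (the base URL and timestamps are placeholders, not from this commit):

```python
from urllib.parse import urlencode

def query_range_url(base, logql, start_ns, end_ns, limit=100):
    """Build a Loki /loki/api/v1/query_range request URL.

    start_ns / end_ns are nanosecond Unix timestamps, as the API expects.
    """
    params = urlencode({
        "query": logql,
        "start": start_ns,
        "end": end_ns,
        "limit": limit,
    })
    return f"{base}/loki/api/v1/query_range?{params}"

# Hypothetical local instance and stream selector:
url = query_range_url("http://localhost:3100", '{job="varlogs"}', 0, 1_000_000_000)
```

The JSON response includes the per-query statistics object the fixed `[statistics](#statistics)` links describe.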
@@ -119,7 +119,7 @@ curl https://raw.githubusercontent.com/grafana/loki/master/docs/sources/clients/
 },
 ```
-The `log_router` container image is the [Fluent bit Loki docker image][fluentbit loki image] which contains the Loki plugin pre-installed. As you can see the `firelensConfiguration` type is set to `fluentbit` and we've also added `options` to enable ECS log metadata. This will be useful when querying your logs with Loki [LogQL][logql] label matchers.
+The `log_router` container image is the [Fluent bit Loki docker image][fluentbit loki image] which contains the Loki plugin pre-installed. As you can see the `firelensConfiguration` type is set to `fluentbit` and we've also added `options` to enable ECS log metadata. This will be useful when querying your logs with Loki LogQL label matchers.
 > The `logConfiguration` is mostly there for debugging the fluent-bit container, but feel free to remove that part when you're done testing and configuring.
@@ -214,7 +214,7 @@ You can now access the ECS console and you should see your task running. Now let
 Using the `Log Labels` dropdown you should be able to discover your workload via the ECS metadata, which is also visible if you expand a log line.
-That's it ! Make sure to checkout the [LogQL][logql] to learn more about Loki powerful query language.
+That's it ! Make sure to checkout LogQL to learn more about Loki powerful query language.
 [create an vpc]: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-subnets-commands-example.html
 [ECS]: https://aws.amazon.com/ecs/

@@ -123,9 +123,9 @@ You can reach your Grafana instance and start exploring your logs. For example i
 ## Fetching kubelet logs with systemd
-So far we're scrapings logs from containers, but if you want to get more visibility you could also scrape [systemd][systemd] logs from each of your machine. This means you can also get access to `kubelet` logs.
+So far we're scrapings logs from containers, but if you want to get more visibility you could also scrape systemd logs from each of your machine. This means you can also get access to `kubelet` logs.
-Let's edit our values file again and `extraScrapeConfigs` to add the [systemd][systemd] job:
+Let's edit our values file again and `extraScrapeConfigs` to add the systemd job:
```yaml
extraScrapeConfigs:
@@ -174,12 +174,12 @@ Let go back to Grafana and type in the query below to fetch all logs related to
 {unit="kubelet.service"} |= "Volume"
 ```
-[Filters][Filters] expressions are powerful in [LogQL][LogQL] they help you scan through your logs, in this case it will filter out all your [kubelet][kubelet] logs not having the `Volume` word in it.
+Filter expressions are powerful in LogQL they help you scan through your logs, in this case it will filter out all your [kubelet][kubelet] logs not having the `Volume` word in it.
 The workflow is simple, you always select a set of labels matchers first, this way you reduce the data you're planing to scan.(such as an application, a namespace or even a cluster).
-Then you can apply a set of [Filters][Filters] to find the logs you want.
+Then you can apply a set of filters to find the logs you want.
-> Promtail also support [syslog][syslog].
+Promtail also supports syslog.
## Adding Kubernetes events
@@ -244,13 +244,10 @@ If you want to push this further you can check out [Joe's blog post][blog annota
 [blog ship log with fargate]: https://aws.amazon.com/blogs/containers/how-to-capture-application-logs-when-using-amazon-eks-on-aws-fargate/
 [correlate]: https://grafana.com/blog/2020/03/31/how-to-successfully-correlate-metrics-logs-and-traces-in-grafana/
 [default value file]: https://github.com/grafana/helm-charts/blob/main/charts/promtail/values.yaml
-[systemd]: ../../../installation/helm#run-promtail-with-systemd-journal-support
 [grafana logs namespace]: namespace-grafana.png
 [relabel_configs]:https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
-[syslog]: ../../../installation/helm#run-promtail-with-syslog-support
-[Filters]: https://grafana.com/docs/loki/latest/logql/#line-filter-expression
 [kubelet]: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=The%20kubelet%20works%20in%20terms,PodSpecs%20are%20running%20and%20healthy.
 [LogQL]: https://grafana.com/docs/loki/latest/logql/
 [blog events]: https://grafana.com/blog/2019/08/21/how-grafana-labs-effectively-pairs-loki-and-kubernetes-events/
 [labels post]: https://grafana.com/blog/2020/04/21/how-labels-in-loki-can-make-log-queries-faster-and-easier/
 [pipeline]: https://grafana.com/docs/loki/latest/clients/promtail/pipelines/

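The `|= "Volume"` query in the EKS hunk above keeps only lines containing that substring. As a toy illustration of that selector-then-filter workflow (not how Loki is implemented, purely a sketch):

```python
def line_filter(lines, needle, keep=True):
    """Mimic LogQL line filters: |= when keep=True, != when keep=False."""
    return [line for line in lines if (needle in line) == keep]

# Hypothetical kubelet log lines, already narrowed by {unit="kubelet.service"}:
logs = [
    "Volume attached to pod",
    "Started container",
    "Volume detached",
]
matched = line_filter(logs, "Volume")  # like |= "Volume"
```

Selecting with label matchers first, then filtering lines, is the fast path the docs describe: the matcher bounds the data scanned before any substring work happens.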
@@ -102,7 +102,7 @@ If set to true, it will add all Kubernetes labels to Loki labels automatically a
 ### LabelMapPath
 When using the `Parser` and `Filter` plugins Fluent Bit can extract and add data to the current record/log data. While Loki labels are key value pair, record data can be nested structures.
-You can pass a json file that defines how to extract [labels](../../getting-started/labels/) from each record. Each json key from the file will be matched with the log record to find label values. Values from the configuration are used as label names.
+You can pass a JSON file that defines how to extract labels from each record. Each json key from the file will be matched with the log record to find label values. Values from the configuration are used as label names.
 Considering the record below :

@@ -8,7 +8,7 @@ Grafana Loki includes [Terraform](https://www.terraform.io/) and [CloudFormation
 ## Deployment
-lambda-promtail can easily be deployed via provided [Terraform](../../../../tools/lambda-promtail/main.tf) and [CloudFormation](../../../../tools/lambda-promtail/template.yaml) files. The Terraform deployment also pulls variable values defined from [vars.tf](../../../../tools/lambda-promtail/vars.tf).
+lambda-promtail can easily be deployed via provided [Terraform](https://github.com/grafana/loki/blob/main/tools/lambda-promtail/main.tf) and [CloudFormation](https://github.com/grafana/loki/blob/main/tools/lambda-promtail/template.yaml) files. The Terraform deployment also pulls variable values defined from [variables.tf](https://github.com/grafana/loki/blob/main/tools/lambda-promtail/variables.tf).
 For both deployment types there are a few values that must be defined:
 - the write address, a Loki Write API compatible endpoint (Loki or Promtail)

@@ -91,8 +91,8 @@ This endpoint returns 200 when Promtail is up and running, and there's at least
 ### `GET /metrics`
-This endpoint returns Promtail metrics for Prometheus. See
-"[Operations > Observability](../../operations/observability/)" to get a list
+This endpoint returns Promtail metrics for Prometheus. Refer to
+[Observing Grafana Loki](../../operations/observability/) for the list
 of exported metrics.
 ### Promtail web server config

@@ -37,7 +37,7 @@ Between a vector and a literal, the operator is applied to the value of every da
 Between two vectors, a binary arithmetic operator is applied to each entry in the left-hand side vector and its matching element in the right-hand vector.
 The result is propagated into the result vector with the grouping labels becoming the output label set. Entries for which no matching entry in the right-hand vector can be found are not part of the result.
-Pay special attention to [operator order](#operator-order) when chaining arithmetic operators.
+Pay special attention to [operator order](#order-of-operations) when chaining arithmetic operators.
 #### Arithmetic Examples

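The vector-matching rule quoted in the hunk above (apply the operator entry-wise to entries with identical label sets, drop entries with no match) can be sketched in a few lines; here label sets are modeled as frozensets of `(name, value)` pairs, an assumption for illustration only:

```python
def binary_op(lhs, rhs, op):
    """Apply op to entries whose label sets match exactly; drop the rest.

    lhs and rhs map a label set (frozenset of (name, value) pairs) to a
    sample value, mirroring how a LogQL binary operator matches vectors.
    """
    return {labels: op(value, rhs[labels])
            for labels, value in lhs.items() if labels in rhs}

a = {frozenset({("job", "api")}): 10.0, frozenset({("job", "web")}): 4.0}
b = {frozenset({("job", "api")}): 2.0}
result = binary_op(a, b, lambda x, y: x / y)  # only {job="api"} matches
```

The `{job="web"}` entry disappears from the result because it has no counterpart on the right-hand side, exactly the drop behavior described above.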
@@ -181,7 +181,7 @@ will always run faster than
 Line filter expressions are the fastest way to filter logs once the
 log stream selectors have been applied.
-Line filter expressions have support matching IP addresses. See [Matching IP addresses](ip/) for details.
+Line filter expressions have support matching IP addresses. See [Matching IP addresses](../ip/) for details.
 ### Label filter expression
@@ -211,7 +211,7 @@ Using Duration, Number and Bytes will convert the label value prior to comparisi
 For instance, `logfmt | duration > 1m and bytes_consumed > 20MB`
-If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filters those errors see the [pipeline errors](#pipeline-errors) section.
+If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filters those errors see the [pipeline errors](../#pipeline-errors) section.
 You can chain multiple predicates using `and` and `or` which respectively express the `and` and `or` binary operations. `and` can be equivalently expressed by a comma, a space or another pipe. Label filters can be place anywhere in a log pipeline.
@@ -240,13 +240,13 @@ It will evaluate first `duration >= 20ms or method="GET"`. To evaluate first `me
 | duration >= 20ms or (method="GET" and size <= 20KB)
 ```
-> Label filter expressions are the only expression allowed after the [unwrap expression](#unwrapped-range-aggregations). This is mainly to allow filtering errors from the metric extraction (see [errors](#pipeline-errors)).
+> Label filter expressions are the only expression allowed after the unwrap expression. This is mainly to allow filtering errors from the metric extraction.
 Label filter expressions have support matching IP addresses. See [Matching IP addresses](ip/) for details.
 ### Parser expression
-Parser expression can parse and extract labels from the log content. Those extracted labels can then be used for filtering using [label filter expressions](#label-filter-expression) or for [metric aggregations](#metric-queries).
+Parser expression can parse and extract labels from the log content. Those extracted labels can then be used for filtering using [label filter expressions](#label-filter-expression) or for [metric aggregations](../metric_queries).
 Extracted label keys are automatically sanitized by all parsers, to follow Prometheus metric name convention.(They can only contain ASCII letters and digits, as well as underscores and colons. They cannot start with a digit.)
@@ -263,7 +263,7 @@ In case of errors, for instance if the line is not in the expected format, the l
 If an extracted label key name already exists in the original log stream, the extracted label key will be suffixed with the `_extracted` keyword to make the distinction between the two labels. You can forcefully override the original label using a [label formatter expression](#labels-format-expression). However if an extracted key appears twice, only the latest label value will be kept.
-Loki supports [JSON](#json), [logfmt](#logfmt), [pattern](#pattern), [regexp](#regexp) and [unpack](#unpack) parsers.
+Loki supports [JSON](#json), [logfmt](#logfmt), [pattern](#pattern), [regexp](#regular-expression) and [unpack](#unpack) parsers.
 It's easier to use the predefined parsers `json` and `logfmt` when you can. If you can't, the `pattern` and `regexp` parsers can be used for log lines with an unusual structure. The `pattern` parser is easier and faster to write; it also outperforms the `regexp` parser.
 Multiple parsers can be used by a single log pipeline. This is useful for parsing complex logs. There are examples in [Multiple parsers](#multiple-parsers).
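To make the parser hunk above concrete, a rough sketch of what the `logfmt` parser extracts from a line; this simplified version ignores quoting and escaping, which the real parser handles:

```python
def parse_logfmt(line):
    """Extract key=value pairs from a simplified logfmt line."""
    labels = {}
    for token in line.split():
        if "=" in token:
            key, _, value = token.partition("=")
            labels[key] = value
    return labels

labels = parse_logfmt("duration=1.5s status=200 method=GET")
```

The extracted keys then become labels available to label filter expressions such as `| duration > 1m`.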
@@ -514,7 +514,7 @@ The `| label_format` expression can rename, modify or add labels. It takes as pa
 When both side are label identifiers, for example `dst=src`, the operation will rename the `src` label into `dst`.
-The left side can alternatively be a template string (double quoted or backtick), for example `dst="{{.status}} {{.query}}"`, in which case the `dst` label value is replaced by the result of the [text/template](https://golang.org/pkg/text/template/) evaluation. This is the same template engine as the `| line_format` expression, which means labels are available as variables and you can use the same list of [functions](functions/).
+The left side can alternatively be a template string (double quoted or backtick), for example `dst="{{.status}} {{.query}}"`, in which case the `dst` label value is replaced by the result of the [text/template](https://golang.org/pkg/text/template/) evaluation. This is the same template engine as the `| line_format` expression, which means labels are available as variables and you can use the same list of functions.
 In both cases, if the destination label doesn't exist, then a new one is created.

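The two `label_format` behaviors in the hunk above, renaming with `dst=src` and templating with `dst="{{.status}} {{.query}}"`, can be mimicked on a plain dict. Python's `str.format` stands in for Go's `text/template` here; this is an illustrative sketch, not Loki's engine:

```python
def label_rename(labels, dst, src):
    """Mimic `| label_format dst=src`: rename the src label to dst."""
    out = dict(labels)
    if src in out:
        out[dst] = out.pop(src)
    return out

def label_template(labels, dst, template):
    """Mimic `| label_format dst="..."`, with {name} placeholders
    standing in for Go-template {{.name}} variables."""
    out = dict(labels)
    out[dst] = template.format(**labels)
    return out

renamed = label_rename({"status": "200"}, "code", "status")
templated = label_template({"status": "200", "query": "up"},
                           "summary", "{status} {query}")
```

In both cases a missing destination label is simply created, matching the "a new one is created" sentence above.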
@@ -55,7 +55,7 @@ Examples:
 ### Unwrapped range aggregations
-Unwrapped ranges uses extracted labels as sample values instead of log lines. However to select which label will be used within the aggregation, the log query must end with an unwrap expression and optionally a label filter expression to discard [errors](#pipeline-errors).
+Unwrapped ranges uses extracted labels as sample values instead of log lines. However to select which label will be used within the aggregation, the log query must end with an unwrap expression and optionally a label filter expression to discard [errors](../#pipeline-errors).
 The unwrap expression is noted `| unwrap label_identifier` where the label identifier is the label name to use for extracting sample values.

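The unwrap hunk above says the named label's value becomes the sample for the range aggregation. A toy version over already-parsed entries (the entry shape is an assumption for illustration; Loki actually tags unconvertible values with an `__error__` label rather than dropping them):

```python
def unwrap(entries, label):
    """Use a label's numeric value as the sample for each entry,
    skipping entries where the label is missing or non-numeric."""
    samples = []
    for labels in entries:
        try:
            samples.append(float(labels[label]))
        except (KeyError, ValueError):
            continue
    return samples

entries = [{"duration": "0.25"}, {"duration": "oops"}, {"duration": "1.5"}]
samples = unwrap(entries, "duration")
peak = max(samples)  # roughly max_over_time(... | unwrap duration [5m])
```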
@@ -71,8 +71,8 @@ so a `Persistent Volume` should be utilised.
 ### Per-Tenant Limits
 Remote-write can be configured at a global level in the base configuration, and certain parameters tuned specifically on
-a per-tenant basis. Most of the configuration options [defined here](../configuration/#ruler_config)
-have [override options](../configuration/#limits_config) (which can be also applied at runtime!).
+a per-tenant basis. Most of the configuration options [defined here](../../configuration/#ruler_config)
+have [override options](../../configuration/#limits_config) (which can be also applied at runtime!).
 ### Tuning
@@ -141,4 +141,5 @@ the `loki_ruler_wal_corruptions_repair_failed_total` metric will be incremented.
 ### Found another failure mode?
-Please open an [issue](https://github.com/grafana/loki/issues) and tell us about it!
+Please open an [issue](https://github.com/grafana/loki/issues) and tell us about it!

@@ -3,7 +3,7 @@ title: Retention
 ---
 # Grafana Loki Storage Retention
-Retention in Grafana Loki is achieved either through the [Table Manager](#table-manager) or the [Compactor](#Compactor).
+Retention in Grafana Loki is achieved either through the [Table Manager](#table-manager) or the [Compactor](#compactor).
 Retention through the [Table Manager](../table-manager/) is achieved by relying on the object store TTL feature, and will work for both [boltdb-shipper](../boltdb-shipper) store and chunk/index store. However retention through the [Compactor](../boltdb-shipper#compactor) is supported only with the [boltdb-shipper](../boltdb-shipper) store.

@@ -22,7 +22,7 @@ time range exceeds the retention period.
 The Table Manager supports the following backends:
 - **Index store**
-  - [Single Store (boltdb-shipper)](boltdb-shipper/)
+  - [Single Store (boltdb-shipper)](../boltdb-shipper/)
   - [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
   - [Google Bigtable](https://cloud.google.com/bigtable)
   - [Apache Cassandra](https://cassandra.apache.org)
@@ -199,13 +199,13 @@ The Table Manager can be executed in two ways:
 ### Monolithic mode
-When Loki runs in [monolithic mode](../../../architecture#modes-of-operation),
+When Loki runs in [monolithic mode](../../../fundamentals/architecture#modes-of-operation),
 the Table Manager is also started as component of the entire stack.
 ### Microservices mode
-When Loki runs in [microservices mode](../../../architecture#modes-of-operation),
+When Loki runs in [microservices mode](../../../fundamentals/architecture#modes-of-operation),
 the Table Manager should be started as separate service named `table-manager`.
 You can check out a production grade deployment example at

@@ -76,7 +76,7 @@ We support [Prometheus-compatible](https://prometheus.io/docs/prometheus/latest/
 > Querying the precomputed result will then often be much faster than executing the original expression every time it is needed. This is especially useful for dashboards, which need to query the same expression repeatedly every time they refresh.
-Loki allows you to run [_metric queries_](https://grafana.com/docs/loki/latest/logql/#metric-queries) over your logs, which means
+Loki allows you to run [metric queries](../logql/metric_queries) over your logs, which means
 that you can derive a numeric aggregation from your logs, like calculating the number of requests over time from your NGINX access log.
 ### Example
@@ -230,7 +230,7 @@ jobs:
 One option to scale the Ruler is by scaling it horizontally. However, with multiple Ruler instances running they will need to coordinate to determine which instance will evaluate which rule. Similar to the ingesters, the Rulers establish a hash ring to divide up the responsibilities of evaluating rules.
-The possible configurations are listed fully in the [configuration documentation](https://grafana.com/docs/loki/latest/configuration/), but in order to shard rules across multiple Rulers, the rules API must be enabled via flag (`-ruler.enable-api`) or config file parameter. Secondly, the Ruler requires it's own ring be configured. From there the Rulers will shard and handle the division of rules automatically. Unlike ingesters, Rulers do not hand over responsibility: all rules are re-sharded randomly every time a Ruler is added to or removed from the ring.
+The possible configurations are listed fully in the [configuration documentation](../configuration/), but in order to shard rules across multiple Rulers, the rules API must be enabled via flag (`-ruler.enable-api`) or config file parameter. Secondly, the Ruler requires it's own ring be configured. From there the Rulers will shard and handle the division of rules automatically. Unlike ingesters, Rulers do not hand over responsibility: all rules are re-sharded randomly every time a Ruler is added to or removed from the ring.
 A full sharding-enabled Ruler example is:
@@ -255,7 +255,7 @@ ruler:
 The Ruler supports six kinds of storage: configdb, azure, gcs, s3, swift, and local. Most kinds of storage work with the sharded Ruler configuration in an obvious way, i.e. configure all Rulers to use the same backend.
-The local implementation reads the rule files off of the local filesystem. This is a read-only backend that does not support the creation and deletion of rules through the [Ruler API](https://grafana.com/docs/loki/latest/api/#Ruler). Despite the fact that it reads the local filesystem this method can still be used in a sharded Ruler configuration if the operator takes care to load the same rules to every Ruler. For instance, this could be accomplished by mounting a [Kubernetes ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) onto every Ruler pod.
+The local implementation reads the rule files off of the local filesystem. This is a read-only backend that does not support the creation and deletion of rules through the [Ruler API](../api/#ruler). Despite the fact that it reads the local filesystem this method can still be used in a sharded Ruler configuration if the operator takes care to load the same rules to every Ruler. For instance, this could be accomplished by mounting a [Kubernetes ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) onto every Ruler pod.
 A typical local configuration might look something like:
 ```
