docs: fix broken links (#16748)

J Stickler 2 months ago committed by GitHub
parent 3d60ce6e69
commit 93301ee284
Changed files:
1. docs/sources/community/lids/0005-loki-mixin-configuration-improvements.md (20 changes)
2. docs/sources/send-data/alloy/_index.md (4 changes)

@@ -17,11 +17,12 @@ draft: false
**Status:** Draft
**Related issues/PRs:**
-- https://github.com/grafana/loki/issues/13631
-- https://github.com/grafana/loki/issues/15881
-- https://github.com/grafana/loki/issues/11820
-- https://github.com/grafana/loki/issues/11806
-- https://github.com/grafana/loki/issues/7730
+- [Issue #15881](https://github.com/grafana/loki/issues/15881)
+- [Issue #13631](https://github.com/grafana/loki/issues/13631)
+- [Issue #11820](https://github.com/grafana/loki/issues/11820)
+- [Issue #11806](https://github.com/grafana/loki/issues/11806)
+- [Issue #7730](https://github.com/grafana/loki/issues/7730)
- and more ...
**Thread from [mailing list](https://groups.google.com/forum/#!forum/lokiproject):** N/A
@@ -44,7 +45,8 @@ A good example of that would be the "job" label used everywhere:
Usually the job label refers to the task name used to scrape the targets, as per the [Prometheus documentation](https://prometheus.io/docs/concepts/jobs_instances/), and
in k8s, if you are not using `prometheus-operator` with `ServiceMonitor`, it's pretty common to have a scraping config like this:
-```
+```yaml
- job_name: "kubernetes-pods" # Can actually be anything you want.
  kubernetes_sd_configs:
    - role: pod
@@ -54,18 +56,20 @@ in k8s, if you are not using `prometheus-operator` with `ServiceMonitor`, it's p
replacement: '${cluster_label}'
...
```
which would scrape all pods and yield something like:
```
up{job="kubernetes-pods", ...}
```
Right off the bat, that makes the dashboards unusable, because the resulting `job` value is incompatible with what is **hardcoded** in the dashboards and alerts.
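One common way to reconcile the two is a relabel rule that rewrites `job` at scrape time. The sketch below assumes, purely for illustration, that the dashboards expect a `job` value of the form `<namespace>/<container>`; the actual hardcoded value may differ:

```yaml
relabel_configs:
  # Sketch only: builds job as "<namespace>/<container>".
  # The exact value the dashboards expect is an assumption here.
  - source_labels:
      - __meta_kubernetes_namespace
      - __meta_kubernetes_pod_container_name
    separator: "/"
    target_label: job
```

This works, but it forces every user to reverse-engineer the hardcoded selectors, which is exactly the friction the configurable approach below avoids.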
## Goals
Ideally, selectors should default to the values required internally by Grafana but remain configurable so users can tailor them to their setup.
-A good example of this is how [kubernetes-monitoring/kubernetes-mixin](kubernetes-monitoring) did it:
-https://github.com/kubernetes-monitoring/kubernetes-mixin/blob/1fa3b6731c93eac6d5b8c3c3b087afab2baabb42/config.libsonnet#L20-L33
+A good example of this is how [kubernetes-monitoring/kubernetes-mixin](https://github.com/kubernetes-monitoring/kubernetes-mixin/blob/1fa3b6731c93eac6d5b8c3c3b087afab2baabb42/config.libsonnet#L20-L33) did it:
Every possible selector is configurable, which allows various setups to work properly.
The structure is already there to support this. It just has not been leveraged properly.
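As a sketch of the kubernetes-mixin pattern applied here, the mixin's `config.libsonnet` could expose selectors as overridable fields. The field names below are hypothetical, not the Loki mixin's actual keys:

```jsonnet
{
  _config+:: {
    // Hypothetical, overridable selector knobs (kubernetes-mixin style).
    loki_selector: 'job=~".+"',  // matcher applied to all Loki queries
    cluster_label: 'cluster',    // label that identifies a cluster
  },
}
```

A user with the `kubernetes-pods` setup above could then override these in their own environment, e.g. `(import 'mixin.libsonnet') + { _config+:: { loki_selector: 'job="kubernetes-pods"' } }`, without touching the generated dashboards.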

@@ -51,8 +51,6 @@ Here is a non-exhaustive list of components that can be used to build a log pipe
| Transformer| [loki.process](https://grafana.com/docs/alloy/latest/reference/components/loki.process/) |
| Writer | [loki.write](https://grafana.com/docs/alloy/latest/reference/components/loki.write/) |
-| Writer | [otelcol.exporter.loki](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.loki/) |
-| Writer | [otelcol.exporter.logging](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.logging/) |
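Put together, a minimal pipeline using these components might look like the following sketch; the file path, label value, and Loki URL are placeholder assumptions:

```alloy
// Collector: tail local log files.
loki.source.file "local" {
  targets    = [{__path__ = "/var/log/*.log"}]  // placeholder path
  forward_to = [loki.process.add_labels.receiver]
}

// Transformer: attach a static label to every entry.
loki.process "add_labels" {
  stage.static_labels {
    values = { job = "varlog" }  // placeholder label
  }
  forward_to = [loki.write.default.receiver]
}

// Writer: push the entries to Loki.
loki.write "default" {
  endpoint {
    url = "http://localhost:3100/loki/api/v1/push"  // placeholder endpoint
  }
}
```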
## Interactive Tutorials
@@ -60,5 +58,3 @@ To learn more about how to configure Alloy to send logs to Loki within different
-- [Sending OpenTelemetry logs to Loki using Alloy](examples/alloy-otel-logs/)
-- [Sending logs over Kafka to Loki using Alloy](examples/alloy-kafka-logs/)
