[helm] Fix helm dashboards (#8261)

Fix the dashboards shipped with the helm chart.

Co-authored-by: Karsten Jeschkies <k@jeschkies.xyz>
Co-authored-by: J Stickler <julie.stickler@grafana.com>
pull/8303/head
Trevor Whitney 2 years ago committed by GitHub
parent ddc1d171a2
commit cc39334391
  1. docs/sources/installation/helm/monitor-and-alert/index.md (121 changes)
  2. docs/sources/installation/helm/reference.md (9 changes)
  3. production/helm/loki/CHANGELOG.md (4 changes)
  4. production/helm/loki/src/dashboards/loki-operational.json (108 changes)
  5. production/helm/loki/templates/_helpers.tpl (16 changes)
  6. production/helm/loki/templates/backend/statefulset-backend.yaml (2 changes)
  7. production/helm/loki/templates/monitoring/loki-alerts.yaml (3 changes)
  8. production/helm/loki/templates/monitoring/loki-rules.yaml (3 changes)
  9. production/helm/loki/templates/monitoring/pod-logs.yaml (2 changes)
  10. production/helm/loki/templates/read/deployment-read.yaml (2 changes)
  11. production/helm/loki/templates/read/statefulset-read.yaml (2 changes)
  12. production/helm/loki/templates/single-binary/statefulset.yaml (2 changes)
  13. production/helm/loki/templates/write/statefulset-write.yaml (2 changes)
  14. production/helm/loki/values.yaml (2 changes)
  15. tools/dev/k3d/environments/helm-cluster/main.jsonnet (72 changes)
  16. tools/dev/k3d/environments/helm-cluster/spec.json (2 changes)
  17. tools/dev/k3d/environments/helm-cluster/values/enterprise-logs.yaml (4 changes)
  18. tools/dev/k3d/jsonnetfile.lock.json (18 changes)

@ -6,7 +6,7 @@ aliases:
- /docs/installation/helm/monitoring
weight: 100
keywords:
- monitoring
- alert
- alerting
---
@ -15,10 +15,9 @@ keywords:
By default this Helm Chart configures meta-monitoring of metrics (service monitoring) and logs (self monitoring).
The `ServiceMonitor` resource works with either the Prometheus Operator or the Grafana Agent Operator, and defines how Loki's metrics should be scraped.
Scraping this Loki cluster using the scrape config defined in the `ServiceMonitor` resource is required for the included dashboards to work. A `MetricsInstance` can be configured to write the metrics to a remote Prometheus instance such as Grafana Cloud Metrics.
The `ServiceMonitor` resource works with either the Prometheus Operator or the Grafana Agent Operator, and defines how Loki's metrics should be scraped. Scraping this Loki cluster using the scrape config defined in the `ServiceMonitor` resource is required for the included dashboards to work. A `MetricsInstance` can be configured to write the metrics to a remote Prometheus instance such as Grafana Cloud Metrics.
*Self monitoring* is enabled by default. This will deploy a `GrafanaAgent`, `LogsInstance`, and `PodLogs` resource which will instruct the Grafana Agent Operator (installed separately) on how to scrape this Loki cluster's logs and send them back to itself. Scraping this Loki cluster using the scrape config defined in the `PodLogs` resource is required for the included dashboards to work.
_Self monitoring_ is enabled by default. This will deploy a `GrafanaAgent`, `LogsInstance`, and `PodLogs` resource which will instruct the Grafana Agent Operator (installed separately) on how to scrape this Loki cluster's logs and send them back to itself. Scraping this Loki cluster using the scrape config defined in the `PodLogs` resource is required for the included dashboards to work.
Rules and alerts are automatically deployed.
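To illustrate the remote-write option mentioned above, a `MetricsInstance` that forwards the scraped metrics to a remote Prometheus endpoint might look roughly like the following sketch. The resource name, namespace, selector label, and secret name are illustrative assumptions, not values generated by this chart:
```yaml
apiVersion: monitoring.grafana.com/v1alpha1
kind: MetricsInstance
metadata:
  name: loki-metrics        # illustrative name
  namespace: loki           # namespace watched by the Grafana Agent Operator
spec:
  # Forward everything scraped via the chart's ServiceMonitor to a remote Prometheus.
  remoteWrite:
    - url: <metrics remote write endpoint>
      basicAuth:
        username:
          name: primary-credentials-metrics
          key: username
        password:
          name: primary-credentials-metrics
          key: password
  # Select the ServiceMonitor shipped by this chart; the label is an assumption.
  serviceMonitorSelector:
    matchLabels:
      release: loki
```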
@ -27,12 +26,91 @@ Rules and alerts are automatically deployed.
- Helm 3 or above. See [Installing Helm](https://helm.sh/docs/intro/install/).
- A running Kubernetes cluster with a running Loki deployment.
- A running Grafana instance.
- A running Prometheus operator in order for recording rules for the dashboards
to work.
- A running Prometheus Operator installed using the `kube-prometheus-stack` Helm chart.
**Prometheus Operator Prerequisites**
The dashboards require certain metric labels to display Kubernetes metrics. The best way to accomplish this is to install the `kube-prometheus-stack` Helm chart with the following values file, replacing `CLUSTER_NAME` with the name of your cluster. The cluster name is what you specify during the helm installation, so a cluster installed with the command `helm install loki-cluster grafana/loki` would be called `loki-cluster`.
```yaml
kubelet:
  serviceMonitor:
    cAdvisorRelabelings:
      - action: replace
        replacement: <CLUSTER_NAME>
        targetLabel: cluster
      - targetLabel: metrics_path
        sourceLabels:
          - "__metrics_path__"
      - targetLabel: "instance"
        sourceLabels:
          - "node"
defaultRules:
  additionalRuleLabels:
    cluster: <CLUSTER_NAME>
"kube-state-metrics":
  prometheus:
    monitor:
      relabelings:
        - action: replace
          replacement: <CLUSTER_NAME>
          targetLabel: cluster
        - targetLabel: "instance"
          sourceLabels:
            - "__meta_kubernetes_pod_node_name"
"prometheus-node-exporter":
  prometheus:
    monitor:
      relabelings:
        - action: replace
          replacement: <CLUSTER_NAME>
          targetLabel: cluster
        - targetLabel: "instance"
          sourceLabels:
            - "__meta_kubernetes_pod_node_name"
prometheus:
  monitor:
    relabelings:
      - action: replace
        replacement: <CLUSTER_NAME>
        targetLabel: cluster
```
The `kube-prometheus-stack` installs `ServiceMonitor` and `PrometheusRule` resources for monitoring Kubernetes, and it depends on the `kube-state-metrics` and `prometheus-node-exporter` Helm charts, which also install `ServiceMonitor` resources for collecting `kubelet` and `node-exporter` metrics. The above values file adds the additional labels required for these metrics to work with the included dashboards.
If you are using this helm chart in an environment which does not allow for the installation of `kube-prometheus-stack` or custom CRDs, you should run `helm template` on the `kube-prometheus-stack` helm chart with the above values file, and review all generated `ServiceMonitor` and `PrometheusRule` resources. These resources may have to be modified with the correct ports and selectors to find the various services such as `kubelet` and `node-exporter` in your environment.
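For example, assuming the chart is pulled from the `prometheus-community` repository, running `helm template prometheus prometheus-community/kube-prometheus-stack -f values.yaml` prints the generated manifests for review without installing anything.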
**To install the dashboards:**
1. Dashboards are enabled by default. Set `monitoring.dashboards.namespace` to the namespace of the Grafana instance if it is in a different namespace than this Loki cluster.
1. Dashboards must be mounted into your Grafana container. The dashboards are in `ConfigMap`s named `loki-dashboards-1` and `loki-dashboards-2` for Loki, and `enterprise-logs-dashboards-1` and `enterprise-logs-dashboards-2` for GEL. Mount them to `/var/lib/grafana/dashboards/loki-1` and `/var/lib/grafana/dashboards/loki-2` in your Grafana container (see the pod spec sketch after the provisioning example below).
1. Create a dashboard provisioning file called `dashboards.yaml` in `/etc/grafana/provisioning/dashboards` of your Grafana container with the following contents (_note_: you may need to edit the `orgId`):
```yaml
---
apiVersion: 1
providers:
  - disableDeletion: true
    editable: false
    folder: Loki
    name: loki-1
    options:
      path: /var/lib/grafana/dashboards/loki-1
    orgId: 1
    type: file
  - disableDeletion: true
    editable: false
    folder: Loki
    name: loki-2
    options:
      path: /var/lib/grafana/dashboards/loki-2
    orgId: 1
    type: file
```
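As a rough sketch of the mount described in step 2, the dashboard `ConfigMap`s might be wired into a Grafana pod spec as follows. This is illustrative only; the exact structure depends on how your Grafana is deployed:
```yaml
containers:
  - name: grafana
    volumeMounts:
      - name: loki-dashboards-1
        mountPath: /var/lib/grafana/dashboards/loki-1
      - name: loki-dashboards-2
        mountPath: /var/lib/grafana/dashboards/loki-2
volumes:
  - name: loki-dashboards-1
    configMap:
      name: loki-dashboards-1
  - name: loki-dashboards-2
    configMap:
      name: loki-dashboards-2
```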
**To add additional Prometheus rules:**
@ -52,17 +130,16 @@ Rules and alerts are automatically deployed.
expr: sum(rate(container_cpu_usage_seconds_total[1m])) by (node, namespace, pod, container)
```
**To disable monitoring:**
1. Modify the configuration file `values.yaml`:
```yaml
selfMonitoring:
  enabled: false
serviceMonitor:
  enabled: false
```
**To use a remote Prometheus and Loki instance such as Grafana Cloud**
@ -77,8 +154,8 @@ Rules and alerts are automatically deployed.
  name: primary-credentials-metrics
  namespace: default
stringData:
  username: "<instance ID>"
  password: "<API key>"
---
apiVersion: v1
kind: Secret
@ -86,8 +163,8 @@ Rules and alerts are automatically deployed.
  name: primary-credentials-logs
  namespace: default
stringData:
  username: "<instance ID>"
  password: "<API key>"
```
2. Add the secret to Kubernetes with `kubectl create -f secret.yaml`.
@ -116,19 +193,19 @@ Rules and alerts are automatically deployed.
```yaml
monitoring:
  ...
  selfMonitoring:
    enabled: true
    logsInstance:
      clients:
        - url: <logs remote write endpoint>
          basicAuth:
            username:
              name: primary-credentials-logs
              key: username
            password:
              name: primary-credentials-logs
              key: password
  lokiCanary:
    enabled: false
```

@ -2189,6 +2189,15 @@ true
<td><pre lang="json">
{}
</pre>
</td>
</tr>
<tr>
<td>monitoring.rules.namespace</td>
<td>string</td>
<td>Alternative namespace to create PrometheusRule resources in</td>
<td><pre lang="json">
null
</pre>
</td>
</tr>
<tr>

@ -11,6 +11,10 @@ Entries should be ordered as follows:
Entries should include a reference to the pull request that introduced the change.
## 4.4.1
- [BUGFIX] Fix a few problems with the included dashboards and allow the rules to be created in a different namespace (which may be necessary based on how your Prometheus Operator is deployed).
## 4.1.1
- [FEATURE] Added `loki.runtimeConfig` helm values to provide a reloadable runtime configuration.

@ -88,7 +88,7 @@
"steppedLine": false,
"targets": [
{
"expr": "sum by (status) (\nlabel_replace(\n label_replace(\n rate(loki_request_duration_seconds_count{cluster=\"$cluster\", job=~\"($namespace)/cortex-gw\", route=~\"api_prom_query|api_prom_label|api_prom_label_name_values|loki_api_v1_query|loki_api_v1_query_range|loki_api_v1_label|loki_api_v1_label_name_values\"}[5m]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n\"status\", \"${1}\", \"status_code\", \"([a-z]+)\")\n)",
"expr": "sum by (status) (\nlabel_replace(\n label_replace(\n rate(loki_request_duration_seconds_count{cluster=\"$cluster\", job=~\"($namespace)/(loki|enterprise-logs)-read\", route=~\"api_prom_query|api_prom_label|api_prom_label_name_values|loki_api_v1_query|loki_api_v1_query_range|loki_api_v1_label|loki_api_v1_label_name_values\"}[5m]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n\"status\", \"${1}\", \"status_code\", \"([a-z]+)\")\n)",
"legendFormat": "{{status}}",
"refId": "A"
}
@ -184,7 +184,7 @@
"steppedLine": false,
"targets": [
{
"expr": "sum by (status) (\nlabel_replace(\n label_replace(\n rate(loki_request_duration_seconds_count{cluster=\"$cluster\", job=~\"($namespace)/cortex-gw\", route=~\"api_prom_push|loki_api_v1_push\"}[5m]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n\"status\", \"${1}\", \"status_code\", \"([a-z]+)\"))",
"expr": "sum by (status) (\nlabel_replace(\n label_replace(\n rate(loki_request_duration_seconds_count{cluster=\"$cluster\", job=~\"($namespace)/(loki|enterprise-logs)-write\", route=~\"api_prom_push|loki_api_v1_push\"}[5m]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n\"status\", \"${1}\", \"status_code\", \"([a-z]+)\"))",
"legendFormat": "{{status}}",
"refId": "A"
}
@ -230,102 +230,6 @@
"alignLevel": null
}
},
{
"aliasColors": { },
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": { }
},
"overrides": [ ]
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 5,
"w": 4,
"x": 8,
"y": 1
},
"hiddenSeries": false,
"id": 11,
"legend": {
"avg": false,
"current": false,
"hideEmpty": false,
"hideZero": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"dataLinks": [ ]
},
"panels": [ ],
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [ ],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "topk(5, sum by (name,level) (rate(promtail_custom_bad_words_total{cluster=\"$cluster\", exported_namespace=\"$namespace\"}[$__interval])) - \nsum by (name,level) (rate(promtail_custom_bad_words_total{cluster=\"$cluster\", exported_namespace=\"$namespace\"}[$__interval] offset 1h)))",
"legendFormat": "{{name}}-{{level}}",
"refId": "A"
}
],
"thresholds": [ ],
"timeFrom": null,
"timeRegions": [ ],
"timeShift": null,
"title": "Bad Words",
"tooltip": {
"shared": true,
"sort": 2,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [ ]
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": { },
"bars": false,
@ -663,17 +567,17 @@
"steppedLine": false,
"targets": [
{
"expr": "histogram_quantile(0.99, sum by (le) (job_route:loki_request_duration_seconds_bucket:sum_rate{job=~\"($namespace)/cortex-gw\", route=~\"api_prom_push|loki_api_v1_push\", cluster=~\"$cluster\"})) * 1e3",
"expr": "histogram_quantile(0.99, sum by (le) (job_route:loki_request_duration_seconds_bucket:sum_rate{job=~\"($namespace)/(loki|enterprise-logs)-write\", route=~\"api_prom_push|loki_api_v1_push\", cluster=~\"$cluster\"})) * 1e3",
"legendFormat": ".99",
"refId": "A"
},
{
"expr": "histogram_quantile(0.75, sum by (le) (job_route:loki_request_duration_seconds_bucket:sum_rate{job=~\"($namespace)/cortex-gw\", route=~\"api_prom_push|loki_api_v1_push\", cluster=~\"$cluster\"})) * 1e3",
"expr": "histogram_quantile(0.75, sum by (le) (job_route:loki_request_duration_seconds_bucket:sum_rate{job=~\"($namespace)/(loki|enterprise-logs)-write\", route=~\"api_prom_push|loki_api_v1_push\", cluster=~\"$cluster\"})) * 1e3",
"legendFormat": ".9",
"refId": "B"
},
{
"expr": "histogram_quantile(0.5, sum by (le) (job_route:loki_request_duration_seconds_bucket:sum_rate{job=~\"($namespace)/cortex-gw\", route=~\"api_prom_push|loki_api_v1_push\", cluster=~\"$cluster\"})) * 1e3",
"expr": "histogram_quantile(0.5, sum by (le) (job_route:loki_request_duration_seconds_bucket:sum_rate{job=~\"($namespace)/(loki|enterprise-logs)-write\", route=~\"api_prom_push|loki_api_v1_push\", cluster=~\"$cluster\"})) * 1e3",
"legendFormat": ".5",
"refId": "C"
}
@ -6266,4 +6170,4 @@
"title": "Loki / Operational",
"uid": "operational",
"version": 0
}
}

@ -30,6 +30,22 @@ singleBinary fullname
{{- end -}}
{{- end -}}
{{/*
Resource name template
Params:
ctx = . context
component = component name (optional)
rolloutZoneName = rollout zone name (optional)
*/}}
{{- define "loki.resourceName" -}}
{{- $resourceName := include "loki.fullname" .ctx -}}
{{- if .component -}}{{- $resourceName = printf "%s-%s" $resourceName .component -}}{{- end -}}
{{- if and (not .component) .rolloutZoneName -}}{{- printf "Component name cannot be empty if rolloutZoneName (%s) is set" .rolloutZoneName | fail -}}{{- end -}}
{{- if .rolloutZoneName -}}{{- $resourceName = printf "%s-%s" $resourceName .rolloutZoneName -}}{{- end -}}
{{- if gt (len $resourceName) 253 -}}{{- printf "Resource name (%s) exceeds Kubernetes limit of 253 characters. To fix: shorten release name if this will be a fresh install or shorten zone names (e.g. \"a\" instead of \"zone-a\") if using zone-awareness." $resourceName | fail -}}{{- end -}}
{{- $resourceName -}}
{{- end -}}
{{/*
Return if deployment mode is simple scalable
*/}}
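For reference, a hypothetical call to the new `loki.resourceName` helper (not one added by this diff) would look like:
```yaml
# Hypothetical usage: with a release named "loki" this renders "loki-read-zone-a",
# and the template fails at render time if the result exceeds 253 characters.
name: {{ include "loki.resourceName" (dict "ctx" . "component" "read" "rolloutZoneName" "zone-a") }}
```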

@ -62,7 +62,7 @@ spec:
{{- end }}
{{- end }}
containers:
- name: backend
- name: loki
image: {{ include "loki.image" . }}
imagePullPolicy: {{ .Values.loki.image.pullPolicy }}
args:

@ -13,7 +13,8 @@ metadata:
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "loki.fullname" $ }}-alerts
name: {{ include "loki.fullname" $ }}-loki-alerts
namespace: {{ .namespace | default $.Release.Namespace }}
spec:
groups:
{{- include "loki.ruleGroupToYaml" ($.Files.Get "src/alerts.yaml" | fromYaml).groups | indent 4 }}

@ -13,7 +13,8 @@ metadata:
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "loki.fullname" $ }}-rules
name: {{ include "loki.fullname" $ }}-loki-rules
namespace: {{ .namespace | default $.Release.Namespace }}
spec:
groups:
{{- include "loki.ruleGroupToYaml" (tpl ($.Files.Get "src/rules.yaml.tpl") $ | fromYaml).groups | indent 4 }}

@ -27,7 +27,7 @@ spec:
replacement: "$1"
separator: "-"
sourceLabels:
- __meta_kubernetes_pod_label_app_kubernetes_io_instance
- __meta_kubernetes_pod_label_app_kubernetes_io_name
- __meta_kubernetes_pod_label_app_kubernetes_io_component
targetLabel: __service__
- action: replace

@ -54,7 +54,7 @@ spec:
{{- toYaml .Values.loki.podSecurityContext | nindent 8 }}
terminationGracePeriodSeconds: {{ .Values.read.terminationGracePeriodSeconds }}
containers:
- name: read
- name: loki
image: {{ include "loki.image" . }}
imagePullPolicy: {{ .Values.loki.image.pullPolicy }}
args:

@ -63,7 +63,7 @@ spec:
{{- toYaml .Values.loki.podSecurityContext | nindent 8 }}
terminationGracePeriodSeconds: {{ .Values.read.terminationGracePeriodSeconds }}
containers:
- name: read
- name: loki
image: {{ include "loki.image" . }}
imagePullPolicy: {{ .Values.loki.image.pullPolicy }}
args:

@ -69,7 +69,7 @@ spec:
{{- end }}
{{- end }}
containers:
- name: single-binary
- name: loki
image: {{ include "loki.image" . }}
imagePullPolicy: {{ .Values.loki.image.pullPolicy }}
args:

@ -70,7 +70,7 @@ spec:
{{- end }}
{{- end }}
containers:
- name: write
- name: loki
image: {{ include "loki.image" . }}
imagePullPolicy: {{ .Values.loki.image.pullPolicy }}
args:

@ -520,6 +520,8 @@ monitoring:
    enabled: true
    # -- Include alerting rules
    alerting: true
    # -- Alternative namespace to create PrometheusRule resources in
    namespace: null
    # -- Additional annotations for the rules PrometheusRule resource
    annotations: {}
    # -- Additional labels for the rules PrometheusRule resource
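The new value can then point rule resources at the namespace the Prometheus Operator actually watches, for example (namespace name is illustrative):
```yaml
monitoring:
  rules:
    enabled: true
    alerting: true
    namespace: monitoring
```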

@ -10,7 +10,7 @@ local helm = tanka.helm.new(std.thisFile);
local spec = (import './spec.json').spec;
local enterprise = std.extVar('enterprise');
local clusterName = if enterprise then 'enterprise-logs-test-fixture' else 'loki';
local lokiGatewayUrl = if enterprise then
  'http://enterprise-logs-gateway.loki.svc.cluster.local'
else 'http://loki-gateway.loki.svc.cluster.local';
@ -27,11 +27,76 @@ local tenant = 'loki';
  lokiNamespace: k.core.v1.namespace.new('loki'),
  prometheus: helm.template('prometheus', '../../charts/kube-prometheus-stack', {
    local clusterRelabel =
      {
        action: 'replace',
        replacement: clusterName,
        targetLabel: 'cluster',
      },
    namespace: $._config.namespace,
    values+: {
      grafana+: {
        enabled: false,
      },
      kubelet: {
        serviceMonitor: {
          cAdvisorRelabelings: [
            clusterRelabel,
            {
              targetLabel: 'metrics_path',
              sourceLabels: [
                '__metrics_path__',
              ],
            },
            {
              targetLabel: 'instance',
              sourceLabels: [
                'node',
              ],
            },
          ],
        },
      },
      defaultRules: {
        additionalRuleLabels: {
          cluster: clusterName,
        },
      },
      'kube-state-metrics': {
        prometheus: {
          monitor: {
            relabelings: [
              clusterRelabel,
              {
                targetLabel: 'instance',
                sourceLabels: [
                  '__meta_kubernetes_pod_node_name',
                ],
              },
            ],
          },
        },
      },
      'prometheus-node-exporter': {
        prometheus: {
          monitor: {
            relabelings: [
              clusterRelabel,
              {
                targetLabel: 'instance',
                sourceLabels: [
                  '__meta_kubernetes_pod_node_name',
                ],
              },
            ],
          },
        },
      },
      prometheus: {
        prometheusSpec: {
          serviceMonitorSelector: {
@ -40,6 +105,11 @@ local tenant = 'loki';
            },
          },
        },
        monitor: {
          relabelings: [
            clusterRelabel,
          ],
        },
      },
    },
    kubeVersion: 'v1.18.0',

@ -6,7 +6,7 @@
"namespace": "environments/helm-cluster/main.jsonnet"
},
"spec": {
"apiServer": "https://0.0.0.0:39165",
"apiServer": "https://0.0.0.0:43619",
"namespace": "k3d-helm-cluster",
"resourceDefaults": {},
"expectVersions": {}

@ -26,5 +26,9 @@ monitoring:
  serviceMonitor:
    labels:
      release: "prometheus"
  rules:
    namespace: k3d-helm-cluster
    labels:
      release: "prometheus"
minio:
  enabled: true

@ -8,7 +8,7 @@
"subdir": "consul"
}
},
"version": "a924ab1b5fd4e6eacd7235a20978d050a27bdb65",
"version": "09a66a888bfedc6a347a07544bf533fd5975acdb",
"sum": "Po3c1Ic96ngrJCtOazic/7OsLkoILOKZWXWyZWl+od8="
},
{
@ -18,7 +18,7 @@
"subdir": "enterprise-metrics"
}
},
"version": "a924ab1b5fd4e6eacd7235a20978d050a27bdb65",
"version": "09a66a888bfedc6a347a07544bf533fd5975acdb",
"sum": "hi2ZpHKl7qWXmSZ46sAycjWEQK6oGsoECuDKQT1dA+k="
},
{
@ -28,7 +28,7 @@
"subdir": "etcd-operator"
}
},
"version": "a924ab1b5fd4e6eacd7235a20978d050a27bdb65",
"version": "09a66a888bfedc6a347a07544bf533fd5975acdb",
"sum": "duHm6wmUju5KHQurOe6dnXoKgl5gTUsfGplgbmAOsHw="
},
{
@ -38,7 +38,7 @@
"subdir": "grafana"
}
},
"version": "a924ab1b5fd4e6eacd7235a20978d050a27bdb65",
"version": "09a66a888bfedc6a347a07544bf533fd5975acdb",
"sum": "Y5nheroSOIwmE+djEVPq4OvvTxKenzdHhpEwaR3Ebjs="
},
{
@ -48,7 +48,7 @@
"subdir": "jaeger-agent-mixin"
}
},
"version": "a924ab1b5fd4e6eacd7235a20978d050a27bdb65",
"version": "09a66a888bfedc6a347a07544bf533fd5975acdb",
"sum": "nsukyr2SS8h97I2mxvBazXZp2fxu1i6eg+rKq3/NRwY="
},
{
@ -58,7 +58,7 @@
"subdir": "ksonnet-util"
}
},
"version": "a924ab1b5fd4e6eacd7235a20978d050a27bdb65",
"version": "09a66a888bfedc6a347a07544bf533fd5975acdb",
"sum": "/pkNOLhRqvQoPA0yYdUuJvpPHqhkCLauAUMD2ZHMIkE="
},
{
@ -78,7 +78,7 @@
"subdir": "memcached"
}
},
"version": "a924ab1b5fd4e6eacd7235a20978d050a27bdb65",
"version": "09a66a888bfedc6a347a07544bf533fd5975acdb",
"sum": "SWywAq4U0MRPMbASU0Ez8O9ArRNeoZzb75sEuReueow="
},
{
@ -88,7 +88,7 @@
"subdir": "tanka-util"
}
},
"version": "a924ab1b5fd4e6eacd7235a20978d050a27bdb65",
"version": "09a66a888bfedc6a347a07544bf533fd5975acdb",
"sum": "ShSIissXdvCy1izTCDZX6tY7qxCoepE5L+WJ52Hw7ZQ="
},
{
@ -118,7 +118,7 @@
"subdir": "1.20"
}
},
"version": "2ff9e542231fce1ba6d880d1157c81724f176bc6",
"version": "85543e49238903ac14b486321bd3d60fef09d9ef",
"sum": "K8hAiyQ4ELAsln24tcwTf/++hKM/3YvLXsOYN0zD8eM="
}
],
