[helm] add third scalable target `backend` (#7920)

**What this PR does / why we need it**:

Adds the new third scalable deployment mode target (`backend`) to the Helm
chart. This new target was added in #7650.
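
A minimal `values.yaml` sketch for opting into the three-target mode (the option names come from this PR; the image override mirrors the migration guide and CI values below and is only needed until a released image contains the `backend` target):

```yaml
loki:
  image:
    repository: "grafana/loki"
    tag: "main-f5fbfab-amd64"  # only until a release ships the backend target
read:
  legacyReadTarget: false      # opt into the three-target mode
backend:
  replicas: 3                  # chart default
```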
Commit 712bfdd0e0 (parent 4d5678aa17) by Trevor Whitney, committed via GitHub.
Changed files (total lines changed):

1. docs/sources/installation/helm/migrate-to-three-scalable-targets/index.md (56)
2. docs/sources/installation/helm/reference.md (218)
3. production/helm/loki/CHANGELOG.md (10)
4. production/helm/loki/Chart.yaml (2)
5. production/helm/loki/README.md (2)
6. production/helm/loki/ci/three-targets.yaml (20)
7. production/helm/loki/templates/_helpers.tpl (164)
8. production/helm/loki/templates/backend/_helpers-backend.tpl (32)
9. production/helm/loki/templates/backend/poddisruptionbudget-backend.yaml (14)
10. production/helm/loki/templates/backend/service-backend-headless.yaml (25)
11. production/helm/loki/templates/backend/service-backend.yaml (26)
12. production/helm/loki/templates/backend/statefulset-backend.yaml (141)
13. production/helm/loki/templates/gateway/configmap-gateway.yaml (8)
14. production/helm/loki/templates/monitoring/pod-logs.yaml (13)
15. production/helm/loki/templates/read/deployment-read.yaml (143)
16. production/helm/loki/templates/read/statefulset-read.yaml (2)
17. production/helm/loki/templates/tests/_helpers.tpl (11)
18. production/helm/loki/templates/tests/test-canary.yaml (4)
19. production/helm/loki/values.yaml (294)
20. tools/dev/k3d/Makefile (15)
21. tools/dev/k3d/jsonnetfile.lock.json (20)

@@ -0,0 +1,56 @@
---
title: Migrate To Three Scalable Targets
menuTitle: Migrate to Three Targets
description: Migration guide for moving from two scalable to three scalable targets
aliases:
- /docs/installation/helm/migrate-from-distributed
weight: 100
keywords:
- migrate
- ssd
- scalable
- simple
---
# Migrating to Three Scalable Targets
This guide will walk you through migrating from the old two-target scalable configuration to the new three-target scalable configuration. The new configuration introduces a `backend` component and reduces the `read` component to running just a `Querier` and `QueryFrontend`, allowing it to run as a Kubernetes `Deployment` rather than a `StatefulSet`.
**Before you begin:**
We recommend having a Grafana instance available to monitor both the existing and new clusters, to make sure there is no data loss during the migration process. The `loki` chart ships with self-monitoring features, including dashboards. These are useful for monitoring the health of the cluster during migration.
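
For example, if an existing Prometheus Operator stack should scrape the chart's ServiceMonitor, the labels below (taken from the chart's CI values file, `production/helm/loki/ci/three-targets.yaml`) assume a kube-prometheus-stack release named `prometheus`; adjust them to your own setup:

```yaml
monitoring:
  serviceMonitor:
    labels:
      release: "prometheus"
```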
**To Migrate from a "read & write" to a "backend, read & write" deployment**
1. Make sure your deployment is using a new enough version of Loki
This feature landed as an option in the Helm chart while the `backend` target was still only available on the `main` branch of Loki. As a result, depending on when you run this migration, you may need to manually override the Loki or GEL image being used to one that has the third, `backend` target available. For Loki, add the following to your `values.yaml`.
```yaml
loki:
  image:
    repository: "grafana/loki"
    tag: "main-f5fbfab-amd64"
```
For GEL, you'll need to add:
```yaml
enterprise:
  image:
    repository: "grafana/enterprise-logs"
    tag: "main-96f32b9f"
```
1. Set the `legacyReadTarget` flag to false
Set the value `read.legacyReadTarget` to false. In your `values.yaml`, add:
```yaml
read:
  legacyReadTarget: false
```
1. Upgrade the Helm installation
Run `helm upgrade` on your installation with your updated `values.yaml` file.
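For example, assuming the release is named `loki` and was installed from the `grafana/loki` chart, this would look like `helm upgrade loki grafana/loki --values values.yaml`.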

@@ -27,6 +27,213 @@ This is the generated reference for the Loki Helm Chart values.
<th>Default</th>
</thead>
<tbody>
<tr>
<td>backend.affinity</td>
<td>string</td>
<td>Affinity for backend pods. Passed through `tpl`, so it must be configured as a string</td>
<td><pre lang="">
Hard node and soft zone anti-affinity
</pre>
</td>
</tr>
<tr>
<td>backend.extraArgs</td>
<td>list</td>
<td>Additional CLI args for the backend</td>
<td><pre lang="json">
[]
</pre>
</td>
</tr>
<tr>
<td>backend.extraEnv</td>
<td>list</td>
<td>Environment variables to add to the backend pods</td>
<td><pre lang="json">
[]
</pre>
</td>
</tr>
<tr>
<td>backend.extraEnvFrom</td>
<td>list</td>
<td>Environment variables from secrets or configmaps to add to the backend pods</td>
<td><pre lang="json">
[]
</pre>
</td>
</tr>
<tr>
<td>backend.extraVolumeMounts</td>
<td>list</td>
<td>Volume mounts to add to the backend pods</td>
<td><pre lang="json">
[]
</pre>
</td>
</tr>
<tr>
<td>backend.extraVolumes</td>
<td>list</td>
<td>Volumes to add to the backend pods</td>
<td><pre lang="json">
[]
</pre>
</td>
</tr>
<tr>
<td>backend.image.registry</td>
<td>string</td>
<td>The Docker registry for the backend image. Overrides `loki.image.registry`</td>
<td><pre lang="json">
null
</pre>
</td>
</tr>
<tr>
<td>backend.image.repository</td>
<td>string</td>
<td>Docker image repository for the backend image. Overrides `loki.image.repository`</td>
<td><pre lang="json">
null
</pre>
</td>
</tr>
<tr>
<td>backend.image.tag</td>
<td>string</td>
<td>Docker image tag for the backend image. Overrides `loki.image.tag`</td>
<td><pre lang="json">
null
</pre>
</td>
</tr>
<tr>
<td>backend.nodeSelector</td>
<td>object</td>
<td>Node selector for backend pods</td>
<td><pre lang="json">
{}
</pre>
</td>
</tr>
<tr>
<td>backend.persistence.enableStatefulSetAutoDeletePVC</td>
<td>bool</td>
<td>Enable StatefulSetAutoDeletePVC feature</td>
<td><pre lang="json">
true
</pre>
</td>
</tr>
<tr>
<td>backend.persistence.selector</td>
<td>string</td>
<td>Selector for persistent disk</td>
<td><pre lang="json">
null
</pre>
</td>
</tr>
<tr>
<td>backend.persistence.size</td>
<td>string</td>
<td>Size of persistent disk</td>
<td><pre lang="json">
"10Gi"
</pre>
</td>
</tr>
<tr>
<td>backend.persistence.storageClass</td>
<td>string</td>
<td>Storage class to be used. If defined, storageClassName: <storageClass>. If set to "-", storageClassName: "", which disables dynamic provisioning. If empty or set to null, no storageClassName spec is set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).</td>
<td><pre lang="json">
null
</pre>
</td>
</tr>
<tr>
<td>backend.podAnnotations</td>
<td>object</td>
<td>Annotations for backend pods</td>
<td><pre lang="json">
{}
</pre>
</td>
</tr>
<tr>
<td>backend.priorityClassName</td>
<td>string</td>
<td>The name of the PriorityClass for backend pods</td>
<td><pre lang="json">
null
</pre>
</td>
</tr>
<tr>
<td>backend.replicas</td>
<td>int</td>
<td>Number of replicas for the backend</td>
<td><pre lang="json">
3
</pre>
</td>
</tr>
<tr>
<td>backend.resources</td>
<td>object</td>
<td>Resource requests and limits for the backend</td>
<td><pre lang="json">
{}
</pre>
</td>
</tr>
<tr>
<td>backend.selectorLabels</td>
<td>object</td>
<td>Additional selector labels for each `backend` pod</td>
<td><pre lang="json">
{}
</pre>
</td>
</tr>
<tr>
<td>backend.serviceLabels</td>
<td>object</td>
<td>Labels for backend service</td>
<td><pre lang="json">
{}
</pre>
</td>
</tr>
<tr>
<td>backend.targetModule</td>
<td>string</td>
<td>Comma-separated list of Loki modules to load for the backend</td>
<td><pre lang="json">
"backend"
</pre>
</td>
</tr>
<tr>
<td>backend.terminationGracePeriodSeconds</td>
<td>int</td>
<td>Grace period to allow the backend to shut down before it is killed. Especially for the ingester, this must be increased. It must be long enough so backends can be gracefully shut down, flushing/transferring all data and successfully leaving the member ring on shutdown.</td>
<td><pre lang="json">
300
</pre>
</td>
</tr>
<tr>
<td>backend.tolerations</td>
<td>list</td>
<td>Tolerations for backend pods</td>
<td><pre lang="json">
[]
</pre>
</td>
</tr>
<tr>
<td>enterprise.adminApi</td>
<td>object</td>
@@ -144,7 +351,7 @@ null
<td>string</td>
<td></td>
<td><pre lang="json">
"worker_processes 5; ## Default: 1\nerror_log /dev/stderr;\npid /tmp/nginx.pid;\nworker_rlimit_nofile 8192;\n\nevents {\n worker_connections 4096; ## Default: 1024\n}\n\nhttp {\n client_body_temp_path /tmp/client_temp;\n proxy_temp_path /tmp/proxy_temp_path;\n fastcgi_temp_path /tmp/fastcgi_temp;\n uwsgi_temp_path /tmp/uwsgi_temp;\n scgi_temp_path /tmp/scgi_temp;\n\n proxy_http_version 1.1;\n\n default_type application/octet-stream;\n log_format {{ .Values.gateway.nginxConfig.logFormat }}\n\n {{- if .Values.gateway.verboseLogging }}\n access_log /dev/stderr main;\n {{- else }}\n\n map $status $loggable {\n ~^[23] 0;\n default 1;\n }\n access_log /dev/stderr main if=$loggable;\n {{- end }}\n\n sendfile on;\n tcp_nopush on;\n resolver {{ .Values.global.dnsService }}.{{ .Values.global.dnsNamespace }}.svc.{{ .Values.global.clusterDomain }}.;\n\n {{- with .Values.gateway.nginxConfig.httpSnippet }}\n {{ . | nindent 2 }}\n {{- end }}\n\n server {\n listen 8080;\n\n {{- if .Values.gateway.basicAuth.enabled }}\n auth_basic \"Loki\";\n auth_basic_user_file /etc/nginx/secrets/.htpasswd;\n {{- end }}\n\n location = / {\n return 200 'OK';\n auth_basic off;\n }\n\n location = /api/prom/push {\n proxy_pass http://{{ include \"loki.writeFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n }\n\n location = /api/prom/tail {\n proxy_pass http://{{ include \"loki.readFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"upgrade\";\n }\n\n location ~ /api/prom/.* {\n proxy_pass http://{{ include \"loki.readFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n }\n\n location ~ /prometheus/api/v1/alerts.* {\n proxy_pass http://{{ include \"loki.readFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n }\n\n location ~ /prometheus/api/v1/rules.* {\n proxy_pass http://{{ include \"loki.readFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n }\n\n location = /loki/api/v1/push {\n proxy_pass http://{{ include \"loki.writeFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n }\n\n location = /loki/api/v1/tail {\n proxy_pass http://{{ include \"loki.readFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"upgrade\";\n }\n\n location ~ /loki/api/.* {\n proxy_pass http://{{ include \"loki.readFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n }\n\n location ~ /admin/api/.* {\n proxy_pass http://{{ include \"loki.writeFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n }\n\n location ~ /compactor/.* {\n proxy_pass http://{{ include \"loki.readFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n }\n\n location ~ /distributor/.* {\n proxy_pass http://{{ include \"loki.writeFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n }\n\n location ~ /ring {\n proxy_pass http://{{ include \"loki.writeFullname\" . 
}}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n }\n\n location ~ /ingester/.* {\n proxy_pass http://{{ include \"loki.writeFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n }\n\n location ~ /ruler/.* {\n proxy_pass http://{{ include \"loki.readFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n }\n\n location ~ /scheduler/.* {\n proxy_pass http://{{ include \"loki.readFullname\" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;\n }\n\n {{- with .Values.gateway.nginxConfig.serverSnippet }}\n {{ . | nindent 4 }}\n {{- end }}\n }\n}\n"
null
</pre>
</td>
</tr>
@@ -2493,6 +2700,15 @@ null
<td><pre lang="json">
null
</pre>
</td>
</tr>
<tr>
<td>read.legacyReadTarget</td>
<td>bool</td>
<td>Set to false to enable the new 3-target mode (read, write, backend) that will be the default in a future version of Loki</td>
<td><pre lang="json">
true
</pre>
</td>
</tr>
<tr>

@@ -11,13 +11,19 @@ Entries should be ordered as follows:
Entries should include a reference to the pull request that introduced the change.
## 3.8.2
## 3.10.0
- [CHANGE] Deprecate `enterprise.nginxConfig.file`. Both enterprise and gateway configurations now share the same nginx config. Admin routes will 404 on OSS deployments. It will be removed in version 4 of the chart; please use `gateway.nginxConfig.file` for both OSS and Enterprise gateways.
- [FEATURE] Added new simple deployment target `backend`. Running 3 targets for simple deployment will soon be the default in Loki. This new target allows the `read` target to be run as a deployment and auto-scaled.
- [FEATURE] Added `extraObjects` helm values to extra manifests.
## 3.9.0
- [BUGFIX] Fix race condition between minio create bucket job and enterprise tokengen job
## 3.8.2
- [FEATURE] Added `extraObjects` helm values to extra manifests.
## 3.8.1
- [ENHANCEMENT] Add the ability to specify container lifecycle

@@ -4,7 +4,7 @@ name: loki
description: Helm chart for Grafana Loki in simple, scalable mode
type: application
appVersion: 2.7.0
version: 3.9.0
version: 3.10.0
home: https://grafana.github.io/helm-charts
sources:
- https://github.com/grafana/loki

@@ -1,6 +1,6 @@
# loki
![Version: 3.9.0](https://img.shields.io/badge/Version-3.9.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 2.7.0](https://img.shields.io/badge/AppVersion-2.7.0-informational?style=flat-square)
![Version: 3.10.0](https://img.shields.io/badge/Version-3.10.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 2.7.0](https://img.shields.io/badge/AppVersion-2.7.0-informational?style=flat-square)
Helm chart for Grafana Loki in simple, scalable mode

@@ -0,0 +1,20 @@
---
loki:
  commonConfig:
    replication_factor: 1
  image:
    repository: "grafana/loki"
    tag: "main-f5fbfab-amd64"
read:
  replicas: 1
  legacyReadTarget: false
write:
  replicas: 1
backend:
  replicas: 1
monitoring:
  serviceMonitor:
    labels:
      release: "prometheus"
test:
  prometheusAddress: "http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local.:9090"

@@ -485,3 +485,167 @@ Create the service endpoint including port for MinIO.
{{- define "enterprise-logs.canarySecret" }}
{{- .Values.enterprise.canarySecret | default (printf "%s-canary-secret" (include "loki.name" . )) -}}
{{- end -}}
{{/* Snippet for the nginx file used by gateway */}}
{{- define "loki.nginxFile" }}
worker_processes 5; ## Default: 1
error_log /dev/stderr;
pid /tmp/nginx.pid;
worker_rlimit_nofile 8192;
events {
worker_connections 4096; ## Default: 1024
}
http {
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
proxy_http_version 1.1;
default_type application/octet-stream;
log_format {{ .Values.gateway.nginxConfig.logFormat }}
{{- if .Values.gateway.verboseLogging }}
access_log /dev/stderr main;
{{- else }}
map $status $loggable {
~^[23] 0;
default 1;
}
access_log /dev/stderr main if=$loggable;
{{- end }}
sendfile on;
tcp_nopush on;
resolver {{ .Values.global.dnsService }}.{{ .Values.global.dnsNamespace }}.svc.{{ .Values.global.clusterDomain }}.;
{{- with .Values.gateway.nginxConfig.httpSnippet }}
{{ . | nindent 2 }}
{{- end }}
server {
listen 8080;
{{- if .Values.gateway.basicAuth.enabled }}
auth_basic "Loki";
auth_basic_user_file /etc/nginx/secrets/.htpasswd;
{{- end }}
location = / {
return 200 'OK';
auth_basic off;
}
location = /api/prom/push {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location = /api/prom/tail {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location ~ /api/prom/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
{{- if .Values.read.legacyReadTarget }}
location ~ /prometheus/api/v1/alerts.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /prometheus/api/v1/rules.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /ruler/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
{{- else }}
location ~ /prometheus/api/v1/alerts.* {
proxy_pass http://{{ include "loki.backendFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /prometheus/api/v1/rules.* {
proxy_pass http://{{ include "loki.backendFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /ruler/.* {
proxy_pass http://{{ include "loki.backendFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
{{- end }}
location = /loki/api/v1/push {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location = /loki/api/v1/tail {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
{{- if .Values.read.legacyReadTarget }}
location ~ /compactor/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
{{- else }}
location ~ /compactor/.* {
proxy_pass http://{{ include "loki.backendFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
{{- end }}
location ~ /distributor/.* {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /ring {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /ingester/.* {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
{{- if .Values.read.legacyReadTarget }}
location ~ /store-gateway/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
{{- else }}
location ~ /store-gateway/.* {
proxy_pass http://{{ include "loki.backendFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
{{- end }}
{{- if .Values.read.legacyReadTarget }}
location ~ /query-scheduler/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /scheduler/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
{{- else }}
location ~ /query-scheduler/.* {
proxy_pass http://{{ include "loki.backendFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /scheduler/.* {
proxy_pass http://{{ include "loki.backendFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
{{- end }}
location ~ /loki/api/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /admin/api/.* {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
{{- with .Values.gateway.nginxConfig.serverSnippet }}
{{ . | nindent 4 }}
{{- end }}
}
}
{{- end }}

@@ -0,0 +1,32 @@
{{/*
backend fullname
*/}}
{{- define "loki.backendFullname" -}}
{{ include "loki.name" . }}-backend
{{- end }}
{{/*
backend common labels
*/}}
{{- define "loki.backendLabels" -}}
{{ include "loki.labels" . }}
app.kubernetes.io/component: backend
{{- end }}
{{/*
backend selector labels
*/}}
{{- define "loki.backendSelectorLabels" -}}
{{ include "loki.selectorLabels" . }}
app.kubernetes.io/component: backend
{{- end }}
{{/*
backend priority class name
*/}}
{{- define "loki.backendPriorityClassName" -}}
{{- $pcn := coalesce .Values.global.priorityClassName .Values.backend.priorityClassName -}}
{{- if $pcn }}
priorityClassName: {{ $pcn }}
{{- end }}
{{- end }}
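
As a usage sketch (names are illustrative): assuming `loki.name` resolves to `loki`, the backend manifests below render roughly the following metadata from these helpers:

```yaml
# e.g. in service-backend.yaml / statefulset-backend.yaml
name: loki-backend                       # via {{ include "loki.backendFullname" . }}
labels:
  app.kubernetes.io/component: backend   # appended by loki.backendLabels / loki.backendSelectorLabels
```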

@@ -0,0 +1,14 @@
{{- $isSimpleScalable := eq (include "loki.deployment.isScalable" .) "true" -}}
{{- if and $isSimpleScalable (gt (int .Values.backend.replicas) 1) (not .Values.read.legacyReadTarget ) }}
apiVersion: {{ include "loki.podDisruptionBudget.apiVersion" . }}
kind: PodDisruptionBudget
metadata:
  name: {{ include "loki.backendFullname" . }}
  labels:
    {{- include "loki.backendLabels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
      {{- include "loki.backendSelectorLabels" . | nindent 6 }}
  maxUnavailable: 1
{{- end }}

@@ -0,0 +1,25 @@
{{- $isSimpleScalable := eq (include "loki.deployment.isScalable" .) "true" -}}
{{- if and $isSimpleScalable (not .Values.read.legacyReadTarget ) }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "loki.backendFullname" . }}-headless
  labels:
    {{- include "loki.backendSelectorLabels" . | nindent 4 }}
    prometheus.io/service-monitor: "false"
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: http-metrics
      port: 3100
      targetPort: http-metrics
      protocol: TCP
    - name: grpc
      port: 9095
      targetPort: grpc
      protocol: TCP
  selector:
    {{- include "loki.backendSelectorLabels" . | nindent 4 }}
{{- end }}

@@ -0,0 +1,26 @@
{{- $isSimpleScalable := eq (include "loki.deployment.isScalable" .) "true" -}}
{{- if and $isSimpleScalable (not .Values.read.legacyReadTarget ) }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "loki.backendFullname" . }}
  labels:
    {{- include "loki.backendLabels" . | nindent 4 }}
    {{- with .Values.backend.serviceLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  type: ClusterIP
  ports:
    - name: http-metrics
      port: 3100
      targetPort: http-metrics
      protocol: TCP
    - name: grpc
      port: 9095
      targetPort: grpc
      protocol: TCP
  selector:
    {{- include "loki.backendSelectorLabels" . | nindent 4 }}
{{- end }}

@@ -0,0 +1,141 @@
{{- $isSimpleScalable := eq (include "loki.deployment.isScalable" .) "true" -}}
{{- if and $isSimpleScalable (not .Values.read.legacyReadTarget ) }}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "loki.backendFullname" . }}
labels:
{{- include "loki.backendLabels" . | nindent 4 }}
app.kubernetes.io/part-of: memberlist
spec:
replicas: {{ .Values.backend.replicas }}
podManagementPolicy: Parallel
updateStrategy:
rollingUpdate:
partition: 0
serviceName: {{ include "loki.backendFullname" . }}-headless
revisionHistoryLimit: {{ .Values.loki.revisionHistoryLimit }}
selector:
matchLabels:
{{- include "loki.backendSelectorLabels" . | nindent 6 }}
template:
metadata:
annotations:
checksum/config: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- with .Values.loki.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.backend.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "loki.backendSelectorLabels" . | nindent 8 }}
{{- with .Values.backend.selectorLabels }}
{{- tpl (toYaml .) $ | nindent 8 }}
{{- end }}
app.kubernetes.io/part-of: memberlist
spec:
serviceAccountName: {{ include "loki.serviceAccountName" . }}
automountServiceAccountToken: {{ .Values.serviceAccount.automountServiceAccountToken }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- include "loki.backendPriorityClassName" . | nindent 6 }}
securityContext:
{{- toYaml .Values.loki.podSecurityContext | nindent 8 }}
terminationGracePeriodSeconds: {{ .Values.backend.terminationGracePeriodSeconds }}
containers:
- name: backend
image: {{ include "loki.image" . }}
imagePullPolicy: {{ .Values.loki.image.pullPolicy }}
args:
- -config.file=/etc/loki/config/config.yaml
- -target={{ .Values.backend.targetModule }}
- -legacy-read-mode=false
{{- with .Values.backend.extraArgs }}
{{- toYaml . | nindent 12 }}
{{- end }}
ports:
- name: http-metrics
containerPort: 3100
protocol: TCP
- name: grpc
containerPort: 9095
protocol: TCP
- name: http-memberlist
containerPort: 7946
protocol: TCP
{{- with .Values.backend.extraEnv }}
env:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.backend.extraEnvFrom }}
envFrom:
{{- toYaml . | nindent 12 }}
{{- end }}
securityContext:
{{- toYaml .Values.loki.containerSecurityContext | nindent 12 }}
readinessProbe:
{{- toYaml .Values.loki.readinessProbe | nindent 12 }}
volumeMounts:
- name: config
mountPath: /etc/loki/config
- name: data
mountPath: /var/loki
{{- if .Values.enterprise.enabled }}
- name: license
mountPath: /etc/loki/license
{{- end}}
{{- with .Values.backend.extraVolumeMounts }}
{{- toYaml . | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.backend.resources | nindent 12 }}
{{- with .Values.backend.affinity }}
affinity:
{{- tpl . $ | nindent 8 }}
{{- end }}
{{- with .Values.backend.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.backend.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: config
{{- if .Values.loki.existingSecretForConfig }}
secret:
secretName: {{ .Values.loki.existingSecretForConfig }}
{{- else }}
configMap:
name: {{ include "loki.name" . }}
{{- end }}
{{- if .Values.enterprise.enabled }}
- name: license
secret:
{{- if .Values.enterprise.useExternalLicense }}
secretName: {{ .Values.enterprise.externalLicenseName }}
{{- else }}
secretName: enterprise-logs-license
{{- end }}
{{- end }}
{{- with .Values.backend.extraVolumes }}
{{- toYaml . | nindent 8 }}
{{- end }}
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
{{- with .Values.backend.persistence.storageClass }}
storageClassName: {{ if (eq "-" .) }}""{{ else }}{{ . }}{{ end }}
{{- end }}
resources:
requests:
storage: {{ .Values.backend.persistence.size | quote }}
{{- end }}

@@ -8,5 +8,11 @@ metadata:
{{- include "loki.gatewayLabels" . | nindent 4 }}
data:
nginx.conf: |
{{- tpl (ternary .Values.enterprise.nginxConfig.file .Values.gateway.nginxConfig.file .Values.enterprise.enabled) . | nindent 4 }}
{{- if .Values.enterprise.enabled }}
{{- $file := ( .Values.enterprise.nginxConfig.file | default .Values.gateway.nginxConfig.file) }}
{{- $indent := ternary 2 4 (empty .Values.enterprise.nginxConfig.file) }}
{{- tpl $file . | nindent $indent }}
{{- else }}
{{- tpl .Values.gateway.nginxConfig.file . | indent 2 }}
{{- end }}
{{- end }}
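
Since `enterprise.nginxConfig.file` is deprecated in this release (see the CHANGELOG entry above), any custom gateway config should now be supplied through `gateway.nginxConfig.file` for both OSS and Enterprise installs. A hypothetical override sketch (the contents are placeholders; the value is still templated through `tpl`):

```yaml
gateway:
  nginxConfig:
    file: |
      worker_processes 5;
      # ... rest of the custom nginx configuration ...
```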

@@ -25,9 +25,18 @@ spec:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- action: replace
replacement: "{{ $.Release.Namespace }}/$1"
replacement: "$1"
separator: "-"
sourceLabels:
- __meta_kubernetes_pod_controller_name
- __meta_kubernetes_pod_label_app_kubernetes_io_instance
- __meta_kubernetes_pod_label_app_kubernetes_io_component
targetLabel: __service__
- action: replace
replacement: "$1"
separator: "/"
sourceLabels:
- __meta_kubernetes_namespace
- __service__
targetLabel: job
- action: replace
sourceLabels:

@@ -0,0 +1,143 @@
{{- $isSimpleScalable := eq (include "loki.deployment.isScalable" .) "true" -}}
{{- if and $isSimpleScalable (not .Values.read.legacyReadTarget ) }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "loki.readFullname" . }}
labels:
app.kubernetes.io/part-of: memberlist
{{- include "loki.readLabels" . | nindent 4 }}
spec:
{{- if not .Values.read.autoscaling.enabled }}
replicas: {{ .Values.read.replicas }}
{{- end }}
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
revisionHistoryLimit: {{ .Values.loki.revisionHistoryLimit }}
selector:
matchLabels:
{{- include "loki.readSelectorLabels" . | nindent 6 }}
template:
metadata:
annotations:
checksum/config: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- with .Values.loki.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.read.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
app.kubernetes.io/part-of: memberlist
{{- include "loki.readSelectorLabels" . | nindent 8 }}
{{- with .Values.loki.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.read.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.read.selectorLabels }}
{{- tpl (toYaml .) $ | nindent 8 }}
{{- end }}
spec:
serviceAccountName: {{ include "loki.serviceAccountName" . }}
automountServiceAccountToken: {{ .Values.serviceAccount.automountServiceAccountToken }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- include "loki.readPriorityClassName" . | nindent 6 }}
securityContext:
{{- toYaml .Values.loki.podSecurityContext | nindent 8 }}
terminationGracePeriodSeconds: {{ .Values.read.terminationGracePeriodSeconds }}
containers:
- name: read
image: {{ include "loki.image" . }}
imagePullPolicy: {{ .Values.loki.image.pullPolicy }}
args:
- -config.file=/etc/loki/config/config.yaml
- -target={{ .Values.read.targetModule }}
- -legacy-read-mode=false
- -common.compactor-grpc-address={{ include "loki.backendFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:9095
{{- with .Values.read.extraArgs }}
{{- toYaml . | nindent 12 }}
{{- end }}
ports:
- name: http-metrics
containerPort: 3100
protocol: TCP
- name: grpc
containerPort: 9095
protocol: TCP
- name: http-memberlist
containerPort: 7946
protocol: TCP
{{- with .Values.read.extraEnv }}
env:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.read.extraEnvFrom }}
envFrom:
{{- toYaml . | nindent 12 }}
{{- end }}
securityContext:
{{- toYaml .Values.loki.containerSecurityContext | nindent 12 }}
readinessProbe:
{{- toYaml .Values.loki.readinessProbe | nindent 12 }}
volumeMounts:
- name: config
mountPath: /etc/loki/config
- name: tmp
mountPath: /tmp
- name: data
mountPath: /var/loki
{{- if .Values.enterprise.enabled }}
- name: license
mountPath: /etc/loki/license
{{- end}}
{{- with .Values.read.extraVolumeMounts }}
{{- toYaml . | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.read.resources | nindent 12 }}
{{- with .Values.read.affinity }}
affinity:
{{- tpl . $ | nindent 8 }}
{{- end }}
{{- with .Values.read.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.read.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: tmp
emptyDir: {}
- name: data
emptyDir: {}
- name: config
{{- if .Values.loki.existingSecretForConfig }}
secret:
secretName: {{ .Values.loki.existingSecretForConfig }}
{{- else }}
configMap:
name: {{ include "loki.name" . }}
{{- end }}
{{- if .Values.enterprise.enabled }}
- name: license
secret:
{{- if .Values.enterprise.useExternalLicense }}
secretName: {{ .Values.enterprise.externalLicenseName }}
{{- else }}
secretName: enterprise-logs-license
{{- end }}
{{- end }}
{{- with .Values.read.extraVolumes }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}

@@ -1,5 +1,5 @@
{{- $isSimpleScalable := eq (include "loki.deployment.isScalable" .) "true" -}}
{{- if $isSimpleScalable }}
{{- if and $isSimpleScalable (.Values.read.legacyReadTarget ) }}
---
apiVersion: apps/v1
kind: StatefulSet

@@ -1,7 +1,16 @@
{{/*
Docker image name for loki helm test
*/}}
{{- define "loki.helm-test-image" -}}
{{- define "loki.helmTestImage" -}}
{{- $dict := dict "service" .Values.test.image "global" .Values.global.image "defaultVersion" "latest" -}}
{{- include "loki.baseImage" $dict -}}
{{- end -}}
{{/*
test common labels
*/}}
{{- define "loki.helmTestLabels" -}}
{{ include "loki.labels" . }}
app.kubernetes.io/component: helm-test
{{- end }}

@@ -6,7 +6,7 @@ kind: Pod
metadata:
name: "{{ include "loki.name" $ }}-helm-test"
labels:
{{- include "loki.labels" $ | nindent 4 }}
{{- include "loki.helmTestLabels" $ | nindent 4 }}
{{- with .labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
@@ -18,7 +18,7 @@ metadata:
spec:
containers:
- name: loki-helm-test
image: {{ include "loki.helm-test-image" $ }}
image: {{ include "loki.helmTestImage" $ }}
env:
- name: CANARY_PROMETHEUS_ADDRESS
value: "{{ .prometheusAddress }}"

@@ -411,130 +411,11 @@ enterprise:
# -- Volume mounts to add to the provisioner pods
extraVolumeMounts: []
# DEPRECATED: will be removed in version 4 of chart
# please use gateway.nginxConfig.file for both OSS and
# enterprise gateways.
nginxConfig:
file: |
worker_processes 5; ## Default: 1
error_log /dev/stderr;
pid /tmp/nginx.pid;
worker_rlimit_nofile 8192;
events {
worker_connections 4096; ## Default: 1024
}
http {
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
proxy_http_version 1.1;
default_type application/octet-stream;
log_format {{ .Values.gateway.nginxConfig.logFormat }}
{{- if .Values.gateway.verboseLogging }}
access_log /dev/stderr main;
{{- else }}
map $status $loggable {
~^[23] 0;
default 1;
}
access_log /dev/stderr main if=$loggable;
{{- end }}
sendfile on;
tcp_nopush on;
resolver {{ .Values.global.dnsService }}.{{ .Values.global.dnsNamespace }}.svc.{{ .Values.global.clusterDomain }}.;
{{- with .Values.gateway.nginxConfig.httpSnippet }}
{{ . | nindent 2 }}
{{- end }}
server {
listen 8080;
{{- if .Values.gateway.basicAuth.enabled }}
auth_basic "Loki";
auth_basic_user_file /etc/nginx/secrets/.htpasswd;
{{- end }}
location = / {
return 200 'OK';
auth_basic off;
}
location = /api/prom/push {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location = /api/prom/tail {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location ~ /api/prom/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /prometheus/api/v1/alerts.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /prometheus/api/v1/rules.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location = /loki/api/v1/push {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location = /loki/api/v1/tail {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location ~ /loki/api/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /admin/api/.* {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /compactor/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /distributor/.* {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /ring {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /ingester/.* {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /ruler/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /scheduler/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
{{- with .Values.gateway.nginxConfig.serverSnippet }}
{{ . | nindent 4 }}
{{- end }}
}
}
file: null
# -- Options that may be necessary when performing a migration from another helm chart
migrate:
@@ -738,7 +619,7 @@ monitoring:
# -- Docker image pull policy
pullPolicy: IfNotPresent
# Configuration for the write
# Configuration for the write pod(s)
write:
# -- Number of replicas for the write
replicas: 3
@@ -804,7 +685,7 @@ write:
# -- Selector for persistent disk
selector: null
# Configuration for the read node(s)
# Configuration for the read pod(s)
read:
# -- Number of replicas for the read
replicas: 3
@@ -838,6 +719,9 @@ read:
  serviceLabels: {}
  # -- Comma-separated list of Loki modules to load for the read
  targetModule: "read"
  # -- Set to false to enable the new 3-target mode (read, write, backend) that will be the default
  # in a future version of Loki
  legacyReadTarget: true
  # -- Additional CLI args for the read
  extraArgs: []
  # -- Environment variables to add to the read pods
@@ -881,6 +765,70 @@ read:
    # -- Selector for persistent disk
    selector: null
# Configuration for the backend pod(s)
backend:
  # -- Number of replicas for the backend
  replicas: 3
  image:
    # -- The Docker registry for the backend image. Overrides `loki.image.registry`
    registry: null
    # -- Docker image repository for the backend image. Overrides `loki.image.repository`
    repository: null
    # -- Docker image tag for the backend image. Overrides `loki.image.tag`
    tag: null
  # -- The name of the PriorityClass for backend pods
  priorityClassName: null
  # -- Annotations for backend pods
  podAnnotations: {}
  # -- Additional selector labels for each `backend` pod
  selectorLabels: {}
  # -- Labels for backend service
  serviceLabels: {}
  # -- Comma-separated list of Loki modules to load for the backend
  targetModule: "backend"
  # -- Additional CLI args for the backend
  extraArgs: []
  # -- Environment variables to add to the backend pods
  extraEnv: []
  # -- Environment variables from secrets or configmaps to add to the backend pods
  extraEnvFrom: []
  # -- Volume mounts to add to the backend pods
  extraVolumeMounts: []
  # -- Volumes to add to the backend pods
  extraVolumes: []
  # -- Resource requests and limits for the backend
  resources: {}
  # -- Grace period to allow the backend to shut down before it is killed. Especially for the ingester,
  # this must be increased. It must be long enough so backends can be gracefully shut down, flushing/transferring
  # all data and successfully leaving the member ring on shutdown.
  terminationGracePeriodSeconds: 300
  # -- Affinity for backend pods. Passed through `tpl`, so it must be configured as a string
  # @default -- Hard node and soft zone anti-affinity
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              {{- include "loki.backendSelectorLabels" . | nindent 10 }}
          topologyKey: kubernetes.io/hostname
  # -- Node selector for backend pods
  nodeSelector: {}
  # -- Tolerations for backend pods
  tolerations: []
  persistence:
    # -- Enable StatefulSetAutoDeletePVC feature
    enableStatefulSetAutoDeletePVC: true
    # -- Size of persistent disk
    size: 10Gi
    # -- Storage class to be used.
    # If defined, storageClassName: <storageClass>.
    # If set to "-", storageClassName: "", which disables dynamic provisioning.
    # If empty or set to null, no storageClassName spec is
    # set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).
    storageClass: null
    # -- Selector for persistent disk
    selector: null
# Configuration for the single binary node(s)
singleBinary:
  # -- Number of replicas for the single binary
@@ -1147,101 +1095,7 @@ gateway:
# -- Config file contents for Nginx. Passed through the `tpl` function to allow templating
# @default -- See values.yaml
file: |
worker_processes 5; ## Default: 1
error_log /dev/stderr;
pid /tmp/nginx.pid;
worker_rlimit_nofile 8192;
events {
worker_connections 4096; ## Default: 1024
}
http {
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
proxy_http_version 1.1;
default_type application/octet-stream;
log_format {{ .Values.gateway.nginxConfig.logFormat }}
{{- if .Values.gateway.verboseLogging }}
access_log /dev/stderr main;
{{- else }}
map $status $loggable {
~^[23] 0;
default 1;
}
access_log /dev/stderr main if=$loggable;
{{- end }}
sendfile on;
tcp_nopush on;
resolver {{ .Values.global.dnsService }}.{{ .Values.global.dnsNamespace }}.svc.{{ .Values.global.clusterDomain }}.;
{{- with .Values.gateway.nginxConfig.httpSnippet }}
{{ . | nindent 2 }}
{{- end }}
server {
listen 8080;
{{- if .Values.gateway.basicAuth.enabled }}
auth_basic "Loki";
auth_basic_user_file /etc/nginx/secrets/.htpasswd;
{{- end }}
location = / {
return 200 'OK';
auth_basic off;
}
location = /api/prom/push {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location = /api/prom/tail {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location ~ /api/prom/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /prometheus/api/v1/alerts.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /prometheus/api/v1/rules.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location = /loki/api/v1/push {
proxy_pass http://{{ include "loki.writeFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location = /loki/api/v1/tail {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location ~ /loki/api/.* {
proxy_pass http://{{ include "loki.readFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
{{- with .Values.gateway.nginxConfig.serverSnippet }}
{{ . | nindent 4 }}
{{- end }}
}
}
{{- include "loki.nginxFile" . | indent 2 -}}
networkPolicy:
# -- Specifies whether Network Policies should be created
enabled: false

@@ -1,4 +1,4 @@
.PHONY: loki-distributed down add-repos update-repos prepare build-latest-image
.PHONY: loki-distributed down add-repos update-repos prepare prepare-gel build-latest-image
IMAGE_TAG := $(shell ../../../tools/image-tag)
EXISTING_REGISTRY_PORT := $(shell k3d registry list -o json | jq -r '.[] | select(.name == "k3d-grafana") | .portMappings."5000/tcp" | .[0].HostPort')
@@ -10,7 +10,7 @@ loki-distributed: prepare build-latest-image
sleep 5
tk apply --ext-str registry="k3d-grafana:$(REGISTRY_PORT)" environments/loki-distributed
enterprise-logs: prepare
enterprise-logs: prepare-gel
$(CURDIR)/scripts/create_cluster.sh enterprise-logs $(REGISTRY_PORT)
# wait 5s for the cluster to be ready
sleep 5
@@ -23,7 +23,7 @@ helm-cluster: prepare
# wait 5s for the cluster to be ready
sleep 5
$(MAKE) -C $(CURDIR) apply-helm-cluster
apply-enterprise-logs:
tk apply --ext-str registry="k3d-grafana:$(REGISTRY_PORT)" environments/enterprise-logs
@@ -72,7 +72,8 @@ secrets/gel.jwt:
mkdir -p secrets/
op document get "loki/gel.jwt" --output=$(CURDIR)/secrets/gel.jwt
prepare: create-registry update-repos secrets
prepare: create-registry update-repos
prepare-gel: prepare secrets
build-latest-image:
make -C $(CURDIR)/../../.. loki-image
@@ -82,3 +83,9 @@ build-latest-image:
HELM_DIR := $(shell cd $(CURDIR)/../../../production/helm/loki && pwd)
helm-install-enterprise-logs:
helm install enterprise-logs-test-fixture "$(HELM_DIR)" -n loki --create-namespace --values "$(CURDIR)/environments/helm-cluster/values/enterprise-logs.yaml"
helm-upgrade-enterprise-logs:
helm upgrade enterprise-logs-test-fixture "$(HELM_DIR)" -n loki --values "$(CURDIR)/environments/helm-cluster/values/enterprise-logs.yaml"
helm-uninstall-enterprise-logs:
helm uninstall enterprise-logs-test-fixture -n loki

@@ -8,7 +8,7 @@
"subdir": "consul"
}
},
"version": "bb39488d030dd783ac8ceaa1ff936be14be993f0",
"version": "3219da57b18acbade37caaa42895c68f477cef95",
"sum": "Po3c1Ic96ngrJCtOazic/7OsLkoILOKZWXWyZWl+od8="
},
{
@@ -18,7 +18,7 @@
"subdir": "enterprise-metrics"
}
},
"version": "bb39488d030dd783ac8ceaa1ff936be14be993f0",
"version": "3219da57b18acbade37caaa42895c68f477cef95",
"sum": "hi2ZpHKl7qWXmSZ46sAycjWEQK6oGsoECuDKQT1dA+k="
},
{
@@ -28,7 +28,7 @@
"subdir": "etcd-operator"
}
},
"version": "bb39488d030dd783ac8ceaa1ff936be14be993f0",
"version": "3219da57b18acbade37caaa42895c68f477cef95",
"sum": "duHm6wmUju5KHQurOe6dnXoKgl5gTUsfGplgbmAOsHw="
},
{
@@ -38,7 +38,7 @@
"subdir": "grafana"
}
},
"version": "bb39488d030dd783ac8ceaa1ff936be14be993f0",
"version": "3219da57b18acbade37caaa42895c68f477cef95",
"sum": "Y5nheroSOIwmE+djEVPq4OvvTxKenzdHhpEwaR3Ebjs="
},
{
@@ -48,7 +48,7 @@
"subdir": "jaeger-agent-mixin"
}
},
"version": "bb39488d030dd783ac8ceaa1ff936be14be993f0",
"version": "3219da57b18acbade37caaa42895c68f477cef95",
"sum": "nsukyr2SS8h97I2mxvBazXZp2fxu1i6eg+rKq3/NRwY="
},
{
@@ -58,7 +58,7 @@
"subdir": "ksonnet-util"
}
},
"version": "bb39488d030dd783ac8ceaa1ff936be14be993f0",
"version": "3219da57b18acbade37caaa42895c68f477cef95",
"sum": "/pkNOLhRqvQoPA0yYdUuJvpPHqhkCLauAUMD2ZHMIkE="
},
{
@@ -78,7 +78,7 @@
"subdir": "memcached"
}
},
"version": "bb39488d030dd783ac8ceaa1ff936be14be993f0",
"version": "3219da57b18acbade37caaa42895c68f477cef95",
"sum": "SWywAq4U0MRPMbASU0Ez8O9ArRNeoZzb75sEuReueow="
},
{
@@ -88,7 +88,7 @@
"subdir": "tanka-util"
}
},
"version": "bb39488d030dd783ac8ceaa1ff936be14be993f0",
"version": "3219da57b18acbade37caaa42895c68f477cef95",
"sum": "ShSIissXdvCy1izTCDZX6tY7qxCoepE5L+WJ52Hw7ZQ="
},
{
@@ -118,8 +118,8 @@
"subdir": "1.20"
}
},
"version": "4613c97af18622848e4ffac3c4a78a9349f1d716",
"sum": "KZac8iaWuW0tiN54OE9uE+Squ6SVnOHz6Wk2QgmPe3o="
"version": "21f1224e3d351cf85951221d91d015eef790ed48",
"sum": "Sj/Xxz4AvIDs3HI3uQ3TYOiAO2zcYs1veMBRFpPsc0Q="
}
],
"legacyImports": false
