mirror of https://github.com/grafana/loki
Tree: a99c73dd97
2023-03-16-new-query-limits
56quarters/vendor-updates
7139-json-properties-in-log-line-is-not-sorted
Alex3k-patch-1
Alex3k-patch-2
Alex3k-patch-3
Alex3k-patch-5
Alex3k-patch-6
add-10055-to-release-notes
add-10193-to-release-notes
add-10213-to-release-notes
add-10281-to-release-notes
add-10417-to-release-notes
add-12403-to-release-notes
add-9063-to-release-notes
add-9484-to-release-notes
add-9568-to-release-notes
add-9704-to-release-notes
add-9857-to-release-notes
add-bucket-name-to-objclient-metric
add-containerSecurityContext-to-statefulset-backend-sidecar
add-max-flushes-retries
add-page-count-to-dataobj-inspect
add-per-scope-limits
add-time-snap-middleware
add_metrics_namespace_setting
add_series_chunk_filter_test
add_vector_to_lokitool_tests
added-hints-to-try-explore-logs
adeverteuil-patch-1
aengusrooneygrafana-update-doc-pack-md
akhilanarayanan/dountilquorum
akhilanarayanan/query-escaping
akhilanarayanan/replace-do-with-dountilquorum2
andrewthomas92-patch-1
andrii/fix_default_value_for_sasl_auth
arrow-engine/stitch-store-and-engine
ashwanth/remove-unordered-writes-config
ashwanth/restructure-query-section
ashwanth/skip-tsdb-load-on-err
attempt-count-streams-per-query
auto-remove-unhealthy-distributors
auto-triager
automated-helm-chart-update/2023-02-01-05-30-47
automated-helm-chart-update/2023-04-05-19-46-39
automated-helm-chart-update/2023-04-24-20-56-21
automated-helm-chart-update/2023-04-24-22-40-04
automated-helm-chart-update/2023-09-07-18-09-02
automated-helm-chart-update/2023-09-14-16-23-44
automated-helm-chart-update/2023-10-16-14-20-07
automated-helm-chart-update/2023-10-18-10-10-52
automated-helm-chart-update/2023-10-18-13-14-43
automated-helm-chart-update/2024-01-24-16-05-59
automated-helm-chart-update/2024-04-08-19-24-50
backport-10090-to-k160
backport-10101-to-release-2.9.x
backport-10221-to-release-2.8.x
backport-10318-to-k163
backport-10687-to-release-2.9.x
backport-11251-to-k175
backport-11827-to-k186
backport-13116-to-release-3.2.x
backport-13116-to-release-3.3.x
backport-13225-to-main
backport-14221-to-release-3.2.x
backport-14780-to-release-3.2.x
backport-15483-to-release-3.3.x
backport-16045-to-k239
backport-16203-to-k242
backport-16954-to-main
backport-17054-to-k249
backport-8893-to-release-2.6.x
backport-8971-to-release-2.7.x
backport-9176-to-release-2.8.x
backport-9757-to-release-2.8.x
backport-9978-to-k158
backport-9978-to-k159
backport-b57d260dd
benclive/fix-mem-leak-in-iterator
benclive/fix-some-data-races
benton/loki-mixin-updates
benton/loki-mixin-v2
blockbuilder-timespan
blockscheduler-track-commits
bloom-compactor/debugging-issues-in-mergeBuilder
bound-parallelism-slicefor
buffered-kafka-reads
build-samples-based-on-num-chunks-size
callum-builder-basemap-lock
callum-explainer-hack
callum-hackathon-explainer
callum-iterator-arrow-record
callum-k136-jsonnet-fix
callum-lambda-promtail-test
callum-parallelize-first-last
callum-pipeline-sanitize-sm-values
callum-prob-step-eval
callum-quantile-inner-child
callum-query-limits-validation
callum-querylimit-pointers
callum-remove-epool
callum-ruler-local-warn
callum-s3-prefix-metric
callum-shard-last
callum-snappy-exp
callum-stream_limit-insights
callum-track-max-labels
charleskorn/stringlabels
chaudum/batch-log-enqueue-dequeue
chaudum/benchmark-reassign-queriers
chaudum/bloomfilter-e2e-parallel-requests
chaudum/bloomfilter-jsonnet
chaudum/bloomgateway-client-tracing
chaudum/bloomgateway-testing
chaudum/bloomstore-cache-test
chaudum/bloomstore-fetch-blocks
chaudum/bump-helm-4.4.3
chaudum/canary-actor
chaudum/chaudum/query-execution-pull-iterators
chaudum/chunk-compression-read-benchmark
chaudum/cleanup-ingester
chaudum/cmp-fix
chaudum/compactor-list-objects
chaudum/cri-config
chaudum/day-chunks-iter-test
chaudum/debug-skipped
chaudum/distributor-healthcheck
chaudum/dockerfmt
chaudum/fix-flaky-multitenant-e2e-test
chaudum/fix-max-query-range-limit
chaudum/fix-predicate-from-matcher
chaudum/fixed-size-memory-ringbuffer
chaudum/hackathon-analyze-pipelines
chaudum/hackathon-analyze-pipelines-v2
chaudum/hackathon-analyze-pipelines-v3
chaudum/helm-remove-image-override-for-gel
chaudum/improve-git-fetch-makefile
chaudum/improve-timestamp-parsing
chaudum/index-gateway-instrumentation-k204
chaudum/integration-test-startup-timeout
chaudum/k204-index-gateway
chaudum/linked-map
chaudum/literals
chaudum/local-index-query
chaudum/logcli-load-multiple-schemaconfig
chaudum/loki-query-engine-ui
chaudum/make-bloomfilter-task-cancelable
chaudum/metastore-caching
chaudum/native-docker-builds
chaudum/new-engine-sharding
chaudum/physical-plan-optimizer-visitor-pattern
chaudum/querier-worker-cpu-affinity
chaudum/query-execution
chaudum/query-executor-4
chaudum/query-skip-factor
chaudum/rewrite-runtime-config
chaudum/seek-panic
chaudum/shard-by-sections
chaudum/syslog-udp-cleanup-idle-streams
check-inverse-postings
cherrypick-9484-k151
chunk-inspect-read-corrupt
chunk-query
chunks-inspect-v4-read-corrupt
chunks_compaction_research
chunkv5
cle_updates
cleanup-campsite/removing-deprecations
cleanup-migrate
codeowners-mixins-20240925
context-cause-usage
correct-kafka-metric-names
correctly-propagates-ctx
custom-headers
dannykopping/groupcache-instrument
dannykopping/memcached-slab-allocator
dannykopping/remove-cache-stats
danstadler-pdx-patch-1
danstadler-pdx-patch-2
data-race-fix-01
dataobj
dataobj-compression-ratio-and-final-size
dataobj-comsumer-metastore-orig
dataobj-log-batches
dataobj-logs-sort
dataobj-logs-sortorder
dataobj-querier-logger
dataobj-reader-stats
dataobj-store-sort-order
debug-bloomgateway
dedup-only-partitions
dependabot/go_modules/github.com/containerd/containerd/v2-2.0.5
dependabot/go_modules/operator/api/loki/golang.org/x/net-0.38.0
deprecatable-metrics-example
deps-update/main-cloud.google.comgostorage
deps-update/main-docker.iografanaloki
deps-update/main-github.comapachearrow-gov18
deps-update/main-github.cominfluxdatatelegraf
deps-update/main-github.comprometheuscommon
deps-update/main-github.comprometheusprometheus
deps-update/main-github.comtwmbfranz-go
deps-update/main-go-github.com-containerd-containerd-v2-vulnerability
deps-update/main-go-golang.org-x-net-vulnerability
deps-update/main-go.opentelemetry.iocollectorpdata
deps-update/main-google.golang.orgapi
deps-update/main-google.golang.orggrpc
deps-update/release-2.9.x-go-golang.org-x-net-vulnerability
deps-update/release-3.3.x-go-golang.org-x-net-vulnerability
deps-update/release-3.4.x-go-golang.org-x-net-vulnerability
deps-update/release-3.5.x-go-github.com-containerd-containerd-v2-vulnerability
deps-update/release-3.5.x-go-golang.org-x-net-vulnerability
detected-labels-add-limits-param
detected-labels-from-store
detected-labels-minor-enhancements
dev-rel-workshop
dfinnegan-fgh-patch-1
digitalemil-patch-1
digitalemil-patch-2
digitalemil-patch-3
digitalemil-patch-4
dimitarvdimitrov-patch-1
distributed-helm-chart
distributed-helm-demo
distributors-exp-avg
do-not-retry-enforced-labels-error
do-until-quorom-wip
doanbutar-patch-1
doanbutar-patch-2
docs-ipv6
docs-logql
docs-nvdh-gcp-helm
dodson/admonitions
dont-log-every-indexset-call-
ej25a-patch-1
emit-events-without-debuggnig
enable-hedging-on-ingester-requests
enable-limitedpusherrorslogging-by-default
enable-stream-sharding
enforce-sharding-of-approx-topk-queries
exceeds-rate-limit-check
explore-logs-fallback-query-path
faster-cleanupexpired
faster-truncate-log-lines
fcjack/backport-dataobj-metrics
fcjack/ci-test
fcjack/image-workflows
feat/drain-format
feat/pattern-pattern-mining
feat/syslog-rfc3164-defaultyear
feat/usage-tracker
fix-2.8-references
fix-headers
fix-helm-enterprise-values
fix-helmchart
fix-igw-job
fix-image-tag-script
fix-legacy-panels
fix-orphan-spans
fix-promtail-cves
fix-release-lib-shellcheck
fix/pattern-merge
fix_more_dashboards
fix_windowsserver_version
fmt-jsonnet-fix
force-loki-helm-publish
get-marked-for-deletions
gh-action-labeler-fix
gh-readonly-queue/main/pr-11793-215b5fd2fd71574e454529b1b620a295f1323dac
grafana-dylan-patch-1
grobinson/failover-to-other-zones
grobinson/k251-disable-autocommit
grobinson/k251-disable-writing-metadata
grobinson/kafka-client-v2
grobinson/use-new-evictor
groupcache
guard-againts-non-scheduler-request
guard-ingester-detected-field-errors
hackathon-2023-08-events-in-graphite-proxy
hackathon/demo
hackathon/hackathon-2023-12-arrow-engine
handle-errors-per-category
hedge-index-gateway
hedge-index-gateway-220
helm-5.47.3
helm-5.48
helm-chart-tagged-6.20.0
helm-chart-tagged-6.26.0
helm-chart-tagged-6.27.0
helm-chart-tagged-6.28.0
helm-chart-tagged-6.30.0
helm-chart-weekly-6.24.0-weekly.233
helm-chart-weekly-6.25.0-weekly.234
helm-chart-weekly-6.25.0-weekly.235
helm-chart-weekly-6.25.0-weekly.236
helm-chart-weekly-6.25.0-weekly.237
helm-chart-weekly-6.26.0
helm-chart-weekly-6.26.0-weekly.238
helm-chart-weekly-6.26.0-weekly.239
helm-chart-weekly-6.26.0-weekly.240
helm-chart-weekly-6.26.0-weekly.241
helm-chart-weekly-6.28.0-weekly.242
helm-chart-weekly-6.28.0-weekly.243
helm-chart-weekly-6.28.0-weekly.244
helm-chart-weekly-6.29.0-weekly.245
helm-chart-weekly-6.29.0-weekly.246
helm-chart-weekly-6.29.0-weekly.247
helm-chart-weekly-6.30.0
helm-chart-weekly-6.31.0
helm-loki-values-backend-target
ignore-yaml-errors
improve-cleanup-stats
improve-distributor-latency
index-gateways/reduce-goroutines
index-stats
ingest-pipelines
inline-tsdb-on-cache
integrate-laser
intentional-failure
is-this-qfs-cure
jdb/2022-10-enterprise-logs-content-reuse
jdb/2023-03-update-doc.mk
jdb/2025-05/add-docs-license
jsonnet-update/2023-01-31-10-09-02
k100
k101
k102
k103
k104
k105
k106
k107
k108
k109
k110
k111
k112
k113
k114
k115
k116
k117
k118
k119
k12
k120
k121
k122
k123
k124
k125
k126
k127
k128
k129
k13
k130
k131
k131-no-validate-matchers-labels
k132
k133
k135
k135-sharding-hotfix
k136
k137
k138
k139
k14
k140
k141
k142
k143
k144
k145
k146
k146-with-chunk-logging
k147
k148
k149
k15
k150
k150-merge-itr-fix
k151
k152
k153
k154
k155
k156
k157
k158
k159
k16
k160
k161
k162
k163
k164
k165
k166
k167
k168
k168-ewelch-concurrency-limits
k169
k17
k170
k171
k171-with-retry
k172
k173
k174
k174-fixes2
k175
k176
k177
k178
k179
k18
k180
k181
k182
k183
k183-quantile-patch
k184
k185
k185-fix-previous-tsdb
k186
k187
k188
k189
k19
k190
k191
k192
k193
k194
k195
k195-backup
k196
k197
k198
k199
k199-debug
k20
k200
k201
k202
k203
k203-with-samples
k204
k204-separate-download
k205
k205-with-samples
k206
k207
k207-ingester-profiling-2
k208
k209
k209-ewelch-idx-gateway-hedging
k21
k210
k210-ewelch-idx-gateway-hedge
k210-ewelch-shard-limited
k211
k211-ewelch-congestion-control
k211-ewelch-datasample
k211-ewelch-test-frontend-changes
k212
k213
k213-ewelch
k214
k215
k216
k217
k217-alloy-v1.7-fork
k217-without-promlog
k218
k219
k22
k220
k220-index-sync
k220-move-detected-fields-logic-to-qf
k220-with-detected-fields-guard
k221
k221-index-sync-fixes
k221-with-stream-logging
k222
k222-shard-volume-queries
k228
k229
k23
k230
k231
k232
k233
k234
k235
k236
k236-with-agg-metric-payload-fix
k237
k238
k239
k24
k240
k241
k242
k243
k244
k245
k246
k246-with-per-tenant-ruler-wal-replay
k247
k248
k248-distributor-lvl-detection
k248-level-detection-debugging
k248-levels-as-index
k249
k25
k250
k251
k252
k253
k254
k255
k256
k26
k27
k28
k29
k30
k31
k32
k33
k34
k35
k36
k37
k38
k39
k40
k41
k42
k43
k44
k45
k46
k47
k48
k49
k50
k51
k52
k53
k54
k55
k56
k57
k58
k59
k60
k61
k62
k63
k64
k65
k66
k67
k68
k69
k70
k71
k72
k73
k74
k75
k76
k77
k78
k79
k80
k81
k82
k83
k84
k85
k86
k87
k88
k89
k90
k91
k92
k93
k94
k95
k96
k97
k98
k99
kadjoudi-patch-1
kafka-usage-wip
kafka-wal-block
karsten/dedup-overlapping-chunks
karsten/first-over-time
karsten/fix-grpc-error
karsten/protos-query-request
karsten/test-ops
kaviraj/changelog-logql-bug
kaviraj/memcached-backup-tmp
kaviraj/single-gomod
kavirajk/backport-10319-release-2.9.x
kavirajk/bug-fix-memcached-multi-fetch
kavirajk/cache-instant-queries
kavirajk/cache-test
kavirajk/experiment-instant-query-bug
kavirajk/fix-engine-literalevaluator
kavirajk/linefilte-path-on-top-of-k196
kavirajk/memcache-cancellation-bug-fix
kavirajk/metadata-cache-with-k183
kavirajk/promtail-use-inotify
kavirajk/script-to-update-example
kavirajk/update-go-version-gomod
kavirajk/upgrade-prometheus-0.46
kavirajk/url-encode-aws-url
label-filter-predicate-pushdown
lambda-promtail-generic-s3
leizor/latest-produce-ts
limit-streams-chunks-subquery
logcli_object_store_failure_logging
loki-bench-tool
loki-mixin-parallel-read-path
loki-streaming-query-api
lru-symbols-cache
lru-symbols-cache-w-conn-limits
main
map-streams-to-ingestion-scope
marinnedea-patch-1
mdsgrafana-patch-1
mess-with-multiplegrpcconfigs
meta-monitoring-v2-p2
metadata-decoder-corrections
metastore-bootstrap
metastore-experiments
more-date-functions
more-details-tracing-for-distributors
more-release-testing
multi-zone-topology-support
new-index-spans
no-extents-no-problem
nvdh/query
operator-loki-v3
otlp-severity-detection
owen-d/fix/nil-ptr-due-to-empty-resp
pablo/lambda-promtail-event-bridge-setup
pablo/promtail-wal-support
pablo/refactor-client-manager
pablo/refactor-http-targets
panic-if-builder-fails-to-init
panic_query_frontend_test
parser-backtick-regexp-error
parser-hints/bug
paul1r/corrupted_wal_repair
paul1r/republish_lambda_promtail
persist-patterns-as-aggs
pooling-decode-buffers-dataobj
poyzannur/add-pdb-idx-gws
poyzannur/fix-blooms-checksum-bug
poyzannur/fix-compactor-starting-indexshipper-in-RW-mode
poyzannur/fix-errors-introduced-by-10748
poyzannur/fix-flaky-test
pr_11086
prepare-2.8-changelog
promtail-go-gelf
ptodev/reset-promtail-metrics-archive-23-april-2024
ptodev/update-win-eventlog
pub-sub-cancel
query-limits-validation
query-splitting-api
query-timestamp-validation
rbrady/16330-fix-rolebinding-provisioner
rbrady/17614-update-provisioner
read-corrupt-blocks
read-path-improvement-wal
reenable-ipv6-for-memberlist
refactor-extractors-multiple-samples-2
release-2.0.1
release-2.2
release-2.2.1
release-2.3
release-2.4
release-2.5.x
release-2.6.x
release-2.7.x
release-2.8.x
release-2.8.x-fix-failing-test
release-2.9.x
release-3.0.x
release-3.1.x
release-3.2.x
release-3.3.x
release-3.4.x
release-3.5.x
release-notes-appender
release-please--branches--add-major-release-workflow
release-please--branches--fix-vuln-scanning
release-please--branches--k195
release-please--branches--k196
release-please--branches--k197
release-please--branches--k198
release-please--branches--k199
release-please--branches--k200
release-please--branches--k201
release-please--branches--k202
release-please--branches--k203
release-please--branches--k204
release-please--branches--k205
release-please--branches--k206
release-please--branches--k208
release-please--branches--k209
release-please--branches--k210
release-please--branches--k211
release-please--branches--k212
release-please--branches--k215
release-please--branches--k216
release-please--branches--k221
release-please--branches--k222
release-please--branches--k228
release-please--branches--k234
release-please--branches--k235
release-please--branches--k236
release-please--branches--k237
release-please--branches--k238
release-please--branches--k239
release-please--branches--k240
release-please--branches--k241
release-please--branches--k242
release-please--branches--k243
release-please--branches--k244
release-please--branches--k246
release-please--branches--k247
release-please--branches--k249
release-please--branches--k250
release-please--branches--k251
release-please--branches--k253
release-please--branches--k254
release-please--branches--k255
release-please--branches--k256
release-please--branches--main
release-please--branches--main--components--operator
release-please--branches--release-3.0.x
release-please--branches--release-3.1.x
release-please--branches--release-3.2.x
release-please--branches--release-3.3.x
release-please--branches--release-3.4.x
release-please--branches--release-3.5.x
release-please--branches--update-release-pipeline
remove-early-eof
remove-override
remove_lokitool_binary
retry-limits-middleware
reuse-server-index
revert-15950-deps-update/main-github.comprometheusprometheus
revert-7179-azure_service_principal_auth
revert-8662
revert-map-pooling
rgnvldr-patch-1
rk/update-helm-docs
salvacorts/2.9.12/fix-vulns
salvacorts/backport-3.4.x
salvacorts/compator-deletes-acache
samu6851-patch-1
samu6851-patch-2
scope-usage
shantanu/add-to-release-notes
shantanu/fix-scalar-timestamp
shantanu/remove-ruler-configs
shard-parsing
shard-volume-queries
shipper/skip-notready-on-sync
simulate-retention-endpoint
singleflight
snyk-monitor-workflow
sp/logged_trace_id
split-rules-into-more-groups
split-tests-by-package
split-with-header
steven_2_8_docs
stop-using-retry-flag
store-aggregated-metrics-in-loki
store-aggregated-metrics-in-loki-3
stream-generator-split-send-loops
stripe-lock-ctx-cancelation
structured-metadata-indexing
svennergr/structured-metadata-api
tch/bestBranchEvverrrrrrrrrr
temp-fluentbit-change
temp-proto-fix
test-docker-plugin-publish
test-failcheck
test-gateway
test-helm-release
test-release
test_PR
test_branch
testing-drain-params
testing-drain-params-2
tpatterson/cache-json-label-values
tpatterson/chunk-iterator
tpatterson/expose-partition-ring
tpatterson/generate-drone-yaml
tpatterson/label-matcher-optimizations
tpatterson/reporder-filters
tpatterson/revert-async-store-change
tpatterson/size-based-compaction-with-latest
tpatterson/space-compaction
tpatterson/stats-estimate
trace-labels-in-distributor
transform_mixin
trevorwhitney/detect-only-no-parser
trevorwhitney/how-to-make-a-pr
trevorwhitney/index-stats-perf-improvement
trevorwhitney/logcli-client-test
trevorwhitney/refactor-nix-folder
trevorwhitney/respect-tsdb-version-in-compactor
trevorwhitney/series-volume-fix
trevorwhitney/upgrade-dskit
trevorwhitney/use-tsdb-version-from-schema-config
trevorwhitney/volume-memory-fix-k160
trigger-ci
try-new-span-chagnes
try-reverting-pr9404
tsdb-benchmark-setup
tulmah-patch-1
undelete
update-docs-Running-Promtail-on-AWS-EC2-tutorial
updateCHANGELOG
upgrade-golang-jwt-2.9
upgrade33
usage-poc-combined
use-cfg-consumer-group
use-worker-pool-for-kafka-push
use-worker-pool-kafka-push
use_constant_for_loki_prefix
use_go_120_6
validate-retention-api
wip-stringlabels
wrap-downloading-file-errors
x160-ewelch-cache
x161-ewelch-l2-cache
x162-ewelch-memcached-connect-timeout
yinkagr-patch-1
2.8.3
helm-loki-3.0.0
helm-loki-3.0.1
helm-loki-3.0.2
helm-loki-3.0.3
helm-loki-3.0.4
helm-loki-3.0.5
helm-loki-3.0.6
helm-loki-3.0.7
helm-loki-3.0.8
helm-loki-3.0.9
helm-loki-3.1.0
helm-loki-3.10.0
helm-loki-3.2.0
helm-loki-3.2.1
helm-loki-3.2.2
helm-loki-3.3.0
helm-loki-3.3.1
helm-loki-3.3.2
helm-loki-3.3.3
helm-loki-3.3.4
helm-loki-3.4.0
helm-loki-3.4.1
helm-loki-3.4.2
helm-loki-3.4.3
helm-loki-3.5.0
helm-loki-3.6.0
helm-loki-3.6.1
helm-loki-3.7.0
helm-loki-3.8.0
helm-loki-3.8.1
helm-loki-3.8.2
helm-loki-3.9.0
helm-loki-4.0.0
helm-loki-4.1.0
helm-loki-4.10.0
helm-loki-4.2.0
helm-loki-4.3.0
helm-loki-4.4.0
helm-loki-4.4.1
helm-loki-4.4.2
helm-loki-4.5.0
helm-loki-4.5.1
helm-loki-4.6.0
helm-loki-4.6.1
helm-loki-4.6.2
helm-loki-4.7.0
helm-loki-4.8.0
helm-loki-4.9.0
helm-loki-5.0.0
helm-loki-5.1.0
helm-loki-5.10.0
helm-loki-5.11.0
helm-loki-5.12.0
helm-loki-5.13.0
helm-loki-5.14.0
helm-loki-5.14.1
helm-loki-5.15.0
helm-loki-5.17.0
helm-loki-5.18.0
helm-loki-5.18.1
helm-loki-5.19.0
helm-loki-5.2.0
helm-loki-5.20.0
helm-loki-5.21.0
helm-loki-5.22.0
helm-loki-5.22.1
helm-loki-5.22.2
helm-loki-5.23.0
helm-loki-5.23.1
helm-loki-5.24.0
helm-loki-5.25.0
helm-loki-5.26.0
helm-loki-5.27.0
helm-loki-5.28.0
helm-loki-5.29.0
helm-loki-5.3.0
helm-loki-5.3.1
helm-loki-5.30.0
helm-loki-5.31.0
helm-loki-5.32.0
helm-loki-5.33.0
helm-loki-5.34.0
helm-loki-5.35.0
helm-loki-5.36.0
helm-loki-5.36.1
helm-loki-5.36.2
helm-loki-5.36.3
helm-loki-5.37.0
helm-loki-5.38.0
helm-loki-5.39.0
helm-loki-5.4.0
helm-loki-5.40.1
helm-loki-5.41.0
helm-loki-5.41.1
helm-loki-5.41.2
helm-loki-5.41.3
helm-loki-5.41.4
helm-loki-5.41.5
helm-loki-5.41.6
helm-loki-5.41.7
helm-loki-5.41.8
helm-loki-5.41.9-distributed
helm-loki-5.41.9-distributed-rc2
helm-loki-5.42.0
helm-loki-5.42.1
helm-loki-5.42.2
helm-loki-5.42.3
helm-loki-5.43.0
helm-loki-5.43.1
helm-loki-5.43.2
helm-loki-5.43.3
helm-loki-5.43.4
helm-loki-5.43.5
helm-loki-5.43.6
helm-loki-5.43.7
helm-loki-5.44.0
helm-loki-5.44.1
helm-loki-5.44.2
helm-loki-5.44.3
helm-loki-5.44.4
helm-loki-5.45.0
helm-loki-5.46.0
helm-loki-5.47.0
helm-loki-5.47.1
helm-loki-5.47.2
helm-loki-5.48.0
helm-loki-5.5.0
helm-loki-5.5.1
helm-loki-5.5.10
helm-loki-5.5.11
helm-loki-5.5.12
helm-loki-5.5.2
helm-loki-5.5.3
helm-loki-5.5.4
helm-loki-5.5.5
helm-loki-5.5.6
helm-loki-5.5.7
helm-loki-5.5.8
helm-loki-5.5.9
helm-loki-5.6.0
helm-loki-5.6.1
helm-loki-5.6.2
helm-loki-5.6.3
helm-loki-5.6.4
helm-loki-5.7.1
helm-loki-5.8.0
helm-loki-5.8.1
helm-loki-5.8.10
helm-loki-5.8.11
helm-loki-5.8.2
helm-loki-5.8.3
helm-loki-5.8.4
helm-loki-5.8.5
helm-loki-5.8.6
helm-loki-5.8.7
helm-loki-5.8.8
helm-loki-5.8.9
helm-loki-5.9.0
helm-loki-5.9.1
helm-loki-5.9.2
helm-loki-6.0.0
helm-loki-6.1.0
helm-loki-6.10.0
helm-loki-6.10.1
helm-loki-6.10.2
helm-loki-6.11.0
helm-loki-6.12.0
helm-loki-6.15.0
helm-loki-6.16.0
helm-loki-6.18.0
helm-loki-6.19.0
helm-loki-6.19.0-weekly.227
helm-loki-6.2.0
helm-loki-6.2.1
helm-loki-6.2.2
helm-loki-6.2.3
helm-loki-6.2.4
helm-loki-6.2.5
helm-loki-6.20.0
helm-loki-6.20.0-weekly.229
helm-loki-6.21.0
helm-loki-6.22.0
helm-loki-6.22.0-weekly.230
helm-loki-6.23.0
helm-loki-6.23.0-weekly.231
helm-loki-6.24.0
helm-loki-6.24.0-weekly.232
helm-loki-6.24.1
helm-loki-6.25.0
helm-loki-6.25.1
helm-loki-6.26.0
helm-loki-6.27.0
helm-loki-6.28.0
helm-loki-6.29.0
helm-loki-6.3.0
helm-loki-6.3.1
helm-loki-6.3.2
helm-loki-6.3.3
helm-loki-6.3.4
helm-loki-6.30.0
helm-loki-6.30.1
helm-loki-6.4.0
helm-loki-6.4.1
helm-loki-6.4.2
helm-loki-6.5.0
helm-loki-6.5.1
helm-loki-6.5.2
helm-loki-6.6.0
helm-loki-6.6.1
helm-loki-6.6.2
helm-loki-6.6.3
helm-loki-6.6.4
helm-loki-6.6.5
helm-loki-6.6.6
helm-loki-6.7.0
helm-loki-6.7.1
helm-loki-6.7.2
helm-loki-6.7.3
helm-loki-6.7.4
helm-loki-6.8.0
helm-loki-6.9.0
operator/v0.4.0
operator/v0.5.0
operator/v0.6.0
operator/v0.6.1
operator/v0.6.2
operator/v0.7.0
operator/v0.7.1
operator/v0.8.0
pkg/logql/syntax/v0.0.1
v0.1.0
v0.2.0
v0.3.0
v0.4.0
v1.0.0
v1.0.1
v1.0.2
v1.1.0
v1.2.0
v1.3.0
v1.4.0
v1.4.1
v1.5.0
v1.6.0
v1.6.1
v2.0.0
v2.0.1
v2.1.0
v2.2.0
v2.2.1
v2.3.0
v2.4.0
v2.4.1
v2.4.2
v2.5.0
v2.6.0
v2.6.1
v2.7.0
v2.7.1
v2.7.2
v2.7.3
v2.7.4
v2.7.5
v2.7.6
v2.7.7
v2.8.0
v2.8.1
v2.8.10
v2.8.11
v2.8.2
v2.8.3
v2.8.4
v2.8.5
v2.8.6
v2.8.7
v2.8.8
v2.8.9
v2.9.0
v2.9.1
v2.9.10
v2.9.11
v2.9.12
v2.9.13
v2.9.14
v2.9.2
v2.9.3
v2.9.4
v2.9.5
v2.9.6
v2.9.7
v2.9.8
v2.9.9
v3.0.0
v3.0.1
v3.1.0
v3.1.1
v3.1.2
v3.2.0
v3.2.1
v3.2.2
v3.3.0
v3.3.1
v3.3.2
v3.3.3
v3.3.4
v3.4.0
v3.4.1
v3.4.2
v3.4.3
v3.5.0
v3.5.1
4929 Commits (a99c73dd97bf55d912d391339a7b82acccabf915)
SHA1 | Message | Date
---|---|---
0030cafb16 | Remove dedupe cache from operations documentation. (#8957) | 2 years ago
1bcf683513 | Expose optional label matcher for label values handler (#8824) | 2 years ago
b97525a448 | [CI/CD] Update yaml file `./production/helm/loki/values.yaml` (+1 other) (#8955) | 2 years ago
**Here is a summary of the updates contained in this PR:** *** Update attribute `$.enterprise.version` in yaml file `./production/helm/loki/values.yaml` to the following value: `v1.6.3` *** Bump version of Helm Chart Add changelog entry to `./production/helm/loki/CHANGELOG.md` Re-generate docs |
28a7733ede | Rename config for enforcing a minimum number of label matchers (#8940) | 2 years ago
**What this PR does / why we need it**: Followup PR for https://github.com/grafana/loki/pull/8918 renaming config. See https://github.com/grafana/loki/pull/8918/files#r1151820792. **Which issue(s) this PR fixes**: Fixes https://github.com/grafana/loki-private/issues/699 **Special notes for your reviewer**: **Checklist** - [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [x] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` --------- Co-authored-by: Dylan Guedes <djmgguedes@gmail.com> |
abc0fd26d2 | Enforce per tenant queue size (#8947) | 2 years ago
**What this PR does / why we need it**: Prior to the changes in https://github.com/grafana/loki/pull/8752 the max queue size per tenant was enforced by the size of the buffered channel to which a request was enqueued. However, since we have hierarchical queues, every sub-queue has the same channel capacity as the root (tenant) queue. Therefore the total queue size per tenant needs to be tracked separately, so requests can be rejected when the max queue size is reached. Signed-off-by: Christian Haudum <christian.haudum@gmail.com> |
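To illustrate #8947 above: the fix is to count a tenant's queued requests across all of its sub-queues and reject new work once the per-tenant maximum is hit, instead of relying on any single channel's capacity. A minimal sketch, with illustrative names rather than Loki's actual scheduler types:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errQueueFull = errors.New("max queue size reached")

// tenantQueue tracks the total number of queued requests across all of a
// tenant's sub-queues, so the per-tenant limit holds even though every
// sub-queue could buffer as much as the root queue on its own.
type tenantQueue struct {
	mu      sync.Mutex
	maxSize int
	length  int                 // total items across all sub-queues
	subs    map[string][]string // sub-queue name -> queued requests
}

func newTenantQueue(maxSize int) *tenantQueue {
	return &tenantQueue{maxSize: maxSize, subs: map[string][]string{}}
}

func (q *tenantQueue) enqueue(sub, req string) error {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.length >= q.maxSize {
		return errQueueFull
	}
	q.subs[sub] = append(q.subs[sub], req)
	q.length++
	return nil
}

func main() {
	q := newTenantQueue(2)
	fmt.Println(q.enqueue("actor-a", "req1")) // <nil>
	fmt.Println(q.enqueue("actor-b", "req2")) // <nil>
	fmt.Println(q.enqueue("actor-a", "req3")) // max queue size reached
}
```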
0adedfa689 | Revert high cardinality metric in scheduler (#8946) | 2 years ago
**What this PR does / why we need it**:
The metric was useful for initial testing of the new scheduler queue implementation but yields high-cardinality metrics, which is not desired. Also, the metric does not add additional value beyond the initial testing phase.
**Special notes for your reviewer**:
The metric was introduced with commit
89996c9714 | Promtail WAL support: Implement reader side (#8302) | 2 years ago
**What this PR does / why we need it**: This PR is the second in a series that will implement WAL support into Promtail. The main objective of this PR is implement the reader side of WAL. That is, using the previously instantiated `client.Manager` to somehow read entries from the WAL instead of receiving them from a channel. **Pending work**: - [x] Implement mechanism so that `WriteTo` targets of each `wal.Watcher` can cleanup un-used series - [x] Write a full fledged test-case that tests a WAL enabled Promtail (maybe in a follow up) **Which issue(s) this PR fixes**: Part of #8197 **Special notes for your reviewer**: The line count might be big since this PR implements two fully fledged test cases of Promtail with WAL enabled. **Checklist** - [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [ ] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` |
b76be36e0c | operator: Fix makefile target operatorhub (#8930) | 2 years ago
43ae1db14c | Give examples for all cache configurations. (#8832) | 2 years ago
**What this PR does / why we need it**: #8373 documented how to setup Memcached for caching. The documentation is missing some information on a few caches. These are added in this PR. **Checklist** - [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [x] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` --------- Co-authored-by: J Stickler <julie.stickler@grafana.com> |
d421feafe6 | Log when returning query-time limit (#8938) | 2 years ago
**What this PR does / why we need it**:
At https://github.com/grafana/loki/pull/8727 we introduced various
limits that can now be configured at query time. We always compare the
value of the limit configured at query time with the value set on the
overrides for the tenant or the default if not configured (aka
original); applying the most restrictive one.
If the most restrictive is the original value or the limit is not
configured at query-time, we print the following debug message:
038a7722d6 | Ask users to add node types to the sizing tool. (#8834) | 2 years ago
**What this PR does / why we need it**: We want to invite users to make contributions to the node types of the sizing tool since we don't have the capacity to do so. **Checklist** - [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [x] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` --------- Co-authored-by: J Stickler <julie.stickler@grafana.com> |
ee045312a9 | Automatically Reorder Pipeline Filters (#8914) | 2 years ago
This PR moves Line Filters to be as early in a log pipeline as possible. For example: `{app="foo"} | logfmt |="some stuff"` becomes `{app="foo"} |="some stuff" | logfmt` Any LineFilter after a `LineFormat` stage will be moved to directly after the nearest `LineFormat` stage benchmarks: ``` goos: linux goarch: amd64 pkg: github.com/grafana/loki/pkg/logql/syntax cpu: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz │ reorder_old.txt │ reorder_new.txt │ │ sec/op │ sec/op vs base │ ReorderedPipeline-8 2104.5n ± 3% 173.9n ± 2% -91.74% (p=0.000 n=10) │ reorder_old.txt │ reorder_new.txt │ │ B/op │ B/op vs base │ ReorderedPipeline-8 336.0 ± 0% 0.0 ± 0% -100.00% (p=0.000 n=10) │ reorder_old.txt │ reorder_new.txt │ │ allocs/op │ allocs/op vs base │ ReorderedPipeline-8 16.00 ± 0% 0.00 ± 0% -100.00% (p=0.000 n=10) ``` |
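A rough sketch of the reordering rule in #8914 above, assuming a simplified stage representation: line filters are moved as early as possible, but never across a `line_format` stage, since that stage rewrites the line the filter would run on.

```go
package main

import "fmt"

// stage is a simplified stand-in for a LogQL pipeline stage.
type stage struct {
	kind string // "line_filter", "parser", "line_format", ...
	expr string
}

// reorder moves every line filter as early as possible: to the front of the
// pipeline, or directly after the nearest preceding line_format stage.
func reorder(stages []stage) []stage {
	out := make([]stage, 0, len(stages))
	barrier := 0 // insertion point: start of pipeline or just after the last line_format
	for _, s := range stages {
		switch s.kind {
		case "line_filter":
			out = append(out, stage{})           // grow by one
			copy(out[barrier+1:], out[barrier:]) // shift the tail right
			out[barrier] = s                     // insert the filter at the barrier
			barrier++
		case "line_format":
			out = append(out, s)
			barrier = len(out)
		default:
			out = append(out, s)
		}
	}
	return out
}

func main() {
	// {app="foo"} | logfmt |= "some stuff"  becomes  {app="foo"} |= "some stuff" | logfmt
	p := []stage{{"parser", "logfmt"}, {"line_filter", `|= "some stuff"`}}
	fmt.Println(reorder(p))
}
```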
782ffca409 | operator: Add missing replaces directives for release v0.2.0 (#8912) | 2 years ago
45775c82f7 | Implement `RequiredNumberLabels` query limit (#8918) | 2 years ago
**What this PR does / why we need it**: As pointed out in https://github.com/grafana/loki/pull/8851, some queries can impose a great workload on a cluster by selecting too many streams. Similarly to the `RequiredLabels` limit introduced at https://github.com/grafana/loki/pull/8851, here we add a new limit `RequiredNumberLabels` to require queries to specify at least N label. For example, if the limit is set to 2, then the query should contain at least 2 label matchers. This limit can be configured per tenant and at query time.  **Which issue(s) this PR fixes**: Fixes https://github.com/grafana/loki-private/issues/699 **Special notes for your reviewer**: **Checklist** - [x] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [x] Documentation added - [x] Tests updated - [x] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` --------- Co-authored-by: Dylan Guedes <djmgguedes@gmail.com> |
c81975bd9c | operator: Fix calculator dockerfile (#8931) | 2 years ago
34486d4ba2 | Change mention of 2.7.3 to 2.7.5 in Nomad (#8934) | 2 years ago
**What this PR does / why we need it**: Missed this in the previous PR. **Checklist** - [X] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [ ] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` Signed-off-by: Michel Hollands <michel.hollands@grafana.com> |
ee69f2bd37 | Split index request in 24h intervals (#8909) | 2 years ago
**What this PR does / why we need it**: At https://github.com/grafana/loki/pull/8670, we applied a time split of 24h intervals to all index stats requests to enforce the `max_query_bytes_read` and `max_querier_bytes_read` limits. When the limit is surpassed, the following message gets displayed: As can be seen, the reported bytes read by the query are not the same as those reported by Grafana in the lower right corner of the query editor. This is because: 1. The index stats request for enforcing the limit is split in subqueries of 24h. The other index stats request is not time split. 2. When enforcing the limit, we are not displaying the bytes in powers of 2, but powers of 10 ([see here][2]). I.e. 1KB is 1000B vs 1KiB is 1024B. This PR adds the same logic to all index stats requests so we also time split by 24h intervals all requests that hit the Index Stats API endpoint. We also use powers of 2 instead of 10 on the message when enforcing `max_query_bytes_read` and `max_querier_bytes_read`. Note that the library we use under the hood to print the bytes rounds up and down to the nearest integer ([see][3]); that's why we see 16GiB compared to the 15.5GB in the Grafana query editor. **Which issue(s) this PR fixes**: Fixes https://github.com/grafana/loki/issues/8910 **Special notes for your reviewer**: - I refactored the `newQuerySizeLimiter` function and the rest of the _Tripperwares_ in `roundtrip.go` to reuse the new IndexStatsTripperware. So we configure the split-by-time middleware only once. **Checklist** - [x] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [x] Documentation added - [x] Tests updated - [x] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` [1]: https://grafana.com/docs/loki/latest/api/#index-stats [2]: https://github.com/grafana/loki/blob/main/pkg/querier/queryrange/limits.go#L367-L368 [3]: https://github.com/dustin/go-humanize/blob/master/bytes.go#L75-L78
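The 24h splitting applied to index stats requests in #8909 can be pictured with a small sketch; this version splits relative to the query start rather than aligning to the daily index period, so it is only an illustration of the idea:

```go
package main

import (
	"fmt"
	"time"
)

// splitByDay cuts [start, end) into sub-ranges of at most 24h, mirroring the
// idea of issuing one index-stats request per daily slice of the query range.
func splitByDay(start, end time.Time) [][2]time.Time {
	var out [][2]time.Time
	for cur := start; cur.Before(end); {
		next := cur.Add(24 * time.Hour)
		if next.After(end) {
			next = end
		}
		out = append(out, [2]time.Time{cur, next})
		cur = next
	}
	return out
}

func main() {
	start := time.Date(2023, 3, 20, 6, 0, 0, 0, time.UTC)
	end := start.Add(60 * time.Hour) // a 2.5 day query becomes three sub-requests
	for _, r := range splitByDay(start, end) {
		fmt.Println(r[0].Format(time.RFC3339), "->", r[1].Format(time.RFC3339))
	}
}
```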
f6a3300f87 | [CI/CD] Update yaml file `./production/helm/loki/Chart.yaml` (+1 other) (#8923) | 2 years ago
**Here is a summary of the updates contained in this PR:** *** Update attribute `$.appVersion` in yaml file `./production/helm/loki/Chart.yaml` to the following value: `2.7.5` *** Bump version of Helm Chart Add changelog entry to `./production/helm/loki/CHANGELOG.md` Re-generate docs Co-authored-by: Michel Hollands <42814411+MichelHollands@users.noreply.github.com> |
e7e752378f | Add release notes for v2.7.5 release (#8924) | 2 years ago
**What this PR does / why we need it**: This adds the release notes for v2.7.5 and v2.7.4. This last release was apparently not added last time. This will be backported to the release branch as well in a separate PR. **Checklist** - [X] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [X] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` Signed-off-by: Michel Hollands <michel.hollands@grafana.com> |
035f673e24 | Update Loki version to latest (#8926) | 2 years ago
**What this PR does / why we need it**: Update to v2.7.5 **Checklist** - [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [X] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` Signed-off-by: Michel Hollands <michel.hollands@grafana.com> |
6f2aa5fb68 | operator: Update LokiStack annotation on RulerConfig delete (#8911) | 2 years ago
ce14592686 | Update version used in production scripts (#8925) | 2 years ago
**What this PR does / why we need it**: Update to 2.7.5 **Checklist** - [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [ ] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` Signed-off-by: Michel Hollands <michel.hollands@grafana.com> |
163fd9d8af | Short circuit parsing when label matchers are present (#8890) | 2 years ago
This PR makes parsers aware of any downstream label-matcher stages at parse time. As labels are parsed, if one has a matcher, the matcher is checked at parse time. If the label does not match it's matcher, parsing is halted on that log line. **ex 1:** consider the log: `foo=1 bar=2 baz=3` And the query `{} | logfmt | bar=3` When `bar` is parsed it is immediately checked against it's matcher. The match fails so we the parser never spends time parsing the rest of the line. **ex 2:** consider the log: `foo=1 baz=3 bletch=4` And the query `{} | logfmt | bar=3` `bar` is never seen in the log so the whole line is parsed. **Benchmarks:** ``` │ parsers__old_2.txt │ parsers__new_3.txt │ │ sec/op │ sec/op vs base │ _Parser/json/inline_stages-8 3413.5n ± 5% 766.4n ± 4% -77.55% (p=0.000 n=10) _Parser/jsonParser-not_json_line/inline_stages-8 101.5n ± 6% 103.1n ± 8% ~ (p=0.645 n=10) _Parser/unpack/inline_stages-8 383.8n ± 4% 388.0n ± 9% ~ (p=0.954 n=10) _Parser/unpack-not_json_line/inline_stages-8 13.30n ± 2% 13.11n ± 1% ~ (p=0.247 n=10) _Parser/logfmt/inline_stages-8 2105.5n ± 16% 727.7n ± 4% -65.44% (p=0.000 n=10) _Parser/regex_greedy/inline_stages-8 4.220µ ± 4% 4.175µ ± 4% ~ (p=0.739 n=10) _Parser/regex_status_digits/inline_stages-8 319.8n ± 5% 326.4n ± 8% ~ (p=0.481 n=10) _Parser/pattern/inline_stages-8 185.2n ± 7% 154.2n ± 3% -16.74% (p=0.000 n=10) │ parsers__old_2.txt │ parsers__new_3.txt │ │ B/op │ B/op vs base │ _Parser/json/inline_stages-8 280.00 ± 0% 64.00 ± 0% -77.14% (p=0.000 n=10) _Parser/jsonParser-not_json_line/inline_stages-8 16.00 ± 0% 16.00 ± 0% ~ (p=1.000 n=10) _Parser/unpack/inline_stages-8 80.00 ± 0% 80.00 ± 0% ~ (p=1.000 n=10) _Parser/unpack-not_json_line/inline_stages-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=10) _Parser/logfmt/inline_stages-8 336.00 ± 0% 74.00 ± 0% -77.98% (p=0.000 n=10) _Parser/regex_greedy/inline_stages-8 193.0 ± 1% 192.0 ± 1% ~ (p=0.656 n=10) _Parser/regex_status_digits/inline_stages-8 51.00 ± 0% 51.00 ± 0% ~ (p=1.000 n=10) _Parser/pattern/inline_stages-8 35.000 ± 0% 3.000 ± 0% -91.43% (p=0.000 n=10) │ parsers__old_2.txt │ parsers__new_3.txt │ │ allocs/op │ allocs/op vs base │ _Parser/json/inline_stages-8 18.000 ± 0% 4.000 ± 0% -77.78% (p=0.000 n=10) _Parser/jsonParser-not_json_line/inline_stages-8 1.000 ± 0% 1.000 ± 0% ~ (p=1.000 n=10) _Parser/unpack/inline_stages-8 4.000 ± 0% 4.000 ± 0% ~ (p=1.000 n=10) _Parser/unpack-not_json_line/inline_stages-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=10) _Parser/logfmt/inline_stages-8 16.000 ± 0% 6.000 ± 0% -62.50% (p=0.000 n=10) _Parser/regex_greedy/inline_stages-8 2.000 ± 0% 2.000 ± 0% ~ (p=1.000 n=10) _Parser/regex_status_digits/inline_stages-8 2.000 ± 0% 2.000 ± 0% ~ (p=1.000 n=10) _Parser/pattern/inline_stages-8 2.000 ± 0% 1.000 ± 0% -50.00% (p=0.000 n=10) ``` --------- Co-authored-by: Owen Diehl <ow.diehl@gmail.com> |
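A simplified sketch of the short-circuit idea in #8890, assuming whitespace-separated `key=value` pairs and plain equality matchers (real logfmt handles quoting, and real matchers also cover regex and inequality): as soon as a parsed label fails a downstream matcher, the rest of the line is skipped.

```go
package main

import (
	"fmt"
	"strings"
)

// parseLogfmtWithMatchers parses key=value pairs left to right and stops as
// soon as a parsed label fails one of the downstream equality matchers, so the
// remainder of the line is never parsed.
func parseLogfmtWithMatchers(line string, matchers map[string]string) (map[string]string, bool) {
	out := map[string]string{}
	for _, field := range strings.Fields(line) {
		k, v, ok := strings.Cut(field, "=")
		if !ok {
			continue
		}
		if want, checked := matchers[k]; checked && want != v {
			return nil, false // short circuit: this line can never match
		}
		out[k] = v
	}
	return out, true
}

func main() {
	matchers := map[string]string{"bar": "3"} // from `| bar="3"` after the parser stage
	fmt.Println(parseLogfmtWithMatchers("foo=1 bar=2 baz=3", matchers))    // stops at bar=2
	fmt.Println(parseLogfmtWithMatchers("foo=1 baz=3 bletch=4", matchers)) // bar never appears, whole line parsed
}
```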
b3cce9e84b | Update changelog for 2.7.5 release (#8919) | 2 years ago
**What this PR does / why we need it**: Update changelog. The changes related to the 2.7.5 release will be backported to the release-2.7.x branch in another PR. **Special notes for your reviewer**: **Checklist** - [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [ ] Documentation added - [ ] Tests updated - [X] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` Signed-off-by: Michel Hollands <michel.hollands@grafana.com>
ffb961c439 | feat(storage): add support for IBM cloud object storage as storage client (#8826) | 2 years ago
**What this PR does / why we need it**: Add support for IBM cloud object storage as storage client **Which issue(s) this PR fixes**: Fixes NA **Checklist** - [x] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [x] Documentation added - [x] Tests updated - [ ] `CHANGELOG.md` updated --------- Signed-off-by: Shahul <shahulsonhal@gmail.com> Co-authored-by: Aman Kumar Singh <amankrsingh2110@gmail.com> Co-authored-by: Suruthi-G-K <shruthi.suruthi@gmail.com> Co-authored-by: tareqmamari <tariq.mamari@de.ibm.com> Co-authored-by: shahulsonhal <shahulsonhal@gmail.com> Co-authored-by: Aditya C S <aditya.gnu@gmail.com> Co-authored-by: Tareq Al-Maamari <tariq.mamari@gmail.com> |
336e08fc4b | Salvacorts/max querier size messaging (#8916) | 2 years ago
**What this PR does / why we need it**: In https://github.com/grafana/loki/pull/8670 we introduced a new limit `max_querier_bytes_read`. When the limit was surpassed the following error message is printed: ``` query too large to execute on a single querier, either because parallelization is not enabled, the query is unshardable, or a shard query is too big to execute: (query: %s, limit: %s). Consider adding more specific stream selectors or reduce the time range of the query ``` As pointed out in [this comment][1], a user would have a hard time figuring out whether the cause was `parallelization is not enabled`, `the query is unshardable` or `a shard query is too big to execute`. This PR improves the error messaging for the `max_querier_bytes_read` limit to raise a different error for each of the causes above. **Which issue(s) this PR fixes**: Followup for https://github.com/grafana/loki/pull/8670 **Special notes for your reviewer**: **Checklist** - [x] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [x] Documentation added - [x] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` [1]: https://github.com/grafana/loki/pull/8670#discussion_r1146008266 --------- Co-authored-by: Danny Kopping <danny.kopping@grafana.com> |
44f1d8d7f6 | azure: respect retry config before cancelling the context (#8732) | 2 years ago
**What this PR does / why we need it**: On GET blob operations a context timeout is set to `RequestTimeout`, however this value is not the full timeout. This is only the first timeout and afterwards there are retries. As a result, the retry configuration is never used because the context is immediately cancelled. To fix we should set the context timeout to a value larger than the Azure timeout after retries. Since the backoff is exponential and randomized we need to take the value of `MaxRetryDelay` to set an upper bound. |
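To illustrate the fix in #8732: the context deadline for a GET blob call has to cover all attempts plus the backoff between them, not just a single attempt. A sketch with illustrative field names (not Loki's actual Azure configuration keys):

```go
package main

import (
	"fmt"
	"time"
)

// azureRetryConfig loosely mirrors the relevant knobs; the field names here
// are illustrative, not the real client configuration.
type azureRetryConfig struct {
	RequestTimeout time.Duration // timeout for a single attempt
	MaxRetries     int
	MaxRetryDelay  time.Duration // upper bound of the randomized exponential backoff
}

// overallTimeout returns a context deadline large enough for every retry to
// run: each attempt may take RequestTimeout, and between attempts the backoff
// is at most MaxRetryDelay.
func overallTimeout(c azureRetryConfig) time.Duration {
	attempts := time.Duration(c.MaxRetries + 1)
	return attempts*c.RequestTimeout + time.Duration(c.MaxRetries)*c.MaxRetryDelay
}

func main() {
	cfg := azureRetryConfig{RequestTimeout: 30 * time.Second, MaxRetries: 5, MaxRetryDelay: 10 * time.Second}
	// 3m50s: use this, not RequestTimeout, when deriving the context for the GET blob operation.
	fmt.Println(overallTimeout(cfg))
}
```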
46b7c2cfb6 | operator: Prepare community release v0.2.0 (#8651) | 2 years ago
c9d5a91206 | Ruler: Implement consistent rule evaluation jitter (#8896) | 2 years ago
**What this PR does / why we need it**: This PR replaces the previous random jitter with a consistent jitter. While both are random, having the random jitter be applied _consistently_ is essential for evaluating rules on a predictable cadence. If a rule is supposed to evaluate every minute, whether it evaluates at (e.g.) `01:00` or `01:03.234` is irrelevant because the evaluation _instant_ is not adjusted, so it will produce the same result whether run at `01:00` or `01:03.234`. However, if 1000 rules are set to evaluate at `01:00`, this will create a resource contention issue. |
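One common way to make jitter consistent is to hash the rule group's identity into a stable offset within its evaluation interval; the sketch below shows that idea and is not necessarily the exact scheme the ruler uses.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"time"
)

// consistentJitter derives a stable offset in [0, interval) from the rule
// group's identity, so a group always evaluates at the same point within its
// interval instead of every group firing together at the top of the minute.
func consistentJitter(groupKey string, interval time.Duration) time.Duration {
	h := fnv.New64a()
	h.Write([]byte(groupKey))
	return time.Duration(h.Sum64() % uint64(interval))
}

func main() {
	interval := time.Minute
	for _, g := range []string{"tenant-a/alerts.yaml;cpu", "tenant-b/alerts.yaml;mem"} {
		fmt.Printf("%s evaluates %v after the top of each minute\n", g, consistentJitter(g, interval).Truncate(time.Millisecond))
	}
}
```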
d38d481f35 | Distributor: add detail to stream rates failure (#8900) | 2 years ago
**What this PR does / why we need it**: Currently we cannot see if a single ingester is the source of the `unable to get stream rates` failures. This change adds the client address to the log entry. Signed-off-by: Danny Kopping <danny.kopping@grafana.com> |
4c4c7e3010 | Promtail: Fix examples how to build it (#8898) | 2 years ago
The build flags have to be provided before the package paths. Otherwise the `build` command fails with this error: > malformed import path "--tags=promtail_journal_enabled": leading dash See `go help build`: > usage: go build [-o output] [build flags] [packages] |
cba31024d4 | Extend scheduler queue metrics with enqueue/dequeue counters (#8891) | 2 years ago
**What this PR does / why we need it**: Better o11y of the scheduler. This change yields new metrics with potentially high cardinality on the scheduler. Signed-off-by: Christian Haudum <christian.haudum@gmail.com> |
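A minimal sketch of per-tenant enqueue/dequeue counters using the Prometheus client library; the metric names are made up for illustration, and the per-tenant label is what makes the cardinality potentially high.

```go
package main

import (
	"github.com/prometheus/client_golang/prometheus"
)

// Hypothetical metric names; the real names live in the scheduler code.
var (
	enqueueCount = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "loki",
		Name:      "query_scheduler_enqueue_count",
		Help:      "Total number of requests enqueued, per tenant.",
	}, []string{"user"})
	dequeueCount = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "loki",
		Name:      "query_scheduler_dequeue_count",
		Help:      "Total number of requests dequeued, per tenant.",
	}, []string{"user"})
)

func main() {
	prometheus.MustRegister(enqueueCount, dequeueCount)
	// Incremented wherever the queue enqueues or hands out a request.
	enqueueCount.WithLabelValues("tenant-a").Inc()
	dequeueCount.WithLabelValues("tenant-a").Inc()
}
```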
3344d59fb5 | Extract scheduler queue metrics into separate field | 2 years ago
This allows for easier passing of the metrics to the scheduler instantiation. Signed-off-by: Christian Haudum <christian.haudum@gmail.com> |
99acb9b345 | operator: Provide community bundle for openshift community hub (#8881) | 2 years ago
1c012d6a26 | Bump actions/setup-go from 3 to 4 (#8837) | 2 years ago
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 3 to 4. The v4.0.0 release enables dependency caching by default (it can be disabled with `cache: false`), adds `stable` and `oldstable` aliases for the `go-version` input, and adds support for pointing `go-version-file` at a `go.work` file.
17e05d28e4 | Promtail: Add a new target for the Azure Event Hub (#8787) | 2 years ago
**What this PR does / why we need it**: We want to allow receiving logs from the Azure Cloud. To solve this problem, we need to be able to configure Promtail to consume logs from the Azure Events Hub. To achieve this goal, we need to add a new configuration option to the scrape configuration in Promtail and implement the component that, given this configuration, can connect to the Azure Event Hub, parse received data, and prepare it to be stored in Loki. **Which issue(s) this PR fixes**: Fix [#8788](https://github.com/grafana/loki/issues/8788) # **Special notes for your reviewer**: With the current implementation, I would like to leverage Azure Events Hub's compatibility with a Kafka protocol and reuse code for a Kafka target. ### Why I decided to create a new target instead of just reusing the Kafka target: 1. Configuration for Kafka in this scenario requires fewer parameters. For example, version and auth should have some specific values. 2. Message handling is different for Azure Logs. In addition to what we are doing for the Kafka target, we want to fix JSON and split incoming messages into multiple log lines before propagating it further. ### From the implementation perspective, there are two goals: - Extract interfaces that are not Kafka configuration specific from the Kafka target module. Splitting Kafka config parsing and target creation. - To make the message handler/parser injectable or configurable: we would like to fix JSON and split a message into multiple messages. **Checklist** - [x] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [x] Documentation added - [x] Tests updated - [x] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` --------- Co-authored-by: Pablo <2617411+thepalbi@users.noreply.github.com> Co-authored-by: Paschalis Tsilias <tpaschalis@users.noreply.github.com> Co-authored-by: J Stickler <julie.stickler@grafana.com> |
793a689d1f | Iterators: re-implement mergeEntryIterator using loser.Tree for performance (#8637) | 2 years ago
**What this PR does / why we need it**: Building on #8351, this re-implements `mergeEntryIterator` using `loser.Tree`; the benchmark says it goes much faster but uses a bit more memory (while building the tree). ``` name old time/op new time/op delta SortIterator/merge_sort-4 10.7ms ± 4% 2.9ms ± 2% -72.74% (p=0.008 n=5+5) name old alloc/op new alloc/op delta SortIterator/merge_sort-4 11.2kB ± 0% 21.7kB ± 0% +93.45% (p=0.008 n=5+5) name old allocs/op new allocs/op delta SortIterator/merge_sort-4 6.00 ± 0% 7.00 ± 0% +16.67% (p=0.008 n=5+5) ``` The implementation is very different: rather than relying on iterators supporting `Peek()`, `mergeEntryIterator` now pulls items into its buffer until it finds one with a different timestamp or stream, and always works off what is in the buffer. The comment `"[we] pop the ones whose common value occurs most often."` did not appear to match the previous implementation, and no attempt was made to match this comment. A `Push()` function was added to `loser.Tree` to support live-streaming. This works by finding or making an empty slot, then re-running the initialize function to find the new winner. A consequence is that the previous "winner" value is lost after calling `Push()`, and users must call `Next()` to see the next item. A couple of tests had to be amended to avoid assuming particular behaviour of the implementation; I recommend that reviewers consider these closely. **Checklist** - [x] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - NA Documentation added - [x] Tests updated - NA `CHANGELOG.md` updated - NA Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` |
3b57ad2a54 | WAL: remove ePool that is unused (#8669) | 2 years ago
**What this PR does / why we need it**: Nothing is ever fetched from the pool - `GetEntries()` is never called. And the loop to call `PutEntries()` follows a call where the slice it ranges over is reset to empty, so that loop never added anything to the pool. **Checklist** - [x] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - NA Documentation added - NA Tests updated - NA `CHANGELOG.md` updated - NA Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` |
3ed9f0c9ef | Canary: support filtering / parsing logs with LogQL (#8871) | 2 years ago
**What this PR does / why we need it**: The canary logs can be shipped to Loki in various formats. This allows users of the canary to parse out the canaries logs from any format e.g. nested json. The alternative is to do something different for the canary logs to prevent the logs being wrapped / manipulated (e.g. write to Loki directly, or use a different log pipeline) which limits the benefits of the canary. I've confirmed this works locally. **Which issue(s) this PR fixes**: Fixes #7775 **Checklist** - [x] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [ ] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` |
4f94b89fdb | Loki: Add more spans to write path (#8888) | 2 years ago
**What this PR does / why we need it**: Add new spans to our write path to better determine which operations are holding our write performance. |
d24fe3e68b | Max bytes read limit (#8670) | 2 years ago
**What this PR does / why we need it**: This PR implements two new per-tenant limits that are enforced on log and metric queries (both range and instant) when TSDB is used: - `max_query_bytes_read`: Refuse queries that would read more than the configured bytes here. Overall limit regardless of splitting/sharding. The goal is to refuse queries that would take too long. The default value of 0 disables this limit. - `max_querier_bytes_read`: Refuse queries in which any of their subqueries after splitting and sharding would read more than the configured bytes here. The goal is to avoid a querier from running a query that would load too much data in memory and can potentially get OOMed. The default value of 0 disables this limit. These new limits can be configured per tenant and per query (see https://github.com/grafana/loki/pull/8727). The bytes a query would read are estimated through TSDB's index stats. Even though they are not exact, they are good enough to have a rough estimation of whether a query is too big to run or not. For more details on this refer to this discussion in the PR: https://github.com/grafana/loki/pull/8670#discussion_r1124858508. Both limits are implemented in the frontend. Even though we considered implementing `max_querier_bytes_read` in the querier, this way, the limits for pre and post splitting/sharding queries are enforced close to each other on the same component. Moreover, this way we can reduce the number of index stats requests issued to the index gateways by reusing the stats gathered while sharding the query. With regard to how index stats requests are issued: - We parallelize index stats requests by splitting them into queries that span up to 24h since our indices are sharded by 24h periods. On top of that, this prevents a single index gateway from processing a single huge request like `{app=~".+"} for 30d`. - If sharding is enabled and the query is shardable, for `max_querier_bytes_read`, we re-use the stats requests issued by the sharding ware. Specifically, we look at the [bytesPerShard][1] to enforce this limit. Note that once we merge this PR and enable these limits, the load of index stats requests will increase substantially and we may discover bottlenecks in our index gateways and TSDB. After speaking with @owen-d, we think it should be fine as, if needed, we can scale up our index gateways and support caching index stats requests. Here's a demo of this working: <img width="1647" alt="image" src="https://user-images.githubusercontent.com/8354290/226918478-d4b6c2fd-de4d-478a-9c8b-e38fe148fa95.png"> <img width="1647" alt="image" src="https://user-images.githubusercontent.com/8354290/226918798-a71b1db8-ea68-4d00-933b-e5eb1524d240.png"> **Which issue(s) this PR fixes**: This PR addresses https://github.com/grafana/loki-private/issues/674. **Special notes for your reviewer**: - @jeschkies has reviewed the changes related to query-time limits. - I've done some refactoring in this PR: - Extracted logic to get stats for a set of matches into a new function [getStatsForMatchers][2]. - Extracted the _Handler_ interface implementation for [queryrangebase.roundTripper][3] into a new type [queryrangebase.roundTripperHandler][4]. This is used to create the handler that skips the rest of configured middlewares when sending an index stat quests ([example][5]). 
**Checklist** - [x] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [x] Documentation added - [x] Tests updated - [x] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` [1]: |
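The enforcement described in #8670 ultimately reduces to comparing the bytes estimated from TSDB index stats against the configured limit; a minimal sketch with a hypothetical helper name and plain byte counts instead of humanized sizes:

```go
package main

import (
	"fmt"
)

// checkMaxBytesRead rejects a query whose estimated bytes (from TSDB index
// stats for the query's matchers and time range) exceed the configured limit.
// A limit of 0 disables the check.
func checkMaxBytesRead(statsBytes, maxQueryBytesRead uint64) error {
	if maxQueryBytesRead > 0 && statsBytes > maxQueryBytesRead {
		return fmt.Errorf("query would read %d bytes, which exceeds the limit of %d bytes; "+
			"consider adding more specific stream selectors or reducing the time range", statsBytes, maxQueryBytesRead)
	}
	return nil
}

func main() {
	fmt.Println(checkMaxBytesRead(16<<30, 10<<30))  // 16GiB estimated vs 10GiB limit: rejected
	fmt.Println(checkMaxBytesRead(1<<30, 0) == nil) // limit 0 disables the check
}
```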
94725e7908 | Define `RequiredLabels` query limit. (#8851) | 2 years ago
**What this PR does / why we need it**: Some end-users can impose great workload on a cluster by selecting too many streams in their queries. We should be able to limit them. Therefore we introduce a new limit `RequiredLabelMatchers` which list label names that must be included in the stream selectors. The implementation follows the same approach as for max query limit. **Which issue(s) this PR fixes**: Fixes #8745 **Checklist** - [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [x] Documentation added - [x] Tests updated - [x] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` |
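A small sketch of the `RequiredLabels` check from #8851, assuming the label names have already been extracted from the query's stream selector; the helper name is hypothetical.

```go
package main

import (
	"fmt"
)

// hasRequiredLabels checks that every configured required label name appears
// among the query's stream-selector matchers.
func hasRequiredLabels(matcherNames []string, required []string) error {
	present := make(map[string]struct{}, len(matcherNames))
	for _, n := range matcherNames {
		present[n] = struct{}{}
	}
	for _, r := range required {
		if _, ok := present[r]; !ok {
			return fmt.Errorf("stream selector is missing required label %q", r)
		}
	}
	return nil
}

func main() {
	// With required labels [namespace], the selector {app="foo"} is rejected,
	// while {namespace="prod", app="foo"} is accepted.
	fmt.Println(hasRequiredLabels([]string{"app"}, []string{"namespace"}))
	fmt.Println(hasRequiredLabels([]string{"namespace", "app"}, []string{"namespace"}) == nil)
}
```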
4e893a0a88 | tsdb: sample chunk info from tsdb index to limit the amount of chunkrefs we read from index (#8742) | 2 years ago
**What this PR does / why we need it**: Previously we used to read the info of all the chunks from the index and then filter it out in a layer above within the tsdb code. This wastes a lot of resources when there are too many chunks in the index, but we just need a few of them based on the query range. Before jumping into how and why I went with chunk sampling, here are some points to consider: * Chunks in the index are sorted by the start time of the chunk. Since this does not tell us much about the end time of the chunks, we can only skip chunks that start after the end time of the query, which still would make us process lots of chunks when the query touches chunks that are near the end of the table boundary. * Data is written to tsdb with variable length encoding. This means we can't skip/jump chunks since each chunk info might vary in the number of bytes we write. Here is how I have implemented the sampling approach: * Chunks are sampled considering their end times from the index and stored in memory. * Here is how `chunkSample` is defined: ``` type chunkSample struct { largestMaxt int64 // holds largest chunk end time we have seen so far. In other words all the earlier chunks have maxt <= largestMaxt idx int // index of the chunk in the list which helps with determining position of sampled chunk offset int // offset is relative to beginning chunk info block i.e after series labels info and chunk count etc prevChunkMaxt int64 // chunk times are stored as deltas. This is used for calculating mint of sampled chunk } ``` * When a query comes in, we will find `chunkSample`, which has the largest "largestMaxt" that is less than the given query start time. In other words, find a chunk sample which skips all/most of the chunks that end before the query start time. * Once we have found a chunk sample which skips all/most of the chunks that end before the query start, we will sequentially go through chunks and consider only the once that overlap with the query range. We will stop processing chunks as soon as we see a chunk that starts after the end time of the query since the chunks are sorted by start time. * Sampling of chunks is done lazily for only the series that are queried, so we do not waste any resources on sampling series that are not queried. * To avoid sampling too many chunks, I am sampling chunks at `1h` steps i.e given a sampled chunk with chunk end time `t`, the next chunk would be sampled with end time >= `t + 1h`. This means typically, we should have ~28 chunks sampled for each series queried from each index file, considering 2h default chunk length and chunks overlapping multiple tables. 
Here are the benchmark results showing the difference it makes: ``` benchmark old ns/op new ns/op delta BenchmarkTSDBIndex_GetChunkRefs-10 12420741 4764309 -61.64% BenchmarkTSDBIndex_GetChunkRefs-10 12412014 4794156 -61.37% BenchmarkTSDBIndex_GetChunkRefs-10 12382716 4748571 -61.65% BenchmarkTSDBIndex_GetChunkRefs-10 12391397 4691054 -62.14% BenchmarkTSDBIndex_GetChunkRefs-10 12272200 5023567 -59.07% benchmark old allocs new allocs delta BenchmarkTSDBIndex_GetChunkRefs-10 345653 40 -99.99% BenchmarkTSDBIndex_GetChunkRefs-10 345653 40 -99.99% BenchmarkTSDBIndex_GetChunkRefs-10 345653 40 -99.99% BenchmarkTSDBIndex_GetChunkRefs-10 345653 40 -99.99% BenchmarkTSDBIndex_GetChunkRefs-10 345653 40 -99.99% benchmark old bytes new bytes delta BenchmarkTSDBIndex_GetChunkRefs-10 27286536 6398855 -76.55% BenchmarkTSDBIndex_GetChunkRefs-10 27286571 6399276 -76.55% BenchmarkTSDBIndex_GetChunkRefs-10 27286566 6400699 -76.54% BenchmarkTSDBIndex_GetChunkRefs-10 27286561 6399158 -76.55% BenchmarkTSDBIndex_GetChunkRefs-10 27286580 6399643 -76.55% ``` **Checklist** - [x] Tests updated |
1549fec2fa | mixins: Normalize headless service name for query-frontend/scheduler (#8880) | 2 years ago
**What this PR does / why we need it**: Mixins in general are rather complex pieces of code to consume. When trying to use the loki mixins it wasn't apparent at first that I could use a headless service for DNS SRV discovery for both query_frontend and query_scheduler, especially because for frontend the service is named `query_frontend_headless_service` and for the scheduler, it's named `query_scheduler_service_discovery`. Furthermore, the query_frontend mixin provides a non-headless version of the service while query_scheduler doesn't. This PR aims at normalizing this for end users, both headless services are now named `query_frontend_headless_service` and `query_scheduler_headless_service` and both also provide a non-headless version of the service. |
93a1c21da5 | operator: Break the API types out into their own module (#8863) | 2 years ago
a2370e3af3 | operator: Refactor all type validations into own package (#8878) | 2 years ago
c6c07a8276 | Document how to install Loki and Promtail using APT/DNF. (#8841) | 2 years ago
**What this PR does / why we need it**: With #6456 we started providing native packages of Loki, Promtail and logcli. They've been added to Grafana's repository in the meantime. In order to spread the knowledge of the packages and the repo it should be documented. **Which issue(s) this PR fixes**: Fixes #52 **Checklist** - [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [x] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` --------- Co-authored-by: René Scheibe <rene.scheibe@gmail.com> Co-authored-by: J Stickler <julie.stickler@grafana.com> |
9844fad8b4 | Rename LeafQueue to TreeQueue (#8856) | 2 years ago
**What this PR does / why we need it**: `TreeQueue` is the semantically more correct term for this type of queue. Signed-off-by: Christian Haudum <christian.haudum@gmail.com> |
4721d7efd3 | operator: Remove mutations to non-updatable statefulset fields (#8875) | 2 years ago
b8221dc68f | Correcting typos merged in #8870. (#8873) | 2 years ago
**Which issue(s) this PR fixes**: Fixes typos introduced in #8870 |