De-duplicate common prefixes as returned for individual buckets (#11317)
With multiple buckets configured on a single AWS/S3 store, each object is stored in one of the buckets by hashing its object key in `bucketFromKey()`. That is sufficient for `GetObject()` and `PutObject()`. `List()`, however, needs to list every bucket and combine the results. It appends all object keys into a `storageObjects` slice, which works because each key exists in exactly one of the buckets. That is not the case for the common prefixes: a common prefix may exist in multiple buckets. This PR removes duplicates from the common prefixes gathered from the multiple buckets in `List()`.

Without this fix, a repeated common prefix leads to a table being compacted multiple times concurrently, which is not safe. For example, this error results when the compactor concurrently tries to compact a table and a 'losing' execution finds that its clean-up has already been done: `level=error caller=compactor.go:128 table-name=loki_index_tsdb_19683 msg="failed to remove downloaded index file" path=/var/loki/compactor/loki_index_tsdb_19683/1700667008-loki-write-0-1700660468255353690.tsdb err="remove /var/loki/compactor/loki_index_tsdb_19683/1700667008-loki-write-0-1700660468255353690.tsdb: no such file or directory"`
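A minimal sketch of the de-duplication idea in Go: merge the common prefixes returned by each bucket's list call, keeping only the first occurrence of each prefix. The `StorageCommonPrefix` string type is modelled on Loki's object-client interface, but the function name and structure here are illustrative and not the exact code merged in this PR.

```go
package main

import "fmt"

// StorageCommonPrefix is a common prefix (a "directory") returned by a
// bucket listing, as a plain string type.
type StorageCommonPrefix string

// dedupeCommonPrefixes merges the common prefixes returned by each
// bucket's List call, keeping only one copy of each prefix while
// preserving the order in which prefixes are first seen.
func dedupeCommonPrefixes(perBucket [][]StorageCommonPrefix) []StorageCommonPrefix {
	seen := make(map[StorageCommonPrefix]struct{})
	var merged []StorageCommonPrefix
	for _, prefixes := range perBucket {
		for _, p := range prefixes {
			if _, ok := seen[p]; ok {
				// The same prefix was already returned by another bucket.
				continue
			}
			seen[p] = struct{}{}
			merged = append(merged, p)
		}
	}
	return merged
}

func main() {
	// The same table prefix can show up in more than one bucket,
	// because objects under it are spread across buckets by key hash.
	bucketA := []StorageCommonPrefix{"index/loki_index_tsdb_19683/", "index/loki_index_tsdb_19684/"}
	bucketB := []StorageCommonPrefix{"index/loki_index_tsdb_19683/"}

	fmt.Println(dedupeCommonPrefixes([][]StorageCommonPrefix{bucketA, bucketB}))
	// Output: [index/loki_index_tsdb_19683/ index/loki_index_tsdb_19684/]
}
```

With this, each table prefix is listed exactly once, so the compactor no longer schedules the same table for compaction more than once per run.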