// loki/pkg/storage/factory.go

package storage

import (
"context"
"flag"
"fmt"
"strings"
"time"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/grafana/dskit/flagext"
"github.com/grafana/loki/pkg/storage/chunk/cache"
"github.com/grafana/loki/pkg/storage/chunk/client"
"github.com/grafana/loki/pkg/storage/chunk/client/alibaba"
"github.com/grafana/loki/pkg/storage/chunk/client/aws"
"github.com/grafana/loki/pkg/storage/chunk/client/azure"
"github.com/grafana/loki/pkg/storage/chunk/client/baidubce"
"github.com/grafana/loki/pkg/storage/chunk/client/cassandra"
"github.com/grafana/loki/pkg/storage/chunk/client/congestion"
"github.com/grafana/loki/pkg/storage/chunk/client/gcp"
"github.com/grafana/loki/pkg/storage/chunk/client/grpc"
"github.com/grafana/loki/pkg/storage/chunk/client/hedging"
"github.com/grafana/loki/pkg/storage/chunk/client/ibmcloud"
"github.com/grafana/loki/pkg/storage/chunk/client/local"
"github.com/grafana/loki/pkg/storage/chunk/client/openstack"
"github.com/grafana/loki/pkg/storage/chunk/client/testutils"
"github.com/grafana/loki/pkg/storage/config"
"github.com/grafana/loki/pkg/storage/stores"
"github.com/grafana/loki/pkg/storage/stores/indexshipper"
"github.com/grafana/loki/pkg/storage/stores/indexshipper/downloads"
"github.com/grafana/loki/pkg/storage/stores/indexshipper/gatewayclient"
"github.com/grafana/loki/pkg/storage/stores/series/index"
"github.com/grafana/loki/pkg/storage/stores/shipper"
"github.com/grafana/loki/pkg/storage/stores/shipper/indexgateway"
"github.com/grafana/loki/pkg/storage/stores/tsdb"
"github.com/grafana/loki/pkg/util"
util_log "github.com/grafana/loki/pkg/util/log"
)
var (
indexGatewayClient index.Client
// singleton for each period
boltdbIndexClientsWithShipper = make(map[config.DayTime]*shipper.IndexClient)
supportedIndexTypes = []string{
config.BoltDBShipperType,
config.TSDBType,
}
deprecatedIndexTypes = []string{
config.StorageTypeAWS,
config.StorageTypeAWSDynamo,
config.StorageTypeBigTable,
config.StorageTypeBigTableHashed,
config.StorageTypeBoltDB,
config.StorageTypeCassandra,
config.StorageTypeGCP,
config.StorageTypeGCPColumnKey,
config.StorageTypeGrpc,
}
supportedStorageTypes = []string{
// local file system
config.StorageTypeFileSystem,
// remote object storages
config.StorageTypeAWS,
config.StorageTypeAlibabaCloud,
config.StorageTypeAzure,
config.StorageTypeBOS,
config.StorageTypeCOS,
config.StorageTypeGCS,
config.StorageTypeS3,
config.StorageTypeSwift,
}
deprecatedStorageTypes = []string{
config.StorageTypeAWSDynamo,
config.StorageTypeBigTable,
config.StorageTypeBigTableHashed,
config.StorageTypeCassandra,
config.StorageTypeGCP,
config.StorageTypeGCPColumnKey,
config.StorageTypeGrpc,
}
testingStorageTypes = []string{
config.StorageTypeInMemory,
}
)
// ResetBoltDBIndexClientsWithShipper stops and resets the singleton index
// clients and the shared index gateway client.
// MUST ONLY BE USED IN TESTS.
func ResetBoltDBIndexClientsWithShipper() {
for _, client := range boltdbIndexClientsWithShipper {
client.Stop()
}
boltdbIndexClientsWithShipper = make(map[config.DayTime]*shipper.IndexClient)
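// Stop and clear the shared index gateway client as well.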
if indexGatewayClient != nil {
indexGatewayClient.Stop()
indexGatewayClient = nil
}
}
// StoreLimits exposes the limits that apply to queries against the stores.
type StoreLimits interface {
downloads.Limits
stores.StoreLimits
indexgateway.Limits
CardinalityLimit(string) int
}
// Storage configs defined as named stores don't get any defaults as they do
// not register flags. To get around this we implement the Unmarshaler
// interface, which assigns the defaults before calling unmarshal.
// We cannot implement Unmarshaler directly on aws.StorageConfig or other
// stores, as it would end up overriding values set as part of ApplyDynamicConfig().
// Note: we unmarshal a second time after applying dynamic configs.
//
// Implementing the Unmarshaler for the Named*StorageConfig types is fine as
// we do not apply any dynamic config on them.
type NamedAWSStorageConfig aws.StorageConfig
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (cfg *NamedAWSStorageConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
flagext.DefaultValues((*aws.StorageConfig)(cfg))
return unmarshal((*aws.StorageConfig)(cfg))
}
func (cfg *NamedAWSStorageConfig) Validate() error {
return (*aws.StorageConfig)(cfg).Validate()
}
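// As a minimal sketch of the defaults-before-unmarshal pattern (the YAML
// value below is hypothetical), decoding
//
//	var cfg NamedAWSStorageConfig
//	err := yaml.Unmarshal([]byte("s3: s3://us-east-1/bucket"), &cfg)
//
// applies the registered flag defaults to cfg first and only then overlays
// the YAML fields, so any field omitted from the YAML keeps its default.
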
type NamedBlobStorageConfig azure.BlobStorageConfig
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (cfg *NamedBlobStorageConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
flagext.DefaultValues((*azure.BlobStorageConfig)(cfg))
return unmarshal((*azure.BlobStorageConfig)(cfg))
}
func (cfg *NamedBlobStorageConfig) Validate() error {
return (*azure.BlobStorageConfig)(cfg).Validate()
}
type NamedBOSStorageConfig baidubce.BOSStorageConfig
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (cfg *NamedBOSStorageConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
flagext.DefaultValues((*baidubce.BOSStorageConfig)(cfg))
return unmarshal((*baidubce.BOSStorageConfig)(cfg))
}
type NamedFSConfig local.FSConfig
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (cfg *NamedFSConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
flagext.DefaultValues((*local.FSConfig)(cfg))
return unmarshal((*local.FSConfig)(cfg))
}
type NamedGCSConfig gcp.GCSConfig
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (cfg *NamedGCSConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
flagext.DefaultValues((*gcp.GCSConfig)(cfg))
return unmarshal((*gcp.GCSConfig)(cfg))
}
type NamedOssConfig alibaba.OssConfig
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (cfg *NamedOssConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
flagext.DefaultValues((*alibaba.OssConfig)(cfg))
return unmarshal((*alibaba.OssConfig)(cfg))
}
type NamedSwiftConfig openstack.SwiftConfig
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (cfg *NamedSwiftConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
flagext.DefaultValues((*openstack.SwiftConfig)(cfg))
return unmarshal((*openstack.SwiftConfig)(cfg))
}
func (cfg *NamedSwiftConfig) Validate() error {
return (*openstack.SwiftConfig)(cfg).Validate()
}
type NamedCOSConfig ibmcloud.COSConfig
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (cfg *NamedCOSConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
flagext.DefaultValues((*ibmcloud.COSConfig)(cfg))
return unmarshal((*ibmcloud.COSConfig)(cfg))
}
// NamedStores helps configure additional object stores from a given storage provider
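//
// For illustration only (store names and values are hypothetical, not
// defaults), a named_stores section might look like:
//
//	named_stores:
//	  aws:
//	    store-1:
//	      s3: s3://us-east-1/bucket-1
//	  gcs:
//	    store-2:
//	      bucket_name: bucket-2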
type NamedStores struct {
AWS map[string]NamedAWSStorageConfig `yaml:"aws"`
Azure map[string]NamedBlobStorageConfig `yaml:"azure"`
BOS map[string]NamedBOSStorageConfig `yaml:"bos"`
Filesystem map[string]NamedFSConfig `yaml:"filesystem"`
GCS map[string]NamedGCSConfig `yaml:"gcs"`
AlibabaCloud map[string]NamedOssConfig `yaml:"alibabacloud"`
Swift map[string]NamedSwiftConfig `yaml:"swift"`
COS map[string]NamedCOSConfig `yaml:"cos"`
// storeType maps a named store's reference name to its storage type
storeType map[string]string `yaml:"-"`
}
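// populateStoreType builds the storeType lookup from the configured named
// stores, rejecting names that collide with predefined storage types or that
// are defined more than once.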
func (ns *NamedStores) populateStoreType() error {
ns.storeType = make(map[string]string)
checkForDuplicates := func(name string) error {
switch name {
case config.StorageTypeAWS, config.StorageTypeAWSDynamo, config.StorageTypeS3,
config.StorageTypeGCP, config.StorageTypeGCPColumnKey, config.StorageTypeBigTable, config.StorageTypeBigTableHashed, config.StorageTypeGCS,
config.StorageTypeAzure, config.StorageTypeBOS, config.StorageTypeSwift, config.StorageTypeCassandra,
config.StorageTypeFileSystem, config.StorageTypeInMemory, config.StorageTypeGrpc:
return fmt.Errorf("named store %q should not match with the name of a predefined storage type", name)
}
if st, ok := ns.storeType[name]; ok {
return fmt.Errorf("named store %q is already defined under %s", name, st)
}
return nil
}
for name := range ns.AWS {
if err := checkForDuplicates(name); err != nil {
return err
}
ns.storeType[name] = config.StorageTypeAWS
}
for name := range ns.Azure {
if err := checkForDuplicates(name); err != nil {
return err
}
ns.storeType[name] = config.StorageTypeAzure
}
for name := range ns.AlibabaCloud {
if err := checkForDuplicates(name); err != nil {
return err
}
ns.storeType[name] = config.StorageTypeAlibabaCloud
}
for name := range ns.BOS {
if err := checkForDuplicates(name); err != nil {
return err
}
ns.storeType[name] = config.StorageTypeBOS
}
for name := range ns.Filesystem {
if err := checkForDuplicates(name); err != nil {
return err
}
ns.storeType[name] = config.StorageTypeFileSystem
}
for name := range ns.GCS {
if err := checkForDuplicates(name); err != nil {
return err
}
ns.storeType[name] = config.StorageTypeGCS
}
for name := range ns.Swift {
if err := checkForDuplicates(name); err != nil {
return err
}
ns.storeType[name] = config.StorageTypeSwift
}
return nil
}
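// validate validates each named store config and then populates the
// storeType lookup.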
func (ns *NamedStores) validate() error {
for name, awsCfg := range ns.AWS {
if err := awsCfg.Validate(); err != nil {
return errors.Wrap(err, fmt.Sprintf("invalid AWS Storage config with name %s", name))
}
}
for name, azureCfg := range ns.Azure {
if err := azureCfg.Validate(); err != nil {
return errors.Wrap(err, fmt.Sprintf("invalid Azure Storage config with name %s", name))
}
}
for name, swiftCfg := range ns.Swift {
if err := swiftCfg.Validate(); err != nil {
return errors.Wrap(err, fmt.Sprintf("invalid Swift Storage config with name %s", name))
}
}
return ns.populateStoreType()
}
// Config chooses which storage client to use.
type Config struct {
AlibabaStorageConfig alibaba.OssConfig `yaml:"alibabacloud"`
AWSStorageConfig aws.StorageConfig `yaml:"aws"`
AzureStorageConfig azure.BlobStorageConfig `yaml:"azure"`
BOSStorageConfig baidubce.BOSStorageConfig `yaml:"bos"`
GCPStorageConfig gcp.Config `yaml:"bigtable" doc:"description=Deprecated: Configures storing indexes in Bigtable. Required fields only required when bigtable is defined in config."`
GCSConfig gcp.GCSConfig `yaml:"gcs" doc:"description=Configures storing chunks in GCS. Required fields only required when gcs is defined in config."`
CassandraStorageConfig cassandra.Config `yaml:"cassandra" doc:"description=Deprecated: Configures storing chunks and/or the index in Cassandra."`
BoltDBConfig local.BoltDBConfig `yaml:"boltdb" doc:"description=Deprecated: Configures storing index in BoltDB. Required fields only required when boltdb is present in the configuration."`
FSConfig local.FSConfig `yaml:"filesystem" doc:"description=Configures storing the chunks on the local file system. Required fields only required when filesystem is present in the configuration."`
Swift openstack.SwiftConfig `yaml:"swift"`
GrpcConfig grpc.Config `yaml:"grpc_store" doc:"deprecated"`
Hedging hedging.Config `yaml:"hedging"`
NamedStores NamedStores `yaml:"named_stores"`
COSConfig ibmcloud.COSConfig `yaml:"cos"`
IndexCacheValidity time.Duration `yaml:"index_cache_validity"`
CongestionControl congestion.Config `yaml:"congestion_control,omitempty"`
IndexQueriesCacheConfig cache.Config `yaml:"index_queries_cache_config"`
DisableBroadIndexQueries bool `yaml:"disable_broad_index_queries"`
MaxParallelGetChunk int `yaml:"max_parallel_get_chunk"`
MaxChunkBatchSize int `yaml:"max_chunk_batch_size"`
BoltDBShipperConfig shipper.Config `yaml:"boltdb_shipper" doc:"description=Configures storing index in an Object Store (GCS/S3/Azure/Swift/COS/Filesystem) in the form of boltdb files. Required fields only required when boltdb-shipper is defined in config."`
TSDBShipperConfig tsdb.IndexCfg `yaml:"tsdb_shipper" doc:"description=Configures storing index in an Object Store (GCS/S3/Azure/Swift/COS/Filesystem) in a prometheus TSDB-like format. Required fields only required when TSDB is defined in config."`
// Config for using AsyncStore when using async index stores like `boltdb-shipper`.
// It is required for getting chunk ids of recently flushed chunks from the ingesters.
EnableAsyncStore bool `yaml:"-"`
AsyncStoreConfig AsyncStoreCfg `yaml:"-"`
}
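// A minimal, illustrative storage_config sketch for a boltdb-shipper setup
// backed by GCS (bucket name and directory are assumptions, not defaults):
//
//	storage_config:
//	  boltdb_shipper:
//	    active_index_directory: /loki/index
//	    shared_store: gcs
//	  gcs:
//	    bucket_name: my-loki-bucket
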
// RegisterFlags registers the flags required to configure this Config.
func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
cfg.AWSStorageConfig.RegisterFlags(f)
cfg.AzureStorageConfig.RegisterFlags(f)
cfg.BOSStorageConfig.RegisterFlags(f)
cfg.COSConfig.RegisterFlags(f)
cfg.GCPStorageConfig.RegisterFlags(f)
cfg.GCSConfig.RegisterFlags(f)
cfg.CassandraStorageConfig.RegisterFlags(f)
cfg.BoltDBConfig.RegisterFlags(f)
cfg.FSConfig.RegisterFlags(f)
cfg.Swift.RegisterFlags(f)
cfg.GrpcConfig.RegisterFlags(f)
cfg.Hedging.RegisterFlagsWithPrefix("store.", f)
cfg.CongestionControl.RegisterFlagsWithPrefix("store.", f)
cfg.IndexQueriesCacheConfig.RegisterFlagsWithPrefix("store.index-cache-read.", "", f)
f.DurationVar(&cfg.IndexCacheValidity, "store.index-cache-validity", 5*time.Minute, "Cache validity for active index entries. Should be no higher than -ingester.max-chunk-idle.")
f.BoolVar(&cfg.DisableBroadIndexQueries, "store.disable-broad-index-queries", false, "Disable broad index queries which results in reduced cache usage and faster query performance at the expense of somewhat higher QPS on the index store.")
f.IntVar(&cfg.MaxParallelGetChunk, "store.max-parallel-get-chunk", 150, "Maximum number of parallel chunk reads.")
cfg.BoltDBShipperConfig.RegisterFlags(f)
f.IntVar(&cfg.MaxChunkBatchSize, "store.max-chunk-batch-size", 50, "The maximum number of chunks to fetch per batch.")
cfg.TSDBShipperConfig.RegisterFlagsWithPrefix("tsdb.", f)
}
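// As an illustrative (hypothetical) invocation, passing
// -store.index-cache-validity=10m -store.max-chunk-batch-size=100 on the
// command line overrides the defaults registered above.
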
// Validate validates the config and returns an error on failure.
func (cfg *Config) Validate() error {
if err := cfg.CassandraStorageConfig.Validate(); err != nil {
return errors.Wrap(err, "invalid Cassandra Storage config")
}
if err := cfg.GCPStorageConfig.Validate(); err != nil {
return errors.Wrap(err, "invalid GCP Storage Storage config")
}
if err := cfg.Swift.Validate(); err != nil {
return errors.Wrap(err, "invalid Swift Storage config")
}
if err := cfg.AzureStorageConfig.Validate(); err != nil {
return errors.Wrap(err, "invalid Azure Storage config")
}
if err := cfg.AWSStorageConfig.Validate(); err != nil {
return errors.Wrap(err, "invalid AWS Storage config")
}
if err := cfg.BoltDBShipperConfig.Validate(); err != nil {
return errors.Wrap(err, "invalid boltdb-shipper config")
}
if err := cfg.TSDBShipperConfig.Validate(); err != nil {
return errors.Wrap(err, "invalid tsdb config")
}
return cfg.NamedStores.validate()
}
// NewIndexClient creates a new index client of the desired type specified in the PeriodConfig
func NewIndexClient(periodCfg config.PeriodConfig, tableRange config.TableRange, cfg Config, schemaCfg config.SchemaConfig, limits StoreLimits, cm ClientMetrics, shardingStrategy indexgateway.ShardingStrategy, registerer prometheus.Registerer, logger log.Logger) (index.Client, error) {
switch {
case util.StringsContain(testingStorageTypes, periodCfg.IndexType):
switch periodCfg.IndexType {
case config.StorageTypeInMemory:
store := testutils.NewMockStorage()
return store, nil
}
case util.StringsContain(supportedIndexTypes, periodCfg.IndexType):
switch periodCfg.IndexType {
case config.BoltDBShipperType:
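// Route index queries through the index gateway when one is configured.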
if shouldUseIndexGatewayClient(cfg.BoltDBShipperConfig.Config) {
if indexGatewayClient != nil {
return indexGatewayClient, nil
}
gateway, err := gatewayclient.NewGatewayClient(cfg.BoltDBShipperConfig.IndexGatewayClientConfig, registerer, limits, logger)
if err != nil {
return nil, err
}
indexGatewayClient = gateway
return gateway, nil
}
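// Reuse the index client already built for this period, if any.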
if client, ok := boltdbIndexClientsWithShipper[periodCfg.From]; ok {
return client, nil
}
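// Upload indexes to the configured shared store; otherwise fall back to
// the period's object store.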
objectType := periodCfg.ObjectType
if cfg.BoltDBShipperConfig.SharedStoreType != "" {
objectType = cfg.BoltDBShipperConfig.SharedStoreType
}
objectClient, err := NewObjectClient(objectType, cfg, cm)
if err != nil {
return nil, err
}
var filterFn downloads.TenantFilter
if shardingStrategy != nil {
filterFn = shardingStrategy.FilterTenants
}
indexClient, err := shipper.NewIndexClient(cfg.BoltDBShipperConfig, objectClient, limits, filterFn, tableRange, registerer, logger)
if err != nil {
return nil, err
}
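// Cache the client, keyed by the period's start time, so it is reused.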
boltdbIndexClientsWithShipper[periodCfg.From] = indexClient
return indexClient, nil
case config.TSDBType:
// TODO(chaudum): Move TSDB index client creation into this code path
return nil, fmt.Errorf("code path not supported")
}
case util.StringsContain(deprecatedIndexTypes, periodCfg.IndexType):
level.Warn(util_log.Logger).Log("msg", fmt.Sprintf("%s is deprecated. Consider migrating to tsdb", periodCfg.IndexType))
switch periodCfg.IndexType {
case config.StorageTypeAWS, config.StorageTypeAWSDynamo:
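// The URL is expected to encode the region (and, optionally, credentials)
// in its scheme/host components, e.g. -dynamodb.url=dynamodb://us-east-1
// (illustrative value); any path component is ignored, as warned below.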
if cfg.AWSStorageConfig.DynamoDB.URL == nil {
return nil, fmt.Errorf("Must set -dynamodb.url in aws mode")
}
path := strings.TrimPrefix(cfg.AWSStorageConfig.DynamoDB.URL.Path, "/")
if len(path) > 0 {
level.Warn(util_log.Logger).Log("msg", "ignoring DynamoDB URL path", "path", path)
}
return aws.NewDynamoDBIndexClient(cfg.AWSStorageConfig.DynamoDBConfig, schemaCfg, registerer)
case config.StorageTypeGCP:
return gcp.NewStorageClientV1(context.Background(), cfg.GCPStorageConfig, schemaCfg)
case config.StorageTypeGCPColumnKey, config.StorageTypeBigTable:
return gcp.NewStorageClientColumnKey(context.Background(), cfg.GCPStorageConfig, schemaCfg)
case config.StorageTypeBigTableHashed:
cfg.GCPStorageConfig.DistributeKeys = true
return gcp.NewStorageClientColumnKey(context.Background(), cfg.GCPStorageConfig, schemaCfg)
case config.StorageTypeCassandra:
return cassandra.NewStorageClient(cfg.CassandraStorageConfig, schemaCfg, registerer)
case config.StorageTypeBoltDB:
return local.NewBoltDBIndexClient(cfg.BoltDBConfig)
case config.StorageTypeGrpc:
return grpc.NewStorageClient(cfg.GrpcConfig, schemaCfg)
}
}
return nil, fmt.Errorf("unrecognized index client type %s, choose one of: %s", periodCfg.IndexType, strings.Join(supportedIndexTypes, ","))
}
// NewChunkClient makes a new chunk.Client of the desired type.
func NewChunkClient(name string, cfg Config, schemaCfg config.SchemaConfig, cc congestion.Controller, registerer prometheus.Registerer, clientMetrics ClientMetrics) (client.Client, error) {
var storeType = name
// lookup storeType for named stores
if nsType, ok := cfg.NamedStores.storeType[name]; ok {
storeType = nsType
}
switch {
case util.StringsContain(testingStorageTypes, storeType):
switch storeType {
case config.StorageTypeInMemory:
c, err := NewObjectClient(name, cfg, clientMetrics)
if err != nil {
return nil, err
}
return client.NewClientWithMaxParallel(c, nil, 1, schemaCfg), nil
}
case util.StringsContain(supportedStorageTypes, storeType):
switch storeType {
case config.StorageTypeFileSystem:
c, err := NewObjectClient(name, cfg, clientMetrics)
if err != nil {
return nil, err
}
return client.NewClientWithMaxParallel(c, client.FSEncoder, cfg.MaxParallelGetChunk, schemaCfg), nil
case config.StorageTypeAWS, config.StorageTypeS3, config.StorageTypeAzure, config.StorageTypeBOS, config.StorageTypeSwift, config.StorageTypeCOS, config.StorageTypeAlibabaCloud:
c, err := NewObjectClient(name, cfg, clientMetrics)
if err != nil {
return nil, err
}
return client.NewClientWithMaxParallel(c, nil, cfg.MaxParallelGetChunk, schemaCfg), nil
case config.StorageTypeGCS:
c, err := NewObjectClient(name, cfg, clientMetrics)
if err != nil {
return nil, err
}
// TODO(dannyk): expand congestion control to all other object clients
// this switch statement can be simplified; all the branches like this one are alike
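// The controller applies AIMD (additive increase, multiplicative decrease),
// as in TCP congestion avoidance: each successful request widens the
// per-second request window, each rate-limited response shrinks it by a
// backoff factor (0.5 by default), and further requests are delayed once
// the window is exhausted.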
if cfg.CongestionControl.Enabled {
c = cc.Wrap(c)
}
return client.NewClientWithMaxParallel(c, nil, cfg.MaxParallelGetChunk, schemaCfg), nil
}
case util.StringsContain(deprecatedStorageTypes, storeType):
level.Warn(util_log.Logger).Log("msg", fmt.Sprintf("%s is deprecated. Please use one of the supported object stores: %s", storeType, strings.Join(supportedStorageTypes, ", ")))
switch storeType {
case config.StorageTypeAWSDynamo:
if cfg.AWSStorageConfig.DynamoDB.URL == nil {
return nil, fmt.Errorf("Must set -dynamodb.url in aws mode")
}
path := strings.TrimPrefix(cfg.AWSStorageConfig.DynamoDB.URL.Path, "/")
if len(path) > 0 {
level.Warn(util_log.Logger).Log("msg", "ignoring DynamoDB URL path", "path", path)
}
return aws.NewDynamoDBChunkClient(cfg.AWSStorageConfig.DynamoDBConfig, schemaCfg, registerer)
case config.StorageTypeGCP, config.StorageTypeGCPColumnKey, config.StorageTypeBigTable, config.StorageTypeBigTableHashed:
return gcp.NewBigtableObjectClient(context.Background(), cfg.GCPStorageConfig, schemaCfg)
case config.StorageTypeCassandra:
return cassandra.NewObjectClient(cfg.CassandraStorageConfig, schemaCfg, registerer, cfg.MaxParallelGetChunk)
case config.StorageTypeGrpc:
return grpc.NewStorageClient(cfg.GrpcConfig, schemaCfg)
}
}
return nil, fmt.Errorf("unrecognized chunk client type %s, choose one of: %s", name, strings.Join(supportedStorageTypes, ", "))
}
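// Usage sketch (illustrative, not part of the original file): resolving a
// chunk client for the built-in filesystem store. The directory value and
// the in-scope schemaCfg are hypothetical, and nil is passed for the
// congestion controller since only the GCS branch consults it.
//
//	cfg := Config{}
//	cfg.FSConfig.Directory = "/var/loki/chunks" // hypothetical path
//	cm := NewClientMetrics()
//	chunks, err := NewChunkClient(config.StorageTypeFileSystem, cfg, schemaCfg, nil, prometheus.DefaultRegisterer, cm)
//	if err != nil {
//		return nil, err
//	}
//	return chunks, nil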
// NewTableClient makes a new table client based on the configuration.
func NewTableClient(name string, cfg Config, cm ClientMetrics, registerer prometheus.Registerer) (index.TableClient, error) {
switch {
case util.StringsContain(testingStorageTypes, name):
switch name {
case config.StorageTypeInMemory:
return testutils.NewMockStorage(), nil
}
case util.StringsContain(supportedIndexTypes, name):
var sharedStoreKeyPrefix string
var objectType string
switch name {
case config.BoltDBShipperType:
objectType = cfg.BoltDBShipperConfig.SharedStoreType
sharedStoreKeyPrefix = cfg.BoltDBShipperConfig.SharedStoreKeyPrefix
case config.TSDBType:
objectType = cfg.TSDBShipperConfig.SharedStoreType
sharedStoreKeyPrefix = cfg.TSDBShipperConfig.SharedStoreKeyPrefix
}
objectClient, err := NewObjectClient(objectType, cfg, cm)
if err != nil {
return nil, err
}
return indexshipper.NewTableClient(objectClient, sharedStoreKeyPrefix), nil
case util.StringsContain(deprecatedIndexTypes, name):
switch name {
case config.StorageTypeAWS, config.StorageTypeAWSDynamo:
if cfg.AWSStorageConfig.DynamoDB.URL == nil {
return nil, fmt.Errorf("Must set -dynamodb.url in aws mode")
}
path := strings.TrimPrefix(cfg.AWSStorageConfig.DynamoDB.URL.Path, "/")
if len(path) > 0 {
level.Warn(util_log.Logger).Log("msg", "ignoring DynamoDB URL path", "path", path)
}
return aws.NewDynamoDBTableClient(cfg.AWSStorageConfig.DynamoDBConfig, registerer)
case config.StorageTypeGCP, config.StorageTypeGCPColumnKey, config.StorageTypeBigTable, config.StorageTypeBigTableHashed:
return gcp.NewTableClient(context.Background(), cfg.GCPStorageConfig)
case config.StorageTypeCassandra:
return cassandra.NewTableClient(context.Background(), cfg.CassandraStorageConfig, registerer)
case config.StorageTypeBoltDB:
return local.NewTableClient(cfg.BoltDBConfig.Directory)
case config.StorageTypeGrpc:
return grpc.NewTableClient(cfg.GrpcConfig)
}
}
return nil, fmt.Errorf("unrecognized table client type %s, choose one of: %s", name, strings.Join(supportedIndexTypes, ", "))
}
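// Usage sketch (illustrative): building a table client for boltdb-shipper,
// which routes through the shared index-shipper table client above. The
// shared-store values are hypothetical.
//
//	cfg.BoltDBShipperConfig.SharedStoreType = config.StorageTypeGCS
//	cfg.BoltDBShipperConfig.SharedStoreKeyPrefix = "index/" // hypothetical prefix
//	tc, err := NewTableClient(config.BoltDBShipperType, cfg, cm, prometheus.DefaultRegisterer)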
// NewBucketClient makes a new bucket client based on the configuration.
func NewBucketClient(storageConfig Config) (index.BucketClient, error) {
if storageConfig.FSConfig.Directory != "" {
return local.NewFSObjectClient(storageConfig.FSConfig)
}
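// No filesystem directory configured: fall through to a nil client, which
// callers are assumed to treat as "no bucket client available".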
return nil, nil
}
type ClientMetrics struct {
AzureMetrics azure.BlobStorageMetrics
}
func NewClientMetrics() ClientMetrics {
return ClientMetrics{
AzureMetrics: azure.NewBlobStorageMetrics(),
}
}
func (c *ClientMetrics) Unregister() {
c.AzureMetrics.Unregister()
}
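// Lifecycle sketch (illustrative): construction allocates the Azure
// blob-storage metrics, so callers that create short-lived instances
// (tests, reloads) are assumed to pair construction with Unregister to
// avoid duplicate metric registration.
//
//	cm := NewClientMetrics()
//	defer cm.Unregister()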
// NewObjectClient makes a new StorageClient of the desired type.
func NewObjectClient(name string, cfg Config, clientMetrics ClientMetrics) (client.ObjectClient, error) {
var (
namedStore string
storeType = name
)
// lookup storeType for named stores
if nsType, ok := cfg.NamedStores.storeType[name]; ok {
storeType = nsType
namedStore = name
}
switch storeType {
case config.StorageTypeInMemory:
return testutils.NewMockStorage(), nil
case config.StorageTypeAWS, config.StorageTypeS3:
s3Cfg := cfg.AWSStorageConfig.S3Config
if namedStore != "" {
awsCfg, ok := cfg.NamedStores.AWS[namedStore]
if !ok {
return nil, fmt.Errorf("Unrecognized named aws storage config %s", name)
}
s3Cfg = awsCfg.S3Config
}
return aws.NewS3ObjectClient(s3Cfg, cfg.Hedging)
case config.StorageTypeAlibabaCloud:
ossCfg := cfg.AlibabaStorageConfig
if namedStore != "" {
nsCfg, ok := cfg.NamedStores.AlibabaCloud[namedStore]
if !ok {
return nil, fmt.Errorf("Unrecognized named alibabacloud oss storage config %s", name)
}
ossCfg = (alibaba.OssConfig)(nsCfg)
}
return alibaba.NewOssObjectClient(context.Background(), ossCfg)
case config.StorageTypeGCS:
gcsCfg := cfg.GCSConfig
if namedStore != "" {
nsCfg, ok := cfg.NamedStores.GCS[namedStore]
if !ok {
return nil, fmt.Errorf("Unrecognized named gcs storage config %s", name)
}
gcsCfg = (gcp.GCSConfig)(nsCfg)
}
// ensure the GCS client's internal retry mechanism is disabled if we're using congestion control,
// which has its own retry mechanism
// TODO(dannyk): implement hedging in controller
if cfg.CongestionControl.Enabled {
gcsCfg.EnableRetries = false
}
return gcp.NewGCSObjectClient(context.Background(), gcsCfg, cfg.Hedging)
case config.StorageTypeAzure:
azureCfg := cfg.AzureStorageConfig
if namedStore != "" {
nsCfg, ok := cfg.NamedStores.Azure[namedStore]
if !ok {
return nil, fmt.Errorf("Unrecognized named azure storage config %s", name)
}
azureCfg = (azure.BlobStorageConfig)(nsCfg)
}
return azure.NewBlobStorage(&azureCfg, clientMetrics.AzureMetrics, cfg.Hedging)
case config.StorageTypeSwift:
swiftCfg := cfg.Swift
if namedStore != "" {
nsCfg, ok := cfg.NamedStores.Swift[namedStore]
if !ok {
return nil, fmt.Errorf("Unrecognized named swift storage config %s", name)
}
swiftCfg = (openstack.SwiftConfig)(nsCfg)
}
return openstack.NewSwiftObjectClient(swiftCfg, cfg.Hedging)
case config.StorageTypeFileSystem:
fsCfg := cfg.FSConfig
if namedStore != "" {
nsCfg, ok := cfg.NamedStores.Filesystem[namedStore]
if !ok {
return nil, fmt.Errorf("Unrecognized named filesystem storage config %s", name)
}
fsCfg = (local.FSConfig)(nsCfg)
}
return local.NewFSObjectClient(fsCfg)
case config.StorageTypeBOS:
bosCfg := cfg.BOSStorageConfig
if namedStore != "" {
nsCfg, ok := cfg.NamedStores.BOS[namedStore]
if !ok {
return nil, fmt.Errorf("Unrecognized named bos storage config %s", name)
}
bosCfg = (baidubce.BOSStorageConfig)(nsCfg)
}
return baidubce.NewBOSObjectStorage(&bosCfg)
case config.StorageTypeCOS:
cosCfg := cfg.COSConfig
if namedStore != "" {
nsCfg, ok := cfg.NamedStores.COS[namedStore]
if !ok {
return nil, fmt.Errorf("Unrecognized named cos storage config %s", name)
}
cosCfg = (ibmcloud.COSConfig)(nsCfg)
}
return ibmcloud.NewCOSObjectClient(cosCfg, cfg.Hedging)
default:
return nil, fmt.Errorf("Unrecognized storage client %v, choose one of: %v, %v, %v, %v, %v, %v, %v, %v, %v", name, config.StorageTypeAWS, config.StorageTypeS3, config.StorageTypeGCS, config.StorageTypeAzure, config.StorageTypeAlibabaCloud, config.StorageTypeSwift, config.StorageTypeBOS, config.StorageTypeCOS, config.StorageTypeFileSystem)
}
}
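// Named-store resolution sketch (illustrative): if cfg.NamedStores maps the
// hypothetical name "archive" to the GCS store type, the lookup at the top
// of NewObjectClient sets storeType to GCS and namedStore to "archive", so
// the call below builds a GCS client from cfg.NamedStores.GCS["archive"].
//
//	objClient, err := NewObjectClient("archive", cfg, NewClientMetrics())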