Add configuration documentation generation tool (#7916)

**What this PR does / why we need it**:

Add a tool that generates configuration flags documentation from the
flag properties defined at registration time in the code. The tool is based
on the [Mimir doc generation
tool](https://github.com/grafana/mimir/tree/main/tools/doc-generator)
and adapted to Loki's configuration specifics.
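For context, the generation is tag-driven: each configuration field carries its `yaml` name and an optional `doc` tag (with `hidden`, `deprecated`, or `description=` entries separated by `|`, as seen in the diffs below), which the tool reads via reflection. The following is a minimal, hypothetical sketch of that tag-parsing idea, not the actual `tools/doc-generator` code:

```
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// entry is what a generator would emit for one YAML field.
type entry struct {
	FieldName   string
	YAMLName    string
	Description string
	Hidden      bool
	Deprecated  bool
}

// parseStruct walks a config struct and collects the `yaml` and `doc` tags.
func parseStruct(t reflect.Type) []entry {
	var entries []entry
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		yamlName := strings.Split(f.Tag.Get("yaml"), ",")[0]
		if yamlName == "" || yamlName == "-" {
			continue // not part of the YAML config, so not documented
		}
		e := entry{FieldName: f.Name, YAMLName: yamlName}
		// doc tags in this PR look like:
		//   doc:"hidden"
		//   doc:"deprecated|description=Use X instead."
		for _, part := range strings.Split(f.Tag.Get("doc"), "|") {
			switch {
			case part == "hidden":
				e.Hidden = true
			case part == "deprecated":
				e.Deprecated = true
			case strings.HasPrefix(part, "description="):
				e.Description = strings.TrimPrefix(part, "description=")
			}
		}
		entries = append(entries, e)
	}
	return entries
}

func main() {
	type exampleConfig struct {
		Timeout string `yaml:"timeout" doc:"deprecated"`
		Ring    string `yaml:"ring" doc:"description=Ring used by the ruler."`
		Ignored string `yaml:"-"`
	}
	for _, e := range parseStruct(reflect.TypeOf(exampleConfig{})) {
		fmt.Printf("%+v\n", e)
	}
}
```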

Prior to this PR, the configuration flags documentation was dispersed
across two sources:
* [_index.md](5550cd65ec/docs/sources/configuration/_index.md)
* configuration flags registration in the code

This meant that there was no single source of truth. 
In this PR, the previous `_index.md` file is replaced with the new file
generated by the tool.

The next step is to add a CI step that validates that the `_index.md`
file was generated from the current flag settings. This will be done in
a follow-up PR.
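A plausible shape for that check (the exact invocation is an assumption, not defined in this PR) is to regenerate the file and fail the build on any drift:

```
# Hypothetical CI step: regenerate the reference from the template, then
# fail if the committed file differs from what the flag registrations produce.
go run ./tools/doc-generator ./docs/sources/configuration/index.template > docs/sources/configuration/_index.md
git diff --exit-code -- docs/sources/configuration/_index.md
```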

**NOTE:** this is not a documentation update PR. Apart from some minor
typo fixes, the documentation changes in the code were copied from the
`_index.md` file.

**Which issue(s) this PR fixes**:
Fixes https://github.com/grafana/loki-private/issues/83

**Special notes for your reviewer**:

Files:
* [docs/sources/configuration/index.template](5550cd65ec/docs/sources/configuration/index.template): template used to generate the final configuration file
* [/docs/sources/configuration/_index.md](c32e5d0acb/docs/sources/configuration/_index.md): file generated by the tool
* `loki/pkg` directory files updated with up-to-date documentation from the `_index.md` file
* [tools/doc-generator](5550cd65ec/tools/doc-generator): directory with the documentation generation tool.

**Checklist**
- [ ] Reviewed the `CONTRIBUTING.md` guide
- [ ] Documentation added
- [ ] Tests updated
- [ ] `CHANGELOG.md` updated
- [ ] Changes that require user attention or interaction to upgrade are
documented in `docs/sources/upgrading/_index.md`
Susana Ferreira committed f93b91bfb5 (parent 4768b6d997)
Changed files (number of changed lines in parentheses):

1. docs/sources/configuration/_index.md (5038)
2. docs/sources/configuration/index.template (98)
3. go.mod (1)
4. go.sum (1)
5. pkg/ingester/client/client.go (8)
6. pkg/ingester/ingester.go (28)
7. pkg/ingester/wal.go (4)
8. pkg/logql/engine.go (4)
9. pkg/loki/common/common.go (22)
10. pkg/loki/config_test.go (6)
11. pkg/loki/loki.go (70)
12. pkg/lokifrontend/config.go (2)
13. pkg/lokifrontend/frontend/v1/frontend.go (2)
14. pkg/querier/querier.go (12)
15. pkg/querier/queryrange/queryrangebase/roundtrip.go (2)
16. pkg/ruler/base/ruler.go (22)
17. pkg/ruler/base/ruler_ring.go (8)
18. pkg/ruler/base/storage.go (14)
19. pkg/ruler/config.go (12)
20. pkg/ruler/config/alertmanager.go (2)
21. pkg/ruler/storage/cleaner/cleaner.go (1)
22. pkg/ruler/storage/cleaner/config.go (5)
23. pkg/ruler/storage/instance/instance.go (12)
24. pkg/scheduler/scheduler.go (6)
25. pkg/storage/chunk/client/aws/s3_storage_client.go (2)
26. pkg/storage/chunk/client/baidubce/bos_storage_client.go (8)
27. pkg/storage/chunk/client/hedging/hedging.go (4)
28. pkg/storage/config/schema_config.go (9)
29. pkg/storage/factory.go (14)
30. pkg/storage/stores/indexshipper/compactor/compactor.go (16)
31. pkg/storage/stores/shipper/indexgateway/config.go (6)
32. pkg/util/ring_config.go (2)
33. pkg/util/validation/limits.go (2)
34. pkg/validation/limits.go (84)
35. tools/doc-generator/main.go (185)
36. tools/doc-generator/parse/parser.go (645)
37. tools/doc-generator/parse/root_blocks.go (224)
38. tools/doc-generator/parse/util.go (62)
39. tools/doc-generator/parse/util_test.go (52)
40. tools/doc-generator/writer.go (245)
41. vendor/github.com/mitchellh/go-wordwrap/LICENSE.md (21)
42. vendor/github.com/mitchellh/go-wordwrap/README.md (39)
43. vendor/github.com/mitchellh/go-wordwrap/wordwrap.go (73)
44. vendor/modules.txt (3)

File diff suppressed because it is too large.

@@ -0,0 +1,98 @@
---
description: Describes parameters used to configure Grafana Loki.
menuTitle: Configuration parameters
title: Grafana Loki configuration parameters
weight: 500
---
# Grafana Loki configuration parameters
{{ .GeneratedFileWarning }}
Grafana Loki is configured in a YAML file (usually referred to as `loki.yaml`)
which contains information on the Loki server and its individual components,
depending on which mode Loki is launched in.
Configuration examples can be found in the [Configuration Examples](examples/) document.
## Printing Loki Config At Runtime
If you pass Loki the flag `-print-config-stderr` or `-log-config-reverse-order` (or `-print-config-stderr=true`),
Loki will dump the entire config object it has created from the built-in defaults, combined first with
overrides from the config file, and second with overrides from flags.
The result is the value for every config object in the Loki config struct, which is very large.
Many values will not be relevant to your install, such as storage configs which you are not using and which you did not define;
this is expected, as every option has a default value whether or not it is used.
This config is what Loki will use to run, and it can be invaluable for debugging issues related to configuration. It
is especially useful for making sure your config files and flags are being read and loaded properly.
`-print-config-stderr` is nice when running Loki directly, e.g. `./loki`, as you can get a quick output of the entire Loki config.
`-log-config-reverse-order` is the flag we run Loki with in all our environments; the config entries are reversed, so
that the order of configs reads correctly top to bottom when viewed in Grafana's Explore.
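For example (file path is illustrative):

```
# Dump the fully merged configuration (defaults + config file + flags) to stderr:
./loki -config.file=loki.yaml -print-config-stderr

# Log the configuration in reverse order at startup:
./loki -config.file=loki.yaml -log-config-reverse-order
```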
## Reload At Runtime
Promtail can reload its configuration at runtime. If the new configuration
is not well-formed, the changes will not be applied.
A configuration reload is triggered by sending a `SIGHUP` to the Promtail process or
by sending an HTTP POST request to the `/reload` endpoint (when the `--server.enable-runtime-reload` flag is enabled).
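Either of the following triggers a reload (process lookup, host, and port are illustrative):

```
# Send SIGHUP to the Promtail process:
kill -SIGHUP "$(pidof promtail)"

# Or, with --server.enable-runtime-reload enabled, POST to the reload endpoint:
curl -X POST http://localhost:9080/reload
```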
## Configuration File Reference
To specify which configuration file to load, pass the `-config.file` flag at the
command line. The value can be a list of comma-separated paths, in which case the first
file that exists will be used.
If no `-config.file` argument is specified, Loki will look for the `config.yaml` file in the
current working directory and in the `config/` subdirectory, and try to use that.
The file is written in [YAML
format](https://en.wikipedia.org/wiki/YAML), defined by the scheme below.
Brackets indicate that a parameter is optional. For non-list parameters the
value is set to the specified default.
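For example, an entry in the generated reference looks roughly like this (an illustrative entry; the rendering format is assumed from the generator, while the description and default are taken from the querier flags in this PR):

```
# The maximum number of concurrent queries allowed.
# CLI flag: -querier.max-concurrent
[max_concurrent: <int> | default = 10]
```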
### Use environment variables in the configuration
> **Note:** This feature is only available in Loki 2.1+.
You can use environment variable references in the configuration file to set values that need to be configurable during deployment.
To do this, pass `-config.expand-env=true` and use:
```
${VAR}
```
Where VAR is the name of the environment variable.
Each variable reference is replaced at startup by the value of the environment variable.
The replacement is case-sensitive and occurs before the YAML file is parsed.
References to undefined variables are replaced by empty strings unless you specify a default value or custom error text.
To specify a default value, use:
```
${VAR:-default_value}
```
Where default_value is the value to use if the environment variable is undefined.
Pass the `-config.expand-env` flag at the command line to enable this way of setting configs.
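For example, with `-config.expand-env=true` the following resolves the references at startup (the variable names and field choice are illustrative):

```
storage_config:
  aws:
    # S3_REGION falls back to us-east-1 when undefined; an undefined
    # S3_BUCKET would be replaced by an empty string.
    s3: s3://${S3_REGION:-us-east-1}/${S3_BUCKET}
```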
### Generic placeholders
- `<boolean>` : a boolean that can take the values `true` or `false`
- `<int>` : any integer matching the regular expression `[1-9]+[0-9]*`
- `<duration>` : a duration matching the regular expression `[0-9]+(ns|us|µs|ms|[smh])`
- `<labelname>` : a string matching the regular expression `[a-zA-Z_][a-zA-Z0-9_]*`
- `<labelvalue>` : a string of unicode characters
- `<filename>` : a valid path relative to current working directory or an absolute path.
- `<host>` : a valid string consisting of a hostname or IP followed by an optional port number
- `<string>` : a string
- `<secret>` : a string that represents a secret, such as a password
### Supported contents and default values of `loki.yaml`
{{ .ConfigFile }}

@@ -67,6 +67,7 @@ require (
github.com/klauspost/pgzip v1.2.5
github.com/mattn/go-ieproxy v0.0.1
github.com/minio/minio-go/v7 v7.0.32-0.20220706200439-ef3e45ed9cdb
github.com/mitchellh/go-wordwrap v1.0.0
github.com/mitchellh/mapstructure v1.5.0
github.com/modern-go/reflect2 v1.0.2
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f

@@ -1024,6 +1024,7 @@ github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrk
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/go-wordwrap v1.0.0 h1:6GlHJ/LTGMrIJbwgdqdl2eEH8o+Exx/0m8ir9Gns0u4=
github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
github.com/mitchellh/hashstructure v0.0.0-20170609045927-2bca23e0e452/go.mod h1:QjSHrPWS+BGUVBYkbTZWEnOh3G1DutKwClXU/ABz6AQ=

@@ -41,9 +41,9 @@ type ClosableHealthAndIngesterClient struct {
// Config for an ingester client.
type Config struct {
PoolConfig clientpool.PoolConfig `yaml:"pool_config,omitempty"`
PoolConfig clientpool.PoolConfig `yaml:"pool_config,omitempty" doc:"description=Configures how connections are pooled."`
RemoteTimeout time.Duration `yaml:"remote_timeout,omitempty"`
GRPCClientConfig grpcclient.Config `yaml:"grpc_client_config"`
GRPCClientConfig grpcclient.Config `yaml:"grpc_client_config" doc:"description=Configures how the gRPC connection to ingesters work as a client."`
GRPCUnaryClientInterceptors []grpc.UnaryClientInterceptor `yaml:"-"`
GRCPStreamClientInterceptors []grpc.StreamClientInterceptor `yaml:"-"`
@@ -58,8 +58,8 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
cfg.GRPCClientConfig.RegisterFlagsWithPrefix("ingester.client", f)
cfg.PoolConfig.RegisterFlags(f)
f.DurationVar(&cfg.PoolConfig.RemoteTimeout, "ingester.client.healthcheck-timeout", 1*time.Second, "Timeout for healthcheck rpcs.")
f.DurationVar(&cfg.RemoteTimeout, "ingester.client.timeout", 5*time.Second, "Timeout for ingester client RPCs.")
f.DurationVar(&cfg.PoolConfig.RemoteTimeout, "ingester.client.healthcheck-timeout", 1*time.Second, "How quickly a dead client will be removed after it has been detected to disappear. Set this to a value to allow time for a secondary health check to recover the missing client.")
f.DurationVar(&cfg.RemoteTimeout, "ingester.client.timeout", 5*time.Second, "The remote request timeout on the client side.")
}
// New returns a new ingester client.

@@ -67,7 +67,7 @@ var (
// Config for an ingester.
type Config struct {
LifecyclerConfig ring.LifecyclerConfig `yaml:"lifecycler,omitempty"`
LifecyclerConfig ring.LifecyclerConfig `yaml:"lifecycler,omitempty" doc:"description=Configures how the lifecycle of the ingester will operate and where it will register for discovery."`
// Config for transferring chunks.
MaxTransferRetries int `yaml:"max_transfer_retries,omitempty"`
@@ -96,7 +96,7 @@ type Config struct {
QueryStore bool `yaml:"-"`
QueryStoreMaxLookBackPeriod time.Duration `yaml:"query_store_max_look_back_period"`
WAL WALConfig `yaml:"wal,omitempty"`
WAL WALConfig `yaml:"wal,omitempty" doc:"description=The ingester WAL (Write Ahead Log) records incoming logs and stores them on the local file systems in order to guarantee persistence of acknowledged data in the event of a process crash."`
ChunkFilterer chunk.RequestChunkFilterer `yaml:"-"`
// Optional wrapper that can be used to modify the behaviour of the ingester
@@ -113,22 +113,22 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
cfg.WAL.RegisterFlags(f)
f.IntVar(&cfg.MaxTransferRetries, "ingester.max-transfer-retries", 0, "Number of times to try and transfer chunks before falling back to flushing. If set to 0 or negative value, transfers are disabled.")
f.IntVar(&cfg.ConcurrentFlushes, "ingester.concurrent-flushes", 32, "")
f.DurationVar(&cfg.FlushCheckPeriod, "ingester.flush-check-period", 30*time.Second, "")
f.DurationVar(&cfg.FlushOpTimeout, "ingester.flush-op-timeout", 10*time.Minute, "")
f.DurationVar(&cfg.RetainPeriod, "ingester.chunks-retain-period", 0, "")
f.DurationVar(&cfg.MaxChunkIdle, "ingester.chunks-idle-period", 30*time.Minute, "")
f.IntVar(&cfg.BlockSize, "ingester.chunks-block-size", 256*1024, "")
f.IntVar(&cfg.TargetChunkSize, "ingester.chunk-target-size", 1572864, "") // 1.5 MB
f.IntVar(&cfg.ConcurrentFlushes, "ingester.concurrent-flushes", 32, "How many flushes can happen concurrently from each stream.")
f.DurationVar(&cfg.FlushCheckPeriod, "ingester.flush-check-period", 30*time.Second, "How often should the ingester see if there are any blocks to flush.")
f.DurationVar(&cfg.FlushOpTimeout, "ingester.flush-op-timeout", 10*time.Minute, "The timeout before a flush is cancelled.")
f.DurationVar(&cfg.RetainPeriod, "ingester.chunks-retain-period", 0, "How long chunks should be retained in-memory after they've been flushed.")
f.DurationVar(&cfg.MaxChunkIdle, "ingester.chunks-idle-period", 30*time.Minute, "How long chunks should sit in-memory with no updates before being flushed if they don't hit the max block size. This means that half-empty chunks will still be flushed after a certain period as long as they receive no further activity.")
f.IntVar(&cfg.BlockSize, "ingester.chunks-block-size", 256*1024, "The targeted _uncompressed_ size in bytes of a chunk block. When this threshold is exceeded the head block will be cut and compressed inside the chunk.")
f.IntVar(&cfg.TargetChunkSize, "ingester.chunk-target-size", 1572864, "A target _compressed_ size in bytes for chunks. This is a desired size not an exact size, chunks may be slightly bigger or significantly smaller if they get flushed for other reasons (e.g. chunk_idle_period). A value of 0 creates chunks with a fixed 10 blocks, a non zero value will create chunks with a variable number of blocks to meet the target size.") // 1.5 MB
f.StringVar(&cfg.ChunkEncoding, "ingester.chunk-encoding", chunkenc.EncGZIP.String(), fmt.Sprintf("The algorithm to use for compressing chunk. (%s)", chunkenc.SupportedEncoding()))
f.DurationVar(&cfg.SyncPeriod, "ingester.sync-period", 0, "How often to cut chunks to synchronize ingesters.")
f.DurationVar(&cfg.SyncPeriod, "ingester.sync-period", 0, "Parameters used to synchronize ingesters to cut chunks at the same moment. Sync period is used to roll over incoming entry to a new chunk. If chunk's utilization isn't high enough (eg. less than 50% when sync_min_utilization is set to 0.5), then this chunk rollover doesn't happen.")
f.Float64Var(&cfg.SyncMinUtilization, "ingester.sync-min-utilization", 0, "Minimum utilization of chunk when doing synchronization.")
f.IntVar(&cfg.MaxReturnedErrors, "ingester.max-ignored-stream-errors", 10, "Maximum number of ignored stream errors to return. 0 to return all errors.")
f.DurationVar(&cfg.MaxChunkAge, "ingester.max-chunk-age", 2*time.Hour, "Maximum chunk age before flushing.")
f.IntVar(&cfg.MaxReturnedErrors, "ingester.max-ignored-stream-errors", 10, "The maximum number of errors a stream will report to the user when a push fails. 0 to make unlimited.")
f.DurationVar(&cfg.MaxChunkAge, "ingester.max-chunk-age", 2*time.Hour, "The maximum duration of a timeseries chunk in memory. If a timeseries runs for longer than this, the current chunk will be flushed to the store and a new chunk created.")
f.DurationVar(&cfg.QueryStoreMaxLookBackPeriod, "ingester.query-store-max-look-back-period", 0, "How far back should an ingester be allowed to query the store for data, for use only with boltdb-shipper/tsdb index and filesystem object store. -1 for infinite.")
f.BoolVar(&cfg.AutoForgetUnhealthy, "ingester.autoforget-unhealthy", false, "Enable to remove unhealthy ingesters from the ring after `ring.kvstore.heartbeat_timeout`")
f.BoolVar(&cfg.AutoForgetUnhealthy, "ingester.autoforget-unhealthy", false, "Forget about ingesters having heartbeat timestamps older than `ring.kvstore.heartbeat_timeout`. This is equivalent to clicking on the `/ring` `forget` button in the UI: the ingester is removed from the ring. This is a useful setting when you are sure that an unhealthy node won't return. An example is when not using stateful sets or the equivalent. Use `memberlist.rejoin_interval` > 0 to handle network partition cases when using a memberlist.")
f.IntVar(&cfg.IndexShards, "ingester.index-shards", index.DefaultIndexShards, "Shard factor used in the ingesters for the in process reverse index. This MUST be evenly divisible by ALL schema shard factors or Loki will not start.")
f.IntVar(&cfg.MaxDroppedStreams, "ingester.tailer.max-dropped-streams", 10, "Maximum number of dropped streams to keep in memory during tailing")
f.IntVar(&cfg.MaxDroppedStreams, "ingester.tailer.max-dropped-streams", 10, "Maximum number of dropped streams to keep in memory during tailing.")
}
func (cfg *Config) Validate() error {
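To illustrate how these registrations surface in the generated reference (the rendering format is assumed from the Mimir-style generator; the descriptions and defaults are the ones registered above), the ingester block would contain entries roughly like:

```
ingester:
  # How many flushes can happen concurrently from each stream.
  # CLI flag: -ingester.concurrent-flushes
  [concurrent_flushes: <int> | default = 32]

  # The timeout before a flush is cancelled.
  # CLI flag: -ingester.flush-op-timeout
  [flush_op_timeout: <duration> | default = 10m]
```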

@@ -40,14 +40,14 @@ func (cfg *WALConfig) Validate() error {
// RegisterFlags adds the flags required to config this to the given FlagSet
func (cfg *WALConfig) RegisterFlags(f *flag.FlagSet) {
f.StringVar(&cfg.Dir, "ingester.wal-dir", "wal", "Directory to store the WAL and/or recover from WAL.")
f.StringVar(&cfg.Dir, "ingester.wal-dir", "wal", "Directory where the WAL data should be stored and/or recovered from.")
f.BoolVar(&cfg.Enabled, "ingester.wal-enabled", true, "Enable writing of ingested data into WAL.")
f.DurationVar(&cfg.CheckpointDuration, "ingester.checkpoint-duration", 5*time.Minute, "Interval at which checkpoints should be created.")
f.BoolVar(&cfg.FlushOnShutdown, "ingester.flush-on-shutdown", false, "When WAL is enabled, should chunks be flushed to long-term storage on shutdown.")
// Need to set default here
cfg.ReplayMemoryCeiling = flagext.ByteSize(defaultCeiling)
f.Var(&cfg.ReplayMemoryCeiling, "ingester.wal-replay-memory-ceiling", "How much memory the WAL may use during replay before it needs to flush chunks to storage, i.e. 10GB. We suggest setting this to a high percentage (~75%) of available memory.")
f.Var(&cfg.ReplayMemoryCeiling, "ingester.wal-replay-memory-ceiling", "Maximum memory size the WAL may use during replay. After hitting this, it will flush data to storage before continuing. A unit suffix (KB, MB, GB) may be applied.")
}
// WAL interface allows us to have a no-op WAL when the WAL is disabled.

@@ -113,7 +113,7 @@ type Querier interface {
type EngineOpts struct {
// TODO: remove this after next release.
// Timeout for queries execution
Timeout time.Duration `yaml:"timeout"`
Timeout time.Duration `yaml:"timeout" doc:"deprecated"`
// MaxLookBackPeriod is the maximum amount of time to look back for log lines.
// only used for instant log queries.
@@ -122,7 +122,7 @@ type EngineOpts struct {
func (opts *EngineOpts) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
// TODO: remove this configuration after next release.
f.DurationVar(&opts.Timeout, prefix+".engine.timeout", DefaultEngineTimeout, "Timeout for query execution. Instead, rely only on querier.query-timeout. (deprecated)")
f.DurationVar(&opts.Timeout, prefix+".engine.timeout", DefaultEngineTimeout, "Use querier.query-timeout instead. Timeout for query execution.")
f.DurationVar(&opts.MaxLookBackPeriod, prefix+".engine.max-lookback-period", 30*time.Second, "The maximum amount of time to look back for log lines. Used only for instant log queries.")
}

@@ -51,8 +51,10 @@ type Config struct {
func (c *Config) RegisterFlags(f *flag.FlagSet) {
throwaway := flag.NewFlagSet("throwaway", flag.PanicOnError)
throwaway.IntVar(&c.ReplicationFactor, "common.replication-factor", 3, "How many ingesters incoming data should be replicated to.")
c.Storage.RegisterFlagsWithPrefix("common.storage", throwaway)
c.Ring.RegisterFlagsWithPrefix("", "collectors/", throwaway)
c.Storage.RegisterFlagsWithPrefix("common.storage.", f)
c.Storage.RegisterFlagsWithPrefix("common.storage.", throwaway)
c.Ring.RegisterFlagsWithPrefix("common.storage.", "collectors/", f)
c.Ring.RegisterFlagsWithPrefix("common.storage.", "collectors/", throwaway)
// instance related flags.
c.InstanceInterfaceNames = netutil.PrivateNetworkInterfacesWithFallback([]string{"eth0", "en0"}, util_log.Logger)
@@ -74,12 +76,12 @@ type Storage struct {
}
func (s *Storage) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
s.S3.RegisterFlagsWithPrefix(prefix+".s3", f)
s.GCS.RegisterFlagsWithPrefix(prefix+".gcs", f)
s.Azure.RegisterFlagsWithPrefix(prefix+".azure", f)
s.Swift.RegisterFlagsWithPrefix(prefix+".swift", f)
s.BOS.RegisterFlagsWithPrefix(prefix+".bos", f)
s.FSConfig.RegisterFlagsWithPrefix(prefix+".filesystem", f)
s.S3.RegisterFlagsWithPrefix(prefix, f)
s.GCS.RegisterFlagsWithPrefix(prefix, f)
s.Azure.RegisterFlagsWithPrefix(prefix, f)
s.Swift.RegisterFlagsWithPrefix(prefix, f)
s.BOS.RegisterFlagsWithPrefix(prefix, f)
s.FSConfig.RegisterFlagsWithPrefix(prefix, f)
s.Hedging.RegisterFlagsWithPrefix(prefix, f)
}
@@ -89,6 +91,6 @@ type FilesystemConfig struct {
}
func (cfg *FilesystemConfig) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
f.StringVar(&cfg.ChunksDirectory, prefix+".chunk-directory", "", "Directory to store chunks in.")
f.StringVar(&cfg.RulesDirectory, prefix+".rules-directory", "", "Directory to store rules in.")
f.StringVar(&cfg.ChunksDirectory, prefix+"filesystem.chunk-directory", "", "Directory to store chunks in.")
f.StringVar(&cfg.RulesDirectory, prefix+"filesystem.rules-directory", "", "Directory to store rules in.")
}

@@ -5,11 +5,11 @@ import (
"testing"
"time"
"github.com/grafana/loki/pkg/ingester"
"github.com/grafana/loki/pkg/storage/config"
"github.com/prometheus/common/model"
"github.com/stretchr/testify/require"
"github.com/grafana/loki/pkg/ingester"
"github.com/grafana/loki/pkg/storage/config"
)
func TestCrossComponentValidation(t *testing.T) {

@@ -28,7 +28,7 @@ import (
"github.com/grafana/loki/pkg/distributor"
"github.com/grafana/loki/pkg/ingester"
"github.com/grafana/loki/pkg/ingester/client"
ingester_client "github.com/grafana/loki/pkg/ingester/client"
"github.com/grafana/loki/pkg/logql"
"github.com/grafana/loki/pkg/loki/common"
"github.com/grafana/loki/pkg/lokifrontend"
@@ -63,41 +63,43 @@ import (
type Config struct {
Target flagext.StringSliceCSV `yaml:"target,omitempty"`
AuthEnabled bool `yaml:"auth_enabled,omitempty"`
HTTPPrefix string `yaml:"http_prefix"`
HTTPPrefix string `yaml:"http_prefix" doc:"hidden"`
BallastBytes int `yaml:"ballast_bytes"`
// TODO(dannyk): Remove these config options before next release; they don't need to be configurable.
// These are only here to allow us to test the new functionality.
UseBufferedLogger bool `yaml:"use_buffered_logger"`
UseSyncLogger bool `yaml:"use_sync_logger"`
UseBufferedLogger bool `yaml:"use_buffered_logger" doc:"hidden"`
UseSyncLogger bool `yaml:"use_sync_logger" doc:"hidden"`
LegacyReadTarget bool `yaml:"legacy_read_target,omitempty"`
Common common.Config `yaml:"common,omitempty"`
Server server.Config `yaml:"server,omitempty"`
InternalServer internalserver.Config `yaml:"internal_server,omitempty"`
InternalServer internalserver.Config `yaml:"internal_server,omitempty" doc:"hidden"`
Distributor distributor.Config `yaml:"distributor,omitempty"`
Querier querier.Config `yaml:"querier,omitempty"`
CompactorHTTPClient compactor_client.HTTPConfig `yaml:"compactor_client,omitempty"`
CompactorGRPCClient compactor_client.GRPCConfig `yaml:"compactor_grpc_client,omitempty"`
IngesterClient client.Config `yaml:"ingester_client,omitempty"`
QueryScheduler scheduler.Config `yaml:"query_scheduler"`
Frontend lokifrontend.Config `yaml:"frontend,omitempty"`
QueryRange queryrange.Config `yaml:"query_range,omitempty"`
Ruler ruler.Config `yaml:"ruler,omitempty"`
IngesterClient ingester_client.Config `yaml:"ingester_client,omitempty"`
Ingester ingester.Config `yaml:"ingester,omitempty"`
StorageConfig storage.Config `yaml:"storage_config,omitempty"`
IndexGateway indexgateway.Config `yaml:"index_gateway"`
StorageConfig storage.Config `yaml:"storage_config,omitempty"`
ChunkStoreConfig config.ChunkStoreConfig `yaml:"chunk_store_config,omitempty"`
SchemaConfig config.SchemaConfig `yaml:"schema_config,omitempty"`
CompactorConfig compactor.Config `yaml:"compactor,omitempty"`
CompactorHTTPClient compactor_client.HTTPConfig `yaml:"compactor_client,omitempty" doc:"hidden"`
CompactorGRPCClient compactor_client.GRPCConfig `yaml:"compactor_grpc_client,omitempty" doc:"hidden"`
LimitsConfig validation.Limits `yaml:"limits_config,omitempty"`
TableManager index.TableManagerConfig `yaml:"table_manager,omitempty"`
Worker worker.Config `yaml:"frontend_worker,omitempty"`
Frontend lokifrontend.Config `yaml:"frontend,omitempty"`
Ruler ruler.Config `yaml:"ruler,omitempty"`
QueryRange queryrange.Config `yaml:"query_range,omitempty"`
RuntimeConfig runtimeconfig.Config `yaml:"runtime_config,omitempty"`
MemberlistKV memberlist.KVConfig `yaml:"memberlist"`
Tracing tracing.Config `yaml:"tracing"`
CompactorConfig compactor.Config `yaml:"compactor,omitempty"`
QueryScheduler scheduler.Config `yaml:"query_scheduler"`
UsageReport usagestats.Config `yaml:"analytics"`
TableManager index.TableManagerConfig `yaml:"table_manager,omitempty"`
MemberlistKV memberlist.KVConfig `yaml:"memberlist" doc:"hidden"`
RuntimeConfig runtimeconfig.Config `yaml:"runtime_config,omitempty"`
Tracing tracing.Config `yaml:"tracing"`
UsageReport usagestats.Config `yaml:"analytics"`
LegacyReadTarget bool `yaml:"legacy_read_target,omitempty" doc:"hidden"`
Common common.Config `yaml:"common,omitempty"`
}
// RegisterFlags registers flag.
@@ -107,12 +109,24 @@ func (c *Config) RegisterFlags(f *flag.FlagSet) {
// Set the default module list to 'all'
c.Target = []string{All}
f.Var(&c.Target, "target", "Comma-separated list of Loki modules to load. "+
"The alias 'all' can be used in the list to load a number of core modules and will enable single-binary mode. "+
"The aliases 'read' and 'write' can be used to only run components related to the read path or write path, respectively.")
f.BoolVar(&c.AuthEnabled, "auth.enabled", true, "Set to false to disable auth.")
f.IntVar(&c.BallastBytes, "config.ballast-bytes", 0, "The amount of virtual memory to reserve as a ballast in order to optimise "+
"garbage collection. Larger ballasts result in fewer garbage collection passes, reducing compute overhead at the cost of memory usage.")
f.Var(&c.Target, "target",
"A comma-separated list of components to run. "+
"The default value 'all' runs Loki in single binary mode. "+
"The value 'read' is an alias to run only read-path related components such as the querier and query-frontend, but all in the same process. "+
"The value 'write' is an alias to run only write-path related components such as the distributor and compactor, but all in the same process. "+
"Supported values: all, compactor, distributor, ingester, querier, query-scheduler, ingester-querier, query-frontend, index-gateway, ruler, table-manager, read, write. "+
"A full list of available targets can be printed when running Loki with the '-list-targets' command line flag. ",
)
f.BoolVar(&c.AuthEnabled, "auth.enabled", true,
"Enables authentication through the X-Scope-OrgID header, which must be present if true. "+
"If false, the OrgID will always be set to 'fake'.",
)
f.IntVar(&c.BallastBytes, "config.ballast-bytes", 0,
"The amount of virtual memory in bytes to reserve as ballast in order to optimize garbage collection. "+
"Larger ballasts result in fewer garbage collection passes, reducing CPU overhead at the cost of heap size. "+
"The ballast will not consume physical memory, because it is never read from. "+
"It will, however, distort metrics, because it is counted as live memory. ",
)
f.BoolVar(&c.UseBufferedLogger, "log.use-buffered", true, "Uses a line-buffered logger to improve performance.")
f.BoolVar(&c.UseSyncLogger, "log.use-sync", true, "Forces all lines logged to hold a mutex to serialize writes.")
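For background, the ballast referenced by `config.ballast-bytes` is simply a large allocation that is never touched, which raises the live-heap figure the garbage collector works against; a minimal illustration of the technique (not Loki's actual implementation):

```
package main

import "runtime"

func main() {
	// Allocate a 1 GiB ballast. It is never read or written, so the pages
	// stay untouched virtual memory, but the GC counts it as live heap
	// and therefore runs less often.
	ballast := make([]byte, 1<<30)

	// ... run the application ...

	// Keep the ballast reachable until shutdown so it is not collected early.
	runtime.KeepAlive(ballast)
}
```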

@@ -30,6 +30,6 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
cfg.TLS.RegisterFlagsWithPrefix("frontend.tail-tls-config", f)
f.BoolVar(&cfg.CompressResponses, "querier.compress-http-responses", false, "Compress HTTP responses.")
f.StringVar(&cfg.DownstreamURL, "frontend.downstream-url", "", "URL of downstream Prometheus.")
f.StringVar(&cfg.DownstreamURL, "frontend.downstream-url", "", "URL of downstream Loki.")
f.StringVar(&cfg.TailProxyURL, "frontend.tail-proxy-url", "", "URL of querier for tail proxy.")
}

@@ -37,7 +37,7 @@ type Config struct {
// RegisterFlags adds the flags required to config this to the given FlagSet.
func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
f.IntVar(&cfg.MaxOutstandingPerTenant, "querier.max-outstanding-requests-per-tenant", 2048, "Maximum number of outstanding requests per tenant per frontend; requests beyond this error with HTTP 429.")
f.DurationVar(&cfg.QuerierForgetDelay, "query-frontend.querier-forget-delay", 0, "If a querier disconnects without sending notification about graceful shutdown, the query-frontend will keep the querier in the tenant's shard until the forget delay has passed. This feature is useful to reduce the blast radius when shuffle-sharding is enabled.")
f.DurationVar(&cfg.QuerierForgetDelay, "query-frontend.querier-forget-delay", 0, "In the event a tenant is repeatedly sending queries that lead the querier to crash or be killed due to an out-of-memory error, the crashed querier will be disconnected from the query frontend and a new querier will be immediately assigned to the tenant’s shard. This invalidates the assumption that shuffle sharding can be used to reduce the impact on tenants. This option mitigates the impact by configuring a delay between when a querier disconnects because of a crash and when the crashed querier is actually removed from the tenant's shard.")
}
type Limits interface {

@@ -54,19 +54,19 @@ type Config struct {
QueryStoreOnly bool `yaml:"query_store_only"`
QueryIngesterOnly bool `yaml:"query_ingester_only"`
MultiTenantQueriesEnabled bool `yaml:"multi_tenant_queries_enabled"`
QueryTimeout time.Duration `yaml:"query_timeout"`
QueryTimeout time.Duration `yaml:"query_timeout" doc:"hidden"`
}
// RegisterFlags register flags.
func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
cfg.Engine.RegisterFlagsWithPrefix("querier", f)
f.DurationVar(&cfg.TailMaxDuration, "querier.tail-max-duration", 1*time.Hour, "Limit the duration for which live tailing request would be served")
f.DurationVar(&cfg.TailMaxDuration, "querier.tail-max-duration", 1*time.Hour, "Maximum duration for which the live tailing requests should be served.")
f.DurationVar(&cfg.ExtraQueryDelay, "querier.extra-query-delay", 0, "Time to wait before sending more than the minimum successful query requests.")
f.DurationVar(&cfg.QueryIngestersWithin, "querier.query-ingesters-within", 3*time.Hour, "Maximum lookback beyond which queries are not sent to ingester. 0 means all queries are sent to ingester.")
f.IntVar(&cfg.MaxConcurrent, "querier.max-concurrent", 10, "The maximum number of concurrent queries.")
f.BoolVar(&cfg.QueryStoreOnly, "querier.query-store-only", false, "Queriers should only query the store and not try to query any ingesters")
f.BoolVar(&cfg.QueryIngesterOnly, "querier.query-ingester-only", false, "Queriers should only query the ingesters and not try to query any store")
f.BoolVar(&cfg.MultiTenantQueriesEnabled, "querier.multi-tenant-queries-enabled", false, "Enable queries across multiple tenants. (Experimental)")
f.IntVar(&cfg.MaxConcurrent, "querier.max-concurrent", 10, "The maximum number of concurrent queries allowed.")
f.BoolVar(&cfg.QueryStoreOnly, "querier.query-store-only", false, "Only query the store, and not attempt any ingesters. This is useful for running a standalone querier pool operating only against stored data.")
f.BoolVar(&cfg.QueryIngesterOnly, "querier.query-ingester-only", false, "When true, queriers only query the ingesters, and not stored data. This is useful when the object store is unavailable.")
f.BoolVar(&cfg.MultiTenantQueriesEnabled, "querier.multi-tenant-queries-enabled", false, "When true, allow queries to span multiple tenants.")
}
// Validate validates the config.

@@ -39,7 +39,7 @@ var PassthroughMiddleware = MiddlewareFunc(func(next Handler) Handler {
// Config for query_range middleware chain.
type Config struct {
// Deprecated: SplitQueriesByInterval will be removed in the next major release
SplitQueriesByInterval time.Duration `yaml:"split_queries_by_interval"`
SplitQueriesByInterval time.Duration `yaml:"split_queries_by_interval" doc:"deprecated|description=Use -querier.split-queries-by-interval instead. CLI flag: -querier.split-queries-by-day. Split queries by day and execute in parallel."`
AlignQueriesWithStep bool `yaml:"align_queries_with_step"`
ResultsCacheConfig `yaml:"results_cache"`

@@ -80,7 +80,7 @@ type Config struct {
// This is used for template expansion in alerts; must be a valid URL.
ExternalURL flagext.URLValue `yaml:"external_url"`
// Labels to add to all alerts
ExternalLabels labels.Labels `yaml:"external_labels,omitempty"`
ExternalLabels labels.Labels `yaml:"external_labels,omitempty" doc:"description=Labels to add to all alerts."`
// GRPC Client configuration.
ClientTLSConfig grpcclient.Config `yaml:"ruler_client"`
// How frequently to evaluate rules by default.
@@ -88,7 +88,7 @@ type Config struct {
// How frequently to poll for updated rules.
PollInterval time.Duration `yaml:"poll_interval"`
// Rule Storage and Polling configuration.
StoreConfig RuleStoreConfig `yaml:"storage" doc:"description=Deprecated. Use -ruler-storage.* CLI flags and their respective YAML config options instead."`
StoreConfig RuleStoreConfig `yaml:"storage" doc:"deprecated|description=Use -ruler-storage. CLI flags and their respective YAML config options instead."`
// Path to store rule files for prom manager.
RulePath string `yaml:"rule_path"`
@@ -106,7 +106,7 @@ type Config struct {
EnableSharding bool `yaml:"enable_sharding"`
ShardingStrategy string `yaml:"sharding_strategy"`
SearchPendingFor time.Duration `yaml:"search_pending_for"`
Ring RingConfig `yaml:"ring"`
Ring RingConfig `yaml:"ring" doc:"description=Ring used by Loki ruler. The CLI flags prefix for this block configuration is 'ruler.ring'."`
FlushCheckPeriod time.Duration `yaml:"flush_period"`
EnableAPI bool `yaml:"enable_api"`
@@ -157,10 +157,10 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
cfg.ExternalURL.URL, _ = url.Parse("") // Must be non-nil
f.Var(&cfg.ExternalURL, "ruler.external.url", "URL of alerts return path.")
f.DurationVar(&cfg.EvaluationInterval, "ruler.evaluation-interval", 1*time.Minute, "How frequently to evaluate rules")
f.DurationVar(&cfg.PollInterval, "ruler.poll-interval", 1*time.Minute, "How frequently to poll for rule changes")
f.DurationVar(&cfg.EvaluationInterval, "ruler.evaluation-interval", 1*time.Minute, "How frequently to evaluate rules.")
f.DurationVar(&cfg.PollInterval, "ruler.poll-interval", 1*time.Minute, "How frequently to poll for rule changes.")
f.StringVar(&cfg.AlertmanagerURL, "ruler.alertmanager-url", "", "Comma-separated list of URL(s) of the Alertmanager(s) to send notifications to. Each Alertmanager URL is treated as a separate group in the configuration. Multiple Alertmanagers in HA per group can be supported by using DNS resolution via -ruler.alertmanager-discovery.")
f.StringVar(&cfg.AlertmanagerURL, "ruler.alertmanager-url", "", "Comma-separated list of Alertmanager URLs to send notifications to. Each Alertmanager URL is treated as a separate group in the configuration. Multiple Alertmanagers in HA per group can be supported by using DNS resolution via '-ruler.alertmanager-discovery'.")
f.BoolVar(&cfg.AlertmanagerDiscovery, "ruler.alertmanager-discovery", false, "Use DNS SRV records to discover Alertmanager hosts.")
f.DurationVar(&cfg.AlertmanagerRefreshInterval, "ruler.alertmanager-refresh-interval", alertmanagerRefreshIntervalDefault, "How long to wait between refreshing DNS resolutions of Alertmanager hosts.")
f.BoolVar(&cfg.AlertmanangerEnableV2API, "ruler.alertmanager-use-v2", false, "If enabled requests to Alertmanager will utilize the V2 API.")
@@ -168,20 +168,20 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
f.DurationVar(&cfg.NotificationTimeout, "ruler.notification-timeout", alertmanagerNotificationTimeoutDefault, "HTTP timeout duration when sending notifications to the Alertmanager.")
f.DurationVar(&cfg.SearchPendingFor, "ruler.search-pending-for", 5*time.Minute, "Time to spend searching for a pending ruler when shutting down.")
f.BoolVar(&cfg.EnableSharding, "ruler.enable-sharding", false, "Distribute rule evaluation using ring backend")
f.BoolVar(&cfg.EnableSharding, "ruler.enable-sharding", false, "Distribute rule evaluation using ring backend.")
f.StringVar(&cfg.ShardingStrategy, "ruler.sharding-strategy", util.ShardingStrategyDefault, fmt.Sprintf("The sharding strategy to use. Supported values are: %s.", strings.Join(supportedShardingStrategies, ", ")))
f.DurationVar(&cfg.FlushCheckPeriod, "ruler.flush-period", 1*time.Minute, "Period with which to attempt to flush rule groups.")
f.StringVar(&cfg.RulePath, "ruler.rule-path", "/rules", "file path to store temporary rule files for the prometheus rule managers")
f.BoolVar(&cfg.EnableAPI, "experimental.ruler.enable-api", false, "Enable the ruler api")
f.StringVar(&cfg.RulePath, "ruler.rule-path", "/rules", "File path to store temporary rule files.")
f.BoolVar(&cfg.EnableAPI, "experimental.ruler.enable-api", false, "Enable the ruler api.")
f.DurationVar(&cfg.OutageTolerance, "ruler.for-outage-tolerance", time.Hour, `Max time to tolerate outage for restoring "for" state of alert.`)
f.DurationVar(&cfg.ForGracePeriod, "ruler.for-grace-period", 10*time.Minute, `Minimum duration between alert and restored "for" state. This is maintained only for alerts with configured "for" time greater than grace period.`)
f.DurationVar(&cfg.ForGracePeriod, "ruler.for-grace-period", 10*time.Minute, `Minimum duration between alert and restored "for" state. This is maintained only for alerts with configured "for" time greater than the grace period.`)
f.DurationVar(&cfg.ResendDelay, "ruler.resend-delay", time.Minute, `Minimum amount of time to wait before resending an alert to Alertmanager.`)
f.Var(&cfg.EnabledTenants, "ruler.enabled-tenants", "Comma separated list of tenants whose rules this ruler can evaluate. If specified, only these tenants will be handled by ruler, otherwise this ruler can process rules from all tenants. Subject to sharding.")
f.Var(&cfg.DisabledTenants, "ruler.disabled-tenants", "Comma separated list of tenants whose rules this ruler cannot evaluate. If specified, a ruler that would normally pick the specified tenant(s) for processing will ignore them instead. Subject to sharding.")
f.BoolVar(&cfg.EnableQueryStats, "ruler.query-stats-enabled", false, "Report the wall time for ruler queries to complete as a per user metric and as an info level log message.")
f.BoolVar(&cfg.DisableRuleGroupLabel, "ruler.disable-rule-group-label", false, "Disable the rule_group label on exported metrics")
f.BoolVar(&cfg.DisableRuleGroupLabel, "ruler.disable-rule-group-label", false, "Disable the rule_group label on exported metrics.")
cfg.RingCheckPeriod = 5 * time.Second
}

@@ -60,16 +60,16 @@ func (cfg *RingConfig) RegisterFlags(f *flag.FlagSet) {
// Ring flags
cfg.KVStore.RegisterFlagsWithPrefix("ruler.ring.", "rulers/", f)
f.DurationVar(&cfg.HeartbeatPeriod, "ruler.ring.heartbeat-period", 5*time.Second, "Period at which to heartbeat to the ring. 0 = disabled.")
f.DurationVar(&cfg.HeartbeatTimeout, "ruler.ring.heartbeat-timeout", time.Minute, "The heartbeat timeout after which rulers are considered unhealthy within the ring. 0 = never (timeout disabled).")
f.DurationVar(&cfg.HeartbeatPeriod, "ruler.ring.heartbeat-period", 5*time.Second, "Interval between heartbeats sent to the ring. 0 = disabled.")
f.DurationVar(&cfg.HeartbeatTimeout, "ruler.ring.heartbeat-timeout", time.Minute, "The heartbeat timeout after which ruler ring members are considered unhealthy within the ring. 0 = never (timeout disabled).")
// Instance flags
cfg.InstanceInterfaceNames = netutil.PrivateNetworkInterfacesWithFallback([]string{"eth0", "en0"}, util_log.Logger)
f.Var((*flagext.StringSlice)(&cfg.InstanceInterfaceNames), "ruler.ring.instance-interface-names", "Name of network interface to read address from.")
f.Var((*flagext.StringSlice)(&cfg.InstanceInterfaceNames), "ruler.ring.instance-interface-names", "Name of network interface to read addresses from.")
f.StringVar(&cfg.InstanceAddr, "ruler.ring.instance-addr", "", "IP address to advertise in the ring.")
f.IntVar(&cfg.InstancePort, "ruler.ring.instance-port", 0, "Port to advertise in the ring (defaults to server.grpc-listen-port).")
f.StringVar(&cfg.InstanceID, "ruler.ring.instance-id", hostname, "Instance ID to register in the ring.")
f.IntVar(&cfg.NumTokens, "ruler.ring.num-tokens", 128, "Number of tokens for each ruler.")
f.IntVar(&cfg.NumTokens, "ruler.ring.num-tokens", 128, "The number of tokens the lifecycler will generate and put into the ring if it joined without transferring tokens from another lifecycler.")
}
// ToLifecyclerConfig returns a LifecyclerConfig based on the ruler

@@ -33,12 +33,12 @@ type RuleStoreConfig struct {
Type string `yaml:"type"`
// Object Storage Configs
Azure azure.BlobStorageConfig `yaml:"azure"`
GCS gcp.GCSConfig `yaml:"gcs"`
S3 aws.S3Config `yaml:"s3"`
BOS baidubce.BOSStorageConfig `yaml:"bos"`
Swift openstack.SwiftConfig `yaml:"swift"`
Local local.Config `yaml:"local"`
Azure azure.BlobStorageConfig `yaml:"azure" doc:"description=Configures backend rule storage for Azure."`
GCS gcp.GCSConfig `yaml:"gcs" doc:"description=Configures backend rule storage for GCS."`
S3 aws.S3Config `yaml:"s3" doc:"description=Configures backend rule storage for S3."`
BOS baidubce.BOSStorageConfig `yaml:"bos" doc:"description=Configures backend rule storage for Baidu Object Storage (BOS)."`
Swift openstack.SwiftConfig `yaml:"swift" doc:"description=Configures backend rule storage for Swift."`
Local local.Config `yaml:"local" doc:"description=Configures backend rule storage for a local file system directory."`
mock rulestore.RuleStore `yaml:"-"`
}
@@ -51,7 +51,7 @@ func (cfg *RuleStoreConfig) RegisterFlags(f *flag.FlagSet) {
cfg.Swift.RegisterFlagsWithPrefix("ruler.storage.", f)
cfg.Local.RegisterFlagsWithPrefix("ruler.storage.", f)
cfg.BOS.RegisterFlagsWithPrefix("ruler.storage.", f)
f.StringVar(&cfg.Type, "ruler.storage.type", "", "Method to use for backend rule storage (configdb, azure, gcs, s3, swift, local)")
f.StringVar(&cfg.Type, "ruler.storage.type", "", "Method to use for backend rule storage (configdb, azure, gcs, s3, swift, local, bos)")
}
// Validate config and returns error on failure

@@ -21,7 +21,7 @@ type Config struct {
// we cannot define this in the WAL config since it creates an import cycle
WALCleaner cleaner.Config `yaml:"wal_cleaner,omitempty"`
RemoteWrite RemoteWriteConfig `yaml:"remote_write,omitempty"`
RemoteWrite RemoteWriteConfig `yaml:"remote_write,omitempty" doc:"description=Remote-write configuration to send rule samples to a Prometheus remote-write endpoint."`
}
func (c *Config) RegisterFlags(f *flag.FlagSet) {
@@ -31,10 +31,10 @@ func (c *Config) RegisterFlags(f *flag.FlagSet) {
c.WALCleaner.RegisterFlags(f)
// TODO(owen-d, 3.0.0): remove deprecated experimental prefix in Cortex if they'll accept it.
f.BoolVar(&c.Config.EnableAPI, "ruler.enable-api", true, "Enable the ruler api")
f.BoolVar(&c.Config.EnableAPI, "ruler.enable-api", true, "Enable the ruler api.")
}
// Validate overrides the embedded cortex variant which expects a cortex limits struct. Instead copy the relevant bits over.
// Validate overrides the embedded cortex variant which expects a cortex limits struct. Instead, copy the relevant bits over.
func (c *Config) Validate() error {
if err := c.StoreConfig.Validate(); err != nil {
return fmt.Errorf("invalid ruler store config: %w", err)
@@ -48,8 +48,8 @@ func (c *Config) Validate() error {
}
type RemoteWriteConfig struct {
Client *config.RemoteWriteConfig `yaml:"client,omitempty"`
Clients map[string]config.RemoteWriteConfig `yaml:"clients,omitempty"`
Client *config.RemoteWriteConfig `yaml:"client,omitempty" doc:"deprecated|description=Use 'clients' instead. Configure remote write client."`
Clients map[string]config.RemoteWriteConfig `yaml:"clients,omitempty" doc:"description=Configure remote write clients. A map with remote client id as key."`
Enabled bool `yaml:"enabled"`
ConfigRefreshPeriod time.Duration `yaml:"config_refresh_period"`
}
@@ -104,7 +104,7 @@ func (c *RemoteWriteConfig) Clone() (*RemoteWriteConfig, error) {
// RegisterFlags adds the flags required to config this to the given FlagSet.
func (c *RemoteWriteConfig) RegisterFlags(f *flag.FlagSet) {
f.BoolVar(&c.Enabled, "ruler.remote-write.enabled", false, "Remote-write recording rule samples to Prometheus-compatible remote-write receiver.")
f.BoolVar(&c.Enabled, "ruler.remote-write.enabled", false, "Enable remote-write functionality.")
f.DurationVar(&c.ConfigRefreshPeriod, "ruler.remote-write.config-refresh-period", 10*time.Second, "Minimum period to wait between refreshing remote-write reconfigurations. This should be greater than or equivalent to -limits.per-user-override-period.")
if c.Clients == nil {

@@ -20,7 +20,7 @@ type AlertManagerConfig struct {
// Enables the ruler notifier to use the Alertmananger V2 API.
AlertmanangerEnableV2API bool `yaml:"enable_alertmanager_v2"`
// Configuration for alert relabeling.
AlertRelabelConfigs []*relabel.Config `yaml:"alert_relabel_configs,omitempty"`
AlertRelabelConfigs []*relabel.Config `yaml:"alert_relabel_configs,omitempty" doc:"description=List of alert relabel configs."`
// Capacity of the queue for notifications to be sent to the Alertmanager.
NotificationQueueCapacity int `yaml:"notification_queue_capacity"`
// HTTP timeout duration when sending notifications to the Alertmanager.

@@ -1,6 +1,7 @@
// This directory was copied and adapted from https://github.com/grafana/agent/tree/main/pkg/metrics.
// We cannot vendor the agent in since the agent vendors loki in, which would cause a cyclic dependency.
// NOTE: many changes have been made to the original code for our use-case.
package cleaner
import (

@@ -1,6 +1,7 @@
// This directory was copied and adapted from https://github.com/grafana/agent/tree/main/pkg/metrics.
// We cannot vendor the agent in since the agent vendors loki in, which would cause a cyclic dependency.
// NOTE: many changes have been made to the original code for our use-case.
package cleaner
import (
@@ -14,7 +15,7 @@ type Config struct {
Period time.Duration `yaml:"period,omitempty"`
}
func (c Config) RegisterFlags(f *flag.FlagSet) {
func (c *Config) RegisterFlags(f *flag.FlagSet) {
f.DurationVar(&c.MinAge, "ruler.wal-cleaner.min-age", DefaultCleanupAge, "The minimum age of a WAL to consider for cleaning.")
f.DurationVar(&c.Period, "ruler.wal-cleaer.period", DefaultCleanupPeriod, "How often to run the WAL cleaner.")
f.DurationVar(&c.Period, "ruler.wal-cleaer.period", DefaultCleanupPeriod, "How often to run the WAL cleaner. 0 = disabled.")
}

@@ -53,9 +53,9 @@ var (
// Config is a specific agent that runs within the overall Prometheus
// agent. It has its own set of scrape_configs and remote_write rules.
type Config struct {
Tenant string
Name string
RemoteWrite []*config.RemoteWriteConfig
Tenant string `doc:"hidden"`
Name string `doc:"hidden"`
RemoteWrite []*config.RemoteWriteConfig `doc:"hidden"`
Dir string `yaml:"dir"`
@@ -66,7 +66,7 @@ type Config struct {
MinAge time.Duration `yaml:"min_age,omitempty"`
MaxAge time.Duration `yaml:"max_age,omitempty"`
RemoteFlushDeadline time.Duration `yaml:"remote_flush_deadline,omitempty"`
RemoteFlushDeadline time.Duration `yaml:"remote_flush_deadline,omitempty" doc:"hidden"`
}
// UnmarshalYAML implements yaml.Unmarshaler.
@@ -143,8 +143,8 @@ func (c *Config) Clone() (Config, error) {
}
func (c *Config) RegisterFlags(f *flag.FlagSet) {
f.StringVar(&c.Dir, "ruler.wal.dir", DefaultConfig.Dir, "Directory to store the WAL and/or recover from WAL.")
f.DurationVar(&c.TruncateFrequency, "ruler.wal.truncate-frequency", DefaultConfig.TruncateFrequency, "How often to run the WAL truncation.")
f.StringVar(&c.Dir, "ruler.wal.dir", DefaultConfig.Dir, "The directory in which to write tenant WAL files. Each tenant will have its own directory one level below this directory.")
f.DurationVar(&c.TruncateFrequency, "ruler.wal.truncate-frequency", DefaultConfig.TruncateFrequency, "Frequency with which to run the WAL truncation process.")
f.DurationVar(&c.MinAge, "ruler.wal.min-age", DefaultConfig.MinAge, "Minimum age that samples must exist in the WAL before being truncated.")
f.DurationVar(&c.MaxAge, "ruler.wal.max-age", DefaultConfig.MaxAge, "Maximum age that samples must exist in the WAL before being truncated.")
}

@@ -124,14 +124,14 @@ type Config struct {
GRPCClientConfig grpcclient.Config `yaml:"grpc_client_config" doc:"description=This configures the gRPC client used to report errors back to the query-frontend."`
// Schedulers ring
UseSchedulerRing bool `yaml:"use_scheduler_ring"`
SchedulerRing lokiutil.RingConfig `yaml:"scheduler_ring,omitempty"`
SchedulerRing lokiutil.RingConfig `yaml:"scheduler_ring,omitempty" doc:"description=The hash ring configuration. This option is required only if use_scheduler_ring is true."`
}
func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
f.IntVar(&cfg.MaxOutstandingPerTenant, "query-scheduler.max-outstanding-requests-per-tenant", 100, "Maximum number of outstanding requests per tenant per query scheduler. In-flight requests above this limit will fail with HTTP response status code 429.")
f.IntVar(&cfg.MaxOutstandingPerTenant, "query-scheduler.max-outstanding-requests-per-tenant", 100, "Maximum number of outstanding requests per tenant per query-scheduler. In-flight requests above this limit will fail with HTTP response status code 429.")
f.DurationVar(&cfg.QuerierForgetDelay, "query-scheduler.querier-forget-delay", 0, "If a querier disconnects without sending notification about graceful shutdown, the query-scheduler will keep the querier in the tenant's shard until the forget delay has passed. This feature is useful to reduce the blast radius when shuffle-sharding is enabled.")
cfg.GRPCClientConfig.RegisterFlagsWithPrefix("query-scheduler.grpc-client-config", f)
f.BoolVar(&cfg.UseSchedulerRing, "query-scheduler.use-scheduler-ring", false, "Set to true to have the query scheduler create a ring and the frontend and frontend_worker use this ring to get the addresses of the query schedulers. If frontend_address and scheduler_address are not present in the config this value will be toggle by Loki to true")
f.BoolVar(&cfg.UseSchedulerRing, "query-scheduler.use-scheduler-ring", false, "Set to true to have the query schedulers create and place themselves in a ring. If no frontend_address or scheduler_address are present anywhere else in the configuration, Loki will toggle this value to true.")
cfg.SchedulerRing.RegisterFlagsWithPrefix("query-scheduler.", "collectors/", f)
}

@@ -76,7 +76,7 @@ type S3Config struct {
HTTPConfig HTTPConfig `yaml:"http_config"`
SignatureVersion string `yaml:"signature_version"`
SSEConfig bucket_s3.SSEConfig `yaml:"sse"`
BackoffConfig backoff.Config `yaml:"backoff_config"`
BackoffConfig backoff.Config `yaml:"backoff_config" doc:"description=Configures back off when S3 get Object."`
Inject InjectRequestMiddleware `yaml:"-"`
}

@@ -51,10 +51,10 @@ func (cfg *BOSStorageConfig) RegisterFlags(f *flag.FlagSet) {
// RegisterFlagsWithPrefix adds the flags required to config this to the given FlagSet
func (cfg *BOSStorageConfig) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
f.StringVar(&cfg.BucketName, prefix+"baidubce.bucket-name", "", "Name of BOS bucket.")
f.StringVar(&cfg.Endpoint, prefix+"baidubce.endpoint", DefaultEndpoint, "BOS endpoint to connect to.")
f.StringVar(&cfg.AccessKeyID, prefix+"baidubce.access-key-id", "", "Baidu Cloud Engine (BCE) Access Key ID.")
f.Var(&cfg.SecretAccessKey, prefix+"baidubce.secret-access-key", "Baidu Cloud Engine (BCE) Secret Access Key.")
f.StringVar(&cfg.BucketName, prefix+"bos.bucket-name", "", "Name of BOS bucket.")
f.StringVar(&cfg.Endpoint, prefix+"bos.endpoint", DefaultEndpoint, "BOS endpoint to connect to.")
f.StringVar(&cfg.AccessKeyID, prefix+"bos.access-key-id", "", "Baidu Cloud Engine (BCE) Access Key ID.")
f.Var(&cfg.SecretAccessKey, prefix+"bos.secret-access-key", "Baidu Cloud Engine (BCE) Secret Access Key.")
}
type BOSObjectStorage struct {

@@ -53,9 +53,9 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
// RegisterFlagsWithPrefix registers flags with prefix.
func (cfg *Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
f.IntVar(&cfg.UpTo, prefix+"hedge-requests-up-to", 2, "The maximun of hedge requests allowed.")
f.IntVar(&cfg.UpTo, prefix+"hedge-requests-up-to", 2, "The maximum of hedge requests allowed.")
f.DurationVar(&cfg.At, prefix+"hedge-requests-at", 0, "If set to a non-zero value a second request will be issued at the provided duration. Default is 0 (disabled)")
f.IntVar(&cfg.MaxPerSecond, prefix+"hedge-max-per-second", 5, "The maximun of hedge requests allowed per seconds.")
f.IntVar(&cfg.MaxPerSecond, prefix+"hedge-max-per-second", 5, "The maximum of hedge requests allowed per seconds.")
}
// Client returns a hedged http client.

@@ -88,9 +88,12 @@ func (t TableRanges) ConfigForTableNumber(tableNumber int64) *PeriodConfig {
// PeriodConfig defines the schema and tables to use for a period of time
type PeriodConfig struct {
From DayTime `yaml:"from"` // used when working with config
IndexType string `yaml:"store"` // type of index client to use.
ObjectType string `yaml:"object_store"` // type of object client to use; if omitted, defaults to store.
// used when working with config
From DayTime `yaml:"from" doc:"description=The date of the first day that index buckets should be created. Use a date in the past if this is your only period_config, otherwise use a date when you want the schema to switch over. In YYYY-MM-DD format, for example: 2018-04-15."`
// type of index client to use.
IndexType string `yaml:"store"`
// type of object client to use; if omitted, defaults to store.
ObjectType string `yaml:"object_store"`
Schema string `yaml:"schema"`
IndexTables PeriodicTableConfig `yaml:"index"`
ChunkTables PeriodicTableConfig `yaml:"chunks"`

@@ -57,14 +57,14 @@ type StoreLimits interface {
// Config chooses which storage client to use.
type Config struct {
AWSStorageConfig aws.StorageConfig `yaml:"aws"`
AWSStorageConfig aws.StorageConfig `yaml:"aws" doc:"description=Configures storing chunks in AWS. Required options only required when aws is present."`
AzureStorageConfig azure.BlobStorageConfig `yaml:"azure"`
BOSStorageConfig baidubce.BOSStorageConfig `yaml:"bos"`
GCPStorageConfig gcp.Config `yaml:"bigtable"`
GCSConfig gcp.GCSConfig `yaml:"gcs"`
CassandraStorageConfig cassandra.Config `yaml:"cassandra"`
BoltDBConfig local.BoltDBConfig `yaml:"boltdb"`
FSConfig local.FSConfig `yaml:"filesystem"`
GCPStorageConfig gcp.Config `yaml:"bigtable" doc:"description=Configures storing indexes in Bigtable. Required fields only required when bigtable is defined in config."`
GCSConfig gcp.GCSConfig `yaml:"gcs" doc:"description=Configures storing chunks in GCS. Required fields only required when gcs is defined in config."`
CassandraStorageConfig cassandra.Config `yaml:"cassandra" doc:"description=Configures storing chunks and/or the index in Cassandra."`
BoltDBConfig local.BoltDBConfig `yaml:"boltdb" doc:"description=Configures storing index in BoltDB. Required fields only required when boltdb is present in the configuration."`
FSConfig local.FSConfig `yaml:"filesystem" doc:"description=Configures storing the chunks on the local file system. Required fields only required when filesystem is present in the configuration."`
Swift openstack.SwiftConfig `yaml:"swift"`
GrpcConfig grpc.Config `yaml:"grpc_store"`
Hedging hedging.Config `yaml:"hedging"`
@@ -76,7 +76,7 @@ type Config struct {
MaxParallelGetChunk int `yaml:"max_parallel_get_chunk"`
MaxChunkBatchSize int `yaml:"max_chunk_batch_size"`
BoltDBShipperConfig shipper.Config `yaml:"boltdb_shipper"`
BoltDBShipperConfig shipper.Config `yaml:"boltdb_shipper" doc:"description=Configures storing index in an Object Store (GCS/S3/Azure/Swift/Filesystem) in the form of boltdb files. Required fields only required when boltdb-shipper is defined in config."`
TSDBShipperConfig indexshipper.Config `yaml:"tsdb_shipper"`
// Config for using AsyncStore when using async index stores like `boltdb-shipper`.

@ -86,20 +86,20 @@ type Config struct {
DeleteMaxInterval time.Duration `yaml:"delete_max_interval"`
MaxCompactionParallelism int `yaml:"max_compaction_parallelism"`
UploadParallelism int `yaml:"upload_parallelism"`
CompactorRing util.RingConfig `yaml:"compactor_ring,omitempty"`
RunOnce bool `yaml:"_"`
CompactorRing util.RingConfig `yaml:"compactor_ring,omitempty" doc:"description=The hash ring configuration used by compactors to elect a single instance for running compactions. The CLI flags prefix for this block config is: boltdb.shipper.compactor.ring"`
RunOnce bool `yaml:"_" doc:"hidden"`
TablesToCompact int `yaml:"tables_to_compact"`
SkipLatestNTables int `yaml:"skip_latest_n_tables"`
// Deprecated
DeletionMode string `yaml:"deletion_mode"`
DeletionMode string `yaml:"deletion_mode" doc:"deprecated|description=Use deletion_mode per tenant configuration instead."`
}
// RegisterFlags registers flags.
func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
f.StringVar(&cfg.WorkingDirectory, "boltdb.shipper.compactor.working-directory", "", "Directory where files can be downloaded for compaction.")
f.StringVar(&cfg.SharedStoreType, "boltdb.shipper.compactor.shared-store", "", "Shared store used for storing boltdb files. Supported types: gcs, s3, azure, swift, filesystem")
f.StringVar(&cfg.SharedStoreKeyPrefix, "boltdb.shipper.compactor.shared-store.key-prefix", "index/", "Prefix to add to Object Keys in Shared store. Path separator(if any) should always be a '/'. Prefix should never start with a separator but should always end with it.")
f.StringVar(&cfg.SharedStoreType, "boltdb.shipper.compactor.shared-store", "", "The shared store used for storing boltdb files. Supported types: gcs, s3, azure, swift, filesystem, bos.")
f.StringVar(&cfg.SharedStoreKeyPrefix, "boltdb.shipper.compactor.shared-store.key-prefix", "index/", "Prefix to add to object keys in shared store. Path separator (if any) should always be a '/'. Prefix should never start with a separator but should always end with it.")
f.DurationVar(&cfg.CompactionInterval, "boltdb.shipper.compactor.compaction-interval", 10*time.Minute, "Interval at which to re-run the compaction operation.")
f.DurationVar(&cfg.ApplyRetentionInterval, "boltdb.shipper.compactor.apply-retention-interval", 0, "Interval at which to apply/enforce retention. 0 means run at same interval as compaction. If non-zero, it should always be a multiple of compaction interval.")
f.DurationVar(&cfg.RetentionDeleteDelay, "boltdb.shipper.compactor.retention-delete-delay", 2*time.Hour, "Delay after which chunks will be fully deleted during retention.")
@ -110,15 +110,15 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
f.DurationVar(&cfg.DeleteMaxInterval, "boltdb.shipper.compactor.delete-max-interval", 0, "Constrain the size of any single delete request. When a delete request > delete_max_interval is input, the request is sharded into smaller requests of no more than delete_max_interval.")
f.DurationVar(&cfg.RetentionTableTimeout, "boltdb.shipper.compactor.retention-table-timeout", 0, "The maximum amount of time to spend running retention and deletion on any given table in the index.")
f.IntVar(&cfg.MaxCompactionParallelism, "boltdb.shipper.compactor.max-compaction-parallelism", 1, "Maximum number of tables to compact in parallel. While increasing this value, please make sure compactor has enough disk space allocated to be able to store and compact as many tables.")
f.IntVar(&cfg.UploadParallelism, "boltdb.shipper.compactor.upload-parallelism", 10, "Number of upload/remove operations to execute in parallel when finalizing a compaction. ")
f.IntVar(&cfg.UploadParallelism, "boltdb.shipper.compactor.upload-parallelism", 10, "Number of upload/remove operations to execute in parallel when finalizing a compaction. NOTE: This setting is per compaction operation, which can be executed in parallel. The upper bound on the number of concurrent uploads is upload_parallelism * max_compaction_parallelism.")
f.BoolVar(&cfg.RunOnce, "boltdb.shipper.compactor.run-once", false, "Run the compactor one time to clean up and compact index files only (no retention applied).")
// Deprecated
flagext.DeprecatedFlag(f, "boltdb.shipper.compactor.deletion-mode", "Deprecated. This has been moved to the deletion_mode per tenant configuration.", util_log.Logger)
cfg.CompactorRing.RegisterFlagsWithPrefix("boltdb.shipper.compactor.", "collectors/", f)
f.IntVar(&cfg.TablesToCompact, "boltdb.shipper.compactor.tables-to-compact", 0, "The number of most recent tables to compact in a single run. Default: all")
f.IntVar(&cfg.SkipLatestNTables, "boltdb.shipper.compactor.skip-latest-n-tables", 0, "Skip compacting latest N tables")
f.IntVar(&cfg.TablesToCompact, "boltdb.shipper.compactor.tables-to-compact", 0, "Number of tables that compactor will try to compact. Newer tables are chosen when this is less than the number of tables available.")
f.IntVar(&cfg.SkipLatestNTables, "boltdb.shipper.compactor.skip-latest-n-tables", 0, "Do not compact N latest tables. Together with -boltdb.shipper.compactor.run-once and -boltdb.shipper.compactor.tables-to-compact, this is useful when clearing compactor backlogs.")
}
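// Illustrative sketch (not part of this change): parsing the flag combination
// mentioned above for clearing a compactor backlog; the numeric values are
// hypothetical.
func exampleBacklogFlags() (*Config, error) {
	fs := flag.NewFlagSet("compactor", flag.ContinueOnError)
	cfg := &Config{}
	cfg.RegisterFlags(fs)
	err := fs.Parse([]string{
		"-boltdb.shipper.compactor.run-once=true",
		"-boltdb.shipper.compactor.tables-to-compact=20",
		"-boltdb.shipper.compactor.skip-latest-n-tables=2",
	})
	return cfg, err
}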

@ -63,7 +63,7 @@ type RingCfg struct {
// RegisterFlags registers all Index Gateway flags related to its ring, applying a proper store prefix to avoid conflicts.
func (cfg *RingCfg) RegisterFlags(prefix, storePrefix string, f *flag.FlagSet) {
cfg.RegisterFlagsWithPrefix(prefix, storePrefix, f)
f.IntVar(&cfg.ReplicationFactor, "replication-factor", 3, "how many index gateway instances are assigned to each tenant")
f.IntVar(&cfg.ReplicationFactor, "replication-factor", 3, "How many index gateway instances are assigned to each tenant.")
}
// Config configures an Index Gateway server.
@ -75,11 +75,11 @@ type Config struct {
//
// In case it isn't explicitly set, it follows the same behavior of the other rings (ex: using the common configuration
// section and the ingester configuration by default).
Ring RingCfg `yaml:"ring,omitempty"`
Ring RingCfg `yaml:"ring,omitempty" doc:"description=Defines the ring to be used by the index gateway servers and clients in case the servers are configured to run in 'ring' mode. In case this isn't configured, this block supports inheriting configuration from the common ring section."`
}
// RegisterFlags registers all IndexGatewayClientConfig flags and all the flags of its subconfigs but with a prefix (ex: shipper).
func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
cfg.Ring.RegisterFlags("index-gateway.", "collectors/", f)
f.StringVar((*string)(&cfg.Mode), "index-gateway.mode", SimpleMode.String(), "mode in which the index gateway client will be running")
f.StringVar((*string)(&cfg.Mode), "index-gateway.mode", SimpleMode.String(), "Defines in which mode the index gateway server will operate (defaults to 'simple'). It supports two modes:\n- 'simple': an index gateway server instance is responsible for handling, storing and returning requests for all indices for all tenants.\n- 'ring': an index gateway server instance is responsible for a subset of tenants instead of all tenants.")
}

@ -29,7 +29,7 @@ type RingConfig struct {
ZoneAwarenessEnabled bool `yaml:"zone_awareness_enabled"`
// Instance details
InstanceID string `yaml:"instance_id"`
InstanceID string `yaml:"instance_id" doc:"default=<hostname>"`
InstanceInterfaceNames []string `yaml:"instance_interface_names" doc:"default=[<private network interfaces>]"`
InstancePort int `yaml:"instance_port"`
InstanceAddr string `yaml:"instance_addr"`

@ -88,7 +88,7 @@ type Limits struct {
RulerTenantShardSize int `yaml:"ruler_tenant_shard_size" json:"ruler_tenant_shard_size"`
RulerMaxRulesPerRuleGroup int `yaml:"ruler_max_rules_per_rule_group" json:"ruler_max_rules_per_rule_group"`
RulerMaxRuleGroupsPerTenant int `yaml:"ruler_max_rule_groups_per_tenant" json:"ruler_max_rule_groups_per_tenant"`
RulerAlertManagerConfig *config.AlertManagerConfig `yaml:"ruler_alertmanager_config" json:"ruler_alertmanager_config"`
RulerAlertManagerConfig *config.AlertManagerConfig `yaml:"ruler_alertmanager_config" json:"ruler_alertmanager_config" doc:"hidden"`
// Store-gateway.
StoreGatewayTenantShardSize int `yaml:"store_gateway_tenant_shard_size" json:"store_gateway_tenant_shard_size"`

@ -103,57 +103,57 @@ type Limits struct {
RulerEvaluationDelay model.Duration `yaml:"ruler_evaluation_delay_duration" json:"ruler_evaluation_delay_duration"`
RulerMaxRulesPerRuleGroup int `yaml:"ruler_max_rules_per_rule_group" json:"ruler_max_rules_per_rule_group"`
RulerMaxRuleGroupsPerTenant int `yaml:"ruler_max_rule_groups_per_tenant" json:"ruler_max_rule_groups_per_tenant"`
RulerAlertManagerConfig *ruler_config.AlertManagerConfig `yaml:"ruler_alertmanager_config" json:"ruler_alertmanager_config"`
RulerAlertManagerConfig *ruler_config.AlertManagerConfig `yaml:"ruler_alertmanager_config" json:"ruler_alertmanager_config" doc:"hidden"`
// TODO(dannyk): add HTTP client overrides (basic auth / tls config, etc)
// Ruler remote-write limits.
// this field is the inversion of the general remote_write.enabled because the zero value of a boolean is false,
// and if it were ruler_remote_write_enabled, it would be impossible to know if the value was explicitly set or default
RulerRemoteWriteDisabled bool `yaml:"ruler_remote_write_disabled" json:"ruler_remote_write_disabled"`
RulerRemoteWriteDisabled bool `yaml:"ruler_remote_write_disabled" json:"ruler_remote_write_disabled" doc:"description=Disable recording rules remote-write."`
// deprecated use RulerRemoteWriteConfig instead.
RulerRemoteWriteURL string `yaml:"ruler_remote_write_url" json:"ruler_remote_write_url"`
RulerRemoteWriteURL string `yaml:"ruler_remote_write_url" json:"ruler_remote_write_url" doc:"deprecated|description=Use 'ruler_remote_write_config' instead. The URL of the endpoint to send samples to."`
// deprecated use RulerRemoteWriteConfig instead
RulerRemoteWriteTimeout time.Duration `yaml:"ruler_remote_write_timeout" json:"ruler_remote_write_timeout"`
RulerRemoteWriteTimeout time.Duration `yaml:"ruler_remote_write_timeout" json:"ruler_remote_write_timeout" doc:"deprecated|description=Use 'ruler_remote_write_config' instead. Timeout for requests to the remote write endpoint."`
// deprecated use RulerRemoteWriteConfig instead
RulerRemoteWriteHeaders OverwriteMarshalingStringMap `yaml:"ruler_remote_write_headers" json:"ruler_remote_write_headers"`
RulerRemoteWriteHeaders OverwriteMarshalingStringMap `yaml:"ruler_remote_write_headers" json:"ruler_remote_write_headers" doc:"deprecated|description=Use 'ruler_remote_write_config' instead. Custom HTTP headers to be sent along with each remote write request. Be aware that headers that are set by Loki itself can't be overwritten."`
// deprecated use RulerRemoteWriteConfig instead
RulerRemoteWriteRelabelConfigs []*util.RelabelConfig `yaml:"ruler_remote_write_relabel_configs,omitempty" json:"ruler_remote_write_relabel_configs,omitempty"`
RulerRemoteWriteRelabelConfigs []*util.RelabelConfig `yaml:"ruler_remote_write_relabel_configs,omitempty" json:"ruler_remote_write_relabel_configs,omitempty" doc:"deprecated|description=Use 'ruler_remote_write_config' instead. List of remote write relabel configurations."`
// deprecated use RulerRemoteWriteConfig instead
RulerRemoteWriteQueueCapacity int `yaml:"ruler_remote_write_queue_capacity" json:"ruler_remote_write_queue_capacity"`
RulerRemoteWriteQueueCapacity int `yaml:"ruler_remote_write_queue_capacity" json:"ruler_remote_write_queue_capacity" doc:"deprecated|description=Use 'ruler_remote_write_config' instead. Number of samples to buffer per shard before we block reading of more samples from the WAL. It is recommended to have enough capacity in each shard to buffer several requests to keep throughput up while processing occasional slow remote requests."`
// deprecated use RulerRemoteWriteConfig instead
RulerRemoteWriteQueueMinShards int `yaml:"ruler_remote_write_queue_min_shards" json:"ruler_remote_write_queue_min_shards"`
RulerRemoteWriteQueueMinShards int `yaml:"ruler_remote_write_queue_min_shards" json:"ruler_remote_write_queue_min_shards" doc:"deprecated|description=Use 'ruler_remote_write_config' instead. Minimum number of shards, i.e. amount of concurrency."`
// deprecated use RulerRemoteWriteConfig instead
RulerRemoteWriteQueueMaxShards int `yaml:"ruler_remote_write_queue_max_shards" json:"ruler_remote_write_queue_max_shards"`
RulerRemoteWriteQueueMaxShards int `yaml:"ruler_remote_write_queue_max_shards" json:"ruler_remote_write_queue_max_shards" doc:"deprecated|description=Use 'ruler_remote_write_config' instead. Maximum number of shards, i.e. amount of concurrency."`
// deprecated use RulerRemoteWriteConfig instead
RulerRemoteWriteQueueMaxSamplesPerSend int `yaml:"ruler_remote_write_queue_max_samples_per_send" json:"ruler_remote_write_queue_max_samples_per_send"`
RulerRemoteWriteQueueMaxSamplesPerSend int `yaml:"ruler_remote_write_queue_max_samples_per_send" json:"ruler_remote_write_queue_max_samples_per_send" doc:"deprecated|description=Use 'ruler_remote_write_config' instead. Maximum number of samples per send."`
// deprecated use RulerRemoteWriteConfig instead
RulerRemoteWriteQueueBatchSendDeadline time.Duration `yaml:"ruler_remote_write_queue_batch_send_deadline" json:"ruler_remote_write_queue_batch_send_deadline"`
RulerRemoteWriteQueueBatchSendDeadline time.Duration `yaml:"ruler_remote_write_queue_batch_send_deadline" json:"ruler_remote_write_queue_batch_send_deadline" doc:"deprecated|description=Use 'ruler_remote_write_config' instead. Maximum time a sample will wait in buffer."`
// deprecated use RulerRemoteWriteConfig instead
RulerRemoteWriteQueueMinBackoff time.Duration `yaml:"ruler_remote_write_queue_min_backoff" json:"ruler_remote_write_queue_min_backoff"`
RulerRemoteWriteQueueMinBackoff time.Duration `yaml:"ruler_remote_write_queue_min_backoff" json:"ruler_remote_write_queue_min_backoff" doc:"deprecated|description=Use 'ruler_remote_write_config' instead. Initial retry delay. Gets doubled for every retry."`
// deprecated use RulerRemoteWriteConfig instead
RulerRemoteWriteQueueMaxBackoff time.Duration `yaml:"ruler_remote_write_queue_max_backoff" json:"ruler_remote_write_queue_max_backoff"`
RulerRemoteWriteQueueMaxBackoff time.Duration `yaml:"ruler_remote_write_queue_max_backoff" json:"ruler_remote_write_queue_max_backoff" doc:"deprecated|description=Use 'ruler_remote_write_config' instead. Maximum retry delay."`
// deprecated use RulerRemoteWriteConfig instead
RulerRemoteWriteQueueRetryOnRateLimit bool `yaml:"ruler_remote_write_queue_retry_on_ratelimit" json:"ruler_remote_write_queue_retry_on_ratelimit"`
RulerRemoteWriteQueueRetryOnRateLimit bool `yaml:"ruler_remote_write_queue_retry_on_ratelimit" json:"ruler_remote_write_queue_retry_on_ratelimit" doc:"deprecated|description=Use 'ruler_remote_write_config' instead. Retry upon receiving a 429 status code from the remote-write storage. This is experimental and might change in the future."`
// deprecated use RulerRemoteWriteConfig instead
RulerRemoteWriteSigV4Config *sigv4.SigV4Config `yaml:"ruler_remote_write_sigv4_config" json:"ruler_remote_write_sigv4_config"`
RulerRemoteWriteSigV4Config *sigv4.SigV4Config `yaml:"ruler_remote_write_sigv4_config" json:"ruler_remote_write_sigv4_config" doc:"deprecated|description=Use 'ruler_remote_write_config' instead. Configures AWS's Signature Verification 4 signing process to sign every remote write request."`
RulerRemoteWriteConfig map[string]config.RemoteWriteConfig `yaml:"ruler_remote_write_config,omitempty" json:"ruler_remote_write_config,omitempty"`
RulerRemoteWriteConfig map[string]config.RemoteWriteConfig `yaml:"ruler_remote_write_config,omitempty" json:"ruler_remote_write_config,omitempty" doc:"description=Configures global and per-tenant limits for remote write clients. A map with remote client id as key."`
// Global and per tenant deletion mode
DeletionMode string `yaml:"deletion_mode" json:"deletion_mode"`
// Global and per tenant retention
RetentionPeriod model.Duration `yaml:"retention_period" json:"retention_period"`
StreamRetention []StreamRetention `yaml:"retention_stream,omitempty" json:"retention_stream,omitempty"`
StreamRetention []StreamRetention `yaml:"retention_stream,omitempty" json:"retention_stream,omitempty" doc:"description=Per-stream retention to apply, if the retention is enabled on the compactor side.\nExample:\n retention_stream:\n - selector: '{namespace=\"dev\"}'\n priority: 1\n period: 24h\n - selector: '{container=\"nginx\"}'\n priority: 1\n period: 744h\nSelector is a Prometheus label matcher that will apply the 'period' retention only if the stream is matching. In case multiple streams are matching, the highest priority will be picked. If no rule is matched, the 'retention_period' is used."`
// Config for overrides, convenient if it goes here.
PerTenantOverrideConfig string `yaml:"per_tenant_override_config" json:"per_tenant_override_config"`
PerTenantOverridePeriod model.Duration `yaml:"per_tenant_override_period" json:"per_tenant_override_period"`
// Deprecated
CompactorDeletionEnabled bool `yaml:"allow_deletes" json:"allow_deletes"`
CompactorDeletionEnabled bool `yaml:"allow_deletes" json:"allow_deletes" doc:"deprecated|description=Use deletion_mode per tenant configuration instead."`
ShardStreams *shardstreams.Config `yaml:"shard_streams" json:"shard_streams"`
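// Illustrative sketch (not part of this change): decoding the retention_stream
// shape documented above; the values are hypothetical and assume a yaml
// package (e.g. gopkg.in/yaml.v2) is imported as yaml.
func exampleStreamRetention() ([]StreamRetention, error) {
	var rs []StreamRetention
	err := yaml.Unmarshal([]byte(`
- selector: '{namespace="dev"}'
  priority: 1
  period: 24h
- selector: '{container="nginx"}'
  priority: 1
  period: 744h
`), &rs)
	return rs, err
}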
@ -169,51 +169,51 @@ type StreamRetention struct {
// RegisterFlags adds the flags required to config this to the given FlagSet
func (l *Limits) RegisterFlags(f *flag.FlagSet) {
f.StringVar(&l.IngestionRateStrategy, "distributor.ingestion-rate-limit-strategy", "global", "Whether the ingestion rate limit should be applied individually to each distributor instance (local), or evenly shared across the cluster (global).")
f.StringVar(&l.IngestionRateStrategy, "distributor.ingestion-rate-limit-strategy", "global", "Whether the ingestion rate limit should be applied individually to each distributor instance (local), or evenly shared across the cluster (global). The ingestion rate strategy cannot be overridden on a per-tenant basis.\n- local: enforces the limit on a per distributor basis. The actual effective rate limit will be N times higher, where N is the number of distributor replicas.\n- global: enforces the limit globally, configuring a per-distributor local rate limiter as 'ingestion_rate / N', where N is the number of distributor replicas (it's automatically adjusted if the number of replicas changes). The global strategy requires the distributors to form their own ring, which is used to keep track of the current number of healthy distributor replicas.")
f.Float64Var(&l.IngestionRateMB, "distributor.ingestion-rate-limit-mb", 4, "Per-user ingestion rate limit in sample size per second. Units in MB.")
f.Float64Var(&l.IngestionBurstSizeMB, "distributor.ingestion-burst-size-mb", 6, "Per-user allowed ingestion burst size (in sample size). Units in MB.")
f.Var(&l.MaxLineSize, "distributor.max-line-size", "maximum line length allowed, i.e. 100mb. Default (0) means unlimited.")
f.BoolVar(&l.MaxLineSizeTruncate, "distributor.max-line-size-truncate", false, "Whether to truncate lines that exceed max_line_size")
f.IntVar(&l.MaxLabelNameLength, "validation.max-length-label-name", 1024, "Maximum length accepted for label names")
f.IntVar(&l.MaxLabelValueLength, "validation.max-length-label-value", 2048, "Maximum length accepted for label value. This setting also applies to the metric name")
f.Float64Var(&l.IngestionBurstSizeMB, "distributor.ingestion-burst-size-mb", 6, "Per-user allowed ingestion burst size (in sample size). Units in MB. The burst size refers to the per-distributor local rate limiter even in the case of the 'global' strategy, and should be set at least to the maximum logs size expected in a single push request.")
f.Var(&l.MaxLineSize, "distributor.max-line-size", "Maximum line size on ingestion path. Example: 256kb. There is no limit when unset or set to 0.")
f.BoolVar(&l.MaxLineSizeTruncate, "distributor.max-line-size-truncate", false, "Whether to truncate lines that exceed max_line_size.")
f.IntVar(&l.MaxLabelNameLength, "validation.max-length-label-name", 1024, "Maximum length accepted for label names.")
f.IntVar(&l.MaxLabelValueLength, "validation.max-length-label-value", 2048, "Maximum length accepted for label value. This setting also applies to the metric name.")
f.IntVar(&l.MaxLabelNamesPerSeries, "validation.max-label-names-per-series", 30, "Maximum number of label names per series.")
f.BoolVar(&l.RejectOldSamples, "validation.reject-old-samples", true, "Reject old samples.")
f.BoolVar(&l.IncrementDuplicateTimestamp, "validation.increment-duplicate-timestamps", false, "Increment the timestamp of a log line by one nanosecond in the future from a previous entry for the same stream with the same timestamp; guarantees sort order at query time.")
f.BoolVar(&l.RejectOldSamples, "validation.reject-old-samples", true, "Whether or not old samples will be rejected.")
f.BoolVar(&l.IncrementDuplicateTimestamp, "validation.increment-duplicate-timestamps", false, "Alter the log line timestamp during ingestion when the timestamp is the same as the previous entry for the same stream. When enabled, if a log line in a push request has the same timestamp as the previous line for the same stream, one nanosecond is added to the log line. This will preserve the received order of log lines with the exact same timestamp when they are queried, by slightly altering their stored timestamp. NOTE: This is imperfect, because Loki accepts out of order writes, and another push request for the same stream could contain duplicate timestamps to existing entries and they will not be incremented.")
_ = l.RejectOldSamplesMaxAge.Set("7d")
f.Var(&l.RejectOldSamplesMaxAge, "validation.reject-old-samples.max-age", "Maximum accepted sample age before rejecting.")
_ = l.CreationGracePeriod.Set("10m")
f.Var(&l.CreationGracePeriod, "validation.create-grace-period", "Duration for which table will be created/deleted before/after it's needed; we won't accept samples from before this time.")
f.BoolVar(&l.EnforceMetricName, "validation.enforce-metric-name", true, "Enforce every sample has a metric name.")
f.IntVar(&l.MaxEntriesLimitPerQuery, "validation.max-entries-limit", 5000, "Per-user entries limit per query")
f.IntVar(&l.MaxEntriesLimitPerQuery, "validation.max-entries-limit", 5000, "Maximum number of log entries that will be returned for a query.")
f.IntVar(&l.MaxLocalStreamsPerUser, "ingester.max-streams-per-user", 0, "Maximum number of active streams per user, per ingester. 0 to disable.")
f.IntVar(&l.MaxGlobalStreamsPerUser, "ingester.max-global-streams-per-user", 5000, "Maximum number of active streams per user, across the cluster. 0 to disable.")
f.BoolVar(&l.UnorderedWrites, "ingester.unordered-writes", true, "Allow out of order writes.")
f.IntVar(&l.MaxGlobalStreamsPerUser, "ingester.max-global-streams-per-user", 5000, "Maximum number of active streams per user, across the cluster. 0 to disable. When the global limit is enabled, each ingester is configured with a dynamic local limit based on the replication factor and the current number of healthy ingesters, and is kept updated whenever the number of ingesters changes.")
f.BoolVar(&l.UnorderedWrites, "ingester.unordered-writes", true, "When true, out-of-order writes are accepted.")
_ = l.PerStreamRateLimit.Set(strconv.Itoa(defaultPerStreamRateLimit))
f.Var(&l.PerStreamRateLimit, "ingester.per-stream-rate-limit", "Maximum byte rate per second per stream, also expressible in human readable forms (1MB, 256KB, etc).")
_ = l.PerStreamRateLimitBurst.Set(strconv.Itoa(defaultPerStreamBurstLimit))
f.Var(&l.PerStreamRateLimitBurst, "ingester.per-stream-rate-limit-burst", "Maximum burst bytes per stream, also expressible in human readable forms (1MB, 256KB, etc).")
f.Var(&l.PerStreamRateLimitBurst, "ingester.per-stream-rate-limit-burst", "Maximum burst bytes per stream, also expressible in human readable forms (1MB, 256KB, etc). This is how far above the rate limit a stream can 'burst' before the stream is limited.")
f.IntVar(&l.MaxChunksPerQuery, "store.query-chunk-limit", 2e6, "Maximum number of chunks that can be fetched in a single query.")
_ = l.MaxQueryLength.Set("721h")
f.Var(&l.MaxQueryLength, "store.max-query-length", "Limit to length of chunk store queries, 0 to disable.")
f.IntVar(&l.MaxQuerySeries, "querier.max-query-series", 500, "Limit the maximum of unique series returned by a metric query. When the limit is reached an error is returned.")
f.Var(&l.MaxQueryLength, "store.max-query-length", "The limit to length of chunk store queries. 0 to disable.")
f.IntVar(&l.MaxQuerySeries, "querier.max-query-series", 500, "Limit the maximum number of unique series returned by a metric query. When the limit is reached an error is returned.")
_ = l.QueryTimeout.Set(DefaultPerTenantQueryTimeout)
f.Var(&l.QueryTimeout, "querier.query-timeout", "Timeout when querying backends (ingesters or storage) during the execution of a query request. If a specific per-tenant timeout is used, this timeout is ignored.")
_ = l.MaxQueryLookback.Set("0s")
f.Var(&l.MaxQueryLookback, "querier.max-query-lookback", "Limit how long back data (series and metadata) can be queried, up until <lookback> duration ago. This limit is enforced in the query-frontend, querier and ruler. If the requested time range is outside the allowed range, the request will not fail but will be manipulated to only query data within the allowed time range. 0 to disable.")
f.IntVar(&l.MaxQueryParallelism, "querier.max-query-parallelism", 32, "Maximum number of queries will be scheduled in parallel by the frontend.")
f.Var(&l.MaxQueryLookback, "querier.max-query-lookback", "Limit how far back in time series data and metadata can be queried, up until lookback duration ago. This limit is enforced in the query frontend, the querier and the ruler. If the requested time range is outside the allowed range, the request will not fail, but will be modified to only query data within the allowed time range. The default value of 0 does not set a limit.")
f.IntVar(&l.MaxQueryParallelism, "querier.max-query-parallelism", 32, "Maximum number of queries that will be scheduled in parallel by the frontend.")
f.IntVar(&l.TSDBMaxQueryParallelism, "querier.tsdb-max-query-parallelism", 512, "Maximum number of queries that will be scheduled in parallel by the frontend for TSDB schemas.")
f.IntVar(&l.CardinalityLimit, "store.cardinality-limit", 1e5, "Cardinality limit for index queries.")
f.IntVar(&l.MaxStreamsMatchersPerQuery, "querier.max-streams-matcher-per-query", 1000, "Limit the number of streams matchers per query")
f.IntVar(&l.MaxConcurrentTailRequests, "querier.max-concurrent-tail-requests", 10, "Limit the number of concurrent tail requests")
f.IntVar(&l.MaxStreamsMatchersPerQuery, "querier.max-streams-matcher-per-query", 1000, "Maximum number of stream matchers per query.")
f.IntVar(&l.MaxConcurrentTailRequests, "querier.max-concurrent-tail-requests", 10, "Maximum number of concurrent tail requests.")
_ = l.MinShardingLookback.Set("0s")
f.Var(&l.MinShardingLookback, "frontend.min-sharding-lookback", "Limit the sharding time range.Queries with time range that fall between now and now minus the sharding lookback are not sharded. 0 to disable.")
f.Var(&l.MinShardingLookback, "frontend.min-sharding-lookback", "Limit queries that can be sharded. Queries within the time range of now and now minus this sharding lookback are not sharded. The default value of 0s disables the lookback, causing sharding of all queries at all times.")
_ = l.MaxCacheFreshness.Set("1m")
f.Var(&l.MaxCacheFreshness, "frontend.max-cache-freshness", "Most recent allowed cacheable result per-tenant, to prevent caching very recent results that might still be in flux.")
@ -227,17 +227,17 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) {
f.IntVar(&l.RulerMaxRulesPerRuleGroup, "ruler.max-rules-per-rule-group", 0, "Maximum number of rules per rule group per-tenant. 0 to disable.")
f.IntVar(&l.RulerMaxRuleGroupsPerTenant, "ruler.max-rule-groups-per-tenant", 0, "Maximum number of rule groups per-tenant. 0 to disable.")
f.StringVar(&l.PerTenantOverrideConfig, "limits.per-user-override-config", "", "File name of per-user overrides.")
f.StringVar(&l.PerTenantOverrideConfig, "limits.per-user-override-config", "", "Feature renamed to 'runtime configuration', flag deprecated in favor of -runtime-config.file (runtime_config.file in YAML).")
_ = l.RetentionPeriod.Set("744h")
f.Var(&l.RetentionPeriod, "store.retention", "How long before chunks will be deleted from the store. (requires compactor retention enabled).")
f.Var(&l.RetentionPeriod, "store.retention", "Retention to apply for the store, if the retention is enabled on the compactor side.")
_ = l.PerTenantOverridePeriod.Set("10s")
f.Var(&l.PerTenantOverridePeriod, "limits.per-user-override-period", "Period with this to reload the overrides.")
f.Var(&l.PerTenantOverridePeriod, "limits.per-user-override-period", "Feature renamed to 'runtime configuration'; flag deprecated in favor of -runtime-config.reload-period (runtime_config.period in YAML).")
_ = l.QuerySplitDuration.Set("30m")
f.Var(&l.QuerySplitDuration, "querier.split-queries-by-interval", "Split queries by an interval and execute in parallel, 0 disables it. This also determines how cache keys are chosen when result caching is enabled")
f.Var(&l.QuerySplitDuration, "querier.split-queries-by-interval", "Split queries by a time interval and execute in parallel. The value 0 disables splitting by time. This also determines how cache keys are chosen when result caching is enabled.")
f.StringVar(&l.DeletionMode, "compactor.deletion-mode", "filter-and-delete", "Set the deletion mode for the user. Options are: disabled, filter-only, and filter-and-delete")
f.StringVar(&l.DeletionMode, "compactor.deletion-mode", "filter-and-delete", "Deletion mode. Can be one of 'disabled', 'filter-only', or 'filter-and-delete'. When set to 'filter-only' or 'filter-and-delete', and if retention_enabled is true, then the log entry deletion API endpoints are available.")
// Deprecated
dskit_flagext.DeprecatedFlag(f, "compactor.allow-deletes", "Deprecated. Instead, see compactor.deletion-mode which is another per tenant configuration", util_log.Logger)

@ -0,0 +1,185 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/cortexproject/cortex/blob/master/tools/doc-generator/main.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Cortex Authors.
package main
import (
"flag"
"fmt"
"os"
"path/filepath"
"strings"
"text/template"
"github.com/grafana/loki/pkg/loki"
"github.com/grafana/loki/tools/doc-generator/parse"
)
const (
maxLineWidth = 80
tabWidth = 2
)
func removeFlagPrefix(block *parse.ConfigBlock, prefix string) {
for _, entry := range block.Entries {
switch entry.Kind {
case parse.KindBlock:
// Skip root blocks
if !entry.Root {
removeFlagPrefix(entry.Block, prefix)
}
case parse.KindField:
if strings.HasPrefix(entry.FieldFlag, prefix) {
entry.FieldFlag = "<prefix>" + entry.FieldFlag[len(prefix):]
}
}
}
}
func annotateFlagPrefix(blocks []*parse.ConfigBlock) {
// Find duplicated blocks
groups := map[string][]*parse.ConfigBlock{}
for _, block := range blocks {
groups[block.Name] = append(groups[block.Name], block)
}
// For each duplicated block, we need to fix the CLI flags, because
// in the documentation each block will be displayed only once but
// since they're duplicated they will have a different CLI flag
// prefix, which we want to correctly document.
for _, group := range groups {
if len(group) == 1 {
continue
}
// We need to find the CLI flags prefix of each config block. To do it,
// we pick the first entry from each config block and then find the
// different prefix across all of them.
var flags []string
for _, block := range group {
for _, entry := range block.Entries {
if entry.Kind == parse.KindField {
if len(entry.FieldFlag) > 0 {
flags = append(flags, entry.FieldFlag)
}
break
}
}
}
var allPrefixes []string
for i, prefix := range parse.FindFlagsPrefix(flags) {
if len(prefix) > 0 {
group[i].FlagsPrefix = prefix
allPrefixes = append(allPrefixes, prefix)
}
}
// Store all found prefixes into each block so that when we generate the
// markdown we also know which are all the prefixes for each root block.
for _, block := range group {
block.FlagsPrefixes = allPrefixes
}
}
// Finally, we can remove the CLI flags prefix from the blocks
// which have one annotated.
for _, block := range blocks {
if block.FlagsPrefix != "" {
removeFlagPrefix(block, block.FlagsPrefix)
}
}
}
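// Illustrative sketch (not part of this change): for two duplicated ring
// blocks whose first flags are, say, "ruler.ring.heartbeat-timeout" and
// "index-gateway.ring.heartbeat-timeout", parse.FindFlagsPrefix strips the
// common dot-separated suffix and yields the distinct prefixes (here roughly
// "ruler" and "index-gateway"), which annotateFlagPrefix records on each block
// before removeFlagPrefix rewrites the flags as "<prefix>...". The flag names
// are hypothetical.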
func generateBlocksMarkdown(blocks []*parse.ConfigBlock) string {
md := &markdownWriter{}
md.writeConfigDoc(blocks)
return md.string()
}
func generateBlockMarkdown(blocks []*parse.ConfigBlock, blockName, fieldName string) string {
// Look for the requested block.
for _, block := range blocks {
if block.Name != blockName {
continue
}
md := &markdownWriter{}
// Wrap the root block with another block, so that we can show the name of the
// root field containing the block specs.
md.writeConfigBlock(&parse.ConfigBlock{
Name: blockName,
Desc: block.Desc,
Entries: []*parse.ConfigEntry{
{
Kind: parse.KindBlock,
Name: fieldName,
Required: true,
Block: block,
BlockDesc: "",
Root: false,
},
},
})
return md.string()
}
// If the block has not been found, we return an empty string.
return ""
}
func main() {
// Parse the generator flags.
flag.Parse()
if flag.NArg() != 1 {
fmt.Fprintf(os.Stderr, "Usage: doc-generator template-file")
os.Exit(1)
}
templatePath := flag.Arg(0)
// In order to match YAML config fields with CLI flags, we map
// the memory address of the CLI flag variables and match them with
// the config struct fields' addresses.
cfg := &loki.Config{}
flags := parse.Flags(cfg)
// Parse the config, mapping each config field with the related CLI flag.
blocks, err := parse.Config(cfg, flags, parse.RootBlocks)
if err != nil {
fmt.Fprintf(os.Stderr, "An error occurred while generating the doc: %s\n", err.Error())
os.Exit(1)
}
// Annotate the flags prefix for each root block, and remove the
// prefix wherever encountered in the config blocks.
annotateFlagPrefix(blocks)
// Generate documentation markdown.
data := struct {
ConfigFile string
GeneratedFileWarning string
}{
GeneratedFileWarning: "<!-- DO NOT EDIT THIS FILE - This file has been automatically generated from its .template -->",
ConfigFile: generateBlocksMarkdown(blocks),
}
// Load the template file.
tpl := template.New(filepath.Base(templatePath))
tpl, err = tpl.ParseFiles(templatePath)
if err != nil {
fmt.Fprintf(os.Stderr, "An error occurred while loading the template %s: %s\n", templatePath, err.Error())
os.Exit(1)
}
// Execute the template to inject generated doc.
if err := tpl.Execute(os.Stdout, data); err != nil {
fmt.Fprintf(os.Stderr, "An error occurred while executing the template %s: %s\n", templatePath, err.Error())
os.Exit(1)
}
}
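// Illustrative sketch (not part of this change): the minimal template contract
// main relies on; a real template such as index.template references the same
// two fields.
func exampleTemplate() error {
	tpl := template.Must(template.New("example").Parse(
		"{{ .GeneratedFileWarning }}\n\n{{ .ConfigFile }}\n"))
	return tpl.Execute(os.Stdout, struct {
		ConfigFile           string
		GeneratedFileWarning string
	}{ConfigFile: "<generated markdown>", GeneratedFileWarning: "<warning comment>"})
}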

@ -0,0 +1,645 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/cortexproject/cortex/blob/master/tools/doc-generator/parser.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Cortex Authors.
package parse
import (
"flag"
"fmt"
"net/url"
"reflect"
"strings"
"time"
"unicode"
"github.com/grafana/dskit/flagext"
"github.com/grafana/regexp"
"github.com/pkg/errors"
"github.com/prometheus/common/model"
prometheus_config "github.com/prometheus/prometheus/config"
"github.com/prometheus/prometheus/model/relabel"
"github.com/weaveworks/common/logging"
"github.com/grafana/loki/pkg/ruler/util"
storage_config "github.com/grafana/loki/pkg/storage/config"
util_validation "github.com/grafana/loki/pkg/util/validation"
"github.com/grafana/loki/pkg/validation"
)
var (
yamlFieldNameParser = regexp.MustCompile("^[^,]+")
yamlFieldInlineParser = regexp.MustCompile("^[^,]*,inline$")
)
// ExamplerConfig can be implemented by configs to provide examples.
// If string is non-empty, it will be added as comment.
// If yaml value is non-empty, it will be marshaled as yaml under the same key as it would appear in config.
type ExamplerConfig interface {
ExampleDoc() (comment string, yaml interface{})
}
type FieldExample struct {
Comment string
Yaml interface{}
}
type ConfigBlock struct {
Name string
Desc string
Entries []*ConfigEntry
FlagsPrefix string
FlagsPrefixes []string
}
func (b *ConfigBlock) Add(entry *ConfigEntry) {
b.Entries = append(b.Entries, entry)
}
type EntryKind string
const (
fieldString = "string"
fieldRelabelConfig = "relabel_config..."
)
const (
KindBlock EntryKind = "block"
KindField EntryKind = "field"
KindSlice EntryKind = "slice"
KindMap EntryKind = "map"
)
type ConfigEntry struct {
Kind EntryKind
Name string
Required bool
// In case the Kind is KindBlock
Block *ConfigBlock
BlockDesc string
Root bool
// In case the Kind is KindField
FieldFlag string
FieldDesc string
FieldType string
FieldDefault string
FieldExample *FieldExample
// In case the Kind is KindMap or KindSlice
Element *ConfigBlock
}
func (e ConfigEntry) Description() string {
return e.FieldDesc
}
type RootBlock struct {
Name string
Desc string
StructType reflect.Type
}
func Flags(cfg flagext.Registerer) map[uintptr]*flag.Flag {
fs := flag.NewFlagSet("", flag.PanicOnError)
cfg.RegisterFlags(fs)
flags := map[uintptr]*flag.Flag{}
fs.VisitAll(func(f *flag.Flag) {
// Skip deprecated flags
if f.Value.String() == "deprecated" {
return
}
ptr := reflect.ValueOf(f.Value).Pointer()
flags[ptr] = f
})
return flags
}
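// Illustrative sketch (not part of this change): the returned map is keyed by
// the address of the variable each flag is bound to, so a flag can be looked
// up from the address of its config field. The type and flag name below are
// hypothetical.
type exampleCfg struct {
	Timeout time.Duration
}

func (c *exampleCfg) RegisterFlags(f *flag.FlagSet) {
	f.DurationVar(&c.Timeout, "example.timeout", time.Second, "Hypothetical timeout.")
}

func exampleLookup() *flag.Flag {
	cfg := &exampleCfg{}
	flags := Flags(cfg)
	return flags[reflect.ValueOf(&cfg.Timeout).Pointer()]
}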
// Config returns a slice of ConfigBlocks. The first ConfigBlock is a recursively expanded cfg.
// The remaining entries in the slice are all (root or not) ConfigBlocks.
func Config(cfg interface{}, flags map[uintptr]*flag.Flag, rootBlocks []RootBlock) ([]*ConfigBlock, error) {
return config(nil, cfg, flags, rootBlocks)
}
func config(block *ConfigBlock, cfg interface{}, flags map[uintptr]*flag.Flag, rootBlocks []RootBlock) ([]*ConfigBlock, error) {
var blocks []*ConfigBlock
// If the input block is nil it means we're generating the doc for the top-level block
if block == nil {
block = &ConfigBlock{}
blocks = append(blocks, block)
}
// The input config is expected to be addressable.
if reflect.TypeOf(cfg).Kind() != reflect.Ptr {
t := reflect.TypeOf(cfg)
return nil, fmt.Errorf("%s is a %s while a %s is expected", t, t.Kind(), reflect.Ptr)
}
// The input config is expected to be a pointer to struct.
v := reflect.ValueOf(cfg).Elem()
t := v.Type()
if v.Kind() != reflect.Struct {
return nil, fmt.Errorf("%s is a %s while a %s is expected", v, v.Kind(), reflect.Struct)
}
for i := 0; i < t.NumField(); i++ {
field := t.Field(i)
fieldValue := v.FieldByIndex(field.Index)
// Skip fields explicitly marked as "hidden" in the doc
if isFieldHidden(field) {
continue
}
// Skip fields not exported via yaml (unless they're inline)
fieldName := getFieldName(field)
if fieldName == "" && !isFieldInline(field) {
continue
}
// Skip field types which are non-configurable
if field.Type.Kind() == reflect.Func {
continue
}
// Skip deprecated fields we're still keeping for backward compatibility
// reasons (by convention we prefix them by UnusedFlag)
if strings.HasPrefix(field.Name, "UnusedFlag") {
continue
}
// Handle custom fields in vendored libs over which we have no control.
fieldEntry, err := getCustomFieldEntry(cfg, field, fieldValue, flags)
if err != nil {
return nil, err
}
if fieldEntry != nil {
block.Add(fieldEntry)
continue
}
// Recursively re-iterate if it's a struct, and it's not a custom type.
if _, custom := getCustomFieldType(field.Type); (field.Type.Kind() == reflect.Struct || field.Type.Kind() == reflect.Ptr) && !custom {
// Check whether the sub-block is a root config block
rootName, rootDesc, isRoot := isRootBlock(field.Type, rootBlocks)
// Since we're going to recursively iterate, we need to create a new sub
// block and pass it to the doc generation function.
var subBlock *ConfigBlock
if !isFieldInline(field) {
var blockName string
var blockDesc string
if isRoot {
blockName = rootName
// Honor the custom description if available.
blockDesc = getFieldDescription(cfg, field, rootDesc)
} else {
blockName = fieldName
blockDesc = getFieldDescription(cfg, field, "")
}
subBlock = &ConfigBlock{
Name: blockName,
Desc: blockDesc,
}
block.Add(&ConfigEntry{
Kind: KindBlock,
Name: fieldName,
Required: isFieldRequired(field),
Block: subBlock,
BlockDesc: blockDesc,
Root: isRoot,
})
if isRoot {
blocks = append(blocks, subBlock)
}
} else {
subBlock = block
}
if field.Type.Kind() == reflect.Ptr {
// If this is a pointer, it's probably nil, so we initialize it.
fieldValue = reflect.New(field.Type.Elem())
} else if field.Type.Kind() == reflect.Struct {
fieldValue = fieldValue.Addr()
}
// Recursively generate the doc for the sub-block
otherBlocks, err := config(subBlock, fieldValue.Interface(), flags, rootBlocks)
if err != nil {
return nil, err
}
blocks = append(blocks, otherBlocks...)
continue
}
var (
element *ConfigBlock
kind = KindField
)
{
// Add ConfigBlock for slices only if the field isn't a custom type,
// which shouldn't be inspected because it doesn't have YAML tags, flag registrations, etc.
_, isCustomType := getFieldCustomType(field.Type)
isSliceOfStructs := field.Type.Kind() == reflect.Slice && (field.Type.Elem().Kind() == reflect.Struct || field.Type.Elem().Kind() == reflect.Ptr)
if !isCustomType && isSliceOfStructs {
element = &ConfigBlock{
Name: fieldName,
Desc: getFieldDescription(cfg, field, ""),
}
kind = KindSlice
_, err = config(element, reflect.New(field.Type.Elem()).Interface(), flags, rootBlocks)
if err != nil {
return nil, errors.Wrapf(err, "couldn't inspect slice, element_type=%s", field.Type.Elem())
}
}
}
fieldType, err := getFieldType(field.Type)
if err != nil {
return nil, errors.Wrapf(err, "config=%s.%s", t.PkgPath(), t.Name())
}
fieldFlag, err := getFieldFlag(field, fieldValue, flags)
if err != nil {
return nil, errors.Wrapf(err, "config=%s.%s", t.PkgPath(), t.Name())
}
if fieldFlag == nil {
block.Add(&ConfigEntry{
Kind: kind,
Name: fieldName,
Required: isFieldRequired(field),
FieldDesc: getFieldDescription(cfg, field, ""),
FieldType: fieldType,
FieldExample: getFieldExample(fieldName, field.Type),
Element: element,
})
continue
}
block.Add(&ConfigEntry{
Kind: kind,
Name: fieldName,
Required: isFieldRequired(field),
FieldFlag: fieldFlag.Name,
FieldDesc: getFieldDescription(cfg, field, fieldFlag.Usage),
FieldType: fieldType,
FieldDefault: getFieldDefault(field, fieldFlag.DefValue),
FieldExample: getFieldExample(fieldName, field.Type),
Element: element,
})
}
return blocks, nil
}
func getFieldName(field reflect.StructField) string {
name := field.Name
tag := field.Tag.Get("yaml")
// If the tag is not specified, then an exported field can be
// configured via the field name (lowercase), while an unexported
// field can't be configured.
if tag == "" {
if unicode.IsLower(rune(name[0])) {
return ""
}
return strings.ToLower(name)
}
// Parse the field name
fieldName := yamlFieldNameParser.FindString(tag)
if fieldName == "-" {
return ""
}
return fieldName
}
func getFieldCustomType(t reflect.Type) (string, bool) {
// Handle custom data types used in the config
switch t.String() {
case reflect.TypeOf(&url.URL{}).String():
return "url", true
case reflect.TypeOf(time.Duration(0)).String():
return "duration", true
case reflect.TypeOf(flagext.StringSliceCSV{}).String():
return fieldString, true
case reflect.TypeOf(flagext.CIDRSliceCSV{}).String():
return fieldString, true
case reflect.TypeOf([]*util.RelabelConfig{}).String():
return fieldRelabelConfig, true
case reflect.TypeOf([]*relabel.Config{}).String():
return fieldRelabelConfig, true
case reflect.TypeOf([]*util_validation.BlockedQuery{}).String():
return "blocked_query...", true
case reflect.TypeOf([]*prometheus_config.RemoteWriteConfig{}).String():
return "remote_write_config...", true
case reflect.TypeOf(storage_config.PeriodConfig{}).String():
return "period_config", true
case reflect.TypeOf(validation.OverwriteMarshalingStringMap{}).String():
return "headers", true
default:
return "", false
}
}
func getFieldType(t reflect.Type) (string, error) {
if typ, isCustom := getFieldCustomType(t); isCustom {
return typ, nil
}
// Fallback to auto-detection of built-in data types
switch t.Kind() {
case reflect.Bool:
return "boolean", nil
case reflect.Int:
fallthrough
case reflect.Int8:
fallthrough
case reflect.Int16:
fallthrough
case reflect.Int32:
fallthrough
case reflect.Int64:
fallthrough
case reflect.Uint:
fallthrough
case reflect.Uint8:
fallthrough
case reflect.Uint16:
fallthrough
case reflect.Uint32:
fallthrough
case reflect.Uint64:
return "int", nil
case reflect.Float32:
fallthrough
case reflect.Float64:
return "float", nil
case reflect.String:
return fieldString, nil
case reflect.Slice:
// Get the type of elements
elemType, err := getFieldType(t.Elem())
if err != nil {
return "", err
}
return "list of " + elemType + "s", nil
case reflect.Map:
return fmt.Sprintf("map of %s to %s", t.Key(), t.Elem().String()), nil
case reflect.Struct:
return t.Name(), nil
case reflect.Ptr:
return getFieldType(t.Elem())
case reflect.Interface:
return t.Name(), nil
default:
return "", fmt.Errorf("unsupported data type %s", t.Kind())
}
}
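// Illustrative examples (not part of this change) of the mappings above:
//
//	getFieldType(reflect.TypeOf(time.Duration(0))) // "duration" (custom type)
//	getFieldType(reflect.TypeOf([]string{}))       // "list of strings"
//	getFieldType(reflect.TypeOf(map[string]int{})) // "map of string to int"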
func getCustomFieldType(t reflect.Type) (string, bool) {
// Handle custom data types used in the config
switch t.String() {
case reflect.TypeOf(&url.URL{}).String():
return "url", true
case reflect.TypeOf(time.Duration(0)).String():
return "duration", true
case reflect.TypeOf(flagext.StringSliceCSV{}).String():
return fieldString, true
case reflect.TypeOf(flagext.CIDRSliceCSV{}).String():
return fieldString, true
case reflect.TypeOf([]*relabel.Config{}).String():
return fieldRelabelConfig, true
case reflect.TypeOf([]*util.RelabelConfig{}).String():
return fieldRelabelConfig, true
case reflect.TypeOf(&prometheus_config.RemoteWriteConfig{}).String():
return "remote_write_config...", true
case reflect.TypeOf(validation.OverwriteMarshalingStringMap{}).String():
return "headers", true
default:
return "", false
}
}
func getFieldFlag(field reflect.StructField, fieldValue reflect.Value, flags map[uintptr]*flag.Flag) (*flag.Flag, error) {
if isAbsentInCLI(field) {
return nil, nil
}
fieldPtr := fieldValue.Addr().Pointer()
fieldFlag, ok := flags[fieldPtr]
if !ok {
return nil, nil
}
return fieldFlag, nil
}
func getFieldExample(fieldKey string, fieldType reflect.Type) *FieldExample {
ex, ok := reflect.New(fieldType).Interface().(ExamplerConfig)
if !ok {
return nil
}
comment, yml := ex.ExampleDoc()
return &FieldExample{
Comment: comment,
Yaml: map[string]interface{}{fieldKey: yml},
}
}
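// Illustrative sketch (not part of this change): a config type opting into a
// documentation example by implementing ExamplerConfig; the type and values
// are hypothetical.
type exampleRetentionConfig struct {
	Period string `yaml:"period"`
}

func (exampleRetentionConfig) ExampleDoc() (string, interface{}) {
	return "Keep data for 31 days.", map[string]interface{}{"period": "744h"}
}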
func getCustomFieldEntry(cfg interface{}, field reflect.StructField, fieldValue reflect.Value, flags map[uintptr]*flag.Flag) (*ConfigEntry, error) {
if field.Type == reflect.TypeOf(logging.Level{}) || field.Type == reflect.TypeOf(logging.Format{}) {
fieldFlag, err := getFieldFlag(field, fieldValue, flags)
if err != nil || fieldFlag == nil {
return nil, err
}
return &ConfigEntry{
Kind: KindField,
Name: getFieldName(field),
Required: isFieldRequired(field),
FieldFlag: fieldFlag.Name,
FieldDesc: getFieldDescription(cfg, field, fieldFlag.Usage),
FieldType: fieldString,
FieldDefault: getFieldDefault(field, fieldFlag.DefValue),
}, nil
}
if field.Type == reflect.TypeOf(flagext.URLValue{}) {
fieldFlag, err := getFieldFlag(field, fieldValue, flags)
if err != nil || fieldFlag == nil {
return nil, err
}
return &ConfigEntry{
Kind: KindField,
Name: getFieldName(field),
Required: isFieldRequired(field),
FieldFlag: fieldFlag.Name,
FieldDesc: getFieldDescription(cfg, field, fieldFlag.Usage),
FieldType: "url",
FieldDefault: getFieldDefault(field, fieldFlag.DefValue),
}, nil
}
if field.Type == reflect.TypeOf(flagext.Secret{}) {
fieldFlag, err := getFieldFlag(field, fieldValue, flags)
if err != nil || fieldFlag == nil {
return nil, err
}
return &ConfigEntry{
Kind: KindField,
Name: getFieldName(field),
Required: isFieldRequired(field),
FieldFlag: fieldFlag.Name,
FieldDesc: getFieldDescription(cfg, field, fieldFlag.Usage),
FieldType: fieldString,
FieldDefault: getFieldDefault(field, fieldFlag.DefValue),
}, nil
}
if field.Type == reflect.TypeOf(model.Duration(0)) {
fieldFlag, err := getFieldFlag(field, fieldValue, flags)
if err != nil || fieldFlag == nil {
return nil, err
}
return &ConfigEntry{
Kind: KindField,
Name: getFieldName(field),
Required: isFieldRequired(field),
FieldFlag: fieldFlag.Name,
FieldDesc: getFieldDescription(cfg, field, fieldFlag.Usage),
FieldType: "duration",
FieldDefault: getFieldDefault(field, fieldFlag.DefValue),
}, nil
}
if field.Type == reflect.TypeOf(flagext.Time{}) {
fieldFlag, err := getFieldFlag(field, fieldValue, flags)
if err != nil || fieldFlag == nil {
return nil, err
}
return &ConfigEntry{
Kind: KindField,
Name: getFieldName(field),
Required: isFieldRequired(field),
FieldFlag: fieldFlag.Name,
FieldDesc: getFieldDescription(cfg, field, fieldFlag.Usage),
FieldType: "time",
FieldDefault: getFieldDefault(field, fieldFlag.DefValue),
}, nil
}
return nil, nil
}
func getFieldDefault(field reflect.StructField, fallback string) string {
if v := getDocTagValue(field, "default"); v != "" {
return v
}
return fallback
}
func isFieldDeprecated(f reflect.StructField) bool {
return getDocTagFlag(f, "deprecated")
}
func isFieldHidden(f reflect.StructField) bool {
return getDocTagFlag(f, "hidden")
}
func isAbsentInCLI(f reflect.StructField) bool {
return getDocTagFlag(f, "nocli")
}
func isFieldRequired(f reflect.StructField) bool {
return getDocTagFlag(f, "required")
}
func isFieldInline(f reflect.StructField) bool {
return yamlFieldInlineParser.MatchString(f.Tag.Get("yaml"))
}
func getFieldDescription(cfg interface{}, field reflect.StructField, fallback string) string {
// Set prefix
prefix := ""
if isFieldDeprecated(field) {
prefix += "Deprecated: "
}
if desc := getDocTagValue(field, "description"); desc != "" {
return prefix + desc
}
if methodName := getDocTagValue(field, "description_method"); methodName != "" {
structRef := reflect.ValueOf(cfg)
if method, ok := structRef.Type().MethodByName(methodName); ok {
out := method.Func.Call([]reflect.Value{structRef})
if len(out) == 1 {
return prefix + out[0].String()
}
}
}
return prefix + fallback
}
func isRootBlock(t reflect.Type, rootBlocks []RootBlock) (string, string, bool) {
for _, rootBlock := range rootBlocks {
if t == rootBlock.StructType {
return rootBlock.Name, rootBlock.Desc, true
}
}
return "", "", false
}
func getDocTagFlag(f reflect.StructField, name string) bool {
cfg := parseDocTag(f)
_, ok := cfg[name]
return ok
}
func getDocTagValue(f reflect.StructField, name string) string {
cfg := parseDocTag(f)
return cfg[name]
}
func parseDocTag(f reflect.StructField) map[string]string {
cfg := map[string]string{}
tag := f.Tag.Get("doc")
if tag == "" {
return cfg
}
for _, entry := range strings.Split(tag, "|") {
parts := strings.SplitN(entry, "=", 2)
switch len(parts) {
case 1:
cfg[parts[0]] = ""
case 2:
cfg[parts[0]] = parts[1]
}
}
return cfg
}
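// Illustrative examples (not part of this change) of the doc tag grammar
// parsed above, as used throughout this PR:
//
//	doc:"hidden"                    -> map[string]string{"hidden": ""}
//	doc:"default=<hostname>"        -> map[string]string{"default": "<hostname>"}
//	doc:"deprecated|description=X"  -> map[string]string{"deprecated": "", "description": "X"}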

@ -0,0 +1,224 @@
// SPDX-License-Identifier: AGPL-3.0-only
package parse
import (
"reflect"
"github.com/grafana/dskit/crypto/tls"
"github.com/grafana/dskit/grpcclient"
"github.com/grafana/dskit/kv/consul"
"github.com/grafana/dskit/kv/etcd"
"github.com/grafana/dskit/runtimeconfig"
"github.com/weaveworks/common/server"
"github.com/grafana/loki/pkg/distributor"
"github.com/grafana/loki/pkg/ingester"
ingester_client "github.com/grafana/loki/pkg/ingester/client"
"github.com/grafana/loki/pkg/loki/common"
frontend "github.com/grafana/loki/pkg/lokifrontend"
"github.com/grafana/loki/pkg/querier"
"github.com/grafana/loki/pkg/querier/queryrange"
querier_worker "github.com/grafana/loki/pkg/querier/worker"
"github.com/grafana/loki/pkg/ruler"
"github.com/grafana/loki/pkg/ruler/rulestore/local"
"github.com/grafana/loki/pkg/scheduler"
"github.com/grafana/loki/pkg/storage"
"github.com/grafana/loki/pkg/storage/chunk/cache"
"github.com/grafana/loki/pkg/storage/chunk/client/aws"
"github.com/grafana/loki/pkg/storage/chunk/client/azure"
"github.com/grafana/loki/pkg/storage/chunk/client/baidubce"
"github.com/grafana/loki/pkg/storage/chunk/client/gcp"
"github.com/grafana/loki/pkg/storage/chunk/client/openstack"
storage_config "github.com/grafana/loki/pkg/storage/config"
"github.com/grafana/loki/pkg/storage/stores/indexshipper/compactor"
"github.com/grafana/loki/pkg/storage/stores/series/index"
"github.com/grafana/loki/pkg/storage/stores/shipper/indexgateway"
"github.com/grafana/loki/pkg/tracing"
"github.com/grafana/loki/pkg/usagestats"
"github.com/grafana/loki/pkg/validation"
)
var (
// RootBlocks is an ordered list of root blocks with their associated descriptions.
// The order is the same order that will follow the markdown generation.
// Root blocks map to the configuration variables defined in Config of pkg/loki/loki.go
RootBlocks = []RootBlock{
{
Name: "server",
StructType: reflect.TypeOf(server.Config{}),
Desc: "Configures the server of the launched module(s).",
},
{
Name: "distributor",
StructType: reflect.TypeOf(distributor.Config{}),
Desc: "Configures the distributor.",
},
{
Name: "querier",
StructType: reflect.TypeOf(querier.Config{}),
Desc: "Configures the querier. Only appropriate when running all modules or just the querier.",
},
{
Name: "query_scheduler",
StructType: reflect.TypeOf(scheduler.Config{}),
Desc: "The query_scheduler block configures the Loki query scheduler. When configured it separates the tenant query queues from the query-frontend.",
},
{
Name: "frontend",
StructType: reflect.TypeOf(frontend.Config{}),
Desc: "The frontend block configures the Loki query-frontend.",
},
{
Name: "query_range",
StructType: reflect.TypeOf(queryrange.Config{}),
Desc: "The query_range block configures the query splitting and caching in the Loki query-frontend.",
},
{
Name: "ruler",
StructType: reflect.TypeOf(ruler.Config{}),
Desc: "The ruler block configures the Loki ruler.",
},
{
Name: "ingester_client",
StructType: reflect.TypeOf(ingester_client.Config{}),
Desc: "The ingester_client block configures how the distributor will connect to ingesters. Only appropriate when running all components, the distributor, or the querier.",
},
{
Name: "ingester",
StructType: reflect.TypeOf(ingester.Config{}),
Desc: "The ingester block configures the ingester and how the ingester will register itself to a key value store.",
},
{
Name: "index_gateway",
StructType: reflect.TypeOf(indexgateway.Config{}),
Desc: "The index_gateway block configures the Loki index gateway server, responsible for serving index queries without the need to constantly interact with the object store.",
},
{
Name: "storage_config",
StructType: reflect.TypeOf(storage.Config{}),
Desc: "The storage_config block configures one of many possible stores for both the index and chunks. Which configuration to be picked should be defined in schema_config block.",
},
{
Name: "chunk_store_config",
StructType: reflect.TypeOf(storage_config.ChunkStoreConfig{}),
Desc: "The chunk_store_config block configures how chunks will be cached and how long to wait before saving them to the backing store.",
},
{
Name: "schema_config",
StructType: reflect.TypeOf(storage_config.SchemaConfig{}),
Desc: "Configures the chunk index schema and where it is stored.",
},
{
Name: "compactor",
StructType: reflect.TypeOf(compactor.Config{}),
Desc: "The compactor block configures the compactor component, which compacts index shards for performance.",
},
{
Name: "limits_config",
StructType: reflect.TypeOf(validation.Limits{}),
Desc: "The limits_config block configures global and per-tenant limits in Loki.",
},
{
Name: "frontend_worker",
StructType: reflect.TypeOf(querier_worker.Config{}),
Desc: "The frontend_worker configures the worker - running within the Loki querier - picking up and executing queries enqueued by the query-frontend.",
},
{
Name: "table_manager",
StructType: reflect.TypeOf(index.TableManagerConfig{}),
Desc: "The table_manager block configures the table manager for retention.",
},
{
Name: "runtime_config",
StructType: reflect.TypeOf(runtimeconfig.Config{}),
Desc: "Configuration for 'runtime config' module, responsible for reloading runtime configuration file.",
},
{
Name: "tracing",
StructType: reflect.TypeOf(tracing.Config{}),
Desc: "Configuration for tracing.",
},
{
Name: "analytics",
StructType: reflect.TypeOf(usagestats.Config{}),
Desc: "Configuration for usage report.",
},
{
Name: "common",
StructType: reflect.TypeOf(common.Config{}),
Desc: "Common configuration to be shared between multiple modules. If a more specific configuration is given in other sections, the related configuration within this section will be ignored.",
},
// Non-root blocks
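// These entries do not map to top-level fields of the Loki Config struct, but
// are shared types referenced from multiple root blocks, so each still gets
// its own documentation section.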
// StoreConfig dskit type: https://github.com/grafana/dskit/blob/main/kv/client.go#L44-L52
{
Name: "consul",
StructType: reflect.TypeOf(consul.Config{}),
Desc: "Configuration for a Consul client. Only applies if store is consul.",
},
{
Name: "etcd",
StructType: reflect.TypeOf(etcd.Config{}),
Desc: "Configuration for an ETCD v3 client. Only applies if store is etcd.",
},
// GRPC client
{
Name: "grpc_client",
StructType: reflect.TypeOf(grpcclient.Config{}),
Desc: "The grpc_client block configures the gRPC client used to communicate between two Loki components.",
},
// TLS config
{
Name: "tls_config",
StructType: reflect.TypeOf(tls.ClientConfig{}),
Desc: "The TLS configuration.",
},
// Cache config
{
Name: "cache_config",
StructType: reflect.TypeOf(cache.Config{}),
Desc: "The cache block configures the cache backend.",
},
// Schema periodic config
{
Name: "period_config",
StructType: reflect.TypeOf(storage_config.PeriodConfig{}),
Desc: "The period_config block configures what index schemas should be used for from specific time periods.",
},
// Storage config
{
Name: "azure_storage_config",
StructType: reflect.TypeOf(azure.BlobStorageConfig{}),
Desc: "The azure_storage_config block configures the connection to Azure object storage backend.",
},
{
Name: "gcs_storage_config",
StructType: reflect.TypeOf(gcp.GCSConfig{}),
Desc: "The gcs_storage_config block configures the connection to Google Cloud Storage object storage backend.",
},
{
Name: "s3_storage_config",
StructType: reflect.TypeOf(aws.S3Config{}),
Desc: "The s3_storage_config block configures the connection to Amazon S3 object storage backend.",
},
{
Name: "bos_storage_config",
StructType: reflect.TypeOf(baidubce.BOSStorageConfig{}),
Desc: "The bos_storage_config block configures the connection to Baidu Object Storage (BOS) object storage backend.",
},
{
Name: "swift_storage_config",
StructType: reflect.TypeOf(openstack.SwiftConfig{}),
Desc: "The swift_storage_config block configures the connection to OpenStack Object Storage (Swift) object storage backend.",
},
{
Name: "local_storage_config",
StructType: reflect.TypeOf(local.Config{}),
Desc: "The local_storage_config block configures the usage of local file system as object storage backend.",
},
}
)

@@ -0,0 +1,62 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/cortexproject/cortex/blob/master/tools/doc-generator/util.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Cortex Authors.
package parse
import (
"math"
"strings"
)
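// FindFlagsPrefix strips the longest dot-separated suffix common to all given
// flag names and returns what remains of each. For example,
// {"ruler.endpoint", "alertmanager.endpoint"} yields {"ruler", "alertmanager"}.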
func FindFlagsPrefix(flags []string) []string {
if len(flags) == 0 {
return flags
}
// Split each input flag into tokens separated by "."
// because we want to find the common prefix where
// segments are dot-separated.
var tokens [][]string
for _, flag := range flags {
tokens = append(tokens, strings.Split(flag, "."))
}
// Find the length of the shortest token list.
minLength := math.MaxInt32
for _, t := range tokens {
if len(t) < minLength {
minLength = len(t)
}
}
// We iterate backward to find common suffixes. Each time
// a common suffix is found, we remove it from the tokens.
outer:
for i := 0; i < minLength; i++ {
lastToken := tokens[0][len(tokens[0])-1]
// Interrupt if the last token is different across the flags.
for _, t := range tokens {
if t[len(t)-1] != lastToken {
break outer
}
}
// The suffix token is equal across all flags, so we
// remove it from all of them and re-iterate.
for j, t := range tokens {
tokens[j] = t[:len(t)-1]
}
}
// The remaining tokens are each flag's distinct prefix, which we can
// now join back together with ".".
var prefixes []string
for _, t := range tokens {
prefixes = append(prefixes, strings.Join(t, "."))
}
return prefixes
}

@@ -0,0 +1,52 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/cortexproject/cortex/blob/master/tools/doc-generator/util_test.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Cortex Authors.
package parse
import (
"testing"
"github.com/stretchr/testify/assert"
)
func Test_findFlagsPrefix(t *testing.T) {
tests := []struct {
input []string
expected []string
}{
{
input: []string{},
expected: []string{},
},
{
input: []string{""},
expected: []string{""},
},
{
input: []string{"", ""},
expected: []string{"", ""},
},
{
input: []string{"foo", "foo", "foo"},
expected: []string{"", "", ""},
},
{
input: []string{"ruler.endpoint", "alertmanager.endpoint"},
expected: []string{"ruler", "alertmanager"},
},
{
input: []string{"ruler.endpoint.address", "alertmanager.endpoint.address"},
expected: []string{"ruler", "alertmanager"},
},
{
input: []string{"ruler.first.address", "ruler.second.address"},
expected: []string{"ruler.first", "ruler.second"},
},
}
for _, test := range tests {
assert.Equal(t, test.expected, FindFlagsPrefix(test.input))
}
}

@@ -0,0 +1,245 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/cortexproject/cortex/blob/master/tools/doc-generator/writer.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Cortex Authors.
package main
import (
"fmt"
"sort"
"strconv"
"strings"
"github.com/grafana/regexp"
"github.com/mitchellh/go-wordwrap"
"gopkg.in/yaml.v3"
"github.com/grafana/loki/tools/doc-generator/parse"
)
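// specWriter renders a parsed config block tree as a YAML specification, with
// descriptions, CLI flags, and examples emitted as "#" comments.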
type specWriter struct {
out strings.Builder
}
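// writeConfigBlock writes every entry of the block at the given indentation,
// separating consecutive entries with a blank line.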
func (w *specWriter) writeConfigBlock(b *parse.ConfigBlock, indent int) {
if len(b.Entries) == 0 {
return
}
for i, entry := range b.Entries {
// Add a new line to separate from the previous entry
if i > 0 {
w.out.WriteString("\n")
}
w.writeConfigEntry(entry, indent)
}
}
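// writeConfigEntry writes a single entry: either a nested block (inlined, or a
// reference when the block is a root block) or a field with its description,
// example, CLI flag, and default value.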
func (w *specWriter) writeConfigEntry(e *parse.ConfigEntry, indent int) {
if e.Kind == parse.KindBlock {
// If the block is a root block it will have its own dedicated section in the doc,
// so here we only write down the reference without re-iterating on it.
if e.Root {
// Description
w.writeComment(e.BlockDesc, indent, 0)
if e.Block.FlagsPrefix != "" {
w.writeComment(fmt.Sprintf("The CLI flags prefix for this block configuration is: %s", e.Block.FlagsPrefix), indent, 0)
}
// Block reference without entries, because it's a root block
w.out.WriteString(pad(indent) + "[" + e.Name + ": <" + e.Block.Name + ">]\n")
} else {
// Description
w.writeComment(e.BlockDesc, indent, 0)
// Name
w.out.WriteString(pad(indent) + e.Name + ":\n")
// Entries
w.writeConfigBlock(e.Block, indent+tabWidth)
}
}
if e.Kind == parse.KindField || e.Kind == parse.KindSlice || e.Kind == parse.KindMap {
// Description
w.writeComment(e.Description(), indent, 0)
w.writeExample(e.FieldExample, indent)
w.writeFlag(e.FieldFlag, indent)
// Specification
fieldDefault := e.FieldDefault
if e.FieldType == "string" {
fieldDefault = strconv.Quote(fieldDefault)
} else if e.FieldType == "duration" {
fieldDefault = cleanupDuration(fieldDefault)
}
if e.Required {
w.out.WriteString(pad(indent) + e.Name + ": <" + e.FieldType + "> | default = " + fieldDefault + "\n")
} else {
defaultValue := ""
if len(fieldDefault) > 0 {
defaultValue = " | default = " + fieldDefault
}
w.out.WriteString(pad(indent) + "[" + e.Name + ": <" + e.FieldType + ">" + defaultValue + "]\n")
}
}
}
func (w *specWriter) writeFlag(name string, indent int) {
if name == "" {
return
}
w.out.WriteString(pad(indent) + "# CLI flag: -" + name + "\n")
}
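// writeComment writes the comment as "#"-prefixed lines, word-wrapped to
// maxLineWidth (a package-level constant defined elsewhere in the doc-generator).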
func (w *specWriter) writeComment(comment string, indent, innerIndent int) {
if comment == "" {
return
}
wrapped := wordwrap.WrapString(comment, uint(maxLineWidth-indent-innerIndent-2))
w.writeWrappedString(wrapped, indent, innerIndent)
}
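// writeExample renders an optional field example as commented YAML beneath the
// field description.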
func (w *specWriter) writeExample(example *parse.FieldExample, indent int) {
if example == nil {
return
}
w.writeComment("Example:", indent, 0)
if example.Comment != "" {
w.writeComment(example.Comment, indent, 2)
}
data, err := yaml.Marshal(example.Yaml)
if err != nil {
panic(fmt.Errorf("can't render example: %w", err))
}
w.writeWrappedString(string(data), indent, 2)
}
func (w *specWriter) writeWrappedString(s string, indent, innerIndent int) {
lines := strings.Split(strings.TrimSpace(s), "\n")
for _, line := range lines {
w.out.WriteString(pad(indent) + "# " + pad(innerIndent) + line + "\n")
}
}
func (w *specWriter) string() string {
return strings.TrimSpace(w.out.String())
}
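// markdownWriter renders config blocks as markdown sections, each embedding
// the YAML spec produced by specWriter.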
type markdownWriter struct {
out strings.Builder
}
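// writeConfigDoc writes the top-level (unnamed) block first, then each root
// block in the order declared in parse.RootBlocks.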
func (w *markdownWriter) writeConfigDoc(blocks []*parse.ConfigBlock) {
// Deduplicate root blocks.
uniqueBlocks := map[string]*parse.ConfigBlock{}
for _, block := range blocks {
uniqueBlocks[block.Name] = block
}
// Generate the markdown, honoring the root blocks order.
if topBlock, ok := uniqueBlocks[""]; ok {
w.writeConfigBlock(topBlock)
}
for _, rootBlock := range parse.RootBlocks {
if block, ok := uniqueBlocks[rootBlock.Name]; ok {
// Keep the root block description.
blockToWrite := *block
blockToWrite.Desc = rootBlock.Desc
w.writeConfigBlock(&blockToWrite)
}
}
}
func (w *markdownWriter) writeConfigBlock(block *parse.ConfigBlock) {
// Title
if block.Name != "" {
w.out.WriteString("### " + block.Name + "\n")
w.out.WriteString("\n")
}
// Description
if block.Desc != "" {
desc := block.Desc
// Wrap the first instance of the config block name in backticks
if block.Name != "" {
var matches int
nameRegexp := regexp.MustCompile(regexp.QuoteMeta(block.Name))
desc = nameRegexp.ReplaceAllStringFunc(desc, func(input string) string {
if matches == 0 {
matches++
return "`" + input + "`"
}
return input
})
}
// List of all prefixes used to reference this config block.
if len(block.FlagsPrefixes) > 1 {
sortedPrefixes := sort.StringSlice(block.FlagsPrefixes)
sortedPrefixes.Sort()
desc += " The supported CLI flags `<prefix>` used to reference this configuration block are:\n\n"
for _, prefix := range sortedPrefixes {
if prefix == "" {
desc += "- _no prefix_\n"
} else {
desc += fmt.Sprintf("- `%s`\n", prefix)
}
}
// Unfortunately the markdown compiler used by the website generator has a bug
// when a list is followed by a code block (no matter how many newlines are
// in between). To work around it, we add a non-breaking space.
desc += "\n&nbsp;"
}
w.out.WriteString(desc + "\n")
w.out.WriteString("\n")
}
// Config specs
spec := &specWriter{}
spec.writeConfigBlock(block, 0)
w.out.WriteString("```yaml\n")
w.out.WriteString(spec.string() + "\n")
w.out.WriteString("```\n")
w.out.WriteString("\n")
}
func (w *markdownWriter) string() string {
return strings.TrimSpace(w.out.String())
}
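// A typical call sequence (a sketch; how the blocks are parsed out of the Loki
// Config is handled by the parse package and is not shown here):
//
//	w := &markdownWriter{}
//	w.writeConfigDoc(blocks) // blocks []*parse.ConfigBlock
//	fmt.Print(w.string())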
func pad(length int) string {
return strings.Repeat(" ", length)
}
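// cleanupDuration strips redundant zero-valued trailing units from a rendered
// duration, e.g. "1h0m0s" becomes "1h", while a bare "0s" is left untouched.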
func cleanupDuration(value string) string {
// This is the list of suffixes to remove from the duration if they're not
// the whole duration value.
suffixes := []string{"0s", "0m"}
for _, suffix := range suffixes {
re := regexp.MustCompile("(^.+\\D)" + suffix + "$")
if groups := re.FindStringSubmatch(value); len(groups) == 2 {
value = groups[1]
}
}
return value
}

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014 Mitchell Hashimoto
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

@@ -0,0 +1,39 @@
# go-wordwrap
`go-wordwrap` (Golang package: `wordwrap`) is a package for Go that
automatically wraps words into multiple lines. The primary use case for this
is in formatting CLI output, but of course word wrapping is a generally useful
thing to do.
## Installation and Usage
Install using `go get github.com/mitchellh/go-wordwrap`.
Full documentation is available at
http://godoc.org/github.com/mitchellh/go-wordwrap
Below is an example of its usage ignoring errors:
```go
wrapped := wordwrap.WrapString("foo bar baz", 3)
fmt.Println(wrapped)
```
Would output:
```
foo
bar
baz
```
## Word Wrap Algorithm
This library doesn't use any clever algorithm for word wrapping. The wrapping
is actually very naive: it breaks only at whitespace or an explicit linebreak.
The goal of this library is for word wrapping CLI output, so the input is
typically pretty well controlled human language. Because of this, the naive
approach typically works just fine.
In the future, we'd like to make the algorithm more advanced. We would do
so without breaking the API.

@@ -0,0 +1,73 @@
package wordwrap
import (
"bytes"
"unicode"
)
// WrapString wraps the given string within lim width in characters.
//
// Wrapping is currently naive and only happens at white-space. A future
// version of the library will implement smarter wrapping. This means that
// pathological cases can dramatically reach past the limit, such as a very
// long word.
func WrapString(s string, lim uint) string {
// Initialize a buffer with a slightly larger size to account for breaks
init := make([]byte, 0, len(s))
buf := bytes.NewBuffer(init)
var current uint
var wordBuf, spaceBuf bytes.Buffer
for _, char := range s {
if char == '\n' {
if wordBuf.Len() == 0 {
if current+uint(spaceBuf.Len()) > lim {
current = 0
} else {
current += uint(spaceBuf.Len())
spaceBuf.WriteTo(buf)
}
spaceBuf.Reset()
} else {
current += uint(spaceBuf.Len() + wordBuf.Len())
spaceBuf.WriteTo(buf)
spaceBuf.Reset()
wordBuf.WriteTo(buf)
wordBuf.Reset()
}
buf.WriteRune(char)
current = 0
} else if unicode.IsSpace(char) {
if spaceBuf.Len() == 0 || wordBuf.Len() > 0 {
current += uint(spaceBuf.Len() + wordBuf.Len())
spaceBuf.WriteTo(buf)
spaceBuf.Reset()
wordBuf.WriteTo(buf)
wordBuf.Reset()
}
spaceBuf.WriteRune(char)
} else {
wordBuf.WriteRune(char)
if current+uint(spaceBuf.Len()+wordBuf.Len()) > lim && uint(wordBuf.Len()) < lim {
buf.WriteRune('\n')
current = 0
spaceBuf.Reset()
}
}
}
if wordBuf.Len() == 0 {
if current+uint(spaceBuf.Len()) <= lim {
spaceBuf.WriteTo(buf)
}
} else {
spaceBuf.WriteTo(buf)
wordBuf.WriteTo(buf)
}
return buf.String()
}

@@ -923,6 +923,9 @@ github.com/mitchellh/copystructure
# github.com/mitchellh/go-homedir v1.1.0
## explicit
github.com/mitchellh/go-homedir
# github.com/mitchellh/go-wordwrap v1.0.0
## explicit
github.com/mitchellh/go-wordwrap
# github.com/mitchellh/mapstructure v1.5.0
## explicit; go 1.14
github.com/mitchellh/mapstructure
