mirror of https://github.com/grafana/loki
Add configuration documentation generation tool (#7916)
**What this PR does / why we need it**:

Add a tool that generates the configuration flags documentation from the flag properties defined at registration time in the code. The tool is based on the [Mimir doc generation tool](https://github.com/grafana/mimir/tree/main/tools/doc-generator) and adapted to Loki's configuration specifics.

Prior to this PR, the configuration flags documentation was dispersed across two sources:

* [_index.md](pull/7934/head5550cd65ec/docs/sources/configuration/_index.md)
* the configuration flags registration in the code

This meant that there was no single source of truth. In this PR, the previous `_index.md` file is replaced with the new file generated by the tool. The next step is adding a CI step that validates whether the `_index.md` file was generated according to the flag settings. This will be done in a follow-up PR.

**NOTE:** this is not a documentation update PR. Apart from some minor typo fixes, the documentation changes in the code were copied from the `_index.md` file.

**Which issue(s) this PR fixes**:

Fixes https://github.com/grafana/loki-private/issues/83

**Special notes for your reviewer**:

Files:

* [docs/sources/configuration/index.template](5550cd65ec/docs/sources/configuration/index.template): template used to generate the final configuration file
* [/docs/sources/configuration/_index.md](c32e5d0acb/docs/sources/configuration/_index.md): file generated by the tool
* `loki/pkg` directory files updated with up-to-date documentation from the `_index.md` file
* [tools/doc-generator](5550cd65ec/tools/doc-generator): directory with the documentation generation tool

**Checklist**
- [ ] Reviewed the `CONTRIBUTING.md` guide
- [ ] Documentation added
- [ ] Tests updated
- [ ] `CHANGELOG.md` updated
- [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md`
parent 4768b6d997
commit f93b91bfb5
@@ -0,0 +1,98 @@
---
description: Describes parameters used to configure Grafana Loki.
menuTitle: Configuration parameters
title: Grafana Loki configuration parameters
weight: 500
---

# Grafana Loki configuration parameters

{{ .GeneratedFileWarning }}

Grafana Loki is configured in a YAML file (usually referred to as `loki.yaml`),
which contains information on the Loki server and its individual components,
depending on which mode Loki is launched in.

Configuration examples can be found in the [Configuration Examples](examples/) document.

## Printing Loki Config At Runtime

If you pass Loki the flag `-print-config-stderr` or `-log-config-reverse-order` (or `-print-config-stderr=true`),
Loki will dump the entire config object it has created from the built-in defaults, combined first with
overrides from the config file, and second with overrides from flags.

The result is the value for every config object in the Loki config struct, which is very large.

Many values will not be relevant to your install, such as storage configs which you are not using and did not define;
this is expected, as every option has a default value whether it is being used or not.

This config is what Loki will use to run. It can be invaluable for debugging issues related to configuration and
is especially useful in making sure your config files and flags are being read and loaded properly.

`-print-config-stderr` is nice when running Loki directly, e.g. `./loki`, as you can get a quick output of the entire Loki config.

`-log-config-reverse-order` is the flag we run Loki with in all our environments. The config entries are reversed, so
that the order of configs reads correctly top to bottom when viewed in Grafana's Explore.

## Reload At Runtime

Promtail can reload its configuration at runtime. If the new configuration
is not well-formed, the changes will not be applied.
A configuration reload is triggered by sending a `SIGHUP` to the Promtail process or
sending an HTTP POST request to the `/reload` endpoint (when the `--server.enable-runtime-reload` flag is enabled).

## Configuration File Reference

To specify which configuration file to load, pass the `-config.file` flag at the
command line. The value can be a list of comma-separated paths; the first
file that exists will be used.
If no `-config.file` argument is specified, Loki will look for `config.yaml` in the
current working directory and in the `config/` subdirectory and try to use that.

The file is written in [YAML
format](https://en.wikipedia.org/wiki/YAML), defined by the schema below.
Brackets indicate that a parameter is optional. For non-list parameters the
value is set to the specified default.

### Use environment variables in the configuration

> **Note:** This feature is only available in Loki 2.1+.

You can use environment variable references in the configuration file to set values that need to be configurable during deployment.
To do this, pass `-config.expand-env=true` and use:

```
${VAR}
```

Where `VAR` is the name of the environment variable.

Each variable reference is replaced at startup by the value of the environment variable.
The replacement is case-sensitive and occurs before the YAML file is parsed.
References to undefined variables are replaced by empty strings unless you specify a default value or custom error text.

To specify a default value, use:

```
${VAR:-default_value}
```

Where `default_value` is the value to use if the environment variable is undefined.

Pass the `-config.expand-env` flag at the command line to enable this way of setting configs.
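
For example, a configuration fragment that uses environment variable references might look like the following sketch. The field names and variable names here are only illustrative; any value in the file can reference a variable.

```yaml
# Start Loki with: ./loki -config.expand-env=true -config.file=loki.yaml
server:
  # Falls back to 3100 when LOKI_HTTP_PORT is not set.
  http_listen_port: ${LOKI_HTTP_PORT:-3100}

common:
  storage:
    s3:
      # Both values are expanded from the environment before the YAML is parsed.
      access_key_id: ${AWS_ACCESS_KEY_ID}
      secret_access_key: ${AWS_SECRET_ACCESS_KEY}
```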

### Generic placeholders

- `<boolean>` : a boolean that can take the values `true` or `false`
- `<int>` : any integer matching the regular expression `[1-9]+[0-9]*`
- `<duration>` : a duration matching the regular expression `[0-9]+(ns|us|µs|ms|[smh])`
- `<labelname>` : a string matching the regular expression `[a-zA-Z_][a-zA-Z0-9_]*`
- `<labelvalue>` : a string of Unicode characters
- `<filename>` : a valid path relative to the current working directory or an absolute path
- `<host>` : a valid string consisting of a hostname or IP followed by an optional port number
- `<string>` : a string
- `<secret>` : a string that represents a secret, such as a password

### Supported contents and default values of `loki.yaml`

{{ .ConfigFile }}
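The generated reference renders each block as a YAML snippet in which every field is annotated with its description, its matching CLI flag, and its default value. As a rough sketch of the shape of that output (the single field shown here is illustrative, not the complete `server` block):

```yaml
# Configures the server of the launched module(s).
server:
  # HTTP server listen port.
  # CLI flag: -server.http-listen-port
  [http_listen_port: <int> | default = 3100]
```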
@@ -0,0 +1,185 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/cortexproject/cortex/blob/master/tools/doc-generator/main.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Cortex Authors.

package main

import (
	"flag"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"text/template"

	"github.com/grafana/loki/pkg/loki"
	"github.com/grafana/loki/tools/doc-generator/parse"
)

const (
	maxLineWidth = 80
	tabWidth     = 2
)

func removeFlagPrefix(block *parse.ConfigBlock, prefix string) {
	for _, entry := range block.Entries {
		switch entry.Kind {
		case parse.KindBlock:
			// Skip root blocks
			if !entry.Root {
				removeFlagPrefix(entry.Block, prefix)
			}
		case parse.KindField:
			if strings.HasPrefix(entry.FieldFlag, prefix) {
				entry.FieldFlag = "<prefix>" + entry.FieldFlag[len(prefix):]
			}
		}
	}
}

func annotateFlagPrefix(blocks []*parse.ConfigBlock) {
	// Find duplicated blocks
	groups := map[string][]*parse.ConfigBlock{}
	for _, block := range blocks {
		groups[block.Name] = append(groups[block.Name], block)
	}

	// For each duplicated block, we need to fix the CLI flags, because
	// in the documentation each block will be displayed only once but
	// since they're duplicated they will have a different CLI flag
	// prefix, which we want to correctly document.
	for _, group := range groups {
		if len(group) == 1 {
			continue
		}

		// We need to find the CLI flags prefix of each config block. To do it,
		// we pick the first entry from each config block and then find the
		// different prefix across all of them.
		var flags []string
		for _, block := range group {
			for _, entry := range block.Entries {
				if entry.Kind == parse.KindField {
					if len(entry.FieldFlag) > 0 {
						flags = append(flags, entry.FieldFlag)
					}
					break
				}
			}
		}

		var allPrefixes []string
		for i, prefix := range parse.FindFlagsPrefix(flags) {
			if len(prefix) > 0 {
				group[i].FlagsPrefix = prefix
				allPrefixes = append(allPrefixes, prefix)
			}
		}

		// Store all found prefixes into each block so that when we generate the
		// markdown we also know which are all the prefixes for each root block.
		for _, block := range group {
			block.FlagsPrefixes = allPrefixes
		}
	}

	// Finally, we can remove the CLI flags prefix from the blocks
	// which have one annotated.
	for _, block := range blocks {
		if block.FlagsPrefix != "" {
			removeFlagPrefix(block, block.FlagsPrefix)
		}
	}
}

func generateBlocksMarkdown(blocks []*parse.ConfigBlock) string {
	md := &markdownWriter{}
	md.writeConfigDoc(blocks)
	return md.string()
}

func generateBlockMarkdown(blocks []*parse.ConfigBlock, blockName, fieldName string) string {
	// Look for the requested block.
	for _, block := range blocks {
		if block.Name != blockName {
			continue
		}

		md := &markdownWriter{}

		// Wrap the root block with another block, so that we can show the name of the
		// root field containing the block specs.
		md.writeConfigBlock(&parse.ConfigBlock{
			Name: blockName,
			Desc: block.Desc,
			Entries: []*parse.ConfigEntry{
				{
					Kind:      parse.KindBlock,
					Name:      fieldName,
					Required:  true,
					Block:     block,
					BlockDesc: "",
					Root:      false,
				},
			},
		})

		return md.string()
	}

	// If the block has not been found, we return an empty string.
	return ""
}

func main() {
	// Parse the generator flags.
	flag.Parse()
	if flag.NArg() != 1 {
		fmt.Fprintf(os.Stderr, "Usage: doc-generator template-file")
		os.Exit(1)
	}

	templatePath := flag.Arg(0)

	// In order to match YAML config fields with CLI flags, we map
	// the memory address of the CLI flag variables and match them with
	// the config struct fields' addresses.
	cfg := &loki.Config{}
	flags := parse.Flags(cfg)

	// Parse the config, mapping each config field with the related CLI flag.
	blocks, err := parse.Config(cfg, flags, parse.RootBlocks)
	if err != nil {
		fmt.Fprintf(os.Stderr, "An error occurred while generating the doc: %s\n", err.Error())
		os.Exit(1)
	}

	// Annotate the flags prefix for each root block, and remove the
	// prefix wherever encountered in the config blocks.
	annotateFlagPrefix(blocks)

	// Generate documentation markdown.
	data := struct {
		ConfigFile           string
		GeneratedFileWarning string
	}{
		GeneratedFileWarning: "<!-- DO NOT EDIT THIS FILE - This file has been automatically generated from its .template -->",
		ConfigFile:           generateBlocksMarkdown(blocks),
	}

	// Load the template file.
	tpl := template.New(filepath.Base(templatePath))

	tpl, err = tpl.ParseFiles(templatePath)
	if err != nil {
		fmt.Fprintf(os.Stderr, "An error occurred while loading the template %s: %s\n", templatePath, err.Error())
		os.Exit(1)
	}

	// Execute the template to inject generated doc.
	if err := tpl.Execute(os.Stdout, data); err != nil {
		fmt.Fprintf(os.Stderr, "An error occurred while executing the template %s: %s\n", templatePath, err.Error())
		os.Exit(1)
	}
}
@ -0,0 +1,645 @@ |
||||
// SPDX-License-Identifier: AGPL-3.0-only
|
||||
// Provenance-includes-location: https://github.com/cortexproject/cortex/blob/master/tools/doc-generator/parser.go
|
||||
// Provenance-includes-license: Apache-2.0
|
||||
// Provenance-includes-copyright: The Cortex Authors.
|
||||
|
||||
package parse |
||||
|
||||
import ( |
||||
"flag" |
||||
"fmt" |
||||
"net/url" |
||||
"reflect" |
||||
"strings" |
||||
"time" |
||||
"unicode" |
||||
|
||||
"github.com/grafana/dskit/flagext" |
||||
"github.com/grafana/regexp" |
||||
"github.com/pkg/errors" |
||||
"github.com/prometheus/common/model" |
||||
prometheus_config "github.com/prometheus/prometheus/config" |
||||
"github.com/prometheus/prometheus/model/relabel" |
||||
"github.com/weaveworks/common/logging" |
||||
|
||||
"github.com/grafana/loki/pkg/ruler/util" |
||||
storage_config "github.com/grafana/loki/pkg/storage/config" |
||||
util_validation "github.com/grafana/loki/pkg/util/validation" |
||||
"github.com/grafana/loki/pkg/validation" |
||||
) |
||||
|
||||
var ( |
||||
yamlFieldNameParser = regexp.MustCompile("^[^,]+") |
||||
yamlFieldInlineParser = regexp.MustCompile("^[^,]*,inline$") |
||||
) |
||||
|
||||
// ExamplerConfig can be implemented by configs to provide examples.
|
||||
// If string is non-empty, it will be added as comment.
|
||||
// If yaml value is non-empty, it will be marshaled as yaml under the same key as it would appear in config.
|
||||
type ExamplerConfig interface { |
||||
ExampleDoc() (comment string, yaml interface{}) |
||||
} |
||||
|
||||
type FieldExample struct { |
||||
Comment string |
||||
Yaml interface{} |
||||
} |
||||
|
||||
type ConfigBlock struct { |
||||
Name string |
||||
Desc string |
||||
Entries []*ConfigEntry |
||||
FlagsPrefix string |
||||
FlagsPrefixes []string |
||||
} |
||||
|
||||
func (b *ConfigBlock) Add(entry *ConfigEntry) { |
||||
b.Entries = append(b.Entries, entry) |
||||
} |
||||
|
||||
type EntryKind string |
||||
|
||||
const ( |
||||
fieldString = "string" |
||||
fieldRelabelConfig = "relabel_config..." |
||||
) |
||||
|
||||
const ( |
||||
KindBlock EntryKind = "block" |
||||
KindField EntryKind = "field" |
||||
KindSlice EntryKind = "slice" |
||||
KindMap EntryKind = "map" |
||||
) |
||||
|
||||
type ConfigEntry struct { |
||||
Kind EntryKind |
||||
Name string |
||||
Required bool |
||||
|
||||
// In case the Kind is KindBlock
|
||||
Block *ConfigBlock |
||||
BlockDesc string |
||||
Root bool |
||||
|
||||
// In case the Kind is KindField
|
||||
FieldFlag string |
||||
FieldDesc string |
||||
FieldType string |
||||
FieldDefault string |
||||
FieldExample *FieldExample |
||||
|
||||
// In case the Kind is KindMap or KindSlice
|
||||
Element *ConfigBlock |
||||
} |
||||
|
||||
func (e ConfigEntry) Description() string { |
||||
return e.FieldDesc |
||||
} |
||||
|
||||
type RootBlock struct { |
||||
Name string |
||||
Desc string |
||||
StructType reflect.Type |
||||
} |
||||
|
||||
func Flags(cfg flagext.Registerer) map[uintptr]*flag.Flag { |
||||
fs := flag.NewFlagSet("", flag.PanicOnError) |
||||
cfg.RegisterFlags(fs) |
||||
|
||||
flags := map[uintptr]*flag.Flag{} |
||||
fs.VisitAll(func(f *flag.Flag) { |
||||
// Skip deprecated flags
|
||||
if f.Value.String() == "deprecated" { |
||||
return |
||||
} |
||||
|
||||
ptr := reflect.ValueOf(f.Value).Pointer() |
||||
flags[ptr] = f |
||||
}) |
||||
|
||||
return flags |
||||
} |
||||
|
||||
// Config returns a slice of ConfigBlocks. The first ConfigBlock is a recursively expanded cfg.
|
||||
// The remaining entries in the slice are all (root or not) ConfigBlocks.
|
||||
func Config(cfg interface{}, flags map[uintptr]*flag.Flag, rootBlocks []RootBlock) ([]*ConfigBlock, error) { |
||||
return config(nil, cfg, flags, rootBlocks) |
||||
} |
||||
|
||||
func config(block *ConfigBlock, cfg interface{}, flags map[uintptr]*flag.Flag, rootBlocks []RootBlock) ([]*ConfigBlock, error) { |
||||
var blocks []*ConfigBlock |
||||
|
||||
// If the input block is nil it means we're generating the doc for the top-level block
|
||||
if block == nil { |
||||
block = &ConfigBlock{} |
||||
blocks = append(blocks, block) |
||||
} |
||||
|
||||
// The input config is expected to be addressable.
|
||||
if reflect.TypeOf(cfg).Kind() != reflect.Ptr { |
||||
t := reflect.TypeOf(cfg) |
||||
return nil, fmt.Errorf("%s is a %s while a %s is expected", t, t.Kind(), reflect.Ptr) |
||||
} |
||||
|
||||
// The input config is expected to be a pointer to struct.
|
||||
v := reflect.ValueOf(cfg).Elem() |
||||
t := v.Type() |
||||
|
||||
if v.Kind() != reflect.Struct { |
||||
return nil, fmt.Errorf("%s is a %s while a %s is expected", v, v.Kind(), reflect.Struct) |
||||
} |
||||
|
||||
for i := 0; i < t.NumField(); i++ { |
||||
field := t.Field(i) |
||||
fieldValue := v.FieldByIndex(field.Index) |
||||
|
||||
// Skip fields explicitly marked as "hidden" in the doc
|
||||
if isFieldHidden(field) { |
||||
continue |
||||
} |
||||
|
||||
// Skip fields not exported via yaml (unless they're inline)
|
||||
fieldName := getFieldName(field) |
||||
if fieldName == "" && !isFieldInline(field) { |
||||
continue |
||||
} |
||||
|
||||
// Skip field types which are non-configurable
|
||||
if field.Type.Kind() == reflect.Func { |
||||
continue |
||||
} |
||||
|
||||
// Skip deprecated fields we're still keeping for backward compatibility
|
||||
// reasons (by convention we prefix them by UnusedFlag)
|
||||
if strings.HasPrefix(field.Name, "UnusedFlag") { |
||||
continue |
||||
} |
||||
|
||||
// Handle custom fields in vendored libs upon which we have no control.
|
||||
fieldEntry, err := getCustomFieldEntry(cfg, field, fieldValue, flags) |
||||
if err != nil { |
||||
return nil, err |
||||
} |
||||
if fieldEntry != nil { |
||||
block.Add(fieldEntry) |
||||
continue |
||||
} |
||||
|
||||
// Recursively re-iterate if it's a struct, and it's not a custom type.
|
||||
if _, custom := getCustomFieldType(field.Type); (field.Type.Kind() == reflect.Struct || field.Type.Kind() == reflect.Ptr) && !custom { |
||||
// Check whether the sub-block is a root config block
|
||||
rootName, rootDesc, isRoot := isRootBlock(field.Type, rootBlocks) |
||||
|
||||
// Since we're going to recursively iterate, we need to create a new sub
|
||||
// block and pass it to the doc generation function.
|
||||
var subBlock *ConfigBlock |
||||
|
||||
if !isFieldInline(field) { |
||||
var blockName string |
||||
var blockDesc string |
||||
|
||||
if isRoot { |
||||
blockName = rootName |
||||
|
||||
// Honor the custom description if available.
|
||||
blockDesc = getFieldDescription(cfg, field, rootDesc) |
||||
} else { |
||||
blockName = fieldName |
||||
blockDesc = getFieldDescription(cfg, field, "") |
||||
} |
||||
|
||||
subBlock = &ConfigBlock{ |
||||
Name: blockName, |
||||
Desc: blockDesc, |
||||
} |
||||
|
||||
block.Add(&ConfigEntry{ |
||||
Kind: KindBlock, |
||||
Name: fieldName, |
||||
Required: isFieldRequired(field), |
||||
Block: subBlock, |
||||
BlockDesc: blockDesc, |
||||
Root: isRoot, |
||||
}) |
||||
|
||||
if isRoot { |
||||
blocks = append(blocks, subBlock) |
||||
} |
||||
} else { |
||||
subBlock = block |
||||
} |
||||
|
||||
if field.Type.Kind() == reflect.Ptr { |
||||
// If this is a pointer, it's probably nil, so we initialize it.
|
||||
fieldValue = reflect.New(field.Type.Elem()) |
||||
} else if field.Type.Kind() == reflect.Struct { |
||||
fieldValue = fieldValue.Addr() |
||||
} |
||||
|
||||
// Recursively generate the doc for the sub-block
|
||||
otherBlocks, err := config(subBlock, fieldValue.Interface(), flags, rootBlocks) |
||||
if err != nil { |
||||
return nil, err |
||||
} |
||||
|
||||
blocks = append(blocks, otherBlocks...) |
||||
continue |
||||
} |
||||
|
||||
var ( |
||||
element *ConfigBlock |
||||
kind = KindField |
||||
) |
||||
{ |
||||
// Add ConfigBlock for slices only if the field isn't a custom type,
|
||||
// which shouldn't be inspected because doesn't have YAML tags, flag registrations, etc.
|
||||
_, isCustomType := getFieldCustomType(field.Type) |
||||
isSliceOfStructs := field.Type.Kind() == reflect.Slice && (field.Type.Elem().Kind() == reflect.Struct || field.Type.Elem().Kind() == reflect.Ptr) |
||||
if !isCustomType && isSliceOfStructs { |
||||
element = &ConfigBlock{ |
||||
Name: fieldName, |
||||
Desc: getFieldDescription(cfg, field, ""), |
||||
} |
||||
kind = KindSlice |
||||
|
||||
_, err = config(element, reflect.New(field.Type.Elem()).Interface(), flags, rootBlocks) |
||||
if err != nil { |
||||
return nil, errors.Wrapf(err, "couldn't inspect slice, element_type=%s", field.Type.Elem()) |
||||
} |
||||
} |
||||
} |
||||
|
||||
fieldType, err := getFieldType(field.Type) |
||||
if err != nil { |
||||
return nil, errors.Wrapf(err, "config=%s.%s", t.PkgPath(), t.Name()) |
||||
} |
||||
|
||||
fieldFlag, err := getFieldFlag(field, fieldValue, flags) |
||||
if err != nil { |
||||
return nil, errors.Wrapf(err, "config=%s.%s", t.PkgPath(), t.Name()) |
||||
} |
||||
if fieldFlag == nil { |
||||
block.Add(&ConfigEntry{ |
||||
Kind: kind, |
||||
Name: fieldName, |
||||
Required: isFieldRequired(field), |
||||
FieldDesc: getFieldDescription(cfg, field, ""), |
||||
FieldType: fieldType, |
||||
FieldExample: getFieldExample(fieldName, field.Type), |
||||
Element: element, |
||||
}) |
||||
continue |
||||
} |
||||
|
||||
block.Add(&ConfigEntry{ |
||||
Kind: kind, |
||||
Name: fieldName, |
||||
Required: isFieldRequired(field), |
||||
FieldFlag: fieldFlag.Name, |
||||
FieldDesc: getFieldDescription(cfg, field, fieldFlag.Usage), |
||||
FieldType: fieldType, |
||||
FieldDefault: getFieldDefault(field, fieldFlag.DefValue), |
||||
FieldExample: getFieldExample(fieldName, field.Type), |
||||
Element: element, |
||||
}) |
||||
} |
||||
|
||||
return blocks, nil |
||||
} |
||||
|
||||
func getFieldName(field reflect.StructField) string { |
||||
name := field.Name |
||||
tag := field.Tag.Get("yaml") |
||||
|
||||
// If the tag is not specified, then an exported field can be
|
||||
// configured via the field name (lowercase), while an unexported
|
||||
// field can't be configured.
|
||||
if tag == "" { |
||||
if unicode.IsLower(rune(name[0])) { |
||||
return "" |
||||
} |
||||
|
||||
return strings.ToLower(name) |
||||
} |
||||
|
||||
// Parse the field name
|
||||
fieldName := yamlFieldNameParser.FindString(tag) |
||||
if fieldName == "-" { |
||||
return "" |
||||
} |
||||
|
||||
return fieldName |
||||
} |
||||
|
||||
func getFieldCustomType(t reflect.Type) (string, bool) { |
||||
// Handle custom data types used in the config
|
||||
switch t.String() { |
||||
case reflect.TypeOf(&url.URL{}).String(): |
||||
return "url", true |
||||
case reflect.TypeOf(time.Duration(0)).String(): |
||||
return "duration", true |
||||
case reflect.TypeOf(flagext.StringSliceCSV{}).String(): |
||||
return fieldString, true |
||||
case reflect.TypeOf(flagext.CIDRSliceCSV{}).String(): |
||||
return fieldString, true |
||||
case reflect.TypeOf([]*util.RelabelConfig{}).String(): |
||||
return fieldRelabelConfig, true |
||||
case reflect.TypeOf([]*relabel.Config{}).String(): |
||||
return fieldRelabelConfig, true |
||||
case reflect.TypeOf([]*util_validation.BlockedQuery{}).String(): |
||||
return "blocked_query...", true |
||||
case reflect.TypeOf([]*prometheus_config.RemoteWriteConfig{}).String(): |
||||
return "remote_write_config...", true |
||||
case reflect.TypeOf(storage_config.PeriodConfig{}).String(): |
||||
return "period_config", true |
||||
case reflect.TypeOf(validation.OverwriteMarshalingStringMap{}).String(): |
||||
return "headers", true |
||||
default: |
||||
return "", false |
||||
} |
||||
} |
||||
|
||||
func getFieldType(t reflect.Type) (string, error) { |
||||
if typ, isCustom := getFieldCustomType(t); isCustom { |
||||
return typ, nil |
||||
} |
||||
|
||||
// Fallback to auto-detection of built-in data types
|
||||
switch t.Kind() { |
||||
case reflect.Bool: |
||||
return "boolean", nil |
||||
case reflect.Int: |
||||
fallthrough |
||||
case reflect.Int8: |
||||
fallthrough |
||||
case reflect.Int16: |
||||
fallthrough |
||||
case reflect.Int32: |
||||
fallthrough |
||||
case reflect.Int64: |
||||
fallthrough |
||||
case reflect.Uint: |
||||
fallthrough |
||||
case reflect.Uint8: |
||||
fallthrough |
||||
case reflect.Uint16: |
||||
fallthrough |
||||
case reflect.Uint32: |
||||
fallthrough |
||||
case reflect.Uint64: |
||||
return "int", nil |
||||
case reflect.Float32: |
||||
fallthrough |
||||
case reflect.Float64: |
||||
return "float", nil |
||||
case reflect.String: |
||||
return fieldString, nil |
||||
case reflect.Slice: |
||||
// Get the type of elements
|
||||
elemType, err := getFieldType(t.Elem()) |
||||
if err != nil { |
||||
return "", err |
||||
} |
||||
return "list of " + elemType + "s", nil |
||||
case reflect.Map: |
||||
return fmt.Sprintf("map of %s to %s", t.Key(), t.Elem().String()), nil |
||||
case reflect.Struct: |
||||
return t.Name(), nil |
||||
case reflect.Ptr: |
||||
return getFieldType(t.Elem()) |
||||
case reflect.Interface: |
||||
return t.Name(), nil |
||||
default: |
||||
return "", fmt.Errorf("unsupported data type %s", t.Kind()) |
||||
} |
||||
} |
||||
|
||||
func getCustomFieldType(t reflect.Type) (string, bool) { |
||||
// Handle custom data types used in the config
|
||||
switch t.String() { |
||||
case reflect.TypeOf(&url.URL{}).String(): |
||||
return "url", true |
||||
case reflect.TypeOf(time.Duration(0)).String(): |
||||
return "duration", true |
||||
case reflect.TypeOf(flagext.StringSliceCSV{}).String(): |
||||
return fieldString, true |
||||
case reflect.TypeOf(flagext.CIDRSliceCSV{}).String(): |
||||
return fieldString, true |
||||
case reflect.TypeOf([]*relabel.Config{}).String(): |
||||
return fieldRelabelConfig, true |
||||
case reflect.TypeOf([]*util.RelabelConfig{}).String(): |
||||
return fieldRelabelConfig, true |
||||
case reflect.TypeOf(&prometheus_config.RemoteWriteConfig{}).String(): |
||||
return "remote_write_config...", true |
||||
case reflect.TypeOf(validation.OverwriteMarshalingStringMap{}).String(): |
||||
return "headers", true |
||||
default: |
||||
return "", false |
||||
} |
||||
} |
||||
|
||||
func getFieldFlag(field reflect.StructField, fieldValue reflect.Value, flags map[uintptr]*flag.Flag) (*flag.Flag, error) { |
||||
if isAbsentInCLI(field) { |
||||
return nil, nil |
||||
} |
||||
fieldPtr := fieldValue.Addr().Pointer() |
||||
fieldFlag, ok := flags[fieldPtr] |
||||
if !ok { |
||||
return nil, nil |
||||
} |
||||
|
||||
return fieldFlag, nil |
||||
} |
||||
|
||||
func getFieldExample(fieldKey string, fieldType reflect.Type) *FieldExample { |
||||
ex, ok := reflect.New(fieldType).Interface().(ExamplerConfig) |
||||
if !ok { |
||||
return nil |
||||
} |
||||
comment, yml := ex.ExampleDoc() |
||||
return &FieldExample{ |
||||
Comment: comment, |
||||
Yaml: map[string]interface{}{fieldKey: yml}, |
||||
} |
||||
} |
||||
|
||||
func getCustomFieldEntry(cfg interface{}, field reflect.StructField, fieldValue reflect.Value, flags map[uintptr]*flag.Flag) (*ConfigEntry, error) { |
||||
if field.Type == reflect.TypeOf(logging.Level{}) || field.Type == reflect.TypeOf(logging.Format{}) { |
||||
fieldFlag, err := getFieldFlag(field, fieldValue, flags) |
||||
if err != nil || fieldFlag == nil { |
||||
return nil, err |
||||
} |
||||
|
||||
return &ConfigEntry{ |
||||
Kind: KindField, |
||||
Name: getFieldName(field), |
||||
Required: isFieldRequired(field), |
||||
FieldFlag: fieldFlag.Name, |
||||
FieldDesc: getFieldDescription(cfg, field, fieldFlag.Usage), |
||||
FieldType: fieldString, |
||||
FieldDefault: getFieldDefault(field, fieldFlag.DefValue), |
||||
}, nil |
||||
} |
||||
if field.Type == reflect.TypeOf(flagext.URLValue{}) { |
||||
fieldFlag, err := getFieldFlag(field, fieldValue, flags) |
||||
if err != nil || fieldFlag == nil { |
||||
return nil, err |
||||
} |
||||
|
||||
return &ConfigEntry{ |
||||
Kind: KindField, |
||||
Name: getFieldName(field), |
||||
Required: isFieldRequired(field), |
||||
FieldFlag: fieldFlag.Name, |
||||
FieldDesc: getFieldDescription(cfg, field, fieldFlag.Usage), |
||||
FieldType: "url", |
||||
FieldDefault: getFieldDefault(field, fieldFlag.DefValue), |
||||
}, nil |
||||
} |
||||
if field.Type == reflect.TypeOf(flagext.Secret{}) { |
||||
fieldFlag, err := getFieldFlag(field, fieldValue, flags) |
||||
if err != nil || fieldFlag == nil { |
||||
return nil, err |
||||
} |
||||
|
||||
return &ConfigEntry{ |
||||
Kind: KindField, |
||||
Name: getFieldName(field), |
||||
Required: isFieldRequired(field), |
||||
FieldFlag: fieldFlag.Name, |
||||
FieldDesc: getFieldDescription(cfg, field, fieldFlag.Usage), |
||||
FieldType: fieldString, |
||||
FieldDefault: getFieldDefault(field, fieldFlag.DefValue), |
||||
}, nil |
||||
} |
||||
if field.Type == reflect.TypeOf(model.Duration(0)) { |
||||
fieldFlag, err := getFieldFlag(field, fieldValue, flags) |
||||
if err != nil || fieldFlag == nil { |
||||
return nil, err |
||||
} |
||||
|
||||
return &ConfigEntry{ |
||||
Kind: KindField, |
||||
Name: getFieldName(field), |
||||
Required: isFieldRequired(field), |
||||
FieldFlag: fieldFlag.Name, |
||||
FieldDesc: getFieldDescription(cfg, field, fieldFlag.Usage), |
||||
FieldType: "duration", |
||||
FieldDefault: getFieldDefault(field, fieldFlag.DefValue), |
||||
}, nil |
||||
} |
||||
if field.Type == reflect.TypeOf(flagext.Time{}) { |
||||
fieldFlag, err := getFieldFlag(field, fieldValue, flags) |
||||
if err != nil || fieldFlag == nil { |
||||
return nil, err |
||||
} |
||||
|
||||
return &ConfigEntry{ |
||||
Kind: KindField, |
||||
Name: getFieldName(field), |
||||
Required: isFieldRequired(field), |
||||
FieldFlag: fieldFlag.Name, |
||||
FieldDesc: getFieldDescription(cfg, field, fieldFlag.Usage), |
||||
FieldType: "time", |
||||
FieldDefault: getFieldDefault(field, fieldFlag.DefValue), |
||||
}, nil |
||||
} |
||||
|
||||
return nil, nil |
||||
} |
||||
|
||||
func getFieldDefault(field reflect.StructField, fallback string) string { |
||||
if v := getDocTagValue(field, "default"); v != "" { |
||||
return v |
||||
} |
||||
|
||||
return fallback |
||||
} |
||||
|
||||
func isFieldDeprecated(f reflect.StructField) bool { |
||||
return getDocTagFlag(f, "deprecated") |
||||
} |
||||
|
||||
func isFieldHidden(f reflect.StructField) bool { |
||||
return getDocTagFlag(f, "hidden") |
||||
} |
||||
|
||||
func isAbsentInCLI(f reflect.StructField) bool { |
||||
return getDocTagFlag(f, "nocli") |
||||
} |
||||
|
||||
func isFieldRequired(f reflect.StructField) bool { |
||||
return getDocTagFlag(f, "required") |
||||
} |
||||
|
||||
func isFieldInline(f reflect.StructField) bool { |
||||
return yamlFieldInlineParser.MatchString(f.Tag.Get("yaml")) |
||||
} |
||||
|
||||
func getFieldDescription(cfg interface{}, field reflect.StructField, fallback string) string { |
||||
// Set prefix
|
||||
prefix := "" |
||||
if isFieldDeprecated(field) { |
||||
prefix += "Deprecated: " |
||||
} |
||||
|
||||
if desc := getDocTagValue(field, "description"); desc != "" { |
||||
return prefix + desc |
||||
} |
||||
|
||||
if methodName := getDocTagValue(field, "description_method"); methodName != "" { |
||||
structRef := reflect.ValueOf(cfg) |
||||
|
||||
if method, ok := structRef.Type().MethodByName(methodName); ok { |
||||
out := method.Func.Call([]reflect.Value{structRef}) |
||||
if len(out) == 1 { |
||||
return prefix + out[0].String() |
||||
} |
||||
} |
||||
} |
||||
|
||||
return prefix + fallback |
||||
} |
||||
|
||||
func isRootBlock(t reflect.Type, rootBlocks []RootBlock) (string, string, bool) { |
||||
for _, rootBlock := range rootBlocks { |
||||
if t == rootBlock.StructType { |
||||
return rootBlock.Name, rootBlock.Desc, true |
||||
} |
||||
} |
||||
|
||||
return "", "", false |
||||
} |
||||
|
||||
func getDocTagFlag(f reflect.StructField, name string) bool { |
||||
cfg := parseDocTag(f) |
||||
_, ok := cfg[name] |
||||
return ok |
||||
} |
||||
|
||||
func getDocTagValue(f reflect.StructField, name string) string { |
||||
cfg := parseDocTag(f) |
||||
return cfg[name] |
||||
} |
||||
|
||||
func parseDocTag(f reflect.StructField) map[string]string { |
||||
cfg := map[string]string{} |
||||
tag := f.Tag.Get("doc") |
||||
|
||||
if tag == "" { |
||||
return cfg |
||||
} |
||||
|
||||
for _, entry := range strings.Split(tag, "|") { |
||||
parts := strings.SplitN(entry, "=", 2) |
||||
|
||||
switch len(parts) { |
||||
case 1: |
||||
cfg[parts[0]] = "" |
||||
case 2: |
||||
cfg[parts[0]] = parts[1] |
||||
} |
||||
} |
||||
|
||||
return cfg |
||||
} |
||||
@ -0,0 +1,224 @@ |
||||
// SPDX-License-Identifier: AGPL-3.0-only
|
||||
|
||||
package parse |
||||
|
||||
import ( |
||||
"reflect" |
||||
|
||||
"github.com/grafana/dskit/crypto/tls" |
||||
"github.com/grafana/dskit/grpcclient" |
||||
"github.com/grafana/dskit/kv/consul" |
||||
"github.com/grafana/dskit/kv/etcd" |
||||
"github.com/grafana/dskit/runtimeconfig" |
||||
"github.com/weaveworks/common/server" |
||||
|
||||
"github.com/grafana/loki/pkg/distributor" |
||||
"github.com/grafana/loki/pkg/ingester" |
||||
ingester_client "github.com/grafana/loki/pkg/ingester/client" |
||||
"github.com/grafana/loki/pkg/loki/common" |
||||
frontend "github.com/grafana/loki/pkg/lokifrontend" |
||||
"github.com/grafana/loki/pkg/querier" |
||||
"github.com/grafana/loki/pkg/querier/queryrange" |
||||
querier_worker "github.com/grafana/loki/pkg/querier/worker" |
||||
"github.com/grafana/loki/pkg/ruler" |
||||
"github.com/grafana/loki/pkg/ruler/rulestore/local" |
||||
"github.com/grafana/loki/pkg/scheduler" |
||||
"github.com/grafana/loki/pkg/storage" |
||||
"github.com/grafana/loki/pkg/storage/chunk/cache" |
||||
"github.com/grafana/loki/pkg/storage/chunk/client/aws" |
||||
"github.com/grafana/loki/pkg/storage/chunk/client/azure" |
||||
"github.com/grafana/loki/pkg/storage/chunk/client/baidubce" |
||||
"github.com/grafana/loki/pkg/storage/chunk/client/gcp" |
||||
"github.com/grafana/loki/pkg/storage/chunk/client/openstack" |
||||
storage_config "github.com/grafana/loki/pkg/storage/config" |
||||
"github.com/grafana/loki/pkg/storage/stores/indexshipper/compactor" |
||||
"github.com/grafana/loki/pkg/storage/stores/series/index" |
||||
"github.com/grafana/loki/pkg/storage/stores/shipper/indexgateway" |
||||
"github.com/grafana/loki/pkg/tracing" |
||||
"github.com/grafana/loki/pkg/usagestats" |
||||
"github.com/grafana/loki/pkg/validation" |
||||
) |
||||
|
||||
var ( |
||||
// RootBlocks is an ordered list of root blocks with their associated descriptions.
|
||||
// The order is the same order that will follow the markdown generation.
|
||||
// Root blocks map to the configuration variables defined in Config of pkg/loki/loki.go
|
||||
RootBlocks = []RootBlock{ |
||||
{ |
||||
Name: "server", |
||||
StructType: reflect.TypeOf(server.Config{}), |
||||
Desc: "Configures the server of the launched module(s).", |
||||
}, |
||||
{ |
||||
Name: "distributor", |
||||
StructType: reflect.TypeOf(distributor.Config{}), |
||||
Desc: "Configures the distributor.", |
||||
}, |
||||
{ |
||||
Name: "querier", |
||||
StructType: reflect.TypeOf(querier.Config{}), |
||||
Desc: "Configures the querier. Only appropriate when running all modules or just the querier.", |
||||
}, |
||||
{ |
||||
Name: "query_scheduler", |
||||
StructType: reflect.TypeOf(scheduler.Config{}), |
||||
Desc: "The query_scheduler block configures the Loki query scheduler. When configured it separates the tenant query queues from the query-frontend.", |
||||
}, |
||||
{ |
||||
Name: "frontend", |
||||
StructType: reflect.TypeOf(frontend.Config{}), |
||||
Desc: "The frontend block configures the Loki query-frontend.", |
||||
}, |
||||
{ |
||||
Name: "query_range", |
||||
StructType: reflect.TypeOf(queryrange.Config{}), |
||||
Desc: "The query_range block configures the query splitting and caching in the Loki query-frontend.", |
||||
}, |
||||
{ |
||||
Name: "ruler", |
||||
StructType: reflect.TypeOf(ruler.Config{}), |
||||
Desc: "The ruler block configures the Loki ruler.", |
||||
}, |
||||
{ |
||||
Name: "ingester_client", |
||||
StructType: reflect.TypeOf(ingester_client.Config{}), |
||||
Desc: "The ingester_client block configures how the distributor will connect to ingesters. Only appropriate when running all components, the distributor, or the querier.", |
||||
}, |
||||
{ |
||||
Name: "ingester", |
||||
StructType: reflect.TypeOf(ingester.Config{}), |
||||
Desc: "The ingester block configures the ingester and how the ingester will register itself to a key value store.", |
||||
}, |
||||
{ |
||||
Name: "index_gateway", |
||||
StructType: reflect.TypeOf(indexgateway.Config{}), |
||||
Desc: "The index_gateway block configures the Loki index gateway server, responsible for serving index queries without the need to constantly interact with the object store.", |
||||
}, |
||||
{ |
||||
Name: "storage_config", |
||||
StructType: reflect.TypeOf(storage.Config{}), |
||||
Desc: "The storage_config block configures one of many possible stores for both the index and chunks. Which configuration to be picked should be defined in schema_config block.", |
||||
}, |
||||
{ |
||||
Name: "chunk_store_config", |
||||
StructType: reflect.TypeOf(storage_config.ChunkStoreConfig{}), |
||||
Desc: "The chunk_store_config block configures how chunks will be cached and how long to wait before saving them to the backing store.", |
||||
}, |
||||
{ |
||||
Name: "schema_config", |
||||
StructType: reflect.TypeOf(storage_config.SchemaConfig{}), |
||||
Desc: "Configures the chunk index schema and where it is stored.", |
||||
}, |
||||
{ |
||||
Name: "compactor", |
||||
StructType: reflect.TypeOf(compactor.Config{}), |
||||
Desc: "The compactor block configures the compactor component, which compacts index shards for performance.", |
||||
}, |
||||
{ |
||||
Name: "limits_config", |
||||
StructType: reflect.TypeOf(validation.Limits{}), |
||||
Desc: "The limits_config block configures global and per-tenant limits in Loki.", |
||||
}, |
||||
{ |
||||
Name: "frontend_worker", |
||||
StructType: reflect.TypeOf(querier_worker.Config{}), |
||||
Desc: "The frontend_worker configures the worker - running within the Loki querier - picking up and executing queries enqueued by the query-frontend.", |
||||
}, |
||||
{ |
||||
Name: "table_manager", |
||||
StructType: reflect.TypeOf(index.TableManagerConfig{}), |
||||
Desc: "The table_manager block configures the table manager for retention.", |
||||
}, |
||||
|
||||
{ |
||||
Name: "runtime_config", |
||||
StructType: reflect.TypeOf(runtimeconfig.Config{}), |
||||
Desc: "Configuration for 'runtime config' module, responsible for reloading runtime configuration file.", |
||||
}, |
||||
{ |
||||
Name: "tracing", |
||||
StructType: reflect.TypeOf(tracing.Config{}), |
||||
Desc: "Configuration for tracing.", |
||||
}, |
||||
{ |
||||
Name: "analytics", |
||||
StructType: reflect.TypeOf(usagestats.Config{}), |
||||
Desc: "Configuration for usage report.", |
||||
}, |
||||
|
||||
{ |
||||
Name: "common", |
||||
StructType: reflect.TypeOf(common.Config{}), |
||||
Desc: "Common configuration to be shared between multiple modules. If a more specific configuration is given in other sections, the related configuration within this section will be ignored.", |
||||
}, |
||||
|
||||
// Non-root blocks
|
||||
// StoreConfig dskit type: https://github.com/grafana/dskit/blob/main/kv/client.go#L44-L52
|
||||
{ |
||||
Name: "consul", |
||||
StructType: reflect.TypeOf(consul.Config{}), |
||||
Desc: "Configuration for a Consul client. Only applies if store is consul.", |
||||
}, |
||||
{ |
||||
Name: "etcd", |
||||
StructType: reflect.TypeOf(etcd.Config{}), |
||||
Desc: "Configuration for an ETCD v3 client. Only applies if store is etcd.", |
||||
}, |
||||
// GRPC client
|
||||
{ |
||||
Name: "grpc_client", |
||||
StructType: reflect.TypeOf(grpcclient.Config{}), |
||||
Desc: "The grpc_client block configures the gRPC client used to communicate between two Loki components.", |
||||
}, |
||||
// TLS config
|
||||
{ |
||||
Name: "tls_config", |
||||
StructType: reflect.TypeOf(tls.ClientConfig{}), |
||||
Desc: "The TLS configuration.", |
||||
}, |
||||
// Cache config
|
||||
{ |
||||
Name: "cache_config", |
||||
StructType: reflect.TypeOf(cache.Config{}), |
||||
Desc: "The cache block configures the cache backend.", |
||||
}, |
||||
// Schema periodic config
|
||||
{ |
||||
Name: "period_config", |
||||
StructType: reflect.TypeOf(storage_config.PeriodConfig{}), |
||||
Desc: "The period_config block configures what index schemas should be used for from specific time periods.", |
||||
}, |
||||
|
||||
// Storage config
|
||||
{ |
||||
Name: "azure_storage_config", |
||||
StructType: reflect.TypeOf(azure.BlobStorageConfig{}), |
||||
Desc: "The azure_storage_config block configures the connection to Azure object storage backend.", |
||||
}, |
||||
{ |
||||
Name: "gcs_storage_config", |
||||
StructType: reflect.TypeOf(gcp.GCSConfig{}), |
||||
Desc: "The gcs_storage_config block configures the connection to Google Cloud Storage object storage backend.", |
||||
}, |
||||
{ |
||||
Name: "s3_storage_config", |
||||
StructType: reflect.TypeOf(aws.S3Config{}), |
||||
Desc: "The s3_storage_config block configures the connection to Amazon S3 object storage backend.", |
||||
}, |
||||
{ |
||||
Name: "bos_storage_config", |
||||
StructType: reflect.TypeOf(baidubce.BOSStorageConfig{}), |
||||
Desc: "The bos_storage_config block configures the connection to Baidu Object Storage (BOS) object storage backend.", |
||||
}, |
||||
{ |
||||
Name: "swift_storage_config", |
||||
StructType: reflect.TypeOf(openstack.SwiftConfig{}), |
||||
Desc: "The swift_storage_config block configures the connection to OpenStack Object Storage (Swift) object storage backend.", |
||||
}, |
||||
{ |
||||
Name: "local_storage_config", |
||||
StructType: reflect.TypeOf(local.Config{}), |
||||
Desc: "The local_storage_config block configures the usage of local file system as object storage backend.", |
||||
}, |
||||
} |
||||
) |
||||
@@ -0,0 +1,62 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/cortexproject/cortex/blob/master/tools/doc-generator/util.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Cortex Authors.

package parse

import (
	"math"
	"strings"
)

func FindFlagsPrefix(flags []string) []string {
	if len(flags) == 0 {
		return flags
	}

	// Split the input flags into tokens separated by "."
	// because we want to find the prefix where segments
	// are dot-separated.
	var tokens [][]string
	for _, flag := range flags {
		tokens = append(tokens, strings.Split(flag, "."))
	}

	// Find the shortest tokens.
	minLength := math.MaxInt32
	for _, t := range tokens {
		if len(t) < minLength {
			minLength = len(t)
		}
	}

	// We iterate backward to find common suffixes. Each time
	// a common suffix is found, we remove it from the tokens.
outer:
	for i := 0; i < minLength; i++ {
		lastToken := tokens[0][len(tokens[0])-1]

		// Interrupt if the last token is different across the flags.
		for _, t := range tokens {
			if t[len(t)-1] != lastToken {
				break outer
			}
		}

		// The suffix token is equal across all flags, so we
		// remove it from all of them and re-iterate.
		for i, t := range tokens {
			tokens[i] = t[:len(t)-1]
		}
	}

	// The remaining tokens are the different flag prefixes, which we can
	// now merge with the ".".
	var prefixes []string
	for _, t := range tokens {
		prefixes = append(prefixes, strings.Join(t, "."))
	}

	return prefixes
}
@@ -0,0 +1,52 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/cortexproject/cortex/blob/master/tools/doc-generator/util_test.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Cortex Authors.

package parse

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func Test_findFlagsPrefix(t *testing.T) {
	tests := []struct {
		input    []string
		expected []string
	}{
		{
			input:    []string{},
			expected: []string{},
		},
		{
			input:    []string{""},
			expected: []string{""},
		},
		{
			input:    []string{"", ""},
			expected: []string{"", ""},
		},
		{
			input:    []string{"foo", "foo", "foo"},
			expected: []string{"", "", ""},
		},
		{
			input:    []string{"ruler.endpoint", "alertmanager.endpoint"},
			expected: []string{"ruler", "alertmanager"},
		},
		{
			input:    []string{"ruler.endpoint.address", "alertmanager.endpoint.address"},
			expected: []string{"ruler", "alertmanager"},
		},
		{
			input:    []string{"ruler.first.address", "ruler.second.address"},
			expected: []string{"ruler.first", "ruler.second"},
		},
	}

	for _, test := range tests {
		assert.Equal(t, test.expected, FindFlagsPrefix(test.input))
	}
}
@ -0,0 +1,245 @@ |
||||
// SPDX-License-Identifier: AGPL-3.0-only
|
||||
// Provenance-includes-location: https://github.com/cortexproject/cortex/blob/master/tools/doc-generator/writer.go
|
||||
// Provenance-includes-license: Apache-2.0
|
||||
// Provenance-includes-copyright: The Cortex Authors.
|
||||
|
||||
package main |
||||
|
||||
import ( |
||||
"fmt" |
||||
"sort" |
||||
"strconv" |
||||
"strings" |
||||
|
||||
"github.com/grafana/regexp" |
||||
"github.com/mitchellh/go-wordwrap" |
||||
"gopkg.in/yaml.v3" |
||||
|
||||
"github.com/grafana/loki/tools/doc-generator/parse" |
||||
) |
||||
|
||||
type specWriter struct { |
||||
out strings.Builder |
||||
} |
||||
|
||||
func (w *specWriter) writeConfigBlock(b *parse.ConfigBlock, indent int) { |
||||
if len(b.Entries) == 0 { |
||||
return |
||||
} |
||||
|
||||
for i, entry := range b.Entries { |
||||
// Add a new line to separate from the previous entry
|
||||
if i > 0 { |
||||
w.out.WriteString("\n") |
||||
} |
||||
|
||||
w.writeConfigEntry(entry, indent) |
||||
} |
||||
} |
||||
|
||||
func (w *specWriter) writeConfigEntry(e *parse.ConfigEntry, indent int) { |
||||
if e.Kind == parse.KindBlock { |
||||
// If the block is a root block it will have its dedicated section in the doc,
|
||||
// so here we've just to write down the reference without re-iterating on it.
|
||||
if e.Root { |
||||
// Description
|
||||
w.writeComment(e.BlockDesc, indent, 0) |
||||
if e.Block.FlagsPrefix != "" { |
||||
w.writeComment(fmt.Sprintf("The CLI flags prefix for this block configuration is: %s", e.Block.FlagsPrefix), indent, 0) |
||||
} |
||||
|
||||
// Block reference without entries, because it's a root block
|
||||
w.out.WriteString(pad(indent) + "[" + e.Name + ": <" + e.Block.Name + ">]\n") |
||||
} else { |
||||
// Description
|
||||
w.writeComment(e.BlockDesc, indent, 0) |
||||
|
||||
// Name
|
||||
w.out.WriteString(pad(indent) + e.Name + ":\n") |
||||
|
||||
// Entries
|
||||
w.writeConfigBlock(e.Block, indent+tabWidth) |
||||
} |
||||
} |
||||
|
||||
if e.Kind == parse.KindField || e.Kind == parse.KindSlice || e.Kind == parse.KindMap { |
||||
// Description
|
||||
w.writeComment(e.Description(), indent, 0) |
||||
w.writeExample(e.FieldExample, indent) |
||||
w.writeFlag(e.FieldFlag, indent) |
||||
|
||||
// Specification
|
||||
fieldDefault := e.FieldDefault |
||||
if e.FieldType == "string" { |
||||
fieldDefault = strconv.Quote(fieldDefault) |
||||
} else if e.FieldType == "duration" { |
||||
fieldDefault = cleanupDuration(fieldDefault) |
||||
} |
||||
|
||||
if e.Required { |
||||
w.out.WriteString(pad(indent) + e.Name + ": <" + e.FieldType + "> | default = " + fieldDefault + "\n") |
||||
} else { |
||||
defaultValue := "" |
||||
if len(fieldDefault) > 0 { |
||||
defaultValue = " | default = " + fieldDefault |
||||
} |
||||
w.out.WriteString(pad(indent) + "[" + e.Name + ": <" + e.FieldType + ">" + defaultValue + "]\n") |
||||
} |
||||
} |
||||
} |
||||
|
||||
func (w *specWriter) writeFlag(name string, indent int) { |
||||
if name == "" { |
||||
return |
||||
} |
||||
|
||||
w.out.WriteString(pad(indent) + "# CLI flag: -" + name + "\n") |
||||
} |
||||
|
||||
func (w *specWriter) writeComment(comment string, indent, innerIndent int) { |
||||
if comment == "" { |
||||
return |
||||
} |
||||
|
||||
wrapped := wordwrap.WrapString(comment, uint(maxLineWidth-indent-innerIndent-2)) |
||||
w.writeWrappedString(wrapped, indent, innerIndent) |
||||
} |
||||
|
||||
func (w *specWriter) writeExample(example *parse.FieldExample, indent int) { |
||||
if example == nil { |
||||
return |
||||
} |
||||
|
||||
w.writeComment("Example:", indent, 0) |
||||
if example.Comment != "" { |
||||
w.writeComment(example.Comment, indent, 2) |
||||
} |
||||
|
||||
data, err := yaml.Marshal(example.Yaml) |
||||
if err != nil { |
||||
panic(fmt.Errorf("can't render example: %w", err)) |
||||
} |
||||
|
||||
w.writeWrappedString(string(data), indent, 2) |
||||
} |
||||
|
||||
func (w *specWriter) writeWrappedString(s string, indent, innerIndent int) { |
||||
lines := strings.Split(strings.TrimSpace(s), "\n") |
||||
for _, line := range lines { |
||||
w.out.WriteString(pad(indent) + "# " + pad(innerIndent) + line + "\n") |
||||
} |
||||
} |
||||
|
||||
func (w *specWriter) string() string { |
||||
return strings.TrimSpace(w.out.String()) |
||||
} |
||||
|
||||
type markdownWriter struct { |
||||
out strings.Builder |
||||
} |
||||
|
||||
func (w *markdownWriter) writeConfigDoc(blocks []*parse.ConfigBlock) { |
||||
// Deduplicate root blocks.
|
||||
uniqueBlocks := map[string]*parse.ConfigBlock{} |
||||
for _, block := range blocks { |
||||
uniqueBlocks[block.Name] = block |
||||
} |
||||
|
||||
// Generate the markdown, honoring the root blocks order.
|
||||
if topBlock, ok := uniqueBlocks[""]; ok { |
||||
w.writeConfigBlock(topBlock) |
||||
} |
||||
|
||||
for _, rootBlock := range parse.RootBlocks { |
||||
if block, ok := uniqueBlocks[rootBlock.Name]; ok { |
||||
// Keep the root block description.
|
||||
blockToWrite := *block |
||||
blockToWrite.Desc = rootBlock.Desc |
||||
|
||||
w.writeConfigBlock(&blockToWrite) |
||||
} |
||||
} |
||||
} |
||||
|
||||
func (w *markdownWriter) writeConfigBlock(block *parse.ConfigBlock) { |
||||
// Title
|
||||
if block.Name != "" { |
||||
w.out.WriteString("### " + block.Name + "\n") |
||||
w.out.WriteString("\n") |
||||
} |
||||
|
||||
// Description
|
||||
if block.Desc != "" { |
||||
desc := block.Desc |
||||
|
||||
// Wrap first instance of the config block name with backticks
|
||||
if block.Name != "" { |
||||
var matches int |
||||
nameRegexp := regexp.MustCompile(regexp.QuoteMeta(block.Name)) |
||||
desc = nameRegexp.ReplaceAllStringFunc(desc, func(input string) string { |
||||
if matches == 0 { |
||||
matches++ |
||||
return "`" + input + "`" |
||||
} |
||||
return input |
||||
}) |
||||
} |
||||
|
||||
// List of all prefixes used to reference this config block.
|
||||
if len(block.FlagsPrefixes) > 1 { |
||||
sortedPrefixes := sort.StringSlice(block.FlagsPrefixes) |
||||
sortedPrefixes.Sort() |
||||
|
||||
desc += " The supported CLI flags `<prefix>` used to reference this configuration block are:\n\n" |
||||
|
||||
for _, prefix := range sortedPrefixes { |
||||
if prefix == "" { |
||||
desc += "- _no prefix_\n" |
||||
} else { |
||||
desc += fmt.Sprintf("- `%s`\n", prefix) |
||||
} |
||||
} |
||||
|
||||
// Unfortunately the markdown compiler used by the website generator has a bug
|
||||
// when there's a list followed by a code block (no matter know many newlines
|
||||
// in between). To workaround, we add a non-breaking space.
|
||||
desc += "\n " |
||||
} |
||||
|
||||
w.out.WriteString(desc + "\n") |
||||
w.out.WriteString("\n") |
||||
} |
||||
|
||||
// Config specs
|
||||
spec := &specWriter{} |
||||
spec.writeConfigBlock(block, 0) |
||||
|
||||
w.out.WriteString("```yaml\n") |
||||
w.out.WriteString(spec.string() + "\n") |
||||
w.out.WriteString("```\n") |
||||
w.out.WriteString("\n") |
||||
} |
||||
|
||||
func (w *markdownWriter) string() string { |
||||
return strings.TrimSpace(w.out.String()) |
||||
} |
||||
|
||||
func pad(length int) string { |
||||
return strings.Repeat(" ", length) |
||||
} |
||||
|
||||
func cleanupDuration(value string) string { |
||||
// This is the list of suffixes to remove from the duration if they're not
|
||||
// the whole duration value.
|
||||
suffixes := []string{"0s", "0m"} |
||||
|
||||
for _, suffix := range suffixes { |
||||
re := regexp.MustCompile("(^.+\\D)" + suffix + "$") |
||||
|
||||
if groups := re.FindStringSubmatch(value); len(groups) == 2 { |
||||
value = groups[1] |
||||
} |
||||
} |
||||
|
||||
return value |
||||
} |
||||
@ -0,0 +1,21 @@ |
||||
The MIT License (MIT) |
||||
|
||||
Copyright (c) 2014 Mitchell Hashimoto |
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy |
||||
of this software and associated documentation files (the "Software"), to deal |
||||
in the Software without restriction, including without limitation the rights |
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell |
||||
copies of the Software, and to permit persons to whom the Software is |
||||
furnished to do so, subject to the following conditions: |
||||
|
||||
The above copyright notice and this permission notice shall be included in |
||||
all copies or substantial portions of the Software. |
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR |
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, |
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE |
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER |
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, |
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN |
||||
THE SOFTWARE. |
||||
@ -0,0 +1,39 @@ |
||||
# go-wordwrap |
||||
|
||||
`go-wordwrap` (Golang package: `wordwrap`) is a package for Go that |
||||
automatically wraps words into multiple lines. The primary use case for this |
||||
is in formatting CLI output, but of course word wrapping is a generally useful |
||||
thing to do. |
||||
|
||||
## Installation and Usage |
||||
|
||||
Install using `go get github.com/mitchellh/go-wordwrap`. |
||||
|
||||
Full documentation is available at |
||||
http://godoc.org/github.com/mitchellh/go-wordwrap |
||||
|
||||
Below is an example of its usage ignoring errors: |
||||
|
||||
```go |
||||
wrapped := wordwrap.WrapString("foo bar baz", 3) |
||||
fmt.Println(wrapped) |
||||
``` |
||||
|
||||
Would output: |
||||
|
||||
``` |
||||
foo |
||||
bar |
||||
baz |
||||
``` |
||||
|
||||
## Word Wrap Algorithm |
||||
|
||||
This library doesn't use any clever algorithm for word wrapping. The wrapping |
||||
is actually very naive: whenever there is whitespace or an explicit linebreak. |
||||
The goal of this library is for word wrapping CLI output, so the input is |
||||
typically pretty well controlled human language. Because of this, the naive |
||||
approach typically works just fine. |
||||
|
||||
In the future, we'd like to make the algorithm more advanced. We would do |
||||
so without breaking the API. |
||||
@ -0,0 +1,73 @@ |
||||
package wordwrap |
||||
|
||||
import ( |
||||
"bytes" |
||||
"unicode" |
||||
) |
||||
|
||||
// WrapString wraps the given string within lim width in characters.
|
||||
//
|
||||
// Wrapping is currently naive and only happens at white-space. A future
|
||||
// version of the library will implement smarter wrapping. This means that
|
||||
// pathological cases can dramatically reach past the limit, such as a very
|
||||
// long word.
|
||||
func WrapString(s string, lim uint) string { |
||||
// Initialize a buffer with a slightly larger size to account for breaks
|
||||
init := make([]byte, 0, len(s)) |
||||
buf := bytes.NewBuffer(init) |
||||
|
||||
var current uint |
||||
var wordBuf, spaceBuf bytes.Buffer |
||||
|
||||
for _, char := range s { |
||||
if char == '\n' { |
||||
if wordBuf.Len() == 0 { |
||||
if current+uint(spaceBuf.Len()) > lim { |
||||
current = 0 |
||||
} else { |
||||
current += uint(spaceBuf.Len()) |
||||
spaceBuf.WriteTo(buf) |
||||
} |
||||
spaceBuf.Reset() |
||||
} else { |
||||
current += uint(spaceBuf.Len() + wordBuf.Len()) |
||||
spaceBuf.WriteTo(buf) |
||||
spaceBuf.Reset() |
||||
wordBuf.WriteTo(buf) |
||||
wordBuf.Reset() |
||||
} |
||||
buf.WriteRune(char) |
||||
current = 0 |
||||
} else if unicode.IsSpace(char) { |
||||
if spaceBuf.Len() == 0 || wordBuf.Len() > 0 { |
||||
current += uint(spaceBuf.Len() + wordBuf.Len()) |
||||
spaceBuf.WriteTo(buf) |
||||
spaceBuf.Reset() |
||||
wordBuf.WriteTo(buf) |
||||
wordBuf.Reset() |
||||
} |
||||
|
||||
spaceBuf.WriteRune(char) |
||||
} else { |
||||
|
||||
wordBuf.WriteRune(char) |
||||
|
||||
if current+uint(spaceBuf.Len()+wordBuf.Len()) > lim && uint(wordBuf.Len()) < lim { |
||||
buf.WriteRune('\n') |
||||
current = 0 |
||||
spaceBuf.Reset() |
||||
} |
||||
} |
||||
} |
||||
|
||||
if wordBuf.Len() == 0 { |
||||
if current+uint(spaceBuf.Len()) <= lim { |
||||
spaceBuf.WriteTo(buf) |
||||
} |
||||
} else { |
||||
spaceBuf.WriteTo(buf) |
||||
wordBuf.WriteTo(buf) |
||||
} |
||||
|
||||
return buf.String() |
||||
} |
||||