* Approach bundling metadata along with samples and exemplars
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Add first test; rebase with main
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Alternative approach: bundle metadata in TimeSeries protobuf
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* update go mod to match main branch
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* fix after rebase
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* we're not going to modify the 1.X format anymore
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Modify AppendMetadata based on the fact that we will be putting metadata into
timeseries
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Rename enums for remote write versions to something that makes more
sense + remove the added `sendMetadata` flag.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* rename flag that enables writing of metadata records to the WAL
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* additional clean up
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* lint
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* fix usage of require.Len
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* some clean up from review comments
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* more review fixes
Signed-off-by: Callum Styan <callumstyan@gmail.com>
---------
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
Signed-off-by: Callum Styan <callumstyan@gmail.com>
Co-authored-by: Paschalis Tsilias <paschalist0@gmail.com>
| <code class="text-nowrap">--query.timeout</code> | Maximum time a query may take before being aborted. Use with server mode only. | `2m` |
| <code class="text-nowrap">--query.max-concurrency</code> | Maximum number of queries executed concurrently. Use with server mode only. | `20` |
| <code class="text-nowrap">--query.max-samples</code> | Maximum number of samples a single query can load into memory. Note that queries will fail if they try to load more samples than this into memory, so this also limits the number of samples a query can return. Use with server mode only. | `50000000` |
| <code class="text-nowrap">--enable-feature</code> | Comma separated feature names to enable. Valid options: agent, exemplar-storage, expand-external-labels, memory-snapshot-on-shutdown, promql-at-modifier, promql-negative-offset, promql-per-step-stats, promql-experimental-functions, remote-write-receiver (DEPRECATED), extra-scrape-metrics, new-service-discovery-manager, auto-gomaxprocs, no-default-scrape-port, native-histograms, otlp-write-receiver. See https://prometheus.io/docs/prometheus/latest/feature_flags/ for more details. | |
| <code class="text-nowrap">--enable-feature</code> | Comma separated feature names to enable. Valid options: agent, exemplar-storage, expand-external-labels, memory-snapshot-on-shutdown, promql-at-modifier, promql-negative-offset, promql-per-step-stats, promql-experimental-functions, remote-write-receiver (DEPRECATED), extra-scrape-metrics, new-service-discovery-manager, auto-gomaxprocs, no-default-scrape-port, native-histograms, otlp-write-receiver, metadata-wal-records. See https://prometheus.io/docs/prometheus/latest/feature_flags/ for more details. | |
| <code class="text-nowrap">--remote-write-format</code> | Remote write proto format to use. Valid options: 0 (1.0), 1 (reduced format), 3 (min64 format). | `0` |
| <code class="text-nowrap">--log.level</code> | Only log messages with the given severity or above. One of: [debug, info, warn, error] | `info` |
| <code class="text-nowrap">--log.format</code> | Output format of log messages. One of: [logfmt, json] | `logfmt` |
@@ -204,3 +204,10 @@ Enables ingestion of created timestamp. Created timestamps are injected as 0 val
Currently Prometheus supports created timestamps only on the traditional Prometheus Protobuf protocol (WIP for other protocols). As a result, when enabling this feature, the Prometheus protobuf scrape protocol will be prioritized (See `scrape_config.scrape_protocols` settings for more details).
Besides enabling this feature in Prometheus, created timestamps need to be exposed by the application being scraped.
## Metadata WAL Records
`--enable-feature=metadata-wal-records`
When enabled, Prometheus stores metadata in memory and keeps track of
metadata changes as WAL records on a per-series basis. This must be enabled if
you are also using remote write 2.0, as remote write 2.0 gathers metadata only from the WAL.
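As a rough illustration of what such a per-series record carries, here is a minimal sketch; the `seriesMetadata` struct and `encodeMetadataRecord` helper are hypothetical stand-ins, not the actual `tsdb/record` types or encoding.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// seriesMetadata is a hypothetical shape for a per-series metadata WAL record:
// the series reference plus the metadata fields remote write 2.0 wants to ship.
type seriesMetadata struct {
	Ref  uint64 // series reference the metadata belongs to
	Type string // e.g. "counter", "gauge"
	Unit string
	Help string
}

// encodeMetadataRecord packs one entry into a length-prefixed byte payload,
// roughly how a WAL record body could be laid out.
func encodeMetadataRecord(m seriesMetadata) []byte {
	var buf bytes.Buffer
	binary.Write(&buf, binary.BigEndian, m.Ref)
	for _, s := range []string{m.Type, m.Unit, m.Help} {
		binary.Write(&buf, binary.BigEndian, uint32(len(s)))
		buf.WriteString(s)
	}
	return buf.Bytes()
}

func main() {
	rec := encodeMetadataRecord(seriesMetadata{
		Ref:  42,
		Type: "counter",
		Unit: "seconds",
		Help: "Total seconds spent doing work.",
	})
	fmt.Printf("metadata record: %d bytes\n", len(rec))
}
```

Keeping metadata in its own record type is what lets remote write 2.0 replay it alongside series records instead of relying on the scrape-time metadata store.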
// metricTypeToMetricTypeProto transforms a Prometheus metricType into prompb metricType. Since the former is a string, we need to transform it to an enum.
// metricTypeToMetricTypeProtoV2 transforms a Prometheus metricType into writev2 metricType. Since the former is a string, we need to transform it to an enum.
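As an illustration of that string-to-enum translation, a minimal runnable sketch; `metricType` and its constants are illustrative stand-ins, not the generated prompb/writev2 enum values.

```go
package main

import (
	"fmt"
	"strings"
)

// metricType mirrors the idea behind the prompb/writev2 metric type enums;
// the constants here are illustrative, not the generated protobuf values.
type metricType int32

const (
	metricTypeUnknown metricType = iota
	metricTypeCounter
	metricTypeGauge
	metricTypeHistogram
	metricTypeSummary
)

// metricTypeFromString shows the translation the comments above describe:
// normalize the scraped metric type string and fall back to unknown.
func metricTypeFromString(t string) metricType {
	switch strings.ToUpper(t) {
	case "COUNTER":
		return metricTypeCounter
	case "GAUGE":
		return metricTypeGauge
	case "HISTOGRAM":
		return metricTypeHistogram
	case "SUMMARY":
		return metricTypeSummary
	default:
		return metricTypeUnknown
	}
}

func main() {
	fmt.Println(metricTypeFromString("counter")) // 1
	fmt.Println(metricTypeFromString("wibble"))  // 0 (unknown)
}
```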
// The current MetadataWatcher implementation is mutually exclusive
// with the new approach, which stores metadata as WAL records and
// ships them alongside series. If both mechanisms are set, the new one
// takes precedence by implicitly disabling the older one.
if t.mcfg.Send && t.rwFormat > Version1 {
	level.Warn(logger).Log("msg", "usage of 'metadata_config.send' is redundant when using remote write v2 (or higher) as metadata will always be gathered from the WAL and included for every series within each write request")
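A rough sketch of how that precedence could be applied; `metadataViaWatcher`, `rwVersion`, and the other names below are hypothetical stand-ins, not the actual QueueManager wiring.

```go
package main

import "fmt"

// Illustrative stand-ins: the real logic lives on the remote write queue and
// uses the project's Version* enum and go-kit logger.
type rwVersion int

const (
	v1 rwVersion = iota + 1
	v2
)

type metadataConfig struct {
	Send bool // metadata_config.send from the remote write config
}

// metadataViaWatcher decides whether the legacy MetadataWatcher should run.
// For v2 and above metadata always comes from the WAL, so the watcher is
// implicitly disabled and metadata_config.send only earns a warning.
func metadataViaWatcher(mcfg metadataConfig, format rwVersion) bool {
	if format > v1 {
		if mcfg.Send {
			fmt.Println("warning: 'metadata_config.send' is redundant for remote write v2 (or higher); metadata is gathered from the WAL for every series")
		}
		return false
	}
	return mcfg.Send
}

func main() {
	fmt.Println(metadataViaWatcher(metadataConfig{Send: true}, v2)) // false, plus warning
	fmt.Println(metadataViaWatcher(metadataConfig{Send: true}, v1)) // true
}
```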
// todo: change the rws.rwFormat to a queue config field
if rws.rwFormat > Version1 && !rws.metadataInWAL {
	return errors.New("invalid remote write configuration, if you are using remote write version 2.0 then the feature flag for metadata records in the WAL must be enabled")
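For context, a compact sketch of a validation helper in the spirit of the check above; `writeStorage`, its fields, and the constant values are simplified stand-ins, not Prometheus's actual types.

```go
package main

import (
	"errors"
	"fmt"
)

type remoteWriteFormat int

const (
	Version1 remoteWriteFormat = iota // remote write 1.0
	Version2                          // remote write 2.0, metadata per series
)

// writeStorage is a simplified stand-in for the struct the check above lives
// on: it knows the requested proto format and whether the metadata-wal-records
// feature flag is enabled.
type writeStorage struct {
	rwFormat      remoteWriteFormat
	metadataInWAL bool
}

// validate enforces the coupling described above: remote write 2.0 sources
// metadata only from the WAL, so metadata WAL records must be enabled.
func (rws *writeStorage) validate() error {
	if rws.rwFormat > Version1 && !rws.metadataInWAL {
		return errors.New("invalid remote write configuration, if you are using remote write version 2.0 then the feature flag for metadata records in the WAL must be enabled")
	}
	return nil
}

func main() {
	rws := &writeStorage{rwFormat: Version2, metadataInWAL: false}
	fmt.Println(rws.validate()) // prints the configuration error
}
```

With a check like this in place, a v2 sender cannot silently run without the metadata-wal-records feature flag enabled.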