Flag categorize labels on streams response (#10419)

We recently introduced support for ingesting and querying structured
metadata in Loki. This adds a new dimension to Loki's labels, since we
now arguably have three categories of labels: _stream_, _structured
metadata_, and _parsed_ labels.

Depending on the origin of the labels, they should be used in LogQL
expressions differently to achieve optimal performance. _stream_ labels
should be added to stream matchers, _structured metadata_ labels should
be used in a filter expression before any parsing expression, and
_parsed_ labels should be placed after the parser expression extracting
them.
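For example, using label names from the samples below (a `cluster` stream label, a `traceID` structured-metadata label, and a `level` label extracted by `logfmt`), an efficiently shaped query places each category accordingly:

```logql
{cluster="us-central"} | traceID="68810cf0c94bfcca" | logfmt | level="info"
```

The stream matcher narrows down streams up front, the structured-metadata filter runs before any parsing, and the `level` filter runs only after `logfmt` has extracted it.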

The Grafana UI has a hard time dealing with this same problem. Before
https://github.com/grafana/grafana/pull/73955, the filtering
functionality in Grafana was broken since it could not distinguish
between _stream_ and _structured metadata_ labels. Also, as soon as a
parser expression was added to the query, filters added by Grafana would
be appended to the end of the query regardless of the label category.
The PR above implements a workaround for this problem, but it needs a
better API on Loki's end to cover all corner cases.

Loki currently returns the following JSON for log queries:
```json
...
{
  "stream": {
    "cluster": "us-central",
    "container": "query-frontend",
    "namespace": "loki",
    "level": "info",
    "traceID": "68810cf0c94bfcca"
  },
  "values": [
    [
      "1693996529000222496",
      "1693996529000222496 aaaaaaaaa.....\n"
    ],
    ...
},
{
  "stream": {
    "cluster": "us-central",
    "container": "query-frontend",
    "namespace": "loki",
    "level": "debug",
    "traceID": "a7116cj54c4bjz8s"
  },
  "values": [
    [
      "1693996529000222497",
      "1693996529000222497 bbbbbbbbb.....\n"
    ],
    ...
},
...
```

As can be seen, there is no way to distinguish the category of each
label.

This PR introduces a new flag `X-Loki-Response-Encoding-Flags:
categorize-labels` that makes Loki return categorized labels as follows:

```json
...
{
  "stream": {
    "cluster": "us-central",
    "container": "query-frontend",
    "namespace": "loki"
  },
  "values": [
    [
      "1693996529000222496",
      "1693996529000222496 aaaaaaaaa.....\n",
      {
        "structuredMetadata": {
          "traceID": "68810cf0c94bfcca"
        },
        "parsed": {
          "level": "info"
        }
      }
    ],
    [
      "1693996529000222497",
      "1693996529000222497 bbbbbbbbb.....\n",
      {
        "structuredMetadata": {
          "traceID": "a7116cj54c4bjz8s"
        },
        "parsed": {
          "level": "debug"
        }
      }
    ],
    ...
},
...
```

Note that this PR only supports log queries, not metric queries. From a
UX perspective, being able to categorize labels in metric queries
doesn't have any benefit yet. Having said that, supporting this for
metric queries would require some minor refactoring on top of what has
been implemented here. If we decide to do that, I think we should do it
in a separate PR to avoid making this PR even larger.

I also decided to leave out support for Tail queries to avoid making
this PR even larger. Once this one gets merged, we can work to support
tailing.

---

**Note to reviewers**

This PR is huge since we need to forward categorized labels all over the
codebase (from parsing logs all the way to marshaling). Fortunately,
many of the changes come from updating tests and refactoring iterators.

Tested out in a dev cell with the query `{stream="stdout"} | label_format
new="text"`.
- Without the new flag:
```
$ http http://127.0.0.1:3100/loki/api/v1/query_range\?direction\=BACKWARD\&end\=1693996529322486000\&limit\=30\&query\=%7Bstream%3D%22stdout%22%7D+%7C+label_format+new%3D%22text%22\&start\=1693992929322486000 X-Scope-Orgid:REDACTED
{
    "data": {
        "result": [
            {
                "stream": {
                    "new": "text",
                    "pod": "loki-canary-986bd6f4b-xqmb7",
                    "stream": "stdout"
                },
                "values": [
                    [
                        "1693996529000222496",
                        "1693996529000222496 pppppppppppp...\n"
                    ],
                    [
                        "1693996528499160852",
                        "1693996528499160852 pppppppppppp...\n"
                    ],
...
```

- With the new flag:
```
$ http http://127.0.0.1:3100/loki/api/v1/query_range\?direction\=BACKWARD\&end\=1693996529322486000\&limit\=30\&query\=%7Bstream%3D%22stdout%22%7D+%7C+label_format+new%3D%22text%22\&start\=1693992929322486000 X-Scope-Orgid:REDACTED X-Loki-Response-Encoding-Flags:categorize-labels
{
    "data": {
        "encodingFlags": [
            "categorize-labels"
        ],
        "result": [
            {
                "stream": {
                    "pod": "loki-canary-986bd6f4b-xqmb7",
                    "stream": "stdout"
                },
                "values": [
                    [
                        "1693996529000222496",
                        "1693996529000222496 pppppppppppp...\n",
                        {
                            "parsed": {
                                "new": "text"
                            }
                        }
                    ],
                    [
                        "1693996528499160852",
                        "1693996528499160852 pppppppppppp...\n",
                        {
                            "parsed": {
                                "new": "text"
                            }
                        }
                    ],
...
```
Committed by Salva Corts (commit 52a3f16039, parent 60ea954f5d).
46 changed files (lines changed per file):

```
  2  go.mod
 69  integration/client/client.go
  2  integration/loki_micro_services_delete_test.go
293  integration/loki_micro_services_test.go
  2  pkg/chunkenc/dumb_chunk.go
  4  pkg/chunkenc/interface.go
 36  pkg/chunkenc/memchunk.go
129  pkg/chunkenc/memchunk_test.go
 26  pkg/chunkenc/unordered.go
 21  pkg/chunkenc/unordered_test.go
  3  pkg/compactor/retention/retention_test.go
 12  pkg/iter/entry_iterator.go
 70  pkg/loghttp/entry.go
  5  pkg/loghttp/query.go
 17  pkg/loghttp/query_test.go
 46  pkg/logql/engine.go
  4  pkg/logql/engine_test.go
  6  pkg/logql/log/fmt.go
  3  pkg/logql/log/ip.go
315  pkg/logql/log/labels.go
 86  pkg/logql/log/labels_test.go
  4  pkg/logql/log/metrics_extraction.go
 24  pkg/logql/log/parser.go
  3  pkg/logql/log/parser_test.go
  4  pkg/logql/log/pipeline.go
 95  pkg/logql/log/pipeline_test.go
  5  pkg/loki/modules.go
155  pkg/push/push.pb.go
  7  pkg/push/push.proto
 98  pkg/push/types.go
 16  pkg/push/types_test.go
  2  pkg/querier/http.go
 28  pkg/querier/queryrange/codec.go
237  pkg/querier/queryrange/codec_test.go
  4  pkg/querier/queryrange/serialize.go
  2  pkg/storage/lazy_chunk_test.go
113  pkg/util/httpreq/encoding_flags.go
  2  pkg/util/marshal/labels.go
 12  pkg/util/marshal/legacy/marshal_test.go
  8  pkg/util/marshal/marshal.go
676  pkg/util/marshal/marshal_test.go
 95  pkg/util/marshal/query.go
155  vendor/github.com/grafana/loki/pkg/push/push.pb.go
  7  vendor/github.com/grafana/loki/pkg/push/push.proto
 98  vendor/github.com/grafana/loki/pkg/push/types.go
  2  vendor/modules.txt
```

```diff
diff --git a/go.mod b/go.mod
@@ -123,7 +123,7 @@ require (
 	github.com/efficientgo/core v1.0.0-rc.2
 	github.com/fsnotify/fsnotify v1.6.0
 	github.com/gogo/googleapis v1.4.0
-	github.com/grafana/loki/pkg/push v0.0.0-20231017172654-cfc4f0e84adc
+	github.com/grafana/loki/pkg/push v0.0.0-20231023154132-0a7737e7c7eb
 	github.com/heroku/x v0.0.61
 	github.com/influxdata/tdigest v0.0.2-0.20210216194612-fc98d27c9e8b
 	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/translator/prometheus v0.86.0
```

```diff
diff --git a/integration/client/client.go b/integration/client/client.go
@@ -13,6 +13,7 @@ import (
 	"strings"
 	"time"
 
+	"github.com/buger/jsonparser"
 	"github.com/grafana/dskit/user"
 	"github.com/prometheus/prometheus/model/labels"
 	"go.opentelemetry.io/collector/pdata/pcommon"
@@ -335,10 +336,40 @@ func (c *Client) GetDeleteRequests() (DeleteRequests, error) {
 	return deleteReqs, nil
 }
 
+type Entry []string
+
+func (e *Entry) UnmarshalJSON(data []byte) error {
+	if *e == nil {
+		*e = make([]string, 0, 3)
+	}
+
+	var parseError error
+	_, err := jsonparser.ArrayEach(data, func(value []byte, t jsonparser.ValueType, _ int, _ error) {
+		// The TS and the lines are strings. The labels are a JSON object.
+		// but we will parse them as strings.
+		if t != jsonparser.String && t != jsonparser.Object {
+			parseError = jsonparser.MalformedStringError
+			return
+		}
+
+		v, err := jsonparser.ParseString(value)
+		if err != nil {
+			parseError = err
+			return
+		}
+
+		*e = append(*e, v)
+	})
+
+	if parseError != nil {
+		return parseError
+	}
+	return err
+}
+
 // StreamValues holds a label key value pairs for the Stream and a list of a list of values
 type StreamValues struct {
 	Stream map[string]string
-	Values [][]string
+	Values []Entry
 }
 
 // MatrixValues holds a label key value pairs for the metric and a list of a list of values
@@ -377,17 +408,19 @@ func (a *VectorValues) UnmarshalJSON(b []byte) error {
 
 // DataType holds the result type and a list of StreamValues
 type DataType struct {
-	ResultType string
-	Stream     []StreamValues
-	Matrix     []MatrixValues
-	Vector     []VectorValues
+	ResultType    string
+	Stream        []StreamValues
+	Matrix        []MatrixValues
+	Vector        []VectorValues
+	EncodingFlags []string
 }
 
 func (a *DataType) UnmarshalJSON(b []byte) error {
 	// get the result type
 	var s struct {
-		ResultType string          `json:"resultType"`
-		Result     json.RawMessage `json:"result"`
+		ResultType    string          `json:"resultType"`
+		EncodingFlags []string        `json:"encodingFlags"`
+		Result        json.RawMessage `json:"result"`
 	}
 	if err := json.Unmarshal(b, &s); err != nil {
 		return err
@@ -410,6 +443,7 @@ func (a *DataType) UnmarshalJSON(b []byte) error {
 		return fmt.Errorf("unknown result type %s", s.ResultType)
 	}
 	a.ResultType = s.ResultType
+	a.EncodingFlags = s.EncodingFlags
 	return nil
 }
@@ -434,12 +468,16 @@ type Rules struct {
 	Rules []interface{}
 }
 
+type Header struct {
+	Name, Value string
+}
+
 // RunRangeQuery runs a query and returns an error if anything went wrong
-func (c *Client) RunRangeQuery(ctx context.Context, query string) (*Response, error) {
+func (c *Client) RunRangeQuery(ctx context.Context, query string, extraHeaders ...Header) (*Response, error) {
 	ctx, cancelFunc := context.WithTimeout(ctx, requestTimeout)
 	defer cancelFunc()
 
-	buf, statusCode, err := c.run(ctx, c.rangeQueryURL(query))
+	buf, statusCode, err := c.run(ctx, c.rangeQueryURL(query), extraHeaders...)
 	if err != nil {
 		return nil, err
 	}
@@ -448,7 +486,7 @@ func (c *Client) RunRangeQuery(ctx context.Context, query string) (*Response, er
 }
 
 // RunQuery runs a query and returns an error if anything went wrong
-func (c *Client) RunQuery(ctx context.Context, query string) (*Response, error) {
+func (c *Client) RunQuery(ctx context.Context, query string, extraHeaders ...Header) (*Response, error) {
 	ctx, cancelFunc := context.WithTimeout(ctx, requestTimeout)
 	defer cancelFunc()
 
@@ -463,7 +501,7 @@ func (c *Client) RunQuery(ctx context.Context, query string) (*Response, error)
 	u.Path = "/loki/api/v1/query"
 	u.RawQuery = v.Encode()
 
-	buf, statusCode, err := c.run(ctx, u.String())
+	buf, statusCode, err := c.run(ctx, u.String(), extraHeaders...)
 	if err != nil {
 		return nil, err
 	}
@@ -617,18 +655,21 @@ func (c *Client) Series(ctx context.Context, matcher string) ([]map[string]strin
 	return values.Data, nil
 }
 
-func (c *Client) request(ctx context.Context, method string, url string) (*http.Request, error) {
+func (c *Client) request(ctx context.Context, method string, url string, extraHeaders ...Header) (*http.Request, error) {
 	ctx = user.InjectOrgID(ctx, c.instanceID)
 	req, err := http.NewRequestWithContext(ctx, method, url, nil)
 	if err != nil {
 		return nil, err
 	}
 	req.Header.Set("X-Scope-OrgID", c.instanceID)
+	for _, h := range extraHeaders {
+		req.Header.Add(h.Name, h.Value)
+	}
 	return req, nil
 }
 
-func (c *Client) run(ctx context.Context, u string) ([]byte, int, error) {
-	req, err := c.request(ctx, "GET", u)
+func (c *Client) run(ctx context.Context, u string, extraHeaders ...Header) ([]byte, int, error) {
+	req, err := c.request(ctx, "GET", u, extraHeaders...)
 	if err != nil {
 		return nil, 0, err
 	}
```

```diff
diff --git a/integration/loki_micro_services_delete_test.go b/integration/loki_micro_services_delete_test.go
@@ -408,7 +408,7 @@ func getMetricValue(t *testing.T, metricName, metrics string) float64 {
 }
 
 func pushRequestToClientStreamValues(t *testing.T, p pushRequest) []client.StreamValues {
-	logsByStream := map[string][][]string{}
+	logsByStream := map[string][]client.Entry{}
 	for _, entry := range p.entries {
 		lb := labels.NewBuilder(labels.FromMap(p.stream))
 		for _, l := range entry.StructuredMetadata {
```

```diff
diff --git a/integration/loki_micro_services_test.go b/integration/loki_micro_services_test.go
@@ -2,12 +2,14 @@ package integration
 
 import (
 	"context"
+	"encoding/json"
+	"strings"
 	"testing"
 	"time"
 
 	dto "github.com/prometheus/client_model/go"
 	"github.com/prometheus/common/expfmt"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 	"google.golang.org/protobuf/proto"
@@ -16,6 +18,7 @@ import (
 	"github.com/grafana/loki/integration/cluster"
 	"github.com/grafana/loki/pkg/storage"
+	"github.com/grafana/loki/pkg/util/httpreq"
 	"github.com/grafana/loki/pkg/util/querylimits"
 )
```
The final hunk (`@@ -839,6 +842,296 @@`, whose two context lines close `TestOTLPLogsIngestQuery`) consists entirely of the new `TestCategorizedLabels` test, shown here as plain Go rather than `+`-prefixed diff lines:

```go
func TestCategorizedLabels(t *testing.T) {
	clu := cluster.New(nil, cluster.SchemaWithTSDB, func(c *cluster.Cluster) {
		c.SetSchemaVer("v13")
	})
	defer func() {
		assert.NoError(t, clu.Cleanup())
	}()

	var (
		tDistributor = clu.AddComponent(
			"distributor",
			"-target=distributor",
		)
		tIndexGateway = clu.AddComponent(
			"index-gateway",
			"-target=index-gateway",
			"-tsdb.enable-postings-cache=true",
			"-store.index-cache-read.embedded-cache.enabled=true",
		)
	)
	require.NoError(t, clu.Run())

	var (
		tIngester = clu.AddComponent(
			"ingester",
			"-target=ingester",
			"-ingester.flush-on-shutdown=true",
			"-ingester.wal-enabled=false",
			"-tsdb.shipper.index-gateway-client.server-address="+tIndexGateway.GRPCURL(),
		)
		tQueryScheduler = clu.AddComponent(
			"query-scheduler",
			"-target=query-scheduler",
			"-query-scheduler.use-scheduler-ring=false",
			"-tsdb.shipper.index-gateway-client.server-address="+tIndexGateway.GRPCURL(),
		)
		tCompactor = clu.AddComponent(
			"compactor",
			"-target=compactor",
			"-boltdb.shipper.compactor.compaction-interval=1s",
			"-tsdb.shipper.index-gateway-client.server-address="+tIndexGateway.GRPCURL(),
		)
	)
	require.NoError(t, clu.Run())

	// finally, run the query-frontend and querier.
	var (
		tQueryFrontend = clu.AddComponent(
			"query-frontend",
			"-target=query-frontend",
			"-frontend.scheduler-address="+tQueryScheduler.GRPCURL(),
			"-frontend.default-validity=0s",
			"-common.compactor-address="+tCompactor.HTTPURL(),
			"-tsdb.shipper.index-gateway-client.server-address="+tIndexGateway.GRPCURL(),
		)
		_ = clu.AddComponent(
			"querier",
			"-target=querier",
			"-querier.scheduler-address="+tQueryScheduler.GRPCURL(),
			"-common.compactor-address="+tCompactor.HTTPURL(),
			"-tsdb.shipper.index-gateway-client.server-address="+tIndexGateway.GRPCURL(),
		)
	)
	require.NoError(t, clu.Run())

	tenantID := randStringRunes()
	now := time.Now()
	cliDistributor := client.New(tenantID, "", tDistributor.HTTPURL())
	cliDistributor.Now = now
	cliIngester := client.New(tenantID, "", tIngester.HTTPURL())
	cliIngester.Now = now
	cliQueryFrontend := client.New(tenantID, "", tQueryFrontend.HTTPURL())
	cliQueryFrontend.Now = now
	cliIndexGateway := client.New(tenantID, "", tIndexGateway.HTTPURL())
	cliIndexGateway.Now = now

	now = time.Now()
	require.NoError(t, cliDistributor.PushLogLine("lineA", now.Add(-1*time.Second), nil, map[string]string{"job": "fake"}))
	require.NoError(t, cliDistributor.PushLogLine("lineB", now.Add(-2*time.Second), map[string]string{"traceID": "123", "user": "a"}, map[string]string{"job": "fake"}))
	require.NoError(t, tIngester.Restart())
	require.NoError(t, cliDistributor.PushLogLine("lineC msg=foo", now.Add(-3*time.Second), map[string]string{"traceID": "456", "user": "b"}, map[string]string{"job": "fake"}))
	require.NoError(t, cliDistributor.PushLogLine("lineD msg=foo text=bar", now.Add(-4*time.Second), map[string]string{"traceID": "789", "user": "c"}, map[string]string{"job": "fake"}))

	type expectedStream struct {
		Stream            map[string]string
		Lines             []string
		CategorizedLabels []map[string]map[string]string
	}

	for _, tc := range []struct {
		name            string
		query           string
		encodingFlags   []string
		expectedStreams []expectedStream
	}{
		{
			name:  "no header - no parser ",
			query: `{job="fake"}`,
			expectedStreams: []expectedStream{
				{
					Stream: labels.FromStrings("job", "fake").Map(),
					Lines:  []string{"lineA"},
				},
				{
					Stream: map[string]string{
						"job":     "fake",
						"traceID": "123",
						"user":    "a",
					},
					Lines: []string{"lineB"},
				},
				{
					Stream: map[string]string{
						"job":     "fake",
						"traceID": "456",
						"user":    "b",
					},
					Lines: []string{"lineC msg=foo"},
				},
				{
					Stream: map[string]string{
						"job":     "fake",
						"traceID": "789",
						"user":    "c",
					},
					Lines: []string{"lineD msg=foo text=bar"},
				},
			},
		},
		{
			name:  "no header - with parser",
			query: `{job="fake"} | logfmt`,
			expectedStreams: []expectedStream{
				{
					Stream: map[string]string{
						"job": "fake",
					},
					Lines: []string{"lineA"},
				},
				{
					Stream: map[string]string{
						"job":     "fake",
						"traceID": "123",
						"user":    "a",
					},
					Lines: []string{"lineB"},
				},
				{
					Stream: map[string]string{
						"job":     "fake",
						"traceID": "456",
						"user":    "b",
						"msg":     "foo",
					},
					Lines: []string{"lineC msg=foo"},
				},
				{
					Stream: map[string]string{
						"job":     "fake",
						"traceID": "789",
						"user":    "c",
						"msg":     "foo",
						"text":    "bar",
					},
					Lines: []string{"lineD msg=foo text=bar"},
				},
			},
		},
		{
			name:  "with header - no parser ",
			query: `{job="fake"}`,
			encodingFlags: []string{
				string(httpreq.FlagCategorizeLabels),
			},
			expectedStreams: []expectedStream{
				{
					Stream: map[string]string{
						"job": "fake",
					},
					Lines: []string{"lineA", "lineB", "lineC msg=foo", "lineD msg=foo text=bar"},
					CategorizedLabels: []map[string]map[string]string{
						{
							"structuredMetadata": {
								"traceID": "123",
								"user":    "a",
							},
						},
						{
							"structuredMetadata": {
								"traceID": "456",
								"user":    "b",
							},
						},
						{
							"structuredMetadata": {
								"traceID": "789",
								"user":    "c",
							},
						},
					},
				},
			},
		},
		{
			name:  "with header - with parser",
			query: `{job="fake"} | logfmt`,
			encodingFlags: []string{
				string(httpreq.FlagCategorizeLabels),
			},
			expectedStreams: []expectedStream{
				{
					Stream: map[string]string{
						"job": "fake",
					},
					Lines: []string{"lineA", "lineB", "lineC msg=foo", "lineD msg=foo text=bar"},
					CategorizedLabels: []map[string]map[string]string{
						{
							"structuredMetadata": {
								"traceID": "123",
								"user":    "a",
							},
						},
						{
							"structuredMetadata": {
								"traceID": "456",
								"user":    "b",
							},
							"parsed": {
								"msg": "foo",
							},
						},
						{
							"structuredMetadata": {
								"traceID": "789",
								"user":    "c",
							},
							"parsed": {
								"msg":  "foo",
								"text": "bar",
							},
						},
					},
				},
			},
		},
	} {
		t.Run(tc.name, func(t *testing.T) {
			// Add header with encoding flags and expect them to be returned in the response.
			var headers []client.Header
			var expectedEncodingFlags []string
			if len(tc.encodingFlags) > 0 {
				headers = append(headers, client.Header{Name: httpreq.LokiEncodingFlagsHeader, Value: strings.Join(tc.encodingFlags, httpreq.EncodeFlagsDelimiter)})
				expectedEncodingFlags = tc.encodingFlags
			}

			resp, err := cliQueryFrontend.RunQuery(context.Background(), tc.query, headers...)
			require.NoError(t, err)
			assert.Equal(t, "streams", resp.Data.ResultType)

			var streams []expectedStream
			for _, stream := range resp.Data.Stream {
				var lines []string
				var categorizedLabels []map[string]map[string]string

				for _, val := range stream.Values {
					lines = append(lines, val[1])

					var catLabels map[string]map[string]string
					if len(val) >= 3 && val[2] != "" {
						err = json.Unmarshal([]byte(val[2]), &catLabels)
						require.NoError(t, err)
						categorizedLabels = append(categorizedLabels, catLabels)
					}
				}

				streams = append(streams, expectedStream{
					Stream:            stream.Stream,
					Lines:             lines,
					CategorizedLabels: categorizedLabels,
				})
			}

			assert.ElementsMatch(t, tc.expectedStreams, streams)
			assert.ElementsMatch(t, expectedEncodingFlags, resp.Data.EncodingFlags)
		})
	}
}
```

Unchanged context following the insertion:

```go
func getValueFromMF(mf *dto.MetricFamily, lbs []*dto.LabelPair) float64 {
	for _, m := range mf.Metric {
		if !assert.ObjectsAreEqualValues(lbs, m.GetLabel()) {
```

```diff
diff --git a/pkg/chunkenc/dumb_chunk.go b/pkg/chunkenc/dumb_chunk.go
@@ -72,7 +72,7 @@ func (c *dumbChunk) Encoding() Encoding { return EncNone }
 
 // Returns an iterator that goes from _most_ recent to _least_ recent (ie,
 // backwards).
-func (c *dumbChunk) Iterator(_ context.Context, from, through time.Time, direction logproto.Direction, _ log.StreamPipeline, _ ...iter.EntryIteratorOption) (iter.EntryIterator, error) {
+func (c *dumbChunk) Iterator(_ context.Context, from, through time.Time, direction logproto.Direction, _ log.StreamPipeline) (iter.EntryIterator, error) {
 	i := sort.Search(len(c.entries), func(i int) bool {
 		return !from.After(c.entries[i].Timestamp)
 	})
```

```diff
diff --git a/pkg/chunkenc/interface.go b/pkg/chunkenc/interface.go
@@ -129,7 +129,7 @@ type Chunk interface {
 	Bounds() (time.Time, time.Time)
 	SpaceFor(*logproto.Entry) bool
 	Append(*logproto.Entry) error
-	Iterator(ctx context.Context, mintT, maxtT time.Time, direction logproto.Direction, pipeline log.StreamPipeline, options ...iter.EntryIteratorOption) (iter.EntryIterator, error)
+	Iterator(ctx context.Context, mintT, maxtT time.Time, direction logproto.Direction, pipeline log.StreamPipeline) (iter.EntryIterator, error)
 	SampleIterator(ctx context.Context, from, through time.Time, extractor log.StreamSampleExtractor) iter.SampleIterator
 	// Returns the list of blocks in the chunks.
 	Blocks(mintT, maxtT time.Time) []Block
@@ -158,7 +158,7 @@ type Block interface {
 	// Entries is the amount of entries in the block.
 	Entries() int
 	// Iterator returns an entry iterator for the block.
-	Iterator(ctx context.Context, pipeline log.StreamPipeline, options ...iter.EntryIteratorOption) iter.EntryIterator
+	Iterator(ctx context.Context, pipeline log.StreamPipeline) iter.EntryIterator
 	// SampleIterator returns a sample iterator for the block.
 	SampleIterator(ctx context.Context, extractor log.StreamSampleExtractor) iter.SampleIterator
 }
```

```diff
diff --git a/pkg/chunkenc/memchunk.go b/pkg/chunkenc/memchunk.go
@@ -950,7 +950,7 @@ func (c *MemChunk) Bounds() (fromT, toT time.Time) {
 }
 
 // Iterator implements Chunk.
-func (c *MemChunk) Iterator(ctx context.Context, mintT, maxtT time.Time, direction logproto.Direction, pipeline log.StreamPipeline, options ...iter.EntryIteratorOption) (iter.EntryIterator, error) {
+func (c *MemChunk) Iterator(ctx context.Context, mintT, maxtT time.Time, direction logproto.Direction, pipeline log.StreamPipeline) (iter.EntryIterator, error) {
 	mint, maxt := mintT.UnixNano(), maxtT.UnixNano()
 	blockItrs := make([]iter.EntryIterator, 0, len(c.blocks)+1)
@@ -977,7 +977,7 @@ func (c *MemChunk) Iterator(ctx context.Context, mintT, maxtT time.Time, directi
 		}
 		lastMax = b.maxt
-		blockItrs = append(blockItrs, encBlock{c.encoding, c.format, c.symbolizer, b}.Iterator(ctx, pipeline, options...))
+		blockItrs = append(blockItrs, encBlock{c.encoding, c.format, c.symbolizer, b}.Iterator(ctx, pipeline))
 	}
 
 	if !c.head.IsEmpty() {
@@ -985,7 +985,7 @@ func (c *MemChunk) Iterator(ctx context.Context, mintT, maxtT time.Time, directi
 		if from < lastMax {
 			ordered = false
 		}
-		headIterator = c.head.Iterator(ctx, direction, mint, maxt, pipeline, options...)
+		headIterator = c.head.Iterator(ctx, direction, mint, maxt, pipeline)
 	}
 
 	if direction == logproto.FORWARD {
@@ -1100,7 +1100,7 @@ func (c *MemChunk) Blocks(mintT, maxtT time.Time) []Block {
 // Rebound builds a smaller chunk with logs having timestamp from start and end(both inclusive)
 func (c *MemChunk) Rebound(start, end time.Time, filter filter.Func) (Chunk, error) {
 	// add a millisecond to end time because the Chunk.Iterator considers end time to be non-inclusive.
-	itr, err := c.Iterator(context.Background(), start, end.Add(time.Millisecond), logproto.FORWARD, log.NewNoopPipeline().ForStream(labels.Labels{}), iter.WithKeepStructuredMetadata())
+	itr, err := c.Iterator(context.Background(), start, end.Add(time.Millisecond), logproto.FORWARD, log.NewNoopPipeline().ForStream(labels.Labels{}))
 	if err != nil {
 		return nil, err
 	}
@@ -1149,11 +1149,11 @@ type encBlock struct {
 	block
 }
 
-func (b encBlock) Iterator(ctx context.Context, pipeline log.StreamPipeline, options ...iter.EntryIteratorOption) iter.EntryIterator {
+func (b encBlock) Iterator(ctx context.Context, pipeline log.StreamPipeline) iter.EntryIterator {
 	if len(b.b) == 0 {
 		return iter.NoopIterator
 	}
-	return newEntryIterator(ctx, GetReaderPool(b.enc), b.b, pipeline, b.format, b.symbolizer, options...)
+	return newEntryIterator(ctx, GetReaderPool(b.enc), b.b, pipeline, b.format, b.symbolizer)
 }
 
 func (b encBlock) SampleIterator(ctx context.Context, extractor log.StreamSampleExtractor) iter.SampleIterator {
@@ -1179,7 +1179,7 @@ func (b block) MaxTime() int64 {
 	return b.maxt
 }
 
-func (hb *headBlock) Iterator(ctx context.Context, direction logproto.Direction, mint, maxt int64, pipeline log.StreamPipeline, _ ...iter.EntryIteratorOption) iter.EntryIterator {
+func (hb *headBlock) Iterator(ctx context.Context, direction logproto.Direction, mint, maxt int64, pipeline log.StreamPipeline) iter.EntryIterator {
 	if hb.IsEmpty() || (maxt < hb.mint || hb.maxt < mint) {
 		return iter.NoopIterator
 	}
@@ -1205,7 +1205,7 @@ func (hb *headBlock) Iterator(ctx context.Context, direction logproto.Direction,
 	}
 	stats.AddPostFilterLines(1)
 	var stream *logproto.Stream
-	labels := parsedLbs.Labels().String()
+	labels := parsedLbs.String()
 	var ok bool
 	if stream, ok = streams[labels]; !ok {
 		stream = &logproto.Stream{
@@ -1582,23 +1582,16 @@ func (si *bufferedIterator) close() {
 	si.origBytes = nil
 }
 
-func newEntryIterator(ctx context.Context, pool ReaderPool, b []byte, pipeline log.StreamPipeline, format byte, symbolizer *symbolizer, options ...iter.EntryIteratorOption) iter.EntryIterator {
-	entryIter := &entryBufferedIterator{
+func newEntryIterator(ctx context.Context, pool ReaderPool, b []byte, pipeline log.StreamPipeline, format byte, symbolizer *symbolizer) iter.EntryIterator {
+	return &entryBufferedIterator{
 		bufferedIterator: newBufferedIterator(ctx, pool, b, format, symbolizer),
 		pipeline:         pipeline,
 	}
-
-	for _, opt := range options {
-		opt(&entryIter.iterOptions)
-	}
-
-	return entryIter
 }
 
 type entryBufferedIterator struct {
 	*bufferedIterator
-	pipeline    log.StreamPipeline
-	iterOptions iter.EntryIteratorOptions
+	pipeline log.StreamPipeline
 
 	cur        logproto.Entry
 	currLabels log.LabelsResult
@@ -1623,12 +1616,9 @@ func (e *entryBufferedIterator) Next() bool {
 		e.currLabels = lbs
 		e.cur.Timestamp = time.Unix(0, e.currTs)
 		e.cur.Line = string(newLine)
-
-		// Most of the time, there is no need to send back the labels of structured metadata, as they are already part of the labels results.
-		// Still it might be needed for example when appending entries from one chunk into another one.
-		if e.iterOptions.KeepStructuredMetdata {
-			e.cur.StructuredMetadata = logproto.FromLabelsToLabelAdapters(e.currStructuredMetadata)
-		}
+		e.cur.StructuredMetadata = logproto.FromLabelsToLabelAdapters(lbs.StructuredMetadata())
+		e.cur.Parsed = logproto.FromLabelsToLabelAdapters(lbs.Parsed())
 
 		return true
 	}
 	return false
```

```diff
diff --git a/pkg/chunkenc/memchunk_test.go b/pkg/chunkenc/memchunk_test.go
@@ -193,10 +193,14 @@ func TestBlock(t *testing.T) {
 			e := it.Entry()
 			require.Equal(t, cases[idx].ts, e.Timestamp.UnixNano())
 			require.Equal(t, cases[idx].str, e.Line)
-			require.Empty(t, e.StructuredMetadata)
 			if chunkFormat < ChunkFormatV4 {
 				require.Equal(t, labels.EmptyLabels().String(), it.Labels())
+				require.Empty(t, e.StructuredMetadata)
 			} else {
+				if len(cases[idx].lbs) > 0 {
+					require.Equal(t, push.LabelsAdapter(cases[idx].lbs), e.StructuredMetadata)
+				}
 				expectedLabels := logproto.FromLabelAdaptersToLabels(cases[idx].lbs).String()
 				require.Equal(t, expectedLabels, it.Labels())
 			}
@@ -452,11 +456,12 @@ func TestSerialization(t *testing.T) {
 				e := it.Entry()
 				require.Equal(t, int64(i), e.Timestamp.UnixNano())
 				require.Equal(t, strconv.Itoa(i), e.Line)
-				require.Nil(t, e.StructuredMetadata)
 				if appendWithStructuredMetadata && testData.chunkFormat >= ChunkFormatV4 {
 					require.Equal(t, labels.FromStrings("foo", strconv.Itoa(i)).String(), it.Labels())
+					require.Equal(t, labels.FromStrings("foo", strconv.Itoa(i)), logproto.FromLabelAdaptersToLabels(e.StructuredMetadata))
 				} else {
 					require.Equal(t, labels.EmptyLabels().String(), it.Labels())
+					require.Nil(t, e.StructuredMetadata)
 				}
 			}
 			require.NoError(t, it.Error())
@@ -1735,10 +1740,11 @@ func TestMemChunk_IteratorWithStructuredMetadata(t *testing.T) {
 	expectedBytes := lineBytes + expectedStructuredMetadataBytes
 
 	for _, tc := range []struct {
-		name            string
-		query           string
-		expectedLines   []string
-		expectedStreams []string
+		name                       string
+		query                      string
+		expectedLines              []string
+		expectedStreams            []string
+		expectedStructuredMetadata [][]logproto.LabelAdapter
 	}{
 		{
 			name: "no-filter",
@@ -1750,6 +1756,12 @@ func TestMemChunk_IteratorWithStructuredMetadata(t *testing.T) {
 				labels.FromStrings("job", "fake", "traceID", "789", "user", "c").String(),
 				labels.FromStrings("job", "fake", "traceID", "123", "user", "d").String(),
 			},
+			expectedStructuredMetadata: [][]logproto.LabelAdapter{
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("traceID", "123", "user", "a")),
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("traceID", "456", "user", "b")),
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("traceID", "789", "user", "c")),
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("traceID", "123", "user", "d")),
+			},
 		},
 		{
 			name: "filter",
@@ -1758,6 +1770,9 @@ func TestMemChunk_IteratorWithStructuredMetadata(t *testing.T) {
 			expectedStreams: []string{
 				labels.FromStrings("job", "fake", "traceID", "789", "user", "c").String(),
 			},
+			expectedStructuredMetadata: [][]logproto.LabelAdapter{
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("traceID", "789", "user", "c")),
+			},
 		},
 		{
 			name: "filter-regex-or",
@@ -1767,6 +1782,10 @@ func TestMemChunk_IteratorWithStructuredMetadata(t *testing.T) {
 				labels.FromStrings("job", "fake", "traceID", "456", "user", "b").String(),
 				labels.FromStrings("job", "fake", "traceID", "789", "user", "c").String(),
 			},
+			expectedStructuredMetadata: [][]logproto.LabelAdapter{
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("traceID", "456", "user", "b")),
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("traceID", "789", "user", "c")),
+			},
 		},
 		{
 			name: "filter-regex-contains",
@@ -1775,6 +1794,9 @@ func TestMemChunk_IteratorWithStructuredMetadata(t *testing.T) {
 			expectedStreams: []string{
 				labels.FromStrings("job", "fake", "traceID", "456", "user", "b").String(),
 			},
+			expectedStructuredMetadata: [][]logproto.LabelAdapter{
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("traceID", "456", "user", "b")),
+			},
 		},
 		{
 			name: "filter-regex-complex",
@@ -1784,6 +1806,10 @@ func TestMemChunk_IteratorWithStructuredMetadata(t *testing.T) {
 				labels.FromStrings("job", "fake", "traceID", "123", "user", "a").String(),
 				labels.FromStrings("job", "fake", "traceID", "123", "user", "d").String(),
 			},
+			expectedStructuredMetadata: [][]logproto.LabelAdapter{
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("traceID", "123", "user", "a")),
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("traceID", "123", "user", "d")),
+			},
 		},
 		{
 			name: "multiple-filters",
@@ -1792,6 +1818,9 @@ func TestMemChunk_IteratorWithStructuredMetadata(t *testing.T) {
 			expectedStreams: []string{
 				labels.FromStrings("job", "fake", "traceID", "123", "user", "d").String(),
 			},
+			expectedStructuredMetadata: [][]logproto.LabelAdapter{
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("traceID", "123", "user", "d")),
+			},
 		},
 		{
 			name: "keep",
@@ -1803,6 +1832,12 @@ func TestMemChunk_IteratorWithStructuredMetadata(t *testing.T) {
 				labels.FromStrings("job", "fake", "user", "c").String(),
 				labels.FromStrings("job", "fake", "user", "d").String(),
 			},
+			expectedStructuredMetadata: [][]logproto.LabelAdapter{
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("user", "a")),
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("user", "b")),
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("user", "c")),
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("user", "d")),
+			},
 		},
 		{
 			name: "keep-filter",
@@ -1814,6 +1849,9 @@ func TestMemChunk_IteratorWithStructuredMetadata(t *testing.T) {
 				labels.FromStrings("job", "fake").String(),
 				labels.FromStrings("job", "fake").String(),
 			},
+			expectedStructuredMetadata: [][]logproto.LabelAdapter{
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("user", "b")),
+			},
 		},
		{
 			name: "drop",
@@ -1825,6 +1863,12 @@ func TestMemChunk_IteratorWithStructuredMetadata(t *testing.T) {
 				labels.FromStrings("job", "fake", "user", "c").String(),
 				labels.FromStrings("job", "fake", "user", "d").String(),
 			},
+			expectedStructuredMetadata: [][]logproto.LabelAdapter{
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("user", "a")),
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("user", "b")),
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("user", "c")),
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("user", "d")),
+			},
 		},
 		{
 			name: "drop-filter",
@@ -1836,6 +1880,12 @@ func TestMemChunk_IteratorWithStructuredMetadata(t *testing.T) {
 				labels.FromStrings("job", "fake", "traceID", "789", "user", "c").String(),
 				labels.FromStrings("job", "fake", "user", "d").String(),
 			},
+			expectedStructuredMetadata: [][]logproto.LabelAdapter{
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("user", "a")),
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("traceID", "456", "user", "b")),
+				logproto.FromLabelsToLabelAdapters(labels.FromStrings("traceID", "789", "user", "c")),
```
logproto.FromLabelsToLabelAdapters(labels.FromStrings("user", "d")),
},
},
} {
t.Run(tc.name, func(t *testing.T) {
@ -1855,18 +1905,21 @@ func TestMemChunk_IteratorWithStructuredMetadata(t *testing.T) {
var lines []string
var streams []string
var structuredMetadata [][]logproto.LabelAdapter
for it.Next() {
require.NoError(t, it.Error())
e := it.Entry()
lines = append(lines, e.Line)
streams = append(streams, it.Labels())
// We don't want to send back the structured metadata since
// they are already part of the returned labels.
require.Empty(t, e.StructuredMetadata)
if len(e.StructuredMetadata) > 0 {
structuredMetadata = append(structuredMetadata, e.StructuredMetadata)
}
require.Empty(t, e.Parsed)
}
assert.ElementsMatch(t, tc.expectedLines, lines)
assert.ElementsMatch(t, tc.expectedStreams, streams)
assert.ElementsMatch(t, tc.expectedStructuredMetadata, structuredMetadata)
resultStats := sts.Result(0, 0, len(lines))
require.Equal(t, int64(expectedBytes), resultStats.Summary.TotalBytesProcessed)
@ -1909,61 +1962,3 @@ func TestMemChunk_IteratorWithStructuredMetadata(t *testing.T) {
})
}
}
func TestMemChunk_IteratorOptions(t *testing.T) {
chk := newMemChunkWithFormat(ChunkFormatV4, EncNone, UnorderedWithStructuredMetadataHeadBlockFmt, testBlockSize, testTargetSize)
require.NoError(t, chk.Append(logprotoEntryWithStructuredMetadata(0, "0", logproto.FromLabelsToLabelAdapters(
labels.FromStrings("a", "0"),
))))
require.NoError(t, chk.Append(logprotoEntryWithStructuredMetadata(1, "1", logproto.FromLabelsToLabelAdapters(
labels.FromStrings("a", "1"),
))))
require.NoError(t, chk.cut())
require.NoError(t, chk.Append(logprotoEntryWithStructuredMetadata(2, "2", logproto.FromLabelsToLabelAdapters(
labels.FromStrings("a", "2"),
))))
require.NoError(t, chk.Append(logprotoEntryWithStructuredMetadata(3, "3", logproto.FromLabelsToLabelAdapters(
labels.FromStrings("a", "3"),
))))
for _, tc := range []struct {
name string
options []iter.EntryIteratorOption
expectStructuredMetadata bool
}{
{
name: "No options",
expectStructuredMetadata: false,
},
{
name: "WithKeepStructuredMetadata",
options: []iter.EntryIteratorOption{
iter.WithKeepStructuredMetadata(),
},
expectStructuredMetadata: true,
},
} {
t.Run(tc.name, func(t *testing.T) {
it, err := chk.Iterator(context.Background(), time.Unix(0, 0), time.Unix(0, math.MaxInt64), logproto.FORWARD, noopStreamPipeline, tc.options...)
require.NoError(t, err)
var idx int64
for it.Next() {
expectedLabels := labels.FromStrings("a", fmt.Sprintf("%d", idx))
expectedEntry := logproto.Entry{
Timestamp: time.Unix(0, idx),
Line: fmt.Sprintf("%d", idx),
}
if tc.expectStructuredMetadata {
expectedEntry.StructuredMetadata = logproto.FromLabelsToLabelAdapters(expectedLabels)
}
require.Equal(t, expectedEntry, it.Entry())
require.Equal(t, expectedLabels.String(), it.Labels())
idx++
}
})
}
}

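The pattern in the test above — iterating entries and collecting structured metadata separately from the lines — can be sketched generically. The types here are hypothetical stand-ins, not Loki's `logproto.Entry` or its iterators:

```go
package main

import "fmt"

// entry pairs a log line with its structured metadata, loosely mirroring
// logproto.Entry in the test above.
type entry struct {
	line               string
	structuredMetadata map[string]string
}

// collect walks the entries and gathers lines and non-empty structured
// metadata into separate slices, as the test loop does.
func collect(entries []entry) (lines []string, metadata []map[string]string) {
	for _, e := range entries {
		lines = append(lines, e.line)
		if len(e.structuredMetadata) > 0 {
			metadata = append(metadata, e.structuredMetadata)
		}
	}
	return lines, metadata
}

func main() {
	lines, metadata := collect([]entry{
		{line: "hello"},
		{line: "world", structuredMetadata: map[string]string{"traceID": "123"}},
	})
	fmt.Println(len(lines), len(metadata)) // 2 1
}
```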
@ -41,7 +41,6 @@ type HeadBlock interface {
mint,
maxt int64,
pipeline log.StreamPipeline,
options ...iter.EntryIteratorOption,
) iter.EntryIterator
SampleIterator(
ctx context.Context,
@ -244,12 +243,7 @@ func (hb *unorderedHeadBlock) forEntries(
return nil
}
func (hb *unorderedHeadBlock) Iterator(ctx context.Context, direction logproto.Direction, mint, maxt int64, pipeline log.StreamPipeline, options ...iter.EntryIteratorOption) iter.EntryIterator {
var iterOptions iter.EntryIteratorOptions
for _, option := range options {
option(&iterOptions)
}
func (hb *unorderedHeadBlock) Iterator(ctx context.Context, direction logproto.Direction, mint, maxt int64, pipeline log.StreamPipeline) iter.EntryIterator {
// We are doing a copy every time, this is because b.entries could change completely,
// the alternative would be that we allocate a new b.entries every time we cut a block,
// but the tradeoff is that queries to near-realtime data would be much lower than
@ -278,18 +272,12 @@ func (hb *unorderedHeadBlock) Iterator(ctx context.Context, direction logproto.D
streams[labels] = stream
}
entry := logproto.Entry{
Timestamp: time.Unix(0, ts),
Line: newLine,
}
// Most of the time, there is no need to send back the structured metadata, as it is already part of the label results.
// Still, it might be needed, for example, when appending entries from one chunk into another.
if iterOptions.KeepStructuredMetdata {
entry.StructuredMetadata = logproto.FromLabelsToLabelAdapters(hb.symbolizer.Lookup(structuredMetadataSymbols))
}
stream.Entries = append(stream.Entries, entry)
stream.Entries = append(stream.Entries, logproto.Entry{
Timestamp: time.Unix(0, ts),
Line: newLine,
StructuredMetadata: logproto.FromLabelsToLabelAdapters(parsedLbs.StructuredMetadata()),
Parsed: logproto.FromLabelsToLabelAdapters(parsedLbs.Parsed()),
})
return nil
},
)

@ -22,8 +22,9 @@ func iterEq(t *testing.T, exp []entry, got iter.EntryIterator) {
var i int
for got.Next() {
require.Equal(t, logproto.Entry{
Timestamp: time.Unix(0, exp[i].t),
Line: exp[i].s,
Timestamp: time.Unix(0, exp[i].t),
Line: exp[i].s,
StructuredMetadata: logproto.FromLabelsToLabelAdapters(exp[i].structuredMetadata),
}, got.Entry())
require.Equal(t, exp[i].structuredMetadata.String(), got.Labels())
i++
@ -445,22 +446,10 @@ func TestUnorderedChunkIterators(t *testing.T) {
// ensure head block has data
require.Equal(t, false, c.head.IsEmpty())
forward, err := c.Iterator(
context.Background(),
time.Unix(0, 0),
time.Unix(100, 0),
logproto.FORWARD,
noopStreamPipeline,
)
forward, err := c.Iterator(context.Background(), time.Unix(0, 0), time.Unix(100, 0), logproto.FORWARD, noopStreamPipeline)
require.Nil(t, err)
backward, err := c.Iterator(
context.Background(),
time.Unix(0, 0),
time.Unix(100, 0),
logproto.BACKWARD,
noopStreamPipeline,
)
backward, err := c.Iterator(context.Background(), time.Unix(0, 0), time.Unix(100, 0), logproto.BACKWARD, noopStreamPipeline)
require.Nil(t, err)
smpl := c.SampleIterator(

@ -22,7 +22,6 @@ import (
"github.com/grafana/loki/pkg/chunkenc"
ingesterclient "github.com/grafana/loki/pkg/ingester/client"
"github.com/grafana/loki/pkg/iter"
"github.com/grafana/loki/pkg/logproto"
"github.com/grafana/loki/pkg/logql/log"
"github.com/grafana/loki/pkg/storage/chunk"
@ -544,7 +543,7 @@ func TestChunkRewriter(t *testing.T) {
require.Equal(t, expectedChunks[i][len(expectedChunks[i])-1].End, chunks[i].Through)
lokiChunk := chunks[i].Data.(*chunkenc.Facade).LokiChunk()
newChunkItr, err := lokiChunk.Iterator(context.Background(), chunks[i].From.Time(), chunks[i].Through.Add(time.Minute).Time(), logproto.FORWARD, log.NewNoopPipeline().ForStream(labels.Labels{}), iter.WithKeepStructuredMetadata())
newChunkItr, err := lokiChunk.Iterator(context.Background(), chunks[i].From.Time(), chunks[i].Through.Add(time.Minute).Time(), logproto.FORWARD, log.NewNoopPipeline().ForStream(labels.Labels{}))
require.NoError(t, err)
for _, interval := range expectedChunks[i] {

@ -19,18 +19,6 @@ type EntryIterator interface {
Entry() logproto.Entry
}
type EntryIteratorOptions struct {
KeepStructuredMetdata bool
}
type EntryIteratorOption func(*EntryIteratorOptions)
func WithKeepStructuredMetadata() EntryIteratorOption {
return func(o *EntryIteratorOptions) {
o.KeepStructuredMetdata = true
}
}
// streamIterator iterates over entries in a stream.
type streamIterator struct {
i int

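The `EntryIteratorOption` type removed above follows Go's standard functional-options idiom. A self-contained sketch of that idiom, with generic names rather than Loki's actual types:

```go
package main

import "fmt"

// options mirrors the shape of the removed EntryIteratorOptions struct.
type options struct {
	keepStructuredMetadata bool
}

// Option is the functional-option type, analogous to EntryIteratorOption.
type Option func(*options)

// WithKeepStructuredMetadata flips the flag, like the removed helper did.
func WithKeepStructuredMetadata() Option {
	return func(o *options) { o.keepStructuredMetadata = true }
}

// newIterator applies each option in order, as the Iterator methods did.
func newIterator(opts ...Option) options {
	var o options
	for _, opt := range opts {
		opt(&o)
	}
	return o
}

func main() {
	fmt.Println(newIterator().keepStructuredMetadata)                             // false
	fmt.Println(newIterator(WithKeepStructuredMetadata()).keepStructuredMetadata) // true
}
```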
@ -1,6 +1,7 @@
package loghttp
import (
"fmt"
"strconv"
"time"
"unsafe"
@ -20,6 +21,7 @@ type Entry struct {
Timestamp time.Time
Line string
StructuredMetadata labels.Labels
Parsed labels.Labels
}
func (e *Entry) UnmarshalJSON(data []byte) error {
@ -52,26 +54,57 @@ func (e *Entry) UnmarshalJSON(data []byte) error {
return
}
e.Line = v
case 2: // labels
case 2: // structured metadata
if t != jsonparser.Object {
parseError = jsonparser.MalformedObjectError
return
}
// Here we deserialize entries for both query responses and push requests.
//
// For push requests, we accept structured metadata as the third object in the entry array. E.g.:
// [ "<ts>", "<log line>", {"trace_id": "0242ac120002", "user_id": "superUser123"}]
//
// For query responses, we accept structured metadata and parsed labels in the third object in the entry array. E.g.:
// [ "<ts>", "<log line>", { "structuredMetadata": {"trace_id": "0242ac120002", "user_id": "superUser123"}, "parsed": {"msg": "text"}}]
//
// Therefore, we need to check if the third object contains the "structuredMetadata" or "parsed" fields. If it does,
// we deserialize the inner objects into the structured metadata and parsed labels respectively.
// If it doesn't, we deserialize the object into the structured metadata labels.
var structuredMetadata labels.Labels
var parsed labels.Labels
if err := jsonparser.ObjectEach(value, func(key []byte, value []byte, dataType jsonparser.ValueType, _ int) error {
if dataType != jsonparser.String {
return jsonparser.MalformedStringError
if dataType == jsonparser.Object {
if string(key) == "structuredMetadata" {
lbls, err := parseLabels(value)
if err != nil {
return err
}
structuredMetadata = lbls
}
if string(key) == "parsed" {
lbls, err := parseLabels(value)
if err != nil {
return err
}
parsed = lbls
}
return nil
}
structuredMetadata = append(structuredMetadata, labels.Label{
Name: string(key),
Value: string(value),
})
return nil
if dataType == jsonparser.String || dataType == jsonparser.Number {
structuredMetadata = append(structuredMetadata, labels.Label{
Name: string(key),
Value: string(value),
})
return nil
}
return fmt.Errorf("could not parse structured metadata or parsed fields")
}); err != nil {
parseError = err
return
}
e.StructuredMetadata = structuredMetadata
e.Parsed = parsed
}
i++
})
@ -81,6 +114,27 @@ func (e *Entry) UnmarshalJSON(data []byte) error {
return err
}
func parseLabels(data []byte) (labels.Labels, error) {
var lbls labels.Labels
err := jsonparser.ObjectEach(data, func(key []byte, value []byte, t jsonparser.ValueType, _ int) error {
if t != jsonparser.String && t != jsonparser.Number {
return fmt.Errorf("could not parse label value. Expected string or number, got %s", t)
}
val, err := jsonparser.ParseString(value)
if err != nil {
return err
}
lbls = append(lbls, labels.Label{
Name: string(key),
Value: val,
})
return nil
})
return lbls, err
}
type jsonExtension struct {
jsoniter.DummyExtension
}

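The dual entry format handled by `UnmarshalJSON` above can be illustrated with a minimal stdlib sketch. It uses `encoding/json` instead of the `jsonparser` library, the helper name is hypothetical, and detecting the categorized form by key name is a simplification (the real code inspects value types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseEntryMeta distinguishes the two accepted shapes of the third element
// in an entry array: a flat map of structured metadata (push requests) or an
// object with "structuredMetadata"/"parsed" keys (categorized query responses).
func parseEntryMeta(data []byte) (structuredMetadata, parsed map[string]string, err error) {
	var raw map[string]json.RawMessage
	if err = json.Unmarshal(data, &raw); err != nil {
		return nil, nil, err
	}
	sm, hasSM := raw["structuredMetadata"]
	p, hasParsed := raw["parsed"]
	if hasSM || hasParsed {
		// Categorized form: deserialize the inner objects separately.
		if hasSM {
			if err = json.Unmarshal(sm, &structuredMetadata); err != nil {
				return nil, nil, err
			}
		}
		if hasParsed {
			if err = json.Unmarshal(p, &parsed); err != nil {
				return nil, nil, err
			}
		}
		return structuredMetadata, parsed, nil
	}
	// Flat form: the whole object is structured metadata.
	if err = json.Unmarshal(data, &structuredMetadata); err != nil {
		return nil, nil, err
	}
	return structuredMetadata, nil, nil
}

func main() {
	flat := []byte(`{"trace_id": "0242ac120002", "user_id": "superUser123"}`)
	categorized := []byte(`{"structuredMetadata": {"trace_id": "0242ac120002"}, "parsed": {"msg": "text"}}`)

	sm, parsed, _ := parseEntryMeta(flat)
	fmt.Println(len(sm), len(parsed)) // 2 0

	sm, parsed, _ = parseEntryMeta(categorized)
	fmt.Println(sm["trace_id"], parsed["msg"]) // 0242ac120002 text
}
```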
@ -240,7 +240,10 @@ func (s Streams) ToProto() []logproto.Stream {
result := make([]logproto.Stream, 0, len(s))
for _, s := range s {
entries := *(*[]logproto.Entry)(unsafe.Pointer(&s.Entries))
result = append(result, logproto.Stream{Labels: s.Labels.String(), Entries: entries})
result = append(result, logproto.Stream{
Labels: s.Labels.String(),
Entries: entries,
})
}
return result
}

@ -151,7 +151,7 @@ func TestStreams_ToProto(t *testing.T) {
"some",
[]Stream{
{
Labels: map[string]string{"foo": "bar"},
Labels: map[string]string{"job": "fake"},
Entries: []Entry{
{Timestamp: time.Unix(0, 1), Line: "1"},
{Timestamp: time.Unix(0, 2), Line: "2", StructuredMetadata: labels.Labels{
@ -161,19 +161,20 @@ func TestStreams_ToProto(t *testing.T) {
},
},
{
Labels: map[string]string{"foo": "bar", "lvl": "error"},
Labels: map[string]string{"job": "fake", "lvl": "error"},
Entries: []Entry{
{Timestamp: time.Unix(0, 3), Line: "3"},
{Timestamp: time.Unix(0, 4), Line: "4", StructuredMetadata: labels.Labels{
{Name: "foo", Value: "a"},
{Name: "bar", Value: "b"},
}},
{Timestamp: time.Unix(0, 4), Line: "4",
StructuredMetadata: labels.Labels{
{Name: "foo", Value: "a"},
{Name: "bar", Value: "b"},
}},
},
},
},
[]logproto.Stream{
{
Labels: `{foo="bar"}`,
Labels: `{job="fake"}`,
Entries: []logproto.Entry{
{Timestamp: time.Unix(0, 1), Line: "1"},
{Timestamp: time.Unix(0, 2), Line: "2", StructuredMetadata: []logproto.LabelAdapter{
@ -183,7 +184,7 @@ func TestStreams_ToProto(t *testing.T) {
},
},
{
Labels: `{foo="bar", lvl="error"}`,
Labels: `{job="fake", lvl="error"}`,
Entries: []logproto.Entry{
{Timestamp: time.Unix(0, 3), Line: "3"},
{Timestamp: time.Unix(0, 4), Line: "4", StructuredMetadata: []logproto.LabelAdapter{

@ -291,8 +291,11 @@ func (q *query) Eval(ctx context.Context) (promql_parser.Value, error) {
return nil, err
}
encodingFlags := httpreq.ExtractEncodingFlagsFromCtx(ctx)
categorizeLabels := encodingFlags.Has(httpreq.FlagCategorizeLabels)
defer util.LogErrorWithContext(ctx, "closing iterator", iter.Close)
streams, err := readStreams(iter, q.params.Limit(), q.params.Direction(), q.params.Interval())
streams, err := readStreams(iter, q.params.Limit(), q.params.Direction(), q.params.Interval(), categorizeLabels)
return streams, err
default:
return nil, fmt.Errorf("unexpected type (%T): cannot evaluate", e)
@ -498,28 +501,55 @@ func PopulateMatrixFromScalar(data promql.Scalar, params Params) promql.Matrix {
return promql.Matrix{series}
}
func readStreams(i iter.EntryIterator, size uint32, dir logproto.Direction, interval time.Duration) (logqlmodel.Streams, error) {
// readStreams reads the streams from the iterator and returns them sorted.
// If categorizeLabels is true, the stream labels contain only the stream labels, and the entries inside each stream have their
// structuredMetadata and parsed fields populated with the structured metadata labels and the parsed labels respectively.
// Otherwise, the stream labels are the whole series labels, including the stream labels, structured metadata labels, and parsed labels.
func readStreams(i iter.EntryIterator, size uint32, dir logproto.Direction, interval time.Duration, categorizeLabels bool) (logqlmodel.Streams, error) {
streams := map[string]*logproto.Stream{}
respSize := uint32(0)
// lastEntry should be a really old time so that the first comparison is always true; we use a negative
// value here because many unit tests start at time.Unix(0,0).
lastEntry := lastEntryMinTime
for respSize < size && i.Next() {
labels, entry := i.Labels(), i.Entry()
entry := i.Entry()
forwardShouldOutput := dir == logproto.FORWARD &&
(i.Entry().Timestamp.Equal(lastEntry.Add(interval)) || i.Entry().Timestamp.After(lastEntry.Add(interval)))
(entry.Timestamp.Equal(lastEntry.Add(interval)) || entry.Timestamp.After(lastEntry.Add(interval)))
backwardShouldOutput := dir == logproto.BACKWARD &&
(i.Entry().Timestamp.Equal(lastEntry.Add(-interval)) || i.Entry().Timestamp.Before(lastEntry.Add(-interval)))
(entry.Timestamp.Equal(lastEntry.Add(-interval)) || entry.Timestamp.Before(lastEntry.Add(-interval)))
// If step == 0 output every line.
// If lastEntry.Unix < 0 this is the first pass through the loop and we should output the line.
// Then check to see if the entry is equal to, or past a forward or reverse step
if interval == 0 || lastEntry.Unix() < 0 || forwardShouldOutput || backwardShouldOutput {
stream, ok := streams[labels]
streamLabels := i.Labels()
// If categorizeLabels is true, we need to remove the structured metadata labels and parsed labels from the stream labels.
// TODO(salvacorts): If this is too slow, given this is in the hot path, we can consider doing this in the iterator.
if categorizeLabels && (len(entry.StructuredMetadata) > 0 || len(entry.Parsed) > 0) {
lbls, err := syntax.ParseLabels(streamLabels)
if err != nil {
return nil, fmt.Errorf("failed to parse series labels to categorize labels: %w", err)
}
builder := labels.NewBuilder(lbls)
for _, label := range entry.StructuredMetadata {
builder.Del(label.Name)
}
for _, label := range entry.Parsed {
builder.Del(label.Name)
}
streamLabels = builder.Labels().String()
}
stream, ok := streams[streamLabels]
if !ok {
stream = &logproto.Stream{
Labels: labels,
Labels: streamLabels,
}
streams[labels] = stream
streams[streamLabels] = stream
}
stream.Entries = append(stream.Entries, entry)
lastEntry = i.Entry().Timestamp

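The label-removal step in `readStreams` above can be sketched in isolation. This is a hypothetical, self-contained approximation; the real code uses `labels.NewBuilder` from the Prometheus labels package:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// streamOnlyLabels drops structured-metadata and parsed label names from the
// full label set, mirroring the builder.Del calls in readStreams.
func streamOnlyLabels(all map[string]string, structuredMetadata, parsed []string) string {
	out := make(map[string]string, len(all))
	for k, v := range all {
		out[k] = v
	}
	for _, name := range structuredMetadata {
		delete(out, name)
	}
	for _, name := range parsed {
		delete(out, name)
	}
	// Render in the sorted {k="v", ...} form used for stream labels.
	keys := make([]string, 0, len(out))
	for k := range out {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("%s=%q", k, out[k]))
	}
	return "{" + strings.Join(parts, ", ") + "}"
}

func main() {
	all := map[string]string{"job": "fake", "traceID": "123", "msg": "hello"}
	fmt.Println(streamOnlyLabels(all, []string{"traceID"}, []string{"msg"}))
	// {job="fake"}
}
```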
@ -581,7 +581,7 @@ func TestEngine_LogsInstantQuery(t *testing.T) {
{T: 60 * 1000, F: 1.2, Metric: labels.FromStrings("app", "fuzz")},
},
},
//sort and sort_desc
// sort and sort_desc
{
`sort(rate(({app=~"foo|bar"} |~".+bar")[1m])) + 1`, time.Unix(60, 0), logproto.FORWARD, 100,
[][]logproto.Series{
@ -1575,7 +1575,7 @@ func TestEngine_RangeQuery(t *testing.T) {
},
promql.Matrix{
promql.Series{
//vector result
// vector result
Metric: labels.Labels(nil),
Floats: []promql.FPoint{{T: 60000, F: 0}, {T: 80000, F: 0}, {T: 100000, F: 0}, {T: 120000, F: 0}, {T: 140000, F: 0}, {T: 160000, F: 0}, {T: 180000, F: 0}}},
promql.Series{

@ -383,9 +383,9 @@ func (lf *LabelsFormatter) Process(ts int64, l []byte, lbs *LabelsBuilder) ([]by
var data interface{}
for _, f := range lf.formats {
if f.Rename {
v, ok := lbs.Get(f.Value)
v, category, ok := lbs.GetWithCategory(f.Value)
if ok {
lbs.Set(f.Name, v)
lbs.Set(category, f.Name, v)
lbs.Del(f.Value)
}
continue
@ -399,7 +399,7 @@ func (lf *LabelsFormatter) Process(ts int64, l []byte, lbs *LabelsBuilder) ([]by
lbs.SetErrorDetails(err.Error())
continue
}
lbs.Set(f.Name, lf.buf.String())
lbs.Set(ParsedLabel, f.Name, lf.buf.String())
}
return l, true
}

@ -3,9 +3,8 @@ package log
import (
"errors"
"fmt"
"unicode"
"net/netip"
"unicode"
"github.com/prometheus/prometheus/model/labels"
"go4.org/netipx"

@ -11,25 +11,38 @@ import (
const MaxInternedStrings = 1024
var EmptyLabelsResult = NewLabelsResult(labels.Labels{}, labels.Labels{}.Hash())
var EmptyLabelsResult = NewLabelsResult(labels.EmptyLabels().String(), labels.EmptyLabels().Hash(), labels.EmptyLabels(), labels.EmptyLabels(), labels.EmptyLabels())
// LabelsResult is a computed labels result that contains the labels set with associated string and hash.
// It is mainly used for caching and returning label computations out of pipelines and stages.
type LabelsResult interface {
String() string
Labels() labels.Labels
Stream() labels.Labels
StructuredMetadata() labels.Labels
Parsed() labels.Labels
Hash() uint64
}
// NewLabelsResult creates a new LabelsResult from a labels set and a hash.
func NewLabelsResult(lbs labels.Labels, hash uint64) LabelsResult {
return &labelsResult{lbs: lbs, s: lbs.String(), h: hash}
// NewLabelsResult creates a new LabelsResult.
// It takes the string representation of all the labels, their hash, and the labels split by category.
func NewLabelsResult(allLabelsStr string, hash uint64, stream, structuredMetadata, parsed labels.Labels) LabelsResult {
return &labelsResult{
s: allLabelsStr,
h: hash,
stream: stream,
structuredMetadata: structuredMetadata,
parsed: parsed,
}
}
type labelsResult struct {
lbs labels.Labels
s string
h uint64
s string
h uint64
stream labels.Labels
structuredMetadata labels.Labels
parsed labels.Labels
}
func (l labelsResult) String() string {
@ -37,13 +50,34 @@ func (l labelsResult) String() string {
}
func (l labelsResult) Labels() labels.Labels {
return l.lbs
return flattenLabels(nil, l.stream, l.structuredMetadata, l.parsed)
}
func (l labelsResult) Hash() uint64 {
return l.h
}
func (l labelsResult) Stream() labels.Labels {
if len(l.stream) == 0 {
return nil
}
return l.stream
}
func (l labelsResult) StructuredMetadata() labels.Labels {
if len(l.structuredMetadata) == 0 {
return nil
}
return l.structuredMetadata
}
func (l labelsResult) Parsed() labels.Labels {
if len(l.parsed) == 0 {
return nil
}
return l.parsed
}
type hasher struct {
buf []byte // buffer for computing hash without bytes slice allocation.
}
@ -62,11 +96,37 @@ func (h *hasher) Hash(lbs labels.Labels) uint64 {
return hash
}
type LabelCategory int
const (
StreamLabel LabelCategory = iota
StructuredMetadataLabel
ParsedLabel
InvalidCategory
numValidCategories = 3
)
var allCategories = []LabelCategory{
StreamLabel,
StructuredMetadataLabel,
ParsedLabel,
}
func categoriesContain(categories []LabelCategory, category LabelCategory) bool {
for _, c := range categories {
if c == category {
return true
}
}
return false
}
// BaseLabelsBuilder is a label builder used by pipeline and stages.
// Only one base builder is used and it contains cache for each LabelsBuilders.
type BaseLabelsBuilder struct {
del []string
add []labels.Label
add [numValidCategories]labels.Labels
// nolint:structcheck
// https://github.com/golangci/golangci-lint/issues/826
err string
@ -98,9 +158,14 @@ func NewBaseLabelsBuilderWithGrouping(groups []string, parserKeyHints ParserHint
parserKeyHints = noParserHints
}
const labelsCapacity = 16
return &BaseLabelsBuilder{
del: make([]string, 0, 5),
add: make([]labels.Label, 0, 16),
del: make([]string, 0, 5),
add: [numValidCategories]labels.Labels{
StreamLabel: make(labels.Labels, 0, labelsCapacity),
StructuredMetadataLabel: make(labels.Labels, 0, labelsCapacity),
ParsedLabel: make(labels.Labels, 0, labelsCapacity),
},
resultCache: make(map[uint64]LabelsResult),
hasher: newHasher(),
groups: groups,
@ -110,7 +175,7 @@ func NewBaseLabelsBuilderWithGrouping(groups []string, parserKeyHints ParserHint
}
}
// NewLabelsBuilder creates a new base labels builder.
// NewBaseLabelsBuilder creates a new base labels builder.
func NewBaseLabelsBuilder() *BaseLabelsBuilder {
return NewBaseLabelsBuilderWithGrouping(nil, noParserHints, false, false)
}
@ -126,7 +191,7 @@ func (b *BaseLabelsBuilder) ForLabels(lbs labels.Labels, hash uint64) *LabelsBui
}
return res
}
labelResult := NewLabelsResult(lbs, hash)
labelResult := NewLabelsResult(lbs.String(), hash, lbs, labels.EmptyLabels(), labels.EmptyLabels())
b.resultCache[hash] = labelResult
res := &LabelsBuilder{
base: lbs,
@ -139,7 +204,9 @@ func (b *BaseLabelsBuilder) ForLabels(lbs labels.Labels, hash uint64) *LabelsBui
// Reset clears all current state for the builder.
func (b *BaseLabelsBuilder) Reset() {
b.del = b.del[:0]
b.add = b.add[:0]
for k := range b.add {
b.add[k] = b.add[k][:0]
}
b.err = ""
b.errDetails = ""
b.parserKeyHints.Reset()
@ -151,6 +218,27 @@ func (b *BaseLabelsBuilder) ParserLabelHints() ParserHint {
return b.parserKeyHints
}
func (b *BaseLabelsBuilder) hasDel() bool {
return len(b.del) > 0
}
func (b *BaseLabelsBuilder) hasAdd() bool {
for _, lbls := range b.add {
if len(lbls) > 0 {
return true
}
}
return false
}
func (b *BaseLabelsBuilder) sizeAdd() int {
var length int
for _, lbls := range b.add {
length += len(lbls)
}
return length
}
// SetErr sets the error label.
func (b *LabelsBuilder) SetErr(err string) *LabelsBuilder {
b.err = err
@ -195,33 +283,42 @@ func (b *LabelsBuilder) BaseHas(key string) bool {
return b.base.Has(key)
}
// Get returns the value of a labels key if it exists.
func (b *LabelsBuilder) Get(key string) (string, bool) {
for _, a := range b.add {
if a.Name == key {
return a.Value, true
// GetWithCategory returns the value and the category of a labels key if it exists.
func (b *LabelsBuilder) GetWithCategory(key string) (string, LabelCategory, bool) {
for category, lbls := range b.add {
for _, l := range lbls {
if l.Name == key {
return l.Value, LabelCategory(category), true
}
}
}
for _, d := range b.del {
if d == key {
return "", false
return "", InvalidCategory, false
}
}
for _, l := range b.base {
if l.Name == key {
return l.Value, true
return l.Value, StreamLabel, true
}
}
return "", false
return "", InvalidCategory, false
}
func (b *LabelsBuilder) Get(key string) (string, bool) {
v, _, ok := b.GetWithCategory(key)
return v, ok
}
// Del deletes the label of the given name.
func (b *LabelsBuilder) Del(ns ...string) *LabelsBuilder {
for _, n := range ns {
for i, a := range b.add {
if a.Name == n {
b.add = append(b.add[:i], b.add[i+1:]...)
for category, lbls := range b.add {
for i, a := range lbls {
if a.Name == n {
b.add[category] = append(lbls[:i], lbls[i+1:]...)
}
}
}
b.del = append(b.del, n)
@ -230,14 +327,14 @@ func (b *LabelsBuilder) Del(ns ...string) *LabelsBuilder {
}
// Set the name/value pair as a label.
func (b *LabelsBuilder) Set(n, v string) *LabelsBuilder {
for i, a := range b.add {
func (b *LabelsBuilder) Set(category LabelCategory, n, v string) *LabelsBuilder {
for i, a := range b.add[category] {
if a.Name == n {
b.add[i].Value = v
b.add[category][i].Value = v
return b
}
}
b.add = append(b.add, labels.Label{Name: n, Value: v})
b.add[category] = append(b.add[category], labels.Label{Name: n, Value: v})
// Sometimes labels are set and later modified. Only record
// each label once
@ -247,73 +344,101 @@ func (b *LabelsBuilder) Set(n, v string) *LabelsBuilder {
// Add the labels to the builder. If a label with the same name
// already exists in the base labels, a suffix is added to the name.
func (b *LabelsBuilder) Add(labels ...labels.Label) *LabelsBuilder {
func (b *LabelsBuilder) Add(category LabelCategory, labels ...labels.Label) *LabelsBuilder {
for _, l := range labels {
name := l.Name
if b.BaseHas(name) {
name = fmt.Sprintf("%s%s", name, duplicateSuffix)
}
b.Set(name, l.Value)
b.Set(category, name, l.Value)
}
return b
}
// Labels returns the labels from the builder. If no modifications
// were made, the original labels are returned.
func (b *LabelsBuilder) labels() labels.Labels {
b.buf = b.UnsortedLabels(b.buf)
func (b *LabelsBuilder) labels(categories ...LabelCategory) labels.Labels {
b.buf = b.UnsortedLabels(b.buf, categories...)
sort.Sort(b.buf)
return b.buf
}
func (b *LabelsBuilder) appendErrors(buf labels.Labels) labels.Labels {
if b.err != "" {
buf = append(buf, labels.Label{Name: logqlmodel.ErrorLabel, Value: b.err})
buf = append(buf, labels.Label{
Name: logqlmodel.ErrorLabel,
Value: b.err,
})
}
if b.errDetails != "" {
buf = append(buf, labels.Label{Name: logqlmodel.ErrorDetailsLabel, Value: b.errDetails})
buf = append(buf, labels.Label{
Name: logqlmodel.ErrorDetailsLabel,
Value: b.errDetails,
})
}
return buf
}
func (b *LabelsBuilder) UnsortedLabels(buf labels.Labels) labels.Labels {
if len(b.del) == 0 && len(b.add) == 0 {
func (b *LabelsBuilder) UnsortedLabels(buf labels.Labels, categories ...LabelCategory) labels.Labels {
if categories == nil {
categories = allCategories
}
if !b.hasDel() && !b.hasAdd() && categoriesContain(categories, StreamLabel) {
if buf == nil {
buf = make(labels.Labels, 0, len(b.base)+1)
buf = make(labels.Labels, 0, len(b.base)+1) // +1 for error label.
} else {
buf = buf[:0]
}
buf = append(buf, b.base...)
return b.appendErrors(buf)
if categoriesContain(categories, ParsedLabel) {
buf = b.appendErrors(buf)
}
return buf
}
// In the general case, labels are removed, modified or moved
// rather than added.
if buf == nil {
buf = make(labels.Labels, 0, len(b.base)+len(b.add)+1)
size := len(b.base) + b.sizeAdd() + 1
buf = make(labels.Labels, 0, size)
} else {
buf = buf[:0]
}
Outer:
for _, l := range b.base {
for _, n := range b.del {
if l.Name == n {
continue Outer
if categoriesContain(categories, StreamLabel) {
Outer:
for _, l := range b.base {
// Skip stream labels to be deleted
for _, n := range b.del {
if l.Name == n {
continue Outer
}
}
}
for _, la := range b.add {
if l.Name == la.Name {
continue Outer
// Skip stream labels whose value will be replaced
for _, lbls := range b.add {
for _, la := range lbls {
if l.Name == la.Name {
continue Outer
}
}
}
buf = append(buf, l)
}
buf = append(buf, l)
}
buf = append(buf, b.add...)
return b.appendErrors(buf)
for _, category := range categories {
buf = append(buf, b.add[category]...)
}
if (b.HasErr() || b.HasErrorDetails()) && categoriesContain(categories, ParsedLabel) {
buf = b.appendErrors(buf)
}
return buf
}
func (b *LabelsBuilder) Map() map[string]string {
if len(b.del) == 0 && len(b.add) == 0 && b.err == "" {
if !b.hasDel() && !b.hasAdd() && !b.HasErr() {
if b.baseMap == nil {
b.baseMap = b.base.Map()
}
@ -333,18 +458,51 @@ func (b *LabelsBuilder) Map() map[string]string {
// No grouping is applied and the cache is used when possible.
func (b *LabelsBuilder) LabelsResult() LabelsResult {
// unchanged path.
if len(b.del) == 0 && len(b.add) == 0 && b.err == "" {
if !b.hasDel() && !b.hasAdd() && !b.HasErr() {
return b.currentResult
}
return b.toResult(b.labels())
stream := b.labels(StreamLabel).Copy()
structuredMetadata := b.labels(StructuredMetadataLabel).Copy()
parsed := b.labels(ParsedLabel).Copy()
b.buf = flattenLabels(b.buf, stream, structuredMetadata, parsed)
hash := b.hasher.Hash(b.buf)
if cached, ok := b.resultCache[hash]; ok {
return cached
}
result := NewLabelsResult(b.buf.String(), hash, stream, structuredMetadata, parsed)
b.resultCache[hash] = result
return result
}
func (b *BaseLabelsBuilder) toResult(buf labels.Labels) LabelsResult {
func flattenLabels(buf labels.Labels, many ...labels.Labels) labels.Labels {
var size int
for _, lbls := range many {
size += len(lbls)
}
if buf == nil || cap(buf) < size {
buf = make(labels.Labels, 0, size)
} else {
buf = buf[:0]
}
for _, lbls := range many {
buf = append(buf, lbls...)
}
sort.Sort(buf)
return buf
}
func (b *BaseLabelsBuilder) toUncategorizedResult(buf labels.Labels) LabelsResult {
hash := b.hasher.Hash(buf)
if cached, ok := b.resultCache[hash]; ok {
return cached
}
res := NewLabelsResult(buf.Copy(), hash)
res := NewLabelsResult(buf.String(), hash, buf.Copy(), nil, nil)
b.resultCache[hash] = res
return res
}
@ -352,7 +510,7 @@ func (b *BaseLabelsBuilder) toResult(buf labels.Labels) LabelsResult {
// GroupedLabels returns the LabelsResult from the builder.
// Groups are applied and the cache is used when possible.
func (b *LabelsBuilder) GroupedLabels() LabelsResult {
if b.err != "" {
if b.HasErr() {
// We need to return now before applying grouping otherwise the error might get lost.
return b.LabelsResult()
}
@ -360,7 +518,7 @@ func (b *LabelsBuilder) GroupedLabels() LabelsResult {
return EmptyLabelsResult
}
// unchanged path.
if len(b.del) == 0 && len(b.add) == 0 {
if !b.hasDel() && !b.hasAdd() {
if len(b.groups) == 0 {
return b.currentResult
}
@ -391,9 +549,11 @@ Outer:
}
}
for _, la := range b.add {
if g == la.Name {
b.buf = append(b.buf, la)
continue Outer
for _, l := range la {
if g == l.Name {
b.buf = append(b.buf, l)
continue Outer
}
}
}
for _, l := range b.base {
@ -403,12 +563,12 @@ Outer:
}
}
}
return b.toResult(b.buf)
return b.toUncategorizedResult(b.buf)
}
func (b *LabelsBuilder) withoutResult() LabelsResult {
if b.buf == nil {
size := len(b.base) + len(b.add) - len(b.del) - len(b.groups)
size := len(b.base) + b.sizeAdd() - len(b.del) - len(b.groups)
if size < 0 {
size = 0
}
@ -423,9 +583,11 @@ Outer:
continue Outer
}
}
for _, lbls := range b.add {
for _, la := range lbls {
if l.Name == la.Name {
continue Outer
}
}
}
for _, lg := range b.groups {
@ -435,17 +597,20 @@ Outer:
}
b.buf = append(b.buf, l)
}
for _, lbls := range b.add {
OuterAdd:
for _, la := range lbls {
for _, lg := range b.groups {
if la.Name == lg {
continue OuterAdd
}
}
b.buf = append(b.buf, la)
}
}
sort.Sort(b.buf)
return b.toUncategorizedResult(b.buf)
}
func (b *LabelsBuilder) toBaseGroup() LabelsResult {
@ -458,7 +623,7 @@ func (b *LabelsBuilder) toBaseGroup() LabelsResult {
} else {
lbs = labels.NewBuilder(b.base).Keep(b.groups...).Labels()
}
res := NewLabelsResult(lbs.String(), lbs.Hash(), lbs, nil, nil)
b.groupedResult = res
return res
}

@ -4,6 +4,7 @@ import (
"testing"
"github.com/prometheus/prometheus/model/labels"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/grafana/loki/pkg/logqlmodel"
@ -13,22 +14,24 @@ func TestLabelsBuilder_Get(t *testing.T) {
lbs := labels.FromStrings("already", "in")
b := NewBaseLabelsBuilder().ForLabels(lbs, lbs.Hash())
b.Reset()
b.Set(StructuredMetadataLabel, "foo", "bar")
b.Set(ParsedLabel, "bar", "buzz")
b.Del("foo")
_, _, ok := b.GetWithCategory("foo")
require.False(t, ok)
v, category, ok := b.GetWithCategory("bar")
require.True(t, ok)
require.Equal(t, "buzz", v)
require.Equal(t, ParsedLabel, category)
v, category, ok = b.GetWithCategory("already")
require.True(t, ok)
require.Equal(t, "in", v)
require.Equal(t, StreamLabel, category)
b.Del("bar")
_, _, ok = b.GetWithCategory("bar")
require.False(t, ok)
b.Del("already")
_, _, ok = b.GetWithCategory("already")
require.False(t, ok)
}
@ -37,22 +40,30 @@ func TestLabelsBuilder_LabelsError(t *testing.T) {
b := NewBaseLabelsBuilder().ForLabels(lbs, lbs.Hash())
b.Reset()
b.SetErr("err")
lbsWithErr := b.LabelsResult()
expectedLbs := labels.FromStrings(
logqlmodel.ErrorLabel, "err",
"already", "in",
)
require.Equal(t, expectedLbs, lbsWithErr.Labels())
require.Equal(t, expectedLbs.String(), lbsWithErr.String())
require.Equal(t, expectedLbs.Hash(), lbsWithErr.Hash())
require.Equal(t, labels.FromStrings("already", "in"), lbsWithErr.Stream())
require.Nil(t, lbsWithErr.StructuredMetadata())
require.Equal(t, labels.FromStrings(logqlmodel.ErrorLabel, "err"), lbsWithErr.Parsed())
// make sure the original labels are unchanged.
require.Equal(t, labels.FromStrings("already", "in"), lbs)
}
func TestLabelsBuilder_LabelsResult(t *testing.T) {
strs := []string{
"namespace", "loki",
"job", "us-central1/loki",
"cluster", "us-central1",
"ToReplace", "text",
}
lbs := labels.FromStrings(strs...)
b := NewBaseLabelsBuilder().ForLabels(lbs, lbs.Hash())
b.Reset()
@ -61,19 +72,38 @@ func TestLabelsBuilder_LabelsResult(t *testing.T) {
withErr := labels.FromStrings(append(strs, logqlmodel.ErrorLabel, "err")...)
assertLabelResult(t, withErr, b.LabelsResult())
b.Set(StructuredMetadataLabel, "foo", "bar")
b.Set(StreamLabel, "namespace", "tempo")
b.Set(ParsedLabel, "buzz", "fuzz")
b.Set(ParsedLabel, "ToReplace", "other")
b.Del("job")
expectedStreamLbls := labels.FromStrings(
"namespace", "tempo",
"cluster", "us-central1",
)
expectedStucturedMetadataLbls := labels.FromStrings(
"foo", "bar",
)
expectedParsedLbls := labels.FromStrings(
logqlmodel.ErrorLabel, "err",
"buzz", "fuzz",
"ToReplace", "other",
)
expected := make(labels.Labels, 0, len(expectedStreamLbls)+len(expectedStucturedMetadataLbls)+len(expectedParsedLbls))
expected = append(expected, expectedStreamLbls...)
expected = append(expected, expectedStucturedMetadataLbls...)
expected = append(expected, expectedParsedLbls...)
expected = labels.New(expected...)
assertLabelResult(t, expected, b.LabelsResult())
// cached.
assertLabelResult(t, expected, b.LabelsResult())
actual := b.LabelsResult()
assert.Equal(t, expectedStreamLbls, actual.Stream())
assert.Equal(t, expectedStucturedMetadataLbls, actual.StructuredMetadata())
assert.Equal(t, expectedParsedLbls, actual.Parsed())
}
func TestLabelsBuilder_GroupedLabelsResult(t *testing.T) {
@ -89,9 +119,9 @@ func TestLabelsBuilder_GroupedLabelsResult(t *testing.T) {
assertLabelResult(t, withErr, b.GroupedLabels())
b.Reset()
b.Set(StructuredMetadataLabel, "foo", "bar")
b.Set(StreamLabel, "namespace", "tempo")
b.Set(ParsedLabel, "buzz", "fuzz")
b.Del("job")
expected := labels.FromStrings("namespace", "tempo")
assertLabelResult(t, expected, b.GroupedLabels())
@ -104,13 +134,13 @@ func TestLabelsBuilder_GroupedLabelsResult(t *testing.T) {
b.Del("job")
assertLabelResult(t, labels.EmptyLabels(), b.GroupedLabels())
b.Reset()
b.Set(StreamLabel, "namespace", "tempo")
assertLabelResult(t, labels.FromStrings("job", "us-central1/loki"), b.GroupedLabels())
b = NewBaseLabelsBuilderWithGrouping([]string{"job"}, nil, true, false).ForLabels(lbs, lbs.Hash())
b.Del("job")
b.Set(StructuredMetadataLabel, "foo", "bar")
b.Set(StreamLabel, "job", "something")
expected = labels.FromStrings("namespace", "loki",
"cluster", "us-central1",
"foo", "bar",
@ -118,8 +148,8 @@ func TestLabelsBuilder_GroupedLabelsResult(t *testing.T) {
assertLabelResult(t, expected, b.GroupedLabels())
b = NewBaseLabelsBuilderWithGrouping(nil, nil, false, false).ForLabels(lbs, lbs.Hash())
b.Set(StructuredMetadataLabel, "foo", "bar")
b.Set(StreamLabel, "job", "something")
expected = labels.FromStrings("namespace", "loki",
"job", "something",
"cluster", "us-central1",

@ -82,7 +82,7 @@ type streamLineSampleExtractor struct {
func (l *streamLineSampleExtractor) Process(ts int64, line []byte, structuredMetadata ...labels.Label) (float64, LabelsResult, bool) {
l.builder.Reset()
l.builder.Add(StructuredMetadataLabel, structuredMetadata...)
// short circuit.
if l.Stage == NoopStage {
@ -174,7 +174,7 @@ func (l *labelSampleExtractor) ForStream(labels labels.Labels) StreamSampleExtra
func (l *streamLabelSampleExtractor) Process(ts int64, line []byte, structuredMetadata ...labels.Label) (float64, LabelsResult, bool) {
// Apply the pipeline first.
l.builder.Reset()
l.builder.Add(StructuredMetadataLabel, structuredMetadata...)
line, ok := l.preStage.Process(ts, line, l.builder)
if !ok {
return 0, nil, false

@ -136,7 +136,7 @@ func (j *JSONParser) parseLabelValue(key, value []byte, dataType jsonparser.Valu
if !ok {
return nil
}
j.lbs.Set(ParsedLabel, key, readValue(value, dataType))
if !j.parserHints.ShouldContinueParsingLine(key, j.lbs) {
return errLabelDoesNotMatch
}
@ -166,7 +166,7 @@ func (j *JSONParser) parseLabelValue(key, value []byte, dataType jsonparser.Valu
return nil
}
j.lbs.Set(ParsedLabel, keyString, readValue(value, dataType))
if !j.parserHints.ShouldContinueParsingLine(keyString, j.lbs) {
return errLabelDoesNotMatch
}
@ -272,7 +272,7 @@ func (r *RegexpParser) Process(_ int64, line []byte, lbs *LabelsBuilder) ([]byte
continue
}
lbs.Set(ParsedLabel, key, string(value))
if !parserHints.ShouldContinueParsingLine(key, lbs) {
return line, false
}
@ -348,7 +348,7 @@ func (l *LogfmtParser) Process(_ int64, line []byte, lbs *LabelsBuilder) ([]byte
continue
}
lbs.Set(ParsedLabel, key, string(val))
if !parserHints.ShouldContinueParsingLine(key, lbs) {
return line, false
}
@ -410,7 +410,7 @@ func (l *PatternParser) Process(_ int64, line []byte, lbs *LabelsBuilder) ([]byt
continue
}
lbs.Set(ParsedLabel, name, string(m))
if !parserHints.ShouldContinueParsingLine(name, lbs) {
return line, false
}
@ -469,7 +469,7 @@ func (l *LogfmtExpressionParser) Process(_ int64, line []byte, lbs *LabelsBuilde
for id, paths := range l.expressions {
keys[id] = fmt.Sprintf("%v", paths...)
if !lbs.BaseHas(id) {
lbs.Set(ParsedLabel, id, "")
}
}
@ -523,7 +523,7 @@ func (l *LogfmtExpressionParser) Process(_ int64, line []byte, lbs *LabelsBuilde
}
}
lbs.Set(ParsedLabel, key, string(val))
if lbs.ParserLabelHints().AllRequiredExtracted() {
break
@ -613,9 +613,9 @@ func (j *JSONExpressionParser) Process(_ int64, line []byte, lbs *LabelsBuilder)
switch typ {
case jsonparser.Null:
lbs.Set(ParsedLabel, key, "")
default:
lbs.Set(ParsedLabel, key, unescapeJSONString(data))
}
matches++
@ -625,7 +625,7 @@ func (j *JSONExpressionParser) Process(_ int64, line []byte, lbs *LabelsBuilder)
if matches < len(j.ids) {
for _, id := range j.ids {
if _, ok := lbs.Get(id); !ok {
lbs.Set(ParsedLabel, id, "")
}
}
}
@ -695,7 +695,7 @@ func addErrLabel(msg string, err error, lbs *LabelsBuilder) {
}
if lbs.ParserLabelHints().PreserveError() {
lbs.Set(ParsedLabel, logqlmodel.PreserveErrorLabel, "true")
}
}
@ -746,7 +746,7 @@ func (u *UnpackParser) unpack(entry []byte, lbs *LabelsBuilder) ([]byte, error)
// flush the buffer if we found a packed entry.
if isPacked {
for i := 0; i < len(u.lbsBuffer); i = i + 2 {
lbs.Set(ParsedLabel, u.lbsBuffer[i], u.lbsBuffer[i+1])
if !lbs.ParserLabelHints().ShouldContinueParsingLine(u.lbsBuffer[i], lbs) {
return entry, errLabelDoesNotMatch
}

@ -187,8 +187,9 @@ func TestLabelShortCircuit(t *testing.T) {
_, result = tt.p.Process(0, tt.line, lbs)
require.Len(t, lbs.labels(), 1)
name, category, ok := lbs.GetWithCategory("name")
require.True(t, ok)
require.Equal(t, ParsedLabel, category)
require.Contains(t, name, "text1")
})
}

@ -89,7 +89,7 @@ type noopStreamPipeline struct {
func (n noopStreamPipeline) Process(_ int64, line []byte, structuredMetadata ...labels.Label) ([]byte, LabelsResult, bool) {
n.builder.Reset()
n.builder.Add(StructuredMetadataLabel, structuredMetadata...)
return line, n.builder.LabelsResult(), true
}
@ -204,7 +204,7 @@ func (p *pipeline) Reset() {
func (p *streamPipeline) Process(ts int64, line []byte, structuredMetadata ...labels.Label) ([]byte, LabelsResult, bool) {
var ok bool
p.builder.Reset()
p.builder.Add(StructuredMetadataLabel, structuredMetadata...)
for _, s := range p.stages {
line, ok = s.Process(ts, line, p.builder)

@ -16,39 +16,44 @@ func TestNoopPipeline(t *testing.T) {
l, lbr, matches := pipeline.ForStream(lbs).Process(0, []byte(""))
require.Equal(t, []byte(""), l)
require.Equal(t, NewLabelsResult(lbs.String(), lbs.Hash(), lbs, labels.EmptyLabels(), labels.EmptyLabels()), lbr)
require.Equal(t, lbs.Hash(), lbr.Hash())
require.Equal(t, lbs.String(), lbr.String())
require.Equal(t, true, matches)
ls, lbr, matches := pipeline.ForStream(lbs).ProcessString(0, "")
require.Equal(t, "", ls)
require.Equal(t, NewLabelsResult(lbs.String(), lbs.Hash(), lbs, labels.EmptyLabels(), labels.EmptyLabels()), lbr)
require.Equal(t, lbs.Hash(), lbr.Hash())
require.Equal(t, lbs.String(), lbr.String())
require.Equal(t, true, matches)
structuredMetadata := labels.FromStrings("y", "1", "z", "2")
expectedLabelsResults := append(lbs, structuredMetadata...)
l, lbr, matches = pipeline.ForStream(lbs).Process(0, []byte(""), structuredMetadata...)
require.Equal(t, []byte(""), l)
require.Equal(t, NewLabelsResult(expectedLabelsResults.String(), expectedLabelsResults.Hash(), lbs, structuredMetadata, labels.EmptyLabels()), lbr)
require.Equal(t, expectedLabelsResults.Hash(), lbr.Hash())
require.Equal(t, expectedLabelsResults.String(), lbr.String())
require.Equal(t, true, matches)
ls, lbr, matches = pipeline.ForStream(lbs).ProcessString(0, "", structuredMetadata...)
require.Equal(t, "", ls)
require.Equal(t, NewLabelsResult(expectedLabelsResults.String(), expectedLabelsResults.Hash(), lbs, structuredMetadata, labels.EmptyLabels()), lbr)
require.Equal(t, expectedLabelsResults.Hash(), lbr.Hash())
require.Equal(t, expectedLabelsResults.String(), lbr.String())
require.Equal(t, true, matches)
// test duplicated structured metadata with stream labels
expectedNonIndexedLabels := labels.FromStrings("foo_extracted", "baz", "y", "1", "z", "2")
expectedLabelsResults = labels.FromStrings("foo", "bar", "foo_extracted", "baz", "y", "1", "z", "2")
l, lbr, matches = pipeline.ForStream(lbs).Process(0, []byte(""), append(structuredMetadata, labels.Label{
Name: "foo", Value: "baz",
})...)
require.Equal(t, []byte(""), l)
require.Equal(t, NewLabelsResult(expectedLabelsResults.String(), expectedLabelsResults.Hash(), lbs, expectedNonIndexedLabels, labels.EmptyLabels()), lbr)
require.Equal(t, expectedLabelsResults.Hash(), lbr.Hash())
require.Equal(t, expectedLabelsResults.String(), lbr.String())
require.Equal(t, true, matches)
pipeline.Reset()
@ -64,12 +69,16 @@ func TestPipeline(t *testing.T) {
l, lbr, matches := p.ForStream(lbs).Process(0, []byte("line"))
require.Equal(t, []byte("lbs bar"), l)
require.Equal(t, NewLabelsResult(lbs.String(), lbs.Hash(), lbs, labels.EmptyLabels(), labels.EmptyLabels()), lbr)
require.Equal(t, lbs.Hash(), lbr.Hash())
require.Equal(t, lbs.String(), lbr.String())
require.Equal(t, true, matches)
ls, lbr, matches := p.ForStream(lbs).ProcessString(0, "line")
require.Equal(t, "lbs bar", ls)
require.Equal(t, NewLabelsResult(lbs.String(), lbs.Hash(), lbs, labels.EmptyLabels(), labels.EmptyLabels()), lbr)
require.Equal(t, lbs.Hash(), lbr.Hash())
require.Equal(t, lbs.String(), lbr.String())
require.Equal(t, true, matches)
l, lbr, matches = p.ForStream(labels.EmptyLabels()).Process(0, []byte("line"))
@ -84,12 +93,16 @@ func TestPipeline(t *testing.T) {
// Reset caches
p.baseBuilder.del = []string{"foo", "bar"}
p.baseBuilder.add = [numValidCategories]labels.Labels{
ParsedLabel: labels.FromStrings("baz", "blip"),
}
p.Reset()
require.Len(t, p.streamPipelines, 0)
require.Len(t, p.baseBuilder.del, 0)
for _, v := range p.baseBuilder.add {
require.Len(t, v, 0)
}
}
func TestPipelineWithStructuredMetadata(t *testing.T) {
@ -104,31 +117,38 @@ func TestPipelineWithStructuredMetadata(t *testing.T) {
l, lbr, matches := p.ForStream(lbs).Process(0, []byte("line"), structuredMetadata...)
require.Equal(t, []byte("lbs bar bob"), l)
require.Equal(t, NewLabelsResult(expectedLabelsResults.String(), expectedLabelsResults.Hash(), lbs, structuredMetadata, labels.EmptyLabels()), lbr)
require.Equal(t, expectedLabelsResults.Hash(), lbr.Hash())
require.Equal(t, expectedLabelsResults.String(), lbr.String())
require.Equal(t, true, matches)
ls, lbr, matches := p.ForStream(lbs).ProcessString(0, "line", structuredMetadata...)
require.Equal(t, "lbs bar bob", ls)
require.Equal(t, NewLabelsResult(expectedLabelsResults.String(), expectedLabelsResults.Hash(), lbs, structuredMetadata, labels.EmptyLabels()), lbr)
require.Equal(t, expectedLabelsResults.Hash(), lbr.Hash())
require.Equal(t, expectedLabelsResults.String(), lbr.String())
require.Equal(t, true, matches)
// test duplicated structured metadata with stream labels
expectedNonIndexedLabels := labels.FromStrings("user", "bob", "foo_extracted", "baz")
expectedLabelsResults = labels.FromStrings("foo", "bar", "foo_extracted", "baz")
expectedLabelsResults = append(expectedLabelsResults, structuredMetadata...)
l, lbr, matches = p.ForStream(lbs).Process(0, []byte("line"), append(structuredMetadata, labels.Label{
Name: "foo", Value: "baz",
})...)
require.Equal(t, []byte("lbs bar bob"), l)
require.Equal(t, NewLabelsResult(expectedLabelsResults.String(), expectedLabelsResults.Hash(), lbs, expectedNonIndexedLabels, labels.EmptyLabels()), lbr)
require.Equal(t, expectedLabelsResults.Hash(), lbr.Hash())
require.Equal(t, expectedLabelsResults.String(), lbr.String())
require.Equal(t, true, matches)
ls, lbr, matches = p.ForStream(lbs).ProcessString(0, "line", append(structuredMetadata, labels.Label{
Name: "foo", Value: "baz",
})...)
require.Equal(t, "lbs bar bob", ls)
require.Equal(t, NewLabelsResult(expectedLabelsResults.String(), expectedLabelsResults.Hash(), lbs, expectedNonIndexedLabels, labels.EmptyLabels()), lbr)
require.Equal(t, expectedLabelsResults.Hash(), lbr.Hash())
require.Equal(t, expectedLabelsResults.String(), lbr.String())
require.Equal(t, true, matches)
l, lbr, matches = p.ForStream(lbs).Process(0, []byte("line"))
@ -153,12 +173,16 @@ func TestPipelineWithStructuredMetadata(t *testing.T) {
// Reset caches
p.baseBuilder.del = []string{"foo", "bar"}
p.baseBuilder.add = [numValidCategories]labels.Labels{
ParsedLabel: labels.FromStrings("baz", "blip"),
}
p.Reset()
require.Len(t, p.streamPipelines, 0)
require.Len(t, p.baseBuilder.del, 0)
for _, v := range p.baseBuilder.add {
require.Len(t, v, 0)
}
}
func TestFilteringPipeline(t *testing.T) {
@ -358,6 +382,10 @@ func TestDropLabelsPipeline(t *testing.T) {
for i, line := range tt.lines {
_, finalLbs, _ := sp.Process(0, line)
require.Equal(t, tt.wantLabels[i], finalLbs.Labels())
require.Nil(t, finalLbs.Stream())
require.Nil(t, finalLbs.StructuredMetadata())
require.Equal(t, tt.wantLabels[i], finalLbs.Parsed())
require.Equal(t, tt.wantLabels[i].Hash(), finalLbs.Hash())
}
}
@ -436,7 +464,7 @@ func TestKeepLabelsPipeline(t *testing.T) {
labels.FromStrings(
"level", "debug",
),
labels.EmptyLabels(),
},
},
{
@ -464,8 +492,8 @@ func TestKeepLabelsPipeline(t *testing.T) {
labels.FromStrings(
"level", "info",
),
labels.EmptyLabels(),
labels.EmptyLabels(),
},
},
} {
@ -476,6 +504,15 @@ func TestKeepLabelsPipeline(t *testing.T) {
finalLine, finalLbs, _ := sp.Process(0, line)
require.Equal(t, tt.wantLine[i], finalLine)
require.Equal(t, tt.wantLabels[i], finalLbs.Labels())
require.Nil(t, finalLbs.Stream())
require.Nil(t, finalLbs.StructuredMetadata())
if len(tt.wantLabels[i]) > 0 {
require.Equal(t, tt.wantLabels[i], finalLbs.Parsed())
} else {
require.Nil(t, finalLbs.Parsed())
}
require.Equal(t, tt.wantLabels[i].Hash(), finalLbs.Hash())
require.Equal(t, tt.wantLabels[i].String(), finalLbs.String())
}
})
}

@ -387,6 +387,7 @@ func (t *Loki) initQuerier() (services.Service, error) {
toMerge := []middleware.Interface{
httpreq.ExtractQueryMetricsMiddleware(),
httpreq.ExtractQueryTagsMiddleware(),
httpreq.PropagateHeadersMiddleware(httpreq.LokiEncodingFlagsHeader),
serverutil.RecoveryHTTPMiddleware,
t.HTTPAuthMiddleware,
serverutil.NewPrepopulateMiddleware(),
@ -898,7 +899,7 @@ func (t *Loki) initQueryFrontend() (_ services.Service, err error) {
toMerge := []middleware.Interface{
httpreq.ExtractQueryTagsMiddleware(),
httpreq.PropagateHeadersMiddleware(httpreq.LokiActorPathHeader, httpreq.LokiEncodingFlagsHeader),
serverutil.RecoveryHTTPMiddleware,
t.HTTPAuthMiddleware,
queryrange.StatsHTTPMiddleware,
@ -1402,7 +1403,7 @@ func (t *Loki) initBloomCompactorRing() (services.Service, error) {
t.Cfg.BloomCompactor.Ring.ListenPort = t.Cfg.Server.GRPCListenPort
// is LegacyMode needed?
// legacyReadMode := t.Cfg.LegacyReadTarget && t.isModuleActive(Read)
rm, err := lokiring.NewRingManager(bloomCompactorRingKey, lokiring.ServerMode, t.Cfg.BloomCompactor.Ring, 1, 1, util_log.Logger, prometheus.DefaultRegisterer)

@ -219,6 +219,10 @@ type EntryAdapter struct {
Timestamp time.Time `protobuf:"bytes,1,opt,name=timestamp,proto3,stdtime" json:"ts"`
Line string `protobuf:"bytes,2,opt,name=line,proto3" json:"line"`
StructuredMetadata []LabelPairAdapter `protobuf:"bytes,3,rep,name=structuredMetadata,proto3" json:"structuredMetadata,omitempty"`
// This field shouldn't be used by clients to push data to Loki.
// It is only used by Loki to return parsed log lines in query responses.
// TODO: Remove this field from the write path Proto.
Parsed []LabelPairAdapter `protobuf:"bytes,4,rep,name=parsed,proto3" json:"parsed,omitempty"`
}
func (m *EntryAdapter) Reset() { *m = EntryAdapter{} }
@ -274,6 +278,13 @@ func (m *EntryAdapter) GetStructuredMetadata() []LabelPairAdapter {
return nil
}
func (m *EntryAdapter) GetParsed() []LabelPairAdapter {
if m != nil {
return m.Parsed
}
return nil
}
func init() {
proto.RegisterType((*PushRequest)(nil), "logproto.PushRequest")
proto.RegisterType((*PushResponse)(nil), "logproto.PushResponse")
@ -285,39 +296,40 @@ func init() {
func init() { proto.RegisterFile("pkg/push/push.proto", fileDescriptor_35ec442956852c9e) }
var fileDescriptor_35ec442956852c9e = []byte{
// 527 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x53, 0xc1, 0x6e, 0xd3, 0x40,
0x10, 0xf5, 0x26, 0x6e, 0xda, 0x6e, 0x4a, 0xa9, 0x96, 0xb6, 0x18, 0xab, 0x5a, 0x47, 0x16, 0x87,
0x1c, 0xc0, 0x96, 0xc2, 0x81, 0x0b, 0x97, 0x58, 0x42, 0xea, 0xa1, 0x48, 0x95, 0x41, 0x20, 0x71,
0xdb, 0x34, 0x5b, 0xdb, 0xaa, 0xed, 0x35, 0xbb, 0x6b, 0xa4, 0xde, 0xf8, 0x84, 0xf2, 0x17, 0x7c,
0x01, 0xdf, 0xd0, 0x63, 0x8e, 0x15, 0x07, 0x43, 0x9c, 0x0b, 0xca, 0xa9, 0x9f, 0x80, 0xbc, 0xb6,
0x49, 0x28, 0x48, 0x5c, 0x36, 0x6f, 0x66, 0x67, 0xde, 0x7b, 0x99, 0x1d, 0xc3, 0x07, 0xd9, 0x45,
0xe0, 0x66, 0xb9, 0x08, 0xd5, 0xe1, 0x64, 0x9c, 0x49, 0x86, 0xb6, 0x62, 0x16, 0x28, 0x64, 0xee,
0x07, 0x2c, 0x60, 0x0a, 0xba, 0x15, 0xaa, 0xef, 0x4d, 0x2b, 0x60, 0x2c, 0x88, 0xa9, 0xab, 0xa2,
0x49, 0x7e, 0xee, 0xca, 0x28, 0xa1, 0x42, 0x92, 0x24, 0xab, 0x0b, 0xec, 0x77, 0xb0, 0x7f, 0x9a,
0x8b, 0xd0, 0xa7, 0x1f, 0x72, 0x2a, 0x24, 0x3a, 0x86, 0x9b, 0x42, 0x72, 0x4a, 0x12, 0x61, 0x80,
0x41, 0x77, 0xd8, 0x1f, 0x3d, 0x74, 0x5a, 0x05, 0xe7, 0xb5, 0xba, 0x18, 0x4f, 0x49, 0x26, 0x29,
0xf7, 0x0e, 0xbe, 0x15, 0x56, 0xaf, 0x4e, 0x2d, 0x0b, 0xab, 0xed, 0xf2, 0x5b, 0x60, 0xef, 0xc2,
0x9d, 0x9a, 0x58, 0x64, 0x2c, 0x15, 0xd4, 0xfe, 0x0c, 0xe0, 0xbd, 0x3f, 0x18, 0x90, 0x0d, 0x7b,
0x31, 0x99, 0xd0, 0xb8, 0x92, 0x02, 0xc3, 0x6d, 0x0f, 0x2e, 0x0b, 0xab, 0xc9, 0xf8, 0xcd, 0x2f,
0x1a, 0xc3, 0x4d, 0x9a, 0x4a, 0x1e, 0x51, 0x61, 0x74, 0x94, 0x9f, 0xc3, 0x95, 0x9f, 0x97, 0xa9,
0xe4, 0x97, 0xad, 0x9d, 0xfb, 0xd7, 0x85, 0xa5, 0x55, 0x46, 0x9a, 0x72, 0xbf, 0x05, 0xe8, 0x11,
0xd4, 0x43, 0x22, 0x42, 0xa3, 0x3b, 0x00, 0x43, 0xdd, 0xdb, 0x58, 0x16, 0x16, 0x78, 0xea, 0xab,
0x94, 0xfd, 0x02, 0xee, 0x9d, 0x54, 0x3a, 0xa7, 0x24, 0xe2, 0xad, 0x2b, 0x04, 0xf5, 0x94, 0x24,
0xb4, 0xf6, 0xe4, 0x2b, 0x8c, 0xf6, 0xe1, 0xc6, 0x47, 0x12, 0xe7, 0xd4, 0xe8, 0xa8, 0x64, 0x1d,
0xd8, 0x5f, 0x3b, 0x70, 0x67, 0xdd, 0x03, 0x3a, 0x86, 0xdb, 0xbf, 0xc7, 0xab, 0xfa, 0xfb, 0x23,
0xd3, 0xa9, 0x1f, 0xc0, 0x69, 0x1f, 0xc0, 0x79, 0xd3, 0x56, 0x78, 0xbb, 0x8d, 0xe5, 0x8e, 0x14,
0x57, 0xdf, 0x2d, 0xe0, 0xaf, 0x9a, 0xd1, 0x11, 0xd4, 0xe3, 0x28, 0x6d, 0xf4, 0xbc, 0xad, 0x65,
0x61, 0xa9, 0xd8, 0x57, 0x27, 0xca, 0x20, 0x12, 0x92, 0xe7, 0x67, 0x32, 0xe7, 0x74, 0xfa, 0x8a,
0x4a, 0x32, 0x25, 0x92, 0x18, 0x5d, 0x35, 0x1f, 0x73, 0x35, 0x9f, 0xbb, 0x7f, 0xcd, 0x7b, 0xdc,
0x08, 0x1e, 0xfd, 0xdd, 0xfd, 0x84, 0x25, 0x91, 0xa4, 0x49, 0x26, 0x2f, 0xfd, 0x7f, 0x70, 0xa3,
0x13, 0xd8, 0xcb, 0x08, 0x17, 0x74, 0x6a, 0xe8, 0xff, 0x55, 0x31, 0x1a, 0x95, 0xbd, 0xba, 0x63,
0x8d, 0xb9, 0xe1, 0x18, 0x8d, 0x61, 0xaf, 0x5a, 0x0d, 0xca, 0xd1, 0x73, 0xa8, 0x57, 0x08, 0x1d,
0xac, 0xf8, 0xd6, 0xb6, 0xd1, 0x3c, 0xbc, 0x9b, 0x6e, 0x76, 0x49, 0xf3, 0xde, 0xce, 0xe6, 0x58,
0xbb, 0x99, 0x63, 0xed, 0x76, 0x8e, 0xc1, 0xa7, 0x12, 0x83, 0x2f, 0x25, 0x06, 0xd7, 0x25, 0x06,
0xb3, 0x12, 0x83, 0x1f, 0x25, 0x06, 0x3f, 0x4b, 0xac, 0xdd, 0x96, 0x18, 0x5c, 0x2d, 0xb0, 0x36,
0x5b, 0x60, 0xed, 0x66, 0x81, 0xb5, 0xf7, 0x83, 0x20, 0x92, 0x61, 0x3e, 0x71, 0xce, 0x58, 0xe2,
0x06, 0x9c, 0x9c, 0x93, 0x94, 0xb8, 0x31, 0xbb, 0x88, 0xdc, 0xf6, 0xd3, 0x9a, 0xf4, 0x94, 0xda,
0xb3, 0x5f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x7e, 0xaa, 0x57, 0xd3, 0x6d, 0x03, 0x00, 0x00,
}
func (this *PushRequest) Equal(that interface{}) bool {
@ -465,6 +477,14 @@ func (this *EntryAdapter) Equal(that interface{}) bool {
return false
}
}
if len(this.Parsed) != len(that1.Parsed) {
return false
}
for i := range this.Parsed {
if !this.Parsed[i].Equal(&that1.Parsed[i]) {
return false
}
}
return true
}
func (this *PushRequest) GoString() string {
@ -519,7 +539,7 @@ func (this *EntryAdapter) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 8)
s = append(s, "&push.EntryAdapter{")
s = append(s, "Timestamp: "+fmt.Sprintf("%#v", this.Timestamp)+",\n")
s = append(s, "Line: "+fmt.Sprintf("%#v", this.Line)+",\n")
@ -530,6 +550,13 @@ func (this *EntryAdapter) GoString() string {
}
s = append(s, "StructuredMetadata: "+fmt.Sprintf("%#v", vs)+",\n")
}
if this.Parsed != nil {
vs := make([]*LabelPairAdapter, len(this.Parsed))
for i := range vs {
vs[i] = &this.Parsed[i]
}
s = append(s, "Parsed: "+fmt.Sprintf("%#v", vs)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
@ -788,6 +815,20 @@ func (m *EntryAdapter) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
if len(m.Parsed) > 0 {
for iNdEx := len(m.Parsed) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Parsed[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintPush(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x22
}
}
if len(m.StructuredMetadata) > 0 {
for iNdEx := len(m.StructuredMetadata) - 1; iNdEx >= 0; iNdEx-- {
{
@ -912,6 +953,12 @@ func (m *EntryAdapter) Size() (n int) {
n += 1 + l + sovPush(uint64(l))
}
}
if len(m.Parsed) > 0 {
for _, e := range m.Parsed {
l = e.Size()
n += 1 + l + sovPush(uint64(l))
}
}
return n
}
@ -977,10 +1024,16 @@ func (this *EntryAdapter) String() string {
repeatedStringForStructuredMetadata += strings.Replace(strings.Replace(f.String(), "LabelPairAdapter", "LabelPairAdapter", 1), `&`, ``, 1) + ","
}
repeatedStringForStructuredMetadata += "}"
repeatedStringForParsed := "[]LabelPairAdapter{"
for _, f := range this.Parsed {
repeatedStringForParsed += strings.Replace(strings.Replace(f.String(), "LabelPairAdapter", "LabelPairAdapter", 1), `&`, ``, 1) + ","
}
repeatedStringForParsed += "}"
s := strings.Join([]string{`&EntryAdapter{`,
`Timestamp:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.Timestamp), "Timestamp", "types.Timestamp", 1), `&`, ``, 1) + `,`,
`Line:` + fmt.Sprintf("%v", this.Line) + `,`,
`StructuredMetadata:` + repeatedStringForStructuredMetadata + `,`,
`Parsed:` + repeatedStringForParsed + `,`,
`}`,
}, "")
return s
@ -1516,6 +1569,40 @@ func (m *EntryAdapter) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
case 4:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Parsed", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPush
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthPush
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthPush
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Parsed = append(m.Parsed, LabelPairAdapter{})
if err := m.Parsed[len(m.Parsed)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipPush(dAtA[iNdEx:])

@ -46,4 +46,11 @@ message EntryAdapter {
(gogoproto.nullable) = false,
(gogoproto.jsontag) = "structuredMetadata,omitempty"
];
// This field shouldn't be used by clients to push data to Loki.
// It is only used by Loki to return parsed log lines in query responses.
// TODO: Remove this field from the write path Proto.
repeated LabelPairAdapter parsed = 4 [
(gogoproto.nullable) = false,
(gogoproto.jsontag) = "parsed,omitempty"
];
}

@ -25,12 +25,38 @@ type Entry struct {
Timestamp time.Time `protobuf:"bytes,1,opt,name=timestamp,proto3,stdtime" json:"ts"`
Line string `protobuf:"bytes,2,opt,name=line,proto3" json:"line"`
StructuredMetadata LabelsAdapter `protobuf:"bytes,3,opt,name=structuredMetadata,proto3" json:"structuredMetadata,omitempty"`
Parsed LabelsAdapter `protobuf:"bytes,4,opt,name=parsed,proto3" json:"parsed,omitempty"`
}
// MarshalJSON implements json.Marshaler.
// In Loki, this method should only be used by the
// legacy encoder when hitting the deprecated /api/prom/query endpoint.
// We will ignore the categorized labels and only return the stream labels.
func (m *Stream) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Labels string `json:"labels"`
Entries []Entry `json:"entries"`
}{
Labels: m.Labels,
Entries: m.Entries,
})
}
// MarshalJSON implements json.Marshaler.
// In Loki, this method should only be used by the
// legacy encoder when hitting the deprecated /api/prom/query endpoint.
// We will ignore the structured metadata.
func (m *Entry) MarshalJSON() ([]byte, error) {
type raw Entry
e := raw(*m)
e.StructuredMetadata = nil
return json.Marshal(e)
}
// LabelAdapter should be a copy of the Prometheus labels.Label type.
// We cannot import Prometheus in this package because it would create many dependencies
// in other projects importing this package. Instead, we copy the definition here, which should
// be kept in sync with the original, so it can be cast to the prometheus type.
type LabelAdapter struct {
Name, Value string
}
@ -172,6 +198,20 @@ func (m *Entry) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
if len(m.Parsed) > 0 {
for iNdEx := len(m.Parsed) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Parsed[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintPush(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x22
}
}
if len(m.StructuredMetadata) > 0 {
for iNdEx := len(m.StructuredMetadata) - 1; iNdEx >= 0; iNdEx-- {
{
@ -471,6 +511,40 @@ func (m *Entry) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
case 4:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Parsed", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPush
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthPush
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthPush
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Parsed = append(m.Parsed, LabelAdapter{})
if err := m.Parsed[len(m.Parsed)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipPush(dAtA[iNdEx:])
@ -661,6 +735,12 @@ func (m *Entry) Size() (n int) {
n += 1 + l + sovPush(uint64(l))
}
}
if len(m.Parsed) > 0 {
for _, e := range m.Parsed {
l = e.Size()
n += 1 + l + sovPush(uint64(l))
}
}
return n
}
@ -711,7 +791,10 @@ func (m *Stream) Equal(that interface{}) bool {
return false
}
}
return m.Hash == that1.Hash
if m.Hash != that1.Hash {
return false
}
return true
}
func (m *Entry) Equal(that interface{}) bool {
@ -739,11 +822,22 @@ func (m *Entry) Equal(that interface{}) bool {
if m.Line != that1.Line {
return false
}
if len(m.StructuredMetadata) != len(that1.StructuredMetadata) {
return false
}
for i := range m.StructuredMetadata {
if !m.StructuredMetadata[i].Equal(that1.StructuredMetadata[i]) {
return false
}
}
if len(m.Parsed) != len(that1.Parsed) {
return false
}
for i := range m.Parsed {
if !m.Parsed[i].Equal(that1.Parsed[i]) {
return false
}
}
return true
}

@ -14,20 +14,20 @@ var (
Labels: `{job="foobar", cluster="foo-central1", namespace="bar", container_name="buzz"}`,
Hash: 1234*10 ^ 9,
Entries: []Entry{
{now, line, nil, nil},
{now.Add(1 * time.Second), line, LabelsAdapter{{Name: "traceID", Value: "1234"}}, nil},
{now.Add(2 * time.Second), line, nil, nil},
{now.Add(3 * time.Second), line, LabelsAdapter{{Name: "user", Value: "abc"}}, LabelsAdapter{{Name: "msg", Value: "text"}}},
},
}
streamAdapter = StreamAdapter{
Labels: `{job="foobar", cluster="foo-central1", namespace="bar", container_name="buzz"}`,
Hash: 1234*10 ^ 9,
Entries: []EntryAdapter{
{now, line, nil, nil},
{now.Add(1 * time.Second), line, []LabelPairAdapter{{Name: "traceID", Value: "1234"}}, nil},
{now.Add(2 * time.Second), line, nil, nil},
{now.Add(3 * time.Second), line, []LabelPairAdapter{{Name: "user", Value: "abc"}}, []LabelPairAdapter{{Name: "msg", Value: "text"}}},
},
}
)

@ -289,7 +289,7 @@ func (q *QuerierAPI) IndexStatsHandler(ctx context.Context, req *loghttp.RangeQu
return resp, err
}
// TODO(trevorwhitney): add test for the handler split
// VolumeHandler queries the index label volumes related to the passed matchers and given time range.
// Returns either N values where N is the time range / step and a single value for a time range depending on the request.

@ -378,6 +378,14 @@ func (Codec) DecodeHTTPGrpcRequest(ctx context.Context, r *httpgrpc.HTTPRequest)
}
}
// If there are no encoding flags in the context, try the HTTP request.
if encFlags := httpreq.ExtractEncodingFlagsFromCtx(ctx); encFlags == nil {
encFlags = httpreq.ExtractEncodingFlagsFromProto(r)
if encFlags != nil {
ctx = httpreq.AddEncodingFlagsToContext(ctx, encFlags)
}
}
if err := httpReq.ParseForm(); err != nil {
return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
}
@ -500,7 +508,9 @@ func (Codec) EncodeHTTPGrpcResponse(ctx context.Context, req *httpgrpc.HTTPReque
version := loghttp.GetVersion(req.Url)
var buf bytes.Buffer
encodingFlags := httpreq.ExtractEncodingFlagsFromProto(req)
err := encodeResponseJSONTo(version, res, &buf, encodingFlags)
if err != nil {
return nil, err
}
@ -521,6 +531,11 @@ func (c Codec) EncodeRequest(ctx context.Context, r queryrangebase.Request) (*ht
header.Set(string(httpreq.QueryTagsHTTPHeader), queryTags)
}
encodingFlags := httpreq.ExtractHeader(ctx, httpreq.LokiEncodingFlagsHeader)
if encodingFlags != "" {
header.Set(httpreq.LokiEncodingFlagsHeader, encodingFlags)
}
actor := httpreq.ExtractHeader(ctx, httpreq.LokiActorPathHeader)
if actor != "" {
header.Set(httpreq.LokiActorPathHeader, actor)
@ -912,15 +927,16 @@ func (Codec) EncodeResponse(ctx context.Context, req *http.Request, res queryran
// Default to JSON.
version := loghttp.GetVersion(req.RequestURI)
encodingFlags := httpreq.ExtractEncodingFlags(req)
return encodeResponseJSON(ctx, version, res, encodingFlags)
}
func encodeResponseJSON(ctx context.Context, version loghttp.Version, res queryrangebase.Response, encodeFlags httpreq.EncodingFlags) (*http.Response, error) {
sp, _ := opentracing.StartSpanFromContext(ctx, "codec.EncodeResponse")
defer sp.Finish()
var buf bytes.Buffer
err := encodeResponseJSONTo(version, res, &buf, encodeFlags)
if err != nil {
return nil, err
}
@ -937,7 +953,7 @@ func encodeResponseJSON(ctx context.Context, version loghttp.Version, res queryr
return &resp, nil
}
func encodeResponseJSONTo(version loghttp.Version, res queryrangebase.Response, w io.Writer, encodeFlags httpreq.EncodingFlags) error {
switch response := res.(type) {
case *LokiPromResponse:
return response.encodeTo(w)
@ -959,7 +975,7 @@ func encodeResponseJSONTo(version loghttp.Version, res queryrangebase.Response,
return err
}
} else {
if err := marshal.WriteQueryResponseJSON(logqlmodel.Streams(streams), response.Statistics, w, encodeFlags); err != nil {
return err
}
}

@ -28,6 +28,7 @@ import (
"github.com/grafana/loki/pkg/logqlmodel/stats"
"github.com/grafana/loki/pkg/querier/queryrange/queryrangebase"
"github.com/grafana/loki/pkg/util"
"github.com/grafana/loki/pkg/util/httpreq"
)
func init() {
@ -271,6 +272,36 @@ func Test_codec_DecodeResponse(t *testing.T) {
Statistics: statsResult,
}, false,
},
{
"streams v1 with structured metadata", &http.Response{StatusCode: 200, Body: io.NopCloser(strings.NewReader(streamsStringWithStructuredMetdata))},
&LokiRequest{Direction: logproto.FORWARD, Limit: 100, Path: "/loki/api/v1/query_range"},
&LokiResponse{
Status: loghttp.QueryStatusSuccess,
Direction: logproto.FORWARD,
Limit: 100,
Version: uint32(loghttp.VersionV1),
Data: LokiData{
ResultType: loghttp.ResultTypeStream,
Result: logStreamsWithStructuredMetadata,
},
Statistics: statsResult,
}, false,
},
{
"streams v1 with categorized labels", &http.Response{StatusCode: 200, Body: io.NopCloser(strings.NewReader(streamsStringWithCategories))},
&LokiRequest{Direction: logproto.FORWARD, Limit: 100, Path: "/loki/api/v1/query_range"},
&LokiResponse{
Status: loghttp.QueryStatusSuccess,
Direction: logproto.FORWARD,
Limit: 100,
Version: uint32(loghttp.VersionV1),
Data: LokiData{
ResultType: loghttp.ResultTypeStream,
Result: logStreamsWithCategories,
},
Statistics: statsResult,
}, false,
},
{
"streams legacy", &http.Response{StatusCode: 200, Body: io.NopCloser(strings.NewReader(streamsString))},
&LokiRequest{Direction: logproto.FORWARD, Limit: 100, Path: "/api/prom/query_range"},
@ -768,13 +799,14 @@ func Test_codec_seriesVolume_DecodeRequest(t *testing.T) {
func Test_codec_EncodeResponse(t *testing.T) {
tests := []struct {
name string
path string
res queryrangebase.Response
body string
wantErr bool
queryParams map[string]string
}{
{"error", "/loki/api/v1/query_range", &badResponse{}, "", true, nil},
{
"prom", "/loki/api/v1/query_range",
&LokiPromResponse{
@ -786,7 +818,7 @@ func Test_codec_EncodeResponse(t *testing.T) {
},
},
Statistics: statsResult,
}, matrixString, false, nil},
{
"loki v1", "/loki/api/v1/query_range",
&LokiResponse{
@ -799,7 +831,25 @@ func Test_codec_EncodeResponse(t *testing.T) {
Result: logStreams,
},
Statistics: statsResult,
}, streamsString, false, nil,
},
{
"loki v1 with categories", "/loki/api/v1/query_range",
&LokiResponse{
Status: loghttp.QueryStatusSuccess,
Direction: logproto.FORWARD,
Limit: 100,
Version: uint32(loghttp.VersionV1),
Data: LokiData{
ResultType: loghttp.ResultTypeStream,
Result: logStreamsWithCategories,
},
Statistics: statsResult,
},
streamsStringWithCategories, false,
map[string]string{
httpreq.LokiEncodingFlagsHeader: string(httpreq.FlagCategorizeLabels),
},
},
{
"loki legacy", "/api/prom/query",
@ -813,7 +863,7 @@ func Test_codec_EncodeResponse(t *testing.T) {
Result: logStreams,
},
Statistics: statsResult,
}, streamsStringLegacy, false, nil,
},
{
"loki series", "/loki/api/v1/series",
@ -821,7 +871,7 @@ func Test_codec_EncodeResponse(t *testing.T) {
Status: "success",
Version: uint32(loghttp.VersionV1),
Data: seriesData,
}, seriesString, false, nil,
},
{
"loki labels", "/loki/api/v1/labels",
@ -829,7 +879,7 @@ func Test_codec_EncodeResponse(t *testing.T) {
Status: "success",
Version: uint32(loghttp.VersionV1),
Data: labelsData,
}, labelsString, false, nil,
},
{
"loki labels legacy", "/api/prom/label",
@ -837,7 +887,7 @@ func Test_codec_EncodeResponse(t *testing.T) {
Status: "success",
Version: uint32(loghttp.VersionLegacy),
Data: labelsData,
}, labelsLegacyString, false, nil,
},
{
"index stats", "/loki/api/v1/index/stats",
@ -848,7 +898,7 @@ func Test_codec_EncodeResponse(t *testing.T) {
Bytes: 3,
Entries: 4,
},
}, indexStatsString, false, nil,
},
{
"volume", "/loki/api/v1/index/volume",
@ -859,16 +909,21 @@ func Test_codec_EncodeResponse(t *testing.T) {
},
Limit: 100,
},
}, seriesVolumeString, false, nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
u := &url.URL{Path: tt.path}
h := http.Header{}
for k, v := range tt.queryParams {
h.Set(k, v)
}
req := &http.Request{
Method: "GET",
RequestURI: u.String(),
URL: u,
Header: h,
}
got, err := DefaultCodec.EncodeResponse(context.TODO(), req, tt.res)
if (err != nil) != tt.wantErr {
@ -1559,17 +1614,132 @@ var (
},
{
"stream": {
"test": "test",
"x": "a",
"y": "b"
},
"values":[
[ "123456789012346", "super line2" ]
]
},
{
"stream": {
"test": "test",
"x": "a",
"y": "b",
"z": "text"
},
"values":[
[ "123456789012346", "super line3 z=text" ]
]
}
]
}
}`
streamsStringWithStructuredMetdata = `{
"status": "success",
"data": {
` + statsResultString + `
"resultType": "streams",
"result": [
{
"stream": {
"test": "test"
},
"values":[
[ "123456789012345", "super line"]
]
},
{
"stream": {
"test": "test",
"x": "a",
"y": "b"
},
"values":[
[ "123456789012346", "super line2", {"x": "a", "y": "b"} ]
]
},
{
"stream": {
"test": "test",
"x": "a",
"y": "b",
"z": "text"
},
"values":[
[ "123456789012346", "super line3 z=text", {"x": "a", "y": "b"}]
]
}
]
}
}`
streamsStringWithCategories = `{
"status": "success",
"data": {
` + statsResultString + `
"resultType": "streams",
"encodingFlags": ["` + string(httpreq.FlagCategorizeLabels) + `"],
"result": [
{
"stream": {
"test": "test"
},
"values":[
[ "123456789012345", "super line"],
[ "123456789012346", "super line2", {
"structuredMetadata": {
"x": "a",
"y": "b"
}
}],
[ "123456789012347", "super line3 z=text", {
"structuredMetadata": {
"x": "a",
"y": "b"
},
"parsed": {
"z": "text"
}
}]
]
}
]
}
}`
streamsStringLegacy = `{
` + statsResultString + `"streams":[{"labels":"{test=\"test\"}","entries":[{"ts":"1970-01-02T10:17:36.789012345Z","line":"super line"}]},{"labels":"{test=\"test\", x=\"a\", y=\"b\"}","entries":[{"ts":"1970-01-02T10:17:36.789012346Z","line":"super line2"}]}, {"labels":"{test=\"test\", x=\"a\", y=\"b\", z=\"text\"}","entries":[{"ts":"1970-01-02T10:17:36.789012346Z","line":"super line3 z=text"}]}]}`
logStreamsWithStructuredMetadata = []logproto.Stream{
{
Labels: `{test="test"}`,
Entries: []logproto.Entry{
{
Line: "super line",
Timestamp: time.Unix(0, 123456789012345).UTC(),
},
},
},
{
Labels: `{test="test", x="a", y="b"}`,
Entries: []logproto.Entry{
{
Line: "super line2",
Timestamp: time.Unix(0, 123456789012346).UTC(),
StructuredMetadata: logproto.FromLabelsToLabelAdapters(labels.FromStrings("x", "a", "y", "b")),
},
},
},
{
Labels: `{test="test", x="a", y="b", z="text"}`,
Entries: []logproto.Entry{
{
Line: "super line3 z=text",
Timestamp: time.Unix(0, 123456789012346).UTC(),
StructuredMetadata: logproto.FromLabelsToLabelAdapters(labels.FromStrings("x", "a", "y", "b")),
},
},
},
}
logStreams = []logproto.Stream{
{
Labels: `{test="test"}`,
@ -1581,7 +1751,7 @@ var (
},
},
{
Labels: `{test="test", x="a", y="b"}`,
Entries: []logproto.Entry{
{
Line: "super line2",
@ -1589,6 +1759,37 @@ var (
},
},
},
{
Labels: `{test="test", x="a", y="b", z="text"}`,
Entries: []logproto.Entry{
{
Line: "super line3 z=text",
Timestamp: time.Unix(0, 123456789012346).UTC(),
},
},
},
}
logStreamsWithCategories = []logproto.Stream{
{
Labels: `{test="test"}`,
Entries: []logproto.Entry{
{
Line: "super line",
Timestamp: time.Unix(0, 123456789012345).UTC(),
},
{
Line: "super line2",
Timestamp: time.Unix(0, 123456789012346).UTC(),
StructuredMetadata: logproto.FromLabelsToLabelAdapters(labels.FromStrings("x", "a", "y", "b")),
},
{
Line: "super line3 z=text",
Timestamp: time.Unix(0, 123456789012347).UTC(),
StructuredMetadata: logproto.FromLabelsToLabelAdapters(labels.FromStrings("x", "a", "y", "b")),
Parsed: logproto.FromLabelsToLabelAdapters(labels.FromStrings("z", "text")),
},
},
},
}
seriesString = `{
"status": "success",

@ -7,6 +7,7 @@ import (
"github.com/grafana/loki/pkg/loghttp"
"github.com/grafana/loki/pkg/querier/queryrange/queryrangebase"
"github.com/grafana/loki/pkg/util/httpreq"
serverutil "github.com/grafana/loki/pkg/util/server"
)
@ -70,7 +71,8 @@ func (rt *serializeHTTPHandler) ServeHTTP(w http.ResponseWriter, r *http.Request
}
version := loghttp.GetVersion(r.RequestURI)
encodingFlags := httpreq.ExtractEncodingFlags(r)
if err := encodeResponseJSONTo(version, response, w, encodingFlags); err != nil {
serverutil.WriteError(err, w)
}
}

@ -204,7 +204,7 @@ func (fakeBlock) Entries() int { return 0 }
func (fakeBlock) Offset() int { return 0 }
func (f fakeBlock) MinTime() int64 { return f.mint }
func (f fakeBlock) MaxTime() int64 { return f.maxt }
func (fakeBlock) Iterator(context.Context, log.StreamPipeline) iter.EntryIterator {
return nil
}

@ -0,0 +1,113 @@
package httpreq
import (
"context"
"net/http"
"strings"
"github.com/grafana/dskit/httpgrpc"
)
type EncodingFlag string
type EncodingFlags map[EncodingFlag]struct{}
func NewEncodingFlags(flags ...EncodingFlag) EncodingFlags {
var ef EncodingFlags
ef.Set(flags...)
return ef
}
func (ef *EncodingFlags) Set(flags ...EncodingFlag) {
if *ef == nil {
*ef = make(EncodingFlags, len(flags))
}
for _, flag := range flags {
(*ef)[flag] = struct{}{}
}
}
func (ef *EncodingFlags) Has(flag EncodingFlag) bool {
_, ok := (*ef)[flag]
return ok
}
func (ef *EncodingFlags) String() string {
var sb strings.Builder
var i int
for flag := range *ef {
if i > 0 {
sb.WriteString(EncodeFlagsDelimiter)
}
sb.WriteString(string(flag))
i++
}
return sb.String()
}
const (
LokiEncodingFlagsHeader = "X-Loki-Response-Encoding-Flags"
FlagCategorizeLabels EncodingFlag = "categorize-labels"
EncodeFlagsDelimiter = ","
)
func AddEncodingFlags(req *http.Request, flags EncodingFlags) {
if len(flags) == 0 {
return
}
req.Header.Set(LokiEncodingFlagsHeader, flags.String())
}
func AddEncodingFlagsToContext(ctx context.Context, flags EncodingFlags) context.Context {
if len(flags) == 0 {
return ctx
}
return context.WithValue(ctx, headerContextKey(LokiEncodingFlagsHeader), flags.String())
}
func ExtractEncodingFlags(req *http.Request) EncodingFlags {
rawValue := req.Header.Get(LokiEncodingFlagsHeader)
if rawValue == "" {
return nil
}
return parseEncodingFlags(rawValue)
}
func ExtractEncodingFlagsFromProto(req *httpgrpc.HTTPRequest) EncodingFlags {
var rawValue string
for _, header := range req.GetHeaders() {
if header.GetKey() == LokiEncodingFlagsHeader {
rawValue = header.GetValues()[0]
if rawValue == "" {
return nil
}
return parseEncodingFlags(rawValue)
}
}
return nil
}
func ExtractEncodingFlagsFromCtx(ctx context.Context) EncodingFlags {
rawValue := ExtractHeader(ctx, LokiEncodingFlagsHeader)
if rawValue == "" {
return nil
}
return parseEncodingFlags(rawValue)
}
func parseEncodingFlags(rawFlags string) EncodingFlags {
split := strings.Split(rawFlags, EncodeFlagsDelimiter)
flags := make(EncodingFlags, len(split))
for _, rawFlag := range split {
flags.Set(EncodingFlag(rawFlag))
}
return flags
}
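Putting the pieces together, a client opts into categorized labels by setting `X-Loki-Response-Encoding-Flags: categorize-labels` on its query request. A minimal sketch of the round trip, re-declaring the types above so the example compiles without importing Loki:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// Minimal re-declaration of the EncodingFlags types above so this
// sketch is self-contained.
type EncodingFlag string
type EncodingFlags map[EncodingFlag]struct{}

const (
	LokiEncodingFlagsHeader              = "X-Loki-Response-Encoding-Flags"
	FlagCategorizeLabels    EncodingFlag = "categorize-labels"
)

// parseEncodingFlags splits the comma-delimited header value into a set.
func parseEncodingFlags(raw string) EncodingFlags {
	flags := make(EncodingFlags)
	for _, f := range strings.Split(raw, ",") {
		flags[EncodingFlag(f)] = struct{}{}
	}
	return flags
}

func main() {
	// A client opts in by setting the header on its query request.
	req, _ := http.NewRequest("GET", "http://localhost:3100/loki/api/v1/query_range", nil)
	req.Header.Set(LokiEncodingFlagsHeader, string(FlagCategorizeLabels))

	flags := parseEncodingFlags(req.Header.Get(LokiEncodingFlagsHeader))
	_, ok := flags[FlagCategorizeLabels]
	fmt.Println(ok) // true
}
```

On the server side, the extracted flags are threaded through the context (and the httpgrpc request for cross-component calls) so the JSON encoder knows to emit the `structuredMetadata`/`parsed` objects per entry.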

@ -12,8 +12,8 @@ func NewLabelSet(s string) (loghttp.LabelSet, error) {
if err != nil {
return nil, err
}
ret := make(map[string]string, len(labels))
for _, l := range labels {
ret[l.Name] = l.Value
}

@ -50,11 +50,7 @@ var queryTests = []struct {
},
{
"ts": "2019-09-13T18:32:23.380001319Z",
"line": "super line with labels"
}
]
}
@ -229,11 +225,7 @@ var tailTests = []struct {
},
{
"ts": "2019-09-13T18:32:23.380001319Z",
"line": "super line with labels"
}
]
}

@ -17,6 +17,7 @@ import (
"github.com/grafana/loki/pkg/logqlmodel"
"github.com/grafana/loki/pkg/logqlmodel/stats"
indexStats "github.com/grafana/loki/pkg/storage/stores/index/stats"
"github.com/grafana/loki/pkg/util/httpreq"
marshal_legacy "github.com/grafana/loki/pkg/util/marshal/legacy"
)
@ -24,8 +25,9 @@ func WriteResponseJSON(r *http.Request, v any, w http.ResponseWriter) error {
switch result := v.(type) {
case logqlmodel.Result:
version := loghttp.GetVersion(r.RequestURI)
encodeFlags := httpreq.ExtractEncodingFlags(r)
if version == loghttp.VersionV1 {
return WriteQueryResponseJSON(result.Data, result.Statistics, w, encodeFlags)
}
return marshal_legacy.WriteQueryResponseJSON(result, w)
@ -48,10 +50,10 @@ func WriteResponseJSON(r *http.Request, v any, w http.ResponseWriter) error {
// WriteQueryResponseJSON marshals the promql.Value to v1 loghttp JSON and then
// writes it to the provided io.Writer.
func WriteQueryResponseJSON(data parser.Value, statistics stats.Result, w io.Writer) error {
func WriteQueryResponseJSON(data parser.Value, statistics stats.Result, w io.Writer, encodeFlags httpreq.EncodingFlags) error {
s := jsoniter.ConfigFastest.BorrowStream(w)
defer jsoniter.ConfigFastest.ReturnStream(s)
err := EncodeResult(data, statistics, s)
err := EncodeResult(data, statistics, s, encodeFlags)
if err != nil {
return fmt.Errorf("could not write JSON response: %w", err)
}

@ -20,8 +20,188 @@ import (
"github.com/grafana/loki/pkg/logproto"
"github.com/grafana/loki/pkg/logqlmodel"
"github.com/grafana/loki/pkg/logqlmodel/stats"
"github.com/grafana/loki/pkg/util/httpreq"
)
const emptyStats = `{
"ingester" : {
"store": {
"chunksDownloadTime": 0,
"totalChunksRef": 0,
"totalChunksDownloaded": 0,
"chunkRefsFetchTime": 0,
"chunk" :{
"compressedBytes": 0,
"decompressedBytes": 0,
"decompressedLines": 0,
"decompressedStructuredMetadataBytes": 0,
"headChunkBytes": 0,
"headChunkLines": 0,
"headChunkStructuredMetadataBytes": 0,
"postFilterLines": 0,
"totalDuplicates": 0
}
},
"totalBatches": 0,
"totalChunksMatched": 0,
"totalLinesSent": 0,
"totalReached": 0
},
"querier": {
"store": {
"chunksDownloadTime": 0,
"totalChunksRef": 0,
"totalChunksDownloaded": 0,
"chunkRefsFetchTime": 0,
"chunk" :{
"compressedBytes": 0,
"decompressedBytes": 0,
"decompressedLines": 0,
"decompressedStructuredMetadataBytes": 0,
"headChunkBytes": 0,
"headChunkLines": 0,
"headChunkStructuredMetadataBytes": 0,
"postFilterLines": 0,
"totalDuplicates": 0
}
}
},
"cache": {
"chunk": {
"entriesFound": 0,
"entriesRequested": 0,
"entriesStored": 0,
"bytesReceived": 0,
"bytesSent": 0,
"requests": 0,
"downloadTime": 0
},
"index": {
"entriesFound": 0,
"entriesRequested": 0,
"entriesStored": 0,
"bytesReceived": 0,
"bytesSent": 0,
"requests": 0,
"downloadTime": 0
},
"statsResult": {
"entriesFound": 0,
"entriesRequested": 0,
"entriesStored": 0,
"bytesReceived": 0,
"bytesSent": 0,
"requests": 0,
"downloadTime": 0
},
"volumeResult": {
"entriesFound": 0,
"entriesRequested": 0,
"entriesStored": 0,
"bytesReceived": 0,
"bytesSent": 0,
"requests": 0,
"downloadTime": 0
},
"result": {
"entriesFound": 0,
"entriesRequested": 0,
"entriesStored": 0,
"bytesReceived": 0,
"bytesSent": 0,
"requests": 0,
"downloadTime": 0
}
},
"summary": {
"bytesProcessedPerSecond": 0,
"execTime": 0,
"linesProcessedPerSecond": 0,
"queueTime": 0,
"shards": 0,
"splits": 0,
"subqueries": 0,
"totalBytesProcessed": 0,
"totalEntriesReturned": 0,
"totalLinesProcessed": 0,
"totalStructuredMetadataBytesProcessed": 0,
"totalPostFilterLines": 0
}
}`
var queryTestWithEncodingFlags = []struct {
actual parser.Value
encodingFlags httpreq.EncodingFlags
expected string
}{
{
actual: logqlmodel.Streams{
logproto.Stream{
Entries: []logproto.Entry{
{
Timestamp: time.Unix(0, 123456789012345),
Line: "super line",
},
{
Timestamp: time.Unix(0, 123456789012346),
Line: "super line with labels",
StructuredMetadata: []logproto.LabelAdapter{
{Name: "foo", Value: "a"},
{Name: "bar", Value: "b"},
},
},
{
Timestamp: time.Unix(0, 123456789012347),
Line: "super line with labels msg=text",
StructuredMetadata: []logproto.LabelAdapter{
{Name: "foo", Value: "a"},
{Name: "bar", Value: "b"},
},
Parsed: []logproto.LabelAdapter{
{Name: "msg", Value: "text"},
},
},
},
Labels: `{test="test"}`,
},
},
encodingFlags: httpreq.NewEncodingFlags(httpreq.FlagCategorizeLabels),
expected: fmt.Sprintf(`{
"status": "success",
"data": {
"resultType": "streams",
"encodingFlags": ["%s"],
"result": [
{
"stream": {
"test": "test"
},
"values":[
[ "123456789012345", "super line"],
[ "123456789012346", "super line with labels", {
"structuredMetadata": {
"foo": "a",
"bar": "b"
}
}],
[ "123456789012347", "super line with labels msg=text", {
"structuredMetadata": {
"foo": "a",
"bar": "b"
},
"parsed": {
"msg": "text"
}
}]
]
}
],
"stats" : %s
}
}`, httpreq.FlagCategorizeLabels, emptyStats),
},
}
// covers responses from /loki/api/v1/query_range and /loki/api/v1/query
var queryTests = []struct {
actual parser.Value
@ -47,7 +227,7 @@ var queryTests = []struct {
Labels: `{test="test"}`,
},
},
fmt.Sprintf(`{
"status": "success",
"data": {
"resultType": "streams",
@ -58,117 +238,13 @@ var queryTests = []struct {
},
"values":[
[ "123456789012345", "super line"],
[ "123456789012346", "super line with labels" ]
]
}
],
"stats" : %s
}
}`, emptyStats),
},
// vector test
{
@ -202,7 +278,7 @@ var queryTests = []struct {
},
},
},
fmt.Sprintf(`{
"data": {
"resultType": "vector",
"result": [
@ -227,114 +303,10 @@ var queryTests = []struct {
]
}
],
"stats" : %s
},
"status": "success"
}`, emptyStats),
},
// matrix test
{
@ -380,7 +352,7 @@ var queryTests = []struct {
},
},
},
fmt.Sprintf(`{
"data": {
"resultType": "matrix",
"result": [
@ -413,114 +385,10 @@ var queryTests = []struct {
]
}
],
"stats" : %s
},
"status": "success"
}`,
}`, emptyStats),
},
}
@ -542,6 +410,7 @@ var labelTests = []struct {
}
// covers responses from /loki/api/v1/tail
// TODO(salvacorts): Support encoding flags. And fix serialized structured metadata labels which shouldn't be there unless the categorize flag is set.
var tailTests = []struct {
actual legacy.TailResponse
expected string
@ -601,7 +470,14 @@ var tailTests = []struct {
func Test_WriteQueryResponseJSON(t *testing.T) {
for i, queryTest := range queryTests {
var b bytes.Buffer
err := WriteQueryResponseJSON(queryTest.actual, stats.Result{}, &b)
err := WriteQueryResponseJSON(queryTest.actual, stats.Result{}, &b, nil)
require.NoError(t, err)
require.JSONEqf(t, queryTest.expected, b.String(), "Query Test %d failed", i)
}
for i, queryTest := range queryTestWithEncodingFlags {
var b bytes.Buffer
err := WriteQueryResponseJSON(queryTest.actual, stats.Result{}, &b, queryTest.encodingFlags)
require.NoError(t, err)
require.JSONEqf(t, queryTest.expected, b.String(), "Query Test %d failed", i)
@ -633,7 +509,7 @@ func Test_WriteQueryResponseJSONWithError(t *testing.T) {
},
}
var b bytes.Buffer
err := WriteQueryResponseJSON(broken.Data, stats.Result{}, &b)
err := WriteQueryResponseJSON(broken.Data, stats.Result{}, &b, nil)
require.Error(t, err)
}
@ -756,6 +632,152 @@ func Test_WriteSeriesResponseJSON(t *testing.T) {
}
}
func Test_WriteQueryResponseJSON_EncodeFlags(t *testing.T) {
inputStream := logqlmodel.Streams{
logproto.Stream{
Labels: `{test="test"}`,
Entries: []logproto.Entry{
{
Timestamp: time.Unix(0, 123456789012346),
Line: "super line",
},
},
},
logproto.Stream{
Labels: `{test="test", foo="a", bar="b"}`,
Entries: []logproto.Entry{
{
Timestamp: time.Unix(0, 123456789012346),
Line: "super line with labels",
StructuredMetadata: logproto.FromLabelsToLabelAdapters(labels.FromStrings("foo", "a", "bar", "b")),
},
},
},
logproto.Stream{
Labels: `{test="test", foo="a", bar="b", msg="baz"}`,
Entries: []logproto.Entry{
{
Timestamp: time.Unix(0, 123456789012346),
Line: "super line with labels msg=baz",
StructuredMetadata: logproto.FromLabelsToLabelAdapters(labels.FromStrings("foo", "a", "bar", "b")),
Parsed: logproto.FromLabelsToLabelAdapters(labels.FromStrings("msg", "baz")),
},
},
},
}
for _, tc := range []struct {
name string
encodeFlags httpreq.EncodingFlags
expected string
}{
{
name: "uncategorized labels",
expected: fmt.Sprintf(`{
"status": "success",
"data": {
"resultType": "streams",
"result": [
{
"stream": {
"test": "test"
},
"values":[
[ "123456789012346", "super line"]
]
},
{
"stream": {
"test": "test",
"foo": "a",
"bar": "b"
},
"values":[
[ "123456789012346", "super line with labels"]
]
},
{
"stream": {
"test": "test",
"foo": "a",
"bar": "b",
"msg": "baz"
},
"values":[
[ "123456789012346", "super line with labels msg=baz"]
]
}
],
"stats" : %s
}
}`, emptyStats),
},
{
name: "categorized labels",
encodeFlags: httpreq.NewEncodingFlags(httpreq.FlagCategorizeLabels),
expected: fmt.Sprintf(`{
"status": "success",
"data": {
"resultType": "streams",
"encodingFlags": ["%s"],
"result": [
{
"stream": {
"test": "test"
},
"values":[
[ "123456789012346", "super line"]
]
},
{
"stream": {
"test": "test",
"foo": "a",
"bar": "b"
},
"values":[
[ "123456789012346", "super line with labels", {
"structuredMetadata": {
"foo": "a",
"bar": "b"
}
}]
]
},
{
"stream": {
"test": "test",
"foo": "a",
"bar": "b",
"msg": "baz"
},
"values":[
[ "123456789012346", "super line with labels msg=baz", {
"structuredMetadata": {
"foo": "a",
"bar": "b"
},
"parsed": {
"msg": "baz"
}
}]
]
}
],
"stats" : %s
}
}`, httpreq.FlagCategorizeLabels, emptyStats),
},
} {
t.Run(tc.name, func(t *testing.T) {
var b bytes.Buffer
err := WriteQueryResponseJSON(inputStream, stats.Result{}, &b, tc.encodeFlags)
require.NoError(t, err)
require.JSONEq(t, tc.expected, b.String())
})
}
}
// wrappedValue and its Generate method is used by quick to generate a random
// parser.Value.
type wrappedValue struct {
@ -857,7 +879,7 @@ func Test_EncodeResult_And_ResultValue_Parity(t *testing.T) {
f := func(w wrappedValue) bool {
var buf bytes.Buffer
js := json.NewStream(json.ConfigFastest, &buf, 0)
err := encodeResult(w.Value, js)
err := encodeResult(w.Value, js, httpreq.NewEncodingFlags(httpreq.FlagCategorizeLabels))
require.NoError(t, err)
js.Flush()
actual := buf.String()
@ -883,7 +905,7 @@ func Benchmark_Encode(b *testing.B) {
for n := 0; n < b.N; n++ {
for _, queryTest := range queryTests {
require.NoError(b, WriteQueryResponseJSON(queryTest.actual, stats.Result{}, buf))
require.NoError(b, WriteQueryResponseJSON(queryTest.actual, stats.Result{}, buf, nil))
buf.Reset()
}
}

@ -16,6 +16,7 @@ import (
"github.com/grafana/loki/pkg/logproto"
"github.com/grafana/loki/pkg/logqlmodel"
"github.com/grafana/loki/pkg/logqlmodel/stats"
"github.com/grafana/loki/pkg/util/httpreq"
)
// NewResultValue constructs a ResultValue from a promql.Value
@ -174,14 +175,14 @@ func NewMetric(l labels.Labels) model.Metric {
return ret
}
func EncodeResult(data parser.Value, statistics stats.Result, s *jsoniter.Stream) error {
func EncodeResult(data parser.Value, statistics stats.Result, s *jsoniter.Stream, encodeFlags httpreq.EncodingFlags) error {
s.WriteObjectStart()
s.WriteObjectField("status")
s.WriteString("success")
s.WriteMore()
s.WriteObjectField("data")
err := encodeData(data, statistics, s)
err := encodeData(data, statistics, s, encodeFlags)
if err != nil {
return err
}
@ -190,15 +191,39 @@ func EncodeResult(data parser.Value, statistics stats.Result, s *jsoniter.Stream
return nil
}
func encodeData(data parser.Value, statistics stats.Result, s *jsoniter.Stream) error {
func encodeEncodingFlags(s *jsoniter.Stream, flags httpreq.EncodingFlags) error {
s.WriteArrayStart()
defer s.WriteArrayEnd()
var i int
for flag := range flags {
if i > 0 {
s.WriteMore()
}
s.WriteString(string(flag))
i++
}
return nil
}
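From the call sites in this diff (`encodeFlags.Has(...)`, `len(encodeFlags) > 0`, and the `range` in `encodeEncodingFlags`), `httpreq.EncodingFlags` behaves like a set of flag strings. A minimal self-contained sketch of such a type — the names and the flag's string value are assumptions, not the package's actual definitions:

```go
package main

import "fmt"

// EncodingFlag / EncodingFlags mirror the shape the diff suggests for
// httpreq.EncodingFlags: a set keyed by flag name. Illustrative only.
type EncodingFlag string

const FlagCategorizeLabels EncodingFlag = "categorize-labels" // assumed value

type EncodingFlags map[EncodingFlag]struct{}

// NewEncodingFlags builds the set from zero or more flags.
func NewEncodingFlags(flags ...EncodingFlag) EncodingFlags {
	s := make(EncodingFlags, len(flags))
	for _, f := range flags {
		s[f] = struct{}{}
	}
	return s
}

// Has reports whether the flag is present in the set.
func (f EncodingFlags) Has(flag EncodingFlag) bool {
	_, ok := f[flag]
	return ok
}

func main() {
	flags := NewEncodingFlags(FlagCategorizeLabels)
	fmt.Println(flags.Has(FlagCategorizeLabels)) // true
	fmt.Println(flags.Has("other"))              // false
}
```

Because the set is a map, iterating it (as `encodeEncodingFlags` does) yields flags in no particular order — harmless while only one flag exists.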
func encodeData(data parser.Value, statistics stats.Result, s *jsoniter.Stream, encodeFlags httpreq.EncodingFlags) error {
s.WriteObjectStart()
s.WriteObjectField("resultType")
s.WriteString(string(data.Type()))
if len(encodeFlags) > 0 {
s.WriteMore()
s.WriteObjectField("encodingFlags")
if err := encodeEncodingFlags(s, encodeFlags); err != nil {
return err
}
}
s.WriteMore()
s.WriteObjectField("result")
err := encodeResult(data, s)
err := encodeResult(data, s, encodeFlags)
if err != nil {
return err
}
@ -212,7 +237,7 @@ func encodeData(data parser.Value, statistics stats.Result, s *jsoniter.Stream)
return nil
}
func encodeResult(v parser.Value, s *jsoniter.Stream) error {
func encodeResult(v parser.Value, s *jsoniter.Stream, encodeFlags httpreq.EncodingFlags) error {
switch v.Type() {
case loghttp.ResultTypeStream:
result, ok := v.(logqlmodel.Streams)
@ -221,7 +246,7 @@ func encodeResult(v parser.Value, s *jsoniter.Stream) error {
return fmt.Errorf("unexpected type %T for streams", v)
}
return encodeStreams(result, s)
return encodeStreams(result, s, encodeFlags)
case loghttp.ResultTypeScalar:
scalar, ok := v.(promql.Scalar)
@ -256,7 +281,7 @@ func encodeResult(v parser.Value, s *jsoniter.Stream) error {
return nil
}
func encodeStreams(streams logqlmodel.Streams, s *jsoniter.Stream) error {
func encodeStreams(streams logqlmodel.Streams, s *jsoniter.Stream, encodeFlags httpreq.EncodingFlags) error {
s.WriteArrayStart()
defer s.WriteArrayEnd()
@ -265,7 +290,7 @@ func encodeStreams(streams logqlmodel.Streams, s *jsoniter.Stream) error {
s.WriteMore()
}
err := encodeStream(stream, s)
err := encodeStream(stream, s, encodeFlags)
if err != nil {
return err
}
@ -274,25 +299,35 @@ func encodeStreams(streams logqlmodel.Streams, s *jsoniter.Stream) error {
return nil
}
func encodeStream(stream logproto.Stream, s *jsoniter.Stream) error {
func encodeLabels(labels []logproto.LabelAdapter, s *jsoniter.Stream) {
for i, label := range labels {
if i > 0 {
s.WriteMore()
}
s.WriteObjectField(label.Name)
s.WriteString(label.Value)
}
}
// encodeStream encodes a logproto.Stream to JSON.
// If FlagCategorizeLabels is set, each entry's structured metadata and parsed
// labels are written in a trailing object, grouped by category.
// Otherwise, all labels are written one after the other in the stream object.
func encodeStream(stream logproto.Stream, s *jsoniter.Stream, encodeFlags httpreq.EncodingFlags) error {
categorizeLabels := encodeFlags.Has(httpreq.FlagCategorizeLabels)
s.WriteObjectStart()
defer s.WriteObjectEnd()
s.WriteObjectField("stream")
s.WriteObjectStart()
labels, err := parser.ParseMetric(stream.Labels)
lbls, err := parser.ParseMetric(stream.Labels)
if err != nil {
return err
}
encodeLabels(logproto.FromLabelsToLabelAdapters(lbls), s)
for i, l := range labels {
if i > 0 {
s.WriteMore()
}
s.WriteObjectField(l.Name)
s.WriteString(l.Value)
}
s.WriteObjectEnd()
s.Flush()
@ -311,16 +346,30 @@ func encodeStream(stream logproto.Stream, s *jsoniter.Stream) error {
s.WriteRaw(`"`)
s.WriteMore()
s.WriteStringWithHTMLEscaped(e.Line)
if len(e.StructuredMetadata) > 0 {
if categorizeLabels && (len(e.StructuredMetadata) > 0 || len(e.Parsed) > 0) {
s.WriteMore()
s.WriteObjectStart()
for i, lbl := range e.StructuredMetadata {
if i > 0 {
var writeMore bool
if len(e.StructuredMetadata) > 0 {
s.WriteObjectField("structuredMetadata")
s.WriteObjectStart()
encodeLabels(e.StructuredMetadata, s)
s.WriteObjectEnd()
writeMore = true
}
if len(e.Parsed) > 0 {
if writeMore {
s.WriteMore()
}
s.WriteObjectField(lbl.Name)
s.WriteString(lbl.Value)
s.WriteObjectField("parsed")
s.WriteObjectStart()
encodeLabels(e.Parsed, s)
s.WriteObjectEnd()
}
s.WriteObjectEnd()
}
s.WriteArrayEnd()
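The conditional-comma bookkeeping in the categorized branch above (`writeMore`) is the part that's easy to get wrong when a member is optional. A self-contained sketch of the same pattern, using plain strings in place of the jsoniter stream (helper names are illustrative, not from the codebase):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// encodeEntryLabels mimics the categorized-labels object written after the
// log line: "structuredMetadata" and "parsed" are each optional, and a comma
// is emitted only if an earlier member was already written.
func encodeEntryLabels(structuredMetadata, parsed map[string]string) string {
	var b strings.Builder
	b.WriteString("{")
	var writeMore bool
	if len(structuredMetadata) > 0 {
		b.WriteString(`"structuredMetadata":` + toObj(structuredMetadata))
		writeMore = true
	}
	if len(parsed) > 0 {
		if writeMore {
			b.WriteString(",")
		}
		b.WriteString(`"parsed":` + toObj(parsed))
	}
	b.WriteString("}")
	return b.String()
}

// toObj renders a map as a JSON object with sorted keys for determinism.
func toObj(m map[string]string) string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, len(keys))
	for i, k := range keys {
		parts[i] = fmt.Sprintf("%q:%q", k, m[k])
	}
	return "{" + strings.Join(parts, ",") + "}"
}

func main() {
	fmt.Println(encodeEntryLabels(
		map[string]string{"foo": "a"},
		map[string]string{"msg": "baz"},
	)) // {"structuredMetadata":{"foo":"a"},"parsed":{"msg":"baz"}}
}
```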

@ -219,6 +219,10 @@ type EntryAdapter struct {
Timestamp time.Time `protobuf:"bytes,1,opt,name=timestamp,proto3,stdtime" json:"ts"`
Line string `protobuf:"bytes,2,opt,name=line,proto3" json:"line"`
StructuredMetadata []LabelPairAdapter `protobuf:"bytes,3,rep,name=structuredMetadata,proto3" json:"structuredMetadata,omitempty"`
// This field shouldn't be used by clients to push data to Loki.
// It is only used by Loki to return parsed log lines in query responses.
// TODO: Remove this field from the write path Proto.
Parsed []LabelPairAdapter `protobuf:"bytes,4,rep,name=parsed,proto3" json:"parsed,omitempty"`
}
func (m *EntryAdapter) Reset() { *m = EntryAdapter{} }
@ -274,6 +278,13 @@ func (m *EntryAdapter) GetStructuredMetadata() []LabelPairAdapter {
return nil
}
func (m *EntryAdapter) GetParsed() []LabelPairAdapter {
if m != nil {
return m.Parsed
}
return nil
}
func init() {
proto.RegisterType((*PushRequest)(nil), "logproto.PushRequest")
proto.RegisterType((*PushResponse)(nil), "logproto.PushResponse")
@ -285,39 +296,40 @@ func init() {
func init() { proto.RegisterFile("pkg/push/push.proto", fileDescriptor_35ec442956852c9e) }
var fileDescriptor_35ec442956852c9e = []byte{
// 503 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x6c, 0x53, 0x31, 0x6f, 0xd3, 0x40,
0x14, 0xf6, 0x25, 0x69, 0xda, 0x5e, 0x4a, 0x41, 0x47, 0x5b, 0x8c, 0x55, 0x9d, 0x23, 0x8b, 0x21,
0x03, 0xd8, 0x52, 0x18, 0x58, 0x58, 0x62, 0x09, 0xa9, 0x03, 0x48, 0x95, 0x41, 0x20, 0xb1, 0x5d,
0x9a, 0xab, 0x6d, 0xd5, 0xf6, 0x99, 0xbb, 0x33, 0x52, 0x37, 0x7e, 0x42, 0xf9, 0x17, 0xfc, 0x94,
0x8e, 0x19, 0x2b, 0x06, 0x43, 0x9c, 0xa5, 0xca, 0xd4, 0x9f, 0x80, 0x7c, 0xf6, 0x91, 0x52, 0xba,
0x9c, 0xbf, 0xf7, 0xdd, 0x7b, 0xef, 0xfb, 0xfc, 0x9e, 0x0d, 0x1f, 0xe7, 0x67, 0xa1, 0x97, 0x17,
0x22, 0x52, 0x87, 0x9b, 0x73, 0x26, 0x19, 0xda, 0x4a, 0x58, 0xa8, 0x90, 0xb5, 0x17, 0xb2, 0x90,
0x29, 0xe8, 0xd5, 0xa8, 0xb9, 0xb7, 0xec, 0x90, 0xb1, 0x30, 0xa1, 0x9e, 0x8a, 0xa6, 0xc5, 0xa9,
0x27, 0xe3, 0x94, 0x0a, 0x49, 0xd2, 0xbc, 0x49, 0x70, 0x3e, 0xc1, 0xc1, 0x71, 0x21, 0xa2, 0x80,
0x7e, 0x29, 0xa8, 0x90, 0xe8, 0x08, 0x6e, 0x0a, 0xc9, 0x29, 0x49, 0x85, 0x09, 0x86, 0xdd, 0xd1,
0x60, 0xfc, 0xc4, 0xd5, 0x0a, 0xee, 0x7b, 0x75, 0x31, 0x99, 0x91, 0x5c, 0x52, 0xee, 0xef, 0xff,
0x2c, 0xed, 0x7e, 0x43, 0xad, 0x4a, 0x5b, 0x57, 0x05, 0x1a, 0x38, 0xbb, 0x70, 0xa7, 0x69, 0x2c,
0x72, 0x96, 0x09, 0xea, 0x7c, 0x07, 0xf0, 0xc1, 0x3f, 0x1d, 0x90, 0x03, 0xfb, 0x09, 0x99, 0xd2,
0xa4, 0x96, 0x02, 0xa3, 0x6d, 0x1f, 0xae, 0x4a, 0xbb, 0x65, 0x82, 0xf6, 0x89, 0x26, 0x70, 0x93,
0x66, 0x92, 0xc7, 0x54, 0x98, 0x1d, 0xe5, 0xe7, 0x60, 0xed, 0xe7, 0x4d, 0x26, 0xf9, 0xb9, 0xb6,
0xf3, 0xf0, 0xb2, 0xb4, 0x8d, 0xda, 0x48, 0x9b, 0x1e, 0x68, 0x80, 0x9e, 0xc2, 0x5e, 0x44, 0x44,
0x64, 0x76, 0x87, 0x60, 0xd4, 0xf3, 0x37, 0x56, 0xa5, 0x0d, 0x5e, 0x04, 0x8a, 0x72, 0x5e, 0xc3,
0x47, 0x6f, 0x6b, 0x9d, 0x63, 0x12, 0x73, 0xed, 0x0a, 0xc1, 0x5e, 0x46, 0x52, 0xda, 0x78, 0x0a,
0x14, 0x46, 0x7b, 0x70, 0xe3, 0x2b, 0x49, 0x0a, 0x6a, 0x76, 0x14, 0xd9, 0x04, 0xce, 0x35, 0x80,
0x3b, 0xb7, 0x3d, 0xa0, 0x23, 0xb8, 0xfd, 0x77, 0xbc, 0xaa, 0x7e, 0x30, 0xb6, 0xdc, 0x66, 0x01,
0xae, 0x5e, 0x80, 0xfb, 0x41, 0x67, 0xf8, 0xbb, 0xad, 0xe5, 0x8e, 0x14, 0x17, 0xbf, 0x6c, 0x10,
0xac, 0x8b, 0xd1, 0x21, 0xec, 0x25, 0x71, 0xd6, 0xea, 0xf9, 0x5b, 0xab, 0xd2, 0x56, 0x71, 0xa0,
0x4e, 0x94, 0x43, 0x24, 0x24, 0x2f, 0x4e, 0x64, 0xc1, 0xe9, 0xec, 0x1d, 0x95, 0x64, 0x46, 0x24,
0x31, 0xbb, 0x6a, 0x3e, 0xd6, 0x7a, 0x3e, 0x77, 0x5f, 0xcd, 0x7f, 0xd6, 0x0a, 0x1e, 0xfe, 0x5f,
0xfd, 0x9c, 0xa5, 0xb1, 0xa4, 0x69, 0x2e, 0xcf, 0x83, 0x7b, 0x7a, 0x8f, 0x27, 0xb0, 0x5f, 0x2f,
0x93, 0x72, 0xf4, 0x0a, 0xf6, 0x6a, 0x84, 0xf6, 0xd7, 0x3a, 0xb7, 0xbe, 0x1f, 0xeb, 0xe0, 0x2e,
0xdd, 0x6e, 0xdf, 0xf0, 0x3f, 0xce, 0x17, 0xd8, 0xb8, 0x5a, 0x60, 0xe3, 0x66, 0x81, 0xc1, 0xb7,
0x0a, 0x83, 0x1f, 0x15, 0x06, 0x97, 0x15, 0x06, 0xf3, 0x0a, 0x83, 0xdf, 0x15, 0x06, 0xd7, 0x15,
0x36, 0x6e, 0x2a, 0x0c, 0x2e, 0x96, 0xd8, 0x98, 0x2f, 0xb1, 0x71, 0xb5, 0xc4, 0xc6, 0xe7, 0x61,
0x18, 0xcb, 0xa8, 0x98, 0xba, 0x27, 0x2c, 0xf5, 0x42, 0x4e, 0x4e, 0x49, 0x46, 0xbc, 0x84, 0x9d,
0xc5, 0x9e, 0xfe, 0x19, 0xa6, 0x7d, 0xa5, 0xf6, 0xf2, 0x4f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x3a,
0x46, 0x64, 0x71, 0x1f, 0x03, 0x00, 0x00,
// 527 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x53, 0xc1, 0x6e, 0xd3, 0x40,
0x10, 0xf5, 0x26, 0x6e, 0xda, 0x6e, 0x4a, 0xa9, 0x96, 0xb6, 0x18, 0xab, 0x5a, 0x47, 0x16, 0x87,
0x1c, 0xc0, 0x96, 0xc2, 0x81, 0x0b, 0x97, 0x58, 0x42, 0xea, 0xa1, 0x48, 0x95, 0x41, 0x20, 0x71,
0xdb, 0x34, 0x5b, 0xdb, 0xaa, 0xed, 0x35, 0xbb, 0x6b, 0xa4, 0xde, 0xf8, 0x84, 0xf2, 0x17, 0x7c,
0x01, 0xdf, 0xd0, 0x63, 0x8e, 0x15, 0x07, 0x43, 0x9c, 0x0b, 0xca, 0xa9, 0x9f, 0x80, 0xbc, 0xb6,
0x49, 0x28, 0x48, 0x5c, 0x36, 0x6f, 0x66, 0x67, 0xde, 0x7b, 0x99, 0x1d, 0xc3, 0x07, 0xd9, 0x45,
0xe0, 0x66, 0xb9, 0x08, 0xd5, 0xe1, 0x64, 0x9c, 0x49, 0x86, 0xb6, 0x62, 0x16, 0x28, 0x64, 0xee,
0x07, 0x2c, 0x60, 0x0a, 0xba, 0x15, 0xaa, 0xef, 0x4d, 0x2b, 0x60, 0x2c, 0x88, 0xa9, 0xab, 0xa2,
0x49, 0x7e, 0xee, 0xca, 0x28, 0xa1, 0x42, 0x92, 0x24, 0xab, 0x0b, 0xec, 0x77, 0xb0, 0x7f, 0x9a,
0x8b, 0xd0, 0xa7, 0x1f, 0x72, 0x2a, 0x24, 0x3a, 0x86, 0x9b, 0x42, 0x72, 0x4a, 0x12, 0x61, 0x80,
0x41, 0x77, 0xd8, 0x1f, 0x3d, 0x74, 0x5a, 0x05, 0xe7, 0xb5, 0xba, 0x18, 0x4f, 0x49, 0x26, 0x29,
0xf7, 0x0e, 0xbe, 0x15, 0x56, 0xaf, 0x4e, 0x2d, 0x0b, 0xab, 0xed, 0xf2, 0x5b, 0x60, 0xef, 0xc2,
0x9d, 0x9a, 0x58, 0x64, 0x2c, 0x15, 0xd4, 0xfe, 0x0c, 0xe0, 0xbd, 0x3f, 0x18, 0x90, 0x0d, 0x7b,
0x31, 0x99, 0xd0, 0xb8, 0x92, 0x02, 0xc3, 0x6d, 0x0f, 0x2e, 0x0b, 0xab, 0xc9, 0xf8, 0xcd, 0x2f,
0x1a, 0xc3, 0x4d, 0x9a, 0x4a, 0x1e, 0x51, 0x61, 0x74, 0x94, 0x9f, 0xc3, 0x95, 0x9f, 0x97, 0xa9,
0xe4, 0x97, 0xad, 0x9d, 0xfb, 0xd7, 0x85, 0xa5, 0x55, 0x46, 0x9a, 0x72, 0xbf, 0x05, 0xe8, 0x11,
0xd4, 0x43, 0x22, 0x42, 0xa3, 0x3b, 0x00, 0x43, 0xdd, 0xdb, 0x58, 0x16, 0x16, 0x78, 0xea, 0xab,
0x94, 0xfd, 0x02, 0xee, 0x9d, 0x54, 0x3a, 0xa7, 0x24, 0xe2, 0xad, 0x2b, 0x04, 0xf5, 0x94, 0x24,
0xb4, 0xf6, 0xe4, 0x2b, 0x8c, 0xf6, 0xe1, 0xc6, 0x47, 0x12, 0xe7, 0xd4, 0xe8, 0xa8, 0x64, 0x1d,
0xd8, 0x5f, 0x3b, 0x70, 0x67, 0xdd, 0x03, 0x3a, 0x86, 0xdb, 0xbf, 0xc7, 0xab, 0xfa, 0xfb, 0x23,
0xd3, 0xa9, 0x1f, 0xc0, 0x69, 0x1f, 0xc0, 0x79, 0xd3, 0x56, 0x78, 0xbb, 0x8d, 0xe5, 0x8e, 0x14,
0x57, 0xdf, 0x2d, 0xe0, 0xaf, 0x9a, 0xd1, 0x11, 0xd4, 0xe3, 0x28, 0x6d, 0xf4, 0xbc, 0xad, 0x65,
0x61, 0xa9, 0xd8, 0x57, 0x27, 0xca, 0x20, 0x12, 0x92, 0xe7, 0x67, 0x32, 0xe7, 0x74, 0xfa, 0x8a,
0x4a, 0x32, 0x25, 0x92, 0x18, 0x5d, 0x35, 0x1f, 0x73, 0x35, 0x9f, 0xbb, 0x7f, 0xcd, 0x7b, 0xdc,
0x08, 0x1e, 0xfd, 0xdd, 0xfd, 0x84, 0x25, 0x91, 0xa4, 0x49, 0x26, 0x2f, 0xfd, 0x7f, 0x70, 0xa3,
0x13, 0xd8, 0xcb, 0x08, 0x17, 0x74, 0x6a, 0xe8, 0xff, 0x55, 0x31, 0x1a, 0x95, 0xbd, 0xba, 0x63,
0x8d, 0xb9, 0xe1, 0x18, 0x8d, 0x61, 0xaf, 0x5a, 0x0d, 0xca, 0xd1, 0x73, 0xa8, 0x57, 0x08, 0x1d,
0xac, 0xf8, 0xd6, 0xb6, 0xd1, 0x3c, 0xbc, 0x9b, 0x6e, 0x76, 0x49, 0xf3, 0xde, 0xce, 0xe6, 0x58,
0xbb, 0x99, 0x63, 0xed, 0x76, 0x8e, 0xc1, 0xa7, 0x12, 0x83, 0x2f, 0x25, 0x06, 0xd7, 0x25, 0x06,
0xb3, 0x12, 0x83, 0x1f, 0x25, 0x06, 0x3f, 0x4b, 0xac, 0xdd, 0x96, 0x18, 0x5c, 0x2d, 0xb0, 0x36,
0x5b, 0x60, 0xed, 0x66, 0x81, 0xb5, 0xf7, 0x83, 0x20, 0x92, 0x61, 0x3e, 0x71, 0xce, 0x58, 0xe2,
0x06, 0x9c, 0x9c, 0x93, 0x94, 0xb8, 0x31, 0xbb, 0x88, 0xdc, 0xf6, 0xd3, 0x9a, 0xf4, 0x94, 0xda,
0xb3, 0x5f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x7e, 0xaa, 0x57, 0xd3, 0x6d, 0x03, 0x00, 0x00,
}
func (this *PushRequest) Equal(that interface{}) bool {
@ -465,6 +477,14 @@ func (this *EntryAdapter) Equal(that interface{}) bool {
return false
}
}
if len(this.Parsed) != len(that1.Parsed) {
return false
}
for i := range this.Parsed {
if !this.Parsed[i].Equal(&that1.Parsed[i]) {
return false
}
}
return true
}
func (this *PushRequest) GoString() string {
@ -519,7 +539,7 @@ func (this *EntryAdapter) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 7)
s := make([]string, 0, 8)
s = append(s, "&push.EntryAdapter{")
s = append(s, "Timestamp: "+fmt.Sprintf("%#v", this.Timestamp)+",\n")
s = append(s, "Line: "+fmt.Sprintf("%#v", this.Line)+",\n")
@ -530,6 +550,13 @@ func (this *EntryAdapter) GoString() string {
}
s = append(s, "StructuredMetadata: "+fmt.Sprintf("%#v", vs)+",\n")
}
if this.Parsed != nil {
vs := make([]*LabelPairAdapter, len(this.Parsed))
for i := range vs {
vs[i] = &this.Parsed[i]
}
s = append(s, "Parsed: "+fmt.Sprintf("%#v", vs)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
@ -788,6 +815,20 @@ func (m *EntryAdapter) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
if len(m.Parsed) > 0 {
for iNdEx := len(m.Parsed) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Parsed[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintPush(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x22
}
}
if len(m.StructuredMetadata) > 0 {
for iNdEx := len(m.StructuredMetadata) - 1; iNdEx >= 0; iNdEx-- {
{
@ -912,6 +953,12 @@ func (m *EntryAdapter) Size() (n int) {
n += 1 + l + sovPush(uint64(l))
}
}
if len(m.Parsed) > 0 {
for _, e := range m.Parsed {
l = e.Size()
n += 1 + l + sovPush(uint64(l))
}
}
return n
}
@ -977,10 +1024,16 @@ func (this *EntryAdapter) String() string {
repeatedStringForStructuredMetadata += strings.Replace(strings.Replace(f.String(), "LabelPairAdapter", "LabelPairAdapter", 1), `&`, ``, 1) + ","
}
repeatedStringForStructuredMetadata += "}"
repeatedStringForParsed := "[]LabelPairAdapter{"
for _, f := range this.Parsed {
repeatedStringForParsed += strings.Replace(strings.Replace(f.String(), "LabelPairAdapter", "LabelPairAdapter", 1), `&`, ``, 1) + ","
}
repeatedStringForParsed += "}"
s := strings.Join([]string{`&EntryAdapter{`,
`Timestamp:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.Timestamp), "Timestamp", "types.Timestamp", 1), `&`, ``, 1) + `,`,
`Line:` + fmt.Sprintf("%v", this.Line) + `,`,
`StructuredMetadata:` + repeatedStringForStructuredMetadata + `,`,
`Parsed:` + repeatedStringForParsed + `,`,
`}`,
}, "")
return s
@ -1516,6 +1569,40 @@ func (m *EntryAdapter) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
case 4:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Parsed", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPush
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthPush
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthPush
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Parsed = append(m.Parsed, LabelPairAdapter{})
if err := m.Parsed[len(m.Parsed)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipPush(dAtA[iNdEx:])

@ -46,4 +46,11 @@ message EntryAdapter {
(gogoproto.nullable) = false,
(gogoproto.jsontag) = "structuredMetadata,omitempty"
];
// This field shouldn't be used by clients to push data to Loki.
// It is only used by Loki to return parsed log lines in query responses.
// TODO: Remove this field from the write path Proto.
repeated LabelPairAdapter parsed = 4 [
(gogoproto.nullable) = false,
(gogoproto.jsontag) = "parsed,omitempty"
];
}
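The generated marshaller for this new field writes the literal byte `0x22` before each `Parsed` element. That is the protobuf field key for field number 4 with wire type 2 (length-delimited), computed as `(fieldNumber << 3) | wireType` — a quick check:

```go
package main

import "fmt"

// fieldKey computes a single-byte protobuf field key, valid for field
// numbers up to 15: key = (fieldNumber << 3) | wireType.
func fieldKey(fieldNumber, wireType int) byte {
	return byte(fieldNumber<<3 | wireType)
}

func main() {
	// Field 4 (parsed), wire type 2: matches dAtA[i] = 0x22 above.
	fmt.Printf("0x%x\n", fieldKey(4, 2)) // 0x22
	// Field 3 (structuredMetadata), wire type 2.
	fmt.Printf("0x%x\n", fieldKey(3, 2)) // 0x1a
}
```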

@ -25,12 +25,38 @@ type Entry struct {
Timestamp time.Time `protobuf:"bytes,1,opt,name=timestamp,proto3,stdtime" json:"ts"`
Line string `protobuf:"bytes,2,opt,name=line,proto3" json:"line"`
StructuredMetadata LabelsAdapter `protobuf:"bytes,3,opt,name=structuredMetadata,proto3" json:"structuredMetadata,omitempty"`
Parsed LabelsAdapter `protobuf:"bytes,4,opt,name=parsed,proto3" json:"parsed,omitempty"`
}
// MarshalJSON implements json.Marshaler.
// In Loki, this method should only be used by the
// Legacy encoder used when hitting the deprecated /api/prom/query endpoint.
// We will ignore the categorized labels and only return the stream labels.
func (m *Stream) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Labels string `json:"labels"`
Entries []Entry `json:"entries"`
}{
Labels: m.Labels,
Entries: m.Entries,
})
}
// MarshalJSON implements json.Marshaler.
// In Loki, this method should only be used by the
// Legacy encoder used when hitting the deprecated /api/prom/query endpoint.
// We will ignore the structured metadata.
func (m *Entry) MarshalJSON() ([]byte, error) {
type raw Entry
e := raw(*m)
e.StructuredMetadata = nil
return json.Marshal(e)
}
// LabelAdapter should be a copy of the Prometheus labels.Label type.
// We cannot import Prometheus in this package because it would create many dependencies
// in other projects importing this package. Instead, we copy the definition here, which should
// be kept in sync with the original so it can be casted to the prometheus type.
// be kept in sync with the original, so it can be cast to the prometheus type.
type LabelAdapter struct {
Name, Value string
}
@ -172,6 +198,20 @@ func (m *Entry) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
if len(m.Parsed) > 0 {
for iNdEx := len(m.Parsed) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Parsed[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintPush(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x22
}
}
if len(m.StructuredMetadata) > 0 {
for iNdEx := len(m.StructuredMetadata) - 1; iNdEx >= 0; iNdEx-- {
{
@ -471,6 +511,40 @@ func (m *Entry) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
case 4:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Parsed", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPush
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthPush
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthPush
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Parsed = append(m.Parsed, LabelAdapter{})
if err := m.Parsed[len(m.Parsed)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipPush(dAtA[iNdEx:])
@ -661,6 +735,12 @@ func (m *Entry) Size() (n int) {
n += 1 + l + sovPush(uint64(l))
}
}
if len(m.Parsed) > 0 {
for _, e := range m.Parsed {
l = e.Size()
n += 1 + l + sovPush(uint64(l))
}
}
return n
}
@ -711,7 +791,10 @@ func (m *Stream) Equal(that interface{}) bool {
return false
}
}
return m.Hash == that1.Hash
if m.Hash != that1.Hash {
return false
}
return true
}
func (m *Entry) Equal(that interface{}) bool {
@ -739,11 +822,22 @@ func (m *Entry) Equal(that interface{}) bool {
if m.Line != that1.Line {
return false
}
if len(m.StructuredMetadata) != len(that1.StructuredMetadata) {
return false
}
for i := range m.StructuredMetadata {
if !m.StructuredMetadata[i].Equal(that1.StructuredMetadata[i]) {
return false
}
}
if len(m.Parsed) != len(that1.Parsed) {
return false
}
for i := range m.Parsed {
if !m.Parsed[i].Equal(that1.Parsed[i]) {
return false
}
}
return true
}

@ -903,7 +903,7 @@ github.com/grafana/go-gelf/v2/gelf
# github.com/grafana/gomemcache v0.0.0-20230914135007-70d78eaabfe1
## explicit; go 1.18
github.com/grafana/gomemcache/memcache
# github.com/grafana/loki/pkg/push v0.0.0-20231017172654-cfc4f0e84adc => ./pkg/push
# github.com/grafana/loki/pkg/push v0.0.0-20231023154132-0a7737e7c7eb => ./pkg/push
## explicit; go 1.19
github.com/grafana/loki/pkg/push
# github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd => github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd
