Docs: improve wording and grammar (#6861)

pull/6882/head
Karen Miller 4 years ago committed by GitHub
parent d602c1331a
commit 8ac08e65d9
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
1. docs/sources/clients/lambda-promtail/_index.md (29)
2. docs/sources/operations/storage/boltdb-shipper.md (15)

docs/sources/clients/lambda-promtail/_index.md
@@ -19,12 +19,12 @@ The Terraform deployment also takes in an array of log group and bucket names, a
There's also a flag to keep the log stream label when propagating the logs from Cloudwatch, which defaults to false. This can be helpful when the cardinality is too large, such as the case of a log stream per lambda invocation.
-Additionally, an environment variable can be configured to add extra lables to the logs streamed by lambda-protmail.
-These extra labels will take the form `__extra_<name>=<value>`
+Additionally, an environment variable can be configured to add extra labels to the logs streamed by lambda-promtail.
+These extra labels will take the form `__extra_<name>=<value>`.
-Optional environment variable can be configured to add tenant id to the logs streamed by lambda-protmail.
+An optional environment variable can be configured to add the tenant ID to the logs streamed by lambda-promtail.
-In an effort to make deployment of lambda-promtail as simple as possible, we've created a [public ECR repo](https://gallery.ecr.aws/grafana/lambda-promtail) to publish our builds of lambda-promtail. Users are still able to clone this repo, make their own modifications to the Go code, and upload their own image to their own ECR repo if they wish.
+In an effort to make deployment of lambda-promtail as simple as possible, we've created a [public ECR repo](https://gallery.ecr.aws/grafana/lambda-promtail) to publish our builds of lambda-promtail. Users may clone this repo, make their own modifications to the Go code, and upload their own image to their own ECR repo.
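The `__extra_<name>=<value>` convention can be illustrated with a short sketch. This is an illustration only, not lambda-promtail's actual Go implementation; the function name is hypothetical, and the comma-separated `name1,value1,name2,value2` input format is taken from the extra-labels examples in this document.

```python
# Illustration only (not lambda-promtail's real code): turn an extra-labels
# string of the form "name1,value1,name2,value2" into labels of the form
# __extra_<name>=<value>, as described in the docs above.
def parse_extra_labels(raw: str) -> dict:
    parts = raw.split(",") if raw else []
    if len(parts) % 2 != 0:
        raise ValueError("extra_labels must contain name,value pairs")
    # Pair up consecutive entries: parts[i] is a name, parts[i+1] its value.
    return {f"__extra_{parts[i]}": parts[i + 1] for i in range(0, len(parts), 2)}
```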
### Examples
@@ -33,7 +33,8 @@ Terraform:
terraform apply -var "lambda_promtail_image=<repo:tag>" -var "write_address=https://logs-prod-us-central1.grafana.net/loki/api/v1/push" -var "password=<password>" -var "username=<user>" -var 'log_group_names=["/aws/lambda/log-group-1", "/aws/lambda/log-group-2"]' -var 'bucket_names=["bucket-a", "bucket-b"]' -var 'batch_size=131072'
```
-The first few lines of `main.tf` define the AWS region to deploy to, you are free to modify this or remove and deploy to
+The first few lines of `main.tf` define the AWS region to deploy to.
+Modify as desired, or remove it and deploy to a different region:
```
provider "aws" {
region = "us-east-2"
@@ -42,18 +43,20 @@ provider "aws" {
To keep the log group label add `-var "keep_stream=true"`.
-To add extra labels add `-var 'extra_labels="name1,value1,name2,value2"'`
+To add extra labels, add `-var 'extra_labels="name1,value1,name2,value2"'`.
-To add tenant id add `-var "tenant_id=value"`
+To add a tenant ID, add `-var "tenant_id=value"`.
-Note that the creation of subscription filter on Cloudwatch in the provided Terraform file only accepts an array of log group names, it does **not** accept strings for regex filtering on the logs contents via the subscription filters. We suggest extending the Terraform file to do so, or having lambda-promtail write to Promtail and using [pipeline stages](https://grafana.com/docs/loki/latest/clients/promtail/stages/drop/).
+Note that the creation of a subscription filter on CloudWatch in the provided Terraform file only accepts an array of log group names.
+It does **not** accept strings for regex filtering on the log contents via the subscription filters.
+We suggest extending the Terraform file to do so, or having lambda-promtail write to Promtail and using [pipeline stages](https://grafana.com/docs/loki/latest/clients/promtail/stages/drop/).
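Putting the optional flags together, a full Terraform invocation might look like the following sketch. This combines the `keep_stream`, `extra_labels`, and `tenant_id` variables shown above with the base command from the example; all bracketed values are placeholders:

```
terraform apply -var "lambda_promtail_image=<repo:tag>" -var "write_address=https://logs-prod-us-central1.grafana.net/loki/api/v1/push" -var "password=<password>" -var "username=<user>" -var 'log_group_names=["/aws/lambda/log-group-1"]' -var "keep_stream=true" -var 'extra_labels="name1,value1,name2,value2"' -var "tenant_id=value"
```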
CloudFormation:
```
aws cloudformation create-stack --stack-name lambda-promtail --template-body file://template.yaml --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM --region us-east-2 --parameters ParameterKey=WriteAddress,ParameterValue=https://logs-prod-us-central1.grafana.net/loki/api/v1/push ParameterKey=Username,ParameterValue=<user> ParameterKey=Password,ParameterValue=<password> ParameterKey=LambdaPromtailImage,ParameterValue=<repo:tag>
```
-Within the CloudFormation template file you should copy/paste and modify the subscription filter section as needed for each log group:
+Within the CloudFormation template file, copy, paste, and modify the subscription filter section as needed for each log group:
```
MainLambdaPromtailSubscriptionFilter:
Type: AWS::Logs::SubscriptionFilter
@@ -63,13 +66,13 @@ MainLambdaPromtailSubscriptionFilter:
LogGroupName: "/aws/lambda/some-lamda-log-group"
```
-To keep the log group label add `ParameterKey=KeepStream,ParameterValue=true`.
+To keep the log group label, add `ParameterKey=KeepStream,ParameterValue=true`.
-To add extra labels, include `ParameterKey=ExtraLabels,ParameterValue="name1,value1,name2,value2"`
+To add extra labels, include `ParameterKey=ExtraLabels,ParameterValue="name1,value1,name2,value2"`.
-To add tenant id add `ParameterKey=TenantID,ParameterValue=value`.
+To add a tenant ID, add `ParameterKey=TenantID,ParameterValue=value`.
-To modify an already created CloudFormation stack you need to use [update-stack](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/update-stack.html).
+To modify an existing CloudFormation stack, use [update-stack](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/update-stack.html).
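An `update-stack` invocation mirrors the `create-stack` command shown earlier; a sketch, assuming the same stack name and parameters as that example:

```
aws cloudformation update-stack --stack-name lambda-promtail --template-body file://template.yaml --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM --region us-east-2 --parameters ParameterKey=WriteAddress,ParameterValue=https://logs-prod-us-central1.grafana.net/loki/api/v1/push ParameterKey=Username,ParameterValue=<user> ParameterKey=Password,ParameterValue=<password> ParameterKey=LambdaPromtailImage,ParameterValue=<repo:tag>
```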
## Uses

docs/sources/operations/storage/boltdb-shipper.md
@@ -73,14 +73,17 @@ Let us talk about more in depth about how both Ingesters and Queriers work when
### Ingesters
-Ingesters keep writing the index to BoltDB files in `active_index_directory` and BoltDB Shipper keeps looking for new and updated files in that directory every 1 Minutes to upload them to the shared object store.
-When running Loki in clustered mode there could be multiple ingesters serving write requests hence each of them generating BoltDB files locally.
+Ingesters write the index to BoltDB files in `active_index_directory`,
+and the BoltDB Shipper looks for new and updated files in that directory at 1-minute intervals to upload them to the shared object store.
+When running Loki in microservices mode, there could be multiple ingesters serving write requests.
+Each ingester generates BoltDB files locally.
-**Note:** To avoid any loss of index when Ingester crashes it is recommended to run Ingesters as statefulset(when using k8s) with a persistent storage for storing index files.
+**Note:** To avoid any loss of index when an ingester crashes, we recommend running ingesters as a StatefulSet (when using Kubernetes) with persistent storage for the index files.
-Another important detail to note is when chunks are flushed they are available for reads in object store instantly while index is not since we only upload them every 15 Minutes with BoltDB shipper.
-Ingesters expose a new RPC for letting Queriers query the Ingester's local index for chunks which were recently flushed but its index might not be available yet with Queriers.
-For all the queries which require chunks to be read from the store, Queriers also query Ingesters over RPC for IDs of chunks which were recently flushed which is to avoid missing any logs from queries.
+When chunks are flushed, they are available for reads in the object store instantly. The index is not available instantly, since we upload it only every 15 minutes with the BoltDB Shipper.
+Ingesters expose a new RPC that lets queriers query the ingester's local index for chunks that were recently flushed but whose index might not yet be available to queriers.
+For all queries that require chunks to be read from the store, queriers also query ingesters over RPC for the IDs of recently flushed chunks.
+This avoids missing any logs from queries.
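The periodic upload described above can be illustrated with a small sketch: on each sync interval, only files created or modified since the last successful sync need uploading. This is an illustration only, not Loki's actual implementation (the shipper is written in Go); the function name and inputs are hypothetical.

```python
# Illustration only: decide which local BoltDB index files need uploading,
# given each file's last-modified timestamp and the time of the last
# successful sync. Loki's real shipper (in Go) also handles retries,
# deletions, and per-table directories, which this sketch omits.
def files_to_upload(mod_times: dict, last_sync: float) -> list:
    """mod_times maps filename -> last-modified timestamp (seconds)."""
    return sorted(name for name, mtime in mod_times.items() if mtime > last_sync)
```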
### Queriers
