docs: use repetitive numbering (#2699)

Branch: pull/2708/head
Author: San Nguyen (committed by GitHub, 5 years ago)
Parent: 499e4efc24
Commit: c00c7ed252
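
Markdown renders an ordered list from item position rather than from the literal numerals in the source, so a list whose items all carry the `1.` marker still renders as 1., 2., 3. The payoff of this repetitive style is that inserting or removing a step never forces renumbering of the items that follow, which keeps future diffs small. A minimal sketch of the two equivalent spellings (illustrative, not taken from the changed files):

```markdown
<!-- repetitive numbering, the style this commit adopts -->
1. first item
1. second item
1. third item

<!-- sequential numbering, the style being replaced; renders identically -->
1. first item
2. second item
3. third item
```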
16 changed files (changed-line count in parentheses):

 1. docs/sources/architecture/_index.md (30)
 2. docs/sources/clients/promtail/pipelines.md (14)
 3. docs/sources/clients/promtail/stages/cri.md (4)
 4. docs/sources/clients/promtail/stages/docker.md (4)
 5. docs/sources/clients/promtail/stages/tenant.md (4)
 6. docs/sources/clients/promtail/troubleshooting.md (8)
 7. docs/sources/community/_index.md (4)
 8. docs/sources/design-documents/2020-02-Promtail-Push-API.md (10)
 9. docs/sources/getting-started/_index.md (6)
10. docs/sources/getting-started/get-logs-into-loki.md (6)
11. docs/sources/maintaining/_index.md (2)
12. docs/sources/maintaining/release-loki-build-image.md (2)
13. docs/sources/maintaining/release.md (34)
14. docs/sources/operations/_index.md (14)
15. docs/sources/operations/storage/_index.md (2)
16. docs/sources/operations/storage/table-manager.md (2)

docs/sources/architecture/_index.md

@@ -50,7 +50,7 @@ processes with the following limitations:
    monolithic mode with more than one replica, as each replica must be able to
    access the same storage backend, and local storage is not safe for concurrent
    access.
-2. Individual components cannot be scaled independently, so it is not possible
+1. Individual components cannot be scaled independently, so it is not possible
    to have more read components than write components.

 ## Components

@@ -117,17 +117,17 @@ the hash ring. Each ingester has a state of either `PENDING`, `JOINING`,
 1. `PENDING` is an Ingester's state when it is waiting for a handoff from
    another ingester that is `LEAVING`.
-2. `JOINING` is an Ingester's state when it is currently inserting its tokens
+1. `JOINING` is an Ingester's state when it is currently inserting its tokens
    into the ring and initializing itself. It may receive write requests for
    tokens it owns.
-3. `ACTIVE` is an Ingester's state when it is fully initialized. It may receive
+1. `ACTIVE` is an Ingester's state when it is fully initialized. It may receive
    both write and read requests for tokens it owns.
-4. `LEAVING` is an Ingester's state when it is shutting down. It may receive
+1. `LEAVING` is an Ingester's state when it is shutting down. It may receive
    read requests for data it still has in memory.
-5. `UNHEALTHY` is an Ingester's state when it has failed to heartbeat to
+1. `UNHEALTHY` is an Ingester's state when it has failed to heartbeat to
    Consul. `UNHEALTHY` is set by the distributor when it periodically checks the ring.

 Each log stream that an ingester receives is built up into a set of many

@@ -137,8 +137,8 @@ interval.
 Chunks are compressed and marked as read-only when:

 1. The current chunk has reached capacity (a configurable value).
-2. Too much time has passed without the current chunk being updated
-3. A flush occurs.
+1. Too much time has passed without the current chunk being updated
+1. A flush occurs.

 Whenever a chunk is compressed and marked as read-only, a writable chunk takes
 its place.

@@ -320,12 +320,12 @@ writes and improve query performance.
 To summarize, the read path works as follows:

 1. The querier receives an HTTP/1 request for data.
-2. The querier passes the query to all ingesters for in-memory data.
-3. The ingesters receive the read request and return data matching the query, if
+1. The querier passes the query to all ingesters for in-memory data.
+1. The ingesters receive the read request and return data matching the query, if
    any.
-4. The querier lazily loads data from the backing store and runs the query
+1. The querier lazily loads data from the backing store and runs the query
    against it if no ingesters returned data.
-5. The querier iterates over all received data and deduplicates, returning a
+1. The querier iterates over all received data and deduplicates, returning a
    final set of data over the HTTP/1 connection.

 ## Write Path

@@ -335,9 +335,9 @@ To summarize, the read path works as follows:
 To summarize, the write path works as follows:

 1. The distributor receives an HTTP/1 request to store data for streams.
-2. Each stream is hashed using the hash ring.
-3. The distributor sends each stream to the appropriate ingesters and their
+1. Each stream is hashed using the hash ring.
+1. The distributor sends each stream to the appropriate ingesters and their
    replicas (based on the configured replication factor).
-4. Each ingester will create a chunk or append to an existing chunk for the
+1. Each ingester will create a chunk or append to an existing chunk for the
    stream's data. A chunk is unique per tenant and per labelset.
-5. The distributor responds with a success code over the HTTP/1 connection.
+1. The distributor responds with a success code over the HTTP/1 connection.

docs/sources/clients/promtail/pipelines.md

@@ -14,14 +14,14 @@ stages:
 1. **Parsing stages** parse the current log line and extract data out of it. The
    extracted data is then available for use by other stages.
-2. **Transform stages** transform extracted data from previous stages.
-3. **Action stages** take extracted data from previous stages and do something
+1. **Transform stages** transform extracted data from previous stages.
+1. **Action stages** take extracted data from previous stages and do something
    with them. Actions can:
    1. Add or modify existing labels to the log line
-   2. Change the timestamp of the log line
-   3. Change the content of the log line
-   4. Create a metric based on the extracted data
-4. **Filtering stages** optionally apply a subset of stages or drop entries based on some
+   1. Change the timestamp of the log line
+   1. Change the content of the log line
+   1. Create a metric based on the extracted data
+1. **Filtering stages** optionally apply a subset of stages or drop entries based on some
    condition.

 Typical pipelines will start with a parsing stage (such as a

@@ -37,7 +37,7 @@ Note that pipelines can not currently be used to deduplicate logs; Loki will
 receive the same log line multiple times if, for example:

 1. Two scrape configs read from the same file
-2. Duplicate log lines in a file are sent through a pipeline. Deduplication is
+1. Duplicate log lines in a file are sent through a pipeline. Deduplication is
    not done.

 However, Loki will perform some deduplication at query time for logs that have

docs/sources/clients/promtail/stages/cri.md

@@ -16,8 +16,8 @@ supports the specific CRI log format. CRI specifies log lines as
 space-delimited values with the following components:

 1. `time`: The timestamp string of the log
-2. `stream`: Either stdout or stderr
-3. `log`: The contents of the log line
+1. `stream`: Either stdout or stderr
+1. `log`: The contents of the log line

 No whitespace is permitted between the components. In the following example,
 only the first log line can be properly formatted using the `cri` stage:

docs/sources/clients/promtail/stages/docker.md

@@ -17,8 +17,8 @@ only supports the specific Docker log format. Each log line from Docker is
 written as JSON with the following keys:

 1. `log`: The content of log line
-2. `stream`: Either `stdout` or `stderr`
-3. `time`: The timestamp string of the log line
+1. `stream`: Either `stdout` or `stderr`
+1. `time`: The timestamp string of the log line

 ## Examples

docs/sources/clients/promtail/stages/tenant.md

@@ -77,7 +77,7 @@ Given the following log line:
 The pipeline would:

 1. Decode the JSON log
-2. Set the label `app="api"`
-3. Process the `match` stage checking if the `{app="api"}` selector matches
+1. Set the label `app="api"`
+1. Process the `match` stage checking if the `{app="api"}` selector matches
    and - whenever it matches - run the sub stages. The `tenant` sub stage
    would override the tenant with the value `"team-api"`.

docs/sources/clients/promtail/troubleshooting.md

@@ -71,10 +71,10 @@ cat my.log | promtail --config.file promtail.yaml
 Given the following order of events:

 1. `promtail` is tailing `/app.log`
-2. `promtail` current position for `/app.log` is `100` (byte offset)
-3. `promtail` is stopped
-4. `/app.log` is truncated and new logs are appended to it
-5. `promtail` is restarted
+1. `promtail` current position for `/app.log` is `100` (byte offset)
+1. `promtail` is stopped
+1. `/app.log` is truncated and new logs are appended to it
+1. `promtail` is restarted

 When `promtail` is restarted, it reads the previous position (`100`) from the
 positions file. Two scenarios are then possible:

docs/sources/community/_index.md

@@ -5,5 +5,5 @@ weight: 1100
 # Community

 1. [Governance](governance/)
-2. [Getting in Touch](getting-in-touch/)
-3. [Contributing](contributing/)
+1. [Getting in Touch](getting-in-touch/)
+1. [Contributing](contributing/)

docs/sources/design-documents/2020-02-Promtail-Push-API.md

@@ -56,7 +56,7 @@ rejected pushes. Users are recommended to do one of the following:
 1. Have a dedicated Promtail instance for receiving pushes. This also applies to
    using the syslog target.
-2. Have a separated k8s service that always resolves to the same Promtail pod,
+1. Have a separated k8s service that always resolves to the same Promtail pod,
    bypassing the load balancing issue.

 ## Implementation

@@ -100,10 +100,10 @@ Loki uses. There are some concerns with this approach:
 1. The gRPC Gateway reverse proxy will need to play nice with the existing HTTP
    mux used in Promtail.
-2. We couldn't control the HTTP and Protobuf formats separately as Loki can.
-3. Log lines will be double-encoded thanks to the reverse proxy.
-4. A small overhead of using a reverse proxy in-process will be introduced.
-5. This breaks our normal pattern of writing our own shim functions; may add
+1. We couldn't control the HTTP and Protobuf formats separately as Loki can.
+1. Log lines will be double-encoded thanks to the reverse proxy.
+1. A small overhead of using a reverse proxy in-process will be introduced.
+1. This breaks our normal pattern of writing our own shim functions; may add
    some cognitive overhead of having to deal with the gRPC gateway as an outlier
    in the code.

docs/sources/getting-started/_index.md

@@ -5,7 +5,7 @@ weight: 300
 # Getting started with Loki

 1. [Grafana](grafana/)
-2. [LogCLI](logcli/)
-3. [Labels](labels/)
-4. [Troubleshooting](troubleshooting/)
+1. [LogCLI](logcli/)
+1. [Labels](labels/)
+1. [Troubleshooting](troubleshooting/)

docs/sources/getting-started/get-logs-into-loki.md

@@ -17,7 +17,7 @@ The following instructions should help you get started.
    wget https://raw.githubusercontent.com/grafana/loki/master/cmd/promtail/promtail-local-config.yaml
    ```
-2. Open the config file in the text editor of your choice. It should look similar to this:
+1. Open the config file in the text editor of your choice. It should look similar to this:

    ```
    server:

@@ -42,7 +42,7 @@ scrape_configs:
 The seven lines under `scrape_configs` are what send the logs that Loki generates to Loki, which then outputs them in the command line and http://localhost:3100/metrics.

-3. Copy the seven lines under `scrape_configs`, and then paste them under the original job (you can also just edit the original seven lines).
+1. Copy the seven lines under `scrape_configs`, and then paste them under the original job (you can also just edit the original seven lines).

 Below is an example that sends logs from a default Grafana installation to Loki. We updated the following fields:
 - job_name - This differentiates the logs collected from other log groups.

@@ -60,7 +60,7 @@ scrape_configs:
    __path__: "C:/Program Files/GrafanaLabs/grafana/data/log/grafana.log"
    ```
-4. Enter the following command to run Promtail. Examples below assume you have put the config file in the same directory as the binary.
+1. Enter the following command to run Promtail. Examples below assume you have put the config file in the same directory as the binary.

 **Windows**

docs/sources/maintaining/_index.md

@@ -7,4 +7,4 @@ weight: 1200
 This section details information for maintainers of Loki.

 1. [Releasing Loki](release/)
-2. [Releasing `loki-build-image`](release-loki-build-image/)
+1. [Releasing `loki-build-image`](release-loki-build-image/)

docs/sources/maintaining/release-loki-build-image.md

@@ -13,4 +13,4 @@ The [`loki-build-image`](https://github.com/grafana/loki/tree/master/loki-build-
 1. .circleci/config.yml
 1. Run `make drone` to rebuild the drone yml file with the new image version (the image version in the Makefile is used)
 1. Commit your changes (else you will get a WIP tag)
-2. Run `make build-image-push`
+1. Run `make build-image-push`

docs/sources/maintaining/release.md

@@ -20,9 +20,9 @@ page](https://github.com/settings/keys). If the GPG key for the email address
 used to commit with Loki is not present, follow these instructions to add it:

 1. Run `gpg --armor --export <your email address>`
-2. Copy the output.
-3. In the settings page linked above, click "New GPG Key".
-4. Copy and paste the PGP public key block.
+1. Copy the output.
+1. In the settings page linked above, click "New GPG Key".
+1. Copy and paste the PGP public key block.

 #### Signing Commits and Tags by Default

@@ -50,22 +50,22 @@ export GPG_TTY=$(tty)
 1. Create a new branch to update `CHANGELOG.md` and references to version
    numbers across the entire repository (e.g. README.md in the project root).
-2. Modify `CHANGELOG.md` with the new version number and its release date.
-3. List all the merged PRs since the previous release. This command is helpful
+1. Modify `CHANGELOG.md` with the new version number and its release date.
+1. List all the merged PRs since the previous release. This command is helpful
    for generating the list (modifying the date to the date of the previous release): `curl https://api.github.com/search/issues?q=repo:grafana/loki+is:pr+"merged:>=2019-08-02" | jq -r ' .items[] | "* [" + (.number|tostring) + "](" + .html_url + ") **" + .user.login + "**: " + .title'`
-4. Go through `docs/` and find references to the previous release version and
+1. Go through `docs/` and find references to the previous release version and
    update them to reference the new version.
-5. *Without creating a tag*, create a commit based on your changes and open a PR
+1. *Without creating a tag*, create a commit based on your changes and open a PR
    for updating the release notes.
    1. Until [852](https://github.com/grafana/loki/issues/852) is fixed, updating
       Helm and Ksonnet configs needs to be done in a separate commit following
       the release tag so that Helm tests pass.
-6. Merge the changelog PR.
-7. Create a new tag for the release.
+1. Merge the changelog PR.
+1. Create a new tag for the release.
    1. Once this step is done, the CI will be triggered to create release
       artifacts and publish them to a draft release. The tag will be made
       publicly available immediately.
-   2. Run the following to create the tag:
+   1. Run the following to create the tag:

    ```bash
    RELEASE=v1.2.3 # UPDATE ME to reference new release

@@ -74,7 +74,7 @@ export GPG_TTY=$(tty)
    git tag -s $RELEASE -m "tagging release $RELEASE"
    git push origin $RELEASE
    ```
-8. Watch CircleCI and wait for all the jobs to finish running.
+1. Watch CircleCI and wait for all the jobs to finish running.

 ## Updating Helm and Ksonnet configs

@@ -82,10 +82,10 @@ These steps should be executed after the previous section, once CircleCI has
 finished running all the release jobs.

 1. Run `bash ./tools/release_prepare.sh`
-2. When prompted for the release version, enter the latest tag.
-3. When prompted for new Helm version numbers, the defaults should suffice (a
+1. When prompted for the release version, enter the latest tag.
+1. When prompted for new Helm version numbers, the defaults should suffice (a
    minor version bump).
-4. Commit the changes to a new branch, push, make a PR, and get it merged.
+1. Commit the changes to a new branch, push, make a PR, and get it merged.

 ## Publishing the Release Draft

@@ -93,9 +93,9 @@ Once the previous two steps are completed, you can publish your draft!
 1. Go to the [GitHub releases page](https://github.com/grafana/loki/releases)
    and find the drafted release.
-2. Edit the drafted release, copying and pasting *notable changes* from the
+1. Edit the drafted release, copying and pasting *notable changes* from the
    CHANGELOG. Add a link to the CHANGELOG, noting that the full list of changes
    can be found there. Refer to other releases for help with formatting this.
-3. Optionally, have other team members review the release draft so you feel
+1. Optionally, have other team members review the release draft so you feel
    comfortable with it.
-4. Publish the release!
+1. Publish the release!

docs/sources/operations/_index.md

@@ -5,11 +5,11 @@ weight: 800
 # Operating Loki

 1. [Upgrading](upgrade/)
-2. [Authentication](authentication/)
-3. [Observability](observability/)
-4. [Scalability](scalability/)
-5. [Storage](storage/)
+1. [Authentication](authentication/)
+1. [Observability](observability/)
+1. [Scalability](scalability/)
+1. [Storage](storage/)
    1. [Table Manager](storage/table-manager/)
-   2. [Retention](storage/retention/)
-6. [Multi-tenancy](multi-tenancy/)
-7. [Loki Canary](loki-canary/)
+   1. [Retention](storage/retention/)
+1. [Multi-tenancy](multi-tenancy/)
+1. [Loki Canary](loki-canary/)

docs/sources/operations/storage/_index.md

@@ -19,7 +19,7 @@ how to configure the storage and the index.
 For more information:

 1. [Table Manager](table-manager/)
-2. [Retention](retention/)
+1. [Retention](retention/)

 ## Supported Stores

docs/sources/operations/storage/table-manager.md

@@ -193,7 +193,7 @@ to `0`.
 The Table Manager can be executed in two ways:

 1. Implicitly executed when Loki runs in monolithic mode (single process)
-2. Explicitly executed when Loki runs in microservices mode
+1. Explicitly executed when Loki runs in microservices mode

 ### Monolithic mode
