loki/cmd/logql-analyzer/main.go

package main

import (
	"flag"
	"net/http"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
	"github.com/gorilla/mux"
	"github.com/grafana/dskit/server"
	"github.com/prometheus/client_golang/prometheus"

	"github.com/grafana/loki/pkg/logqlanalyzer"
	"github.com/grafana/loki/pkg/sizing"
	util_log "github.com/grafana/loki/pkg/util/log"
)
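
// main initialises the global logger, builds the HTTP server, and runs it until it exits.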
func main() {
	cfg := getConfig()

	// Initialise the global logger from pkg/util/log, which is used throughout the
	// Loki codebase. Every log line otherwise costs a write syscall, which forces a
	// context switch and blocks both the goroutine and its underlying OS thread.
	// Since #6954 the logger is line-buffered to deamplify those syscalls: entries
	// accumulate in a 256-entry buffer backed by a preallocated 10MB byte slice and
	// are flushed when the buffer fills or by a periodic 100ms flush. The trade-off
	// is that up to 256 messages can be lost if the process is killed ungracefully.
	util_log.InitLogger(&server.Config{
		LogLevel: cfg.LogLevel,
	}, prometheus.DefaultRegisterer, true, false)
	s, err := createServer(cfg, util_log.Logger)
	if err != nil {
		level.Error(util_log.Logger).Log("msg", "error while creating the server", "err", err)
		return
	}
	defer s.Shutdown()

	if err := s.Run(); err != nil {
		level.Error(util_log.Logger).Log("msg", "error while running the server", "err", err)
	}
}
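
// getConfig registers the dskit server flags on the default flag set and parses the command line.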
func getConfig() server.Config {
	cfg := server.Config{}
	cfg.RegisterFlags(flag.CommandLine)
	flag.Parse()
	return cfg
}
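
// createServer builds the dskit HTTP server and registers the LogQL analyzer endpoint,
// the sizing endpoints, and a readiness probe.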
func createServer(cfg server.Config, logger log.Logger) (*server.Server, error) {
	s, err := server.New(cfg)
	if err != nil {
		return nil, err
	}

	s.HTTP.Use(mux.CORSMethodMiddleware(s.HTTP))
	s.HTTP.Use(logqlanalyzer.CorsMiddleware())

	s.HTTP.Handle("/api/logql-analyze", &logqlanalyzer.LogQLAnalyzeHandler{}).Methods(http.MethodPost, http.MethodOptions)

	sizingHandler := sizing.NewHandler(log.With(logger, "component", "sizing"))
	s.HTTP.Handle("/api/sizing/helm", http.HandlerFunc(sizingHandler.GenerateHelmValues)).Methods(http.MethodGet, http.MethodOptions)
	s.HTTP.Handle("/api/sizing/nodes", http.HandlerFunc(sizingHandler.Nodes)).Methods(http.MethodGet, http.MethodOptions)
	s.HTTP.Handle("/api/sizing/cluster", http.HandlerFunc(sizingHandler.Cluster)).Methods(http.MethodGet, http.MethodOptions)

	s.HTTP.HandleFunc("/ready", func(w http.ResponseWriter, _ *http.Request) {
		http.Error(w, "ready", http.StatusOK)
	}).Methods(http.MethodGet)

	return s, err
}
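
The comment in main() above summarises the line-buffered logger introduced in #6954. As a rough illustration of that technique only, and not the actual Loki implementation, the sketch below batches log lines in a preallocated buffer and flushes them with a single write either when 256 entries have accumulated or when a 100ms ticker fires. The names lineBufferedWriter and newLineBufferedWriter are hypothetical.

// Illustrative sketch only; the real buffered logger wired up by util_log.InitLogger
// lives in the Loki/dskit codebase and differs in detail.
package main

import (
	"bytes"
	"io"
	"os"
	"sync"
	"time"

	"github.com/go-kit/log"
)

// lineBufferedWriter batches log lines and writes them to the underlying writer in
// one call when the buffer holds maxEntries lines, or when the periodic background
// flush fires. This turns one write syscall per log line into one per batch.
type lineBufferedWriter struct {
	mu         sync.Mutex
	out        io.Writer
	buf        *bytes.Buffer // preallocated once and reused across flushes
	entries    int
	maxEntries int
}

func newLineBufferedWriter(out io.Writer, maxEntries, prealloc int, flushEvery time.Duration) *lineBufferedWriter {
	w := &lineBufferedWriter{
		out:        out,
		buf:        bytes.NewBuffer(make([]byte, 0, prealloc)),
		maxEntries: maxEntries,
	}
	// Periodic flush so buffered lines are not held indefinitely when logging is quiet.
	go func() {
		for range time.Tick(flushEvery) {
			w.mu.Lock()
			w.flushLocked()
			w.mu.Unlock()
		}
	}()
	return w
}

// Write appends one log line to the buffer and only triggers a flush once
// maxEntries lines have accumulated.
func (w *lineBufferedWriter) Write(p []byte) (int, error) {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.buf.Write(p)
	w.entries++
	if w.entries >= w.maxEntries {
		w.flushLocked()
	}
	return len(p), nil
}

func (w *lineBufferedWriter) flushLocked() {
	if w.entries == 0 {
		return
	}
	_, _ = w.out.Write(w.buf.Bytes()) // one write for the whole batch
	w.buf.Reset()                     // keeps the preallocated capacity
	w.entries = 0
}

func main() {
	// Numbers from the change description above: 256 entries, 10MB preallocation, 100ms flush.
	w := newLineBufferedWriter(os.Stderr, 256, 10<<20, 100*time.Millisecond)
	logger := log.NewLogfmtLogger(w)
	logger.Log("msg", "this line sits in the buffer until a flush")
	time.Sleep(200 * time.Millisecond) // give the periodic flush a chance to run
}

Batching converts one write syscall per log line into one per batch, at the cost of losing whatever is still buffered if the process exits ungracefully between flushes.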