mirror of https://github.com/watcha-fr/synapse
Support generating structured logs in addition to standard logs. (#8607)
This modifies the configuration of structured logging to be usable from the standard Python logging configuration. This also separates the formatting of logs from the transport, allowing JSON logs to files or standard logs to sockets.
parent
9a7e0d2ea6
commit
00b24aa545
@@ -0,0 +1 @@
Re-organize the structured logging code to separate the TCP transport handling from the JSON formatting.
@@ -1,83 +1,161 @@
# Structured Logging

A structured logging system can be useful when your logs are destined for a
machine to parse and process. By maintaining its machine-readable characteristics,
it enables more efficient searching and aggregations when consumed by software
such as the "ELK stack".

Synapse's structured logging system is configured via the file that Synapse's
`log_config` config option points to. The file should include a formatter which
uses the `synapse.logging.TerseJsonFormatter` class included with Synapse and a
handler which uses the above formatter.

There is also a `synapse.logging.JsonFormatter` option which does not include
a timestamp in the resulting JSON. This is useful if the log ingester adds its
own timestamp.

A structured logging configuration looks similar to the following:

```yaml
version: 1

formatters:
    structured:
        class: synapse.logging.TerseJsonFormatter

handlers:
    file:
        class: logging.handlers.TimedRotatingFileHandler
        formatter: structured
        filename: /path/to/my/logs/homeserver.log
        when: midnight
        backupCount: 3  # Does not include the current log file.
        encoding: utf8

loggers:
    synapse:
        level: INFO
        handlers: [file]
    synapse.storage.SQL:
        level: WARNING
```

The above logging config will set Synapse as 'INFO' logging level by default,
with the SQL layer at 'WARNING', and will log to a file, stored as JSON.
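Because the new schema is the standard library's `logging.config.dictConfig` schema, a config like the one above can be exercised directly from Python. This sketch swaps `synapse.logging.TerseJsonFormatter` for a plain stdlib format string so it runs without Synapse installed; the file path is a throwaway temp file:

```python
# Minimal sketch: feed a dict equivalent of the YAML above into dictConfig.
# The stdlib Formatter stands in for Synapse's TerseJsonFormatter here, so the
# output is plain text rather than JSON (an assumption for illustration only).
import logging
import logging.config
import tempfile

log_path = tempfile.mktemp(suffix=".log")

config = {
    "version": 1,
    "formatters": {
        "structured": {"format": "%(name)s - %(levelname)s - %(message)s"},
    },
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "formatter": "structured",
            "filename": log_path,
            "encoding": "utf8",
        },
    },
    "loggers": {
        "synapse": {"level": "INFO", "handlers": ["file"]},
        "synapse.storage.SQL": {"level": "WARNING"},
    },
}

logging.config.dictConfig(config)
logging.getLogger("synapse").info("server started")

with open(log_path) as f:
    print(f.read().strip())  # → synapse - INFO - server started
```

Synapse simply parses the YAML file and passes the resulting dict through this same stdlib machinery, which is why any standard handler or formatter class can be used.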
It is also possible to configure Synapse to log to a remote endpoint by using the
`synapse.logging.RemoteHandler` class included with Synapse. It takes the
following arguments:

- `host`: Hostname or IP address of the log aggregator.
- `port`: Numerical port to contact on the host.
- `maximum_buffer`: (Optional, defaults to 1000) The maximum buffer size to allow.

A remote structured logging configuration looks similar to the following:

```yaml
version: 1

formatters:
    structured:
        class: synapse.logging.TerseJsonFormatter

handlers:
    remote:
        class: synapse.logging.RemoteHandler
        formatter: structured
        host: 10.1.2.3
        port: 9999

loggers:
    synapse:
        level: INFO
        handlers: [remote]
    synapse.storage.SQL:
        level: WARNING
```

The above logging config will set Synapse as 'INFO' logging level by default,
with the SQL layer at 'WARNING', and will log JSON formatted messages to a
remote endpoint at 10.1.2.3:9999.
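The wire format sent to such an endpoint is newline-delimited JSON ("json_lines"), one object per event. A rough, hypothetical approximation using only the standard library is below; the field names are illustrative and are not Synapse's exact schema:

```python
# Sketch of a terse-JSON-style formatter: each record becomes one JSON object
# on one line, so a TCP ingester (e.g. Logstash's json_lines codec) can split
# the stream on "\n". Field names here are assumptions, not Synapse's schema.
import json
import logging
import time


class JsonLinesFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "log": record.getMessage(),
            "namespace": record.name,
            "level": record.levelname,
            "time": round(time.time(), 2),
        })


handler = logging.StreamHandler()
handler.setFormatter(JsonLinesFormatter())
logger = logging.getLogger("synapse.demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("Hello there, %s!", "wally")
```

Pointing a handler like this at a socket instead of a stream is essentially what `RemoteHandler` adds, plus buffering and reconnection logic.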
## Upgrading from legacy structured logging configuration

Versions of Synapse prior to v1.23.0 included a custom structured logging
configuration which is deprecated. It used a `structured: true` flag and
configured `drains` instead of `handlers` and `formatters`.

Synapse currently automatically converts the old configuration to the new
configuration, but this will be removed in a future version of Synapse. The
following reference can be used to update your configuration. Based on the drain
`type`, we can pick a new handler:

1. For a type of `console`, `console_json`, or `console_json_terse`: a handler
   with a class of `logging.StreamHandler` and a `stream` of `ext://sys.stdout`
   or `ext://sys.stderr` should be used.
2. For a type of `file` or `file_json`: a handler of `logging.FileHandler` with
   a location of the file path should be used.
3. For a type of `network_json_terse`: a handler of `synapse.logging.RemoteHandler`
   with the host and port should be used.

Then based on the drain `type` we can pick a new formatter:

1. For a type of `console` or `file` no formatter is necessary.
2. For a type of `console_json` or `file_json`: a formatter of
   `synapse.logging.JsonFormatter` should be used.
3. For a type of `console_json_terse` or `network_json_terse`: a formatter of
   `synapse.logging.TerseJsonFormatter` should be used.

Each new handler and formatter should be added to the logging configuration
and then assigned to either a logger or the root logger.
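The drain-to-handler mapping above can be mechanized. The helper below is hypothetical (it is not part of Synapse) and converts a single legacy drain into a handler dict plus formatter class name for the new dictConfig-style schema:

```python
# Hypothetical conversion helper following the two numbered lists above:
# pick a handler class from the drain type, then pick a formatter.
def convert_drain(name, drain):
    dtype = drain["type"]
    if dtype in ("console", "console_json", "console_json_terse"):
        handler = {
            "class": "logging.StreamHandler",
            "stream": "ext://sys." + drain.get("location", "stdout"),
        }
    elif dtype in ("file", "file_json"):
        handler = {"class": "logging.FileHandler", "filename": drain["location"]}
    elif dtype == "network_json_terse":
        handler = {
            "class": "synapse.logging.RemoteHandler",
            "host": drain["host"],
            "port": drain["port"],
        }
    else:
        raise ValueError("unknown drain type: %s" % (dtype,))

    if dtype in ("console_json", "file_json"):
        formatter = "synapse.logging.JsonFormatter"
    elif dtype in ("console_json_terse", "network_json_terse"):
        formatter = "synapse.logging.TerseJsonFormatter"
    else:
        formatter = None  # plain console/file drains need no formatter
    return handler, formatter
```

For example, `convert_drain("file", {"type": "file_json", "location": "homeserver.log"})` yields a `logging.FileHandler` entry paired with `synapse.logging.JsonFormatter`, matching the worked example below.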
An example legacy configuration:

```yaml
structured: true

loggers:
    synapse:
        level: INFO
    synapse.storage.SQL:
        level: WARNING

drains:
    console:
        type: console
        location: stdout
    file:
        type: file_json
        location: homeserver.log
```

Would be converted into a new configuration:

```yaml
version: 1

formatters:
    json:
        class: synapse.logging.JsonFormatter

handlers:
    console:
        class: logging.StreamHandler
        stream: ext://sys.stdout
    file:
        class: logging.FileHandler
        formatter: json
        filename: homeserver.log

loggers:
    synapse:
        level: INFO
        handlers: [console, file]
    synapse.storage.SQL:
        level: WARNING
```

The new logging configuration is a bit more verbose, but significantly more
flexible. It allows for configurations that were not previously possible, such as
sending plain logs over the network, or using different handlers for different
modules.
@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# These are imported to allow for nicer logging configuration files.
from synapse.logging._remote import RemoteHandler
from synapse.logging._terse_json import JsonFormatter, TerseJsonFormatter

__all__ = ["RemoteHandler", "JsonFormatter", "TerseJsonFormatter"]
@@ -0,0 +1,33 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging

from typing_extensions import Literal


class MetadataFilter(logging.Filter):
    """Logging filter that adds constant values to each record.

    Args:
        metadata: Key-value pairs to add to each record.
    """

    def __init__(self, metadata: dict):
        self._metadata = metadata

    def filter(self, record: logging.LogRecord) -> Literal[True]:
        for key, value in self._metadata.items():
            setattr(record, key, value)
        return True
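A filter like this is attached to a handler (or named in a logging config's `filters` section) to stamp constant key-value pairs onto every record, which a JSON formatter can then include in its output. A self-contained sketch mirroring the class above, with a throwaway capture handler for demonstration:

```python
# Standalone sketch of MetadataFilter's behaviour: the filter mutates each
# record in place, adding constant attributes, and always lets it through.
import logging


class MetadataFilter(logging.Filter):
    """Adds constant values to each record (mirrors the class above)."""

    def __init__(self, metadata: dict):
        self._metadata = metadata

    def filter(self, record: logging.LogRecord) -> bool:
        for key, value in self._metadata.items():
            setattr(record, key, value)
        return True


captured = []


class CaptureHandler(logging.Handler):
    """Test-only handler that stashes records in a list."""

    def emit(self, record):
        captured.append(record)


logger = logging.getLogger("metadata_demo")
handler = CaptureHandler()
# "server_name" is an illustrative key, not a Synapse-mandated one.
handler.addFilter(MetadataFilter({"server_name": "example.com"}))
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("hello")

print(captured[0].server_name)  # → example.com
```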
@@ -0,0 +1,34 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging


class LoggerCleanupMixin:
    def get_logger(self, handler):
        """
        Attach a handler to a logger and add clean-ups to revert this.
        """
        # Create a logger and add the handler to it.
        logger = logging.getLogger(__name__)
        logger.addHandler(handler)

        # Ensure the logger actually logs something.
        logger.setLevel(logging.INFO)

        # Ensure the logger gets cleaned-up appropriately.
        self.addCleanup(logger.removeHandler, handler)
        self.addCleanup(logger.setLevel, logging.NOTSET)

        return logger
@@ -0,0 +1,153 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.test.proto_helpers import AccumulatingProtocol

from synapse.logging import RemoteHandler

from tests.logging import LoggerCleanupMixin
from tests.server import FakeTransport, get_clock
from tests.unittest import TestCase


def connect_logging_client(reactor, client_id):
    # This is essentially tests.server.connect_client, but disabling autoflush on
    # the client transport. This is necessary to avoid an infinite loop due to
    # sending of data via the logging transport causing additional logs to be
    # written.
    factory = reactor.tcpClients.pop(client_id)[2]
    client = factory.buildProtocol(None)
    server = AccumulatingProtocol()
    server.makeConnection(FakeTransport(client, reactor))
    client.makeConnection(FakeTransport(server, reactor, autoflush=False))

    return client, server


class RemoteHandlerTestCase(LoggerCleanupMixin, TestCase):
    def setUp(self):
        self.reactor, _ = get_clock()

    def test_log_output(self):
        """
        The remote handler delivers logs over TCP.
        """
        handler = RemoteHandler("127.0.0.1", 9000, _reactor=self.reactor)
        logger = self.get_logger(handler)

        logger.info("Hello there, %s!", "wally")

        # Trigger the connection
        client, server = connect_logging_client(self.reactor, 0)

        # Trigger data being sent
        client.transport.flush()

        # One log message, with a single trailing newline
        logs = server.data.decode("utf8").splitlines()
        self.assertEqual(len(logs), 1)
        self.assertEqual(server.data.count(b"\n"), 1)

        # Ensure the data passed through properly.
        self.assertEqual(logs[0], "Hello there, wally!")

    def test_log_backpressure_debug(self):
        """
        When backpressure is hit, DEBUG logs will be shed.
        """
        handler = RemoteHandler(
            "127.0.0.1", 9000, maximum_buffer=10, _reactor=self.reactor
        )
        logger = self.get_logger(handler)

        # Send some debug messages
        for i in range(0, 3):
            logger.debug("debug %s" % (i,))

        # Send a bunch of useful messages
        for i in range(0, 7):
            logger.info("info %s" % (i,))

        # The last debug message pushes it past the maximum buffer
        logger.debug("too much debug")

        # Allow the reconnection
        client, server = connect_logging_client(self.reactor, 0)
        client.transport.flush()

        # Only the 7 infos made it through, the debugs were elided
        logs = server.data.splitlines()
        self.assertEqual(len(logs), 7)
        self.assertNotIn(b"debug", server.data)

    def test_log_backpressure_info(self):
        """
        When backpressure is hit, DEBUG and INFO logs will be shed.
        """
        handler = RemoteHandler(
            "127.0.0.1", 9000, maximum_buffer=10, _reactor=self.reactor
        )
        logger = self.get_logger(handler)

        # Send some debug messages
        for i in range(0, 3):
            logger.debug("debug %s" % (i,))

        # Send a bunch of useful messages
        for i in range(0, 10):
            logger.warning("warn %s" % (i,))

        # Send a bunch of info messages
        for i in range(0, 3):
            logger.info("info %s" % (i,))

        # The last debug message pushes it past the maximum buffer
        logger.debug("too much debug")

        # Allow the reconnection
        client, server = connect_logging_client(self.reactor, 0)
        client.transport.flush()

        # The 10 warnings made it through, the debugs and infos were elided
        logs = server.data.splitlines()
        self.assertEqual(len(logs), 10)
        self.assertNotIn(b"debug", server.data)
        self.assertNotIn(b"info", server.data)

    def test_log_backpressure_cut_middle(self):
        """
        When backpressure is hit and no more DEBUG or INFO messages can be
        culled, it will cut the middle messages out.
        """
        handler = RemoteHandler(
            "127.0.0.1", 9000, maximum_buffer=10, _reactor=self.reactor
        )
        logger = self.get_logger(handler)

        # Send a bunch of useful messages
        for i in range(0, 20):
            logger.warning("warn %s" % (i,))

        # Allow the reconnection
        client, server = connect_logging_client(self.reactor, 0)
        client.transport.flush()

        # The first five and last five warnings made it through; the ones in
        # the middle were elided
        logs = server.data.decode("utf8").splitlines()
        self.assertEqual(
            ["warn %s" % (i,) for i in range(5)]
            + ["warn %s" % (i,) for i in range(15, 20)],
            logs,
        )
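The shedding behaviour these tests exercise can be sketched with a plain list: when the buffer exceeds its maximum, first drop DEBUG records, then INFO records, and as a last resort keep only the head and tail of the buffer. This is an illustrative reimplementation consistent with the tests above, not Synapse's actual `RemoteHandler` code:

```python
# Sketch of buffer shedding under backpressure, mirroring what the tests
# assert: DEBUGs go first, then INFOs, then the middle of the buffer is cut.
import logging
from typing import List


def shed(buffer: List[logging.LogRecord], maximum: int) -> List[logging.LogRecord]:
    if len(buffer) <= maximum:
        return buffer
    # Pass 1: drop DEBUG records.
    buffer = [r for r in buffer if r.levelno > logging.DEBUG]
    if len(buffer) <= maximum:
        return buffer
    # Pass 2: drop INFO records too.
    buffer = [r for r in buffer if r.levelno > logging.INFO]
    if len(buffer) <= maximum:
        return buffer
    # Pass 3: cut the middle, keeping the oldest and newest messages.
    half = maximum // 2
    return buffer[:half] + buffer[-half:]


def record(level, msg):
    return logging.LogRecord("demo", level, __file__, 0, msg, None, None)


buf = [record(logging.WARNING, "warn %d" % i) for i in range(20)]
kept = [r.getMessage() for r in shed(buf, 10)]
print(kept)  # → warn 0..4 followed by warn 15..19
```

With 20 warnings and a maximum of 10, neither pass 1 nor pass 2 removes anything, so the middle cut keeps the first five and last five, exactly as `test_log_backpressure_cut_middle` expects.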
@@ -1,214 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import os
import os.path
import shutil
import sys
import textwrap

from twisted.logger import Logger, eventAsText, eventsFromJSONLogFile

from synapse.config.logger import setup_logging
from synapse.logging._structured import setup_structured_logging
from synapse.logging.context import LoggingContext

from tests.unittest import DEBUG, HomeserverTestCase


class FakeBeginner:
    def beginLoggingTo(self, observers, **kwargs):
        self.observers = observers


class StructuredLoggingTestBase:
    """
    Test base that registers a cleanup handler to reset the stdlib log handler
    to 'unset'.
    """

    def prepare(self, reactor, clock, hs):
        def _cleanup():
            logging.getLogger("synapse").setLevel(logging.NOTSET)

        self.addCleanup(_cleanup)


class StructuredLoggingTestCase(StructuredLoggingTestBase, HomeserverTestCase):
    """
    Tests for Synapse's structured logging support.
    """

    def test_output_to_json_round_trip(self):
        """
        Synapse logs can be outputted to JSON and then read back again.
        """
        temp_dir = self.mktemp()
        os.mkdir(temp_dir)
        self.addCleanup(shutil.rmtree, temp_dir)

        json_log_file = os.path.abspath(os.path.join(temp_dir, "out.json"))

        log_config = {
            "drains": {"jsonfile": {"type": "file_json", "location": json_log_file}}
        }

        # Begin the logger with our config
        beginner = FakeBeginner()
        setup_structured_logging(
            self.hs, self.hs.config, log_config, logBeginner=beginner
        )

        # Make a logger and send an event
        logger = Logger(
            namespace="tests.logging.test_structured", observer=beginner.observers[0]
        )
        logger.info("Hello there, {name}!", name="wally")

        # Read the log file and check it has the event we sent
        with open(json_log_file, "r") as f:
            logged_events = list(eventsFromJSONLogFile(f))
        self.assertEqual(len(logged_events), 1)

        # The event pulled from the file should render fine
        self.assertEqual(
            eventAsText(logged_events[0], includeTimestamp=False),
            "[tests.logging.test_structured#info] Hello there, wally!",
        )

    def test_output_to_text(self):
        """
        Synapse logs can be outputted to text.
        """
        temp_dir = self.mktemp()
        os.mkdir(temp_dir)
        self.addCleanup(shutil.rmtree, temp_dir)

        log_file = os.path.abspath(os.path.join(temp_dir, "out.log"))

        log_config = {"drains": {"file": {"type": "file", "location": log_file}}}

        # Begin the logger with our config
        beginner = FakeBeginner()
        setup_structured_logging(
            self.hs, self.hs.config, log_config, logBeginner=beginner
        )

        # Make a logger and send an event
        logger = Logger(
            namespace="tests.logging.test_structured", observer=beginner.observers[0]
        )
        logger.info("Hello there, {name}!", name="wally")

        # Read the log file and check it has the event we sent
        with open(log_file, "r") as f:
            logged_events = f.read().strip().split("\n")
        self.assertEqual(len(logged_events), 1)

        # The event pulled from the file should render fine
        self.assertTrue(
            logged_events[0].endswith(
                " - tests.logging.test_structured - INFO - None - Hello there, wally!"
            )
        )

    def test_collects_logcontext(self):
        """
        Test that log outputs have the attached logging context.
        """
        log_config = {"drains": {}}

        # Begin the logger with our config
        beginner = FakeBeginner()
        publisher = setup_structured_logging(
            self.hs, self.hs.config, log_config, logBeginner=beginner
        )

        logs = []

        publisher.addObserver(logs.append)

        # Make a logger and send an event
        logger = Logger(
            namespace="tests.logging.test_structured", observer=beginner.observers[0]
        )

        with LoggingContext("testcontext", request="somereq"):
            logger.info("Hello there, {name}!", name="steve")

        self.assertEqual(len(logs), 1)
        self.assertEqual(logs[0]["request"], "somereq")


class StructuredLoggingConfigurationFileTestCase(
    StructuredLoggingTestBase, HomeserverTestCase
):
    def make_homeserver(self, reactor, clock):
        tempdir = self.mktemp()
        os.mkdir(tempdir)
        log_config_file = os.path.abspath(os.path.join(tempdir, "log.config.yaml"))
        self.homeserver_log = os.path.abspath(os.path.join(tempdir, "homeserver.log"))

        config = self.default_config()
        config["log_config"] = log_config_file

        with open(log_config_file, "w") as f:
            f.write(
                textwrap.dedent(
                    """\
                    structured: true

                    drains:
                        file:
                            type: file_json
                            location: %s
                    """
                    % (self.homeserver_log,)
                )
            )

        self.addCleanup(self._sys_cleanup)

        return self.setup_test_homeserver(config=config)

    def _sys_cleanup(self):
        sys.stdout = sys.__stdout__
        sys.stderr = sys.__stderr__

    # Do not remove! We need the logging system to be set other than WARNING.
    @DEBUG
    def test_log_output(self):
        """
        When a structured logging config is given, Synapse will use it.
        """
        beginner = FakeBeginner()
        publisher = setup_logging(self.hs, self.hs.config, logBeginner=beginner)

        # Make a logger and send an event
        logger = Logger(namespace="tests.logging.test_structured", observer=publisher)

        with LoggingContext("testcontext", request="somereq"):
            logger.info("Hello there, {name}!", name="steve")

        with open(self.homeserver_log, "r") as f:
            logged_events = [
                eventAsText(x, includeTimestamp=False) for x in eventsFromJSONLogFile(f)
            ]

        logs = "\n".join(logged_events)
        self.assertTrue("***** STARTING SERVER *****" in logs)
        self.assertTrue("Hello there, steve!" in logs)