In the Python logger setup given below, every run of the program appends additional log entries to the existing log file. How can I make sure that each run of the application code writes to a new log file?
Is this happening because I set the RotatingFileHandler's backup count to 20 with a maximum size of 10MB per file? Should I switch it to a plain FileHandler instead?
I am using a YAML-based logging configuration for the Python logger.
version: 1

formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'

handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout

  info_handler:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    formatter: simple
    filename: info.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8

  error_handler:
    class: logging.handlers.RotatingFileHandler
    level: ERROR
    formatter: simple
    filename: errors.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8

loggers:
  my_module:
    level: ERROR
    handlers: [console]
    propagate: no

root:
  level: DEBUG
  handlers: [info_handler, error_handler]
I initialize my logger with the following Python initialization code.
import os
import logging.config
import yaml

"""Setup logging configuration"""
default_path = 'logging.yaml'
default_level = logging.INFO
env_key = 'LOG_CFG'


class MyLogger():

    def __init__(self):
        path = default_path
        value = os.getenv(env_key, None)
        if value:
            path = value
        if os.path.exists(path):
            with open(path, 'rt') as f:
                config = yaml.safe_load(f.read())
            logging.config.dictConfig(config)
        else:
            logging.basicConfig(filemode='w', level=default_level)
Answer 0 (score: 1)
Set filemode to 'w'; the default is 'a' (append).
Or simply add the following lines to overwrite the old log file (after reading the YAML file):
with open(config['handlers']['info_handler']['filename'], 'w') as f:
    pass
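Applied to the question's MyLogger.__init__, that could look like the following; this is a minimal sketch (not part of the answer itself) that assumes the handler names info_handler and error_handler from the YAML config above, and truncates the files before dictConfig() opens them:

import logging.config
import yaml

with open('logging.yaml', 'rt') as f:
    config = yaml.safe_load(f.read())

# Truncate the log file of every file-based handler so each run starts fresh.
# Doing this before dictConfig() means the handlers open already-empty files.
for name in ('info_handler', 'error_handler'):
    with open(config['handlers'][name]['filename'], 'w'):
        pass

logging.config.dictConfig(config)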
Answer 1 (score: 1)
I know this already has an accepted answer, but I'm not sure it answers the question very cleanly/completely, and I'm adding this answer in case anyone else runs into a similar problem. Instead, you can include this option in the configuration YAML file:
handlers:
  info_file_handler:
    class: logging.FileHandler
    formatter: complex
    filename: log_info.log
    mode: 'w'
'w' in mode is the same as the filemode option used in basicConfig(); it basically overwrites the log file instead of appending to it (which would be option 'a'). I also believe the mode key can be used with RotatingFileHandler.
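As a minimal illustrative sketch (not part of the answer), the same handlers could be built directly in Python; note that RotatingFileHandler does accept a mode argument, but the standard library switches it back to append whenever maxBytes is greater than zero, since size-based rotation only makes sense when appending:

import logging
import logging.handlers

# Equivalent of the YAML snippet above: FileHandler honours mode='w',
# so the log file is truncated at the start of every run.
file_handler = logging.FileHandler('log_info.log', mode='w', encoding='utf8')

# RotatingFileHandler also takes mode=..., but with maxBytes > 0 it internally
# resets the mode to 'a', because rotation by size requires appending.
rotating_handler = logging.handlers.RotatingFileHandler(
    'info.log', mode='w', maxBytes=10485760, backupCount=20, encoding='utf8')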
I then added the following to my main code so that the filenames change for every model run (currently this changes the log filename daily, since I run the program once a day). See below:
import datetime
import logging.config
import os

import yaml


def setup_logging(default_path="logging_config.yml",
                  default_level=logging.INFO,
                  env_key="LOG_CFG"):
    """
    Setup logging configuration.

    This reads a .yml file and extracts the relevant logging information
    required by the logging module.
    """
    path = default_path
    value = os.getenv(env_key, None)
    if value:
        path = value
    # Open the logging configuration file
    if os.path.exists(path):
        with open(path, "rb") as f:
            config = yaml.safe_load(f.read())
        # Append the date stamp to the filename of each file-based handler.
        # This gives every log file a unique name when the program is run
        # on a different date; adjust the format for hourly/minutely runs etc.
        for name in config["handlers"]:
            handler = config["handlers"][name]
            if "filename" not in handler:  # skip console-style handlers
                continue
            base, extension = os.path.splitext(handler["filename"])
            today = datetime.datetime.today()
            handler["filename"] = "{}{}{}".format(
                base, today.strftime("_%Y_%m_%d"), extension)
        logging.config.dictConfig(config)
    else:
        logging.basicConfig(level=default_level)
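For completeness, a hypothetical usage sketch (the filename in the comment is illustrative, not from the answer): call setup_logging() once at startup, then fetch loggers as usual:

# Hypothetical usage of the setup_logging() function defined above.
setup_logging(default_path="logging_config.yml")

logger = logging.getLogger(__name__)
logger.info("Application started")  # written to e.g. log_info_2024_01_31.log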
I hope this helps, and apologies if the accepted answer did work for you; I just felt this was a cleaner solution to the question the OP asked.