Python: logging with log rotation and compression

Asked: 2011-12-11 22:18:50

Tags: python logging

Can anyone suggest a way to do logging in Python with these features:

  • daily log rotation
  • compression of the logs when they are rotated
  • optional - delete the oldest log files to preserve X MB of free space
  • optional - sftp the log files to a server

Thanks for any responses, Fred

9 answers:

Answer 0: (score: 64)

  • Log rotation every day: use a TimedRotatingFileHandler
  • Compression of logs: set the encoding='bz2' parameter. (Note: this "trick" only works on Python 2; 'bz2' is no longer treated as an encoding in Python 3.)
  • Optional - delete the oldest log files to preserve X MB of free space: you can arrange this (indirectly) with a RotatingFileHandler. By setting the maxBytes parameter, the log file rolls over when it reaches a certain size, and by setting the backupCount parameter you control how many rolled-over files are kept. Together the two parameters let you bound the maximum space the log files can occupy. You could subclass TimedRotatingFileHandler to incorporate this behavior into it as well.
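As a minimal sketch of that size-capping arithmetic (the filename and the ~50 MB budget are illustrative assumptions, not values from the question):

```python
import logging
import logging.handlers

# Bound total disk usage: at most 10 files (1 live + 9 backups) of ~5 MB each,
# i.e. roughly 50 MB in the worst case.
handler = logging.handlers.RotatingFileHandler(
    'app.log', maxBytes=5 * 1024 * 1024, backupCount=9)

logger = logging.getLogger('capped')
logger.addHandler(handler)
```

The same two parameters carry over unchanged to the subclass below.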

Just for fun, here is how you could subclass TimedRotatingFileHandler. When you run the script below, it writes log files to /tmp/log_rotate*.

With a small value for time.sleep (such as 0.1), the log files fill up quickly, reach the maxBytes limit, and roll over.

With a large time.sleep (such as 1.0), the log files fill up slowly; they do not reach the maxBytes limit, but they still roll over when the timed interval (10 seconds) is reached.

All the code below comes from logging/handlers.py. I simply meshed TimedRotatingFileHandler with RotatingFileHandler in the most straightforward way possible.

import time
import re
import os
import stat
import logging
import logging.handlers as handlers


class SizedTimedRotatingFileHandler(handlers.TimedRotatingFileHandler):
    """
    Handler for logging to a set of files, which switches from one file
    to the next when the current file reaches a certain size, or at certain
    timed intervals
    """

    def __init__(self, filename, maxBytes=0, backupCount=0, encoding=None,
                 delay=0, when='h', interval=1, utc=False):
        handlers.TimedRotatingFileHandler.__init__(
            self, filename, when, interval, backupCount, encoding, delay, utc)
        self.maxBytes = maxBytes

    def shouldRollover(self, record):
        """
        Determine if rollover should occur.

        Basically, see if the supplied record would cause the file to exceed
        the size limit we have.
        """
        if self.stream is None:                 # delay was set...
            self.stream = self._open()
        if self.maxBytes > 0:                   # are we rolling over?
            msg = "%s\n" % self.format(record)
            # due to non-posix-compliant Windows feature
            self.stream.seek(0, 2)
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1
        t = int(time.time())
        if t >= self.rolloverAt:
            return 1
        return 0


def demo_SizedTimedRotatingFileHandler():
    log_filename = '/tmp/log_rotate'
    logger = logging.getLogger('MyLogger')
    logger.setLevel(logging.DEBUG)
    handler = SizedTimedRotatingFileHandler(
        log_filename, maxBytes=100, backupCount=5,
        when='s', interval=10,
        # encoding='bz2',  # uncomment for bz2 compression
    )
    logger.addHandler(handler)
    for i in range(10000):
        time.sleep(0.1)
        logger.debug('i=%d' % i)

demo_SizedTimedRotatingFileHandler()

Answer 1: (score: 17)

An alternative way to compress the log file during rotation (new in Python 3.3) is to use the rotator attribute of BaseRotatingHandler (and all classes inheriting from it), for example:

import gzip
import os
import logging
import logging.handlers

class GZipRotator:
    def __call__(self, source, dest):
        os.rename(source, dest)
        with open(dest, 'rb') as f_in, gzip.open("%s.gz" % dest, 'wb') as f_out:
            f_out.writelines(f_in)
        os.remove(dest)

logformatter = logging.Formatter('%(asctime)s;%(levelname)s;%(message)s')
log = logging.handlers.TimedRotatingFileHandler('debug.log', 'midnight', 1, backupCount=5)
log.setLevel(logging.DEBUG)
log.setFormatter(logformatter)
log.rotator = GZipRotator()

logger = logging.getLogger('main')
logger.addHandler(log)    
logger.setLevel(logging.DEBUG)

....

You can see more here.
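A caveat with the rotator-only approach: the rotated files keep their uncompressed names in the handler's bookkeeping. Since Python 3.3 a companion namer attribute lets the handler generate the .gz name itself; a minimal sketch (app.log and the midnight schedule are illustrative):

```python
import gzip
import logging.handlers
import os
import shutil

def gzip_namer(default_name):
    # the handler calls this to turn its default rotated name into the final one
    return default_name + ".gz"

def gzip_rotator(source, dest):
    # dest already ends in .gz thanks to the namer, so compress straight into it
    with open(source, "rb") as f_in, gzip.open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    os.remove(source)

handler = logging.handlers.TimedRotatingFileHandler(
    "app.log", when="midnight", backupCount=7)
handler.namer = gzip_namer
handler.rotator = gzip_rotator
```

Be aware that on some interpreter versions the backupCount cleanup matches rotated files by their default names, so check that old .gz files are actually pruned on your version.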

Answer 2: (score: 10)

In addition to unutbu's answer: here is how to modify TimedRotatingFileHandler to compress with zip files.

import logging
import logging.handlers
import zipfile
import codecs
import sys
import os
import time
import glob


class TimedCompressedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler):
    """
    Extended version of TimedRotatingFileHandler that compress logs on rollover.
    """
    def doRollover(self):
        """
        do a rollover; in this case, a date/time stamp is appended to the filename
        when the rollover happens.  However, you want the file to be named for the
        start of the interval, not the current time.  If there is a backup count,
        then we have to get a list of matching filenames, sort them and remove
        the one with the oldest suffix.
        """

        self.stream.close()
        # get the time that this sequence started at and make it a TimeTuple
        t = self.rolloverAt - self.interval
        timeTuple = time.localtime(t)
        dfn = self.baseFilename + "." + time.strftime(self.suffix, timeTuple)
        if os.path.exists(dfn):
            os.remove(dfn)
        os.rename(self.baseFilename, dfn)
        if self.backupCount > 0:
            # find the oldest log file and delete it
            s = glob.glob(self.baseFilename + ".20*")
            if len(s) > self.backupCount:
                s.sort()
                os.remove(s[0])
        #print "%s -> %s" % (self.baseFilename, dfn)
        if self.encoding:
            self.stream = codecs.open(self.baseFilename, 'w', self.encoding)
        else:
            self.stream = open(self.baseFilename, 'w')
        self.rolloverAt = self.rolloverAt + self.interval
        if os.path.exists(dfn + ".zip"):
            os.remove(dfn + ".zip")
        with zipfile.ZipFile(dfn + ".zip", "w") as zf:
            zf.write(dfn, os.path.basename(dfn), zipfile.ZIP_DEFLATED)
        os.remove(dfn)

if __name__=='__main__':
    ## Demo of using TimedCompressedRotatingFileHandler() to log every 5 seconds,
    ##     to one uncompressed file and five rotated and compressed files

    os.nice(19)   # I always nice test code

    logHandler = TimedCompressedRotatingFileHandler("mylog", when="S",
        interval=5, backupCount=5) # Total of six rotated log files, rotating every 5 secs
    logFormatter = logging.Formatter(
        fmt='%(asctime)s.%(msecs)03d %(message)s', 
        datefmt='%Y-%m-%d %H:%M:%S'
        )
    logHandler.setFormatter(logFormatter)
    mylogger = logging.getLogger('MyLogRef')
    mylogger.addHandler(logHandler)
    mylogger.setLevel(logging.DEBUG)

    # Write lines non-stop into the logger and rotate every 5 seconds
    ii = 0
    while True:
        mylogger.debug("Test {0}".format(ii))
        ii += 1

Answer 3: (score: 3)

I guess it's too late to join the party, but here is what I did: I created a new class that inherits from RotatingFileHandler and added a couple of lines to gzip the file before rotating it.

#!/usr/bin/env python

# Import all the needed modules
import logging.handlers
import sys
import time
import gzip
import os
import shutil
import random
import string

__version__ = 1.0
__descr__ = "This logic is written keeping in mind UNIX/LINUX/OSX platforms only"


# Create a new class that inherits from RotatingFileHandler.
# This is where we add the new feature to compress the logs
class CompressedRotatingFileHandler(logging.handlers.RotatingFileHandler):
    def doRollover(self):
        """
        Do a rollover, as described in __init__().
        """
        if self.stream:
            self.stream.close()
        if self.backupCount > 0:
            for i in range(self.backupCount - 1, 0, -1):
                sfn = "%s.%d.gz" % (self.baseFilename, i)
                dfn = "%s.%d.gz" % (self.baseFilename, i + 1)
                if os.path.exists(sfn):
                    if os.path.exists(dfn):
                        os.remove(dfn)
                    os.rename(sfn, dfn)
            dfn = self.baseFilename + ".1.gz"
            if os.path.exists(dfn):
                os.remove(dfn)
            # These two lines below are the only new lines. I commented out
            # os.rename(self.baseFilename, dfn) and replaced it with these:
            with open(self.baseFilename, 'rb') as f_in, gzip.open(dfn, 'wb') as f_out:
                shutil.copyfileobj(f_in, f_out)
            # os.rename(self.baseFilename, dfn)
        self.mode = 'w'
        self.stream = self._open()


# Specify which file will be used for our logs
log_filename = "/Users/myname/Downloads/test_logs/sample_log.txt"

# Create a logger instance and set the facility level
my_logger = logging.getLogger()
my_logger.setLevel(logging.DEBUG)

# Create a handler using our new class that rotates and compresses
file_handler = CompressedRotatingFileHandler(filename=log_filename, maxBytes=1000000, backupCount=10)

# Create a stream handler that shows the same log on the terminal (just for debug purposes)
view_handler = logging.StreamHandler(stream=sys.stdout)

# Add all the handlers to the logging instance
my_logger.addHandler(file_handler)
my_logger.addHandler(view_handler)

# This is optional to beef up the logs
random_huge_data = "".join(random.choice(string.ascii_letters) for _ in range(10000))

# All this code is user-specific, write your own code if you want to play around
count = 0
while True:
    my_logger.debug("This is the message number %s" % str(count))
    my_logger.debug(random_huge_data)
    count += 1
    if count % 100 == 0:
        count = 0
    time.sleep(2)

https://github.com/rkreddy46/python_code_reference/blob/master/compressed_log_rotator.py


Answer 4: (score: 1)

Please note: the class signatures have changed in Python 3. Here is my working example for Python 3.6.

import gzip
import logging.handlers
import os


def namer(name):
    return name + ".gz"


def rotator(source, dest):
    print(f'compressing {source} -> {dest}')
    # use the gzip module (rather than raw zlib) so the .gz file is readable by gunzip
    with open(source, "rb") as sf, gzip.open(dest, "wb", compresslevel=9) as df:
        df.write(sf.read())
    os.remove(source)


err_handler = logging.handlers.TimedRotatingFileHandler('/data/errors.log', when="M", interval=1,
                                                        encoding='utf-8', backupCount=30, utc=True)
err_handler.rotator = rotator
err_handler.namer = namer

logger = logging.getLogger("Rotating Log")
logger.setLevel(logging.ERROR)

logger.addHandler(err_handler)

Answer 5: (score: 0)

To roll the file: copy it, gzip the copy (named with the epoch time), and then truncate the existing file in a way that does not upset the logging module:

import gzip
import logging
import os
from shutil import copy2
from time import time

def logRoll(logfile_name):
    log_backup_name = logfile_name + '.' + str(int(time()))
    try:
        copy2(logfile_name, log_backup_name)
    except IOError:
        logging.debug(' No logfile to roll')
        return
    with open(log_backup_name, 'rb') as f_in, gzip.open(log_backup_name + '.gz', 'wb') as f_out:
        f_out.writelines(f_in)
    os.remove(log_backup_name)
    # truncate the live log file in place so open handlers keep working
    open(logfile_name, 'w').close()

Answer 6: (score: 0)

I think the best option is to use the current implementation of TimedRotatingFileHandler and compress the log file after it has been renamed to the rotated version:

import zipfile
import os
from logging.handlers import TimedRotatingFileHandler


class TimedCompressedRotatingFileHandler(TimedRotatingFileHandler):
    """
    Extended version of TimedRotatingFileHandler that compress logs on rollover.
    """
    def find_last_rotated_file(self):
        dir_name, base_name = os.path.split(self.baseFilename)
        file_names = os.listdir(dir_name)
        result = []
        # we want to find a rotated file with e.g. a filename.2017-12-12... name
        prefix = '{}.20'.format(base_name)
        for file_name in file_names:
            if file_name.startswith(prefix) and not file_name.endswith('.zip'):
                result.append(file_name)
        result.sort()
        # return the full path so this also works when the log dir is not the cwd
        return os.path.join(dir_name, result[0])

    def doRollover(self):
        super(TimedCompressedRotatingFileHandler, self).doRollover()

        dfn = self.find_last_rotated_file()
        dfn_zipped = '{}.zip'.format(dfn)
        if os.path.exists(dfn_zipped):
            os.remove(dfn_zipped)
        with zipfile.ZipFile(dfn_zipped, 'w') as f:
            # store the file under its own name, not the .zip name
            f.write(dfn, os.path.basename(dfn), zipfile.ZIP_DEFLATED)
        os.remove(dfn)

Answer 7: (score: 0)

Here is my solution (modified from evgenek). It is simple, and it does not block Python code while gzipping huge log files:

import os
import subprocess


class GZipRotator:
    def __call__(self, source, dest):
        os.rename(source, dest)
        # hand compression off to a child process so logging is never blocked
        subprocess.Popen(['gzip', dest])
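Wired into a handler end-to-end, this looks roughly as follows (debug.log and the midnight schedule are illustrative, and the external gzip binary is assumed to be on PATH):

```python
import logging
import logging.handlers
import os
import subprocess

class GZipRotator:
    def __call__(self, source, dest):
        os.rename(source, dest)
        # compress in a child process so the logging call itself never blocks
        subprocess.Popen(['gzip', dest])

handler = logging.handlers.TimedRotatingFileHandler(
    'debug.log', when='midnight', backupCount=5)
handler.rotator = GZipRotator()

logger = logging.getLogger('main')
logger.addHandler(handler)
```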

Answer 8: (score: 0)

Adding to the answers above: this solution compresses the older backup logs into a single timestamped zip, using an extra variable called ZipbackupCount (the number of old backup files to compress).

For example, suppose we have these logs (backupCount=5, ZipbackupCount=2):

a.log.1

a.log.2

a.log.3

a.log.4

a.log.11-09-2020-11-11-11.zip

Once the number of backup logs reaches 5, a.log.5 and a.log.4 are compressed into the zip above, and rotation continues.

import os
import datetime
import gzip
import logging.handlers
import zipfile

from config.config import PROJECT_PATH, LOG_DIR, LOG_FILE_NAME, LOG_FILESIZE


class NewRotatingFileHandler(logging.handlers.RotatingFileHandler):
    def __init__(self, filename, **kws):
        self.ZipbackupCount = kws.pop('ZipbackupCount', 0)
        self.file_name = filename
        self.log_dir = os.path.split(self.file_name)[0]
        self.log_file_name = os.path.split(self.file_name)[-1]

        logging.handlers.RotatingFileHandler.__init__(self, filename, **kws)

    def doArchive(self, old_log):
        # read in binary mode so the bytes can be written straight to the gzip stream
        with open(old_log, 'rb') as log:
            with gzip.open(old_log + '.gz', 'wb') as comp_log:
                comp_log.writelines(log)

        os.remove(old_log)

    def doRollover(self):
        super(NewRotatingFileHandler, self).doRollover()

        zip_file_name = self.log_file_name + "." + datetime.datetime.now().strftime("%d-%m-%Y-%H-%M-%S") + ".zip"

        if os.path.exists(self.rotation_filename("%s.%d" % (self.baseFilename, self.backupCount))) and self.ZipbackupCount > 0 and self.file_name:
            with zipfile.ZipFile(os.path.join(self.log_dir, zip_file_name), "w", zipfile.ZIP_DEFLATED, allowZip64=True) as zf:
                for i in range(self.backupCount, self.backupCount - self.ZipbackupCount, -1):
                    sfn = self.rotation_filename("%s.%d" % (self.baseFilename, i))
                    if os.path.exists(sfn):
                        zf.write(sfn, "%s.%d" % (self.log_file_name, i))
                        os.remove(sfn)


# handler = NewRotatingFileHandler(filename=os.path.join(PROJECT_PATH, LOG_DIR, LOG_FILE_NAME),
#                                  maxBytes=LOG_FILESIZE, backupCount=5, ZipbackupCount=2)
#
# handler.doRollover()