
I added a custom log handler to my Django application that writes log entries to the database:

import logging

from my_app.models import MyModel  # assumed import path for the log-entry model


class DbLogHandler(logging.Handler):  # inherit from logging.Handler
    def __init__(self):
        # run the regular Handler __init__
        logging.Handler.__init__(self)
        self.entries = []
        logging.debug("*****************[DB] INIT db handler")

    def emit(self, record):
        # buffer a model instance for each record; entries without a
        # revision are skipped
        logging.debug("*****************[DB] called emit on db handler")
        try:
            revision_instance = getattr(record, 'revision', None)
            if revision_instance is None:
                return
            log_entry = MyModel(name=record.name,
                                log_level_name=record.levelname,
                                message=record.msg,
                                module=record.module,
                                func_name=record.funcName,
                                line_no=record.lineno,
                                exception=record.exc_text,
                                revision=revision_instance)
            self.entries.append(log_entry)
        except Exception as ex:
            print(ex)

    def flush(self):
        if self.entries:
            MyModel.objects.bulk_create(self.entries)
            logging.info("[+] Successfully flushed {0:d} log entries to "
                         "the DB".format(len(self.entries)))
            self.entries = []  # reset the buffer so entries are not written twice
        else:
            logging.info("[*] No log entries for DB logger")

When I call a function directly, say by running a management command, the handler is used correctly. In production, however, the entry point will be a Celery task, and my understanding is that Celery has its own logging machinery. What I am trying to do, but cannot get to work, is to add my db handler to Celery's logging as well, i.e. all Celery logs should also be sent to the DbLogHandler.

This is how I tried to do it, in my_app.celery_logging.logger:

import logging

import celery
from celery.utils.log import get_task_logger
from django.conf import settings

from my_app.db_logging.db_logger import DbLogHandler


class CeleryAdapter(logging.LoggerAdapter):
    """Adapter to add current task context to "extra" log fields."""
    def process(self, msg, kwargs):
        if not celery.current_task:
            return msg, kwargs

        kwargs = kwargs.copy()
        kwargs.setdefault('extra', {})['celery'] = \
            vars(celery.current_task.request)
        return msg, kwargs


def task_logger(name):
    """
    Return a custom celery task logger that will also log to db.

    We need to add the db handler explicitly, otherwise it is not
    picked up by celery.

    Also, we wrap the logger in a CeleryAdapter to provide some extra
    celery-related context to the logging messages.
    """
    # first get the default celery task logger
    log = get_task_logger(name)

    # if configured, add the db-log handler explicitly to the celery
    # task logger; note that addHandler() expects a Handler instance,
    # not a dictConfig-style dict
    handlers = settings.LOGGING.get('handlers', {})
    db_handler_dict = handlers.get('db', None)
    if (db_handler_dict is not None and
            db_handler_dict != settings.NULL_HANDLER_PARAMS):
        db_handler = DbLogHandler()
        db_handler.setLevel(logging.DEBUG)
        log.addHandler(db_handler)

    # wrap the logger in the CeleryAdapter to add some celery-specific
    # context to the logs
    return CeleryAdapter(log, {})
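As an aside, anything the adapter puts into 'extra' ends up as attributes on each LogRecord, so the handler could pick up the task context there. A sketch, where 'celery' is the key set by CeleryAdapter above:

# inside DbLogHandler.emit() -- sketch
task_context = getattr(record, 'celery', None)  # dict of task request fields, or None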

And then, finally, in my task.py:

from my_app.celery_logging.logger import task_logger 
logger = task_logger(__name__) 
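For completeness, a task using this logger would look roughly like the following sketch (the task itself is a hypothetical example, not from the original code):

from celery import shared_task

@shared_task
def my_task():  # hypothetical example task
    # routed through CeleryAdapter and, in theory, on to DbLogHandler
    logger.info("running my_task")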

But from this point on it is a world of pain, and I cannot even describe exactly what is happening. When I start the server and look at the celery log output, I can see that my db-logger is in fact being called, but Celery then appears to lose its worker processes.

[2015-09-18 10:30:57,158: INFO/MainProcess] [*] No log entries for DB logger 
Raven is not configured (logging is disabled). Please see the documentation for more information. 
2015-09-18 10:30:58,659 raven.contrib.django.client.DjangoClient INFO Raven is not configured (logging is disabled). Please see the documentation for more information. 
[2015-09-18 10:30:59,155: DEBUG/MainProcess] | Worker: Preparing bootsteps. 
[2015-09-18 10:30:59,157: DEBUG/MainProcess] | Worker: Building graph... 
[2015-09-18 10:30:59,158: DEBUG/MainProcess] | Worker: New boot order: {Timer, Hub, Queues (intra), Pool, Autoscaler, Autoreloader, StateDB, Beat, Consumer} 
[2015-09-18 10:30:59,161: DEBUG/MainProcess] | Consumer: Preparing bootsteps. 
[2015-09-18 10:30:59,161: DEBUG/MainProcess] | Consumer: Building graph... 
[2015-09-18 10:30:59,164: DEBUG/MainProcess] | Consumer: New boot order: {Connection, Events, Mingle, Tasks, Control, Gossip, Agent, Heart, event loop} 
[2015-09-18 10:30:59,167: DEBUG/MainProcess] | Worker: Starting Hub 
[2015-09-18 10:30:59,167: DEBUG/MainProcess] ^-- substep ok 
[2015-09-18 10:30:59,167: DEBUG/MainProcess] | Worker: Starting Pool 
[2015-09-18 10:30:59,173: DEBUG/MainProcess] ^-- substep ok 
[2015-09-18 10:30:59,173: DEBUG/MainProcess] | Worker: Starting Consumer 
[2015-09-18 10:30:59,174: DEBUG/MainProcess] | Consumer: Starting Connection 
[2015-09-18 10:30:59,180: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672// 
[2015-09-18 10:30:59,180: DEBUG/MainProcess] ^-- substep ok 
[2015-09-18 10:30:59,180: DEBUG/MainProcess] | Consumer: Starting Events 
[2015-09-18 10:30:59,188: DEBUG/MainProcess] ^-- substep ok 
[2015-09-18 10:30:59,188: DEBUG/MainProcess] | Consumer: Starting Mingle 
[2015-09-18 10:30:59,188: INFO/MainProcess] mingle: searching for neighbors 
[2015-09-18 10:31:00,196: INFO/MainProcess] mingle: all alone 
[2015-09-18 10:31:00,196: DEBUG/MainProcess] ^-- substep ok 
[2015-09-18 10:31:00,197: DEBUG/MainProcess] | Consumer: Starting Tasks 
[2015-09-18 10:31:00,203: DEBUG/MainProcess] ^-- substep ok 
[2015-09-18 10:31:00,204: DEBUG/MainProcess] | Consumer: Starting Control 
[2015-09-18 10:31:00,207: DEBUG/MainProcess] ^-- substep ok 
[2015-09-18 10:31:00,208: DEBUG/MainProcess] | Consumer: Starting Gossip 
[2015-09-18 10:31:00,211: DEBUG/MainProcess] ^-- substep ok 
[2015-09-18 10:31:00,211: DEBUG/MainProcess] | Consumer: Starting Heart 
[2015-09-18 10:31:00,212: DEBUG/MainProcess] ^-- substep ok 
[2015-09-18 10:31:00,212: DEBUG/MainProcess] | Consumer: Starting event loop 
[2015-09-18 10:31:00,213: WARNING/MainProcess] [email protected] ready. 
[2015-09-18 10:31:00,213: DEBUG/MainProcess] | Worker: Hub.register Pool... 
[2015-09-18 10:31:00,255: ERROR/MainProcess] Unrecoverable error: WorkerLostError('Could not start worker processes',) 
Traceback (most recent call last): 
    File "/home/vagrant/.buildout/eggs/celery-3.1.18-py2.7.egg/celery/worker/__init__.py", line 206, in start 
    self.blueprint.start(self) 
    File "/home/vagrant/.buildout/eggs/celery-3.1.18-py2.7.egg/celery/bootsteps.py", line 123, in start 
    step.start(parent) 
    File "/home/vagrant/.buildout/eggs/celery-3.1.18-py2.7.egg/celery/bootsteps.py", line 374, in start 
    return self.obj.start() 
    File "/home/vagrant/.buildout/eggs/celery-3.1.18-py2.7.egg/celery/worker/consumer.py", line 278, in start 
    blueprint.start(self) 
    File "/home/vagrant/.buildout/eggs/celery-3.1.18-py2.7.egg/celery/bootsteps.py", line 123, in start 
    step.start(parent) 
    File "/home/vagrant/.buildout/eggs/celery-3.1.18-py2.7.egg/celery/worker/consumer.py", line 821, in start 
    c.loop(*c.loop_args()) 
    File "/home/vagrant/.buildout/eggs/celery-3.1.18-py2.7.egg/celery/worker/loops.py", line 48, in asynloop 
    raise WorkerLostError('Could not start worker processes') 

And when a celery task is invoked, I no longer see any logs at all.


Are you sure it is not an exception that is causing the worker to exit? – patrys


The worker loss happens immediately after I start the server, so all workers should just be waiting for tasks. – LarsVegas


I am asking because celery starts with task discovery, which causes your task module to be imported and 'get_task_logger(...)' to be called, and that in turn seems to access 'settings' without importing it first. – patrys

Answer


Set worker_hijack_root_logger to False in your configuration (in Celery 3.x the setting is named CELERYD_HIJACK_ROOT_LOGGER) and set up your custom logging yourself.

link
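Put concretely, a minimal sketch of this approach, assuming the Celery 3.x setting name and the DbLogHandler from the question; Celery's after_setup_task_logger signal fires once each task logger has been configured:

# Django settings / celeryconfig.py -- Celery 3.x name;
# in Celery 4.x+ it is worker_hijack_root_logger = False
CELERYD_HIJACK_ROOT_LOGGER = False

# in a module imported at worker startup, e.g. the celery app module
from celery.signals import after_setup_task_logger

from my_app.db_logging.db_logger import DbLogHandler

@after_setup_task_logger.connect
def add_db_handler(logger=None, **kwargs):
    # attach the custom DB handler to every task logger celery configures,
    # avoiding duplicates across repeated signal deliveries
    if not any(isinstance(h, DbLogHandler) for h in logger.handlers):
        logger.addHandler(DbLogHandler())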


Please post a proper answer and then add the reference. – Sachith