Handlers#

Base Handler#

class picologging.Handler#

Handler interface.

acquire()#

Acquire the lock.

close()#

Tidy up any resources used by the handler.

createLock()#

Create a new lock instance.

emit()#

Emit a record.

flush()#

Ensure all logging output has been flushed.

format()#

Format a record.

formatter#

Handler formatter

get_name()#

Get the name of the handler.

handle()#

Handle a record.

handleError()#

Handle an error during an emit().

level#

Handler level

name#

Handler name

release()#

Release the lock.

setFormatter()#

Set the formatter of the handler.

setLevel()#

Set the level of the handler.

set_name()#

Set the name of the handler.
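
A custom handler can be built by subclassing picologging.Handler and implementing emit(). The sketch below is illustrative only (the ListHandler name and records attribute are not part of the API) and assumes the base class is subclassable from Python, as in the standard logging module:

import picologging

class ListHandler(picologging.Handler):
    """Illustrative handler that collects formatted messages in a list."""

    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        # format() applies the handler's formatter (or a sensible default)
        self.records.append(self.format(record))

logger = picologging.getLogger("example")
handler = ListHandler()
logger.addHandler(handler)
logger.warning("captured in handler.records")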

Watched File Handler#

class picologging.handlers.WatchedFileHandler(filename, mode='a', encoding=None, delay=False)[source]#

A handler for logging to a file, which watches the file to see if it has changed while in use. This can happen because of the use of programs such as newsyslog and logrotate, which perform log file rotation. This handler, intended for use under Unix, watches the file to see if it has changed since the last emit. (A file has changed if its device or inode has changed.) If it has changed, the old file stream is closed and the file is reopened to get a new stream.

This handler is not appropriate for use under Windows, because under Windows open files cannot be moved or renamed - logging opens the files with exclusive locks - and so there is no need for such a handler. Furthermore, ST_INO is not supported under Windows; stat always returns zero for this value.

reopenIfNeeded()[source]#

Reopen log file if needed.

Checks whether the underlying file has changed and, if it has, closes the old stream and reopens the file to get the current stream.

emit(record)[source]#

Emit a record.

If the underlying file has changed, reopen the file before emitting the record to it.
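
A minimal usage sketch (the file path is illustrative); once attached, the handler keeps writing to the path even after an external tool such as logrotate moves the file:

import picologging
from picologging.handlers import WatchedFileHandler

logger = picologging.getLogger("app")
handler = WatchedFileHandler("/var/log/app.log")  # illustrative path
handler.setFormatter(picologging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)
logger.warning("written to the current file, even after log rotation")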

Base Rotating Handler#

class picologging.handlers.BaseRotatingHandler(filename, mode, encoding=None, delay=False)[source]#

Base class for handlers that rotate log files at a certain point. Not meant to be instantiated directly. Instead, use RotatingFileHandler or TimedRotatingFileHandler.

shouldRollover(record)[source]#

Determine if rollover should occur. Should be implemented by subclasses.

doRollover(record)[source]#

Do a rollover. Should be implemented by subclasses.

emit(record)[source]#

Emit a record. Output the record to the file, catering for rollover as described in doRollover().

rotation_filename(default_name)[source]#

Modify the filename of a log file when rotating. This is provided so that a custom filename can be used. The default implementation calls the ‘namer’ attribute of the handler, if it’s callable, passing the default name to it. If the attribute isn’t callable (the default is None), the name is returned unchanged.

Parameters:
  • default_name – The default name for the log file.

rotate(source, dest)[source]#

When rotating, rotate the current log. The default implementation calls the ‘rotator’ attribute of the handler, if it’s callable, passing the source and dest arguments to it. If the attribute isn’t callable (the default is None), the source is simply renamed to the destination.

Parameters:
  • source – The source filename. This is normally the base filename, e.g. ‘test.log’

  • dest – The destination filename. This is normally what the source is rotated to, e.g. ‘test.log.1’.
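
A sketch of custom rotation using the ‘namer’ and ‘rotator’ attributes described above, here compressing rotated files with gzip; it assumes the attributes can be assigned on a handler instance, as in the standard library:

import gzip
import os
import shutil

from picologging.handlers import RotatingFileHandler

def namer(default_name):
    # the rotated file will be gzip-compressed, so reflect that in its name
    return default_name + ".gz"

def rotator(source, dest):
    # compress the old log instead of performing a plain rename
    with open(source, "rb") as f_in, gzip.open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    os.remove(source)

handler = RotatingFileHandler("app.log", maxBytes=1_048_576, backupCount=3)
handler.namer = namer
handler.rotator = rotator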

Rotating File Handler#

class picologging.handlers.RotatingFileHandler(filename, mode='a', maxBytes=0, backupCount=0, encoding=None, delay=False)[source]#

Handler for logging to a set of files, which switches from one file to the next when the current file reaches a certain size.

doRollover()[source]#

Do a rollover, as described in __init__().

shouldRollover(record)[source]#

Determine if rollover should occur by checking whether the supplied record would cause the file to exceed the configured size limit.
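
A minimal configuration sketch (the filename and limits are illustrative):

import picologging
from picologging.handlers import RotatingFileHandler

logger = picologging.getLogger("app")
# roll over once app.log reaches roughly 5 MB, keeping app.log.1 ... app.log.5
handler = RotatingFileHandler("app.log", maxBytes=5_000_000, backupCount=5)
logger.addHandler(handler)
logger.warning("rotated automatically once the size limit is reached")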

Timed Rotating File Handler#

class picologging.handlers.TimedRotatingFileHandler(filename, when='h', interval=1, backupCount=0, encoding=None, delay=False, utc=False, atTime=None)[source]#

Handler for logging to a file, rotating the log file at certain timed intervals. If backupCount is > 0, when rollover is done, no more than backupCount files are kept - the oldest ones are deleted.

computeRollover(current_time)[source]#

Work out the rollover time based on the specified time.

shouldRollover(record)[source]#

Determine if rollover should occur. The record is not used, as we are just comparing times, but it is needed so that the method signatures match.

getFilesToDelete()[source]#

Determine the files to delete when rolling over. This is more specific than simply matching filenames with glob.glob().

doRollover()[source]#

Do a rollover; in this case, a date/time stamp is appended to the filename when the rollover happens. However, the file is named for the start of the interval, not the current time. If there is a backup count, the matching filenames are listed, sorted, and the one with the oldest suffix is removed.
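
A minimal configuration sketch, assuming the when values mirror the standard library (e.g. ‘midnight’):

import picologging
from picologging.handlers import TimedRotatingFileHandler

logger = picologging.getLogger("app")
# rotate at midnight and keep one week of dated backup files
handler = TimedRotatingFileHandler("app.log", when="midnight", backupCount=7)
logger.addHandler(handler)
logger.warning("the current day's records go to app.log")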

Queue Handler#

class picologging.handlers.QueueHandler(queue)[source]#

This handler sends events to a queue. Typically, it would be used together with a multiprocessing Queue to centralise logging to file in one process (in a multi-process application), so as to avoid file write contention between processes.

The equivalent class was added to the standard library in Python 3.2, but it can be copied into user code for use with earlier Python versions.

enqueue(record)[source]#

Enqueue a record.

The base implementation uses put_nowait. You may want to override this method if you want to use blocking, timeouts or custom queue implementations.

prepare(record)[source]#

Prepare a record for queuing. The object returned by this method is enqueued.

The base implementation formats the record to merge the message and arguments, and removes unpickleable items from the record in-place. Specifically, it overwrites the record’s msg and message attributes with the merged message (obtained by calling the handler’s format method), and sets the args, exc_info and exc_text attributes to None.

You might want to override this method if you want to convert the record to a dict or JSON string, or send a modified copy of the record while leaving the original intact.
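
For example, a hedged sketch of a subclass that enqueues plain dictionaries instead of LogRecord objects (DictQueueHandler is illustrative, not part of the API):

from picologging.handlers import QueueHandler

class DictQueueHandler(QueueHandler):
    """Illustrative subclass that enqueues plain dictionaries."""

    def prepare(self, record):
        # the base implementation merges the message and strips unpickleable items
        record = super().prepare(record)
        return {
            "name": record.name,
            "level": record.levelname,
            "message": record.message,
        }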

emit(record: LogRecord)[source]#

Emit a record.

Writes the LogRecord to the queue, copying it first.

Queue Listener#

The queue listener and queue handler can be combined for non-blocking logging, for example:

import io
import queue

import picologging
from picologging.handlers import QueueHandler, QueueListener

logger = picologging.Logger("test", picologging.DEBUG)
stream = io.StringIO()
stream_handler = picologging.StreamHandler(stream)
q = queue.Queue()
listener = QueueListener(q, stream_handler)
listener.start()
handler = QueueHandler(q)
logger.addHandler(handler)
logger.debug("test")

listener.stop()
assert stream.getvalue() == "test\n"
class picologging.handlers.QueueListener(queue, *handlers, respect_handler_level=False)[source]#

This class implements an internal threaded listener which watches for LogRecords being added to a queue, removes them and passes them to a list of handlers for processing.

dequeue(block)[source]#

Dequeue a record and return it, optionally blocking.

The base implementation uses get. You may want to override this method if you want to use timeouts or work with custom queue implementations.

start()[source]#

Start the listener.

This starts up a background thread to monitor the queue for LogRecords to process.

prepare(record)[source]#

Prepare a record for handling.

This method just returns the passed-in record. You may want to override this method if you need to do any custom marshalling or manipulation of the record before passing it to the handlers.

handle(record)[source]#

Handle a record.

This just loops through the handlers offering them the record to handle.

enqueue_sentinel()[source]#

This is used to enqueue the sentinel record.

The base implementation uses put_nowait. You may want to override this method if you want to use timeouts or work with custom queue implementations.

stop()[source]#

Stop the listener.

This asks the thread to terminate, and then waits for it to do so. Note that if you don’t call this before your application exits, there may be some records still left on the queue, which won’t be processed.

Buffering Handler#

class picologging.handlers.BufferingHandler(capacity)[source]#

A handler class which buffers logging records in memory. Whenever a record is added to the buffer, a check is made to see whether the buffer should be flushed. If it should, flush() is expected to do whatever is needed.

emit(record)[source]#

Emit a record. Append the record and call flush() if the criteria are met.

flush()[source]#

Override to implement custom flushing behaviour. This version just empties the buffer.

close()[source]#

Close the handler. This version just flushes and chains to the parent class’ close().
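
A sketch of a subclass that gives flush() some behaviour, assuming the buffered records are kept on a standard-library-style buffer attribute (PrintingBufferingHandler is illustrative):

from picologging.handlers import BufferingHandler

class PrintingBufferingHandler(BufferingHandler):
    """Illustrative handler that prints buffered records when flushed."""

    def flush(self):
        self.acquire()
        try:
            # assumes the buffered records sit on a standard-library-style buffer list
            for record in self.buffer:
                print(self.format(record))
            self.buffer.clear()
        finally:
            self.release()

handler = PrintingBufferingHandler(100)  # flush after 100 buffered records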

Memory Handler#

class picologging.handlers.MemoryHandler(capacity, flushLevel=40, target=None, flushOnClose=True)[source]#

A handler class which buffers logging records in memory, periodically flushing them to a target handler. Flushing occurs whenever the buffer is full, or when an event of a certain severity or greater is seen.

setTarget(target)[source]#

Set the target handler for this handler.

flush()[source]#

For a MemoryHandler, flushing means just sending the buffered records to the target, if there is one. Override if you want different behaviour. The record buffer is also cleared by this operation.

close()[source]#

Flush (if flushOnClose is set), set the target to None and clear the buffer.

emit(record)[source]#

Emit a record. Append the record and call flush() if the criteria are met.
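
A typical wiring sketch: records are buffered in memory and only forwarded to the target handler when the buffer fills, a record at ERROR or above arrives, or the handler is closed (names and limits are illustrative):

import sys

import picologging
from picologging.handlers import MemoryHandler

target = picologging.StreamHandler(sys.stderr)
# buffer up to 200 records; flush early if a record at ERROR or above arrives
memory_handler = MemoryHandler(capacity=200, flushLevel=picologging.ERROR, target=target)

logger = picologging.getLogger("app")
logger.setLevel(picologging.INFO)
logger.addHandler(memory_handler)
logger.info("buffered until capacity, flushLevel or close()")
logger.error("flushes the buffer to the stream handler immediately")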

Socket Handler#

class picologging.handlers.SocketHandler(host, port)[source]#

A handler class which writes logging records, in pickle format, to a streaming socket. The socket is kept open across logging calls. If the peer resets it, an attempt is made to reconnect on the next call. The pickle which is sent is that of the LogRecord’s attribute dictionary (__dict__), so that the receiver does not need to have the logging module installed in order to process the logging event. To unpickle the record at the receiving end into a LogRecord, use the makeLogRecord function.
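
A sending-side sketch (host and port are illustrative); a receiver would read the length prefix, unpickle the dictionary and rebuild the record, e.g. with the standard makeLogRecord function:

import picologging
from picologging.handlers import SocketHandler

logger = picologging.getLogger("app")
handler = SocketHandler("localhost", 9020)  # illustrative host and port
logger.addHandler(handler)
logger.warning("sent as a pickled LogRecord dictionary with a length prefix")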

makeSocket(timeout=1)[source]#

A factory method which allows subclasses to define the precise type of socket they want.

createSocket()[source]#

Try to create a socket, using an exponential backoff with a max retry time.

send(s)[source]#

Send a pickled string to the socket. This function allows for partial sends which can happen when the network is busy.

makePickle(record)[source]#

Pickles the record in binary format with a length prefix, and returns it ready for transmission across the socket.

handleError(record)[source]#

Handle an error during logging. The most likely cause is a lost connection. Close the socket so that we can retry on the next event.

emit(record)[source]#

Emit a record. Pickles the record and writes it to the socket in binary format. If there is a problem with the socket, the packet is silently dropped and the socket is re-established for the next event.

close()[source]#

Closes the socket.