According to the GELF spec v1.1 [1], the full_message field in GELF is
optional. The log4js implementation until now has sent identical
short_message and full_message fields. Since this does not add any new
information to the log message, I suggest that full_message be dropped
from GELF.
--
[1] http://graylog2.org/gelf#specs
It turns out that whenever the clustered appender is used, the log event is passed to all actual appenders.
The actual appender's category is ignored.
Signed-off-by: Vladimir Mitev <idalv@users.noreply.github.com>
Updated logger.js to support disabling all log writes.
Updated log4js.js shutdown function to disable log writes.
Added tests.
Updated .gitignore to ignore the rolling date stream's test output.
loadAppender will check for a shutdown function exposed by
a loaded appender. If present, it will be cached so that the
log4js shutdown function can execute it.
The intent here is that a Node application would not invoke
process.exit until after the log4js shutdown callback returns.
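A minimal sketch of the intended usage, assuming the log4js.shutdown(callback)
API described above:

```javascript
var log4js = require('log4js');
var logger = log4js.getLogger();

logger.info('last message before exit');

// Disable further log writes and wait for appenders that expose a
// shutdown function to finish flushing before the process exits.
log4js.shutdown(function (err) {
  process.exit(err ? 1 : 0);
});
```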
Previously, appenders could only be added by specifying the filepath
to the appender. This required the appender's path to be specified
either relative to the log4js installation, relative to a NODE_PATH
entry, or as an absolute path. This coupled the log4js configurer to
the log4js installation location, or to the global NODE_PATH. This
commit removes the coupling by allowing appenders to be loaded
relative to the log4js configurer.
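A hypothetical configuration illustrating the change; the relative
"type" path and the custom appender module below are assumptions for
the sake of the example, not the exact API:

```javascript
var log4js = require('log4js');

// './lib/requestAppender' is resolved relative to the module that
// calls configure(), not to the log4js installation or NODE_PATH.
log4js.configure({
  appenders: [
    { type: 'file', filename: 'app.log' },
    { type: './lib/requestAppender', category: 'http' }
  ]
});
```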
Using levels.toLevel and this.isLevelEnabled prior to emitting the event prevents the appenders from being notified when the provided log level is below the logger's level.
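A rough sketch of the guard described above; the surrounding Logger
internals (LoggingEvent, the EventEmitter-based Logger) are assumed,
not quoted from the actual source:

```javascript
// Sketch: Logger extends EventEmitter and exposes isLevelEnabled().
Logger.prototype.log = function (level) {
  var logLevel = levels.toLevel(level, levels.INFO);
  // Bail out before emitting, so appenders are never notified of
  // events below the logger's configured level.
  if (!this.isLevelEnabled(logLevel)) {
    return;
  }
  var args = Array.prototype.slice.call(arguments, 1);
  this.emit('log', new LoggingEvent(this.category, logLevel, args, this));
};
```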
Stop using any layout by default, because layouts are intended to
format a line for human reading. Loggly indexes the values (of all
properties of objects) and makes them available for querying.
+ lib/appenders/clustered.js
+ test/clusteredAppender-test.js
Instead of using sockets (like multiprocess) or the dead and unmaintained hook.io, the clustered appender
uses the process.send(message) / worker.on('message', callback) mechanism for transporting data
between worker processes and the master logger.
The master logger takes an "appenders" array of actual appenders that are triggered when worker appenders send data.
This guarantees sequential writes to the appenders, so log messages are not interleaved within single lines of the log.
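A minimal sketch of that transport, using Node's built-in cluster
module; the message shape and appender array here are illustrative,
not the actual implementation:

```javascript
var cluster = require('cluster');

// Illustrative stand-in for the configured "appenders" array.
var actualAppenders = [
  function (event) { console.log('[master]', event.data); }
];

if (cluster.isMaster) {
  var worker = cluster.fork();
  // Master: receive log events from workers over IPC and forward
  // them to the actual appenders one at a time, keeping writes
  // sequential.
  worker.on('message', function (message) {
    if (message && message.type === '::log-message') {
      actualAppenders.forEach(function (appender) {
        appender(message.event);
      });
      worker.kill();
    }
  });
} else {
  // Worker: instead of writing to a file or socket directly, send
  // the event to the master over the built-in IPC channel.
  process.send({ type: '::log-message', event: { data: 'hello from worker' } });
}
```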
This filtering appender allows you to choose some category
names that won't be logged to the delegated appender. This
is useful if you have e.g. a category that you use to log
web requests to one file, but want to keep those entries
out of the main log file without having to explicitly list
all the other categories that you _do_ want to include.
Has one option, "exclude", which is a category name or
array of category names. The child appender is set in
"appender", modelled on the logLevelFilter.
During an evaluation of multiple loggers, I saw a slowdown when trying to
quickly log more than 100,000 messages to a file:
```javascript
var counter = 150000;
while (counter) {
  logger.info('Message[' + counter + ']');
  counter -= 1;
}
```
My detailed test can be found here:
- https://gist.github.com/NicolasPelletier/4773843
The test demonstrates that writing 150,000 lines straight to a FileStream
takes about 22 seconds until the file content stabilizes. When calling
logger.debug() 150,000 times, the file stabilizes to its final content
after 229s (almost 4 minutes!).
After investigation, it turns out that the problem is using an Array to
accumulate the data. Pushing the data into the Array with Array.push() is
quick, but the code flushing the buffer uses Array.shift(), which forces
re-indexing of all 149,999 elements remaining in the Array. Each shift is
O(n), so draining the buffer costs quadratically more time as it grows.
The solution is to use something other than an Array to accumulate the
messages. The fix was made using a package called Dequeue
( https://github.com/lleo/node-dequeue ). Replacing the Array with
a Dequeue object brought the logging of 150,000 messages back down to
31s, seven times faster than the previous 229s.
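A sketch of the change, assuming a buffer that is filled on write and
drained on flush; the function names are illustrative, not the exact
streams code:

```javascript
var Dequeue = require('dequeue');

// Before: an Array buffer, where each shift() re-indexes every
// remaining element, so draining N messages costs O(N^2) overall.
// After: Dequeue's linked list gives O(1) push and shift.
var buffer = new Dequeue();

function bufferWrite(data) {
  buffer.push(data);
}

function flush(writable) {
  // Drain without re-indexing; each shift() is constant time.
  while (buffer.length > 0) {
    writable.write(buffer.shift());
  }
}

bufferWrite('line 1\n');
bufferWrite('line 2\n');
flush(process.stdout);
```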
There is a caveat: each log call takes slightly longer due to the need
to create an object to put in the double-ended queue inside the Dequeue
object. According to a quick test, this adds about 4% more time per call
to logger.debug().