It turns out that whenever the clustered appender is used, the log event is passed to all actual appenders.
The actual appender's category is ignored.
Signed-off-by: Vladimir Mitev <idalv@users.noreply.github.com>
Updated logger.js to support disabling all log writes.
Updated log4js.js shutdown function to disable log writes.
Added tests.
Update gitignore to ignore rolling date stream's test output.
loadAppender will check for a shutdown function exposed by
a loaded appender. If present, it will be cached so that
log4js's own shutdown function can invoke it.
The intent here is that a Node application would not invoke
process.exit until after the log4js shutdown callback returns.
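A minimal sketch of the two sides of this contract (the appender's shutdown signature taking a completion callback is an assumption, as are the module and file names):
```javascript
// custom-appender.js: exports an optional shutdown hook that
// loadAppender caches and log4js.shutdown() will invoke later
exports.appender = function () {
  return function (loggingEvent) {
    // write the event somewhere...
  };
};
exports.shutdown = function (done) {
  // flush buffers and close streams, then signal completion
  done();
};
```
```javascript
// application code: only exit once every appender has flushed
var log4js = require('log4js');
log4js.shutdown(function () {
  process.exit(0);
});
```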
Previously, appenders could only be added by specifying the filepath
to the appender. This required the appender's path to be specified
either relative to the log4js installation, relative to a NODE_PATH
token, or absolute. This created a coupling between the log4js
configurer and the log4js installation location, or between
the log4js configurer and the global NODE_PATH. This commit removes
the coupling by allowing appenders to be loaded relative
to the log4js configurer, as shown in the sketch below.
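A sketch of the decoupled form, assuming loadAppender accepts an optional pre-loaded module as its second argument (the appender name and path are illustrative):
```javascript
var log4js = require('log4js');
// resolve the custom appender relative to this configurer file,
// not relative to the log4js installation or NODE_PATH
var myAppender = require('./appenders/myAppender');

// register the pre-loaded module instead of having log4js resolve a path
log4js.loadAppender('myAppender', myAppender);
log4js.addAppender(myAppender.appender(), 'some-category');
```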
Using levels.toLevel and this.isLevelEnabled prior to emitting the event prevents the appenders from being notified when the log level provided is below the logger's level.
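A sketch of the guard, assuming the shape of logger.js (the LoggingEvent constructor arguments shown here are an assumption):
```javascript
Logger.prototype.log = function (level) {
  var logLevel = levels.toLevel(level)
    , args = Array.prototype.slice.call(arguments, 1);
  // skip building and emitting the event entirely when the level is too low
  if (this.isLevelEnabled(logLevel)) {
    this.emit('log', new LoggingEvent(this.category, logLevel, args, this));
  }
};
```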
Stop using any layout by default, because layouts are intended to
format a line for human reading. Loggly indexes the values (of all
properties of objects) and makes them available for querying.
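For example, logging a plain object lets Loggly index each property (a hypothetical sketch; the "loggly" type name and its token/subdomain options are assumptions):
```javascript
var log4js = require('log4js');
log4js.configure({
  appenders: [
    { type: 'loggly', token: 'your-token', subdomain: 'your-subdomain' }
  ]
});
// with no layout applied, the object reaches Loggly as structured data
log4js.getLogger().info({ user: 'bob', action: 'login', durationMs: 42 });
```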
+ lib/appenders/clustered.js
+ test/clusteredAppender-test.js
Instead of using sockets (like multiprocess) or the dead and unmaintained hook.io, the clustered appender
uses the process.send(message) / worker.on('message', callback) mechanism for transporting data
between worker processes and the master logger.
The master logger takes an "appenders" array of actual appenders that are triggered when worker appenders send data.
This guarantees sequential writes to the appenders, so log messages are not interleaved within single lines of the log.
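A hypothetical configuration sketch (the "appenders" option name comes from the description above; the filename is illustrative):
```javascript
var log4js = require('log4js');
log4js.configure({
  appenders: [
    {
      type: 'clustered',
      // actual appenders, triggered on the master when workers send events
      appenders: [
        { type: 'file', filename: 'application.log' }
      ]
    }
  ]
});
```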
This filtering appender allows you to choose some category
names that won't be logged to the delegated appender. This
is useful if you have e.g. a category that you use to log
web requests to one file, but want to keep those entries
out of the main log file without having to explicitly list
all the other categories that you _do_ want to include.
It has one option, "exclude", which is a category name or
array of category names. The child appender is set in
"appender", modelled on the logLevelFilter; see the sketch below.
During an evaluation of multiple loggers, I saw a slowdown when trying to
quickly log more than 100,000 messages to a file:
```javascript
var counter = 150000;
while (counter) {
  logger.info('Message[' + counter + ']');
  counter -= 1;
}
```
My detailed test can be found here:
- https://gist.github.com/NicolasPelletier/4773843
The test demonstrates that writing 150,000 lines straight to a FileStream
takes about 22 seconds until the file content stabilizes. When calling
logger.debug() 150,000 times, the file stabilizes to its final content
after 229s (almost 4 minutes!).
After investigation, it turns out that the problem is using an Array to
accumulate the data. Pushing the data into the Array with Array.push() is
quick, but the code flushing the buffer uses Array.shift(), which forces
re-indexing of all 149,999 elements remaining in the Array. Since each
shift is O(n), draining the buffer gets quadratically slower as it grows.
The solution is to use something other than an Array to accumulate the
messages. The fix was made using a package called Dequeue
(https://github.com/lleo/node-dequeue). Replacing the Array with
a Dequeue object brought the logging of 150,000 messages back down to
31s, seven times faster than the previous 229s.
There is a caveat: each log call takes slightly longer due to the need
to create an object to put in the double-ended queue inside the Dequeue
object. According to a quick test, it adds about 4% more time per call
to logger.debug().
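A minimal sketch of the buffering pattern (not the actual file-appender code; the stream and function names are illustrative):
```javascript
var fs = require('fs')
  , Dequeue = require('dequeue');

var stream = fs.createWriteStream('app.log')
  , buffer = new Dequeue();

stream.on('drain', flushBuffer);

function write(data) {
  buffer.push(data);   // O(1), just like Array.push()
  flushBuffer();
}

function flushBuffer() {
  while (buffer.length) {
    // O(1) removal; Array.shift() would re-index every remaining element
    if (!stream.write(buffer.shift())) {
      break; // back off until the stream emits 'drain'
    }
  }
}
```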
Errors sometimes carry additional attributes as part of the passed error data.
One utility that takes advantage of this is 'errs', which is used by, for instance, 'nano', the CouchDB driver.
When only the stack is printed, all the additional information attached to the error object never makes it into the log and is lost.
Consider the following code:
```
// hypothetical dependency: any shallow-extend helper, e.g. the 'extend' package
var extend = require('extend');

// the oups-throwing utility
function oups() {
  var e = new Error();
  extend(e,
    { message: "Oups error"
    , description: "huston, we got a problem"
    , status: "MESS"
    , errorCode: 991
    , arr: [1, 2, 3, 4, {}]
    , data:
      { c: {}
      , d: { e: {} }
      }
    });
  throw e;
}

var log = require('log4js').getLogger();
try {
  oups();
} catch (e) {
  log.error("error on oups", e);
}
```
Output before the fix:
```
error on oups Error: Oups error
at repl:1:11
at REPLServer.eval (repl.js:80:21)
at Interface.<anonymous> (repl.js:182:12)
at Interface.emit (events.js:67:17)
at Interface._onLine (readline.js:162:10)
at Interface._line (readline.js:426:8)
at Interface._ttyWrite (readline.js:603:14)
at ReadStream.<anonymous> (readline.js:82:12)
at ReadStream.emit (events.js:88:20)
```
Output after the fix:
```
error on oups { [Error: Oups error]
name: 'Error',
description: 'huston, we got a problem',
status: 'MESS',
errorCode: 991,
arr: [ 1, 2, 3, 4, {} ],
data: { c: {}, d: { e: {} } } }
Error: Oups error
at repl:1:11
at REPLServer.eval (repl.js:80:21)
at Interface.<anonymous> (repl.js:182:12)
at Interface.emit (events.js:67:17)
at Interface._onLine (readline.js:162:10)
at Interface._line (readline.js:426:8)
at Interface._ttyWrite (readline.js:603:14)
at ReadStream.<anonymous> (readline.js:82:12)
at ReadStream.emit (events.js:88:20)
```