What is a log4j appender

Asynchronous logging with log4j for the curious

This article covers the basics of asynchronous logging with log4j2: what it is, what it is for, and how it can be implemented.


Let's start by clarifying what asynchronous logging is.
Imagine a program with a main thread doing useful work. From time to time the program logs what it is doing. If the log record is written in the main thread of the program, interrupting the useful work, the logging is synchronous. If the log is written in another thread and does not interrupt the useful work, the logging is asynchronous. We will assume there are enough CPU resources for both the logging thread and the main thread.

Why would anyone need asynchronous logging? So the useful work gets interrupted for an instant, who does that hurt? There is, of course, a class of highly demanding applications, heavily optimized for maximum performance, where every microsecond counts; for those the case is obvious. But in today's world of microservices in the cloud, asynchronous logging can also be useful in "general purpose" applications with no special performance requirements. For example, suppose such an application ships its logs to a remote GrayLog server. The network connection can drop, and with synchronous logging that will hurt the work of the application itself. To guard against this, logs can be written asynchronously.

Let's model an example.
To imitate a "broken network logger" we will write our own appender. The home-made appender will write to System.out, but with a delay of a few seconds, as if there were network problems.

Here is what such an appender might look like. It is as simple as possible and does the absolute minimum of work:
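The original listing was lost in extraction; below is a sketch of what such an appender might look like, assuming log4j-core 2.x on the classpath. The class name and plugin name are my reconstruction, guided by the appender name and the output format shown later in the article.

```java
import java.time.LocalTime;

import org.apache.logging.log4j.core.Appender;
import org.apache.logging.log4j.core.Core;
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.AbstractAppender;
import org.apache.logging.log4j.core.config.Property;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.config.plugins.PluginAttribute;
import org.apache.logging.log4j.core.config.plugins.PluginFactory;

// A deliberately "broken" appender: every append() stalls for a few seconds,
// as if the network connection to a remote log server were hanging.
@Plugin(name = "SlowAppender", category = Core.CATEGORY_NAME,
        elementType = Appender.ELEMENT_TYPE, printObject = true)
public final class SlowAppender extends AbstractAppender {

    private SlowAppender(String name) {
        super(name, null, null, true, Property.EMPTY_ARRAY);
    }

    @PluginFactory
    public static SlowAppender createAppender(@PluginAttribute("name") String name) {
        return new SlowAppender(name);
    }

    @Override
    public void append(LogEvent event) {
        try {
            Thread.sleep(3_000); // emulate network trouble
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // Print the thread name so it is visible where the logging happens.
        System.out.printf("%s: Thread:[%s]: %s%n",
                LocalTime.now(), Thread.currentThread().getName(),
                event.getMessage().getFormattedMessage());
    }
}
```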

Let's create a minimal log4j configuration and wire in our appender
(the configuration is abridged):
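The configuration itself did not survive extraction; a minimal sketch might look like this. The logger name ru.logs.LoggerTest.slowAsync is taken from the sample output in the article; everything else is an assumption.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %level %logger{36} - %msg%n"/>
        </Console>
        <!-- our home-made "broken network" appender -->
        <SlowAppender name="SlowAppender"/>
    </Appenders>
    <Loggers>
        <!-- loggerExperimental: routed to the broken appender -->
        <Logger name="ru.logs.LoggerTest.slowAsync" level="info" additivity="false">
            <AppenderRef ref="SlowAppender"/>
        </Logger>
        <!-- logger: the regular console logger used to emulate useful work -->
        <Root level="info">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>
```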

To make the timings easier to follow, the emulated useful work uses a regular console logger (logger), while loggerExperimental is our broken logger.
Run it and we see the following picture:

23:03:47.837 [main] INFO ru.logs.LoggerTest — action before:0
23:03:50.840: Thread:[main]: logging in slow:0
23:03:50.841 [main] INFO ru.logs.LoggerTest — action after:0

You can see how the broken logger stalled the work for 3 seconds. Note that, for clarity, the broken logger prints the name of the thread it runs in, and that thread is main, the application's main thread. Let's change that and move the logging to another thread. To do so, the appender has to be configured as asynchronous. There are many possible ways to configure this; here is one of them.
It is done by adding a section like this to the config:
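The section in question did not survive extraction; a sketch of what it might look like, with the appender name Async-SlowAppender matching the log output quoted below and the parameter values described in the following paragraphs:

```xml
<Async name="Async-SlowAppender" bufferSize="10" blocking="false" errorRef="Console">
    <AppenderRef ref="SlowAppender"/>
</Async>
```

The experimental logger is then pointed at Async-SlowAppender instead of SlowAppender.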

Here we declare that SlowAppender is now asynchronous. This means the main thread (main) will put the message into a queue, while a separate, dedicated logging thread will take it from that queue and hand it to the appender, which in our emulator simply prints it to the console.

Since there is a queue, its maximum size must be set; this is done with the bufferSize parameter. In my example it is 10, to make it easier to demonstrate what happens when the queue fills up.

What happens when the queue is full? That is governed by the blocking parameter. In this example blocking="false", which means an error message, together with the message that was supposed to be logged, is sent to the appender referenced by errorRef="Console", while the main thread keeps running. In other words, when the queue is full the message ends up in errorRef rather than in the "broken logger".
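The two blocking modes map directly onto the two enqueue operations of a bounded queue. A tiny stand-alone sketch in plain Java (this is an illustration of the semantics, not log4j internals; the names are made up):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// blocking="false" behaves like offer(): when the queue is full it returns
// false immediately and the event is diverted (to errorRef).
// blocking="true" behaves like put(): the caller (the application's main
// thread) waits until a slot frees up, i.e. logging becomes synchronous.
public class QueueFullDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2); // tiny bufferSize

        // offer() succeeds while there is room.
        System.out.println(queue.offer("event-1")); // true
        System.out.println(queue.offer("event-2")); // true

        // Queue is full: the non-blocking enqueue fails at once.
        System.out.println(queue.offer("event-3")); // false

        // A blocking enqueue would park the caller here until a consumer
        // frees a slot, which is exactly what blocking="true" does.
        queue.take();         // consumer drains one event
        queue.put("event-3"); // now put() returns immediately
        System.out.println(queue.size()); // 2
    }
}
```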

In the output it looks like this:

23:23:34.865 [main] INFO ru.logs.LoggerTest — action before:29
2020-06-06 23:23:34,865 main ERROR Appender Async-SlowAppender is unable to write primary appenders. queue is full
23:23:34.865 [main] INFO ru.logs.LoggerTest.slowAsync — logging in slow: 29
23:23:34.866 [main] INFO ru.logs.LoggerTest — action after:29
23:23:35.856: Thread:[AsyncAppender-Async-SlowAppender]: logging in slow: 9
23:23:35.866 [main] INFO ru.logs.LoggerTest — action before:30

Note the queue-full message, and that the next message went not to the broken logger but to errorRef.

With blocking="true" the main thread blocks until space appears in the broken logger's queue; that is, once the queue fills up, logging effectively becomes synchronous, but every message reaches the logger it was meant for (the broken one).
Which to choose, blocking="true" or blocking="false"? It depends on the nature of the application and the kind of logs you write. Note that with blocking="true" the queue can still cushion the application through short network outages, so this option is worth considering too.

Now let's take a closer look at that queue. It is clearly a crucial part of the system. There are a few queue implementations to choose from; I used the fastest one, Disruptor. To use it, a couple of extra dependencies have to be added.
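The dependency list was lost in extraction. Assuming a Maven build and that "Disruptor" here refers to the Conversant DisruptorBlockingQueue supported by the AsyncAppender (the version number is illustrative), the addition might look like:

```xml
<!-- Conversant Disruptor, backing the AsyncAppender's DisruptorBlockingQueue -->
<dependency>
    <groupId>com.conversantmedia</groupId>
    <artifactId>disruptor</artifactId>
    <version>1.2.15</version>
</dependency>
```

The queue is then selected by adding a nested `<DisruptorBlockingQueue/>` element inside the `<Async>` appender configuration.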

So, what do we get in the end?

23:23:05.848 [main] INFO ru.logs.LoggerTest — action before:0
23:23:05.851 [main] INFO ru.logs.LoggerTest — action after:0
23:23:06.851 [main] INFO ru.logs.LoggerTest — action before:1
23:23:06.851 [main] INFO ru.logs.LoggerTest — action after:1
23:23:07.851 [main] INFO ru.logs.LoggerTest — action before:2
23:23:07.851 [main] INFO ru.logs.LoggerTest — action after:2
23:23:08.852 [main] INFO ru.logs.LoggerTest — action before:3
23:23:08.852 [main] INFO ru.logs.LoggerTest — action after:3
23:23:08.854: Thread:[AsyncAppender-Async-SlowAppender]: logging in slow: 0

The application runs without feeling the broken logger at all. Of course, once the queue fills up, trouble begins: either the application slows down or messages get lost (depending on the blocking parameter). But that is still better than plain synchronous logging.

An attentive reader will surely suggest switching to asynchronous loggers across the board. Why not? Asynchronous loggers have one unpleasant property: they load the CPU heavily. In my case, with the asynchronous logger the CPU load was around 100% (according to the top utility), while in the synchronous variant it was around 10-20%. With more asynchronous loggers the CPU load would rise significantly.

The question is: why? After all, I log with long pauses, the sleep is measured in seconds. What is the CPU being spent on?

On June 11 at 20:00 there will be an open webinar; this question can be discussed at the end of it.


Appenders

Appenders are responsible for delivering LogEvents to their destination. Every Appender must implement the Appender interface. Most Appenders will extend AbstractAppender which adds Lifecycle and Filterable support. Lifecycle allows components to finish initialization after configuration has completed and to perform cleanup during shutdown. Filterable allows the component to have Filters attached to it which are evaluated during event processing.

Appenders usually are only responsible for writing the event data to the target destination. In most cases they delegate responsibility for formatting the event to a layout. Some appenders wrap other appenders so that they can modify the LogEvent, handle a failure in an Appender, route the event to a subordinate Appender based on advanced Filter criteria or provide similar functionality that does not directly format the event for viewing.

Appenders always have a name so that they can be referenced from Loggers.

AsyncAppender

The AsyncAppender accepts references to other Appenders and causes LogEvents to be written to them on a separate Thread. Note that exceptions while writing to those Appenders will be hidden from the application. The AsyncAppender should be configured after the appenders it references to allow it to shut down properly.

A typical AsyncAppender configuration might look like:
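The sample was stripped in extraction; a minimal sketch, with illustrative names and file path:

```xml
<Configuration status="warn" name="MyApp">
    <Appenders>
        <File name="MyFile" fileName="logs/app.log"/>
        <Async name="Async">
            <AppenderRef ref="MyFile"/>
        </Async>
    </Appenders>
    <Loggers>
        <Root level="error">
            <AppenderRef ref="Async"/>
        </Root>
    </Loggers>
</Configuration>
```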

ConsoleAppender

As one might expect, the ConsoleAppender writes its output to either System.err or System.out with System.err being the default target. A Layout must be provided to format the LogEvent.

ConsoleAppender Parameters

filter (Filter): A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
layout (Layout): The Layout to use to format the LogEvent. If no layout is supplied the default pattern layout of "%m%n" will be used.
follow (boolean): Identifies whether the appender honors reassignments of System.out or System.err via System.setOut or System.setErr made after configuration. Note that the follow attribute cannot be used with Jansi on Windows.
name (String): The name of the Appender.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender.
target (String): Either "SYSTEM_OUT" or "SYSTEM_ERR". The default is "SYSTEM_ERR".

A typical Console configuration might look like:
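The sample was stripped in extraction; a minimal sketch:

```xml
<Console name="STDOUT" target="SYSTEM_OUT">
    <PatternLayout pattern="%m%n"/>
</Console>
```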

FailoverAppender

The FailoverAppender wraps a set of appenders. If the primary Appender fails the secondary appenders will be tried in order until one succeeds or there are no more secondaries to try.

FailoverAppender Parameters

filter (Filter): A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
primary (String): The name of the primary Appender to use.
failovers (String[]): The names of the secondary Appenders to use.
name (String): The name of the Appender.
retryInterval (integer): The number of seconds that should pass before retrying the primary Appender. The default is 60.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead.

A Failover configuration might look like:
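The sample was stripped in extraction; a minimal sketch, assuming a RollingFile appender named "RollingFile" and a Console appender named "Console" are defined elsewhere in the configuration:

```xml
<Failover name="Failover" primary="RollingFile">
    <Failovers>
        <AppenderRef ref="Console"/>
    </Failovers>
</Failover>
```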

FileAppender

The FileAppender is an OutputStreamAppender that writes to the File named in the fileName parameter. The FileAppender uses a FileManager (which extends OutputStreamManager) to actually perform the file I/O. While FileAppenders from different Configurations cannot be shared, the FileManagers can be if the Manager is accessible. For example, two web applications in a servlet container can have their own configuration and safely write to the same file if Log4j is in a ClassLoader that is common to both of them.

Flushing after every write is only useful when using this appender with synchronous loggers. Asynchronous loggers and appenders will automatically flush at the end of a batch of events, even if immediateFlush is set to false. This also guarantees the data is written to disk but is more efficient.

FileAppender Parameters

layout (Layout): The Layout to use to format the LogEvent.
locking (boolean): When set to true, I/O operations will occur only while the file lock is held, allowing FileAppenders in multiple JVMs and potentially multiple hosts to write to the same file simultaneously. This will significantly impact performance so should be used carefully. Furthermore, on many systems the file lock is "advisory", meaning that other applications can perform operations on the file without acquiring a lock. The default value is false.
name (String): The name of the Appender.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender.

Here is a sample File configuration:
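The sample was stripped in extraction; a minimal sketch, with an illustrative file path and pattern:

```xml
<File name="MyFile" fileName="logs/app.log">
    <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %logger{36} %msg%n"/>
</File>
```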

FlumeAppender

This is an optional component supplied in a separate jar.

Apache Flume is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to a centralized data store. The FlumeAppender takes LogEvents and sends them to a Flume agent as serialized Avro events for consumption.

The Flume Appender supports three modes of operation: it can act as a remote client that sends serialized Avro events to a Flume agent, it can act as an embedded Flume agent, or it can persist events to a local Berkeley DB data store and send them to Flume asynchronously (the list of modes was truncated in this copy; the surrounding text references all three).

Usage as an embedded agent will cause the messages to be directly passed to the Flume Channel and then control will be immediately returned to the application. All interaction with remote agents will occur asynchronously. Setting the «type» attribute to «Embedded» will force the use of the embedded agent. In addition, configuring agent properties in the appender configuration will also cause the embedded agent to be used.
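A minimal remote-agent (Avro) configuration might look like this sketch; the hosts, ports, and layout values are illustrative:

```xml
<Flume name="eventLogger" compress="true">
    <Agent host="192.168.10.101" port="8800"/>
    <Agent host="192.168.10.102" port="8800"/>
    <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
</Flume>
```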

FlumeAppender Parameters

agents (Agent[]): An array of Agents to which the logging events should be sent. If more than one agent is specified, the first Agent will be the primary and subsequent Agents will be used in the order specified as secondaries should the primary Agent fail. Each Agent definition supplies the Agent's host and port. The specification of agents and properties are mutually exclusive. If both are configured an error will result.
agentRetries (integer): The number of times the agent should be retried before failing to a secondary. This parameter is ignored when type="persistent" is specified (agents are tried once before failing to the next).
batchSize (integer): Specifies the number of events that should be sent as a batch. The default is 1. This parameter only applies to the Flume NG Appender.
compress (boolean): When set to true the message body will be compressed using gzip.
connectTimeout (integer): The number of milliseconds Flume will wait before timing out the connection.
dataDir (String): Directory where the Flume write-ahead log should be written. Valid only when embedded is set to true and Agent elements are used instead of Property elements.
filter (Filter): A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
eventPrefix (String): The character string to prepend to each event attribute in order to distinguish it from MDC attributes. The default is an empty string.
flumeEventFactory (FlumeEventFactory): Factory that generates the Flume events from Log4j events. The default factory is the FlumeAvroAppender itself.
layout (Layout): The Layout to use to format the LogEvent. If no layout is specified RFC5424Layout will be used.
lockTimeoutRetries (integer): The number of times to retry if a LockConflictException occurs while writing to Berkeley DB. The default is 5.
maxDelay (integer): The maximum number of seconds to wait for batchSize events before publishing the batch.
mdcExcludes (String): A comma-separated list of MDC keys that should be excluded from the FlumeEvent. This is mutually exclusive with the mdcIncludes attribute.
mdcIncludes (String): A comma-separated list of MDC keys that should be included in the FlumeEvent. Any keys in the MDC not found in the list will be excluded. This option is mutually exclusive with the mdcExcludes attribute.
mdcRequired (String): A comma-separated list of MDC keys that must be present in the MDC. If a key is not present a LoggingException will be thrown.
mdcPrefix (String): A string that should be prepended to each MDC key in order to distinguish it from event attributes. The default string is "mdc:".
name (String): The name of the Appender.
properties (Property[]): One or more Property elements that are used to configure the Flume Agent. The properties must be configured without the agent name (the appender name is used for this) and no sources can be configured. Interceptors can be specified for the source using "sources.log4j-source.interceptors". All other Flume configuration properties are allowed. Specifying both Agent and Property elements will result in an error.

When used to configure in Persistent mode the valid properties are:

A sample FlumeAppender configuration that is configured with a primary and a secondary agent, compresses the body, and formats the body using the RFC5424Layout:

A sample FlumeAppender configuration that is configured with a primary and a secondary agent, compresses the body, formats the body using the RFC5424Layout, and persists encrypted events to disk:

A sample FlumeAppender configuration that is configured with a primary and a secondary agent, compresses the body, formats the body using RFC5424Layout and passes the events to an embedded Flume Agent:

A sample FlumeAppender configuration that is configured with a primary and a secondary agent using Flume configuration properties, compresses the body, formats the body using RFC5424Layout and passes the events to an embedded Flume Agent:

JDBCAppender

The JDBCAppender writes log events to a relational database table using standard JDBC. It can be configured to obtain JDBC connections using a JNDI DataSource or a custom factory method. Whichever approach you take, it must be backed by a connection pool. Otherwise, logging performance will suffer greatly.

JDBCAppender Parameters

name (String): Required. The name of the Appender.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender.
filter (Filter): A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
bufferSize (int): If an integer greater than 0, this causes the appender to buffer log events and flush whenever the buffer reaches this size.
connectionSource (ConnectionSource): Required. The connection source from which database connections should be retrieved.
tableName (String): Required. The name of the database table to insert log events into.
columnConfigs (ColumnConfig[]): Required. Information about the columns that log event data should be inserted into and how to insert that data. This is represented with multiple <Column> elements.

When configuring the JDBCAppender, you must specify a ConnectionSource implementation from which the Appender gets JDBC connections. You must use exactly one of the <DataSource> or <ConnectionFactory> nested elements.

DataSource Parameters

jndiName (String): Required. The full, prefixed JNDI name that the javax.sql.DataSource is bound to, such as java:/comp/env/jdbc/LoggingDatabase. The DataSource must be backed by a connection pool; otherwise, logging will be very slow.

ConnectionFactory Parameters

class (Class): Required. The fully qualified name of a class containing a static factory method for obtaining JDBC connections.
method (Method): Required. The name of a static factory method for obtaining JDBC connections. This method must have no parameters and its return type must be either java.sql.Connection or DataSource. If the method returns Connections, it must obtain them from a connection pool (and they will be returned to the pool when Log4j is done with them); otherwise, logging will be very slow. If the method returns a DataSource, the DataSource will only be retrieved once, and it must be backed by a connection pool for the same reasons.

When configuring the JDBCAppender, use the nested <Column> elements to specify which columns in the table should be written to and how to write to them. The JDBCAppender uses this information to formulate a PreparedStatement to insert records without SQL injection vulnerability.

Column Parameters

name (String): Required. The name of the database column.
pattern (String): Use this attribute to insert a value or values from the log event in this column using a PatternLayout pattern. Simply specify any legal pattern in this attribute. Either this attribute, literal, or isEventTimestamp="true" must be specified, but not more than one of these.
literal (String): Use this attribute to insert a literal value in this column. The value will be included directly in the insert SQL, without any quoting (which means that if you want this to be a string, your value should contain single quotes around it like this: literal="'Literal String'"). This is especially useful for databases that don't support identity columns. For example, if you are using Oracle you could specify literal="NAME_OF_YOUR_SEQUENCE.NEXTVAL" to insert a unique ID in an ID column. Either this attribute, pattern, or isEventTimestamp="true" must be specified, but not more than one of these.
isEventTimestamp (boolean): Use this attribute to insert the event timestamp in this column, which should be a SQL datetime. The value will be inserted as a java.sql.Types.TIMESTAMP. Either this attribute (equal to true), pattern, or literal must be specified, but not more than one of these.
isUnicode (boolean): This attribute is ignored unless pattern is specified. If true or omitted (default), the value will be inserted as unicode (setNString or setNClob). Otherwise, the value will be inserted non-unicode (setString or setClob).
isClob (boolean): This attribute is ignored unless pattern is specified. Use this attribute to indicate that the column stores Character Large Objects (CLOBs). If true, the value will be inserted as a CLOB (setClob or setNClob). If false or omitted (default), the value will be inserted as a VARCHAR or NVARCHAR (setString or setNString).

Here are a couple sample configurations for the JDBCAppender, as well as a sample factory implementation that uses Commons Pooling and Commons DBCP to pool database connections:
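The samples were stripped in extraction; a minimal DataSource-based sketch, with illustrative table, column, and JNDI names:

```xml
<JDBC name="databaseAppender" tableName="dbo.application_log">
    <DataSource jndiName="java:/comp/env/jdbc/LoggingDataSource"/>
    <Column name="eventDate" isEventTimestamp="true"/>
    <Column name="level" pattern="%level"/>
    <Column name="logger" pattern="%logger"/>
    <Column name="message" pattern="%message"/>
    <Column name="exception" pattern="%ex{full}"/>
</JDBC>
```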

JMSQueueAppender

The JMSQueueAppender sends the formatted log event to a JMS Queue.

JMSQueueAppender Parameters

factoryBindingName (String): The name to locate in the Context that provides the QueueConnectionFactory.
factoryName (String): The fully qualified class name that should be used to define the Initial Context Factory as defined in INITIAL_CONTEXT_FACTORY. If no value is provided the default InitialContextFactory will be used. If a factoryName is specified without a providerURL a warning message will be logged as this is likely to cause problems.
filter (Filter): A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
layout (Layout): The Layout to use to format the LogEvent. If you do not specify a layout, this appender will use a SerializedLayout.
name (String): The name of the Appender.
password (String): The password to use to create the queue connection.
providerURL (String): The URL of the provider to use as defined by PROVIDER_URL. If this value is null the default system provider will be used.
queueBindingName (String): The name to use to locate the Queue.
securityPrincipalName (String): The name of the identity of the Principal as specified by SECURITY_PRINCIPAL. If a securityPrincipalName is specified without securityCredentials a warning message will be logged as this is likely to cause problems.
securityCredentials (String): The security credentials for the principal as specified by SECURITY_CREDENTIALS.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender.
urlPkgPrefixes (String): A colon-separated list of package prefixes for the class name of the factory class that will create a URL context factory as defined by URL_PKG_PREFIXES.
userName (String): The user id used to create the queue connection.

Here is a sample JMSQueueAppender configuration:
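The sample was stripped in extraction; a minimal sketch, with illustrative binding names:

```xml
<JMSQueue name="jmsQueue" queueBindingName="MyQueue"
          factoryBindingName="MyQueueConnectionFactory"/>
```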

JMSTopicAppender

The JMSTopicAppender sends the formatted log event to a JMS Topic.

JMSTopicAppender Parameters

factoryBindingName (String): The name to locate in the Context that provides the TopicConnectionFactory.
factoryName (String): The fully qualified class name that should be used to define the Initial Context Factory as defined in INITIAL_CONTEXT_FACTORY. If no value is provided the default InitialContextFactory will be used. If a factoryName is specified without a providerURL a warning message will be logged as this is likely to cause problems.
filter (Filter): A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
layout (Layout): The Layout to use to format the LogEvent. If you do not specify a layout, this appender will use a SerializedLayout.
name (String): The name of the Appender.
password (String): The password to use to create the topic connection.
providerURL (String): The URL of the provider to use as defined by PROVIDER_URL. If this value is null the default system provider will be used.
topicBindingName (String): The name to use to locate the Topic.
securityPrincipalName (String): The name of the identity of the Principal as specified by SECURITY_PRINCIPAL. If a securityPrincipalName is specified without securityCredentials a warning message will be logged as this is likely to cause problems.
securityCredentials (String): The security credentials for the principal as specified by SECURITY_CREDENTIALS.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender.
urlPkgPrefixes (String): A colon-separated list of package prefixes for the class name of the factory class that will create a URL context factory as defined by URL_PKG_PREFIXES.
userName (String): The user id used to create the topic connection.

Here is a sample JMSTopicAppender configuration:
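The sample was stripped in extraction; a minimal sketch, with illustrative binding names:

```xml
<JMSTopic name="jmsTopic" topicBindingName="MyTopic"
          factoryBindingName="MyTopicConnectionFactory"/>
```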

JPAAppender

The JPAAppender writes log events to a relational database table using the Java Persistence API 2.1. It requires the API and a provider implementation be on the classpath. It also requires a decorated entity configured to persist to the table desired. The entity should either extend org.apache.logging.log4j.core.appender.db.jpa.BasicLogEventEntity (if you mostly want to use the default mappings) and provide at least an @Id property, or org.apache.logging.log4j.core.appender.db.jpa.AbstractLogEventWrapperEntity (if you want to significantly customize the mappings). See the Javadoc for these two classes for more information. You can also consult the source code of these two classes as an example of how to implement the entity.

JPAAppender Parameters

name (String): Required. The name of the Appender.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender.
filter (Filter): A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
bufferSize (int): If an integer greater than 0, this causes the appender to buffer log events and flush whenever the buffer reaches this size.
entityClassName (String): Required. The fully qualified name of the concrete LogEventWrapperEntity implementation that has JPA annotations mapping it to a database table.
persistenceUnitName (String): Required. The name of the JPA persistence unit that should be used for persisting log events.

Here is a sample configuration for the JPAAppender. The first XML sample is the Log4j configuration file, the second is the persistence.xml file. EclipseLink is assumed here, but any JPA 2.1 or higher provider will do. You should always create a separate persistence unit for logging, for two reasons. First, <shared-cache-mode> must be set to "NONE," which is usually not desired in normal JPA usage. Also, for performance reasons the logging entity should be isolated in its own persistence unit away from all other entities and you should use a non-JTA data source. Note that your persistence unit must also contain <class> elements for all of the org.apache.logging.log4j.core.appender.db.jpa.converter converter classes.
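The XML samples themselves were stripped in extraction; a minimal sketch of the Log4j side, where the entity class name and persistence unit name are illustrative:

```xml
<JPA name="databaseAppender"
     persistenceUnitName="loggingPersistenceUnit"
     entityClassName="com.example.logging.JpaLogEntity"/>
```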

NoSQLAppender

The NoSQLAppender writes log events to a NoSQL database using an internal lightweight provider interface. Provider implementations currently exist for MongoDB and Apache CouchDB, and writing a custom provider is quite simple.

NoSQLAppender Parameters

name (String): Required. The name of the Appender.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender.
filter (Filter): A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
bufferSize (int): If an integer greater than 0, this causes the appender to buffer log events and flush whenever the buffer reaches this size.
NoSqlProvider (NoSQLProvider): Required. The NoSQL provider that provides connections to the chosen NoSQL database.

MongoDB Provider Parameters

collectionName (String): Required. The name of the MongoDB collection to insert the events into.
writeConcernConstant (Field): By default, the MongoDB provider inserts records with the instructions com.mongodb.WriteConcern.ACKNOWLEDGED. Use this optional attribute to specify the name of a constant other than ACKNOWLEDGED.
writeConcernConstantClass (Class): If you specify writeConcernConstant, you can use this attribute to specify a class other than com.mongodb.WriteConcern to find the constant on (to create your own custom instructions).
factoryClassName (Class): To provide a connection to the MongoDB database, you can use this attribute and factoryMethodName to specify a class and static method to get the connection from. The method must return a com.mongodb.DB or a com.mongodb.MongoClient. If the DB is not authenticated, you must also specify a username and password. If you use the factory method for providing a connection, you must not specify the databaseName, server, or port attributes.
factoryMethodName (Method): See the documentation for attribute factoryClassName.
databaseName (String): If you do not specify a factoryClassName and factoryMethodName for providing a MongoDB connection, you must specify a MongoDB database name using this attribute. You must also specify a username and password. You can optionally also specify a server (defaults to localhost), and a port (defaults to the default MongoDB port).
server (String): See the documentation for attribute databaseName.
port (int): See the documentation for attribute databaseName.
username (String): See the documentation for attributes databaseName and factoryClassName.
password (String): See the documentation for attributes databaseName and factoryClassName.

CouchDB Provider Parameters

factoryClassName (Class): To provide a connection to the CouchDB database, you can use this attribute and factoryMethodName to specify a class and static method to get the connection from. The method must return a org.lightcouch.CouchDbClient or a org.lightcouch.CouchDbProperties. If you use the factory method for providing a connection, you must not specify the databaseName, protocol, server, port, username, or password attributes.
factoryMethodName (Method): See the documentation for attribute factoryClassName.
databaseName (String): If you do not specify a factoryClassName and factoryMethodName for providing a CouchDB connection, you must specify a CouchDB database name using this attribute. You must also specify a username and password. You can optionally also specify a protocol (defaults to http), server (defaults to localhost), and a port (defaults to 80 for http and 443 for https).
protocol (String): Must either be "http" or "https". See the documentation for attribute databaseName.
server (String): See the documentation for attribute databaseName.
port (int): See the documentation for attribute databaseName.
username (String): See the documentation for attribute databaseName.
password (String): See the documentation for attribute databaseName.

Here are a few sample configurations for the NoSQLAppender:

The following example demonstrates how log events are persisted in NoSQL databases if represented in a JSON format:
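As a minimal sketch of such a configuration (the database name, collection, server, and credentials below are placeholders, in the spirit of the manual's MongoDB example):

```xml
<Configuration status="error">
  <Appenders>
    <!-- NoSQL appender persisting events as JSON documents in MongoDB -->
    <NoSql name="databaseAppender">
      <MongoDb databaseName="applicationDb" collectionName="applicationLog"
               server="mongo.example.org" username="loggingUser" password="abc123"/>
    </NoSql>
  </Appenders>
  <Loggers>
    <Root level="warn">
      <AppenderRef ref="databaseAppender"/>
    </Root>
  </Loggers>
</Configuration>
```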

OutputStreamAppender

The OutputStreamAppender provides the base for many of the other Appenders such as the File and Socket appenders that write the event to an Output Stream. It cannot be directly configured. Support for immediateFlush and buffering is provided by the OutputStreamAppender. The OutputStreamAppender uses an OutputStreamManager to handle the actual I/O, allowing the stream to be shared by Appenders in multiple configurations.

RandomAccessFileAppender

(Experimental, may replace FileAppender in a future release.)

As of beta-9, the name of this appender has been changed from FastFile to RandomAccessFile. Configurations using the FastFile element no longer work and should be modified to use the RandomAccessFile element.

The RandomAccessFileAppender is similar to the standard FileAppender except it is always buffered (this cannot be switched off) and internally it uses a ByteBuffer + RandomAccessFile instead of a BufferedOutputStream. We saw a 20-200% performance improvement compared to FileAppender with «bufferedIO=true» in our measurements. Similar to the FileAppender, RandomAccessFileAppender uses a RandomAccessFileManager to actually perform the file I/O. While RandomAccessFileAppender from different Configurations cannot be shared, the RandomAccessFileManagers can be if the Manager is accessible. For example, two web applications in a servlet container can have their own configuration and safely write to the same file if Log4j is in a ClassLoader that is common to both of them.

Flushing after every write is only useful when using this appender with synchronous loggers. Asynchronous loggers and appenders will automatically flush at the end of a batch of events, even if immediateFlush is set to false. This also guarantees the data is written to disk but is more efficient.

RandomAccessFileAppender Parameters

bufferSize (int): The buffer size, defaults to 262,144 bytes (256 * 1024).
layout (Layout): The Layout to use to format the LogEvent.
name (String): The name of the Appender.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender.

Here is a sample RandomAccessFile configuration:
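A minimal sketch of such a configuration (file name and layout pattern are placeholders):

```xml
<Configuration status="warn" name="MyApp">
  <Appenders>
    <!-- Always buffered; with async loggers flushing happens at batch boundaries -->
    <RandomAccessFile name="MyFile" fileName="logs/app.log" immediateFlush="false">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
    </RandomAccessFile>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="MyFile"/>
    </Root>
  </Loggers>
</Configuration>
```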

RewriteAppender

The RewriteAppender allows the LogEvent to be manipulated before it is processed by another Appender. This can be used to mask sensitive information such as passwords or to inject information into each event. The RewriteAppender must be configured with a RewritePolicy. The RewriteAppender should be configured after any Appenders it references to allow it to shut down properly.

RewriteAppender Parameters

AppenderRef (String): The name of the Appenders to call after the LogEvent has been manipulated. Multiple AppenderRef elements can be configured.
filter (Filter): A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
name (String): The name of the Appender.
rewritePolicy (RewritePolicy): The RewritePolicy that will manipulate the LogEvent.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender.

RewritePolicy

RewritePolicy is an interface that allows implementations to inspect and possibly modify LogEvents before they are passed to the Appender. RewritePolicy declares a single method named rewrite that must be implemented. The method is passed the LogEvent and can return the same event or create a new one.

MapRewritePolicy

MapRewritePolicy will evaluate LogEvents that contain a MapMessage and will add or update elements of the Map.

mode (String): "Add" or "Update".
keyValuePair (KeyValuePair[]): An array of keys and their values.

The following configuration shows a RewriteAppender configured to add a product key and its value to the MapMessage:
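A sketch of such a configuration; the Console appender and the product key/value are illustrative:

```xml
<Appenders>
  <Console name="STDOUT" target="SYSTEM_OUT">
    <PatternLayout pattern="%m%n"/>
  </Console>
  <!-- Declared after the appender it references, per the shutdown-ordering note above -->
  <Rewrite name="rewrite">
    <AppenderRef ref="STDOUT"/>
    <MapRewritePolicy mode="Add">
      <KeyValuePair key="product" value="TestProduct"/>
    </MapRewritePolicy>
  </Rewrite>
</Appenders>
```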

PropertiesRewritePolicy

PropertiesRewritePolicy will add properties configured on the policy to the ThreadContext Map being logged. The properties will not be added to the actual ThreadContext Map. The property values may contain variables that will be evaluated when the configuration is processed as well as when the event is logged.

properties (Property[]): One or more Property elements to define the keys and values to be added to the ThreadContext Map.

The following configuration shows a RewriteAppender configured with a PropertiesRewritePolicy that adds properties to each logged event:
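A sketch of such a configuration; the "user" property and its lookup value are illustrative:

```xml
<Appenders>
  <Console name="STDOUT" target="SYSTEM_OUT">
    <PatternLayout pattern="%m%n"/>
  </Console>
  <Rewrite name="rewrite">
    <AppenderRef ref="STDOUT"/>
    <!-- Adds a "user" property to every event; the value may contain lookups -->
    <PropertiesRewritePolicy>
      <Property name="user">${sys:user.name}</Property>
    </PropertiesRewritePolicy>
  </Rewrite>
</Appenders>
```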

RollingFileAppender

The RollingFileAppender is an OutputStreamAppender that writes to the File named in the fileName parameter and rolls the file over according to the TriggeringPolicy and the RolloverStrategy. The RollingFileAppender uses a RollingFileManager (which extends OutputStreamManager) to actually perform the file I/O and perform the rollover. While RollingFileAppenders from different Configurations cannot be shared, the RollingFileManagers can be if the Manager is accessible. For example, two web applications in a servlet container can have their own configuration and safely write to the same file if Log4j is in a ClassLoader that is common to both of them.

A RollingFileAppender requires a TriggeringPolicy and a RolloverStrategy. The triggering policy determines if a rollover should be performed while the RolloverStrategy defines how the rollover should be done. If no RolloverStrategy is configured, RollingFileAppender will use the DefaultRolloverStrategy.

File locking is not supported by the RollingFileAppender.

Flushing after every write is only useful when using this appender with synchronous loggers. Asynchronous loggers and appenders will automatically flush at the end of a batch of events, even if immediateFlush is set to false. This also guarantees the data is written to disk but is more efficient.

RollingFileAppender Parameters

layout (Layout): The Layout to use to format the LogEvent.
name (String): The name of the Appender.
policy (TriggeringPolicy): The policy to use to determine if a rollover should occur.
strategy (RolloverStrategy): The strategy to use to determine the name and location of the archive file.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender.

Triggering Policies

Composite Triggering Policy

The CompositeTriggeringPolicy combines multiple triggering policies and returns true if any of the configured policies return true. The CompositeTriggeringPolicy is configured simply by wrapping other policies in a Policies element.

For example, the following XML fragment defines policies that rollover the log when the JVM starts, when the log size reaches twenty megabytes, and when the current date no longer matches the log’s start date.
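A sketch of that Policies fragment (the 20 MB size is the value named above; the rest is illustrative):

```xml
<Policies>
  <!-- Roll over on JVM start, at 20 MB, and when the date pattern changes -->
  <OnStartupTriggeringPolicy/>
  <SizeBasedTriggeringPolicy size="20 MB"/>
  <TimeBasedTriggeringPolicy/>
</Policies>
```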

OnStartup Triggering Policy

The OnStartupTriggeringPolicy policy takes no parameters and causes a rollover if the log file is older than the current JVM’s start time.

Google App Engine note:
When running in Google App Engine, the OnStartup policy causes a rollover if the log file is older than the time when Log4J initialized. (Google App Engine restricts access to certain classes so Log4J cannot determine JVM start time with java.lang.management.ManagementFactory.getRuntimeMXBean().getStartTime() and falls back to Log4J initialization time instead.)

SizeBased Triggering Policy

The SizeBasedTriggeringPolicy causes a rollover once the file has reached the specified size. The size can be specified in bytes, with the suffix KB, MB or GB, for example 20MB.

TimeBased Triggering Policy

The TimeBasedTriggeringPolicy causes a rollover once the date/time pattern no longer applies to the active file. This policy accepts an increment attribute which indicates how frequently the rollover should occur based on the time pattern and a modulate boolean attribute.

TimeBasedTriggeringPolicy Parameters

interval (integer): How often a rollover should occur based on the most specific time unit in the date pattern. For example, with a date pattern with hours as the most specific item and an increment of 4, rollovers would occur every 4 hours. The default value is 1.
modulate (boolean): Indicates whether the interval should be adjusted to cause the next rollover to occur on the interval boundary. For example, if the item is hours, the current hour is 3 am and the interval is 4, then the first rollover will occur at 4 am, and the next ones will occur at 8 am, noon, 4 pm, etc.

Rollover Strategies

Default Rollover Strategy

The default rollover strategy accepts both a date/time pattern and an integer from the filePattern attribute specified on the RollingFileAppender itself. If the date/time pattern is present it will be replaced with the current date and time values. If the pattern contains an integer it will be incremented on each rollover. If the pattern contains both a date/time and integer in the pattern the integer will be incremented until the result of the date/time pattern changes. If the file pattern ends with ".gz" or ".zip" the resulting archive will be compressed using the compression scheme that matches the suffix. The pattern may also contain lookup references that can be resolved at runtime such as is shown in the example below.

The default rollover strategy supports two variations for incrementing the counter. The first is the "fixed window" strategy. To illustrate how it works, suppose that the min attribute is set to 1, the max attribute is set to 3, the file name is "foo.log", and the file name pattern is "foo-%i.log".

Rollover 0. Active: foo.log. Archives: none. All logging is going to the initial file.
Rollover 1. Active: foo.log. Archives: foo-1.log. During the first rollover foo.log is renamed to foo-1.log. A new foo.log file is created and starts being written to.
Rollover 2. Active: foo.log. Archives: foo-1.log, foo-2.log. During the second rollover foo-1.log is renamed to foo-2.log and foo.log is renamed to foo-1.log. A new foo.log file is created and starts being written to.
Rollover 3. Active: foo.log. Archives: foo-1.log, foo-2.log, foo-3.log. During the third rollover foo-2.log is renamed to foo-3.log, foo-1.log is renamed to foo-2.log and foo.log is renamed to foo-1.log. A new foo.log file is created and starts being written to.
Rollover 4. Active: foo.log. Archives: foo-1.log, foo-2.log, foo-3.log. In the fourth and subsequent rollovers, foo-3.log is deleted, foo-2.log is renamed to foo-3.log, foo-1.log is renamed to foo-2.log and foo.log is renamed to foo-1.log. A new foo.log file is created and starts being written to.

By way of contrast, when the fileIndex attribute is set to "max" but all the other settings are the same, the following actions will be performed.

Rollover 0. Active: foo.log. Archives: none. All logging is going to the initial file.
Rollover 1. Active: foo.log. Archives: foo-1.log. During the first rollover foo.log is renamed to foo-1.log. A new foo.log file is created and starts being written to.
Rollover 2. Active: foo.log. Archives: foo-1.log, foo-2.log. During the second rollover foo.log is renamed to foo-2.log. A new foo.log file is created and starts being written to.
Rollover 3. Active: foo.log. Archives: foo-1.log, foo-2.log, foo-3.log. During the third rollover foo.log is renamed to foo-3.log. A new foo.log file is created and starts being written to.
Rollover 4. Active: foo.log. Archives: foo-1.log, foo-2.log, foo-3.log. In the fourth and subsequent rollovers, foo-1.log is deleted, foo-2.log is renamed to foo-1.log, foo-3.log is renamed to foo-2.log and foo.log is renamed to foo-3.log. A new foo.log file is created and starts being written to.

DefaultRolloverStrategy Parameters

fileIndex (String): If set to "max" (the default), files with a higher index will be newer than files with a smaller index. If set to "min", file renaming and the counter will follow the Fixed Window strategy described above.
min (integer): The minimum value of the counter. The default value is 1.
max (integer): The maximum value of the counter. Once this value is reached, older archives will be deleted on subsequent rollovers.
compressionLevel (integer): Sets the compression level, 0-9, where 0 = none, 1 = best speed, through 9 = best compression. Only implemented for ZIP files.

Below is a sample configuration that uses a RollingFileAppender with both the time and size based triggering policies, will create up to 7 archives on the same day (1-7) that are stored in a directory based on the current year and month, and will compress each archive using gzip:
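A sketch of such an appender (file names and layout pattern are placeholders; the per-day limit of 7 comes from the default max of the DefaultRolloverStrategy):

```xml
<RollingFile name="RollingFile" fileName="logs/app.log"
             filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
  <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
  <Policies>
    <!-- Roll on date change and when the file reaches 250 MB -->
    <TimeBasedTriggeringPolicy/>
    <SizeBasedTriggeringPolicy size="250 MB"/>
  </Policies>
</RollingFile>
```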

This second example shows a rollover strategy that will keep up to 20 files before removing them.
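The 20-file variant can be sketched by adding a DefaultRolloverStrategy with max="20" (other values are placeholders):

```xml
<RollingFile name="RollingFile" fileName="logs/app.log"
             filePattern="logs/app-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz">
  <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
  <SizeBasedTriggeringPolicy size="250 MB"/>
  <!-- Keep at most 20 archives before deleting the oldest -->
  <DefaultRolloverStrategy max="20"/>
</RollingFile>
```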

Below is a sample configuration that uses a RollingFileAppender with both the time and size based triggering policies, will create up to 7 archives on the same day (1-7) that are stored in a directory based on the current year and month, and will compress each archive using gzip and will roll every 6 hours when the hour is divisible by 6:
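The 6-hour variant differs only in the triggering policies; a sketch, assuming the filePattern uses hours (e.g. %d{yyyy-MM-dd-HH}) as its most specific unit:

```xml
<Policies>
  <!-- interval="6" with modulate="true" aligns rollovers to hours divisible by 6 -->
  <TimeBasedTriggeringPolicy interval="6" modulate="true"/>
  <SizeBasedTriggeringPolicy size="250 MB"/>
</Policies>
```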

RollingRandomAccessFileAppender

(Experimental, may replace RollingFileAppender in a future release.)

As of beta-9, the name of this appender has been changed from FastRollingFile to RollingRandomAccessFile. Configurations using the FastRollingFile element no longer work and should be modified to use the RollingRandomAccessFile element.

The RollingRandomAccessFileAppender is similar to the standard RollingFileAppender except it is always buffered (this cannot be switched off) and internally it uses a ByteBuffer + RandomAccessFile instead of a BufferedOutputStream. We saw a 20-200% performance improvement compared to RollingFileAppender with "bufferedIO=true" in our measurements. The RollingRandomAccessFileAppender writes to the File named in the fileName parameter and rolls the file over according to the TriggeringPolicy and the RolloverStrategy. Similar to the RollingFileAppender, RollingRandomAccessFileAppender uses a RollingRandomAccessFileManager to actually perform the file I/O and perform the rollover. While RollingRandomAccessFileAppenders from different Configurations cannot be shared, the RollingRandomAccessFileManagers can be if the Manager is accessible. For example, two web applications in a servlet container can have their own configuration and safely write to the same file if Log4j is in a ClassLoader that is common to both of them.

A RollingRandomAccessFileAppender requires a TriggeringPolicy and a RolloverStrategy. The triggering policy determines if a rollover should be performed while the RolloverStrategy defines how the rollover should be done. If no RolloverStrategy is configured, RollingRandomAccessFileAppender will use the DefaultRolloverStrategy.

File locking is not supported by the RollingRandomAccessFileAppender.

Flushing after every write is only useful when using this appender with synchronous loggers. Asynchronous loggers and appenders will automatically flush at the end of a batch of events, even if immediateFlush is set to false. This also guarantees the data is written to disk but is more efficient.

RollingRandomAccessFileAppender Parameters

bufferSize (int): The buffer size, defaults to 262,144 bytes (256 * 1024).
layout (Layout): The Layout to use to format the LogEvent.
name (String): The name of the Appender.
policy (TriggeringPolicy): The policy to use to determine if a rollover should occur.
strategy (RolloverStrategy): The strategy to use to determine the name and location of the archive file.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender.

Triggering Policies

Rollover Strategies

Below is a sample configuration that uses a RollingRandomAccessFileAppender with both the time and size based triggering policies, will create up to 7 archives on the same day (1-7) that are stored in a directory based on the current year and month, and will compress each archive using gzip:
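A sketch of such a configuration; it mirrors the RollingFileAppender example with the RollingRandomAccessFile element (file names and pattern are placeholders):

```xml
<RollingRandomAccessFile name="RollingFile" fileName="logs/app.log"
             filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
  <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
  <Policies>
    <TimeBasedTriggeringPolicy/>
    <SizeBasedTriggeringPolicy size="250 MB"/>
  </Policies>
</RollingRandomAccessFile>
```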

This second example shows a rollover strategy that will keep up to 20 files before removing them.

Below is a sample configuration that uses a RollingRandomAccessFileAppender with both the time and size based triggering policies, will create up to 7 archives on the same day (1-7) that are stored in a directory based on the current year and month, and will compress each archive using gzip and will roll every 6 hours when the hour is divisible by 6:

RoutingAppender

The RoutingAppender evaluates LogEvents and then routes them to a subordinate Appender. The target Appender may be an appender previously configured and may be referenced by its name or the Appender can be dynamically created as needed. The RoutingAppender should be configured after any Appenders it references to allow it to shut down properly.

RoutingAppender Parameters

filter (Filter): A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
name (String): The name of the Appender.
rewritePolicy (RewritePolicy): The RewritePolicy that will manipulate the LogEvent.
routes (Routes): Contains one or more Route declarations to identify the criteria for choosing Appenders.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender.

Routes

The Routes element accepts a single, required attribute named "pattern". The pattern is evaluated against all the registered Lookups and the result is used to select a Route. Each Route may be configured with a key. If the key matches the result of evaluating the pattern then that Route will be selected. If no key is specified on a Route then that Route is the default. Only one Route can be configured as the default.

Each Route must reference an Appender. If the Route contains an AppenderRef attribute then the Route will reference an Appender that was defined in the configuration. If the Route contains an Appender definition then an Appender will be created within the context of the RoutingAppender and will be reused each time a matching Appender name is referenced through a Route.

Below is a sample configuration that uses a RoutingAppender to route all Audit events to a FlumeAppender and all other events will be routed to a RollingFileAppender that captures only the specific event type. Note that the AuditAppender was predefined while the RollingFileAppenders are created as needed.
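A sketch of such a configuration (host, port, and layout details are placeholders; the key routing idea is the ${sd:type} lookup in the Routes pattern):

```xml
<Appenders>
  <!-- Predefined appender for Audit events -->
  <Flume name="AuditLogger" compress="true">
    <Agent host="192.168.10.101" port="8800"/>
    <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
  </Flume>
  <Routing name="Routing">
    <Routes pattern="$${sd:type}">
      <!-- Default route: a RollingFileAppender created on demand per event type -->
      <Route>
        <RollingFile name="Rolling-${sd:type}" fileName="${sd:type}.log"
                     filePattern="${sd:type}.%i.log.gz">
          <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
          <SizeBasedTriggeringPolicy size="500 KB"/>
        </RollingFile>
      </Route>
      <!-- Audit events go to the predefined Flume appender -->
      <Route AppenderRef="AuditLogger" key="Audit"/>
    </Routes>
  </Routing>
</Appenders>
```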

SMTPAppender

Sends an e-mail when a specific logging event occurs, typically on errors or fatal errors.

The number of logging events delivered in this e-mail depends on the value of the bufferSize option. The SMTPAppender keeps only the last bufferSize logging events in its cyclic buffer. This keeps memory requirements at a reasonable level while still delivering useful application context. All events in the buffer are included in the email. The buffer will contain the most recent events of level TRACE to WARN preceding the event that triggered the email.

The default behavior is to trigger sending an email whenever an ERROR or higher severity event is logged and to format it as HTML. The circumstances on when the email is sent can be controlled by setting one or more filters on the Appender. As with other Appenders, the formatting can be controlled by specifying a Layout for the Appender.

SMTPAppender Parameters

bcc (String): The comma-separated list of BCC email addresses.
cc (String): The comma-separated list of CC email addresses.
bufferSize (integer): The maximum number of log events to be buffered for inclusion in the message. Defaults to 512.
filter (Filter): A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
from (String): The email address of the sender.
layout (Layout): The Layout to use to format the LogEvent. The default is SerializedLayout.
name (String): The name of the Appender.
replyTo (String): The comma-separated list of reply-to email addresses.
smtpDebug (boolean): When set to true enables session debugging on STDOUT. Defaults to false.
smtpHost (String): The SMTP hostname to send to. This parameter is required.
smtpPassword (String): The password required to authenticate against the SMTP server.
smtpPort (integer): The SMTP port to send to.
smtpProtocol (String): The SMTP transport protocol (such as "smtps", defaults to "smtp").
smtpUsername (String): The username required to authenticate against the SMTP server.
ignoreExceptions (boolean): The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender.
to (String): The comma-separated list of recipient email addresses.
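For illustration, a minimal SMTPAppender sketch using only parameters listed above (all addresses and the host are placeholders):

```xml
<SMTP name="Mail" to="errors@example.com" from="app@example.com"
      smtpHost="smtp.example.com" smtpPort="25" bufferSize="50"/>
```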

SocketAppender

The SocketAppender is an OutputStreamAppender that writes its output to a remote destination specified by a host and port. The data can be sent over either TCP or UDP and can be sent in any format. The default format is to send a Serialized LogEvent. Log4j 2 contains a SocketServer which is capable of receiving serialized LogEvents and routing them through the logging system on the server. You can optionally secure communication with SSL.

This is an unsecured TCP configuration:
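A sketch, assuming a SocketServer listening on port 9500 of localhost and the default serialized format:

```xml
<Appenders>
  <!-- Sends serialized LogEvents over plain TCP -->
  <Socket name="socket" host="localhost" port="9500">
    <SerializedLayout/>
  </Socket>
</Appenders>
```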

This is a secured SSL configuration:
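A sketch of the SSL variant; the keystore and truststore locations and passwords are placeholders:

```xml
<Appenders>
  <Socket name="socket" host="localhost" port="9500">
    <SerializedLayout/>
    <!-- SSL element supplies key material for the TLS handshake -->
    <SSL>
      <KeyStore location="log4j2-keystore.jks" password="changeme"/>
      <TrustStore location="truststore.jks" password="changeme"/>
    </SSL>
  </Socket>
</Appenders>
```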

SyslogAppender

The SyslogAppender is a SocketAppender that writes its output to a remote destination specified by a host and port in a format that conforms with either the BSD Syslog format or the RFC 5424 format. The data can be sent over either TCP or UDP.

Below is a sample configuration with two SyslogAppenders, one using the BSD format and one using RFC 5424.
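A sketch of that pair of appenders (hosts, ports, appName, and facility are placeholders):

```xml
<Appenders>
  <!-- BSD syslog format over TCP -->
  <Syslog name="bsd" host="localhost" port="514" protocol="TCP"/>
  <!-- RFC 5424 structured format over TCP -->
  <Syslog name="RFC5424" host="localhost" port="8514" protocol="TCP"
          format="RFC5424" appName="MyApp" id="App"
          facility="LOCAL0" newLine="true" messageId="Audit" includeMDC="true"/>
</Appenders>
```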

For SSL this appender writes its output to a remote destination specified by a host and port over SSL in a format that conforms with either the BSD Syslog format or the RFC 5424 format.
