Logging
Configuring Logging Systems
The A-Stack runtime supports a number of different logging systems:
- SffLog - the A-Stack-provided logging system
- log4j - via OPS4j Pax Logging
- slf4j
- Jakarta Commons Logging
- Avalon
- JUL (java.util.logging)
The following sections describe how to configure the different logging systems.
    <ConfigurationAdmin service.pid="sff.log">
      sff.log=enabled,silent
    </ConfigurationAdmin>
    <ConfigurationAdmin service.pid="org.ops4j.pax.logging">
      log4j.rootLogger=[:sff.log.level:], R
      log4j.appender.R=org.apache.log4j.RollingFileAppender
      log4j.appender.R.File=[:sff.logs.dir:]/engine.log
      log4j.appender.R.DatePattern='.'yyyy-MM-dd
      log4j.appender.R.MaxFileSize=20Mb
      # Keep up to 20 backup files
      log4j.appender.R.MaxBackupIndex=20
      log4j.appender.R.layout=org.apache.log4j.PatternLayout
      log4j.appender.R.layout.ConversionPattern=[%d] %p %c [%t] %m%n
    </ConfigurationAdmin>
Logging Notes
- Any OSGi service can be configured by its service.pid (i.e. persistent ID; see the OSGi specifications for details). Only service.pid is supported at this time; the configuration is written as shown in the sections above.
- ConfigurationAdmin is the standard OSGi service responsible for configuring other services. The configuration target is defined by the service.pid="org.ops4j.pax.logging" attribute; in this case we configure pax/log4j logging. The argument format is the same properties format or XML as described above.
- More than one service can be configured.
- A-Stack's own SffLog can be configured this way, as shown in the code above.
- You can control it on a per-level-attribute basis (e.g. sff.log.trace.enabled=true, sff.log.trace.silent=true), on a per-level basis (e.g. sff.log.trace=enabled,silent), or all at once (e.g. sff.log=silent).
- All the logs posted to sff.log will also appear in log4j, but not vice versa, so sff.log remains our main logging system.
- Note that the two logging systems are configured independently; if you configure log4j with a console appender and leave sff.log loud, each message will appear on your console twice, in different formats: once from sff.log and once from log4j.
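Combining the per-level controls described above, a sketch of an sff.log configuration (the particular levels and values chosen here are illustrative, not defaults):

```xml
<ConfigurationAdmin service.pid="sff.log">
  # whole system: enabled and loud
  sff.log=enabled,loud
  # per level: trace enabled but silent
  sff.log.trace=enabled,silent
  # per level attribute: debug explicitly enabled
  sff.log.debug.enabled=true
</ConfigurationAdmin>
```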
The example log4j configuration above only writes to a log file and leaves console output to sff.log. Adding a log4j console appender (e.g. log4j.rootLogger=INFO, A1,FILE) and setting sff.log=silent will make log4j handle both console and file logging.
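A sketch of that combined setup; the appender names A1 and FILE come from the rootLogger example above, and the console appender's layout is an assumption mirroring the file example:

```xml
<ConfigurationAdmin service.pid="sff.log">
  sff.log=silent
</ConfigurationAdmin>
<ConfigurationAdmin service.pid="org.ops4j.pax.logging">
  log4j.rootLogger=[:sff.log.level:], A1, FILE
  # console appender
  log4j.appender.A1=org.apache.log4j.ConsoleAppender
  log4j.appender.A1.layout=org.apache.log4j.PatternLayout
  log4j.appender.A1.layout.ConversionPattern=[%d] %p %c [%t] %m%n
  # rolling file appender, as in the example above
  log4j.appender.FILE=org.apache.log4j.RollingFileAppender
  log4j.appender.FILE.File=[:sff.logs.dir:]/engine.log
  log4j.appender.FILE.MaxFileSize=20Mb
  log4j.appender.FILE.MaxBackupIndex=20
  log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
  log4j.appender.FILE.layout.ConversionPattern=[%d] %p %c [%t] %m%n
</ConfigurationAdmin>
```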
Logging Enable Configuration Matrix
| SffLog | Log4j Configured | Result | Comments |
|---|---|---|---|
| sff.log=disabled,silent | YES | No logs | |
| sff.log=enabled,silent | YES; sff.log.level="INFO" | Only INFO-level log statements, as specified in the log4j appender | Absolutely no log file at all |
| sff.log=enabled,loud | YES; sff.log.level="INFO" | INFO, ERROR and WARN statements are printed in the log | Note that sff.log.level is passed into the rootLogger value of the log4j ConfigurationAdmin |
| sff.log=enabled,loud | YES; sff.log.level="DEBUG" | INFO, ERROR, WARN and DEBUG statements are printed in the log | |
| sff.log=enabled,loud sff.log.trace=enabled | YES; sff.log.level="TRACE" | Trace is enabled; trace lets you view protocol-level details, for example | This is how trace can be enabled |
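Putting the last row of the matrix into configuration form, a sketch (it assumes sff.log.level resolves to TRACE wherever that property is defined, since it is passed into the log4j rootLogger):

```xml
<ConfigurationAdmin service.pid="sff.log">
  sff.log=enabled,loud
  sff.log.trace=enabled
</ConfigurationAdmin>
```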
Logging Cost
- Note that log statements whose message contains Template Processing (TP) are evaluated regardless of level.
- Only the TP is processed; the strings are NOT generated if the level is not enabled.
- There is potential to also skip TP based on level. (Enhancement)
Broadcast Log Message
A-Stack allows users to provide a facet ID on which all log messages can be broadcast.
To configure broadcasting of messages:
    <SffLocalConfig>
      <sff.log.broadcast.all>loggerfacet</sff.log.broadcast.all>
      <sff.log.broadcast.message>json</sff.log.broadcast.message>
    </SffLocalConfig>
Explanation
- sff.log.broadcast.<log level name> defines one or more facet IDs used for log broadcasting. That is:
- <sff.log.broadcast.error>[:MyErrorsFacet:]</sff.log.broadcast.error> - broadcasts error log messages to the [randomized] [:MyErrorsFacet:] fid
- <sff.log.broadcast.all>wstest</sff.log.broadcast.all> - broadcasts all log messages to the wstest fid
You can specify more than one fid as a comma-separated list.
- Known log levels are:
- text
- trace
- debug
- info
- warn
- error
- fatal
- all
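For example, broadcasting errors to two facet IDs while everything else goes to one; a sketch, where the facet names are illustrative:

```xml
<SffLocalConfig>
  <sff.log.broadcast.error>[:MyErrorsFacet:],wstest</sff.log.broadcast.error>
  <sff.log.broadcast.all>loggerfacet</sff.log.broadcast.all>
</SffLocalConfig>
```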
sff.log.broadcast.protocol. The default value is "ws,wss". This is the list of protocols used to select broadcast pipelines; that is, it will *not* broadcast on HTTP connections, for example. When broadcast is enabled, startup logs the list of protocols so you know it was enabled. It will *not*, however, print the list of facet IDs used for broadcast.
sff.log.broadcast.message. This is the log message definition. It may come in different forms:

    <sff.log.broadcast.message>json</sff.log.broadcast.message>

This simplest form allows you to define the log message type (e.g. xml or json). The default is xml.
Default message in JSON format will look like this:
    {
      "Log": {
        "Level": "WARN",
        "Source": "SffSequenceFacet",
        "Timestamp": 1479334740055,
        "Message": "startSchedularForMonitorStatus:MonitorStatus 1:0 @ Wed Nov 16 14:19:00 PST 2016: Scheduled: skip past (-55 ms) event for Wed Nov 16 14:19:00 PST 2016 [0../30sec]"
      }
    }
In XML Format:
    <Log>
      <Level>WARN</Level>
      <Source>SffSequenceFacet</Source>
      <Timestamp>1479334740055</Timestamp>
      <Message>startSchedularForMonitorStatus:MonitorStatus 1:0 @ Wed Nov 16 14:19:00 PST 2016: Scheduled: skip past (-55 ms) event for Wed Nov 16 14:19:00 PST 2016 [0../30sec]</Message>
    </Log>
In CDM Format:
Log messages are automatically added to the logger window on TQLConsole.

    # Log:
      Level: WARN
      Source: SffSequenceFacet
      Timestamp: 1479334740055
      Message: "startSchedularForMonitorStatus:MonitorStatus 1:0 @ Wed Nov 16 14:19:00 PST 2016: Scheduled: skip past (-55 ms) event for Wed Nov 16 14:19:00 PST 2016 [0../30sec]"
Application Monitoring Using Broadcast Facet
Using the broadcast facet, users can build monitoring applications, or applications that need to know, for example, whether the runtime has started.
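Because the broadcast message format is fixed (see the JSON example above), a monitoring client can parse each message generically. A minimal sketch in Python, assuming the client has already received the message text over its WebSocket connection (the transport itself is omitted, and the alert policy is illustrative):

```python
import json

def parse_log_message(raw: str) -> dict:
    """Parse a broadcast log message in the documented JSON format."""
    log = json.loads(raw)["Log"]
    return {
        "level": log["Level"],
        "source": log["Source"],
        "timestamp": log["Timestamp"],
        "message": log["Message"],
    }

def is_alert(entry: dict) -> bool:
    # Treat ERROR and FATAL as alert-worthy; this policy is an assumption.
    return entry["level"] in ("ERROR", "FATAL")

# Example message in the documented format
raw = ('{"Log": {"Level": "WARN", "Source": "SffSequenceFacet", '
       '"Timestamp": 1479334740055, "Message": "startSchedularForMonitorStatus..."}}')
entry = parse_log_message(raw)
print(entry["level"], is_alert(entry))
```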
Elastic Search Integration
- Download Elasticsearch, Kibana, Filebeat and Logstash from the link below:
- http://sandbox.atomiton.com:8080/fid-downloads/res/downloads/loganalysis.zip
- Unzip, and update the Filebeat configuration to read the log file from the given location
- Path - filebeat/filebeat.yml
- Add the lines below for each log file:
    - input_type: log
      paths:
        - /<Path>/deviceEngine.log
      fields: {host: "172.31.48.38", port: "8085", type: "logs"}
      exclude_lines: [".*SffReport:.*"]

    - input_type: log
      paths:
        - /<Path>/deviceEngine.log
      fields: {host: "172.31.48.38", port: "8085", type: "SffReport"}
      include_lines: [".*SffReport:.*"]
- Start each service using the commands below:
- Elasticsearch
  - Path to execute command - elasticsearch/bin
  - Command - nohup ./elasticsearch > /dev/null &
- Logstash
  - Path to execute command - logstash/logstash/bin
  - Command - nohup ./logstash -f logstash-engine.conf > /dev/null &
- Filebeat
  - Path to execute command - filebeat/
  - Command - nohup ./filebeat > nohup.out &
- Kibana
  - Path to execute command - kibana/bin
  - Command - nohup ./kibana > nohup.out &
- Kibana will start on port 5601
- URL to access Kibana - http://<host>:5601/app/kibana