Logging

Configuring Logging Systems

A-Stack runtime supports a number of different logging systems:

  1. SffLog - the A-Stack provided logging system
  2. log4j - via OPS4J PAX Logging
  3. slf4j
  4. Jakarta Commons Logging
  5. Avalon
  6. JUL (java.util.logging)


The following sections describe the configuration of the different logging systems.

Configuring sff.log
<ConfigurationAdmin service.pid="sff.log">
      sff.log=enabled,silent
</ConfigurationAdmin>
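The same service.pid also accepts the per-level and per-attribute forms described in the Logging Notes below (note 5). A minimal sketch, assuming those keys can be set through the same ConfigurationAdmin block:

Configuring sff.log per level attribute (sketch)
<ConfigurationAdmin service.pid="sff.log">
      sff.log.trace.enabled=true
      sff.log.trace.silent=true
</ConfigurationAdmin>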
Configuring PAX Logging
<ConfigurationAdmin service.pid="org.ops4j.pax.logging">
                log4j.rootLogger=[:sff.log.level:], R
                log4j.appender.R=org.apache.log4j.RollingFileAppender
                log4j.appender.R.File=[:sff.logs.dir:]/engine.log
                log4j.appender.R.DatePattern='.'yyyy-MM-dd
                log4j.appender.R.MaxFileSize=20Mb
                # Keep up to 20 backup files
                log4j.appender.R.MaxBackupIndex=20
                log4j.appender.R.layout=org.apache.log4j.PatternLayout
                log4j.appender.R.layout.ConversionPattern=[%d] %p %c [%t] %m%n
 </ConfigurationAdmin>

Logging Notes

  1. Any OSGi service can be configured by its service.pid (i.e. persistent ID; see the OSGi specifications for more details). Only service.pid is supported at this time. The way to do it is described in the sections above.
  2. ConfigurationAdmin is the standard OSGi service responsible for configuring other services. The configuration target is defined by the service.pid="org.ops4j.pax.logging" attribute; in this case we configure pax/log4j logging. The argument format is the same properties format or XML as described above.
  3. More than one service can be configured.
  4. A-Stack's own SffLog can be configured the same way, as shown in the code above.
  5. You can control it on a per-level-attribute basis (e.g. sff.log.trace.enabled=true, sff.log.trace.silent=true), a per-level basis (e.g. sff.log.trace=enabled,silent) or the whole thing altogether (e.g. sff.log=silent).
  6. All logs posted to sff.log will also appear in log4j, but not vice versa, so sff.log remains the main logging system.
  7. Note that the two logging systems are configured independently. Therefore, if you configure log4j with a console appender and leave sff.log loud, each message will be posted to your console twice, in different formats: once from sff.log and once from log4j.
  8. The example log4j configuration above only outputs to a log file and leaves console output to sff.log. Adding a log4j console appender (e.g. log4j.rootLogger=INFO, A1, FILE) and setting sff.log=silent will use log4j for both console and file logging, as shown in the sketch below.
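A minimal sketch of the setup described in note 8, assuming the appender names A1 (console) and FILE are free to choose:

Console and file logging via log4j (sketch)
<ConfigurationAdmin service.pid="sff.log">
      sff.log=enabled,silent
</ConfigurationAdmin>
<ConfigurationAdmin service.pid="org.ops4j.pax.logging">
                log4j.rootLogger=[:sff.log.level:], A1, FILE
                log4j.appender.A1=org.apache.log4j.ConsoleAppender
                log4j.appender.A1.layout=org.apache.log4j.PatternLayout
                log4j.appender.A1.layout.ConversionPattern=[%d] %p %c [%t] %m%n
                log4j.appender.FILE=org.apache.log4j.RollingFileAppender
                log4j.appender.FILE.File=[:sff.logs.dir:]/engine.log
                log4j.appender.FILE.MaxFileSize=20Mb
                log4j.appender.FILE.MaxBackupIndex=20
                log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
                log4j.appender.FILE.layout.ConversionPattern=[%d] %p %c [%t] %m%n
</ConfigurationAdmin>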

Logging Enable Configuration Matrix

SffLog                                      | Log4j Configured           | Result                                                               | Comments
sff.log=disabled,silent                     | YES                        | No logs                                                              | If sff.log is marked disabled, no logs are generated; silent or loud does not matter. Note that a few messages may be printed while the service is getting initialized.
sff.log=enabled,silent                      | YES; sff.log.level="INFO"  | Only INFO-level log statements, as specified in the log4j appender   | Absolutely no log file at all
sff.log=enabled,loud                        | YES; sff.log.level="INFO"  | INFO, ERROR and WARN statements are printed in the log               | Note that sff.log.level is passed into the rootLogger value of the log4j ConfigurationAdmin
sff.log=enabled,loud                        | YES; sff.log.level="DEBUG" | INFO, ERROR, WARN and DEBUG statements are printed in the log        |
sff.log=enabled,loud; sff.log.trace=enabled | YES; sff.log.level="TRACE" | Trace will be enabled; trace allows you to view protocol-level details, for example | This is how trace can be enabled
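A minimal sketch of the trace row above, assuming sff.log.level is the same property that is substituted into the log4j rootLogger (where it is defined depends on your deployment) and is set to "TRACE":

Enabling trace (sketch)
<ConfigurationAdmin service.pid="sff.log">
      sff.log=enabled,loud
      sff.log.trace=enabled
</ConfigurationAdmin>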

Logging Cost

  1. Note that log statements that contain Template Processing (TP) in the message are evaluated independent of the log level.
  2. Note that only the TP is processed; the resulting strings are NOT generated if the level is not enabled.
  3. There is potential to also skip TP based on the level as an optimization. (Enhancement)

Broadcast Log Message

A-Stack allows users to provide a facet ID to which all log messages can be broadcast.


To configure broadcast of log messages:

Broadcast Log Messages Configuration
<SffLocalConfig>
    <sff.log.broadcast.all>loggerfacet</sff.log.broadcast.all>
    <sff.log.broadcast.message>json</sff.log.broadcast.message>
</SffLocalConfig>

Explanation

  • sff.log.broadcast.<log level name> defines one or more facet IDs used for log broadcasting. That is:
    •  <sff.log.broadcast.error>[:MyErrorsFacet:]</sff.log.broadcast.error> - this will broadcast error log messages to the [randomized] [:MyErrorsFacet:] fid.
    • <sff.log.broadcast.all>wstest</sff.log.broadcast.all> - this will broadcast all log messages to the wstest fid.
      You can specify more than one fid as a comma-separated list.
    • Known log levels are
      1. text
      2. trace
      3. debug
      4. info
      5. warn
      6. error
      7. fatal
      8. all
  • sff.log.broadcast.protocol. The default value is "ws,wss". This is the list of protocols used to select broadcast pipelines; that is, it will *not* broadcast on HTTP connections, for example. When broadcast is enabled, startup will log the list of protocols so you know it was enabled. Obviously, it will *not* print the list of facet IDs used for broadcast.

  • sff.log.broadcast.message. This is the log message definition. It may come in different forms:

    • <sff.log.broadcast.message>json</sff.log.broadcast.message> - this simplest form allows you to define the log message type (e.g. xml or json). The default is xml. A combined configuration sketch appears after the message format samples below.

  • The default message in JSON format will look like this:

    Default Message
    {
      "Log":
      {
        "Level":"WARN",
        "Source":"SffSequenceFacet",
        "Timestamp":1479334740055,
        "Message":"startSchedularForMonitorStatus:MonitorStatus 1:0 @ Wed Nov 16 14:19:00 PST 2016: Scheduled: skip past (-55 ms) event for Wed Nov 16 14:19:00 PST 2016 [0../30sec]"
      }
    }

    In XML Format:

    Default Message
    <Log>
      <Level>WARN</Level>
      <Source>SffSequenceFacet</Source>
      <Timestamp>1479334740055</Timestamp>
      <Message>startSchedularForMonitorStatus:MonitorStatus 1:0 @ Wed Nov 16 14:19:00 PST 2016: Scheduled: skip past (-55 ms) event for Wed Nov 16 14:19:00 PST 2016 [0../30sec]</Message>
    </Log>

    In CDM Format:

    Default Message
    #
    Log:
      Level: WARN
      Source: SffSequenceFacet
      Timestamp: 1479334740055
      Message: "startSchedularForMonitorStatus:MonitorStatus 1:0 @ Wed Nov 16 14:19:00 PST 2016: Scheduled: skip past (-55 ms) event for Wed Nov 16 14:19:00 PST 2016 [0../30sec]"
    Log messages are automatically added to the logger window in TQLConsole.
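A combined sketch of the broadcast settings described above, using only the documented keys; the facet IDs MyErrorsFacet and loggerfacet are just examples:

Combined broadcast configuration (sketch)
<SffLocalConfig>
    <sff.log.broadcast.error>[:MyErrorsFacet:]</sff.log.broadcast.error>
    <sff.log.broadcast.all>loggerfacet</sff.log.broadcast.all>
    <sff.log.broadcast.protocol>ws,wss</sff.log.broadcast.protocol>
    <sff.log.broadcast.message>json</sff.log.broadcast.message>
</SffLocalConfig>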

Application Monitoring Using Broadcast Facet

Using the broadcast facet, users can build monitoring applications, or applications that need to know, for example, whether the Runtime has started.
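For example, the runtime can be pointed at a dedicated facet that a monitoring application then subscribes to over ws/wss and watches for the broadcast messages shown above. A minimal sketch of the runtime side, assuming a facet ID of MonitorFacet (the subscription mechanics on the application side are not covered here):

Broadcast facet for monitoring (sketch)
<SffLocalConfig>
    <sff.log.broadcast.info>MonitorFacet</sff.log.broadcast.info>
    <sff.log.broadcast.message>json</sff.log.broadcast.message>
</SffLocalConfig>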



Elastic Search Integration


  1. Download Elastic Search, Kibana, Filebeat and Logstash from the link below:
     http://sandbox.atomiton.com:8080/fid-downloads/res/downloads/loganalysis.zip
  2. Unzip and update the Filebeat configuration to read the log files from the given locations:
    1. Path - filebeat/filebeat.yml
    2. Add the lines below for each log file:

- input_type: log
  paths:
    - /<Path>/deviceEngine.log
  fields: {host: "172.31.48.38", port: "8085", type: "logs"}
  exclude_lines: [".*SffReport:.*"]

####################################

- input_type: log
  paths:
    - /<Path>/deviceEngine.log
  fields: {host: "172.31.48.38", port: "8085", type: "SffReport"}
  include_lines: [".*SffReport:.*"]
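The snippet above only defines the inputs. The filebeat.yml in the downloaded bundle should also contain an output section pointing at Logstash; a minimal sketch, assuming Logstash listens on the default Beats port 5044 on the same host:

output.logstash:
  hosts: ["localhost:5044"]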

  3. Start each service using the commands below:
    1. Elastic Search:
      1. Path to execute command - elasticsearch/bin
      2. Command - nohup ./elasticsearch > /dev/null &
    2. Logstash (see the logstash-engine.conf sketch at the end of this section):
      1. Path to execute command - logstash/logstash/bin
      2. Command - nohup ./logstash -f logstash-engine.conf > /dev/null &
    3. Filebeat:
      1. Path to execute command - filebeat/
      2. Command - nohup ./filebeat > nohup.out &
    4. Kibana:
      1. Path to execute command - kibana/bin
      2. Command - nohup ./kibana > nohup.out &
  4. Kibana will start on port 5601.
  5. URL to access Kibana - http://<host>:5601/app/kibana
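The contents of logstash-engine.conf are not shown on this page and come preconfigured in the downloaded bundle. For orientation, a minimal sketch of what such a file typically looks like, assuming Filebeat ships to the default Beats port 5044 and Elasticsearch runs locally on port 9200 (the index name astack-logs is just an example):

logstash-engine.conf (sketch)
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "astack-logs-%{+YYYY.MM.dd}"
  }
}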