



Gliffy diagram: Alarm Hierarchy

Cluster Monitoring Model Description

AlarmCondition Model


  1. Sid: Unique ID
  2. Name: Name of the Rule
  3. Description: Description and purpose of the Rule
  4. Enabled: Identifies whether the rule must be executed (Values: true/false)
  5. LastRun: Stores the last time the rule was executed
  6. LastStatus: Stores the result of the last run (Values: Success/ERRCON1/ERRCON2...). In case of error, the time between subsequent executions can be increased and the Rule disabled with a fatal message at that point. The value will be the most severe failure in case of multiple conditions.
  7. CheckCondition: 1..* Conditions to be executed for this rule (e.g. the presence of a facet and an HTTP connection check for a task group)
  8. isRecoverable: Indicates if the rule has a recovery action
  9. RecoveryAction: Recovery Action
  10. Type: The type of condition to be tested
  11. ComponentName: A logical grouping of Alarm conditions based on the target application's needs
  12. AlertCount: Count of successive Alerts raised for this Condition (reset when the Condition is successful)
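
For illustration, a single AlarmCondition serialized as JSON might look roughly like the example below. The nesting, field values, and serialization format are assumptions made for readability, not necessarily the system's actual persisted format.

Code Block
languagejs
{
   "AlarmCondition":{
      "Sid":"AC-001",
      "Name":"Test_Endpoint_Facet",
      "Description":"Checks that the Topic facet is present and active",
      "Enabled":"true",
      "Type":"Facet",
      "ComponentName":"Ingestion",
      "LastRun":"2018-03-16T04:51:54Z",
      "LastStatus":"Success",
      "AlertCount":"0",
      "isRecoverable":"false",
      "CheckCondition":{
         "Protocol":"http",
         "URLHost":"127.0.0.1",
         "URLPort":"8080",
         "FacetName":"fid-TopicFacet"
      }
   }
}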


ConditionDef


  1. Sid: Unique Id
  2. Protocol: http or https
  3. URLHost: HostName or IP Address
  4. URLPort: Port Number
  5. URLEndpoint: Target endpoint (e.g. fid-TopicFacet)

Types and attributes:

The ConditionDef will have each of these attributes, but they will be populated based on the type of the Condition.

Type: Facet

Test: Check if the Facet is running


  1. FacetName: Name of the facet to be checked
  2. URL: The URL to be called


Type: HTTP

Test: Check if the HTTP endpoint is UP


  1. URL: The URL to be called
  2. Query: The query to be executed
  3. TimeOut: Auto connection timeout for the request
  4. Headers (optional): 0..* headers as key/value pairs
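
To make the type-specific population concrete, a ConditionDef for an HTTP-type check could combine the base attributes with the HTTP-specific ones above. This is only a sketch; the exact field names, value formats, and nesting are assumptions.

Code Block
languagejs
{
   "ConditionDef":{
      "Sid":"CD-002",
      "Protocol":"http",
      "URLHost":"10.0.0.12",
      "URLPort":"8080",
      "URLEndpoint":"fid-TopicFacet",
      "Query":"SAMPLE_HEALTH_CHECK_QUERY",
      "TimeOut":"30s",
      "Headers":{
         "Content-Type":"application/xml"
      }
   }
}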


Type: WS

Test: Check if the websocket endpoint is UP


  1. URL: The URL to be called
  2. Query: The query to be executed
  3. TimeOut: Auto connection timeout for the request


Type: Sequence

Test: Check if the sequence is executing


  1. URL: The URL to be called
  2. SequenceName: The sequence to be tested


Type: Log

Test: Check the log for matching messages


  1. Message: Message to be checked
  2. Source: Source of the message
  3. Level: Level of the message


RecoveryAction Model


  1. Sid: Unique Id
  2. RuleId: Identifies the Rule this condition belongs to
  3. Type: The type of recovery action to be taken
  4. Active: (True/False) A recovery action becomes active when the Condition is successful, and inactive when the condition fails. This prevents the recovery from running before the condition is up and running. It also prevents multiple attempts at recovery, especially when a previous recovery is still in progress.
  5. AttemptAfterCount: The number of alerts after which recovery must be attempted. If the value is 1, recovery will be attempted instantly.

Type: ExecuteQuery

Action: Execute the given query at the URL to attempt recovery


  1. URL: The URL to be called
  2. Query: Query to be executed
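
Combining the RecoveryAction model with the ExecuteQuery attributes above, a recovery action record might look like the following sketch; the values and structure are illustrative assumptions.

Code Block
languagejs
{
   "RecoveryAction":{
      "Sid":"RA-001",
      "RuleId":"AC-001",
      "Type":"ExecuteQuery",
      "Active":"true",
      "AttemptAfterCount":"3",
      "URL":"http://10.0.0.12:8080/fid-TopicFacet",
      "Query":"SAMPLE_RECOVERY_QUERY"
   }
}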


Type: Script

Action: Execute the script at the given location to attempt recovery


  1. ScriptLocation: Location of the script to be executed

Alert Model


  1. Id: Unique identifier for the Alert
  2. AlarmId: Id of the Alarm that failed, causing an alert to be raised
  3. ClusterId: Id of the daemon on which the Alert was created (only on the Mgmt backend, absent on the daemon)
  4. InstanceName: Name of the node for which the Alert was raised
  5. RaisedDate: The timestamp of the alert
  6. Cause: Reason for the failure
  7. DetailedMessage: Detailed message, if there is one
  8. Level: Level of the error (Error/Fatal)
  9. HasRead: Indicates whether the Alert is new or has been read
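
As an example, a single Alert record could look like the sketch below; all values shown are hypothetical.

Code Block
languagejs
{
   "Alert":{
      "Id":"AL-1001",
      "AlarmId":"AC-001",
      "ClusterId":"Cluster-1",
      "InstanceName":"Instance-1",
      "RaisedDate":"2018-03-16T04:53:12Z",
      "Cause":"Facet fid-TopicFacet not found",
      "DetailedMessage":"HTTP 404 returned by http://10.0.0.12:8080/fid-TopicFacet",
      "Level":"Error",
      "HasRead":"false"
   }
}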


The following steps are performed on the daemon (a sketch of this per-condition loop appears after the list):

  1. Rules are added by the user through the API or a CSV file upload
  2. A job is executed every minute that does the following:
    1. Picks up an AlarmCondition
    2. Executes the condition
    3. On Success
      1. Updates LastRun with the current time and LastStatus to 'Success'
    4. On Failure
      1. Updates LastRun with the current time and LastStatus to 'Failure'
      2. Generates an alert with the Condition, ClusterId, InstanceName, and error reason
      3. If a Recovery Action is present, tries to execute the action; recovery could be to:
        1. Try and restart the failing component
        2. Try and restart the A-Stack engine on the failing node
  3. Finds new alerts (newer than the last run) and sends an email notification for each alert
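
The per-condition logic above can be summarized with a short JavaScript sketch. The helper names executeCondition, raiseAlert, and runRecovery are hypothetical placeholders for the daemon's internals; the sketch only illustrates the control flow and is not the actual implementation.

Code Block
languagejs
// Illustrative sketch of the per-minute evaluation of one AlarmCondition.
// Assumes AlertCount and AttemptAfterCount are numeric fields.
function evaluateAlarmCondition(condition) {
  var result = executeCondition(condition.CheckCondition); // hypothetical: runs the type-specific test
  condition.LastRun = new Date().toISOString();

  if (result.success) {
    condition.LastStatus = "Success";
    condition.AlertCount = 0;                      // reset on success
    if (condition.RecoveryAction) {
      condition.RecoveryAction.Active = true;      // recovery is re-armed once the condition is healthy
    }
    return;
  }

  condition.LastStatus = "Failure";
  condition.AlertCount += 1;

  // An alert is raised on the first transition from Success to Failure.
  if (condition.AlertCount === 1) {
    raiseAlert(condition, result.errorReason);     // hypothetical: creates the Alert record
  }

  var action = condition.RecoveryAction;
  if (condition.isRecoverable && action && action.Active &&
      condition.AlertCount >= action.AttemptAfterCount) {
    action.Active = false;                         // avoid overlapping recovery attempts
    runRecovery(action);                           // hypothetical: restart the component or the A-Stack engine
  }
}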

Alarm State Transition Diagram

Gliffy diagram: Alarm State Transition

Provisioning Alarms Using CSV File

Each column in the CSV file is described below.

  • ClusterId (Mandatory): Defines the cluster the AlarmCondition belongs to; the condition is deployed on the corresponding daemon. It is the only positional field and must always be the first column. All other columns can be shuffled, as long as the name is correct. An alarm with a non-existent ClusterId will be ignored.
  • InstanceName (Mandatory): Defines the instance this AlarmCondition belongs to in the Cluster.
  • ComponentName (Mandatory): A logical grouping of AlarmConditions within an Instance. It can be any name that is meaningful to the application. All AlarmConditions with the same component will be grouped together on the dashboard.
  • Name (Mandatory): A meaningful name for an AlarmCondition; it must be unique within an instance in a Cluster. Duplicate entries will simply be ignored.
  • Description (Optional): A meaningful description for an AlarmCondition.
  • Enabled (Mandatory): A boolean field that indicates whether this AlarmCondition is enabled. Only enabled Conditions will be executed to check for success or failure. An AlarmCondition can be enabled or disabled using the update AlarmCondition mechanism.
  • Type (Mandatory): Defines the type of condition being tested. The currently supported types are "Facet", "HTTP", "Sequence", "WS", "Log", and "Info" (all case sensitive):
    - Facet: Tests if a facet is present and active.
    - HTTP: Tests if an endpoint is up and responding.
    - Sequence: Tests if a scheduled job is present.
    - WS: Tests if a websocket is up and responsive.
    - Log: Monitors the log files for error and fatal conditions.
    - Info: Monitors critical parameters like NullChannel and FreeChannel.
  • CheckCondition.Protocol (Mandatory): The protocol for the test.
  • CheckCondition.URLHost (Mandatory): The IP address of the host to be tested.
  • CheckCondition.URLPort (Mandatory): The port at which the service is running.
  • CheckCondition.URLEndpoint (Conditional): The endpoint at which the test is to be performed. Mandatory for types: HTTP, Sequence, Log, WS.
  • CheckCondition.FacetName (Conditional): The Facet whose presence is to be tested. Mandatory for type: Facet.
  • CheckCondition.SequenceName (Conditional): The scheduled job whose presence is to be tested. Mandatory for type: Sequence.
  • CheckCondition.Query (Conditional): The query that must be executed on the endpoint. Mandatory for type: HTTP.
  • CheckCondition.Timeout (Conditional): The amount of time the query will wait for the server to respond before declaring a failure. Mandatory for types: Facet, HTTP, Sequence, Info, WS.
  • CheckCondition.HeadersData (Optional): Any header information that is required for a query to execute successfully. Applicable to the HTTP type.
  • IsRecoverable (Mandatory): A boolean field that defines whether the failure of this AlarmCondition will trigger a recovery action.
  • RecoveryAction.Active (Mandatory): When a recovery action is present, this field defines whether the recovery action is active when it is created. If it is set to false, the system will change it to true once the AlarmCondition is active.
  • RecoveryAction.Type (Conditional): Mandatory if the AlarmCondition is recoverable. There are two types of recovery actions, "HTTP" and "RESTART":
    - HTTP: Tries to execute a query on an endpoint in an attempt to recover.
    - RESTART: Restarts the target application's A-Stack engine in an attempt to recover.
  • RecoveryAction.AttemptAfterCount (Conditional): Mandatory if the AlarmCondition is recoverable. The number of failures after which recovery must be attempted. (Even for multiple failures, Alerts will only be raised on the first transition from "Success" to "Failure".)
  • RecoveryAction.Protocol (Conditional): The protocol for the recovery. Mandatory if the recovery type is HTTP.
  • RecoveryAction.URLHost (Conditional): The IP address of the host to be recovered. Mandatory if the recovery type is HTTP.
  • RecoveryAction.URLPort (Conditional): The port at which the service is running. Mandatory if the recovery type is HTTP.
  • RecoveryAction.URLEndpoint (Conditional): The endpoint at which the recovery is to be performed. Mandatory if the recovery type is HTTP.
  • RecoveryAction.Query (Conditional): The query to be executed at the endpoint. Mandatory if the recovery type is HTTP.
  • RecoveryAction.Timeout (Conditional): The amount of time the system waits for the server to respond. Mandatory if the recovery type is HTTP.
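
For example, a hypothetical CSV entry for a Facet check with an HTTP recovery action could look like the row below. The header shows one possible column order (only ClusterId is positional); the values are illustrative assumptions.

Code Block
ClusterId,InstanceName,ComponentName,Name,Description,Enabled,Type,CheckCondition.Protocol,CheckCondition.URLHost,CheckCondition.URLPort,CheckCondition.FacetName,CheckCondition.Timeout,IsRecoverable,RecoveryAction.Active,RecoveryAction.Type,RecoveryAction.AttemptAfterCount,RecoveryAction.Protocol,RecoveryAction.URLHost,RecoveryAction.URLPort,RecoveryAction.URLEndpoint,RecoveryAction.Query,RecoveryAction.Timeout
Cluster-1,Instance-1,Ingestion,Test_Endpoint_Facet,Checks the Topic facet,true,Facet,http,10.0.0.12,8080,fid-TopicFacet,30s,true,false,HTTP,3,http,10.0.0.12,8080,fid-RecoveryFacet,SAMPLE_RECOVERY_QUERY,30s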

Command Line API

Load Alarm Conditions

This command loads the AlarmConditions into the monitoring system.

  • Command: tql -monitoringconfig -load Path=<Path of the File>/AlarmConditions.csv

Delete Alarm Conditions

This command lets you delete one or all AlarmConditions from the system based on the inputs provided.

  • Command: tql -monitoringconfig -delete ClusterId=Cluster-1,Instance=Instance-1,Name=Test_Endpoint_Facet

Deletes the AlarmCondition called Test_Endpoint_Facet from instance Instance-1 on cluster Cluster-1

  • Command: tql -monitoringconfig -delete ClusterId=Cluster-1,Instance=Instance-1

Deletes all AlarmConditions from instance Instance-1 on cluster Cluster-1

Get Alarm Conditions

This command gets AlarmConditions from the system based on the inputs provided and writes them to the file specified in the Path variable.

  • Command: tql -monitoringconfig -get ClusterId=Cluster-1,Instance=Instance-1,Name=Test_Endpoint_Facet,Path=<File Path>\Result.json

Gets the AlarmCondition called Test_Endpoint_Facet from instance Instance-1 on cluster Cluster-1

  • Command: tql -monitoringconfig -get ClusterId=Cluster-1,Instance=Instance-1,Path=<File Path>\Result.json

Gets all AlarmConditions from instance Instance-1 on cluster Cluster-1

Update Alarm Conditions

Updates an AlarmCondition's values. First run the get command to retrieve the AlarmCondition, make the necessary changes, and provide that file path in the "Path" variable.

  • Command: tql -monitoringconfig -update ClusterId=Cluster-1,Path=<File Path>\Result.json

Update Email Configuration

Updates the Email configuration. ClusterId is optional; if it is not specified, the update will be applied to all clusters.

  • Command: tql -monitoringconfig -config Path=<File Path>\EmailConfig.json,Type=setemail,ClusterId=Cluster-1

Updates configuration on cluster Cluster-1

  • Command: tql -monitoringconfig -config Path=<File Path>\EmailConfig.json,Type=setemail

Updates configuration on all clusters

Sample Configuration:


Code Block
languagejs
{  
   "NotificationConfig":{
	  "Host":"HOST_NAME",
      "Port":"PORT_NUMBER",
      "Username":"UNAME",
      "Password":"PWD",
      "From":"from@domain.com",
      "To":"recepient@domain.com,recepient2@domain1.com",
      "Subject":"Alert Generated Notifications"
   }
}


Get Email Configuration

Gets the Email configuration and prints it on the screen. ClusterId is mandatory.

  • Command: tql -monitoringconfig -config Type=getemail,ClusterId=Cluster-1

Update Schedule Configuration

Updates the schedule configuration. ClusterId is optional; if it is not specified, the update will be applied to all clusters.

  • Command: tql -monitoringconfig -config Path=E:\Atomiton\Builds\TQLEngineConfiguratorNode1\resources\atomiton\configurator\spaces\SysConfig.json,Type=setschedule,ClusterId=Cluster-1


Code Block
languagejs
{  
   "SysConfig":[
		{"EMAIL_FREQ":"1min"},
		{"FACET_FREQ":"1min"},
		{"HTTP_FREQ":"1min"},
		{"SEQ_FREQ":"1min"},
		{"LOG_FREQ":"1min"},
		{"INFO_FREQ":"1min"},
		{"WS_FREQ":"1min"},
		{"ALERT_PURGE_FREQ":"1min"},
		{"ALERT_PURGE_LIMIT_DAYS":"25"}
   ]
}
  • FACET_FREQ: Frequency with which Facet type AlarmConditions are executed.
  • HTTP_FREQ: Frequency with which HTTP type AlarmConditions are executed.
  • SEQ_FREQ: Frequency with which Sequence type AlarmConditions are executed.
  • LOG_FREQ: Frequency with which Log type AlarmConditions are executed.
  • INFO_FREQ: Frequency with which Info type AlarmConditions are executed.
  • WS_FREQ: Frequency with which WS type AlarmConditions are executed.
  • ALERT_PURGE_FREQ: Frequency with which stale alerts are purged.
  • ALERT_PURGE_LIMIT_DAYS: How old an alert should be before it is considered stale.
  • EMAIL_FREQ: Frequency with which Alert emails are sent.

Get Schedule Configuration

Gets the schedule configuration and prints it on the screen. ClusterId is mandatory.

  • Command: tql -monitoringconfig -config Type=getschedule,ClusterId=Cluster-1

Stop Monitoring

Pause a monitoring service. ClusterId is mandatory.

Command: tql -monitoringconfig -stopmonitoring ClusterId=Cluster-1

Start Monitoring

Resume a monitoring service. ClusterId is mandatory.

Command: tql -monitoringconfig -startmonitoring ClusterId=Cluster-1

Update Purge Configuration for Management Dashboard

Update Alert purging configuration on the Dashboard.

Command: tql -monitoringconfig -alertpurgedashboard Purgefreq=1min,Purgelimit=20

  • Purgefreq: Frequency with which stale alerts are purged.
  • Purgelimit: How old an alert should be before it is considered stale.

Management Dashboard

A scheduled job runs on the dashboard back-end that periodically (every minute) pulls the AlarmConditions, Alerts, and sent Notifications, and stores them locally for display on the UI.

...