Summary 

The TQLConsole ThingSpace Configurator can be used to deploy and manage the A-Stack Runtime Environment, consisting of your models (projects), on one or more target host machines.


                                           


The A-Stack Runtime Environment can be deployed and managed through the TQLConsole ThingSpace Configurator user interface or the TQL Command Line Interface (TQL CLI).

It provides the following functionality:

  • Deploy clusters
  • Start/stop/restart applications on cluster nodes
  • Dynamically edit application configuration files on a per-cluster-peer basis
  • Execute shell scripts and TQL queries against each application node
  • Monitor application memory and CPU usage
  • Perform on-demand garbage collection on a per-node basis
  • Aggregate log files and set up ELK (Elasticsearch, Logstash, Kibana)

Preparing the Target Host(s)

Target host(s) need to be prepared before TQL applications can be deployed using the ThingSpace Configurator.

Target Host(s) Requirements

  1. Target Host(s) OS - Please refer to the A-Stack Runtime OS Support Matrix for details. Note that the CPU, memory, and storage of the target host(s) depend on your applications' needs.
  2. A-Stack Configurator Daemon:
    1. Port requirement - the daemon is started on port 9000.
    2. CPU, Memory footprint - 
    3. Always-On vs. On-Demand mode
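Since the daemon always binds to port 9000, a quick pre-flight check on the target host can save a failed start. The sketch below assumes the `ss` utility (iproute2) is available:

```shell
# Check whether anything is already listening on port 9000,
# the fixed port of the A-Stack Configurator Daemon.
if ss -ltn 2>/dev/null | grep -q ':9000 '; then
  echo "port 9000 in use"
else
  echo "port 9000 free"
fi
```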

Download and Install A-Stack Configurator Daemon

  • The A-Stack Configurator Daemon can be downloaded from your account page.

                    

  • Install A-Stack Configurator Daemon
    • Unzip the A-Stack Configurator ZIP file.
    • Start the configurator using the TQL CLI command.

      Please note that if you are starting multiple instances of A-Stack on the same host, specify the Path parameter to the -engine command.

      TQL CLI to Start the Configurator Daemon
      tql -engine -start -Xmx=512m Path="/home/ec2-user/atomiton/configurator"
      
    • Default SSL certificate - The A-Stack Configurator Daemon comes with a default SSL certificate installed. The file is in the sslcertificates folder of the installation.
    • Test if the Daemon is running

      Hit the URL:  https://<host>:9000

      Daemon Keep-Alive Response
      <FacetAgent Status="200" Reason="OK"/>
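      This check can be scripted. The helper below is a sketch that parses the keep-alive reply; against a live host you would feed it the output of curl (with -k, since the default certificate is self-signed):

```shell
# check_daemon inspects a keep-alive reply; a healthy daemon answers
# <FacetAgent Status="200" Reason="OK"/>
check_daemon() {
  case "$1" in
    *'Status="200"'*) echo "daemon is up" ;;
    *)                echo "daemon not responding" ;;
  esac
}

# Against a live host (hostname is an assumption):
#   check_daemon "$(curl -sk https://localhost:9000)"
check_daemon '<FacetAgent Status="200" Reason="OK"/>'   # prints: daemon is up
```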
      
    • Steps to generate SSL Certificate

      Steps to generate SSL Certificate
      Update the openssl.cnf file to add the SAN entries -
      
      1. Update the openssl.cnf file located at /etc/pki/tls/openssl.cnf to add the configuration lines below.
      These lines need to be added in the [ v3_req ] section:
      
      subjectAltName = @alt_names
      [alt_names]
      DNS.1 = localhost
      DNS.2 = <Hostname>
      IP.1 = <IP address>
      Execute the openssl commands below to create and import the SSL certificate -
      
      1. openssl genrsa -des3 -out server.key 1024
      
      2. openssl req -new -key server.key -out server.cer -extensions v3_req
      
      3. cp server.key server.key.org
      
      4. openssl rsa -in server.key.org -out server.key
      
      5. openssl x509 -req -days 365 -in server.cer -signkey server.key -out server.crt -extensions v3_req -extfile /etc/pki/tls/openssl.cnf
      
      6. openssl pkcs12 -export -in server.crt -inkey server.key -out serverKeystore.p12
      
      7. export PATH=$PATH:/opt/jdk1.8.0_144/jre/bin – locate the directory containing the Java keytool binary and append it to the PATH
      
      8. keytool -importkeystore -deststorepass test123 -destkeystore server.jks -srckeystore serverKeystore.p12 -srcstoretype PKCS12
      
      Replace the certificate in the $TQL_HOME/sslcertificates folder, where $TQL_HOME is the directory where the A-Stack Configurator Daemon is installed.
      
      If the certificate file name is different from server.jks, update the name in the sff.local.config.xml file.
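      For reference, steps 1-6 above can be run non-interactively as a single script. This is a sketch: the passphrase changeit and subject CN=localhost are placeholder assumptions, and the v3_req/SAN extensions from openssl.cnf are omitted here for portability.

```shell
set -e
PASS=changeit   # placeholder passphrase, not a recommendation

# 1-2. Private key and certificate signing request (no interactive prompts)
openssl genrsa -des3 -passout pass:$PASS -out server.key 1024
openssl req -new -key server.key -passin pass:$PASS \
        -out server.cer -subj "/CN=localhost"

# 3-4. Keep a copy, then strip the passphrase from the key
cp server.key server.key.org
openssl rsa -in server.key.org -passin pass:$PASS -out server.key

# 5. Self-sign the certificate for one year
openssl x509 -req -days 365 -in server.cer -signkey server.key -out server.crt

# 6. Bundle certificate and key into a PKCS#12 keystore
openssl pkcs12 -export -in server.crt -inkey server.key \
        -passout pass:$PASS -out serverKeystore.p12

# Sanity check: show the subject of the generated certificate
openssl x509 -in server.crt -noout -subject
```

      Step 7-8 (importing the PKCS#12 keystore into server.jks with keytool) still require a JDK on the host, as described above.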

Creating a Gold Copy of A-Stack Configurator

  • Docker Container
  • AWS EC2 AMI


The discovery result is divided into two parts, Local Engine and Cluster Nodes, categorized by the group to which the nodes belong.

Users can view the details of a node by clicking on any node in the graph. The details are shown on the right-hand side.

View A-Stack General Information

Users can view general information about each node by clicking on the node's public IP address.

View A-Stack CPU and Memory Usage

Performing On-Demand GC

Users can perform on-demand garbage collection on individual nodes by clicking the "Perform GC" button.

TQLConsole DevOps Use Case on AWS Infrastructure

Elasticsearch, Logstash and Kibana Setup

From the individual nodes, the user can set up the log files to be written to a common mount-point file system - say, AWS EFS.


The following search fields are available:

Elasticsearch Queries
To display all logs where fields.type is 'SffReport'
{
  "query": {
    "match": {
      "fields.type": {
        "query": "SffReport",
        "type": "phrase"
      }
    }
  }
}

To display all logs where loglevel is 'WARN'
{
  "query": {
    "match": {
      "loglevel": {
        "query": "WARN",
        "type": "phrase"
      }
    }
  }
}

To display all logs where fields.port is '8085'
{
  "query": {
    "match": {
      "fields.port": {
        "query": "8085",
        "type": "phrase"
      }
    }
  }
}

To display all logs where fields.host is '172.31.48.38'
{
  "query": {
    "match": {
      "fields.host": {
        "query": "172.31.48.38",
        "type": "phrase"
      }
    }
  }
}

To display all logs that contain 'SubmitSequence' in the SffReportsData tag
{
  "query": {
    "match": {
      "SffReportsData": {
        "query": "*SubmitSequence*",
        "type": "phrase"
      }
    }
  }
}


To display all logs that contain 'SffSequenceFacet' in the message tag
{
  "query": {
    "match": {
      "message": {
        "query": "*SffSequenceFacet*",
        "type": "phrase"
      }
    }
  }
}
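The query bodies above all share one shape, so they can be generated by a small helper and piped into curl. A sketch (the host, port, and index pattern in the comment are assumptions about a default ELK setup):

```shell
# es_logs_by emits the match-phrase query body used in the examples
# above for a given field and value.
es_logs_by() {
  printf '{"query":{"match":{"%s":{"query":"%s","type":"phrase"}}}}' "$1" "$2"
}

es_logs_by loglevel WARN
# To execute it (assumed local Elasticsearch and logstash-* indices):
#   curl -s -XPOST 'http://localhost:9200/logstash-*/_search' \
#        -H 'Content-Type: application/json' \
#        -d "$(es_logs_by loglevel WARN)"
```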

