The TQLConsole ThingSpace Configurator can be used to deploy and manage the A-Stack Runtime Environment, consisting of your models (projects), on one or more target host machines.
The A-Stack Runtime Environment can be deployed and managed using the TQLConsole ThingSpace Configurator user interface or the TQL Command Line Interface (TQL CLI).
It provides the following functionalities:
The target host(s) need to be prepared before TQL applications can be deployed using the ThingSpace Configurator.
Please note that if you are starting multiple instances of A-Stack on the same host, specify the Path parameter to the -engine command:
tql -engine -start -Xmx=512m Path="/home/ec2-user/atomiton/configurator"
Hit the URL: https://<host>:9000
<FacetAgent Status="200" Reason="OK"/>
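The status check above can be scripted; a minimal sketch, assuming the engine's status endpoint is on port 9000 as shown (the function name engine_up is our own):

```shell
#!/bin/sh
# Returns success (exit 0) when the engine at https://<host>:9000
# answers with Status="200" in its FacetAgent response.
# -k skips certificate verification, which is needed while the engine
# is still using a self-signed certificate.
engine_up() {
  curl -sk "https://$1:9000" | grep -q 'Status="200"'
}
```

For example, `until engine_up myhost; do sleep 2; done` waits for the engine to come up after a restart.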
Steps to Generate an SSL Certificate
Update the openssl.cnf file for the SAN update:

1. Update the openssl.cnf file located at /etc/pki/tls/openssl.cnf to add the configuration lines below. These lines need to be added in the v3_req section:

   subjectAltName = @alt_names

   [alt_names]
   DNS.1 = localhost
   DNS.2 = <Hostname>
   IP.1 = <IP address>

Execute the openssl commands below to create and import the SSL certificate:

1. openssl genrsa -des3 -out server.key 1024
2. openssl req -new -key server.key -out server.cer -extensions v3_req
3. cp server.key server.key.org
4. openssl rsa -in server.key.org -out server.key
5. openssl x509 -req -days 365 -in server.cer -signkey server.key -out server.crt -extensions v3_req -extfile /etc/pki/tls/openssl.cnf
6. openssl pkcs12 -export -in server.crt -inkey server.key -out serverKeystore.p12
7. export PATH=$PATH:/opt/jdk1.8.0_144/jre/bin (find the path to the directory containing the Java keytool binary and pass it to the export command)
8. keytool -importkeystore -deststorepass test123 -destkeystore server.jks -srckeystore serverKeystore.p12 -srcstoretype PKCS12

Replace the certificate in the $TQL_HOME/sslcertificates folder, where $TQL_HOME is the directory where the A-Stack Configurator Daemon is installed. If the certificate file name is different from server.jks, update its name in the sff.local.config.xml file.
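The openssl steps 1-6 above can be run non-interactively as one script; a sketch under stated assumptions: -passout/-passin replace the interactive passphrase prompts, the passphrase, CN, and the inline san.cnf file (mirroring the v3_req/alt_names additions described above) are illustrative placeholders.

```shell
#!/bin/sh
set -e

# Minimal extension config equivalent to the v3_req / alt_names edits
# described above; hostname and IP here are placeholders.
cat > san.cnf <<'EOF'
[ v3_req ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = localhost
IP.1 = 127.0.0.1
EOF

PASS=pass:test123
openssl genrsa -des3 -passout "$PASS" -out server.key 1024          # step 1
openssl req -new -key server.key -passin "$PASS" \
  -out server.cer -subj "/CN=localhost"                             # step 2
cp server.key server.key.org                                        # step 3
openssl rsa -in server.key.org -passin "$PASS" -out server.key      # step 4
openssl x509 -req -days 365 -in server.cer -signkey server.key \
  -out server.crt -extensions v3_req -extfile san.cnf               # step 5
openssl pkcs12 -export -in server.crt -inkey server.key \
  -passout "$PASS" -out serverKeystore.p12                          # step 6
```

Steps 7-8 (the keytool import) then convert serverKeystore.p12 into server.jks as listed above.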
The discovery result is divided into two parts, Local Engine and Cluster Nodes, categorized by the Group to which the nodes belong.
Users can view the details of a node by clicking on any node in the graph. The details are displayed on the right-hand side.
Users can view general information about each node by clicking on the public IP address of the node.
Users can perform on-demand garbage collection on individual nodes by clicking the "Perform GC" button.
From the individual nodes, users can set up the log files to be written to a common mount-point file system, e.g., AWS EFS.
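The common-mount setup can be sketched with a symlink on each node; both paths below are illustrative assumptions, not A-Stack defaults (in practice SHARED would be the mounted EFS file system, e.g. /mnt/efs/astack-logs):

```shell
#!/bin/sh
# Point a node's local log directory at a shared mount so that every
# node in the cluster writes its logs to one place.
# SHARED and LOGDIR are placeholders: SHARED would be the EFS mount
# point, LOGDIR whatever directory the engine writes its logs to.
SHARED=${SHARED:-/tmp/shared-logs}
LOGDIR=${LOGDIR:-$HOME/atomiton/logs}
mkdir -p "$SHARED" "$(dirname "$LOGDIR")"
# Preserve any existing local logs, then swap in a symlink to the mount.
[ -d "$LOGDIR" ] && [ ! -L "$LOGDIR" ] && mv "$LOGDIR" "$LOGDIR.bak"
ln -sfn "$SHARED" "$LOGDIR"
```

Run once per node; the engine then writes through the symlink onto the shared file system.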
The following search fields are available:
To display all logs where fields.type is 'SffReport':
{ "query": { "match": { "fields.type": { "query": "SffReport", "type": "phrase" } } } }

To display all logs where loglevel is 'WARN':
{ "query": { "match": { "loglevel": { "query": "WARN", "type": "phrase" } } } }

To display all logs where fields.port is '8085':
{ "query": { "match": { "fields.port": { "query": "8085", "type": "phrase" } } } }

To display all logs where fields.host is '172.31.48.38':
{ "query": { "match": { "fields.host": { "query": "172.31.48.38", "type": "phrase" } } } }

To display all logs that contain 'SubmitSequence' in the SffReportsData tag:
{ "query": { "match": { "SffReportsData": { "query": "*SubmitSequence*", "type": "phrase" } } } }

To display all logs that contain 'SffSequenceFacet' in the message tag:
{ "query": { "match": { "message": { "query": "*SffSequenceFacet*", "type": "phrase" } } } }
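These queries can also be submitted to Elasticsearch directly over its REST API; a minimal sketch, assuming Elasticsearch is reachable on localhost:9200 and the logs live in an index named sff-logs (both placeholders):

```shell
#!/bin/sh
# Submit the WARN-level query from above to the _search endpoint.
# ES_URL and the index name are assumptions; adjust for your cluster.
ES_URL=${ES_URL:-http://localhost:9200}
curl -s -H 'Content-Type: application/json' "${ES_URL}/sff-logs/_search" -d '
{ "query": { "match": { "loglevel": { "query": "WARN", "type": "phrase" } } } }
' || echo "Elasticsearch not reachable at ${ES_URL}"
```

The same pattern works for any of the query bodies listed above; only the -d payload changes.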