What is a protocol handler in TQL

The run-time environment of Atomic Domain Languages (TQL, Workflow, etc.) is TQLEngine. TQLEngine is a Java-based asynchronous, event-driven framework with network communication at its core, built for the rapid development of high-performance, high-scale IoT applications. All communication between devices requires that the devices agree on the format of the data. The set of rules defining a format is called a protocol. The TQLEngine protocol handler is one of the key extension points that handle communication with external devices. Protocol handlers do the following:

  • handle synchronous or asynchronous communication
  • encode and decode data
  • detect and recover from transmission errors

TQLEngine uses Java's Non-blocking I/O (NIO) APIs at its core to provide access to the low-level I/O operations of modern operating systems. TQLEngine greatly simplifies and streamlines network programming such as TCP and UDP socket server development, and it implements many basic protocols such as HTTP and WebSocket. Adding support for a new protocol is easier without compromising performance, stability, or flexibility.

Understanding some of the core TQLEngine concepts below is useful in realizing what makes writing a new Protocol Handler much easier.

Buffer

At the lowest level TQLEngine implements a Zero-Copy feature. Zero-Copy avoids context switches between kernel and application space; copying is often not necessary, and if data is not modified, a slice can be passed forward without copying it to a different buffer. A buffer is a randomly and sequentially accessible sequence of zero or more bytes (octets). It provides an abstract view over one or more primitive byte arrays (byte[]). In the context of TQLEngine, buffers are normally referred to as ChannelBuffers, because everything in TQLEngine is a channel (see below).
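
Below is a minimal sketch of the zero-copy idea, assuming the Netty 3 style ChannelBuffer API that TQLEngine's class names suggest (the exact TQLEngine classes and packages may differ):

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.buffer.ChannelBuffers;

    public class ZeroCopyExample {
        public static void main(String[] args) {
            byte[] payload = {0x01, 0x02, 0x03, 0x04, 0x0A, 0x0B};

            // wrappedBuffer() wraps the existing array without copying it
            ChannelBuffer buf = ChannelBuffers.wrappedBuffer(payload);

            // slice() is a view over the same underlying bytes -- no copy
            // is made, so a slice can be handed to the next handler
            ChannelBuffer header = buf.slice(0, 4);
            ChannelBuffer body = buf.slice(4, 2);

            System.out.println(header.readableBytes() + " header bytes, "
                    + body.readableBytes() + " body bytes");
        }
    }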

Channel

A basic abstraction of a network through which the bytes in a buffer are transported is called a channel. A channel represents an open connection to an entity such as a hardware device, a file, a network socket, or a program component that is capable of performing one or more distinct I/O operations, for example reading or writing. A channel is capable of the following (a sketch follows the list):

  • Read - read data from the buffer
  • Write - write data to the buffer
  • Connect - open a connection to the device
  • Bind - bind the connection to a local address
  • Disconnect - close the connection to the device
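
As a sketch of these operations, again assuming a Netty 3 style API underneath (the host, port, and payload here are illustrative only):

    import java.net.InetSocketAddress;
    import java.util.concurrent.Executors;

    import org.jboss.netty.bootstrap.ClientBootstrap;
    import org.jboss.netty.buffer.ChannelBuffers;
    import org.jboss.netty.channel.ChannelFuture;
    import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;

    public class ConnectExample {
        public static void main(String[] args) {
            ClientBootstrap bootstrap = new ClientBootstrap(
                    new NioClientSocketChannelFactory(
                            Executors.newCachedThreadPool(),
                            Executors.newCachedThreadPool()));

            // Connect is asynchronous: it returns a future immediately and
            // the channel becomes usable once the future completes
            ChannelFuture future =
                    bootstrap.connect(new InetSocketAddress("localhost", 8080));
            future.awaitUninterruptibly();

            if (future.isSuccess()) {
                // Write is asynchronous too; the bytes come from a buffer
                future.getChannel().write(
                        ChannelBuffers.copiedBuffer("ping".getBytes()));
                future.getChannel().close().awaitUninterruptibly();
            }
            bootstrap.releaseExternalResources();
        }
    }
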
Event

Almost everything that happens within TQLEngine occurs asynchronously. There are core driving code patterns that generate events (like Connect, Disconnect, and Write) and bits of code that handle those events once they have been triggered. All these events in TQLEngine are instances of ChannelEvent, and the code that handles these events is called a ChannelHandler. In order to implement communication protocols we need handlers that can perform encoding and decoding of messages.

  •     Encoder: Converts a non-ChannelBuffer object (a raw data stream or payload) into a ChannelBuffer suitable for transmission to somewhere else. It might not encode directly to a ChannelBuffer; rather, it might do a partial conversion, implicitly relying on other handler[s] to complete the conversion. An example of an encoder is PhidgetDataEncoder, which converts regular data from Phidget sensors/actuators into ChannelBuffers containing a meaningful representation of the read data.
  •     Decoder: The reverse of an encoder, where a ChannelBuffer's contents are converted into something more useful. The counterpart of the PhidgetDataEncoder mentioned above is the PhidgetDataDecoder, and it does exactly this. A sketch of such a pair follows below.
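
A minimal sketch of such an encoder/decoder pair, assuming Netty 3 style codec base classes; the SensorReading* names and the 4-byte integer framing are purely illustrative and are not the real PhidgetDataEncoder/PhidgetDataDecoder:

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.buffer.ChannelBuffers;
    import org.jboss.netty.channel.Channel;
    import org.jboss.netty.channel.ChannelHandlerContext;
    import org.jboss.netty.handler.codec.frame.FrameDecoder;
    import org.jboss.netty.handler.codec.oneone.OneToOneEncoder;

    // Hypothetical encoder: turns an application-level reading into bytes
    class SensorReadingEncoder extends OneToOneEncoder {
        @Override
        protected Object encode(ChannelHandlerContext ctx, Channel ch, Object msg) {
            ChannelBuffer buf = ChannelBuffers.buffer(4);
            buf.writeInt((Integer) msg); // 4-byte big-endian reading
            return buf;
        }
    }

    // Hypothetical decoder: the reverse direction. FrameDecoder lets us
    // wait until a whole 4-byte frame has arrived before emitting a message
    class SensorReadingDecoder extends FrameDecoder {
        @Override
        protected Object decode(ChannelHandlerContext ctx, Channel ch, ChannelBuffer buf) {
            if (buf.readableBytes() < 4) {
                return null; // not enough data yet; wait for more
            }
            return buf.readInt(); // pass the decoded value downstream
        }
    }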

 

Pipeline

Putting the above concepts together, we have a construct called a Pipeline (or more specifically, a ChannelPipeline). A pipeline is a stack of handlers that can manipulate or transform the values that are passed to them; when they are done, they pass the value on to the next handler. In order to achieve the proper sequence of modifications of the payload, the pipeline keeps a strict ordering of its handlers. Another aspect of pipelines is that they are not immutable, so it is possible to add and remove handlers at runtime. This feature comes in handy while implementing certain types of device communication, as the sketch below suggests.
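
A minimal sketch of assembling such a pipeline, again assuming the Netty 3 style API and reusing the hypothetical codec classes from the encoder/decoder sketch above:

    import org.jboss.netty.channel.ChannelPipeline;
    import org.jboss.netty.channel.Channels;

    public class PipelineExample {
        public static ChannelPipeline build() {
            ChannelPipeline pipeline = Channels.pipeline();

            // Strict ordering: inbound bytes hit the decoder first;
            // outbound messages hit the encoder on their way to the wire
            pipeline.addLast("decoder", new SensorReadingDecoder());
            pipeline.addLast("encoder", new SensorReadingEncoder());

            // Pipelines are mutable, so handlers can be added or removed
            // at runtime, e.g. dropping a handshake handler once the
            // handshake completes:
            //   pipeline.remove("handshake");
            return pipeline;
        }
    }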


 
Deployment View

 

In order to make TQLEngine extensible, it provides two capabilities:
  • The ability to load extra bundles after the engine core starts and the initial configuration is applied, since those extension bundles may come from different places defined by the configuration.
  • The ability to understand what the loaded extension bundle provides to the engine (e.g. a new protocol handler, a new facet type, etc.). In the case of new object types, it is also necessary to be able to instantiate those objects when needed.
The first capability is achieved by the introduction of a new config parameter, sff.auto.launch, which looks, feels, and behaves exactly like sff.auto.deploy except that it launches the extension bundles it points to. Just like sff.auto.deploy, the default value of the sff.auto.launch parameter is "sff.auto.launch", so you can put your extra bundles in the sff.auto.launch folder inside the current folder the engine runs in. Just like auto deploy, it can point to a folder or a single bundle and/or contain a comma-separated list of files and/or locations.

Moreover, you can now actually mix deployment packages with extension bundles in the same folder: sff.auto.launch will only launch *.jar files while sff.auto.deploy will try to deploy anything but *.jar. So you can point both auto launch and auto deploy to the same folder, which may contain both deployment packages and bundles.
In order to provide better control, each item/location in the launch/deploy list can now be prefixed with the "file:" or "bundle:" protocol, which limits the lookup to the corresponding file system. Items without a prefix will be looked up in the bundle first and, if not found, in the local file system. To support bundle-only self-contained configurations, the sff/sff.auto.launch folder is now also defined within the base bundle, so you can put your extensions next to your sff.main.config.xml before the build. The build will embed your extension bundles within the base bundle. A hypothetical example follows.
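
For illustration only (the exact syntax and location of these parameters are assumptions here, and the paths are hypothetical), a configuration mixing both parameters and both prefixes might look like:

    sff.auto.launch=bundle:sff/sff.auto.launch,file:ext/my.protocol.ext.jar
    sff.auto.deploy=file:deployments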

The second capability is provided by virtue of the SffObjectFactorySvc interface. As the name suggests, it is intended to be used as an OSGi service (i.e. a singleton instance contributed by each service provider). Each bundle which wants to contribute to the engine must define one or more of these services (it is strongly recommended to use DS annotations for that).
The SffObjectFactorySvc interface contains only two methods:
  • newInstance(ListMap args) is used to instantiate the target object (e.g. a new instance of a protocol handler for a new pipeline)
  • getInfo() is used to get factory meta-data. This must return a ListMap which contains information about the objects this factory can instantiate and how they should be used by the engine. A sketch of such a factory follows the meta-data list below.
Extension meta-data so far may include the following items (see SffNioFrameworkSvc):
  • sff.object.name -- the name under which the extension wants to register the object type and by which it will be referred to in pipeline configurations etc. (e.g. "SffRxTxCodec")
  • sff.object.name.map -- any other related names (e.g. various pipeline configurations) to register with the engine
  • sff.protocol.name -- the protocol name this pipeline handler handles
  • sff.protocol.port -- the default protocol port number (for network protocols only)
  • sff.protocol.type -- e.g. text, binary, etc.
  • sff.protocol.class -- e.g. network, local, etc.
  • sff.protocol.scope -- how pipelines serving this protocol must be handled/shared by the engine (e.g. global, task, invoke, etc.)
  • sff.protocol.transport -- the default protocol transport, i.e. which pipeline factory to use (e.g. TCP, UDP, LTP, etc.). If given, this will be used for both server and client pipelines
  • sff.server.transport -- which transport to use specifically for the server pipeline, if different from the one given above
  • sff.client.transport -- same as above, but for client pipelines
  • sff.server.pipeline.args -- default server pipeline configuration
  • sff.client.pipeline.args -- default client pipeline configuration

The last two are used when a pipeline is configured based on the URL [protocol] rather than by explicit configuration parameter[s] given in a Create/ModifyPipeline FS instruction.
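
As a rough sketch of such a factory (the exact SffObjectFactorySvc signatures, the map-like ListMap API, and the DS annotation style are assumptions here; SffRxTxCodec is the example name used above):

    import org.osgi.service.component.annotations.Component;

    // Registered as an OSGi service via DS annotations, as recommended above
    @Component
    public class RxTxCodecFactory implements SffObjectFactorySvc {

        @Override
        public Object newInstance(ListMap args) {
            // e.g. a new protocol handler instance for a new pipeline
            return new SffRxTxCodec();
        }

        @Override
        public ListMap getInfo() {
            // Meta-data describing what this factory can instantiate
            ListMap info = new ListMap();
            info.put("sff.object.name", "SffRxTxCodec");
            info.put("sff.protocol.name", "rxtx");
            info.put("sff.protocol.type", "binary");
            info.put("sff.protocol.class", "local");
            return info;
        }
    }
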
I've created two examples: the RxTx and Phidget bundles. You can use the git diff tool to see what had to be changed in order to extract the Phidget bundle. It is rather straightforward:
  • A factory class is defined and annotated as an SffObjectFactory service.
  • The protocol name/references and corresponding packages are removed from the engine registration tables as well as from the NettyContext.newInstance() factory method. From now on this method will delegate to the provided extension factory.
  • A new ext.*.bnd file is created and all the packages and other necessary artifacts are moved to the new bundle (don't forget to remove them from the netty bundle).
Note that native library packages which used to be defined in separate bundles (e.g. phidget-2.1.8.jar) are now declared as private (or export) packages of the corresponding extension bundle. This causes all the necessary classes to be copied into the extension bundle itself and renders the library bundle unnecessary. So please don't forget to remove the library bundle from the sff.auto.launch folder so the classes will not be loaded twice. Alternatively, it is possible to include the whole library jar as a bundle resource and add it to the bundle classpath. Depending on the library license we may use one way or the other.
A nice side-effect of the new integration system is that supporting library jars are no longer required to be converted into OSGi bundles, as they are never loaded by themselves any more. This makes working with non-OSGi jars easier: all you need to do is add the jar to the build path in bnd.bnd and bndtools will do the rest.
Project structure. For pure code-management convenience, it is not recommended just yet to create whole new projects for your extension bundles. While certainly possible and eventually very likely, having extension bundles as separate projects would require extra maintenance and release effort without much gain so far (e.g. we'd have to "officially" release API bundles which can be referenced by those other [extension] projects, manage their versions properly, propagate all changes across multiple projects, find a better place for binaries, etc.). Until we grow enough to warrant such efforts, or make the engine core open source so other people may contribute to it, I'd prefer to keep it small and tidy. So all you have to do for now is create a new ext.*.bnd file, configure your implementation packages into it, and release the bundle (a minimal example follows).
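
For illustration, a minimal ext.*.bnd might contain little more than the bundle name and its private packages (the names here are hypothetical):

    # ext.myprotocol.bnd -- extension bundle descriptor in the main project
    Bundle-SymbolicName: com.atomiton.sff.ext.myprotocol
    Private-Package: com.atomiton.sff.ext.myprotocol.*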

Bundle release. There are two ways to do that:
  • For quick testing you can simply copy your new bundle from the com.atomiton.sff/generated folder to your sff.auto.launch and start the engine. This jar will not have a version in its name and will be re-created on each project [re]build.
  • After you have finished your development you can release the bundle "officially" by right-clicking on the bundle and selecting "Release Bundle". This will run the semantic versioning tool and suggest the correct version for the bundle based on the previous release, if any. Going through this exercise only makes sense when you intend to publish the bundle for public use, so that it will be living its own life from now on, for eternity. Note that it will not allow you to save your bundle if you insist on the same (i.e. wrong) version for the new release (it will give you a weird error which does not make any sense). I guess depending on the circumstances this can be either a blessing or a curse. Until we start publishing/selling our bundles it's more of a nuisance to bump the version every few hours or days. So if you want to play with this process I suggest you quit eclipse, delete the contents of cnf/releaserepo, restart eclipse, and try from a clean slate. I see no advantage in having high version numbers until we reach some stability, so that changes become infrequent and version bumps meaningful. To that end I've removed cnf/releaserepo from our git repo, so you can play with release versions to your heart's content on your local box. Once you've finished playing, you can check your final result (with a version, please!) into SFF/run/sff.auto.launch, which currently is the "master" run environment. Please don't forget to remove any library bundles which are no longer needed as well.
Tips and tricks:
Not every bnd header or instruction is available from the eclipse UI form when you create/edit a bnd file. Useful examples are:
DynamicImport-Package: * -- this header allows run-time resolution of packages which are not pre-wired at bundle load time. This is something you need to understand about OSGi. Everything is perfectly dynamic when we use OSGi services (i.e. Java interface-based contracts between components). These are fully dynamic, can come and go at any time, and are what makes OSGi so attractive a platform in the first place. Everything is seamless when you use services. Then, unfortunately, there are non-service dependencies (i.e. plain old package and class references between classes). These are not dynamic and are instead wired statically at bundle load time. What's worse, generally they cannot be changed without a full bundle refresh which, in turn, may require another bundle refresh, etc. Eventually, many dynamic bundle loaders like felix fileinstall resort to nothing less than a full framework stop/restart cycle. This provides consistency, but employs a rather drastic measure to achieve it. Our current implementation of SffStartup does not do a full restart, nor does it try to refresh any already active bundles, which formally renders any new packages inaccessible to old bundles except indirectly via services. The DynamicImport-Package manifest header gives the OSGi container permission to try to resolve the requested packages at run-time. Note that this essentially disables the OSGi package versioning system and reverts to the equivalent of the old classpath. It is OK if you only have to deal with a single version of your package at a time, but it may cause trouble when multiple versions are present, as you have no say in which version you'll get this way. Use with caution when absolutely necessary; otherwise avoid.
-exportcontents -- this instruction will help you eliminate the "...export have private references" warning (I suggest you take care of all warnings as they may indicate potential problems). This often arises when you embed some dependency (e.g. the phidget library) into your bundle but have a mix of private and exported packages. Of course you may export everything (including the phidget stuff), but that may not be a good idea; normally you're supposed to export only your own stuff. This instruction changes the manifest Export-Package header in a way that exports the given package[s] but does not try to include them from the classpath, so they are only "visible" to the bundle itself. A hypothetical snippet showing both headers follows.
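
For example, in a bnd file (the package name is hypothetical):

    # resolve not-pre-wired packages at run-time; use sparingly
    DynamicImport-Package: *
    # export this package without re-including it from the classpath
    -exportcontents: com.atomiton.sff.ext.myprotocol.api
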
Eventually, there should be no reason to expose any of your implementation packages to the rest of the system, so your SffObjectFactory service should be the one and only entry point for your extension. If you can think of a reason why you need to export your packages, please let me know so I can find a better solution.

What does it do?

...

bundles when needed, i.e. on-demand. The bundles are automatically configured to be loaded from the "sff.auto.launch" folder in the current folder where the core TQLEngine is running.

 



Protocol Handler Usage