Overview

mjacobsen4DFM edited this page Oct 18, 2016 · 3 revisions

StormFront is essentially a publisher-subscriber system. Publishers are configured in the database based on the com.DFM.StormFront.Model.Publisher object: a feed publisher requires a URL, while an on-demand publisher requires its database key. Subscribers are loosely configured (there is no associated object) according to the needs of each endpoint (API, URL, credentials, protocols, etc.).
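The split between feed and on-demand publishers can be sketched as follows. This is a hypothetical illustration loosely modeled on the com.DFM.StormFront.Model.Publisher object described above; the field names, factory methods, and enum are assumptions, not StormFront's actual API.

```java
// Hypothetical sketch of a Publisher record (names assumed):
// a FEED publisher requires a URL, an ON_DEMAND publisher a database key.
public class Publisher {
    public enum Type { FEED, ON_DEMAND }

    private final Type type;
    private final String url;         // required for FEED publishers
    private final String databaseKey; // required for ON_DEMAND publishers

    private Publisher(Type type, String url, String databaseKey) {
        this.type = type;
        this.url = url;
        this.databaseKey = databaseKey;
    }

    // A feed publisher requires a URL.
    public static Publisher feed(String url) {
        if (url == null || url.isEmpty())
            throw new IllegalArgumentException("feed publisher requires a URL");
        return new Publisher(Type.FEED, url, null);
    }

    // An on-demand publisher requires a database key.
    public static Publisher onDemand(String databaseKey) {
        if (databaseKey == null || databaseKey.isEmpty())
            throw new IllegalArgumentException("on-demand publisher requires a database key");
        return new Publisher(Type.ON_DEMAND, null, databaseKey);
    }

    public Type getType() { return type; }
    public String getUrl() { return url; }
    public String getDatabaseKey() { return databaseKey; }
}
```

Making the two constructor paths explicit keeps the required-field rule (URL vs. database key) enforceable at creation time rather than at processing time.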

Publishers and Subscribers can be enabled or disabled at the root or feed level. Toggling at the root level excludes them from all processing; toggling at the feed level excludes only the individual source or destination from processing.
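The two toggle levels combine into a single decision, which might look like the sketch below. The class, method, and parameter names are assumptions for illustration only.

```java
import java.util.Map;

// Illustrative sketch (names assumed): an item is processed only when its
// publisher is enabled at the root level AND the individual feed is enabled.
public class ToggleCheck {
    public static boolean shouldProcess(boolean rootEnabled,
                                        Map<String, Boolean> feedEnabled,
                                        String feedId) {
        if (!rootEnabled) return false;                 // root toggle excludes everything
        return feedEnabled.getOrDefault(feedId, true);  // feed toggle excludes one source
    }
}
```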

To prevent unnecessary interaction with endpoints, and to avoid excessive overhead within the engine, feeds and article contents are tracked. If a feed has not changed since the last time it was processed, it is skipped; likewise, if an article within the feed has already been processed, it is skipped. The benefit is that the delivery endpoints (internal systems) are not overly taxed; the consequence is that the sources (third-party and internal systems) are hit frequently.

To mitigate this (and avoid DoS-like behavior), a minimum processing interval prevents feeds that change infrequently from being polled constantly. On the surface, this contradicts the goal of real-time polling; but if a feed has not changed within a constrained amount of time, it should be given time to accumulate new data. Otherwise, the engine would process the feed in quick succession, adding unnecessary load on the system. The primary concern, though, is avoiding DoS-like behavior that could cause third-party publishers to blacklist StormFront; it is better to set a wait time than to be blocked from the data.
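The skip logic described above can be sketched as a small tracker. This is a minimal illustration under assumed names; StormFront's actual tracking mechanism and storage are not shown here.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (names assumed): a feed is reprocessed only when the
// minimum processing interval has elapsed AND its content has changed.
public class FeedTracker {
    private final long minIntervalMillis;
    private final Map<String, String> lastHash = new HashMap<>();
    private final Map<String, Long> lastProcessed = new HashMap<>();

    public FeedTracker(long minIntervalMillis) {
        this.minIntervalMillis = minIntervalMillis;
    }

    public boolean shouldProcess(String feedId, String contentHash, long nowMillis) {
        Long last = lastProcessed.get(feedId);
        // Enforce the minimum interval to avoid hammering slow-moving sources.
        if (last != null && nowMillis - last < minIntervalMillis) return false;
        // Skip feeds whose content has not changed since the last run.
        if (contentHash.equals(lastHash.get(feedId))) return false;
        lastHash.put(feedId, contentHash);
        lastProcessed.put(feedId, nowMillis);
        return true;
    }
}
```

Checking the interval before the content hash means an unchanged, frequently polled feed never even costs a comparison against stored state until its wait time has elapsed.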

StormFront pulls from and pushes to a variety of distinct systems; therefore, a flexible transformation and delivery workflow is required.

  • Output can be XML, JSON, or plain text. XSLT is employed to handle the heavy lifting of normalizing data to expose key properties and collections in standardized formats.

  • Delivery can be REST, FTP, XML-RPC, SOAP, or any other format/protocol necessary. Adapters and clients are used to handle interfacing with the endpoints.
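The XSLT step in the first bullet can be sketched with the standard javax.xml.transform API. The stylesheet below is a toy example that extracts an item title as plain text; it is not StormFront's actual normalization stylesheet, and the class name is assumed.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Illustrative sketch of XSLT doing the "heavy lifting" of normalization,
// using the standard JAXP transformation API with a toy stylesheet.
public class Normalizer {
    private static final String STYLESHEET =
        "<xsl:stylesheet version='1.0' "
      + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:output method='text'/>"
      + "<xsl:template match='/feed/item'>"
      + "<xsl:value-of select='title'/>"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    public static String normalize(String sourceXml) {
        try {
            Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(STYLESHEET)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(sourceXml)),
                        new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException("XSLT normalization failed", e);
        }
    }
}
```

Swapping the `xsl:output` method between `xml`, `text`, and (via a JSON-emitting template) structured text is what lets one transformation pipeline serve the XML, JSON, and plain-text outputs listed above.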

To accomplish the normalization transformations, XSLT converts the source into two formats: one contains general information about the feed item; the other provides more detail and includes assets. Internal processing and interaction are accomplished through the resulting XML and the Model object classes for both Publishers and Subscribers. Properties of the feed and its content items can be accessed through these classes as properties, objects, and methods. Model objects can be created from XML or JSON, and can subsequently be represented as XML or JSON. If the output needs to be raw text, XSLT can convert the Model's XML to text.
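The dual XML/JSON representation of a Model object might look like the sketch below. The class, fields, and method names are assumptions for illustration; StormFront's actual Model classes are not reproduced here, and character escaping is omitted for brevity.

```java
// Illustrative sketch (names assumed): a minimal Model object that can be
// represented as either XML or JSON, mirroring the dual-representation
// behavior described above. Escaping of special characters is omitted.
public class FeedItem {
    private final String title;
    private final String link;

    public FeedItem(String title, String link) {
        this.title = title;
        this.link = link;
    }

    public String toXml() {
        return "<item><title>" + title + "</title>"
             + "<link>" + link + "</link></item>";
    }

    public String toJson() {
        return "{\"title\":\"" + title + "\",\"link\":\"" + link + "\"}";
    }
}
```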
