
Joule Release 1.0.3

Joule is a low-code platform for use case development. It simplifies building use cases by providing an expressive language for defining processing pipelines from prebuilt and custom processors and data connectors.


Out of the box, Joule provides standard data connector implementations and useful processors that enable you to start building and running use cases quickly.


This early release brings a number of new features, bug fixes, optimisations and general usability enhancements.




Key Features


SQL Support

Joule ships with DuckDB, a modern embedded in-memory SQL engine. It is used to capture events flowing through the processing pipeline and also underpins the metrics engine implementation.
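
To give a feel for how this fits together, an SQL Tap might be declared within a processing pipeline along the lines of the sketch below. The section and key names here are illustrative assumptions rather than the exact Joule DSL, so refer to the online documentation for the precise syntax.

# Illustrative sketch only: section and key names are assumptions, not the exact Joule DSL
processing unit:
  pipeline:
    - sql tap:
        # capture pipeline events into this DuckDB schema for querying and metrics
        target schema: standardQuoteAnalyticsStream
        # bound the in-memory buffer used while writing captured events
        queue capacity: 1000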


Metrics Engine

The metrics engine computes SQL-defined metrics over events captured by the SQL Tap, scheduled according to a runtime policy.


metrics engine:
  runtime policy:
    frequency: 1
    startup delay: 2
    time unit: MINUTES

  foreach metric compute:
    metrics:
      - name: BidMovingAverage
        metric key: symbol
        table definition: standardQuoteAnalyticsStream.BidMovingAverage
                          (symbol VARCHAR, avg_bid_min FLOAT,
                           avg_bid_avg FLOAT, avg_bid_max FLOAT)
        query:
          SELECT symbol,
                 MIN(bid) AS 'avg_bid_min',
                 AVG(bid) AS 'avg_bid_avg',
                 MAX(bid) AS 'avg_bid_max'
          FROM standardQuoteAnalyticsStream.quote
          WHERE ingestTime >= date_trunc('minutes', now() - INTERVAL 2 MINUTES)
            AND ingestTime <= date_trunc('minutes', now())
          GROUP BY symbol
          ORDER BY 1;
        truncate on start: true
        compaction policy:
          frequency: 8
          time unit: HOURS
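
In this example the engine runs every minute (after a two-minute startup delay), computing the minimum, average and maximum bid per symbol over the previous two minutes of captured quotes and writing the results to the BidMovingAverage table. The table is truncated when the process starts and compacted every eight hours.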

Dynamic REST APIs

All SQL tables created by a Joule process are accessible through a well-defined REST API.
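
For example, once the process above is running, the BidMovingAverage metric table it creates can be queried directly over the REST API alongside the captured quote data.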



Multi-language scripting support

Joule provides a flexible scripting processor implemented using GraalVM. This enables developers to integrate code written in Python, Node.js, R, JavaScript and Ruby within a streaming context.
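
As a rough sketch, a scripting processor could be configured along the following lines, here embedding a small Python function that enriches each quote event with a derived field. The processor name, keys, handler signature and the ask field are illustrative assumptions rather than the documented syntax.

# Illustrative sketch only: processor name, keys and handler signature are assumptions
scripting:
  language: python
  script: |
    # compute a bid/ask spread for each incoming quote event (field names are illustrative)
    def handler(event):
        event['spread'] = event['ask'] - event['bid']
        return event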


Parquet import/export

Data held within the Joule process can be exported as Parquet files for further analytics use cases. Conversely, Parquet files can be imported into the Joule process to drive user-defined functionality.


initialisation:
  sql import:
    schema: banking
    parquet:
      - table: fxrates
        asView: false
        files: [ 'fxrates.parquet' ]
        drop table: true
        index:
          fields: [ 'ccy' ]
          unique: false
Database connectivity

The publisher transport persists processed events to a configured SQL database and table. The insert statement is dynamically generated from the event; attribute names and types need to match the table definition.

This feature is ideal for offline analytics, business reporting, dashboards and process testing.
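
A publisher configuration might look something like the sketch below; the transport name, key names and connection details are assumptions for illustration only.

# Illustrative sketch only: transport name, keys and connection details are assumptions
publisher:
  transport:
    database publisher:
      # attribute names and types in each event must match this table's definition
      connection: 'jdbc:postgresql://localhost:5432/analytics'
      table: quote_metrics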


Documentation

Joule is now shipping with online documentation.

There are many more features, enhancements and fixes within this release. These will be covered in forthcoming blog posts.


Getting started

To get started, download the following resources to prepare your environment and work through the provided documentation. Feel free to reach out with questions.

  • Download the example code from GitLab

  • Work through the README file


We’re Here to Help

Feedback is most welcome, including thoughts on how to improve and extend Joule and ideas for exciting use cases.


You're in this with the entire FractalWorks community, which openly shares ideas and best practices and helps each other out on our Community Forum. Feel free to join us there! And if you have any further questions about becoming a partner or customer of FractalWorks, do not hesitate to engage with us; we will be happy to talk about your needs.
