...

The pipeline is split into these sections mainly to aid understanding and configuration of the various steps.

...

Enrich

Extracting additional data from records or converting them into text counts as enrichment. This includes language detection, deduplication, and converting binary documents to text.

Relate

Linking the ingested items with each other or with other data sources is part of this section. Most importantly, this includes the Known Entity Extraction steps.

Discover

Discover includes steps around topic modelling and clustering, as well as analysis for the Content-based Typeahead.

Classify

Text classification models, such as those created with the Squirro AI Studio, are part of this section.

Predict

Time series detection with the Trend Detection module shows up here.

Recommend

This section includes the updating of recommendation models and insights generation. These are not yet exposed in the user interface.

Automate

Automated actions, such as the sending of emails, are included as automations. This section is currently empty in the user interface.

Index

This step is not included in the architecture charts, but it can be seen and used in the pipeline editor. It includes the steps required to persist Squirro items on disk for searching.

Custom

Custom steps can be added to the pipeline in the form of Pipelets. Currently, these pipelets always show up in a section called Custom, but this will be extended so that each pipelet can also be assigned to one of the above sections.
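To illustrate, a minimal pipelet might look like the sketch below. It uses the PipeletV1 base class from the Squirro SDK; the language-detection logic, the langdetect dependency, and the language keyword name are illustrative assumptions, not part of the product.

    from langdetect import detect  # assumed third-party dependency, for illustration only

    from squirro.sdk import PipeletV1


    class LanguageDetectPipelet(PipeletV1):
        """Illustrative custom step: tag each item with its detected language."""

        def consume(self, item):
            # Squirro items are dictionaries; keyword values are lists.
            text = item.get('body') or item.get('title') or ''
            if text:
                item.setdefault('keywords', {})['language'] = [detect(text)]
            return item

A pipelet receives one item at a time via consume() and returns the (possibly modified) item to the next step in the pipeline.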

Processing

The pipeline steps are run sequentially. When a pipeline step fails for any reason, the item is re-queued and the full pipeline is re-run on that item. If processing fails persistently (10 times by default), the item is dropped from the pipeline.
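In pseudocode, this retry behaviour amounts to the following sketch. It is a conceptual illustration of the semantics described above, not the actual ingester implementation; the step interface is assumed.

    MAX_ATTEMPTS = 10  # the documented default retry limit


    def process(item, steps):
        """Run all steps in order; on any failure, re-queue and re-run
        the full pipeline, dropping the item after persistent failure."""
        for _ in range(MAX_ATTEMPTS):
            try:
                for step in steps:   # steps run sequentially
                    item = step(item)
                return item          # every step succeeded
            except Exception:
                continue             # re-queue: the full pipeline is re-run
        return None                  # dropped after persistent failure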

...

The configuration option workers under the processor section of the /etc/squirro/ingester.ini file controls the number of threads spawned by each processor. This setting is used for the execution of pipeline steps that consume a single item from the batch at a time. Other pipeline steps work on the batch level, so this option is irrelevant to them. The default value of this option is 3 (i.e., approximately three items of a batch are processed concurrently by a single processor).
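For reference, the corresponding fragment of /etc/squirro/ingester.ini looks as follows, shown here with the documented default value:

    [processor]
    # Threads per processor, used by steps that consume one item at a time
    workers = 3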

Pipeline Step Dependencies

Some pipeline steps have dependencies on other steps. If these dependencies are not met, the processing of the pipeline workflow may fail completely, or it may succeed without the ingested items being transformed as expected. The following list outlines the currently known step dependencies per section:

Configuration

A project can have one or more pipelines. Each data source is associated with one such pipeline. The pipelines are configured using the Pipeline Editor in the Setup space.
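Pipelines can also be inspected programmatically. The sketch below assumes the Squirro Python client (squirro_client) exposes a pipeline-workflow listing call named get_pipeline_workflows; treat that method name, the cluster URL, the token, and the printed fields as hypothetical placeholders and consult the client reference for the actual API.

    from squirro_client import SquirroClient

    # Hypothetical connection details, for illustration only.
    client = SquirroClient(None, None, cluster='https://squirro.example.com')
    client.authenticate(refresh_token='YOUR_TOKEN')

    # Assumed endpoint: list the pipeline workflows configured for a project.
    for workflow in client.get_pipeline_workflows(project_id='PROJECT_ID'):
        print(workflow['id'], workflow['name'])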