The Squirro pipeline transforms a record from a data source into a Squirro item and writes it into the index.

Architecture Overview

As outlined in the Architecture, working with Squirro can be split into a number of steps:

...

The Load step is handled by data loading. The data loader gives Squirro a list of records to be indexed.

The pipeline’s task is to convert those records into properly formatted Squirro items (see Item Format) and to store those items in the Squirro storage layer.
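
For a concrete picture of this transformation, the sketch below shows a raw record from a data loader and the kind of Squirro item it could become. The item field names follow the Item Format; the record fields, values, and keyword names are invented for this illustration.

    # Illustrative only: a raw record handed over by a data loader and the
    # Squirro item the pipeline could produce from it. Item field names follow
    # the Item Format; values and keyword names are made up for this example.
    record = {
        "headline": "Quarterly results published",
        "text": "The company published its quarterly results today ...",
        "published": "2023-04-01T09:30:00",
    }

    item = {
        "id": "a1b2c3d4e5f6",                     # stable item identifier
        "title": record["headline"],              # mapped from the record
        "body": "<p>" + record["text"] + "</p>",  # body is stored as HTML
        "created_at": record["published"],
        "keywords": {"source_type": ["news"]},    # facets added by enrichments
    }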

From that index they can then be retrieved by the Squirro dashboards for visualization and searching.

Pipeline Sections

The pipeline is split into the following sections, mostly to aid understanding and configuration of the various steps.

  • Enrich: Extracting additional data from records, or converting them into text, counts as an enrichment. This includes language detection, deduplication, and converting binary documents to text.
  • Relate: Linking the ingested items with each other or with other data sources is part of this section. Most importantly, this includes the Known Entity Extraction steps.
  • Discover: Discover includes steps around topic modelling and clustering, as well as analysis for the Content-based Typeahead.
  • Classify: Text classification, such as the models created with the Squirro AI Studio, is part of this section.
  • Predict: Time series detection with the Trend Detection module shows up here.
  • Recommend: This section includes the updating of recommendation models and insights generation. These are currently not yet exposed in the user interface.
  • Automate: Automated actions, such as the sending of emails, are included as automations. Currently this section is empty in the user interface.
  • Index: This step is not included in the architecture charts, but it can be seen and used in the pipeline editor. It includes the steps required to persist Squirro items on disk for searching.
  • Custom: Custom steps can be added to the pipeline in the form of Pipelets (see the sketch after this list). Currently these pipelets always show up in a section called Custom, but this will be extended to allow each pipelet to be assigned to one of the above sections as well.
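
As a rough illustration of such a custom step, a pipelet is a small Python class with a consume method that receives one item at a time and returns it, possibly modified. The sketch below assumes the PipeletV1 base class from the Squirro SDK; the class name and the keyword it sets are made up for this example.

    from squirro.sdk import PipeletV1


    class TagCompanyPipelet(PipeletV1):
        """Minimal example pipelet: tags items that mention a company name."""

        def consume(self, item):
            # Add a keyword (facet) when the body contains the term.
            body = (item.get("body") or "").lower()
            if "acme" in body:
                item.setdefault("keywords", {})["company"] = ["ACME"]
            return item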

Processing

The pipeline steps are run sequentially. When a pipeline step fails for any reason, the item is re-queued and the full pipeline is re-run on that item. If processing fails persistently (10 times by default), the item is dropped from the pipeline.
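
This retry behaviour can be pictured with the following schematic sketch; it is not the actual ingester code, only an illustration of the policy described above, with all names invented.

    # Schematic sketch (not actual ingester code): an item is retried through
    # the full pipeline and dropped once the failure count reaches the limit.
    MAX_ATTEMPTS = 10  # default number of attempts before the item is dropped

    def process_with_retries(item, steps, max_attempts=MAX_ATTEMPTS):
        for _ in range(max_attempts):
            try:
                for step in steps:
                    item = step(item)  # each step transforms the item in turn
                return item            # success: the item reaches the index
            except Exception:
                continue               # failure: re-queue and re-run the full pipeline
        return None                    # persistent failure: the item is dropped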

Some errors are handled by adding an error code to the item. The known error codes for this are documented in the Processing Error table.

Items are only displayed to users once the full pipeline, with the exception of Search Tagging, has run through. For details on the search tagging delay, see the Search Tagging and Alerting documentation.

The sqingesterd service is responsible for executing the Pipeline workflows and their steps.

The configuration option processors under the section ingester of the /etc/squirro/ingester.ini file controls the number of processors used by the sqingesterd service to consume the batch files found under the /var/lib/squirro/inputstream directory. Each processor works on a single batch file at a time. Under the hood, each processor is a separate Unix process. The default value of this option is 1 (i.e., a single processor is spawned by the service for ingesting data).

The configuration option workers under the section processor of the /etc/squirro/ingester.ini file controls the number of threads spawned by each processor. This setting is used for the execution of certain pipeline steps that consume a single item from the batch at a time. Other pipeline steps work on the batch level, so this option is irrelevant to them. The default value of this option is 3 (i.e., approximately 3 items of a batch are processed concurrently by a single processor).
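
Both options live in the same file. An excerpt of /etc/squirro/ingester.ini with the default values described above could look like this:

    # /etc/squirro/ingester.ini (excerpt) - the values shown are the defaults;
    # increase them to scale ingestion throughput on a node.
    [ingester]
    processors = 1

    [processor]
    workers = 3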

Pipeline Step Dependencies

Some pipeline steps have dependencies on other steps. If these dependencies are not met, the processing of the pipeline workflow may fail completely, or it may succeed while the ingested items are not transformed as expected. The following list outlines the currently known step dependencies per section:

Configuration

A project can have one or more pipelines. Each data source is associated with one such pipeline. The pipelines are configured using the Pipeline Editor in the Setup space.