Query Processing Workflow

Query processing improves a user’s search experience by providing more relevant search results. Squirro achieves this improvement by running the user’s query through a customizable query processing workflow that parses, filters, enriches, and expands the query before performing the actual search and presenting the results to the user. For example, part-of-speech (POS) boosting and filtering removes irrelevant terms like conjunctions from the query and gives more weight to relevant parts of the query, such as nouns. Items that match boosted query terms are ranked higher in the returned search results.


The figure below illustrates the query processing architecture.


In the example shown in the figure, the user enters the query country:us 2020-10 covid-19 cases in new york in a Global Search Bar on the Squirro dashboard. The query is then sent through the Query Understanding Plugin (1) to the ML-Service, where the query processing workflow, a Squirro ML-Workflow, is executed to apply the following steps to the incoming query:

  1. Language detection

  2. Language-specific spaCy analysis is applied using the pre-trained spaCy language model for the detected language. The analysis includes:

    • Tokenization and lemmatization

    • Part of Speech (POS) tagging

    • Named Entity Recognition (NER)

  3. Part of Speech Booster / Filter

    • Assigns weight to tokens based on their POS tags

    • Conjunctions and determiners are removed

  4. Query Modifier

The final query modifier step applies all modifications to the initial query to produce the Enriched Query (2) which is then used to retrieve the candidate documents that best match the query from the Elasticsearch index (3).
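Under illustrative assumptions (hard-coded analysis results, simplified weight handling, no phrase grouping), the chain of steps above can be sketched as plain functions passing a query item along. The function bodies are stand-ins, not the actual libNLP steps:

```python
# Toy end-to-end sketch of the query processing chain. Field names follow
# the workflow description; the implementations are illustrative only.

def detect_language(item):
    item["language"] = "en"  # a real step runs statistical language detection
    return item

def analyze(item):
    # A real step runs the spaCy pipeline (tokenization, POS tagging, NER).
    # Here the analysis result for the example query is hard-coded.
    item["nlp"] = [("2020-10", "NUM"), ("covid-19", "NOUN"),
                   ("cases", "NOUN"), ("in", "ADP"), ("new york", "PROPN")]
    return item

def pos_boost(item, weights={"PROPN": 10, "NOUN": 10, "VERB": 2, "NUM": "-"}):
    mutations = {}
    for term, pos in item["nlp"]:
        weight = weights.get(pos)
        if weight == "-":
            continue  # "-": leave the term unchanged
        # Unmapped POS tags (e.g. ADP) map to "" and are removed later.
        mutations[term] = f"{term}^{weight}" if weight else ""
    item["pos_mutations"] = mutations
    return item

def modify_query(item):
    terms = [item["pos_mutations"].get(t, t) for t in item["user_terms"]]
    item["enriched_query"] = " ".join(item["facet_filters"] +
                                      [t for t in terms if t])
    return item

item = {"query": "country:us 2020-10 covid-19 cases in new york",
        "facet_filters": ["country:us"],
        "user_terms": ["2020-10", "covid-19", "cases", "in", "new york"]}
for step in (detect_language, analyze, pos_boost, modify_query):
    item = step(item)
# "in" is removed, nouns are boosted, the facet filter passes through untouched
```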

Query processing improves the search experience by ranking items that match boosted terms higher and by reducing irrelevant search results. The latter is achieved by combining terms that belong together: an entity like “New York” is treated as a single unit in the query, preventing multipage items (e.g., PDFs) that have “new” on one page and “york” on a different page from being matched and appearing in the search results.


Starting with Squirro 3.4.5, each project will be pre-configured with a default query processing workflow. The workflow is installed on the server as a global asset and cannot be deleted via the user interface. It is enabled by default.

The behaviour of the workflow is managed in the project configuration under the SETTINGS tab where you can configure the following settings:









Set the value to the workflow_id of the ML workflow you want to use for query processing. By default, the workflow_id is set to the ID of the pre-configured workflow that is set up upon project creation.

Remove if you want to disable query processing.




Modes for workflow execution:

global (recommended, requires the Global Search Bar widget)

  • Executes the query processing workflow once for the whole dashboard
    (triggered via the Global Search Bar widget)

The alternative mode:

  • Executes the workflow within the /query endpoint.
    This option is useful when:

    1. Squirro is used as an API only

    2. the Global Search Bar is not used but query processing is still needed

  • This mode should not be used for Squirro dashboards with many widgets (each widget would trigger the same workflow in parallel)

Workflow Management

You can configure the available workflows under AI STUDIO > ML Workflows. The default query processing workflow is marked as the ACTIVE QUERY PROCESSOR and is listed along with any other custom workflows.

List of available workflows. Includes default and custom uploaded workflows.

Hover over a workflow and click SET ACTIVE to make it the ACTIVE QUERY PROCESSOR.

Change active query-processing workflow to be used on the project.

Query Processing Workflow Steps

The query processing workflow consists of pre-configured libNLP pipeline steps.

{
  "component": "Query-Processing",
  "cacheable": true,
  "dataset": {
    "items": []
  },
  "pipeline": [
    {
      "fields": [
        "query",
        "user_terms",
        "facet_filters"
      ],
      "step": "loader",
      "type": "squirro_item"
    },
    {
      "step": "custom",
      "type": "parse",
      "name": "syntax_parser"
    },
    {
      "step": "custom",
      "type": "analysis",
      "name": "lang_detection",
      "input_field": "user_terms_str"
    },
    {
      "step": "custom",
      "name": "custom_spacy_normalizer",
      "type": "analysis",
      "infix_split_hyphen": false,
      "infix_split_chars": ":<>=",
      "merge_entities": true,
      "merge_noun_chunks": false,
      "cacheable": true,
      "input_fields": [
        "user_terms_str"
      ],
      "output_fields": [
        "nlp"
      ],
      "exclude_spacy_pipes": [],
      "spacy_model_mapping": {
        "en": "en_core_web_sm",
        "de": "de_core_news_sm"
      }
    },
    {
      "step": "custom",
      "type": "enrich",
      "name": "pos_booster",
      "strict_filter": true,
      "analyzed_input_field": "nlp",
      "phrase_proximity_distance": 15,
      "pos_weight_map": {
        "PROPN": 10,
        "NOUN": 10,
        "VERB": 2,
        "ADJ": 5,
        "X": "-",
        "NUM": "-",
        "SYM": "-"
      }
    },
    {
      "step": "custom",
      "type": "enrich",
      "name": "query_modifier",
      "raw_input_field": "query",
      "term_mutations_metadata": ["term_expansion_mutations", "pos_mutations"],
      "output_field": "enriched_query"
    },
    {
      "step": "debugger",
      "type": "log_fields",
      "fields": [
        "user_terms",
        "facet_filters",
        "pos_mutations",
        "term_expansion_mutations",
        "enriched_query"
      ],
      "log_level": "info"
    }
  ]
}


This workflow is set up to boost important terms based on their POS tags. Nouns (tags NOUN and PROPN) are boosted by assigning higher weights in the pos_weight_map, while the impact of verbs (VERB), for example, is reduced by assigning a lower weight. Terms like determiners and conjunctions are removed from the query.

You can configure the steps of the query processing workflow in the UI in the ML Workflows plugin under the AI STUDIO tab.








Custom parse step named syntax_parser.

Parses the raw query string into terms and filters. Terms are modified in the query processing workflow; filters (like facet filters) are not.


type (str): `parse`

input_field (str, "query")

Input query.

output_fields (list, ["user_terms", "user_terms_str", "facet_filters", "query_length"])

Raw query string parsed into terms and filters.


{
  "step": "custom",
  "type": "parse",
  "name": "syntax_parser",
  "output_fields": [
    "user_terms",
    "facet_filters"
  ]
}


Example query: country:us 2020-10 covid-19 cases in new york

  • filters: "facet_filters": ["country:us"]

  • user query terms: "user_terms": ["2020-10", "covid-19", "cases", "in", "new", "york"]
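As an illustration of this step’s input/output contract (not the actual implementation, which handles the full Squirro query syntax, including quoting and operators), a minimal parser could look like this:

```python
def parse_query(query):
    """Split a raw query string into facet filters and free-text terms.

    Toy version: any whitespace-separated token containing ":" is treated
    as a facet filter; everything else is a user term. The real
    syntax_parser understands the complete Squirro query syntax.
    """
    facet_filters, user_terms = [], []
    for token in query.split():
        (facet_filters if ":" in token else user_terms).append(token)
    return {
        "facet_filters": facet_filters,
        "user_terms": user_terms,
        "user_terms_str": " ".join(user_terms),
        "query_length": len(user_terms),
    }

parsed = parse_query("country:us 2020-10 covid-19 cases in new york")
# "new" and "york" remain separate terms here; entity merging happens later
```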


Custom analysis step named lang_detection.

Runs language detection on the query and annotates the query item with the detected language (facet).


type (str): `analysis`

input_field (str, "query")

Input query.

output_field (str,"language")

Detected language as ISO code

fallback_language (str, "en")

Default language to use.


{
  "step": "custom",
  "type": "analysis",
  "name": "lang_detection",
  "input_field": "user_terms_str"
}


Example query: country:us 2020-10 covid-19 cases in new york

Input: "user_terms_str": "2020-10 covid-19 cases in new york"

Annotation: "language": "en"
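The contract of this step (text in, ISO language code out, with a fallback) can be illustrated with a toy stopword-counting detector. The real step uses a proper statistical detector; the tiny word lists below are purely illustrative:

```python
def detect_language(text, fallback_language="en"):
    """Toy language detector: counts stopword hits per language.

    Illustrative stand-in for the real lang_detection step; returns the
    fallback language when no word list matches.
    """
    stopwords = {
        "en": {"in", "the", "of", "and", "new"},
        "de": {"in", "der", "die", "und", "fälle"},
    }
    scores = {lang: sum(word in words for word in text.lower().split())
              for lang, words in stopwords.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else fallback_language
```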


Normalizer step of type spacy.

Loads the corresponding language model and runs the configured spaCy pipeline components on the text.

The output of the step is an analyzed spaCy document stored under the specified output_fields; it contains the tokens, POS tags, and NER tags.

The spaCy POS tags are:

  • ADJ: adjectives

  • ADP: adpositions (prepositions and postpositions)

  • ADV: adverbs

  • CONJ: conjunctions

  • DET: determiners

  • INTJ: interjections

  • NOUN: nouns

  • NUM: numerals

  • PART: particles

  • PRON: pronouns

  • PROPN: proper nouns

  • PUNCT: punctuation

  • SPACE: spaces

  • SYM: symbols

  • VERB: verbs (all tenses and modes)

  • X: other (foreign words, typos, abbreviations)


type (str): `spacy`

input_fields (list)

Input fields on which the normalizer is applied.

output_fields (list)

Output fields to save the analyzed spaCy document.

spacy_model_mapping (dict)

Model to use per language-code.

infix_split_hyphen (bool)

Don't split tokens on intra-word hyphens, for example, “covid-19”.

merge_entities (bool)

Recognize and merge Named Entities into one SpaCy token, for example, "new york"

merge_noun_chunks (bool)

Merge relevant chunks into one SpaCy token, for example:
extend brexit deadline → extend “brexit deadline”

cacheable (bool)

Cache the selected models.


{
  "step": "normalizer",
  "type": "spacy",
  "cacheable": true,
  "infix_split_hyphen": false,
  "merge_entities": true,
  "merge_noun_chunks": false,
  "input_fields": [
    "user_terms_str"
  ],
  "output_fields": [
    "nlp"
  ],
  "exclude_spacy_pipes": [],
  "spacy_model_mapping": {
    "en": "en_core_web_sm",
    "de": "de_core_news_sm"
  }
}


Example query: country:us 2020-10 covid-19 cases in new york

Input: "user_terms_str": "2020-10 covid-19 cases in new york"


  • Tokenisation: ["2020-10", "covid-19", "cases", "in", "new york"]

  • POS tagging: [["2020-10","NUM"],["covid-19","NOUN"],["cases","NOUN"],["in","ADP"],["new york","PROPN"]]

  • Named Entity Recognition: [('2020-10', 'DATE'), ('covid-19', 'ORDINAL'), ('new york', 'GPE')]

The analyzed spaCy document is stored under the configured output_fields (here, the nlp field) for use in succeeding steps.
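The effect of merge_entities can be illustrated without loading a spaCy model: given a token list and the recognized entity spans, the tokens inside each span are merged into one. The (start, end) span representation below is an assumption of this sketch, not spaCy's actual API:

```python
def merge_entity_tokens(tokens, entities):
    """Merge multi-token entities (e.g. ["new", "york"]) into single tokens.

    `entities` is a list of (start, end) index pairs into `tokens`
    (end exclusive), mimicking what a NER component provides.
    """
    spans = {start: end for start, end in entities}
    merged, i = [], 0
    while i < len(tokens):
        if i in spans:
            merged.append(" ".join(tokens[i:spans[i]]))  # collapse the span
            i = spans[i]
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = ["2020-10", "covid-19", "cases", "in", "new", "york"]
# Entity spans: "2020-10" (DATE) and "new york" (GPE)
merged = merge_entity_tokens(tokens, [(0, 1), (4, 6)])
```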


Custom enrich step named pos_booster.

Annotates the document with a pos_mutations dictionary that contains the boosting weight for each token.
The weight is chosen based on the term's POS tag (as defined in pos_weight_map).


type (str, "enrich"): `enrich`

pos_weight_map (dict, {"PROPN": 10, "NOUN": 10, "VERB": 2, "ADJ": 5, "X": "-", "NUM": "-", "SYM": "-"})

Dictionary mapping between SpaCy POS tag to weight used for term boosting.

  • Boost term relevancy: A higher number boosts the relevancy of matched terms.

  • Skip term boosting: leave tokens of a given POS type unchanged by setting the corresponding weight to "-"

strict_filter (bool, False)

Remove terms with a POS tag not included in pos_weight_map

phrase_proximity_distance (int, 15)

Merged SpaCy tokens are converted into loose phrases:

  • All terms need to be matched within close proximity to each other. An example would be "brexit deadline"~15

output_field (str, "pos_mutations")

Map of term => replacement

fallback_language (str, "en")

Default language to use


{
  "step": "custom",
  "type": "enrich",
  "name": "pos_booster",
  "strict_filter": true,
  "phrase_proximity_distance": 3,
  "analyzed_input_field": "nlp",
  "pos_weight_map": {
    "PROPN": 10,
    "NOUN": 10,
    "VERB": 2,
    "ADJ": 5,
    "X": "-",
    "NUM": "-",
    "SYM": "-"
  }
}


Example query: country:us 2020-10 covid-19 cases in new york


"pos_mutations": [
  {"covid-19": "\"covid-19\"~3"},   # `covid-19` needs to be matched as a phrase; contains a hyphen
  {"cases": "cases^10"},            # `cases` gets boosted by a factor of 10
  {"in": ""},                       # `in` gets removed because ADP is not defined in `pos_weight_map`
  {"new york": "\"new york\"~3"}    # `new york` needs to be matched as a phrase; merged entity
],
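A toy re-implementation that mirrors this behaviour (the (term, POS tag, is_phrase) input shape is an assumption of the sketch, not the step's real interface):

```python
def pos_boost(tagged_terms, pos_weight_map, strict_filter=True,
              phrase_proximity_distance=3):
    """Toy version of the pos_booster step.

    `tagged_terms` is a list of (term, pos_tag, is_phrase) triples, where
    is_phrase marks merged entities or hyphenated tokens that must be
    matched as a loose phrase. Illustrative only.
    """
    mutations = []
    for term, pos, is_phrase in tagged_terms:
        weight = pos_weight_map.get(pos)
        if is_phrase:
            mutations.append({term: f'"{term}"~{phrase_proximity_distance}'})
        elif weight == "-":
            continue                      # "-": keep the term as-is
        elif weight is not None:
            mutations.append({term: f"{term}^{weight}"})
        elif strict_filter:
            mutations.append({term: ""})  # unmapped POS tag: remove the term
    return mutations

weights = {"PROPN": 10, "NOUN": 10, "VERB": 2, "ADJ": 5,
           "X": "-", "NUM": "-", "SYM": "-"}
tagged = [("2020-10", "NUM", True), ("covid-19", "NOUN", True),
          ("cases", "NOUN", False), ("in", "ADP", False),
          ("new york", "PROPN", True)]
mutations = pos_boost(tagged, weights)
```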


Custom enrich step named query_modifier.

The query-modifier applies all mutations collected from prior steps to the initial query and outputs the enriched_query.

type (str): `enrich`


raw_input_field (str, "query")

Raw user query to modify.

term_mutations_metadata (list, ["pos_mutations"])

Mutations to apply; order matters.

output_field (str, "enriched_query")

The modified query string.


{
  "step": "custom",
  "type": "enrich",
  "name": "query_modifier",
  "raw_input_field": "query",
  "term_mutations_metadata": ["pos_mutations"],
  "output_field": "enriched_query"
}


Example query: country:us 2020-10 covid-19 cases in new york


"enriched_query": "country:us \"2020-10\"~3 \"covid-19\"~3 cases^10 \"new york\"~3"

How-to Guides