Squirro 2.3.3 - Release Notes
Released on April 14, 2016.
Improvements
- Smoother transitions when switching dashboards
- Added additional data sources and enrichments
Bug Fixes
- Size the grid according to the actual widget dimensions
- Fix dashboard editing permission check
- Fix access token expiration on multi-node clusters
- Fixes related to the Elasticsearch upgrade and clean installation of Squirro
- Fixes related to custom widget setup
Fresh Installation Instructions
Please follow the regular installation steps.
Upgrade Instructions
To upgrade to Squirro 2.3.3, your current installation must be running version 2.3.0 or higher, because this release contains a new major version of Elasticsearch. If you are on a version older than 2.3.0, please contact support.
This is not the latest version of Squirro. To upgrade to this version, please ensure that the Squirro yum repository points to version '2.3.3' and not to 'latest'.
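The exact repository definition depends on your installation; assuming it lives in /etc/yum.repos.d/squirro.repo with the version encoded in the baseurl (both are assumptions, adjust to your actual repository file), a change along these lines should work:
root$ sed -i 's|/latest/|/2.3.3/|g' /etc/yum.repos.d/squirro.repo
root$ grep baseurl /etc/yum.repos.d/squirro.repo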
If you are using Squirro in a Box, further steps are involved; in this case, please also contact support.
From version 2.3.2
1. Upgrade Storage Nodes
If your storage node runs in the same virtual machine or operating system as your cluster node, skip this step. Otherwise, upgrade all storage nodes one at a time by running:
[squirro@storagenode01 ~] sudo yum update
2. Upgrade Cluster Nodes
[squirro@clusternode01 ~] sudo yum update
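To confirm that the update was picked up, you can list the installed Squirro packages and check their versions, for example:
[squirro@clusternode01 ~] yum list installed | grep -i squirro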
From version 2.3.0 or 2.3.1
Step 1: Prepare the upgrade
It is not possible to update to this version without a service interruption.
Please note that the order of the following steps is important. Do not skip any step. If a step fails, do not continue before resolving the issue.
Step 2: Stop Squirro
On each cluster node, run:
root$ monit stop sqclusterd
Step 3: Update the Elasticsearch templates
On each storage node, one after the other, run:
root$ rpm -Uvh --nodeps $(repoquery --location squirro-elasticsearch-templates)
This will update the templates and re-index all indices with the new mapping v6. This may take a while.
Important note: Do not use yum update squirro-elasticsearch-templates, as that would attempt to update Elasticsearch at the same time, which is not safe until the configuration has been fixed at the beginning of step 4.
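To get a rough sense of progress while the re-indexing runs, you can watch the indices and their document counts with the standard Elasticsearch cat API, for example:
root$ curl -s 'http://localhost:9200/_cat/indices?v'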
Step 4: Update Elasticsearch
root$ rpm -Uvh --nodeps $(repoquery --location elasticsearch)
Elasticsearch will not start as there are configuration conflicts between the two versions that need to be fixed manually:
warning: /etc/elasticsearch/elasticsearch.yml created as /etc/elasticsearch/elasticsearch.yml.rpmnew
warning: /etc/init.d/elasticsearch created as /etc/init.d/elasticsearch.rpmnew
warning: /etc/sysconfig/elasticsearch created as /etc/sysconfig/elasticsearch.rpmnew
warning: /usr/lib/systemd/system/elasticsearch.service created as /usr/lib/systemd/system/elasticsearch.service.rpmnew
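A straightforward way to review what changed before merging the configuration is to diff each file against its .rpmnew counterpart, for example:
root$ diff -u /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.rpmnew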
Networking behaviour changed with Elasticsearch 2. In a multi-storage-node setup, make sure that the Elasticsearch cluster network configuration is set up correctly. In a standard setup, adding the following to the /etc/elasticsearch/elasticsearch.yml file should work:
network.bind_host: _site_,_local_
network.publish_host: _site_
For further help, please consult https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html
Remove the ES plugins as they are no longer compatible (new ones will be installed by yum update):
root$ rm -rf /usr/share/elasticsearch/plugins/*
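You can verify that the plugin directory is empty before proceeding:
root$ ls /usr/share/elasticsearch/plugins/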
Finally, index the percolation queries that could not be migrated automatically (this is only required if the file /tmp/elasticsearch/squirro_v5_filter.out exists):
root$ curl -s -XPOST http://localhost:9200/_bulk --data-binary "@/tmp/elasticsearch/squirro_v5_filter.out"
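If you prefer to run the call only when the file is present and see immediately whether any query failed to index, a guarded variant along these lines should work (the bulk response reports "errors":false when all entries were indexed):
root$ if [ -f /tmp/elasticsearch/squirro_v5_filter.out ]; then curl -s -XPOST http://localhost:9200/_bulk --data-binary "@/tmp/elasticsearch/squirro_v5_filter.out" | grep -o '"errors":[a-z]*'; fi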
Step 5: Update Elasticsearch plugins and all other packages
root$ yum update
Depending on your configuration, Elasticsearch might not come up until you have updated a second storage node, as it requires at least two nodes to be present.
Depending on your index size, the initial start of the Elasticsearch service may take a while, as it migrates the index internally.
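To follow the startup and migration, the standard cluster health and recovery endpoints are useful, for example:
root$ curl -s 'http://localhost:9200/_cluster/health?pretty'
root$ curl -s 'http://localhost:9200/_cat/recovery?v'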
Step 6: Upgrade Cluster Nodes
On each cluster node, run:
root$ yum update
root$ yum reinstall squirro-python-squirro.api.topic
Then resolve the configuration conflict in /etc/squirro/topic.ini by merging the changes from /etc/squirro/topic.ini.rpmnew.
If you run Squirro in a multi-cluster-node environment, you need to perform the following additional steps on each cluster node:
root$ mkdir -p /mnt/gv0/widgets
root$ chown -R sqtopic:squirro /mnt/gv0/widgets
root$ sed -e 's|^custom_widgets_directory = .*|custom_widgets_directory = /mnt/gv0/widgets/|' -i /etc/squirro/topic.ini
root$ sed -i -e 's/db_endpoint_discovery = false/db_endpoint_discovery = true/' /etc/squirro/topic.ini
root$ if [[ ! -L "/var/lib/squirro/topic/widgets" && -d "/var/lib/squirro/topic/widgets" ]]; then rm -ir /var/lib/squirro/topic/widgets; fi
root$ ln -s /mnt/gv0/widgets /var/lib/squirro/topic/widgets
root$ service sqtopicd restart
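A quick way to check that the configuration changes and the symlink are in place on each cluster node:
root$ grep -E 'custom_widgets_directory|db_endpoint_discovery' /etc/squirro/topic.ini
root$ ls -l /var/lib/squirro/topic/widgets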
Then start Squirro again:
root$ monit start sqclusterd
root$ monit start sqfrontendd
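Once both services are started, monit summary on each cluster node lists the state of all monitored services and can be used to confirm that everything came back up:
root$ monit summary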