Released on October 17th, 2017

Introducing Squirro 2.5.2 - Birch

We're happy to release Squirro version 2.5.2, which uses Elasticsearch 5.6.0!

Updates

Improvements

Bug Fixes

Fresh Installation Instructions

Please follow the regular installation steps.

Upgrade Instructions

Please ensure that your current version is 2.5.1. If you are on a version older than 2.5.1, please contact support.
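
Before starting, you can confirm the installed version by querying the installed Squirro packages. A minimal sketch; the exact package names may vary slightly depending on your setup:

```shell
# List installed Squirro packages with their versions; the version column
# should show 2.5.1 before you begin this upgrade.
rpm -qa | grep -i squirro | sort
```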

The upgrade needs to reindex old Elasticsearch indexes into new indexes with template version v8. This can take a few hours if you have a large old index in template v7 (a reindex of an index with 1'337'000 documents and 16 GB in size took 57 minutes).
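
To estimate how long the reindex might take, you can check the document count and on-disk size of your existing indexes beforehand. A sketch, assuming Elasticsearch listens on the default port 9200 on localhost:

```shell
# List all indexes with their document count and store size.
# Larger v7 indexes translate into a proportionally longer migration.
curl -s 'http://localhost:9200/_cat/indices?v&h=index,docs.count,store.size'
```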


Additionally, if you are using Squirro in a Box, extra steps are involved. In this case we also ask you to contact support.

From version 2.5.2-4031 or lower

If you already upgraded to 2.5.2 before 9th November 2017 and intend to apply a patch release of the storage node, then after the normal storage node upgrade steps, please run this command:

bash /opt/squirro/elasticsearch/update/migrate-storage-node.sh

From version 2.5.1

1. Upgrade Storage Nodes and Cluster Nodes collocated on the same machine/VM

CentOS 6 / RHEL 6:


for service in `ls /etc/monit.d/sq*d | sed -e "s|^.*/||" | grep -v "sqclusterd"`; do monit stop $service; done
# wait for `monit summary` to indicate that all but 5 services are stopped
yum update squirro-storage-node-users
yum update elasticsearch
yum update squirro-storage-node
bash /opt/squirro/elasticsearch/update/migrate-storage-node.sh
# wait until migrate-storage-node.sh finished
yum update squirro-cluster-node-users
yum update squirro-*
monit monitor all

CentOS 7:

cd /lib/systemd/system
for service in $(ls sq*d.service); do echo "Stopping $service"; systemctl stop $service; done
yum update squirro-storage-node-users
yum update elasticsearch
yum update squirro-storage-node
systemctl daemon-reload
bash /opt/squirro/elasticsearch/update/migrate-storage-node.sh
# wait until migrate-storage-node.sh finished
yum update squirro-cluster-node-users
yum update squirro-*
for service in $(ls sq*d.service); do echo "Starting $service"; systemctl start $service; done
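
After the services come back up, it is worth verifying that everything restarted cleanly. A minimal check for CentOS 7 (on CentOS 6 / RHEL 6, monit summary serves the same purpose):

```shell
# Verify that all Squirro services are active again (CentOS 7).
cd /lib/systemd/system
for service in $(ls sq*d.service); do
    systemctl is-active "$service" >/dev/null || echo "$service is NOT running"
done
```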



2. Upgrade Storage and Cluster Nodes when they are on different servers (and there is only one storage node and one cluster node)

On the one cluster node, shut down most of the Squirro services like so:

CentOS 6 / RHEL 6:


for service in `ls /etc/monit.d/sq*d | sed -e "s|^.*/||" | grep -v "sqclusterd"`; do monit stop $service; done
# wait for `monit summary` to indicate that all but 5 services are stopped

CentOS 7:

cd /lib/systemd/system
for service in $(ls sq*d.service); do echo "Stopping $service"; systemctl stop $service; done
# the output of the following statement should indicate that all sq*d services are stopped:
for service in $(ls sq*d.service); do echo "Status of $service"; systemctl status $service; done


Upgrade the one storage node by running:

CentOS 6 / RHEL 6:


yum update squirro-storage-node-users
yum update elasticsearch
yum update squirro-storage-node
bash /opt/squirro/elasticsearch/update/migrate-storage-node.sh
# wait until migrate-storage-node.sh finished

CentOS 7:

yum update squirro-storage-node-users
yum update elasticsearch
yum update squirro-storage-node
systemctl daemon-reload
bash /opt/squirro/elasticsearch/update/migrate-storage-node.sh
# wait until migrate-storage-node.sh finished


Upgrade the one cluster node by running:

CentOS 6 / RHEL 6:


yum update squirro-cluster-node-users
yum update squirro-*
monit monitor all

CentOS 7:

yum update squirro-cluster-node-users
yum update squirro-*
cd /lib/systemd/system
for service in $(ls sq*d.service); do echo "Starting $service"; systemctl start $service; done
# wait for the following statement to indicate that all sq*d services are started
for service in $(ls sq*d.service); do echo "Status of $service"; systemctl status $service; done


3. Upgrade multi-node clusters (multiple Storage Nodes and/or multiple Cluster Nodes)

Upgrading clusters of Squirro nodes to release 2.5.2 is very involved. Please contact Squirro support for assistance.

On each cluster node, shut down most of the Squirro services like so:

CentOS 6 / RHEL 6:


for service in `ls /etc/monit.d/sq*d | sed -e "s|^.*/||" | grep -v "sqclusterd"`; do monit stop $service; done
# wait for `monit summary` to indicate that all but 5 services are stopped

CentOS 7:

cd /lib/systemd/system
for service in $(ls sq*d.service); do echo "Stopping $service"; systemctl stop $service; done
# wait for the following statement to indicate that all sq*d services are stopped
for service in $(ls sq*d.service); do echo "Status of $service"; systemctl status $service; done


Stop the Elasticsearch service on every storage node and check that no Elasticsearch process is running:


CentOS 6 / RHEL 6:


service elasticsearch stop
ps -ef | grep elasticsearch

CentOS 7:

systemctl stop elasticsearch
ps -ef | grep elasticsearch


Upgrade every storage node by running:

On both CentOS 6 / RHEL 6 and CentOS 7:


yum update squirro-storage-node-users
yum update elasticsearch


and then on every storage node:

CentOS 6 / RHEL 6:


yum update squirro-storage-node

CentOS 7:

systemctl daemon-reload
yum update squirro-storage-node




Migrate data from the old Elasticsearch template v7 index to the new template v8 index. Run this on ONLY ONE storage node.

On both CentOS 6 / RHEL 6 and CentOS 7:


bash /opt/squirro/elasticsearch/update/migrate-storage-node.sh
# wait until migrate-storage-node.sh finished
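
While migrate-storage-node.sh runs on the chosen node, you can optionally watch the reindex progress from any machine with access to Elasticsearch. A sketch using the standard task and cluster health APIs:

```shell
# Show running reindex tasks, including how many documents have been
# processed so far (see status.created / status.total in the output).
curl -s 'http://localhost:9200/_tasks?actions=*reindex&detailed=true&pretty'

# Cluster health should return to green once the migration completes.
curl -s 'http://localhost:9200/_cluster/health?pretty'
```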


The new Elasticsearch version no longer allows running plugins such as head or kopf on the cluster itself. You therefore need to start them on your local machine and use port forwarding to a storage node to connect to ES. In a multi-node setup we recommend using such an addon to watch the status of the ES cluster, indexes, and shards during the upgrade. For example, if you use cerebro:

Installation:

If you use local port 9500 to forward to the ES node, then use: ssh -L 9500:storagehost:9200. After that you can connect cerebro to http://localhost:9500
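
As a concrete sketch of the tunnel (the user and host names here are placeholders for your own environment):

```shell
# Forward local port 9500 to Elasticsearch (port 9200) on the storage node.
# "admin" and "storagehost" are placeholders; substitute your own values.
ssh -N -L 9500:localhost:9200 admin@storagehost

# In a second terminal, verify the tunnel works before pointing cerebro at it:
curl -s 'http://localhost:9500/_cluster/health?pretty'
```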

Upgrade each cluster node by running:

CentOS 6 / RHEL 6:

First run the following on all cluster nodes one at a time:

yum update squirro-cluster-node-users
yum update squirro-python-squirro.service.cluster

Followed by running the following on all cluster nodes one at a time:

yum update squirro-*
monit monitor all

CentOS 7:

yum update squirro-cluster-node-users
yum update squirro-*
for service in $(ls sq*d.service); do echo "Starting $service"; systemctl start $service; done


4. Elasticsearch settings

# edit /etc/elasticsearch/templates/squirro_v8.json and change these values:

"settings": {
   "index": {
       "number_of_shards": 6,
       "number_of_replicas": 0,
       ...
   }
}

$ bash /etc/elasticsearch/templates/ensure_templates.sh
# check that the templates are set correctly:
$ curl 'http://localhost:9200/_template?pretty'

# after the migration has finished, check again that the templates are set correctly:
$ curl 'http://localhost:9200/_template?pretty'
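
Note that a template only affects indexes created after it is installed. To verify that indexes actually picked up the adjusted shard and replica counts, you can inspect the effective index settings directly; a sketch:

```shell
# Show the effective shard and replica counts of all existing indexes.
curl -s 'http://localhost:9200/_all/_settings?pretty' \
    | grep -E 'number_of_(shards|replicas)'
```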