Released on March 2, 2016.

New Features

Improvements

Bug Fixes


Upgrade Instructions

These upgrade instructions apply when upgrading from version 2.2.6. If you are upgrading from an older version (e.g. v2.1.5), please follow the upgrade instructions in Squirro 2.2.0 - Release Notes.

If you are using Squirro in a Box, there are additional steps involved; in that case, please contact support.


1. Upgrade Storage Nodes

If your storage node runs in the same virtual machine or operating system as your cluster node, skip this step. Otherwise, upgrade all storage nodes one at a time by running:

[squirro@storagenode01 ~] sudo yum update

2. Upgrade Cluster Nodes

[squirro@clusternode01 ~] sudo yum update squirro-cluster-node-users
[squirro@clusternode01 ~] sudo yum update

Because this release adds a new service, some additional steps are required.

On a single-cluster-node setup, perform the following steps on the cluster node:

[squirro@clusternode01 ~] sudo service monit restart
[squirro@clusternode01 ~] sudo service nginx restart

If you run Squirro in a multi-cluster-node environment, perform the following additional steps on each cluster node:

[squirro@clusternode01 ~] sudo sed -i -e 's/db_endpoint_discovery = false/db_endpoint_discovery = true/' /etc/squirro/trends.ini
[squirro@clusternode01 ~] sudo mkdir /mnt/gv0/trends_data
[squirro@clusternode01 ~] sudo chown sqtrends:squirro /mnt/gv0/trends_data
[squirro@clusternode01 ~] sudo sed '$ a [offline_processing]\ndata_directory = /mnt/gv0/trends_data' -i /etc/squirro/trends.ini
[squirro@clusternode01 ~] sudo service monit restart
[squirro@clusternode01 ~] sudo service nginx restart
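The two sed edits above can be sanity-checked on a throwaway copy before touching the live file. A minimal sketch — the sample trends.ini content below is hypothetical; the real file contains more settings:

```shell
# Demonstrate the two sed edits on a throwaway copy of trends.ini.
# The sample content is hypothetical; the real file has more settings.
cat > /tmp/trends.ini <<'EOF'
[trends]
db_endpoint_discovery = false
EOF

# Flip endpoint discovery on, as in the step above
sed -i -e 's/db_endpoint_discovery = false/db_endpoint_discovery = true/' /tmp/trends.ini

# Append the [offline_processing] section pointing at the shared volume
sed '$ a [offline_processing]\ndata_directory = /mnt/gv0/trends_data' -i /tmp/trends.ini

cat /tmp/trends.ini
```

Once the copy looks right, the same commands can be run with a clear conscience against /etc/squirro/trends.ini as shown in the step above.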


3. Resolve configurations (if required)

The Squirro packages attempt to upgrade *.ini and *.conf configuration files automatically. However, if you have made local modifications, the upgrade produces *.rpmnew files that you need to merge manually. We recommend:

  1. backing up the previous *.ini and *.conf files to *.ini.orig and *.conf.orig,
  2. renaming the *.ini.rpmnew and *.conf.rpmnew files to *.ini and *.conf respectively, and
  3. inspecting all the *.orig files individually and porting any local settings manually.

Use the following commands to find and resolve any unmerged configuration files on each cluster node:

[squirro@clusternode01 ~] sudo su
[root@clusternode01 ~] FILES_TO_RESOLVE=`ls /etc/squirro/*.ini.rpmnew /etc/nginx/conf.d/*.conf.rpmnew /etc/monit.d/*.rpmnew 2> /dev/null | sed -e "s/\.rpmnew//"`
[root@clusternode01 ~] for CONFIG_FILE in ${FILES_TO_RESOLVE}; do cp ${CONFIG_FILE} ${CONFIG_FILE}.orig; cp ${CONFIG_FILE}.rpmnew ${CONFIG_FILE}; rm ${CONFIG_FILE}.rpmnew; done
[root@clusternode01 ~] for CONFIG_FILE in ${FILES_TO_RESOLVE}; do vim -O ${CONFIG_FILE}.orig ${CONFIG_FILE}; done
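After porting your settings, it is worth confirming that nothing was missed. The same paths the resolve script scans should come back empty; the fallback echo below is only for readability:

```shell
# Confirm no unmerged configuration files remain; seeing only the
# fallback message means every file was resolved.
ls /etc/squirro/*.ini.rpmnew /etc/nginx/conf.d/*.conf.rpmnew /etc/monit.d/*.rpmnew 2> /dev/null \
  || echo "all configuration files resolved"
```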