These upgrade instructions explain how to update an existing Linux-based Squirro installation to the current version.
Only follow these instructions if a specific Squirro release notes document references these as the correct upgrade instructions.
Before Upgrading
Maintenance window
A Squirro upgrade requires downtime. This is typically short, but it still affects the user experience for a moment. As a result, upgrades to production systems should only be run in a maintenance window.
Storage vs. Cluster Nodes
Squirro distinguishes between storage and cluster nodes. These can be installed on the same server or split up. Both can also be horizontally scaled and have multiple instances. For full details see How Squirro Scales.
Independent of the setup, always execute all the storage node updates in the instructions below first, and only then the cluster node updates. This applies whether cluster and storage nodes run on the same system, are split onto one server each, or have multiple instances each.
Offline Repository
If you have used the offline installation, or have an internal mirror of the Squirro yum repository, please make sure this is updated prior to starting the Squirro update.
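For a yum-based mirror, the refresh can be sketched roughly as follows. The repo id squirro and the mirror path are assumptions for illustration; the exact commands depend on how your mirror was originally set up.

```shell
# Sketch: refresh an internal mirror of the Squirro yum repository before
# upgrading. The repo id "squirro" and MIRROR_DIR are assumptions; substitute
# the values from your own mirror setup.
MIRROR_DIR="${MIRROR_DIR:-/var/www/html}"
if command -v reposync >/dev/null 2>&1; then
    # Download new and updated packages for the repo into $MIRROR_DIR/squirro
    reposync -p "$MIRROR_DIR" --repoid=squirro
    # Rebuild the repository metadata so clients see the new packages
    createrepo --update "$MIRROR_DIR/squirro"
fi
```

After the metadata is rebuilt, `yum clean all` on the Squirro servers (the first step below) ensures they pick up the refreshed mirror.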
Upgrading
Storage nodes
Run these instructions on every single server that has storage nodes installed before moving on to the cluster nodes.
# Clear all cached metadata
yum clean all
# Update Java JDK
yum update java-1.8.0-openjdk
# Update Elasticsearch
yum update elasticsearch
# Update storage node
yum update squirro-storage-node-users
yum update squirro-storage-node
Cluster nodes
Please note that user-provided custom assets which include a requirements.txt file (e.g., a custom data loader plugin) might need to be re-uploaded after the upgrade if any of the packages they depend on do not work on Python 3.8.
# Clear all cached metadata
yum clean all
# Update Java JDK
yum update java-1.8.0-openjdk
# Update cluster node
yum update squirro-cluster-node-users
yum update squirro-cluster-node
# Cleanup
yum autoremove
mv /opt/squirro/virtualenv3 /opt/squirro/virtualenv3.old
# /opt/squirro/virtualenv3.old is not needed for Squirro platform operation
# Make sure that all the Squirro packages have been updated
yum update squirro-*
At this point you will have to check if any *.rpmnew files have been created in /etc/squirro. See Configuration Conflicts for how to handle those conflicts. For example, /etc/squirro/common.ini needs resolution if you are upgrading from a version that does not yet have the pdfconversion service.
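A quick way to locate any leftover conflict files is a find over the configuration directory. This is a generic sketch, not part of the Squirro tooling:

```shell
# Sketch: list unresolved *.rpmnew configuration conflicts after the upgrade.
# CONF_DIR defaults to /etc/squirro, the directory named in these instructions.
CONF_DIR="${CONF_DIR:-/etc/squirro}"
if [ -d "$CONF_DIR" ]; then
    find "$CONF_DIR" -name '*.rpmnew'
fi
# For each hit, compare the packaged default against your active file, e.g.:
# diff /etc/squirro/common.ini /etc/squirro/common.ini.rpmnew
```

Once the relevant changes have been merged into the active file, the *.rpmnew copy can be deleted so the check comes back clean on the next upgrade.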
Finally, restart the services:
systemctl restart mariadb
systemctl reload nginx
squirro_restart
Then validate that everything comes up (this may have to be run a few times until everything stabilizes):
squirro_status
The yum autoremove command can cause the Squirro systemd services not to start during a reboot. We recommend issuing the following commands to fix this issue, then rebooting the system to ensure it works. This is only an issue with upgrades from version 3.4 and earlier.
systemctl enable sqconfigurationd.service
systemctl enable sqcontentd.service
systemctl enable sqdatasourced.service
systemctl enable sqdigestmailerd.service
systemctl enable sqemailsenderd.service
systemctl enable sqfilteringd.service
systemctl enable sqfrontendd.service
systemctl enable sqingesterd.service
systemctl enable sqmachinelearningd.service
systemctl enable sqpdfconversiond.service
systemctl enable sqplumberd.service
systemctl enable sqproviderd.service
systemctl enable sqrelatedstoryd.service
systemctl enable sqschedulerd.service
systemctl enable sqthumblerd.service
systemctl enable sqtopicd.service
systemctl enable sqtopicproxyd.service
systemctl enable sqtrendsd.service
systemctl enable squserd.service
systemctl enable squserproxyd.service
systemctl enable sqwebshotd.service