This section covers adding cluster nodes to an existing Squirro installation. Refer to the Setup on Linux documentation for the base installation.
Overview
For background on Squirro cluster setups, refer to the How Squirro Scales document. It describes in detail the components of Squirro and their scaling considerations.
Prerequisites
Please refer to the prerequisites in the Setup on Linux document. In summary, ensure that:
- The Linux machines are set up. Both Red Hat® Enterprise Linux® (RHEL) and its open source derivative CentOS Linux are supported.
- The network is set up and all the machines that are to become part of the Squirro cluster can talk to each other.
- The firewalls between those machines are open, with the documented ports accessible.
- The Squirro YUM repository is configured and accessible. In enterprise environments where this poses a problem, an offline installation is available. Contact Squirro support in this case.
Expansion Process
Any expansion of the cluster requires work on both the existing and the new nodes. The processes below are therefore split into sections based on where the work is to be executed.
The process as described here involves some cluster downtime. It is possible to expand a Squirro cluster without any downtime, but that process requires more planning and orchestration. If you need a downtime-free expansion, contact Squirro support.
Storage Node Installation
The Squirro storage nodes are based on Elasticsearch. Some of the configuration for adding a storage node is thus in the Elasticsearch configuration files.
Process on the new server
The following process should be applied on the new storage node.
...
Note: Make sure that these setting values are copied from the previous storage nodes to the new one - and not the other way around.
...
- Set `discovery.zen.ping.multicast.enabled` to `false`.
- Set `discovery.zen.ping.unicast.hosts` to a list of all the storage nodes that have been set up. For example:

  ```text
  discovery.zen.ping.unicast.hosts: ["10.1.87.20", "10.1.87.22"]
  ```
This is the easiest way to set up discovery and to make sure all the Elasticsearch nodes can see each other. There are other ways of configuring discovery of the Elasticsearch nodes; these are documented in the Discovery section of the Elasticsearch manual.
Restart the service for the settings to take effect:

```bash
service elasticsearch restart
```
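To verify that the new node has joined the cluster, you can query Elasticsearch directly. This check is not part of the original procedure; it uses the standard Elasticsearch cat and cluster health APIs and assumes the default HTTP port 9200:

```bash
# List all nodes that are currently part of the Elasticsearch cluster
curl -s 'http://127.0.0.1:9200/_cat/nodes?v'

# Overall cluster health; the status should eventually return to green
curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'
```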
...
Load the templates into Elasticsearch:

```bash
cd /etc/elasticsearch/templates
./ensure_templates.sh
```
Process on all the other storage nodes
On all the previously set up storage nodes, apply the following changes.
...
- Set `discovery.zen.ping.unicast.hosts` to a list of all the storage nodes that have been set up. For example:

  ```text
  discovery.zen.ping.unicast.hosts: ["10.1.87.20", "10.1.87.22"]
  ```
This list should be the same on all the storage nodes.
Restart the service for the settings to take effect:

```bash
service elasticsearch restart
```
Cluster Node Installation
Process on the new server
The following process should be applied on the new cluster node.
...
Stop all the services by executing the following command:

```bash
monit -g all stop
```
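To confirm that everything has stopped, monit's standard status overview can be used (this check is an addition to the documented procedure):

```bash
# Show the current status of every service managed by monit
monit summary
```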
...
- Edit the `/etc/squirro/cluster.ini` configuration file as follows (all settings are in the `[cluster]` section of the ini file):
...
- `id`: change this to the same value as on the previous cluster nodes - ensuring it's the same value for all cluster nodes.
- `redis_controller`: set this to `true` so that Redis replication is managed by the Squirro cluster service.
- `mysql_controller`: set this to `true` so that MySQL replication is managed by the Squirro cluster service.
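A minimal sketch of the resulting `[cluster]` section; the `id` value shown here is illustrative and must match whatever the existing nodes use:

```ini
[cluster]
# Must be identical on every cluster node (value is illustrative)
id = squirro-cluster
# Let the Squirro cluster service manage Redis and MySQL replication
redis_controller = true
mysql_controller = true
```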
...
Enable endpoint discovery in all Squirro service configuration files:

```bash
sed -i -e 's/^endpoint_discovery = false/endpoint_discovery = true/' /etc/squirro/*.ini
sed -i -e 's/^db_endpoint_discovery = false/db_endpoint_discovery = true/' /etc/squirro/*.ini
```
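To confirm the substitution worked across all the files, a simple grep can be used (not part of the original procedure):

```bash
# Both settings should now read "true" in every file that defines them
grep -H -e '^endpoint_discovery' -e '^db_endpoint_discovery' /etc/squirro/*.ini
```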
...
- Enable MySQL replication. This requires two changes in `/etc/mysql/conf.d/replication.cnf` - both of these values are commented out by default:
  - `server_id`: this integer value needs to be unique across the whole cluster. For example, use `10` for the first server in the cluster, `11` for the second, etc.
  - `report_host`: set this to the human-readable name of the server, as it should be reported to the other hosts - for example `node01`.
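For example, on a hypothetical third cluster node the uncommented settings might look as follows (both values are illustrative):

```ini
[mysqld]
# Unique integer across the whole cluster
server_id = 12
# Human-readable name reported to the other hosts
report_host = node03
```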
Raise the MySQL limits on open files and maximum connections in `/etc/mysql/conf.d/maxconnnections.cnf`:

```ini
[mysqld]
open_files_limit = 8192
max_connections = 500
```

The `max_connections` setting should be set higher depending on the number of cluster nodes. We recommend at least 150 connections for each cluster node; a three-node cluster, for example, should allow at least 3 × 150 = 450 connections, so the 500 shown above is sufficient.
Zookeeper
...
Add a list of all cluster nodes to Zookeeper. Edit `/etc/zookeeper/zoo.cfg` and list all the cluster nodes (including this new server):

```text
server.1=10.1.87.10:2888:3888
server.2=10.1.87.11:2888:3888
…
```
Start Zookeeper:

```bash
service zookeeper start
```
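To check that Zookeeper is running and has joined the ensemble, its standard four-letter-word commands can be used (this check is an addition to the documented steps and requires nc to be installed):

```bash
# Replies "imok" if the server is running
echo ruok | nc 127.0.0.1 2181

# Shows connection statistics and whether this node is a leader or follower
echo stat | nc 127.0.0.1 2181
```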
...
At this point follow the Process on all the other cluster nodes section and make sure a cluster leader is elected.
...
Start the cluster node:

```bash
service sqclusterd start
```
...
Wait for the cluster node to come up and make sure the election leader is the same one as on the previous nodes:

```bash
curl -s http://127.0.0.1:81/service/cluster/v0/leader/cluster.json | python -mjson.tool | grep electionLeader
```

This command may have to be repeated a few times until a result is returned.
Start all other services:

```bash
monit -g all-active start
```
Process on all the other cluster nodes
This process needs to happen together with the Zookeeper configuration on the new cluster node.
Add the new server to the Zookeeper configuration. Edit `/etc/zookeeper/zoo.cfg` and list all the cluster nodes (including the new server):

```text
server.1=10.1.87.10:2888:3888
server.2=10.1.87.11:2888:3888
…
```
...
Restart Zookeeper:

```bash
service zookeeper restart
```
...
Check that the election leader points to one of the existing nodes:

```bash
curl -s http://127.0.0.1:81/service/cluster/v0/leader/cluster.json | python -mjson.tool | grep electionLeader
```
This outputs a line containing the node that is currently elected as the leader of the Squirro cluster.
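The output looks along these lines; the address is illustrative and the exact formatting may differ between versions:

```text
"electionLeader": "10.1.87.10",
```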
Setting up Cluster Node Storage
Some parts of Squirro require a shared file system. This is used for:
- Uploading data loader plugins, pipelets and custom widgets to a cluster
- Handling of the trend detection training data
- Uploading of files through the Squirro frontend and handling of crawling output
- Indexing binary data files
This shared file system can be provided through any means, such as a NAS or an existing clustered file system.
The following instructions show how to set up such a shared file system with GlusterFS, a clustered file system.
All of the following commands - except where otherwise stated - are executed on the new node being set up.
Install the GlusterFS server:

```bash
yum install -y glusterfs-server
restorecon /var/run/glusterd.socket
service glusterd start
```
...
On the new node, connect to all the previous cluster nodes with the following commands (repeated once for every node):

```bash
gluster peer probe 10.1.87.10
gluster peer probe 10.1.87.11
```
On all previous nodes, execute the following command (with the IP of the new server that is being added):

```bash
gluster peer probe 10.1.87.12
```
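To verify that all the peers see each other, GlusterFS provides a status command (an additional check, not part of the original steps):

```bash
# Should list every other node with state "Peer in Cluster (Connected)"
gluster peer status
```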
Create or extend the volume
If this is the first installation, create the cluster file system. For fresh installations, steps 1-2 can be executed on all the servers first, so that the volume create command only has to be executed once. Note that the number of bricks must be a multiple of the replica count, so with three bricks the replica count is 3:

```bash
gluster volume create gv0 replica 3 10.1.87.10:/var/lib/squirro/storage/gv0/brick0 10.1.87.11:/var/lib/squirro/storage/gv0/brick0 10.1.87.12:/var/lib/squirro/storage/gv0/brick0 force
gluster volume start gv0
```
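To inspect the volume after it is started (a standard GlusterFS command, added here as a check):

```bash
# Shows the brick layout and replica count of the new volume
gluster volume info gv0
```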
...
If this is a new server being added to an existing GlusterFS installation, execute this command to add the server's brick to the volume. For a replicated volume the replica count has to be raised to include the new brick - shown here for a volume growing to three replicas:

```bash
gluster volume add-brick gv0 replica 3 10.1.87.12:/var/lib/squirro/storage/gv0/brick0 force
```
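After adding the brick, a self-heal can be triggered so that existing data is replicated onto the new brick (an additional step; depending on the GlusterFS version this may also happen automatically):

```bash
# Queue a full self-heal of the volume onto the new brick
gluster volume heal gv0 full
```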
...
Configure the cluster storage volume to be mounted. Add the following line to `/etc/fstab`:

```text
127.0.0.1:/gv0 /mnt/gv0 glusterfs defaults 0 0
```
Then create the mount point and mount the new file system:

```bash
mkdir -p /mnt/gv0
mount /mnt/gv0
```
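To confirm the volume is mounted (a quick check, not part of the original procedure):

```bash
# The file system type should show as fuse.glusterfs
df -hT /mnt/gv0
```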
Set up all the required directories in the shared file system:

```bash
install -o sqprovid -g squirro -d /mnt/gv0/storage
install -o sqsqirrl -g squirro -d /mnt/gv0/squirrel_log
install -o sqplumbr -g squirro -d /mnt/gv0/pipelets
install -o sqfile -g squirro -d /mnt/gv0/fileimport
install -o sqfile -g squirro -d /mnt/gv0/fileimport/uploads
install -o sqtrends -g squirro -d /mnt/gv0/trends_data
install -o sqtopic -g squirro -d /mnt/gv0/assets
install -o sqtopic -g squirro -d /mnt/gv0/widgets
```
Change the configuration of the various Squirro services to point to the shared folders. For each configuration file, the desired sections and values are shown below; all values that are not listed here should be left unmodified.
```ini
[storage]
default_bucket = cluster
```

```ini
[storage]
directory = /mnt/gv0/squirrel_log/
url_prefix = /storage/cluster/squirrel_log/
```

```ini
[storage]
directory = /mnt/gv0/pipelets/
```

```ini
[topic]
custom_assets_directory = /mnt/gv0/assets/
custom_widgets_directory = /mnt/gv0/widgets/
```

```ini
[offline_processing]
data_directory = /mnt/gv0/trends_data
```
Replace the previous assets and widgets folders with symlinks:

```bash
rm -ir /var/lib/squirro/topic/assets
rm -ir /var/lib/squirro/topic/widgets
ln -s /mnt/gv0/assets /var/lib/squirro/topic/assets
ln -s /mnt/gv0/widgets /var/lib/squirro/topic/widgets
```
...
Verify that the configuration is still valid:

```bash
nginx -t
```
Reload the nginx configuration:

```bash
service nginx reload
```
Restart all services:
...
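For example, using the same monit service groups as in the earlier steps (the exact command is an assumption):

```bash
# Restart all active services managed by monit
monit -g all-active restart
```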