This section covers adding cluster nodes to an existing Squirro installation. Refer to the Setup on Linux documentation for the base installation.
For background on Squirro cluster setups, refer to the How Squirro Scales document. It describes the components of Squirro and their scaling considerations in detail.
Please also refer to the prerequisites in the Setup on Linux document and make sure they are met before proceeding.
Any expansion of the cluster requires work on both the existing and the new nodes. The processes below are therefore split into sections based on where the work is to be executed.
The process as described here involves some cluster downtime. It is possible to expand a Squirro cluster without any downtime, but that process requires more planning and orchestration. If you need a downtime-free expansion, contact Squirro support.
The Squirro storage nodes are based on Elasticsearch. Some of the configuration for adding a storage node is thus in the Elasticsearch configuration files.
Process on the new server
The following process should be applied on the new storage node.
Copy the following settings from the /etc/elasticsearch/elasticsearch.yml file of an existing storage node to the new server:
cluster.name
discovery.zen.minimum_master_nodes
Make sure that these setting values are copied from the previous storage nodes to the new one - and not the other way around.
In /etc/elasticsearch/elasticsearch.yml change the following settings:
Set discovery.zen.ping.unicast.hosts to a list of all the storage nodes that have been set up. For example:
discovery.zen.ping.unicast.hosts: ["10.1.87.20", "10.1.87.22"]
This is the easiest way to set up discovery and to make sure all the Elasticsearch nodes can see each other. There are other ways of configuring discovery of the Elasticsearch nodes; these are documented by Elasticsearch in the Discovery section of its manual.
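Taken together, the relevant part of the new node's elasticsearch.yml might then look like the following sketch. The cluster name and the minimum_master_nodes value shown here are only placeholders - always copy the real values from an existing storage node:
cluster.name: squirro-cluster
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["10.1.87.20", "10.1.87.22"]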
Restart the service for the settings to take effect.
service elasticsearch restart
Adjust number_of_shards and number_of_replicas in /etc/elasticsearch/templates/squirro_v9.json. With more than one storage node, number_of_replicas is usually set to 1.
Put the new templates to Elasticsearch:
cd /etc/elasticsearch/templates
./ensure_templates.sh
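For illustration, the relevant fragment of squirro_v9.json could then look roughly like this; the shard count is only an example and the rest of the template is omitted:
{
    "settings": {
        "number_of_shards": 6,
        "number_of_replicas": 1
    }
}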
On all the previously set up storage nodes, execute these changes.
In /etc/elasticsearch/elasticsearch.yml change the following settings:
Set discovery.zen.ping.unicast.hosts to a list of all the storage nodes that have been set up. For example:
discovery.zen.ping.unicast.hosts: ["10.1.87.20", "10.1.87.22"]
This list should be the same on all the storage nodes.
Restart the service for the settings to take effect.
service elasticsearch restart
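Once all storage nodes have been restarted, you can check that they see each other by listing the cluster members - assuming Elasticsearch listens on its default port 9200:
curl -s 'http://127.0.0.1:9200/_cat/nodes?v'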
The following process should be applied on the new cluster node.
Stop all the services by executing the following command:
monit -g all stop
Change the /etc/squirro/cluster.ini configuration file as follows (all settings are in the [cluster] section of the ini file):
id: change this to the same value as on the previous cluster nodes - ensuring it's the same value for all cluster nodes.
redis_controller: set this to true so that Redis replication is managed by the Squirro cluster service.
mysql_controller: set this to true so that MySQL replication is managed by the Squirro cluster service.
Turn on endpoint discovery for all Redis and database connections. This ensures that the services consult the cluster service to know which cluster node is currently the master. Changing this requires all the endpoint_discovery (for Redis) and db_endpoint_discovery (for MySQL) configuration entries in every /etc/squirro/*.ini file to be set to true. This can be automated with the following sed commands:
sed -i -e 's/^endpoint_discovery = false/endpoint_discovery = true/' /etc/squirro/*.ini
sed -i -e 's/^db_endpoint_discovery = false/db_endpoint_discovery = true/' /etc/squirro/*.ini
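For orientation, the [cluster] section of /etc/squirro/cluster.ini might then look roughly like this; the id value below is purely illustrative and must be replaced with the value from the existing cluster nodes:
[cluster]
id = e5bf0431-ae4e-4e62-b80f-5c1640423bd9
redis_controller = true
mysql_controller = true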
Set the following values in /etc/mysql/conf.d/replication.cnf - both of these values are commented out by default:
server_id: this integer value needs to be unique across the whole cluster. For example use 10 for the first server in the cluster, 11 for the second, etc.
report_host: set this to the human-readable name of the server, as it should be reported to the other hosts - for example node01.
Raise the MySQL limits on open files and maximum connections:
[mysqld]
open_files_limit = 8192
max_connections = 500
The max_connections setting should be set higher depending on the number of cluster nodes. We recommend at least 150 connections for each cluster node.
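For reference, after these changes the uncommented part of /etc/mysql/conf.d/replication.cnf could look like the following sketch; the values are examples for a third cluster node:
[mysqld]
server_id = 12
report_host = node03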
Zookeeper
Set a unique ID for this node in /var/lib/zookeeper/data/myid.
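For example, to assign the ID 3 to this node (the number is arbitrary, but it must be unique within the cluster and match this node's server.3 entry in zoo.cfg below):
echo 3 > /var/lib/zookeeper/data/myid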
Add a list of all cluster nodes to Zookeeper. Edit /etc/zookeeper/zoo.cfg and list all the cluster nodes (including this new server):
server.1=10.1.87.10:2888:3888
server.2=10.1.87.11:2888:3888
…
Start Zookeeper:
service zookeeper start
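To check that Zookeeper is running and has joined the ensemble, its status can be queried with the stat four-letter command - assuming nc is installed and the default client port 2181 is used:
echo stat | nc 127.0.0.1 2181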
At this point follow the Process on all the other cluster nodes section and make sure a cluster leader is elected.
Start the cluster node:
service sqclusterd start
Wait for the cluster node to come up. Make sure the election leader is the same one as on the previous nodes.
curl -s http://127.0.0.1:81/service/cluster/v0/leader/cluster.json | python -mjson.tool | grep electionLeader
This command may have to be repeated a few times until a result is returned.
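The output should be a single line containing the electionLeader key; the value below is only an illustration of what to expect, and the exact format depends on the Squirro version:
"electionLeader": "10.1.87.10",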
Start all other services:
monit -g all-active start
This process needs to happen together with the Zookeeper configuration on the new cluster node.
Add the new server to the Zookeeper configuration. Edit /etc/zookeeper/zoo.cfg and list all the cluster nodes (including the new server):
server.1=10.1.87.10:2888:3888
server.2=10.1.87.11:2888:3888
…
This list should be identical on every cluster node.
Restart Zookeeper:
service zookeeper restart
Check that the election leader points to one of the existing nodes:
curl -s http://127.0.0.1:81/service/cluster/v0/leader/cluster.json | python -mjson.tool | grep electionLeader
This will output a line containing the node that is currently selected as the leader by the Squirro cluster service.
Some parts of Squirro require a shared file system. It is used for data that all cluster nodes need to access, such as uploaded files, custom assets and widgets, pipelets, and trends data (see the directory list below).
This shared file system can be provided through any means, such as a NAS or an existing clustered file system.
The following instructions show how to set up such a shared file system with GlusterFS, a clustered file system.
All of the following commands - except where otherwise stated - are executed on the new node being set up.
Install GlusterFS server
yum install -y glusterfs-server
restorecon /var/run/glusterd.socket
service glusterd start
On the new node, connect to all the previous cluster nodes with the following commands (repeated once for every node):
gluster peer probe 10.1.87.10
gluster peer probe 10.1.87.11
On all previous nodes execute the following command (with the IP of the new server that is being added):
gluster peer probe 10.1.87.12
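To verify that all peers see each other, the peer status can be checked on any of the nodes:
gluster peer status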
Create or extend the volume
If this is the first installation, create the cluster file system. For fresh installations, steps 1-2 can be executed on all the servers first, so that the "volume create" command here only needs to be executed once.
gluster volume create gv0 replica 2 10.1.87.10:/var/lib/squirro/storage/gv0/brick0 10.1.87.11:/var/lib/squirro/storage/gv0/brick0 10.1.87.12:/var/lib/squirro/storage/gv0/brick0 force
gluster volume start gv0
Repeat the IP/directory part of the command for every cluster node, both the previous ones as well as the new one.
The replica 2 option can be modified to indicate how many replicas should be kept of each file. The force option at the end confirms that it is okay to create the volume on the Linux system's root file system partition.
If this is a new server being added to an existing GlusterFS installation, execute this command to use the new server for the volume:
gluster volume add-brick gv0 10.1.87.12:/var/lib/squirro/storage/gv0/brick0 force
The force option at the end confirms that it is okay to create the volume on the Linux system's root file system partition.
Configure the cluster storage volume to be mounted. Add the following line to /etc/fstab:
127.0.0.1:/gv0 /mnt/gv0 glusterfs defaults 0 0
Then create the mount-point and mount the new file system:
mkdir -p /mnt/gv0
mount /mnt/gv0
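To confirm that the GlusterFS volume is mounted correctly, check the mount point:
df -h /mnt/gv0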
Set up all the required directories in the shared file system:
install -o sqprovid -g squirro -d /mnt/gv0/storage
install -o sqsqirrl -g squirro -d /mnt/gv0/squirrel_log
install -o sqplumbr -g squirro -d /mnt/gv0/pipelets
install -o sqfile -g squirro -d /mnt/gv0/fileimport
install -o sqfile -g squirro -d /mnt/gv0/fileimport/uploads
install -o sqtrends -g squirro -d /mnt/gv0/trends_data
install -o sqtopic -g squirro -d /mnt/gv0/assets
install -o sqtopic -g squirro -d /mnt/gv0/widgets
Change the configuration of the various Squirro services to point to the right folders. The snippets below show the desired sections and values for each configuration file - all values that are not listed here should be left unmodified.
[storage]
default_bucket = cluster

[storage]
directory = /mnt/gv0/squirrel_log/
url_prefix = /storage/cluster/squirrel_log/

[storage]
directory = /mnt/gv0/pipelets/

[topic]
custom_assets_directory = /mnt/gv0/assets/
custom_widgets_directory = /mnt/gv0/widgets/

[offline_processing]
data_directory = /mnt/gv0/trends_data
Replace the previous assets and widgets folders with symlinks:
rm -ir /var/lib/squirro/topic/assets
rm -ir /var/lib/squirro/topic/widgets
ln -s /mnt/gv0/assets /var/lib/squirro/topic/assets
ln -s /mnt/gv0/widgets /var/lib/squirro/topic/widgets
In /etc/nginx/conf.d/frontend.conf change a few of the alias declarations:
In the location /storage/localfile/ block, change the alias from its default to alias /mnt/gv0/storage/
In the location /storage/squirrel_log block, change the alias from its default to alias /mnt/gv0/squirrel_log/
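As a minimal sketch, the two adjusted location blocks might then look as follows; any other directives inside these blocks remain unchanged:
location /storage/localfile/ {
    alias /mnt/gv0/storage/;
}
location /storage/squirrel_log {
    alias /mnt/gv0/squirrel_log/;
}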
Verify that the configuration is still valid:
nginx -t
Reload the nginx configuration:
service nginx reload
Restart all services:
monit -g all-active restart
This could be caused by a network monitoring tool closing all idle connections at a periodic interval. In this case, try lowering the TCP keep-alive used by the system and services.
For example, to set the value to 600 seconds:
echo 600 > /proc/sys/net/ipv4/tcp_keepalive_time
Where a service offers its own keep-alive switch (newer Zookeeper versions, for example, accept this option in zoo.cfg), enable it as well:
tcpKeepAlive=true
The echo command above only lasts until the next reboot. To avoid Elasticsearch cluster disconnects permanently, the same keep-alive behaviour can be configured through sysctl settings:
# lower keepalive settings to avoid elasticsearch cluster disconnects
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 20
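Assuming these lines are saved to a file under /etc/sysctl.d/ (the file name below is only an example), they can be loaded without a reboot:
sysctl -p /etc/sysctl.d/90-tcp-keepalive.conf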