
This section covers adding cluster nodes to a Squirro installation. Refer to the Setup on Linux documentation for the base installation.

Overview

For background on Squirro cluster setups, refer to the How Squirro Scales document. It covers the components of Squirro and their scaling considerations in detail.

Prerequisites

Please refer to the prerequisites in the Setup on Linux document. In summary, ensure that:

  • The Linux machines are set up. Red Hat® Enterprise Linux® (RHEL) and its open source derivative CentOS Linux are both supported.
  • The network is set up and all machines that are to become part of the Squirro cluster can reach each other.
  • The firewalls between those machines are open, with the documented ports accessible.
  • The Squirro YUM repository is configured and accessible. In enterprise environments where this poses a problem, an offline installation is available; contact support in this case.

Expansion Overview

Any expansion of the cluster requires work on both the existing nodes and the new ones. The processes below split this work into sections based on where it is to be executed.

The process as described here involves some cluster downtime. It is possible to expand a Squirro cluster without any downtime, but that requires more planning and orchestration. If you need a downtime-free expansion, contact Squirro support.

Storage Node Installation

The Squirro storage nodes are based on Elasticsearch. Some of the configuration for adding a storage node is thus in the Elasticsearch configuration files.

Process on the new server

The following process should be applied on the new storage node.

  1. Install the storage node package, as described in the section Storage Node Installation of Setup on Linux.
  2. Apply some of the configuration from the previous storage nodes to the new one. Copy the following settings from /etc/elasticsearch/elasticsearch.yml on an existing storage node into the same file on the new server:
    • cluster.name 
    • discovery.zen.minimum_master_nodes

      Make sure these setting values are copied from the previous storage nodes to the new one, and not the other way around.
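
      For illustration, the relevant part of /etc/elasticsearch/elasticsearch.yml on the new node might then look as follows. The values shown are placeholders; use the exact values from your existing nodes.

      cluster.name: squirro-cluster
      discovery.zen.minimum_master_nodes: 2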

  3. Allow the hosts to discover each other. Again in /etc/elasticsearch/elasticsearch.yml, change the following settings:
    • Set discovery.zen.ping.unicast.hosts to a list of all the storage nodes that have been set up. For example:

      discovery.zen.ping.unicast.hosts: ["10.1.87.20", "10.1.87.22"]

      This is the easiest way to set up discovery and make sure all the Elasticsearch nodes can see each other. There are other ways of configuring Elasticsearch node discovery; these are documented in the Discovery section of the Elasticsearch manual.

  4. Restart the service for the settings to take effect. 

    service elasticsearch restart
  5. Set up the number of shards and replicas
    • Modify number_of_shards and number_of_replicas in /etc/elasticsearch/templates/squirro_v9.json. With more than one storage node, number_of_replicas should usually be set to 1.
    • Push the updated templates to Elasticsearch:

      cd /etc/elasticsearch/templates
      ./ensure_templates.sh
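
      As an illustration, the relevant part of the template might look like the excerpt below after the change. The numbers shown are example values, and the exact structure of squirro_v9.json can differ between Squirro versions, so only adjust the two settings mentioned above.

      {
        "settings": {
          "number_of_shards": 6,
          "number_of_replicas": 1
        }
      }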

Process on all the other storage nodes

On all the previously set up storage nodes, execute these changes.

  1. Allow the hosts to discover each other. In /etc/elasticsearch/elasticsearch.yml change the following settings:
    • Set discovery.zen.ping.unicast.hosts to a list of all the storage nodes that have been set up. For example:

      discovery.zen.ping.unicast.hosts: ["10.1.87.20", "10.1.87.22"]

      This list should be the same on all the storage nodes.

  2. Restart the service for the settings to take effect. 

    service elasticsearch restart
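  3. Verify that the new storage node has joined the cluster. This can be done from any node by querying the Elasticsearch cluster health, assuming Elasticsearch is listening on its default port 9200:

    curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'

    The number_of_nodes field should now include the new storage node, and the status should return to green once the shards have been redistributed.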

Cluster Node Installation

Process on the new server

The following process should be applied on the new cluster node.

  1. Install the cluster node package, as described in the section Cluster Node Installation of Setup on Linux.
  2. Stop all the services by executing the following command:

    monit -g all stop
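
    To confirm that all services have stopped, you can inspect the monit status overview:

    monit summary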
  3. Squirro cluster service
    1. Edit the /etc/squirro/cluster.ini configuration file as follows (all settings are in the [cluster] section of the ini file):
      1. id: change this to the same value as on the previous cluster nodes - ensuring it's the same value for all cluster nodes.
      2. redis_controller: set this to true so that Redis replication is managed by the Squirro cluster service.
      3. mysql_controller: set this to true so that MySQL replication is managed by the Squirro cluster service.
    2. Turn on endpoint discovery for all Redis and database connections. This ensures that the services consult the cluster service to know which cluster node is currently the master.

      Changing this requires the endpoint_discovery (for Redis) and db_endpoint_discovery (for MySQL) configuration entries in every /etc/squirro/*.ini file to be set to true. This can be automated with the following sed commands:

      sed -i -e 's/^endpoint_discovery = false/endpoint_discovery = true/' /etc/squirro/*.ini
      sed -i -e 's/^db_endpoint_discovery = false/db_endpoint_discovery = true/' /etc/squirro/*.ini
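
      To verify the result, you can list any files that still contain entries set to false (the command should produce no output):

      grep -E -l '^(endpoint_discovery|db_endpoint_discovery) = false' /etc/squirro/*.ini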
  4. MySQL
    1. Enable MySQL replication. This requires two changes in /etc/mysql/conf.d/replication.cnf - both of these values are commented out by default:
      1. server_id: this integer value needs to be a unique value over the whole cluster. For example use 10 for the first server in the cluster, 11 for the second, etc.
      2. report_host: set this to the human-readable name of the server, as it should be reported to the other hosts - for example node01.
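
      For illustration, the uncommented values in /etc/mysql/conf.d/replication.cnf on the second cluster node might look like the excerpt below (the host name node02 is only an example):

      /etc/mysql/conf.d/replication.cnf (excerpt)
      [mysqld]
      server_id = 11
      report_host = node02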
    2. Raise the MySQL limits on open files and maximum connections.

      /etc/mysql/conf.d/maxconnections.cnf
      [mysqld]
      open_files_limit = 8192
      max_connections = 500

      The max_connections setting should be raised depending on the number of cluster nodes. We recommend at least 150 connections per cluster node; a three-node cluster, for example, should allow for at least 450 connections.

  5. Zookeeper

    1. Set the unique Zookeeper node identifier. This ID needs to start at 1 and be incremented by 1 for each additional node. Write this identifier to /var/lib/zookeeper/data/myid.
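
      For example, if this is the third cluster node, the identifier can be written as follows:

      echo "3" > /var/lib/zookeeper/data/myid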
    2. Add a list of all cluster nodes to Zookeeper. Edit /etc/zookeeper/zoo.cfg and list all the cluster nodes (including this new server):

      /etc/zookeeper/zoo.cfg
      server.1=10.1.87.10:2888:3888
      server.2=10.1.87.11:2888:3888
      …
    3. Start Zookeeper:

      service zookeeper start
    4. At this point follow the Process on all the other cluster nodes section and make sure a cluster leader is elected.

  6. Starting
    1. Start the cluster node:

      service sqclusterd start
    2. Wait for the cluster node to come up. Make sure the election leader is the same one as on the previous nodes.

      curl -s http://127.0.0.1:81/service/cluster/v0/leader/cluster.json | python -mjson.tool | grep electionLeader

      This command may have to be repeated a few times until a result is returned.
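
      Once a result is returned, the output is the line containing the electionLeader key; its value identifies the current leader and should match the value reported on the existing nodes. With the example addresses used in this document it might look like this (placeholder value):

      "electionLeader": "10.1.87.10",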
       

    3. Start all other services:

      monit -g all-active start

Process on all the other cluster nodes

This process needs to happen together with the Zookeeper configuration on the new cluster node.

  1. Add the new server to the Zookeeper configuration. Edit /etc/zookeeper/zoo.cfg and list all the cluster nodes (including the new server):

    /etc/zookeeper/zoo.cfg
    server.1=10.1.87.10:2888:3888
    server.2=10.1.87.11:2888:3888
    …

    This list should be identical on every cluster node.

  2. Restart Zookeeper:

    service zookeeper restart
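
    After the restart, the Zookeeper state can optionally be checked with the stat four-letter command. This assumes the nc utility is installed and Zookeeper is listening on its default client port 2181; the Mode line reports whether the node is currently a leader or a follower:

    echo stat | nc 127.0.0.1 2181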
  3. Check that the election leader points to one of the existing nodes:

    curl -s http://127.0.0.1:81/service/cluster/v0/leader/cluster.json | python -mjson.tool | grep electionLeader

    This will output a line containing the node that is currently selected as the leader by the Squirro cluster service.

Setting up Cluster Node Storage

Some parts of Squirro require a shared file system. This is used for:

  • Uploading data loader plugins, pipelets and custom widgets to a cluster
  • Handling of the trend detection training data
  • Uploading of files through the Squirro frontend and handling of crawling output
  • Indexing binary data files

This shared file system can be provided through any means, such as a NAS or an existing clustered file system.

The following instructions show how to set up such a shared file system with GlusterFS, a clustered file system.

All of the following commands - except where otherwise stated - are executed on the new node being set up.

  1. Install GlusterFS server

    yum install -y glusterfs-server
    restorecon /var/run/glusterd.socket
    service glusterd start
  2. Set up connectivity
    • On the new node, connect to all the previous cluster nodes with the following commands (repeated once for every node):

      gluster peer probe 10.1.87.10
      gluster peer probe 10.1.87.11
    • On all previous nodes execute the following command (with the IP of the new server that is being added):

      gluster peer probe 10.1.87.12
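
      Once the probes have been run, the peering can be verified on any node; every other cluster node should be listed in the state Peer in Cluster (Connected):

      gluster peer status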
  3. Create or extend the volume

    • If this is the first installation, create the cluster file system. For fresh installations, steps 1-2 can be executed on all servers first, so that the "volume create" command only needs to be run once.

      gluster volume create gv0 replica 2 10.1.87.10:/var/lib/squirro/storage/gv0/brick0 10.1.87.11:/var/lib/squirro/storage/gv0/brick0 10.1.87.12:/var/lib/squirro/storage/gv0/brick0 force
      gluster volume start gv0

      Repeat the IP/directory part of the command for every cluster node, both the previous ones as well as the new one.

      The replica 2 option can be modified to indicate how many replicas should be kept of each file. The force option at the end confirms that it is acceptable to create the volume on the Linux system's root file system partition.
       

    • If this is a new server being added to an existing GlusterFS installation, execute this command to add the server to the volume:

      gluster volume add-brick gv0 10.1.87.12:/var/lib/squirro/storage/gv0/brick0 force

       The force option at the end confirms that it is acceptable to create the brick on the Linux system's root file system partition.
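
      In both cases, the resulting volume layout can be inspected afterwards; the Bricks section should list one brick per cluster node:

      gluster volume info gv0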

  4. Configure the cluster storage volume to be mounted. Add the following line to /etc/fstab:

    /etc/fstab (excerpt)
    127.0.0.1:/gv0 /mnt/gv0 glusterfs defaults 0 0
  5. Then create the mount-point and mount the new file system:

    mkdir -p /mnt/gv0
    mount /mnt/gv0
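
    To confirm that the mount succeeded, check that the GlusterFS volume (127.0.0.1:/gv0) is listed as the file system backing the mount point:

    df -h /mnt/gv0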
  6. Set up all the required directories in the shared file system:

    install -o sqprovid -g squirro -d /mnt/gv0/storage
    install -o sqsqirrl -g squirro -d /mnt/gv0/squirrel_log
    install -o sqplumbr -g squirro -d /mnt/gv0/pipelets
    install -o sqfile -g squirro -d /mnt/gv0/fileimport
    install -o sqfile -g squirro -d /mnt/gv0/fileimport/uploads
    install -o sqtrends -g squirro -d /mnt/gv0/trends_data
    install -o sqtopic -g squirro -d /mnt/gv0/assets
    install -o sqtopic -g squirro -d /mnt/gv0/widgets
  7. Change the configuration of the various Squirro services to point to the right folders. The desired sections and values for each config file are shown below; any values not listed here should be left unmodified.

    /etc/squirro/storage.ini (excerpt)
    [storage]
    default_bucket = cluster 
    /etc/squirro/squirrel.ini (excerpt)
    [storage]
    directory = /mnt/gv0/squirrel_log/
    url_prefix = /storage/cluster/squirrel_log/
    /etc/squirro/plumber.ini (excerpt)
    [storage]
    directory = /mnt/gv0/pipelets/
    /etc/squirro/topic.ini (excerpt)
    [topic]
    custom_assets_directory = /mnt/gv0/assets/
    custom_widgets_directory = /mnt/gv0/widgets/
    /etc/squirro/trends.ini (excerpt)
    [offline_processing]
    data_directory = /mnt/gv0/trends_data
  8. Replace the previous assets and widgets folders with symlinks:

    rm -ir /var/lib/squirro/topic/assets
    rm -ir /var/lib/squirro/topic/widgets
    ln -s /mnt/gv0/assets /var/lib/squirro/topic/assets
    ln -s /mnt/gv0/widgets /var/lib/squirro/topic/widgets
  9. In the nginx config file /etc/nginx/conf.d/frontend.conf change a few of the alias declarations:
    1. Inside the location /storage/localfile/ block change the alias from its default to alias /mnt/gv0/storage/
    2. Inside the location /storage/squirrel_log block change the alias from its default to alias /mnt/gv0/squirrel_log/
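
      As an illustration, the two adjusted location blocks might look like the excerpt below; all other directives inside those blocks remain unchanged:

      /etc/nginx/conf.d/frontend.conf (excerpt)
      location /storage/localfile/ {
          alias /mnt/gv0/storage/;
      }
      location /storage/squirrel_log {
          alias /mnt/gv0/squirrel_log/;
      }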
    3. Verify that the configuration is still valid:

      nginx -t
    4. Reload the nginx configuration:

      service nginx reload
  10. Restart all services:

    monit -g all-active restart