...

While it is technically possible to scale a single Squirro cluster across multiple data centers, it is not recommended. Both the Squirro cluster service and the Elasticsearch cluster need stable and low-latency network connections. Neither can usually be guaranteed across multiple locations.

To support BCP scenarios, two fully independent Squirro clusters can be set up. The two clusters are operated in an Active-Standby setup. All incoming data and query traffic should be directed to the active cluster, and all data is replicated to the standby cluster frequently (e.g. every 5, 15 or 60 minutes).

The replication between the two clusters is done using a command-line (CLI) utility provided by Squirro. We plan to fully integrate BCP replication support into the Squirro cluster service and UI in a future release of Squirro.

Overview

 

Core Concepts

  • Squirro Configuration is stored in .ini files under /etc/squirro.
  • Project and User Metadata is stored in MySQL (does not grow based on data ingested into Squirro).
  • Text Documents are stored in Elasticsearch (grows with data ingested into Squirro).
  • Binary Documents and additional assets such as custom CSS or Pipelets are stored in the filesystem, which is distributed within the cluster using GlusterFS (grows with data ingested into Squirro, but only if binary documents such as Office/PDF files are indexed).
  • Caching is done in Redis, but the cache is volatile and there is no need to consider it for BCP.
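
As a rough orientation, the data locations listed above translate into the replication targets below. This is a minimal illustrative sketch in Python; the paths and method names are assumptions, not the exact configuration used by the replication utility.

    # Illustrative mapping of Squirro data locations to the BCP replication
    # method used for each of them. Paths are typical defaults and may differ
    # per installation.
    REPLICATION_TARGETS = {
        "configuration":     {"location": "/etc/squirro",    "method": "rsync (optional)"},
        "project/user data": {"location": "MySQL",           "method": "mysqldump"},
        "text documents":    {"location": "Elasticsearch",   "method": "ES snapshot module"},
        "binary documents":  {"location": "GlusterFS mount", "method": "rsync (incremental)"},
        "cache":             {"location": "Redis",           "method": "not replicated (volatile)"},
    }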

...

Requirements

The replication is triggered and run from a single host. By default this is the primary app server in the production environment, but it could also be run from a dedicated host that is not part of either of the two clusters.
For added resilience the script and configuration are deployed to all Squirro nodes, but only actively run on the leader node.

The Fabric framework is used to communicate with the various host roles across both data centers (Fabric relies on key-based SSH connections).
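
As a minimal sketch of such a key-based connection (using the Fabric 2 API; the host names, user and key path are placeholders, not the actual Squirro tooling):

    from fabric import Connection

    # Placeholder node names for both data centers.
    PROD_NODES = ["prod-app01.example.com", "prod-app02.example.com", "prod-app03.example.com"]
    BCP_NODES = ["bcp-app01.example.com"]

    def check_reachable(hosts, key_file="/home/sqreplication/.ssh/id_rsa_bcp"):
        """Open a key-based SSH connection to every host and run a trivial command."""
        for host in hosts:
            conn = Connection(host, user="sqreplication",
                              connect_kwargs={"key_filename": key_file})
            result = conn.run("hostname", hide=True)
            print(host, "->", result.stdout.strip())

    check_reachable(PROD_NODES + BCP_NODES)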

...

Note 1

SSH connections to all production nodes are not mandatory. Alternatively, access on TCP port 443 and TCP port 3306 (MySQL) is also sufficient.

Note 2

If SSH connections from the Production to the BCP data center are not possible, then the solution is to run the replication in three independent stages:

  • Stage 1: Backup Production to NFS
  • Stage 2: Replicate the NFS folder to BCP
  • Stage 3: Restore BCP from NFS

The main disadvantage of this approach is that there is no longer a single script that is aware of the success or failure of the entire replication process. On the BCP side it can also be challenging to determine whether the Prod -> BCP replication of the NFS folder has concluded and is stable.

Replication Workflow

These are the stages which the replication script uses:

...

Before the replication commences, Fabric is used to connect to the production cluster nodes and validate that the cluster is fully operational and reachable.

If any of these steps reveals an issue, the replication job is aborted with verbose debug output.
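
What such a pre-flight check does exactly is up to the replication utility; the sketch below only illustrates the idea. The Elasticsearch health endpoint is standard, but the Squirro service name used here (sqclusterd) and the host name are assumptions.

    import json
    import sys

    from fabric import Connection

    def preflight(host):
        """Abort the replication run if the production cluster does not look healthy."""
        conn = Connection(host)

        # Check 1 (illustrative): Elasticsearch cluster health must not be red.
        health = json.loads(
            conn.run("curl -s http://localhost:9200/_cluster/health", hide=True).stdout)
        if health["status"] == "red":
            sys.exit("Aborting replication: ES cluster health is red on %s" % host)

        # Check 2 (illustrative): the Squirro cluster service must be running.
        # 'sqclusterd' is an assumed service name.
        if conn.run("systemctl is-active sqclusterd", warn=True, hide=True).failed:
            sys.exit("Aborting replication: Squirro cluster service not active on %s" % host)

    preflight("prod-app01.example.com")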

...

Stage 3: MySQL Backup

  • The host running the replication script connects to the leader of the production cluster (using SSH or MySQL).
  • A full backup of the MySQL database is created using mysqldump (a sketch of this step follows below).
  • The MySQL backup is compressed and stored on the shared NFS mount.
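
A minimal sketch of this stage, assuming the NFS export is mounted at /mnt/bcp and MySQL credentials are available from the default configuration; the actual utility and its options may differ.

    import datetime
    import subprocess

    NFS_MOUNT = "/mnt/bcp"  # assumed mount point of the shared NFS export
    timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    dump_file = "%s/mysql/squirro-%s.sql.gz" % (NFS_MOUNT, timestamp)

    # Full dump of all databases, piped through gzip onto the NFS mount.
    with open(dump_file, "wb") as out:
        dump = subprocess.Popen(
            ["mysqldump", "--all-databases", "--single-transaction", "--routines"],
            stdout=subprocess.PIPE)
        subprocess.check_call(["gzip", "-c"], stdin=dump.stdout, stdout=out)
        dump.stdout.close()
        if dump.wait() != 0:
            raise RuntimeError("mysqldump failed")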

...

  • The cluster filesystem used by the Squirro cluster is replicated incrementally to the shared NFS mount using rsync (see the sketch below).
  • Optional: all (or some) configuration files are also synced to the NFS mount. This is recommended if both the Production and BCP clusters are set up identically.
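
A sketch of this stage; the GlusterFS mount point (/var/lib/squirro/storage) and the NFS layout are assumptions for illustration.

    import subprocess

    NFS_MOUNT = "/mnt/bcp"  # assumed mount point of the shared NFS export

    # Incremental copy of the cluster filesystem. --delete keeps the target in
    # sync with files that were removed on the source.
    subprocess.check_call([
        "rsync", "-a", "--delete",
        "/var/lib/squirro/storage/",   # assumed GlusterFS mount point
        NFS_MOUNT + "/clusterfs/",
    ])

    # Optional: also sync the configuration files (only useful when both
    # clusters are set up identically).
    subprocess.check_call([
        "rsync", "-a", "--delete",
        "/etc/squirro/",
        NFS_MOUNT + "/etc-squirro/",
    ])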

...

With all data stored on the NFS mount, the contents of the entire mount are replicated to the BCP data center.
This can be done using rsync via SSH or using a vendor-specific storage replication technology (e.g. NetApp SnapMirror).

While the volume of the initial replication can be big, subsequent replication runs should be small since, with the exception of the MySQL export, all methods are incremental. The higher the replication frequency, the lower the replicated data volume should be.
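
If rsync via SSH is used (rather than a vendor technology such as SnapMirror), this stage could look like the sketch below; the BCP host name, user and paths are placeholders.

    import subprocess

    # Push the entire NFS mount content to the BCP data center over SSH.
    #   -a        archive mode (incremental, preserves permissions and timestamps)
    #   -z        compress during transfer
    #   --delete  remove files on the target that no longer exist on the source
    subprocess.check_call([
        "rsync", "-az", "--delete",
        "-e", "ssh -i /home/sqreplication/.ssh/id_rsa_bcp",
        "/mnt/bcp/",
        "sqreplication@bcp-nfs.example.com:/mnt/bcp/",
    ])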

Stage 6: Elasticsearch Snapshot Restore

From the BCP NFS mount, the latest Elasticsearch snapshot is restored into the ES cluster using the official ES snapshot module. See: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html. There will be a service interruption during the restore: while it is in progress, ES will not serve traffic.
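
A sketch of the restore call against the ES snapshot API; the repository and snapshot names are placeholders, and in practice the latest snapshot would first be looked up in the repository.

    import requests

    ES = "http://localhost:9200"
    REPO = "bcp_backup"            # assumed snapshot repository pointing at the NFS mount
    SNAPSHOT = "snapshot_latest"   # placeholder; normally resolved from the repository listing

    # Indices cannot be open while they are restored, so close them first.
    requests.post("%s/_all/_close" % ES).raise_for_status()

    # Restore the snapshot and block until it has completed.
    resp = requests.post(
        "%s/_snapshot/%s/%s/_restore?wait_for_completion=true" % (ES, REPO, SNAPSHOT))
    resp.raise_for_status()
    print(resp.json())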

Stage 7: MySQL Restore

From the BCP NFS mount, the latest MySQL backup will be restored to the Squirro leader. The followers will replicate immediately to the same state.
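
A sketch of the restore step, again assuming the dump layout from the backup stage on the NFS mount; the follower databases then catch up via normal MySQL replication.

    import glob
    import subprocess

    # Pick the newest dump produced by the MySQL backup stage (path layout assumed).
    latest_dump = sorted(glob.glob("/mnt/bcp/mysql/squirro-*.sql.gz"))[-1]

    # Stream the decompressed dump into the MySQL leader.
    gunzip = subprocess.Popen(["gunzip", "-c", latest_dump], stdout=subprocess.PIPE)
    subprocess.check_call(["mysql"], stdin=gunzip.stdout)
    gunzip.stdout.close()
    if gunzip.wait() != 0:
        raise RuntimeError("gunzip failed for %s" % latest_dump)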

...

From the BCP NFS mount, the contents of the cluster filesystem are synced to the Squirro cluster leader. Optional: if both clusters are set up identically, the configuration files under /etc/squirro are also synced.

...

The same mechanism is used to replicate from BCP to Production.
The best-practice approach is to set up and test this scenario, but not to execute the script automatically (e.g. via cron).

Once BCP becomes active, the replication cron job on Production is stopped and the script on BCP is enabled.

For maximum safety, we recommend separating the Production -> BCP and BCP -> Production scenarios into dedicated folders on the NFS mount. This way an accidental reversal of the replication direction cannot lead to unwanted and permanent data loss.

Reduced number of nodes in BCP

The ideal setup is to run Production and BCP with the exact same setup. This way the user experience will not degrade when a failover to BCP occurs.
It is however possible to run a reduced setup in BCP, e.g. only 1 node instead of 3.

Note that you should never run an even number of Squirro application or Elasticsearch nodes, since both systems benefit from the ability to build quorums to detect and handle network segmentation events.

Backup the NFS mount

It is highly recommended that the NFS mount is regularly backed up or protected by a vendor-specific snapshotting technology.
The NFS mount can easily be used to restore previous cluster states and is ideal for disaster recovery.

...