Kantree’s architecture makes it very easy to deploy on multiple machines.
You can deploy as many instances of the Kantree processes as you want, on as many machines as you want, as long as they all connect to the same PostgreSQL and Redis instances.
IMPORTANT: Only one scheduler instance can run at a time across all your machines.
If you need to scale your Redis or PostgreSQL instances, many guides are available: Redis can be clustered using Redis Cluster, and PostgreSQL supports replication.
Installing a node
First, generate a configuration file that you can use on all your nodes:
$ ./platform gen_config config-prod.yml
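As a sketch, the generated file might look like the following. The key names here are assumptions for illustration; the file produced by gen_config is the authoritative template. The important point is that every node points to the same PostgreSQL and Redis servers:

```yaml
# Hypothetical config-prod.yml sketch -- key names are illustrative,
# use the file generated by gen_config as the real reference.
database_uri: postgresql://kantree:secret@db.internal:5432/kantree  # shared PostgreSQL instance
redis_uri: redis://redis.internal:6379/0                            # shared Redis instance
```

The same file is then copied verbatim to every node, so all processes share one source of truth for their backing services.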
Then unpack the Kantree archive on each node, add your configuration file and your license file, and run:
$ ./platform init_node
Upgrade your database from one of the nodes using:
$ ./platform upgrade_db
To start only a specific process, use the run command with the process name as argument:
$ ./platform run web
$ ./platform run worker
$ ./platform run scheduler
$ ./platform run push
You don’t have to start all the processes together. For example, you can start 4 worker processes on one machine, and 4 web processes and 1 push process on another.
We also provide an example Supervisord configuration. Supervisord is a popular process runner for Python applications. The example assumes that Kantree is installed under /opt/kantree (feel free to change this). See the Installation chapter for more information.
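A minimal Supervisord configuration in that spirit might look like this. Program names, paths, and process counts are illustrative assumptions; prefer the example configuration shipped with Kantree:

```ini
; Hypothetical sketch -- the example Supervisord configuration provided
; with Kantree is the authoritative reference.
[program:kantree-web]
command=/opt/kantree/platform run web
directory=/opt/kantree
autostart=true
autorestart=true

[program:kantree-worker]
command=/opt/kantree/platform run worker
directory=/opt/kantree
numprocs=4                                      ; run 4 worker processes on this machine
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
```

Supervisord then restarts any process that crashes, which is what you want for long-running services like these.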
Load balancing the nodes
Finally, you’ll need to install a load-balancer to serve Kantree using all the nodes. We recommend Nginx (which we use ourselves). You will find an nginx.conf.example file at the root of your installation.
Add the IP addresses of all your nodes running web processes in the upstream app section, and those of the nodes running push processes in the upstream push section.
You will also need to replace the
IMPORTANT: for the push server, sessions must be sticky (i.e. requests coming from a given client must always reach the same server).
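As a sketch of what the relevant sections of nginx.conf.example might look like (the IP addresses and ports are placeholders, and ip_hash is one built-in nginx way to achieve sticky sessions; the file shipped at the root of your installation is the authoritative template):

```nginx
upstream app {
    server 10.0.0.1:5000;  # nodes running web processes (addresses/ports are placeholders)
    server 10.0.0.2:5000;
}

upstream push {
    ip_hash;               # sticky sessions: a given client always reaches the same push server
    server 10.0.0.1:8090;
    server 10.0.0.2:8090;
}
```

ip_hash pins each client IP to one backend, which satisfies the stickiness requirement without any extra modules.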