Clustering RabbitMQ is very easy - if you know how. Unfortunately, the documentation on this topic is good, but not good enough (cf. RabbitMQ Clustering). If you try it, you may get lost along the way until you find some insightful posts on the mailing list. This is why I summarize here how I got it to work.
Say you want to create a cluster with two disc nodes and two ram nodes. If you spread these over at least two machines, each running one disc node and one ram node, you get good fault tolerance and good scalability with a single setup. Your clients may connect to the ram nodes only, or connections may be spread across them by an additional load balancer.
But, how do I make a node a disc node and another node a ram node?
There is no command like “rabbitmqctl mkdisc”, and there is no related configuration option. On the one hand, this is a little counterintuitive; on the other hand, it adds a lot of flexibility, since you may alter the roles of nodes and restructure your cluster on the fly whenever necessary.
The roles are assigned by the way you call the “rabbitmqctl cluster” command. In our scenario, we have multiple nodes on the same host, so we need to wrap the calls to “rabbitmqctl” in shell scripts that set some environment variables (cf. RabbitMQ Configuration). Once this is done, ensure all nodes of the cluster are running. Then execute the sequence “stop_app”, “reset”, “cluster”, “start_app” for each node. When it comes to the “cluster” command, pass it a space-separated list of all the disc nodes you want to create, on every node. My mnemonic for this is that you copy the current node to all disc nodes. The whole sequence may look like this, with “rbctl.*” being your wrapper scripts:
host-of-disc1$ rbctl.disc1 stop_app
host-of-disc1$ rbctl.disc1 reset
host-of-disc1$ rbctl.disc1 cluster disc1@host-of-disc1 disc2@host-of-disc2
host-of-disc1$ rbctl.disc1 start_app
host-of-ram1$ rbctl.ram1 stop_app
host-of-ram1$ rbctl.ram1 reset
host-of-ram1$ rbctl.ram1 cluster disc1@host-of-disc1 disc2@host-of-disc2
host-of-ram1$ rbctl.ram1 start_app
host-of-ram2$ rbctl.ram2 stop_app
host-of-ram2$ rbctl.ram2 reset
host-of-ram2$ rbctl.ram2 cluster disc1@host-of-disc1 disc2@host-of-disc2
host-of-ram2$ rbctl.ram2 start_app
host-of-disc2$ rbctl.disc2 stop_app
host-of-disc2$ rbctl.disc2 reset
host-of-disc2$ rbctl.disc2 cluster disc1@host-of-disc1 disc2@host-of-disc2
host-of-disc2$ rbctl.disc2 start_app
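For completeness, such a wrapper script might look roughly like the sketch below. The node name is an assumption you would adapt per node; note that the server side needs matching settings (RABBITMQ_NODENAME, RABBITMQ_NODE_PORT, cf. RabbitMQ Configuration) when the node is started.

```shell
#!/bin/sh
# Hypothetical wrapper "rbctl.disc1" -- the node name "disc1" is an example.
# -n tells rabbitmqctl which of the several nodes on this host to control;
# all remaining arguments (stop_app, reset, cluster ..., start_app) are
# passed through unchanged.
exec rabbitmqctl -n disc1 "$@"
```

Afterwards, something like “rbctl.disc1 status” should list all four nodes, which is a quick way to verify the cluster actually formed.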
If you have to add users, vhosts, and permissions, you had better do it at the end of this procedure; otherwise the “reset” will delete all of this information. Also, if you want to change the cluster setup later, be careful with “reset” and omit it for at least one disc node.
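Once all four nodes are clustered, these definitions only need to be created on one node - they are shared across the cluster. A sketch with made-up names (vhost, user, and password are examples):

```shell
# Run against any one node after start_app; the other nodes pick it up.
rbctl.disc1 add_vhost /myapp
rbctl.disc1 add_user myuser secret
# Grant full configure/write/read permissions on that vhost.
rbctl.disc1 set_permissions -p /myapp myuser ".*" ".*" ".*"
```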
Another weak point of the whole clustering story is the location of the “.erlang.cookie” file. This file is essential for clustering and must have the same content on all nodes of the cluster. The documentation says RabbitMQ looks at “/var/lib/rabbitmq/.erlang.cookie”, but I found this is not always true. Assuming RABBIT_HOME points to the directory where the RabbitMQ distribution is located, I copied the file to “$RABBIT_HOME/../.erlang.cookie” and RabbitMQ used this one. I’m not quite sure whether this is a general rule.
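To rule the cookie out as an error source, I would simply distribute one copy to every host before starting any node. A sketch, with the remote hostname being an example from the listing above:

```shell
# Copy the cookie to another host in the cluster (path as discussed above).
scp /var/lib/rabbitmq/.erlang.cookie \
    host-of-disc2:/var/lib/rabbitmq/.erlang.cookie
# Erlang insists the cookie is readable only by the user running the node.
chmod 400 /var/lib/rabbitmq/.erlang.cookie
```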