This section is a quick start guide for installing and running an SSI cluster of virtual UML machines. The most time-consuming part of this procedure is downloading the root image.
First you need to download an SSI-ready root image. The compressed image weighs in at over 150MB, which will take more than six hours to download over a 56K modem, or about 45 minutes over a 500K broadband connection.
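If you know where the image is hosted, a command along these lines fetches it into your home directory (the URL below is a hypothetical placeholder, not the actual download location):

host$ wget -P ~ http://example.com/ssiuml-root-rh72-0.6.5-1.tar.bz2   # hypothetical URL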
The image is based on Red Hat 7.2. This means the virtual SSI cluster will be running Red Hat, but it does not matter which distribution you run on the host system. A more advanced user can make a new root image based on another distribution. This is described in Section 5.
After downloading the root image, extract and install it.
host$ tar jxvf ~/ssiuml-root-rh72-0.6.5-1.tar.bz2
host$ su
host# cd ssiuml-root-rh72
host# make install
host# Ctrl-D
Download the UML utilities. Extract, build, and install them.
host$ tar jxvf ~/uml_utilities_20020428.tar.bz2
host$ su
host# cd tools
host# make install
host# Ctrl-D
Download the SSI/UML utilities. Extract, build, and install them.
host$ tar jxvf ~/ssiuml-utils-0.6.5-1.tar.bz2
host$ su
host# cd ssiuml-utils
host# make install
host# Ctrl-D
Assuming the X Window System is running or the DISPLAY variable is set to an available X server, start a two-node cluster with
host$ ssi-start 2
This command boots nodes 1 and 2. It displays each console in a new xterm. The nodes run through their early kernel initialization, then seek each other out and form an SSI cluster before booting the rest of the way. If you're anxious to see what an SSI cluster can do, skip ahead to Section 3.
You'll probably notice that two other consoles are started. One is for the lock server node, which is an artifact of how the GFS shared root is implemented at this time. This console is not a node in the cluster, and it won't give you a login prompt. For more information about the lock server, see Section 7.3. The other console is for the UML virtual networking switch daemon. It won't give you a prompt, either.
Note that only one SSI/UML cluster can be running at a time, although it can be run as a non-root user.
The argument to ssi-start is the number of nodes that should be in the cluster. It must be a number between 1 and 15. If the argument is omitted, it defaults to 3. The fifteen-node limit is arbitrary and can easily be raised in future releases.
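For example, running it with no argument boots the default three-node cluster (nodes 1, 2, and 3):

host$ ssi-start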
To substitute your own SSI/UML files for the ones in /usr/local/lib and /usr/local/bin, provide your pathnames in ~/.ssiuml/ssiuml.conf. The values to override are KERNEL, ROOT, CIDEV, INITRD, and INITRD_MEMEXP. Only an advanced user should need this feature.
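As an illustration only (the paths below are hypothetical, and the KEY=value syntax is an assumption based on the variable names above), a ~/.ssiuml/ssiuml.conf that substitutes a locally built kernel and root image might read:

# All paths are hypothetical examples
KERNEL=/home/jane/ssi-build/linux-ssi
ROOT=/home/jane/ssi-build/root_fs
CIDEV=/home/jane/ssi-build/cidev
INITRD=/home/jane/ssi-build/initrd
INITRD_MEMEXP=/home/jane/ssi-build/initrd_memexp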
Add nodes 3 and 5 to the cluster with
host$ ssi-add 3 5
The arguments to ssi-add are an arbitrary list of node numbers between 1 and 15; at least one must be provided. If a node is already up, ssi-add ignores it and moves on to the next argument in the list.
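For example, with nodes 1, 2, 3, and 5 up from the commands above, the following boots node 4 and silently skips node 5:

host$ ssi-add 4 5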
Simulate a crash of node 3 with
host$ ssi-rm 3
Note that this command does not inform the other nodes about the crash. They must discover it through the cluster's node monitoring mechanism.
The arguments to ssi-rm are likewise an arbitrary list of node numbers; at least one must be provided.
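For example, to simulate simultaneous crashes of two nodes, say 4 and 5:

host$ ssi-rm 4 5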
You can take down the entire cluster at once with
host$ ssi-stop
If ssi-stop hangs, interrupt it and kill all the linux-ssi processes before trying again.
host$ killall -9 linux-ssi
host$ ssi-stop
Eventually, it should be possible to take down the cluster by running shutdown as root on any one of its consoles. This does not work just yet.
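When that is supported, shutting down from any node's console should look something like this (a sketch only; the prompt and exact invocation are assumptions):

node1# shutdown -h now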