Tool to manage rundeck backups

Another rundeck blog post. In the last post we talked about how to move rundeck from one server to another, and we found out there are quite a lot of files to handle (ssh key files, rundeck’s configuration, project definitions, job definitions…). With that in mind, and with the intention of simplifying backup tasks, I’ve written a shell script to manage rundeck backups, both backup and recovery. You can find it at github: https://github.com/ersiko/rundeck-backup and right here on the project page.

It’s plain simple to use: to make a backup, you just run

[root@server ~]# ./rundeck-backup.sh backup rundeck.tar.gz
OK - backup finished successfully using /root/rundeck-backup.tar.gz

If we don’t type a file name, the backup will be written with today’s date:

[root@server ~]# ./rundeck-backup.sh backup
OK - backup finished successfully using /root/rundeck-backup-20130327.tar.gz

And for the recovery, as easy as:

[root@server ~]# ./rundeck-backup.sh restore
Rundeck service is not running, so jobs can't be restored. Do you want to start rundeck? (y/N) y
Starting rundeckd: [ DONE ]
OK - restore finished successfully using /root/rundeck-backup-20130327.tar.gz

There are also other options, to cover all scenarios I’ve thought of:

[root@server ~]# ./rundeck-backup.sh -h
rundeck_backup - v1.00
Copyleft (c) 2013 Tomàs Núñez Lirola under GPL License
This script deals with rundeck backup/recovery.

Usage: ./rundeck-backup.sh [OPTIONS...] {backup|restore} [backup_file] | -h --help

Options:
-h | --help
Print detailed help
--exclude-config
Don't backup / restore config files
--exclude-projects
Don't backup / restore project definitions
--exclude-keys
Don't backup / restore ssh key files
--exclude-jobs
Don't backup / restore job definitions
--exclude-hosts
Don't backup / restore .ssh/known_hosts file
--include-logs
Include execution logs in the backup / restore procedure (they are excluded by default)
-c | --configdir
Change default rundeck config directory (/etc/rundeck)
-u | --user
Change default rundeck user (rundeck)
-s | --service

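The options can be combined. As a sketch (the backup directory here is hypothetical, the config dir shown is just the default), a dated backup to a dedicated location could be built like this:

```shell
# Hypothetical backup location; adjust to your environment
BACKUP_DIR=/backups
# Build a dated file name, same YYYYMMDD scheme the script uses by default
FILE=$BACKUP_DIR/rundeck-$(date +%Y%m%d).tar.gz
# Print the command we would schedule, e.g. from cron
echo ./rundeck-backup.sh -c /etc/rundeck backup "$FILE"
```

From there it is a one-liner away from a nightly crontab entry.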
I've done my best (and the best I felt like :P), so there is probably room for improvement. Criticism and suggestions to improve it are most welcome!

Thank you for your attention :)

Moving rundeck from one server to another

Rundeck people just released a new version, 1.5. Upgrading to this version is not as simple as usual (yum update or apt-get upgrade) because they’ve changed the database schema, and that’s why they recommend following the backup/recovery procedure to upgrade.

I’ve been meaning to move our rundeck service to another server with more resources for a while, but never found the moment. This upgrade was the perfect excuse, so I moved it, and this post explains the steps.

First we locate the five parts we want to move:
– Rundeck configuration
– Rundeck user keys
– Project definitions
– Job definitions
– Execution logs

Rundeck configuration is in /etc/rundeck/. To find the project definitions we should look in the file /etc/rundeck/project.properties for the project.dir value (default is /var/rundeck/projects/). We will find the path to each project’s ssh keys in the file etc/project.properties of each project directory, in the project.ssh-keypath value. The job definitions are in the database, and we can see the path to the execution logs in the file /etc/rundeck/framework.properties, in the framework.logs.dir value (usually /var/lib/rundeck/logs).
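Those lookups are just greps over the properties files. Here is a sketch against a throwaway project directory, so you can see the shape of the data (real projects live under /var/rundeck/projects/, and the keypath value is an example):

```shell
# Build a fake project to demonstrate the lookup (values are examples)
mkdir -p /tmp/demo-project/etc
cat > /tmp/demo-project/etc/project.properties <<'EOF'
project.ssh-keypath=/var/lib/rundeck/.ssh/id_rsa
EOF
# Extract the ssh key path, the same way the backup steps below do
keypath=$(grep '^project.ssh-keypath' /tmp/demo-project/etc/project.properties | cut -d"=" -f2)
echo "$keypath"
```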

Once we’ve located everything, we can make “the package” we will move from server to server. We start with the text files (rundeck config, project definitions and execution logs):


mkdir rundeck-backup
cp -a /etc/rundeck/ rundeck-backup/
cp -a /var/rundeck/projects/ rundeck-backup
cp -a /var/lib/rundeck/logs/ rundeck-backup

To copy the projects’ ssh keys we should check each project directory’s project.properties and copy the file it points to. The projects may or may not share the key, and the keys may or may not have the same filename. That’s why we’ll save each one inside its project directory:


for project in rundeck-backup/projects/*;do cp $(grep project.ssh-keypath $project/etc/project.properties|cut -d"=" -f2) $project;done

For the job definitions, we need to call rd-jobs list for each project, exporting the xml definition this way:


for project in rundeck-backup/projects/*;do rd-jobs list -f rundeck-backup/$(basename $project).xml -p $(basename $project);done

And it would be wise to keep the “known_hosts” file for the rundeck user:

cp $(getent passwd rundeck|cut -d":" -f6)/.ssh/known_hosts rundeck-backup

Now we have a package with a full backup of our installation. We send this rundeck-backup directory to the new server (I know, it’s obvious, but there you go :P)

scp -r rundeck-backup root@newserver:.

Now we ssh to the new server. We assume rundeck is installed on the new server (if not, we talk about that in an older post), so we just need to put the files where they belong. First the keys:


for project in rundeck-backup/projects/*;do filename=$(grep project.ssh-keypath $project/etc/project.properties|cut -d"=" -f2);cp $project/$(basename $filename) $filename;done

Then the rest of the files:

cp -a rundeck-backup/rundeck/ /etc/
cp -a rundeck-backup/projects/ /var/rundeck/
cp -a rundeck-backup/logs/ /var/lib/rundeck/
cp rundeck-backup/known_hosts $(getent passwd rundeck|cut -d":" -f6)/.ssh/known_hosts

Now we have the rundeck configuration and the project definitions, but the jobs are still missing. We should keep in mind the old server is still running, and we don’t want our jobs executed twice at the same time. We also don’t want to disable the old server until we make sure the new one is running ok, because we don’t want any executions to be missed. To achieve both, we will make the new server fake the executions, without really running anything, by changing the service.NodeExecutor.default.provider value in the file /var/rundeck/projects/$PROJECT/etc/project.properties from jsch-ssh to stub. In a single line, it would be:

sed -i -e 's/jsch-ssh/stub/g' /var/rundeck/projects/*/etc/project.properties
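To see what that sed actually does, here is the same substitution against a scratch copy of a project.properties (the real files live under /var/rundeck/projects/*/etc/):

```shell
# Build a scratch project.properties with the real executor configured
mkdir -p /tmp/proj/etc
echo 'service.NodeExecutor.default.provider=jsch-ssh' > /tmp/proj/etc/project.properties
# Swap the real ssh executor for the stub, exactly as above
sed -i -e 's/jsch-ssh/stub/g' /tmp/proj/etc/project.properties
grep NodeExecutor /tmp/proj/etc/project.properties
```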

Now we are sure no job will be executed until we say so, so we can safely import the jobs:

for project in rundeck-backup/projects/*;do rd-jobs load -f rundeck-backup/$(basename $project).xml -p $(basename $project);done

With the jobs loaded we have everything we need. Now we can log in to the web interface and check everything is ok: users can access their projects, jobs are correctly configured, etc. When we are sure, we can move one project at a time (or all of them at once, as you wish) just by changing that same value (service.NodeExecutor.default.provider): in the old server we change “jsch-ssh” to “stub”, and the other way around in the new server, from “stub” to “jsch-ssh”. Playing with those values, we can be confident that if we find any problem with some project, we can move that project (or all of them, just to be sure) back to the old server while we solve it.

And that’s it! Now we could change the DNS to keep the old rundeck URL, but that’s your choice.

Basic rundeck installation in RedHat, using apache as a proxy and mysql as a database

When we have lots of servers and need to execute jobs regularly, we rapidly outgrow cron, because the information is spread across all the servers and there is no easy way to, for instance, check the execution result of a task on all servers, or see what tasks were running between 16:33 and 16:36, or find the least busy slot to schedule a new job in your architecture. And many other things.

To centralize this information there are some alternatives. Recently, the nerds at airbnb released chronos and it seems a good way to go, but I’ve been using rundeck for a while and I’m very happy with it.

It works in a simple way: it’s a java daemon with a grails web interface and a quartz scheduler for job scheduling. The server makes ssh connections to the remote servers to execute the configured tasks. This gives us a centralized cron (our original goal in this article), but we can also use it as a centralized sudo (we can decide which user can run which command on which servers, all from the web console, without giving away ssh access at all), and also as a centralized shell, so we can run a command on several servers at the same time, like terminator or, more closely, fabric.

Now that we’ve introduced rundeck, let’s install it on our RedHat box. Keep in mind that rundeck runs as the rundeck user, which is unprivileged, so it can’t use port 80. To make that work for this example, we will proxy it through apache. First of all we install apache (obvious):


# yum install httpd

Then we edit the /etc/httpd/conf/httpd.conf file and add two lines:


ProxyPass / http://localhost:4440/
ProxyPassReverse / http://localhost:4440/

This way apache will forward all the connections on port 80 to port 4440, where rundeck is listening.
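For those ProxyPass directives to work, mod_proxy and mod_proxy_http must be loaded. On RedHat the stock httpd.conf ships these LoadModule lines, but it’s worth checking they are not commented out, and apache needs a restart (service httpd restart) after the edit:

```apache
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
```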
Now for the data. Rundeck uses a file database by default (formerly hsql, now h2). This is fine, but at some point we will outgrow it. To avoid that, we will use a mysql database. First we install it (obvious, again):


yum install mysql mysql-server
chkconfig mysqld on

We can tune it by editing my.cnf with the usual settings (default-storage-engine=innodb, innodb_file_per_table, etc). After that we need to create a database for rundeck, and a user with permissions on it:

[root@server rundeck]# mysql -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 18536
Server version: 5.5.30 MySQL Community Server (GPL) by Remi

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database rundeck;
Query OK, 1 row affected (0.00 sec)

mysql> grant all on rundeck.* to 'rundeck'@'localhost' identified by 'password';
Query OK, 0 rows affected (0.00 sec)

mysql> quit
Bye

Now we install rundeck, first the official application repo and then the program itself:


wget http://repo.rundeck.org/latest.rpm
rpm -Uvh latest.rpm
yum install rundeck

And we configure the database in the file /etc/rundeck/rundeck-config.properties, commenting out the existing line and adding three more:


#dataSource.url = jdbc:h2:file:/var/lib/rundeck/data/rundeckdb
dataSource.url = jdbc:mysql://localhost/rundeck
dataSource.username = rundeck
dataSource.password = password

Now we start it


/etc/init.d/rundeck start

We can check it’s using the database because it will create its tables:


[root@server rundeck]# mysql -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 31
Server version: 5.5.30 MySQL Community Server (GPL) by Remi

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use rundeck
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+----------------------------+
| Tables_in_rundeck          |
+----------------------------+
| auth_token                 |
| base_report                |
| execution                  |
| node_filter                |
| notification               |
| rdoption                   |
| rdoption_values            |
| rduser                     |
| report_filter              |
| scheduled_execution        |
| scheduled_execution_filter |
| workflow                   |
| workflow_step              |
| workflow_workflow_step     |
+----------------------------+
14 rows in set (0.00 sec)

We have our service running. Now we must copy our public ssh key to the remote servers to be able to run commands on them:


[root@server .ssh]# su - rundeck
[rundeck@server ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/rundeck/.ssh/id_rsa): project1_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in project1_rsa.
Your public key has been saved in project1_rsa.pub.
The key fingerprint is:
f6:be:e5:0r:b2:zd:9b:89:1e:2c:6f:fc:od:e5:a5:00 rundeck@server
[rundeck@server ~]$ ssh-copy-id -i /var/lib/rundeck/.ssh/project1_rsa user@server2
user@server2's password:
The authenticity of host 'server2 (222.333.444.555)' can't be established.
RSA key fingerprint is b6:6z:34:2o:04:2f:j1:71:1e:12:b3:fd:e2:f2:79:cf.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'server2-es,222.333.444.555' (RSA) to the list of known hosts.
Now try logging into the machine, with "ssh user@server2", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[rundeck@server ~]$ ssh user@server2 whoami
user

Stay with me, we’re almost there. Now we can log in to the web with user admin and password admin:

[Screenshot: rundeck login page]

The first thing is to create a project; here we enter the information we gathered earlier:

[Screenshot: rundeck project creation page]

Once the project is created, we land on the project page, where we can run ad-hoc commands:

[Screenshot: rundeck project page]

Now for the last step: adding the remote servers. As we configured during project creation, we will put them in the file /etc/rundeck/servers/project1 in xml format:
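The exact attributes depend on your setup, but a minimal nodes file for one remote server (the hostname, username and tags here are examples) looks something like this:

```xml
<project>
  <node name="server2" hostname="server2" username="rundeck"
        description="first remote node" osFamily="unix" tags="production"/>
</project>
```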

Once we add them, we can use them without restarting, just by clicking the “show all nodes” button:

[Screenshot: rundeck project page with the new server]

And that’s it. From this point on it’s very easy: in this console we can run remote commands, and in the “jobs” tab we can create jobs.

There are some more things we can configure. For instance, we can replace the rundeck logo with our company’s logo in the file /etc/rundeck/rundeck-config.properties:

rundeck.gui.title = Our company's task scheduler
rundeck.gui.logo = logo.jpg
rundeck.gui.logo-width = 68
rundeck.gui.logo-height = 31

Or if we want to create more users, or to change the admin password (you should change it!), we add them to /etc/rundeck/realm.properties:

admin: MD5:5a527f8fegf916h8485dj6681ff8d7a6a,user,admin,architect,deploy,build
newuser: MD5:0cddh73e3g6108a7fh5f3716a9jf97and4e56ff,user
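Those MD5: entries are hex MD5 digests of the password (the format Jetty’s realm uses). A quick way to generate one for a new user, assuming a hypothetical password “secret”:

```shell
# Hex MD5 of the password; paste the result after "MD5:" in realm.properties
hash=$(echo -n 'secret' | md5sum | cut -d' ' -f1)
echo "newuser: MD5:$hash,user"
```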

And permissions are managed in the file /etc/rundeck/admin.aclpolicy.

With all this we are ready to start playing with rundeck.