Rodrigo Rosenfeld Rosas

A sample Ruby script to achieve fast incremental back-ups on a btrfs partition

Fri, 24 Jun 2016 15:31:00 +0000 (last updated at Mon, 04 Jul 2016 16:32:00 +0000)

For some years I have been using rsnapshot to back up our databases and documents using an incremental approach. We create a new back-up every hour and retain the last 24 hourly back-ups, one back-up per day for the past 7 days, and one back-up per week for the past 4 weeks.

Rsnapshot is great. It uses hard-links to achieve incremental back-ups, saving a lot of space; it's basically a combination of "cp -al" and rsync. But we were facing a problem related to the free inode count on our ext4 partition. By the way, NewRelic does not monitor the free inode count (df -i), so I found this problem the hard way, after the back-up stopped working due to a lack of free inodes.
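Conceptually, each rsnapshot rotation boils down to something like the following (a simplified sketch of the idea, not rsnapshot's actual code), which shows where all those hard-links come from:

system 'rm -rf hourly.23'          # drop the oldest snapshot (slow for big trees)
# ... mv hourly.22 hourly.23, ..., mv hourly.1 hourly.2 ...
system 'cp -al hourly.0 hourly.1'  # hard-link copy: no data duplicated, but one new directory entry per file
system 'rsync -a --delete source/ hourly.0/'  # then update only what changed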

I've created a custom check in our own monitoring system to alert about low free inodes, and then I tried to tweak some ext4 settings to avoid this problem in the new partition. We have 26GB spread across 2.6 million individually gzipped documents (they are served directly by nginx), which will create almost 100 million hard-links in that back-up partition. There are hard-links among the original documents as well, as part of a strategy to save space when the same document is used in multiple transactions (the documents are never changed). Otherwise they would take some extra gigabytes.

Recently, my custom monitoring system sent me an alert that 75% of the inodes were used while only about 30% of the disk space was actually being used. So I decided to investigate other filesystems that deal with inodes dynamically.
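The check itself can be quite small. Here's a minimal sketch of the kind of check I mean (the 75% threshold and the partition path are just examples):

#!/usr/bin/env ruby
# Alert on low free inodes by parsing the "IUse%" column of df -i.
THRESHOLD = 75 # percent
usage = `df -i /var/backups`.lines.last.split[4].to_i # e.g. "75%" => 75
abort "inode usage at #{usage}% on /var/backups" if usage >= THRESHOLD # non-zero exit triggers the alert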

The btrfs filesystem

That's how I found btrfs, a modern filesystem which not only has no fixed inode limit but, as I'll describe, also has some very interesting features for handling incremental back-ups faster and better than rsnapshot.

Initially I wasn't thinking about replacing rsnapshot, but after reading about support for subvolumes and snapshots in btrfs I changed my mind and decided to replace rsnapshot with a custom script. I spent several hours trying to adapt rsnapshot to the workflow I wanted, without success. There's an issue in the rsnapshot project tracker related to btrfs support.

Before I talk about how btrfs helps our back-up system, let me explain a few issues I had with rsnapshot.

Rsnapshot issues

I've been living with some issues with rsnapshot for the past years. I want the full back-up procedure to take less than an hour so that we can run it every hour. I had to tweak its settings a few times in order to get the script to finish in less than an hour, but lately it was already taking almost 40 minutes to complete. A while back, before the tweaks, I even had to change the interval to back up every two hours.

One of the slow parts of rsnapshot is removing the last back-up snapshot when rotating. It doesn't matter whether you use "rm -rf" or some other method: removing a big tree of files is slow. An alternative would be to move the last snapshot (hourly.23) to the first position (hourly.0) instead of removing it, since this would save both the "rm -rf" and the "cp -al" time, skipping straight to the rsync phase. But I wasn't able to figure out how to make that happen with rsnapshot.

Also, some of the procedures could be done in parallel to speed up the process, but rsnapshot doesn't provide direct support for specifying this and it's hard to write a proper shell script to manage those cases.

The goal

After reading about btrfs I figured out that the back-up procedure could be made much faster and simpler. So I created a Ruby script, which I'll show in the next section, and integrated it into our automation tools in one day. I replaced rsnapshot with it on our back-up server, and it has been running pretty well for the last two days, taking about 8 minutes to complete each run.

So, let me explain the strategy I wanted to implement, to help you understand the script.

As I said, btrfs supports subvolumes. Btrfs implements copy-on-write (CoW), which allows snapshots of subvolumes to be both created and deleted nearly instantly (in constant time). That means we replace the slow "rm -rf hourly.23" with an instantaneous "btrfs subvolume delete hourly.23", and "cp -al ..." with an instantaneous "btrfs subvolume snapshot ...".

In order for a regular user to be able to delete subvolumes, the filesystem must be mounted with the user_subvol_rm_allowed option (e.g., mount -o user_subvol_rm_allowed). Also, deleting a subvolume doesn't work if there are other subvolumes inside it, so the inner ones must be removed first; there's no switch or tool in the btrfs-progs package that deletes them recursively. This is important for understanding the script.
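Since btrfs-progs won't recurse for you, the script handles this itself by deleting the inner subvolumes before the outer one. This is the relevant method, extracted from the full script shown later:

def delete_subvolume_if_exists(path, delete_children: false)
  return unless File.exist?(path)
  # btrfs can't delete a subvolume that still contains other subvolumes,
  # so delete the nested ones first when asked to
  Dir["#{path}/*"].each{|s| delete_subvolume_if_exists s } if delete_children
  run_script %Q{btrfs subvolume delete -c "#{path}"}
end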

Our back-up procedure consists of getting a recent dump of two production PostgreSQL databases (the main database and the one used by Redmine) and syncing two directories containing files (the main application files and the files uploaded to Redmine).

The idea is to get them into a static path as a first step. The main reason is that if something goes wrong after syncing the documents (the slowest part), for example, we won't lose the already-transferred files on the next run of the script. So, basically, here's how I implemented it (there's a simpler strategy, which I'll explain next):

  • /var/backups/latest [regular directory]
  • /var/backups/latest/postgres [subvolume] - the main db dump is stored here
  • /var/backups/latest/tickets-db [subvolume] - the tickets db dump is stored here
  • /var/backups/latest/docmanager [subvolume] - the 2.6 million documents are rsynced here
  • /var/backups/latest/tickets-files [subvolume] - Redmine files go here

Once everything is synced into the "latest" state, the script creates a tmp subvolume and takes a snapshot of each subvolume inside it; when everything has worked fine, the back-ups are rotated and tmp is moved to hourly.0. Removing hourly.23 in the rotation phase requires removing its inner subvolumes first.

After implementing this (it was an iterative process) I realized it could be simplified to use a simpler structure: "latest" would be a single subvolume, with everything inside it as regular files and directories. Then the "tmp" directory wouldn't be needed, and after rotating, a snapshot of "latest" would become "hourly.0", as sketched below. I haven't updated the script yet because I'm not sure it's worth changing: the current layout is more modular, which is useful in case I want to snapshot just part of the back-up for some reason. So the sample back-up script in the next section uses my current, tested approach, the one described first above.
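For reference, under that simpler layout the whole hourly run would end with something like this (an untested sketch of the alternative, not what the script below does):

# "latest" would be a single subvolume; everything inside it regular files and dirs
system %(btrfs subvolume delete "hourly.23")            # drop the oldest back-up instantly
# ... mv hourly.22 hourly.23, ..., mv hourly.0 hourly.1 ...
system %(btrfs subvolume snapshot "latest" "hourly.0")  # an instant CoW snapshot becomes the new hourly.0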

The main database dump is over 500MB in PostgreSQL custom format, and it's much faster to rsync it than to copy it with scp, since rsync transfers only the changed parts. Initially those dumps were not stored in the "latest" directory and I used scp to copy them directly to the "tmp" directory, but I changed the strategy to save some time and bandwidth.

The script should exit with a message and a non-zero exit status code when something fails, so that Cron notifies me if anything goes wrong (by setting MAILTO=my@email.com at the beginning of the crontab file). In that case it shouldn't affect the existing valid snapshots either.
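For example, the relevant crontab entries could look like the following (the schedules and the install path are hypothetical):

MAILTO=my@email.com
0 * * * * /usr/local/bin/backup hourly
30 0 * * * /usr/local/bin/backup daily
45 0 * * 0 /usr/local/bin/backup weekly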

It shouldn't run if the previous procedure hasn't finished, so there's a simple lock mechanism preventing that from happening in case a run takes over an hour to complete. The second attempt will fail and I'll get an e-mail telling me that happened.

It should also have a dry-run mode (which I call test mode) that outputs the commands without running them, which is useful while designing the back-up steps. Since commands are allowed to run concurrently, it uses indentation to show the order in which commands are run.

Finally, it reports in the logs the issued commands and their status (finished or failed), any command output (STDOUT or STDERR), the time each command took, and the total time at the end of the procedure.

Now that you understand what the script is supposed to do, here's the actual implementation.

The script

#!/usr/bin/env ruby

require 'open3'
require 'thread'
require 'logger'
require 'time'

class Backup
  def run(args)
    @start_time = Time.now
    @backup_root_path = File.expand_path '/var/backups'
    #@backup_root_path = File.expand_path '~/backups'
    @log_path = "#{@backup_root_path}/backup.log"
    @tmp_path = "#{@backup_root_path}/tmp"

    @exiting = false
    Thread.current[:indenting_level] = 0

    setup_logger

    lock_or_exit

    log 'Starting back-up procedure'

    parse_args args.clone

    run_scripts if @action == 'hourly'

    rotate
    unlock
    report_completed
  end

  private

  def setup_logger
    File.write @log_path, '' unless File.exist? @log_path
    logfile = File.open(@log_path, File::WRONLY | File::APPEND)
    logfile.sync = true
    @logger = Logger.new logfile
    @logger.level = Logger::INFO
    @logger.datetime_format = '%Y-%m-%d %H:%M:%S'
    @logger_mutex = Mutex.new
  end

  # A simple pidfile-based lock so that two back-ups never run concurrently.
  def lock_or_exit
    if File.exist?(pidfile) && run_command("kill -0 #{pid = File.read pidfile}")
      abort "There's another backup in progress. Pid: #{pid} (from #{pidfile})."
    end
    File.write pidfile, Process.pid
  end

  def unlock
    File.unlink pidfile
  end

  def pidfile
    @pidfile ||= "#{@backup_root_path}/backup.pid"
  end

  def run_command!(cmd, success_in_test_mode = true, abort_on_stderr: false)
    run_command cmd, success_in_test_mode, abort_on_stderr: abort_on_stderr, abort_on_error: true
  end

  def run_command(cmd, success_in_test_mode = true, abort_on_stderr: false, abort_on_error: false)
    indented_cmd = ' ' * indenting_level + cmd
    Thread.current[:indenting_level] += 1
    if @test_mode # dry-run: just print the command instead of executing it
      @logger_mutex.synchronize{ puts indented_cmd }
      return success_in_test_mode
    end
    start = Time.now
    log "started: '#{indented_cmd}'"
    stdout, stderr, status = Open3.capture3 cmd
    stdout = stdout.chomp
    stderr = stderr.chomp
    success = status.success?
    log stdout unless stdout.empty?
    log stderr, :warn unless stderr.empty?
    if (!success && abort_on_error) || (abort_on_stderr && !stderr.empty?)
      die "'#{cmd}' failed to run with exit status #{status.exitstatus}, aborting."
    end
    log "finished: '#{indented_cmd}' (#{success ? 'successful' : "failed with #{status.exitstatus}"}) " +
      "[#{human_duration Time.now - start}]"
    success
  end

  def indenting_level
    Thread.current[:indenting_level]
  end

  def log(msg, level = :info)
    return if @test_mode
    @logger_mutex.synchronize{ @logger.send level, msg }
  end

  VALID_OPTIONS = ['hourly', 'daily', 'weekly'].freeze
  def parse_args(args)
    args.shift if @test_mode = (args.first == 'test')
    unless args.size == 1 && VALID_OPTIONS.include?(@action = args.first)
      abort "Usage: 'backup [test] action', where action can be hourly, daily or weekly.
If test is specified the commands won't run but will be shown."
    end
  end

  def die(message)
    log message, :fatal
    was_exiting = @exiting
    @exiting = true
    delete_tmp_path_if_exists unless was_exiting
    unlock
    abort message
  end

  def create_tmp_path
    delete_tmp_path_if_exists
    create_subvolume @tmp_path
  end

  def create_subvolume(path, skip_if_exists = false)
    return if skip_if_exists && File.exist?(path)
    run_script %Q{btrfs subvolume create "#{path}"}
  end

  def delete_tmp_path_if_exists
    delete_subvolume_if_exists @tmp_path, delete_children: true
  end

  # btrfs can't delete a subvolume that still contains other subvolumes,
  # so delete the nested ones first when asked to.
  def delete_subvolume_if_exists(path, delete_children: false)
    return unless File.exist?(path)
    Dir["#{path}/*"].each{|s| delete_subvolume_if_exists s } if delete_children
    run_script %Q{btrfs subvolume delete -c "#{path}"}
  end

  def run_script(script)
    run_command! script
  end

  # A script may be a String (a command), a Proc (arbitrary Ruby code), an Array
  # (entries run sequentially) or a Par (entries run in parallel, one thread each).
  def run_scripts(scripts = all_scripts)
    case scripts
    when Par
      il = indenting_level
      last_il = il
      scripts.map do |s|
        Thread.start do
          Thread.current[:indenting_level] = il
          run_scripts s
          last_il = [Thread.current[:indenting_level], last_il].max
        end
      end.each &:join
      Thread.current[:indenting_level] = last_il
    when Array
      scripts.each{|s| run_scripts s }
    when String
      run_script scripts
    when Proc
      scripts[] # a Proc: call it
    else
      die "Invalid script (#{scripts.class}): #{scripts}"
    end
  end

  Par = Class.new Array
  def all_scripts
    [
      Par[->{create_tmp_path}, "mkdir -p #{@backup_root_path}/latest", dump_main_db_on_d1,
        dump_tickets_db_on_d1],
      Par[local_docs_sync, local_tickets_files_sync, local_main_db_sync, local_tickets_db_sync],
      Par[main_docs_script, tickets_files_script, main_db_script, tickets_db_script],
    ]
  end

  def dump_main_db_on_d1
    %q{ssh backup@backup-server.com "pg_dump -Fc -f /tmp/main_db.dump } +
      %q{main_db_production"}
  end

  def dump_tickets_db_on_d1
    %q{ssh backup@backup-server.com "pg_dump -Fc -f /tmp/tickets.dump redmine_production"}
  end

  def local_docs_sync
    [
      ->{ create_subvolume local_docmanager, true },
      "rsync -azHq --delete-excluded --delete --exclude doc --inplace " +
        "backup@backup-server.com:/var/main-documents/production/docmanager/ " +
        "#{local_docmanager}/",
    ]
  end

  def local_docmanager
    @local_docmanager ||= "#{@backup_root_path}/latest/docmanager"
  end

  def local_tickets_files_sync
    [
      ->{ create_subvolume local_tickets_files, true },
      "rsync -azq --delete --inplace backup@backup-server.com:/var/redmine/files/ " +
        "#{local_tickets_files}/",
    ]
  end

  def local_tickets_files
    @local_tickets_files ||= "#{@backup_root_path}/latest/tickets-files"
  end

  def local_main_db_sync
    [
      ->{ create_subvolume local_main_db, true },
      "rsync -azq --inplace backup@backup-server.com:/tmp/main_db.dump " +
        "#{local_main_db}/main_db.dump",
    ]
  end

  def local_main_db
    @local_main_db ||= "#{@backup_root_path}/latest/postgres"
  end

  def local_tickets_db_sync
    [
      ->{ create_subvolume local_tickets_db, true },
      "rsync -azq --inplace backup@backup-server.com:/tmp/tickets.dump " +
        "#{local_tickets_db}/tickets.dump",
    ]
  end

  def local_tickets_db
    @local_tickets_db ||= "#{@backup_root_path}/latest/tickets-db"
  end

  def main_docs_script
    create_snapshot_cmd local_docmanager, "#{@tmp_path}/docmanager"
  end

  def create_snapshot_cmd(from, to)
    "btrfs subvolume snapshot #{from} #{to}"
  end

  def main_db_script
    create_snapshot_cmd local_main_db, "#{@tmp_path}/postgres"
  end

  def tickets_db_script
    create_snapshot_cmd local_tickets_db, "#{@tmp_path}/tickets-db"
  end

  def tickets_files_script
    create_snapshot_cmd local_tickets_files, "#{@tmp_path}/tickets-files"
  end

  LAST_DIR_PER_TYPE = {
    'hourly' => 23, 'daily' => 6, 'weekly' => 3
  }.freeze
  # Deletes the oldest snapshot, shifts the remaining ones up, and creates the
  # new position 0 (from tmp for hourly, from a snapshot of hourly.0 otherwise).
  def rotate
    last = LAST_DIR_PER_TYPE[@action]
    path = ->(n, action = @action){ "#{@backup_root_path}/#{action}.#{n}" }
    delete_subvolume_if_exists path[last], delete_children: true
    n = last
    while (n -= 1) >= 0
      run_script "mv #{path[n]} #{path[n+1]}" if File.exist?(path[n])
    end
    dest = path[0]
    case @action
    when 'hourly'
      run_script "mv #{@tmp_path} #{dest}"
    when 'daily', 'weekly'
      die 'last hourly back-up does not exist' unless File.exist?(hourly0 = path[0, 'hourly'])
      create_tmp_path
      Dir["#{hourly0}/*"].each do |subvolume|
        run_script create_snapshot_cmd subvolume, "#{@tmp_path}/#{File.basename subvolume}"
      end
      run_script "mv #{@tmp_path} #{dest}"
    end
  end

  def report_completed
    log "Backup finished in #{human_duration Time.now - @start_time}"
  end

  # Formats a duration in seconds as, e.g., "45s", "2m5s" or "1h2m".
  def human_duration(total_time_sec)
    n = total_time_sec.round
    parts = []
    [60, 60, 24].each{|d| n, r = n.divmod d; parts << r; break if n.zero?}
    parts << n unless n.zero?
    pairs = parts.reverse.zip(%w(d h m s)[-parts.size..-1])
    pairs.pop if pairs.size > 2 # do not report seconds when irrelevant
    pairs.flatten.join
  end
end

Backup.new.run(ARGV) if File.expand_path($PROGRAM_NAME) == File.expand_path(__FILE__)

So, this is what I get when running in test mode:

$ ruby backup.rb test hourly
btrfs subvolume create "/home/rodrigo/backups/tmp"
mkdir -p /home/rodrigo/backups/latest
ssh backup@backup-server.com "pg_dump -Fc -f /tmp/main_db.dump main_db_production"
ssh backup@backup-server.com "pg_dump -Fc -f /tmp/tickets.dump redmine_production"
 btrfs subvolume create "/home/rodrigo/backups/latest/docmanager"
 btrfs subvolume create "/home/rodrigo/backups/latest/tickets-files"
 btrfs subvolume create "/home/rodrigo/backups/latest/postgres"
 btrfs subvolume create "/home/rodrigo/backups/latest/tickets-db"
  rsync -azHq --delete-excluded --delete --exclude doc --inplace backup@backup-server.com:/var/main-documents/production/docmanager/ /home/rodrigo/backups/latest/docmanager/
  rsync -azq --delete --inplace backup@backup-server.com:/var/redmine/files/ /home/rodrigo/backups/latest/tickets-files/
  rsync -azq --inplace backup@backup-server.com:/tmp/main_db.dump /home/rodrigo/backups/latest/postgres/main_db.dump
  rsync -azq --inplace backup@backup-server.com:/tmp/tickets.dump /home/rodrigo/backups/latest/tickets-db/tickets.dump
   btrfs subvolume snapshot /home/rodrigo/backups/latest/tickets-db /home/rodrigo/backups/tmp/tickets-db
   btrfs subvolume snapshot /home/rodrigo/backups/latest/tickets-files /home/rodrigo/backups/tmp/tickets-files
   btrfs subvolume snapshot /home/rodrigo/backups/latest/postgres /home/rodrigo/backups/tmp/postgres
   btrfs subvolume snapshot /home/rodrigo/backups/latest/docmanager /home/rodrigo/backups/tmp/docmanager
    mv /home/rodrigo/backups/tmp /home/rodrigo/backups/hourly.0

The "all_scripts" method is the one you should adapt for your needs.

Final notes

I hope this script can serve as a base for your own back-up script in Ruby, in case I was able to convince you to give this strategy a try. Unless you are already using some robust back-up solution such as Bacula or another advanced system, this strategy might interest you: it's very simple to implement, takes little space and allows for fast incremental back-ups.

Please let me know in the comments section if you have any questions or would suggest any improvements. And if you think you've found a bug, I'd love to hear about it.

Good luck dealing with your back-ups. :)
