DSL Ideas and Suggestions :: Improving DSL failover recovery



Hello again,

I had a problem with DSL booting recently which I tracked down to my restore drive running out of space: the backup file was incompletely written and thus corrupt, and on booting (frugal) the system hung trying to restore it. The only way I could recover was to boot from a live CD and delete the backup file.

I wanted to suggest making the system more robust by adding some checks so that it is never left in a damaged state at shutdown and cannot hang on bootup.

Some suggestions ...

A check that the backup file was written successfully before shutdown.
Allow the boot process to test the validity of the backup file before restoring (see the rough sketch below).
Install the skeleton backup file if corruption is detected.
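
Something along these lines at boot, for example - the paths below are only placeholders to illustrate the idea, and the real restore script would use DSL's actual backup location and extract target:

#!/bin/sh
# Rough sketch only - paths are placeholders, not DSL's real ones.
BACKUP=/mnt/hda1/backup.tar.gz              # wherever the restore= option points
SKELETON=/etc/skel/backup-skeleton.tar.gz   # a known-good minimal backup

if gzip -t "$BACKUP" 2>/dev/null; then
    # Archive passes its integrity test - restore as normal.
    # (Extract target assumed here; the real script knows the right one.)
    tar -xzf "$BACKUP" -C /
else
    echo "backup file failed its integrity check - restoring skeleton instead"
    [ -f "$SKELETON" ] && tar -xzf "$SKELETON" -C /
fi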

Regards b1m1

Checking that sufficient space is available before writing the backup file is, in theory, a good idea.  The same thing has happened to me before - you lose both your current and former backups :(

There would be the question, though, of what counts as "sufficient" space. AFAIK tar can't estimate the size of an archive before creating it, though it can count the total bytes written using --totals.

Perhaps a "backup" could be made first to /dev/null, with the bytes written counted via --totals - this would give the size of the potential backup, pre-compression.

gzip compression ratios spread between about 0.1 and 0.75, so a worst-case multiplier could be used to estimate the maximum possible size of the backup tarball. If this were greater than the free space df showed on the backup partition, the backup could be aborted.

Haven't tried this, might not work (--totals might not work when writing to /dev/null ?)

Tried it, appears it would work.  

Writing the uncompressed archive to /dev/null is very fast so this would not slow down the backup process by much, either.

Might try to code it into the backup scripts in the next few days and see how it goes
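
Roughly what I have in mind is below. The file list and destination paths are just placeholders for the sake of the example, and the parsing assumes the usual "Total bytes written:" line from GNU tar and the usual column layout from df:

#!/bin/sh
# Rough pre-flight check - the real backup script already knows both of these.
FILELIST=/home/dsl/.filetool.lst   # placeholder: list of files/dirs to back up
DEST=/mnt/hda1                     # placeholder: backup partition mount point

# Dry run: write the archive to /dev/null and let tar report the byte count.
# --totals prints to stderr, e.g. "Total bytes written: 10240 (...)".
BYTES=`tar -cf /dev/null --totals -T "$FILELIST" 2>&1 | awk '/Total bytes written/ {print $4}'`
[ -z "$BYTES" ] && BYTES=0

# Worst case gzip gains nothing, so require roughly the uncompressed size free.
NEEDED_KB=`expr $BYTES / 1024 + 1`
FREE_KB=`df -k "$DEST" | tail -1 | awk '{print $4}'`   # 4th column = available KB

if [ "$FREE_KB" -lt "$NEEDED_KB" ]; then
    echo "Not enough room on $DEST: need about ${NEEDED_KB}K, only ${FREE_KB}K free - aborting backup"
    exit 1
fi
# ...otherwise carry on with the normal backup.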

Just curious, what's in your backup that is taking so much space?
Hopefully not extensions, as they should not be backed up.  I would guess the fat mail client?
I will look into this for v3.1.
Quote
gzip compression ratios spread between about 0.1 and 0.75 so a worst-case multiplier could be used


I should have said a 'typical' range.  Brain was not engaged: if the user wants to back up a whole lot of already-compressed files (jpegs, for example), gzip will obviously compress these little or not at all.

So the only upper bound on size that will always be 100% safe is very close to the size of the uncompressed archive.
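
Easy to see for yourself with any already-compressed file (photo.jpg here is just a stand-in):

ls -l photo.jpg                  # size before
gzip -c photo.jpg | wc -c        # size after - usually within a percent or two, sometimes even larger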

Robert, if you're busy and want to wait until I have a go at this for your perusal/modification/rejection/pooh-poohing, by all means do so.  I'm on holidays so the beach takes precedence, but it might make me feel like my current existence has meaning ...
