BitDepth

The Backup Sermon


Above: People imagine something like this when they hear “cloud backup,” an admittedly celestial-sounding notion. Like the bald child in The Matrix, I must warn: “There is no cloud.” There are only servers in other countries that you rent space on. Image by Oleksiy Mark/DepositPhotos.

BitDepth#1034 for March 29, 2016

It seems oddly relevant, on the Tuesday after one of the most sacred weekends on the Christian calendar, to be talking about backup.

It’s a text I return to occasionally, sometimes preaching from a place of deep and irreplaceable loss, but like the crucifixion and resurrection, it’s a narrative that never seems to get old. Or particularly comfortable, come to think of it.

I’ve written this before, and I’ll write it again: there are only two kinds of computer users, those who have lost data and those who will.

If you think the human spirit is easily broken, have a look at the razor thin tolerances and impossibly small mechanisms inside your hard drive.

What’s shocking about drive failures isn’t that they happen; it’s that they happen as rarely as they do.

You may rest assured, however, that drives, whether mechanical or solid state, will eventually fail, and you have no assurance that such failures will come with any forewarning at all. I run software on my server that monitors each drive’s SMART status continuously, but that monitoring has never given me advance warning of a failure.

My paranoia about these matters is pervasive, absolute and ongoing. Painful experience has taught me that backup should never be considered as a matter of duplicating files; it must begin from the perspective of recovery.

Your strategy must begin by envisioning the horror of losing everything on your computer. The effectiveness of your backup strategy will be measured by how long it takes you, first, to get back to work and then to restore all your data.

My own system has four active tiers.

Everything on my imaging workstation/server is continuously backed up using the built-in Macintosh incremental backup service, Time Machine.

To create a big enough drive for that, I’ve taken the risky measure of creating a striped RAID (RAID 0), which yields a volume that’s twice as likely to fail, so it’s a safety net made of feebler strands than I’d like.
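That “twice as likely to fail” claim is simple probability: a striped array is lost if either drive dies. A quick sketch, using a made-up 3% annual failure rate for illustration (real rates vary by drive model and age):

```python
# Why a striped (RAID 0) pair is riskier: the array survives only if
# ALL member drives survive. Assumes independent failures, and the 3%
# annual failure rate below is an illustrative placeholder, not a
# figure from the column.

def raid0_failure_probability(p_single: float, drives: int = 2) -> float:
    """Probability the striped array fails: 1 - P(all drives survive)."""
    return 1 - (1 - p_single) ** drives

p = 0.03  # hypothetical annual failure rate of one drive
print(f"single drive:     {p:.4f}")
print(f"two-drive stripe: {raid0_failure_probability(p):.4f}")  # ≈ 0.0591, nearly double
```

For small per-drive failure rates the array’s risk is almost exactly the per-drive rate times the number of drives, which is why striping for capacity always trades away safety.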

I’ve already recovered a 2TB internal drive from a Time Machine backup (the server hosts seven different drives internally), which took around a day.

The next tier is a duplicated pair of drives, holding files that have moved from active use to archival status, which sit in an external drive box on my desktop (usually switched off until needed).

The third tier is a drive holding all of my digital images to date that sits in my sister’s house in Houston; it gets updated annually, physically swapped for a newer, more current drive.

The next two tiers have proven far more elusive.

The fourth tier was meant to be a collection of optical discs, moving the images to write-once media, which has some advantages over mechanical drives.

That strategy migrated from DVD data discs to Blu-Ray in short order, but while both media and drives are in place, the process demands a level of perfected organisation in my filing that I haven’t been able to muster the time or energy to hammer into place.

So I’ve skipped to the fifth tier, motivated by slowly rising upload speeds available from local broadband providers.

That solution is a cloud-based strategy: moving critical files to an overseas location over an Internet connection.

To be clear, I am currently managing a 4TB dataset of master files and derivative files with thousands of hours’ worth of corrective work invested in them. This is actually a fairly small dataset for anyone who works with RAW files as originals and retains TIFF images as masters for corrected images.

I know wedding photographers who handle much larger collections, and once you start replicating across multiple drives for redundancy, things quickly escalate to enterprise-class data solutions.

The eternal dilemma is getting maximum redundancy for minimum cost, with ready access to critical files that are kept frequently up to date.

For online backup, I’d investigated Backblaze and CrashPlan, both popular with my colleagues, but eventually chose Amazon Glacier, the most cost-effective online data storage I’ve found so far.

To access it, I’ve been using Arq, a cross-platform backup tool created by a cloud service company. It performs incremental backups to a range of services, adding new files to an existing storage pool after the initial backup, with excellent recovery from interruptions.

The initial backup took around a month, and I can add around 50GB of data overnight.
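Moving 50GB in a night implies a respectable sustained upload speed. A rough sketch, assuming a 12-hour overnight window (the window length is my assumption, not stated in the column):

```python
# Implied sustained upload speed for moving 50 GB overnight.
# The 12-hour window is an assumption for illustration.

GB = 10**9  # decimal gigabytes, as ISPs and drive makers count them

def sustained_mbps(gigabytes: float, hours: float) -> float:
    """Sustained throughput in megabits per second."""
    bits = gigabytes * GB * 8
    return bits / (hours * 3600) / 10**6

print(f"{sustained_mbps(50, 12):.1f} Mbps")  # ≈ 9.3 Mbps sustained
```

Which is to say an overnight 50GB run needs a connection that can hold roughly 9–10 Mbps of upstream for hours on end, exactly the “slowly rising upload speeds” that made this tier viable.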

Amazon Glacier is “cool” storage. Transfer rates are slower than the more popular S3 storage services, and the company makes its money off Put, Get and List requests, charging as you add or retrieve information from its servers.

Left alone during February, my charges for 4TB dropped to US$15 for the month, though it cost US$40 per month during the initial upload.
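The gap between the US$15 idle month and the US$40 upload months is the request charges at work. A sketch of how a Glacier-style bill decomposes; the rates below are illustrative placeholders chosen to reproduce the column’s figures, not Amazon’s actual 2016 price list:

```python
# Toy model of a cold-storage bill: a flat per-GB storage charge plus
# per-request charges. All rates are hypothetical placeholders.

def monthly_bill(stored_gb: float, storage_rate: float,
                 requests: int, per_1000_requests: float) -> float:
    """Storage cost plus request cost for one month, in US dollars."""
    return stored_gb * storage_rate + (requests / 1000) * per_1000_requests

# An idle month: 4 TB sitting still, almost no requests.
idle = monthly_bill(4000, storage_rate=0.00375, requests=100,
                    per_1000_requests=0.05)
print(f"idle month:   US${idle:.2f}")  # ≈ US$15, storage dominates

# An upload-heavy month: same storage plus a flood of Put requests.
busy = monthly_bill(4000, storage_rate=0.00375, requests=500_000,
                    per_1000_requests=0.05)
print(f"upload month: US${busy:.2f}")  # ≈ US$40 with these assumed rates
```

The design lesson is that cold storage punishes churn, not volume: leave the archive alone and the bill collapses to the storage line.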

Immolation-level data loss (knocks wood desperately) would see me requesting the backup drive from Houston via UPS while accessing ongoing work from Glacier.

The pieces are in place and test well. I really hope they never need to be put into action.
