– By Dwane Pottratz, Senior Solutions Architect at ORock Technologies –

Modern businesses are juggling a growing amount of sensitive data. To reach that data, cybercrime and ransomware attacks are sweeping across public- and private-sector organizations, and the toll they are taking is only getting worse. According to a recent McAfee report, “global losses from cybercrime now total over $1 trillion, a more than 50 percent increase from 2018.” Among the findings, the average interruption to operations was approximately 18 hours, and the average cost to each company was more than half a million dollars per incident.

Yet, what’s most shocking about the report’s findings is that “56 percent of surveyed organizations said they do not have a plan to both prevent and respond to a cyber incident.” If your organization is part of that statistic, you should be worried. Nowadays, cybercrime can go undetected for weeks or months, exploit vulnerabilities in enterprise networks, and keep costing companies long after the initial breach.

IT decision makers know that a key defense against a data breach is to develop a comprehensive IT playbook that includes storing sensitive data separately, controlling user access and privileges, and doing regular off-site backups of your data.

So, what are your backup options to protect your most sensitive files?

In lieu of commercial backup software, UrBackup with s3backer is an open-source combination that provides IT administrators with the best parts of most backup solutions today. UrBackup is easy to set up, scales well, and is highly configurable. It takes advantage of btrfs (the B-tree file system) on Linux operating systems. Using btrfs, storage needs are greatly reduced; I have seen data ratios of 18:1. The other half of the solution is s3backer, an open-source tool that creates a virtual block device inside an S3 object bucket. s3backer also supports compression, and I have seen data ratios of 30:1.

Below is my blueprint for how you can set up UrBackup with s3backer to create an effective solution that backs up your data to local storage and then to an S3 bucket in the cloud.  The 3-2-1 backup rule is to have 3 copies of your data (your production data and 2 backup copies) on 2 different media (e.g., disk and tape), with 1 copy off-site for disaster recovery. With the UrBackup/s3backer solution, you get two different media copies with one off-site copy.  I’ll also give some ideas for the third copy.

While this solution may involve more steps to set up than others, it is pretty easy, is entirely open source, and the only costs are hardware and cloud storage.  It is also competitive with commercial software on data ratios: 18:1 for local storage and 30:1 in the cloud using RAW image formats.

Go from Zero to UrBackup with S3 in as Little as 2 Hours

Compute needs

Before you start, you will need compute for the server, either bare metal or virtual.  UrBackup server installs on Linux as well as MS Windows. I have chosen Ubuntu Linux to take advantage of UrBackup’s btrfs capabilities.

S3 storage needs

Before you start, you will need an S3 bucket with API access (any S3-compliant storage provides one). I am using ORock Technologies because it is a federally approved cloud that does not charge egress fees, so if I need to restore my data, there are zero charges.

Step-by-step Instructions

You will be doing everything as root; alternatively, prefix each command with ‘sudo’.

  1. Set up Ubuntu

I have set up Ubuntu 20.04 (focal) on a VM on a local OpenStack cloud.  It could be set up on bare metal or any other virtualization system, such as VMware or Hyper-V.

I have set Ubuntu up with two virtual hard drives: a 150 GB drive for the Ubuntu distro using ext4 (other file systems would also work), and a 1 TB drive for the btrfs file system.

a) Install Ubuntu. See Ubuntu documentation.

b) Update the distro after install and reboot.

$ apt update && apt dist-upgrade -y && apt autoremove -y && reboot


  2. Set up s3backer

a) Create a bucket in S3 storage.

Log on to your S3 storage console and create a bucket called ‘s3backer’.

b) Install s3backer.

$ apt install s3backer

c) Make a directory to mount the S3 bucket in.

$ mkdir -p /media/s3backer

d) Test mounting the S3 bucket. Once connected, the bucket appears as a virtual block device.  We will need an accessID, accessKey, and baseURL, which you can get from the API access section of the dashboard.  Replace them in the command below.

$ s3backer --accessId=<your_accessID> --accessKey=<your_accessKey> --authVersion=aws2 --baseURL=<baseURL> --blockSize=1M --compress --listBlocks --prefix=urbackup --size=1T s3backer /media/s3backer

Explanation of parameters:

  1. accessId: Get from the S3 console.
  2. accessKey: Get from the S3 console.
  3. authVersion: We need to use aws2 for ORock S3 storage. The latest version of authentication is on their roadmap for 2021.
  4. baseURL: The S3 base URL used to make the connection.
  5. blockSize: The block size to use in our S3 storage.
  6. compress: Use compression to save on S3 storage costs.
  7. listBlocks: Auto-detect non-empty blocks at startup.
  8. prefix: The prefix for the object names in S3. If you look at your S3 storage you will not see regular file names, so a prefix keeps these objects distinct from other files.
  9. size: The size of the virtual block device.
  10. s3backer: The name of the bucket we will be using.  It should already exist in the S3 storage.
  11. /media/s3backer: The local mount point.

e) Create a btrfs file system on the virtual block device file in the S3 bucket. It will be mounted as a loop device.

$ mkfs.btrfs /media/s3backer/file

f) Make a directory to mount the S3 bucket virtual block device.

$ mkdir -p /media/cloudbank

g) Test the mounting of the S3 virtual block device.

$ mount -o loop /media/s3backer/file /media/cloudbank

h) Create a subvolume for our cloud backup.

$ btrfs subvolume create /media/cloudbank/urbackup

i) Unmount both. (order matters)

$ umount /media/cloudbank

$ umount /media/s3backer

j) Add entries in /etc/fstab so the cloud file system will be available after reboot. (Replace <your_accessID>, <your_accessKey>, and <baseURL>.)

1) s3backer#s3backer /media/s3backer fuse _netdev,defaults,x-systemd.mount-timeout=10min,accessId=<your_accessID>,accessKey=<your_accessKey>,authVersion=aws2,baseURL=<baseURL>,blockSize=1M,compress,listBlocks,prefix=urbackup,size=1T,force                0 0

2) /media/s3backer/file /media/cloudbank    btrfs   _netdev,defaults,discard,x-systemd.requires=media-s3backer.mount        0 0

Extra parameters not covered above:

  1. _netdev: Tell mount to treat the S3 storage as a network file system.
  2. defaults: Tell mount to use the default settings.
  3. x-systemd.mount-timeout=10min: Allow 10 minutes to mount the file system. This is needed because mounting the S3 bucket can take time.  It will also make booting take longer, and the boot may look like it has hung; give it the indicated time to finish.
  4. force: Force the mounting of the file system. This is needed in case the file system was not unmounted cleanly.
  5. 0 0: Standard fs_freq and fs_passno. See the fstab documentation for more details.
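With both fstab entries in place, it is worth a quick sanity check before rebooting. A minimal sketch (the FSTAB variable is my own addition so the check can be rehearsed against a copy of the file; the mount commands themselves require root):

```shell
# Confirm both entries made it into /etc/fstab.
FSTAB="${FSTAB:-/etc/fstab}"

grep -q 's3backer#s3backer' "$FSTAB" && echo "s3backer entry present"
grep -q '/media/cloudbank'  "$FSTAB" && echo "cloudbank entry present"

# Then mount everything from fstab and confirm (requires root):
#   mount -a
#   findmnt /media/s3backer /media/cloudbank
```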


  3. Set up the second hard drive

a) Create a partition on the second hard drive.

$ fdisk /dev/vdb

1.) g – create a GPT partition table.

2.) n – create a new partition, taking all the defaults.

3.) w – write the partition table and exit.

b) Create the btrfs file system on the disk.

$ mkfs.btrfs /dev/vdb1

c) Create a label for the file system. We will use it in /etc/fstab for mounting.

$ btrfs filesystem label /dev/vdb1 DATA

d) Make a directory to mount the file system.

$ mkdir -p /media/data

e) Add an entry to /etc/fstab so the file system will be mounted at boot.

LABEL=DATA /media/data     btrfs   defaults        0 0

f) Mount the file system.

$ mount /media/data/

g) Create a subvolume to use.

$ btrfs subvolume create /media/data/urbackup


  4. Install UrBackup

a) Add the urbackup repository and update apt cache.

$ add-apt-repository ppa:uroni/urbackup

$ apt update

b) Install urbackup server.

$ apt install urbackup-server

When prompted for backup location enter “/media/data/urbackup”

c) Configure urbackup as desired.

At a minimum, I configure the following in the GUI at http://localhost:55414/:

  1. File Backups->Default directories to backup=C:\;D:\;E:\
  2. File Backups->Directories to backup are optional by default=’checked’
  3. Image Backups->Volumes to backup=ALL_NONUSB


  5. Sync the local storage to the cloud

There are several ways to sync the local backups to the cloud.  The simplest method is to use rsync, e.g. “$ rsync -rltv --delete /media/data/urbackup /media/cloudbank/urbackup”. This will sync the files, but it will not take advantage of the btrfs features that UrBackup uses.  However, since s3backer is using compression, the data is ~30:1 in the cloud compared to what is reported by du on Ubuntu. You could also use the btrbk tool to sync the file system; btrbk is beyond the scope of this post.

Syncing can be scheduled as a cron job, a Jenkins CI/CD job, or via a post-backup script.  For the post-backup script, see the UrBackup administration documentation and look for ‘post_’.


  6. Install the client on the machines to back up

Installing the client is easy: go to https://www.urbackup.org/index.html, download the client, and run the installer.  There are other clients available. The non-MSI installers will ask for some basic configuration at the end of the install.  The client will find the server automatically if it is on the same network.

Good luck and contact me at dpottratz@orocktech.com if you have any questions about these steps.

Request more information about ORock Technologies or schedule a demo.

