HFS Storage Protect node registration requirements

Before you register a system for HFS Storage Protect backup, please ensure that it conforms to the following requirements.

1. Minimise large-scale changes to your data

The Storage Protect software works on the principle of incremental forever, sending only new or changed data to the backup servers.  However, if a folder is moved or renamed, all the data below that level is regarded as new and is resent to the HFS.  Similarly, changes to permissions usually trigger a resend of data.

If your data needs rearranging, we therefore request that you do this work before your initial backup.  Thereafter, it is best to keep large-scale changes to a minimum.  On this subject, please see further the changes that cause the resend of data to the HFS.
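The effect described above can be sketched in a few lines of Python. An incremental-forever client identifies each backed-up object by its full path, so renaming a folder makes every path beneath it look new. The function and data below are illustrative only, not the actual Storage Protect internals.

```python
# Sketch of why a folder rename causes a full resend: the client keys
# each object by its full path, so a renamed parent makes every child
# path look new even when file contents are unchanged.

def changed_objects(previous, current):
    """Return the paths a client would treat as new or changed.

    `previous` and `current` map full path -> (size, mtime) as recorded
    at the last backup and at the current backup respectively.
    """
    return [path for path, meta in current.items()
            if previous.get(path) != meta]

# Before: two files live under /data/projects.
before = {"/data/projects/a.txt": (100, 1),
          "/data/projects/b.txt": (200, 1)}

# After renaming the folder to /data/archive, every full path differs,
# so both files are resent despite having identical contents.
after = {"/data/archive/a.txt": (100, 1),
         "/data/archive/b.txt": (200, 1)}

print(changed_objects(before, after))  # both files flagged for resend
```

Running the same comparison against an unchanged snapshot returns an empty list, which is why an ordinary incremental backup sends so little data.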

2. Keep partitions to a reasonable size

Storage Protect processes data one partition at a time.  If you have a very large amount of data to back up, such as several terabytes, the client software may find it difficult to process.  This is especially noticeable when a single partition holds a large number of files (more than a few million), even if the total quantity of data is low.

We therefore request that server account owners ensure that their data is appropriately arranged before the initial backup.  If you have several terabytes of data or more than a few million files, spread them across a number of partitions.
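Before the initial backup, owners can check a partition against the guidance above with a short script such as the following sketch. The thresholds are assumptions chosen to match the wording here ("a few million files", "several terabytes"); adjust them to your own situation.

```python
import os

# Hypothetical thresholds matching the guidance above; not official limits.
MAX_FILES = 5_000_000       # "more than a few million files"
MAX_BYTES = 4 * 1024**4     # "several terabytes" (4 TiB assumed here)

def partition_report(mount_point):
    """Count the files and bytes under one partition's mount point.

    Returns (file_count, total_bytes, over_threshold) so the owner can
    see whether the data should be spread across more partitions.
    """
    files = 0
    total = 0
    for dirpath, _dirnames, filenames in os.walk(mount_point):
        for name in filenames:
            files += 1
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # skip files that vanish or are unreadable mid-walk
    over = files > MAX_FILES or total > MAX_BYTES
    return files, total, over
```

For example, `partition_report("/data")` on a partition holding ten million small files would return `over = True`, signalling that the data should be rearranged before registration.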

3. Ensure that the necessary resources are available to the client machine

Backing up can be an intensive process for the client machine and for the local network:

If the machine contains millions of files, the backup client software will require a large amount of RAM to process them.  If there is insufficient RAM, the backup will stall.

IBM state that the Storage Protect backup client uses approximately 300 bytes of memory per file or directory, depending on file-name and path-name length: that is, about 300 MB to process one million files.  So, if a process on the client machine is limited to 2 GB of memory, the backup client software would be limited to processing roughly 7 million files in one filesystem/partition.

If the quantity of data to be backed up runs to hundreds of gigabytes, adequate network bandwidth is required for the backup to complete in a timely manner.  For normal servers that frequently back up large amounts of data, we recommend a gigabit network connection to the University backbone network; for large servers, this is a requirement.
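To see why bandwidth matters at this scale, a back-of-the-envelope transfer-time estimate helps. The function below is a hypothetical helper, not part of any Storage Protect tooling; it ignores protocol overhead, compression, and deduplication, all of which change the real numbers considerably.

```python
# Rough transfer-time estimate for an initial backup over a network link.

def transfer_hours(data_bytes, link_bits_per_sec, efficiency=0.7):
    """Hours needed to move `data_bytes` over a link, assuming a
    sustained throughput of `efficiency` times its nominal rate.

    The 0.7 default is an assumed real-world utilisation figure.
    """
    usable_bytes_per_sec = link_bits_per_sec * efficiency / 8
    return data_bytes / usable_bytes_per_sec / 3600

GIGABIT = 1_000_000_000  # 1 Gb/s nominal

# 500 GB over a gigabit link at 70% utilisation takes under two hours;
# the same data over a 100 Mb/s link would take roughly ten times longer.
print(round(transfer_hours(500 * 1000**3, GIGABIT), 1))
```

A server with a multi-terabyte initial backup on a 100 Mb/s connection would need days for its first pass, which is why gigabit connectivity is a requirement for large servers.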

Get support

If you cannot find the solution you need here, then we have other ways to get IT support.
