June 02, 2010
Server administrators should make backup copies of the operating system, application software, and critical data on a regular basis. The backup frequency, method, media used, and retention period may vary depending on the criticality of the services available from the server.
Server administrators should always have a backup copy of the most recent version of the operating system and application software available to rebuild the server. These copies are generally obtained from the hardware or software vendor at the time of purchase or initial installation. Server administrators should create a new backup of the currently installed software version as soon as possible after each software upgrade or update.
Server administrators should also have access to backup copies of the server-resident data files in case the current data is lost. The backup frequency should reflect how much time and effort users would need, or could afford, to invest in manual recovery of lost transactions. An application with low activity and sufficient paper transactional records may only need occasional backup. An application database with a high volume of transactions and little paper documentation of these transactions requires daily backup.
One method is the "grandfather, father, son" rotation, in which three versions are kept and the oldest is always overwritten by the newest. This method allows restoring data to a point 48 to 72 hours prior in the case of a daily backup cycle, or two to three weeks prior in the case of a weekly backup cycle.
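A three-generation rotation of this kind reduces to a simple round-robin over the media sets. The sketch below illustrates the idea; the function names and the three-set default are illustrative, not prescribed by this guidance:

```python
from datetime import date, timedelta

def media_slot(run_index: int, generations: int = 3) -> int:
    """Pick which rotating media set a backup run overwrites.

    With `generations` sets, the oldest copy is always the one
    replaced, which reduces to round-robin rotation.
    """
    return run_index % generations

def oldest_restore_point(today: date, cycle_days: int,
                         generations: int = 3) -> date:
    """Furthest-back restore point still on media, just before the
    oldest set is overwritten (72 hours back for a daily cycle
    with three sets)."""
    return today - timedelta(days=cycle_days * generations)
```

For a week of daily runs, `media_slot` cycles 0, 1, 2, 0, 1, 2, 0, so each set survives for three cycles before being overwritten.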
Another method often used in daily backup schemes involves the designation of a different set of backup media for each day of the week. Monday’s backup is retained until the following Monday, at which time it is overwritten. Tuesday’s backup is retained until the following Tuesday when it is overwritten, and so on. In this scheme, Friday backups are often retained for longer periods (e.g., a month). Thus, the backup of the 1st Friday of the month is retained until it is overwritten on the 1st Friday of the following month. This method provides recovery to any day in the prior week and to a designated point in each week of the prior month.
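The weekday scheme above can be expressed as a labeling rule: each backup date maps to the media set it overwrites. A minimal sketch, assuming backups run only on weekdays and Friday sets are keyed by which Friday of the month it is (the function name is illustrative):

```python
from datetime import date

def backup_label(d: date) -> str:
    """Label the media set a daily backup written on date `d` overwrites.

    Monday-Thursday media are reused the same day the following week;
    Friday media are reused monthly, keyed by which Friday of the
    month the date falls on (1st, 2nd, ...).
    """
    weekday = d.strftime("%A")
    if weekday != "Friday":
        return weekday                # overwritten same day next week
    nth = (d.day - 1) // 7 + 1        # 1st, 2nd, ... Friday of the month
    return f"Friday-{nth}"            # overwritten same Friday next month
```

For example, June 4, 2010 was the first Friday of that month, so its backup would be labeled `Friday-1` and retained until the first Friday of July.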
There are numerous variations on the methods described above. Regardless of the method, server administrators should have written backup procedures that specify clear labeling of all backup media. System administrators must destroy obsolete backup media as described in Section 04.10 of UPPS No. 04.01.01, Security of Texas State Information Resources.
There are different schemes used for rotating backups to offsite locations. It is always best to have a current copy of the data stored offsite. Where this is impracticable, send the most current copy offsite on a fixed schedule, weekly or at least monthly. In the above example, where Friday backups are recycled monthly, the Friday backups are often taken offsite. If data were backed up daily and a copy sent offsite weekly, users would lose, at most, one week's worth of data. Again, the criticality of the data should drive the decision.
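The worst-case loss in that example follows from simple arithmetic: once onsite media are destroyed, the newest surviving copy is the last one shipped offsite. A small illustrative helper (the function name is an assumption, not part of the guidance):

```python
def worst_case_loss_days(backup_interval_days: int,
                         offsite_interval_days: int) -> int:
    """Upper bound, in days, on data lost if a disaster destroys the
    server and all onsite media: the gap since the last offsite
    shipment, which can never be shorter than the backup interval."""
    return max(backup_interval_days, offsite_interval_days)

# Daily backups shipped offsite weekly: at most one week's data lost.
weekly_shipment_loss = worst_case_loss_days(1, 7)
```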
Ideally, the server administrator should select an offsite backup location that is as physically secure as the onsite location, yet accessible during an emergency. One effective practice is storing backups in another campus building, under a reciprocal agreement with another server administrator, using locking fireproof safes and a hard-copy access log that stays in the safe with the backups. Server administrators should also encrypt offsite backups to reduce risk in case of loss during transit to or from the offsite location. Server administrators should contact the Information Technology Assistance Center (ITAC@txstate.edu) for offsite storage options available through Technology Resources. NOTE: Server owners and administrators should avoid storing offsite backups in private residences. The homeowner could sever ties with the university and not return the data files. There is also no way to ensure restricted access to backup data stored in someone's home.
Frequency: Backup frequency will depend on the files' volatility and criticality, and on the cost and time required to recover lost files by other means. For example, operating system and application software files may not change very often (low volatility), making recovery from their original media feasible, whereas spreadsheet and database files may change daily (high volatility), making the original source data difficult to obtain.
Retention and Cycling of Media: Many different schemes exist for rotating media and replacing the oldest backup with the most current. Some keep many copies of the data (up to a year's worth); others keep just a few. The key factor in determining backup retention is how far back one is willing to go to recover lost data.
Offsite Storage: Once the backup frequency is determined, the server administrator should consider storing some of these backups at an offsite location rather than in the server room with the servers. Otherwise, the same disaster that destroys the server facility is likely to destroy the backups as well. For this reason, keeping a copy of the software and data at an offsite location is a common business best practice.
Testing Recovery: One of the most important, but often neglected, steps is actual testing of all the plans and procedures in place to recover from a disaster. Testing can be time-consuming and requires careful planning. Server administrators should annually test the recovery of a complete system or selected applications from backups. At a minimum, the server administrator should periodically restore data from backup media to verify that the backup process works correctly and that the data is recoverable in a usable form. As processes change, the backup plan may need updates, and the best way to determine this is through testing.
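A periodic restore check can be as simple as comparing checksums of the original and restored files. A minimal drill sketch, assuming local paths for illustration (a real drill would restore from actual backup media):

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file so a restored copy can be compared to the original."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Back a file up, "restore" it elsewhere, and verify the restored
# bytes match the original.
with tempfile.TemporaryDirectory() as tmp:
    original = Path(tmp) / "records.db"
    original.write_bytes(b"transaction log contents")
    before = sha256_of(original)

    restored = Path(tmp) / "restored" / "records.db"
    restored.parent.mkdir()
    shutil.copy2(original, restored)      # stand-in for a real restore

    assert sha256_of(restored) == before  # restore verified
```

Recording the checksums alongside the backup media gives the administrator a concrete pass/fail criterion for each test, rather than relying on the backup job's exit status alone.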