RAID 1 Solution: Solutions like drbd also come under this category. In RAID 1, disks are grouped into mirrored pairs and two copies of the same data are maintained, one on each disk of the pair. Every write has to be applied to both disks, whereas a read request can be satisfied from either of them. For backup, one disk can be removed from the mirror and the data on that disk backed up. Once the backup is over, the removed disk has to be re-synchronized with the active disk, preferably in the background. But even background synchronization impacts I/O performance. During backup and synchronization, the disk removed from the mirror is not available for servicing requests from normal applications, so the redundancy advantage of RAID 1 is lost. To overcome this disadvantage, three-way mirroring has to be used, which results in a large storage overhead.
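The mechanism can be summarized in a few lines of C. The sketch below is purely illustrative; the structures and functions such as mirror_write and mirror_resync are hypothetical and are not taken from drbd or any real RAID driver. It models a mirrored pair as two in-memory arrays: every write is duplicated to both members, a read is served from one member, and re-synchronizing a split-off member amounts to a full copy from the active disk.

/* Toy model of a RAID 1 mirrored pair (illustrative only). */
#include <stdio.h>
#include <string.h>

#define DISK_SIZE 1024

struct mirror {
    unsigned char disk0[DISK_SIZE];
    unsigned char disk1[DISK_SIZE];
    int disk1_detached;          /* 1 while disk1 is split off for backup */
};

/* A write must reach every active member of the pair. */
static void mirror_write(struct mirror *m, size_t off, const void *buf, size_t len)
{
    memcpy(m->disk0 + off, buf, len);
    if (!m->disk1_detached)
        memcpy(m->disk1 + off, buf, len);
}

/* A read can be satisfied from any member; here we always use disk0. */
static void mirror_read(struct mirror *m, size_t off, void *buf, size_t len)
{
    memcpy(buf, m->disk0 + off, len);
}

/* After backup, the detached disk must be re-synchronized with the
 * active one before it can serve requests again: a full copy. */
static void mirror_resync(struct mirror *m)
{
    memcpy(m->disk1, m->disk0, DISK_SIZE);
    m->disk1_detached = 0;
}

int main(void)
{
    struct mirror m = { .disk1_detached = 0 };
    char out[16] = { 0 };

    mirror_write(&m, 0, "before split", 13);
    m.disk1_detached = 1;                     /* split disk1 off for backup      */
    mirror_write(&m, 0, "after split ", 13);  /* only disk0 sees this write      */
    mirror_resync(&m);                        /* the costly step the text notes  */
    mirror_read(&m, 0, out, 13);
    printf("%s\n", out);
    return 0;
}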
Snapshots in File System: Some file systems, such as VxFS [1], provide a snapshot feature. Logically, snapshots happen at the file system level, but the implementation is actually at the block I/O level. However, every file system in the system has to provide the snapshot feature in order to take an online backup of its disk.
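The following sketch illustrates the block-level copy-on-write idea under the assumption of a toy in-memory block device: before a block of the live device is overwritten, its old contents are saved, so snapshot reads always see the data as it was at snapshot time. The names cow_write and snap_read are illustrative and are not taken from VxFS.

/* Toy block-level copy-on-write snapshot (illustrative only). */
#include <stdio.h>
#include <string.h>

#define NBLOCKS 8
#define BLKSZ   16

static char primary[NBLOCKS][BLKSZ];     /* the live device              */
static char saved[NBLOCKS][BLKSZ];       /* copy-on-write save area      */
static int  copied[NBLOCKS];             /* 1 if block was saved already */

/* Write to the live device; save the old block first (copy-on-write). */
static void cow_write(int blk, const char *data)
{
    if (!copied[blk]) {
        memcpy(saved[blk], primary[blk], BLKSZ);
        copied[blk] = 1;
    }
    strncpy(primary[blk], data, BLKSZ - 1);
}

/* Read the snapshot: saved copy if the block has changed, else live data. */
static const char *snap_read(int blk)
{
    return copied[blk] ? saved[blk] : primary[blk];
}

int main(void)
{
    strncpy(primary[0], "old contents", BLKSZ - 1);   /* data before snapshot */
    /* ... snapshot taken here ... */
    cow_write(0, "new contents");
    printf("live: %s\n", primary[0]);    /* new contents */
    printf("snap: %s\n", snap_read(0));  /* old contents */
    return 0;
}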
Clones in File System: Some file systems, such as VxFS and WAFL, provide clones that enable online backup. When a snapshot is created, the entire file system is marked copy-on-write. Whenever any data is about to be changed, the metadata (such as inodes) pointing to that data is copied and the copy is given the new data to point at. A new superblock points to the old data, so we now have file system structures that point to both the old and the new data. The old data and the new data, along with their file system metadata, constitute two different file systems and can therefore be mounted at two different mount points. A backup can be taken from the file system that holds the old data. This design, however, requires major changes in the file system.
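A toy illustration of the clone idea follows; the structures and the function clone_write are hypothetical and greatly simplified. The point is that an update copies the metadata and attaches new data to the copy, while the old superblock continues to reference the old metadata and data, so both views remain mountable.

/* Toy copy-on-write clone (illustrative only). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct inode {
    const char *data;            /* toy stand-in for the block pointers */
};

struct superblock {
    struct inode *root;          /* toy stand-in for the metadata tree  */
};

/* Copy-on-write update: never touch old data or old metadata in place. */
static struct superblock clone_write(const char *newdata)
{
    struct inode *ni = malloc(sizeof(*ni));      /* copy the inode        */
    ni->data = strdup(newdata);                  /* point it at new data  */
    return (struct superblock){ .root = ni };    /* new superblock        */
}

int main(void)
{
    struct inode old_inode = { .data = "old file contents" };
    struct superblock snap = { .root = &old_inode };  /* clone / backup view */
    struct superblock live = clone_write("new file contents");

    printf("live view: %s\n", live.root->data);  /* sees the new data   */
    printf("snap view: %s\n", snap.root->data);  /* still sees old data */
    return 0;
}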
Log-Structured File System: Log-structured file systems (LFS) [2] use a sequential, append-only log as their only on-disk structure. Since writes always go to the tail of the log, they are all sequential and disk seeks can be eliminated. Data to be retrieved from the disk is always located by traversing the log leftwards from its tail. While a snapshot is active, the data to the left of the log tail position at checkpoint time constitutes the snapshot data. All writes after the checkpoint are appended at the tail of the log, so they end up to the right of the checkpoint position. However, changes are required in the file system to retrieve snapshot data from the log, and garbage collection must not be performed while the snapshot is active.
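The sketch below models the log as an in-memory array to illustrate the checkpoint idea; log_write, log_read and the checkpoint variable are illustrative names, not LFS code. New writes are appended to the right of the checkpoint position, while snapshot reads simply ignore everything written after it.

/* Toy append-only log with a snapshot checkpoint (illustrative only). */
#include <stdio.h>
#include <string.h>

#define LOG_CAP 16

struct entry { int key; char val[16]; };

static struct entry log_area[LOG_CAP];
static int tail;                         /* next free slot in the log      */
static int checkpoint = -1;              /* tail position at snapshot time */

/* All writes are appended at the tail of the log. */
static void log_write(int key, const char *val)
{
    log_area[tail].key = key;
    strncpy(log_area[tail].val, val, sizeof log_area[tail].val - 1);
    tail++;
}

/* A lookup scans backwards from a limit; the snapshot lookup ignores
 * entries written after the checkpoint, i.e. to its right. */
static const char *log_read(int key, int limit)
{
    for (int i = limit - 1; i >= 0; i--)
        if (log_area[i].key == key)
            return log_area[i].val;
    return NULL;
}

int main(void)
{
    log_write(1, "old");
    checkpoint = tail;                   /* snapshot taken here                */
    log_write(1, "new");                 /* lands to the right of checkpoint   */

    printf("live: %s\n", log_read(1, tail));        /* new */
    printf("snap: %s\n", log_read(1, checkpoint));  /* old */
    return 0;
}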
LVM For Linux: The Logical Volume Manager (LVM) [3] adds an additional layer between the physical peripherals and the I/O interface in the kernel to present a logical view of disks. This allows several disks (``physical volumes'') to be concatenated into a storage pool (``volume group''). Recent releases include support for snapshot logical volumes, with which snapshots can be taken of any file system. However, the implementation in the 2.2 Linux kernel does not support persistent snapshots. In addition, the current implementation is not clean: it is not a separate module but is hacked into the Linux source code to obtain the required mapping between the logical devices and the original devices, so that the actual request never reaches the LVM pseudo-device.
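The extra mapping layer can be illustrated with a small, purely hypothetical example that concatenates two physical volumes into one volume group and translates logical extents to (physical volume, extent) pairs; it is not the real LVM mapping code or on-disk format.

/* Toy logical-to-physical extent mapping (illustrative only). */
#include <stdio.h>

#define EXTENTS_PER_PV 4

struct pv_extent { int pv; int extent; };

/* Concatenation of two physical volumes into one volume group:
 * logical extents 0-3 live on PV0, logical extents 4-7 on PV1. */
static struct pv_extent map_logical(int logical_extent)
{
    struct pv_extent pe;
    pe.pv = logical_extent / EXTENTS_PER_PV;
    pe.extent = logical_extent % EXTENTS_PER_PV;
    return pe;
}

int main(void)
{
    for (int le = 0; le < 2 * EXTENTS_PER_PV; le++) {
        struct pv_extent pe = map_logical(le);
        printf("logical extent %d -> PV%d extent %d\n", le, pe.pv, pe.extent);
    }
    return 0;
}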