XFS File System

Discussion in 'Filesystem' started by Jarret W. Buse, Jul 29, 2013.

  1. Jarret W. Buse


    XFS was created by Silicon Graphics (SGI) in 1993 as the successor to its Extent File System (EFS). The name began as a placeholder, the 'X' file system, and it has been used ever since.

    XFS first shipped with IRIX 5.3. A Linux port followed, with its first stable release in 2001, and XFS was merged into the mainline Linux kernel in 2002.

    File names are limited to 255 characters. To support large files and large partitions (more addressable blocks), XFS is a 64-bit file system. The file and file system size limits are as follows:

                      32-bit system    64-bit system
    File size:        16 Terabytes     16 Exabytes
    File system:      16 Terabytes     18 Exabytes

    To allow for continued growth, XFS supports a very large number of inodes and directory entries.

    File system consistency is guaranteed by the use of Journaling. The Journal size is calculated from the size of the partition on which the XFS file system is created. If a crash occurs, the Journal can be used to replay the logged transactions. When the file system is mounted and a previous crash is detected, recovery from the Journal is automatic.

    To support faster throughput, the file system is divided into Allocation Groups, which allow multiple application threads to perform I/O simultaneously. This lets systems with multiple processors or multi-core processors achieve better throughput, and the benefit is greater when the XFS file system spans multiple devices.
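
    To illustrate the kind of workload that benefits, here is a rough sketch in C (not from the article): several threads each write their own file under a hypothetical XFS mount at /mnt/xfs. New directories tend to be placed in different Allocation Groups, so giving each thread its own directory lets the writes proceed against separate groups in parallel.

        /*
         * Rough sketch: parallel writers, one file per thread.
         * The directories /mnt/xfs/dir0 .. dir3 are made-up examples and are
         * assumed to already exist.  Compile with: cc -pthread demo.c
         */
        #include <fcntl.h>
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        #define NTHREADS 4
        #define CHUNK    (1 << 20)        /* 1 MiB per write() call        */
        #define TOTAL    (64u * CHUNK)    /* 64 MiB written by each thread */

        static void *writer(void *arg)
        {
            long id = (long)arg;
            char path[64];

            /* One directory per thread, so each file likely lands in a
             * different Allocation Group. */
            snprintf(path, sizeof(path), "/mnt/xfs/dir%ld/data.bin", id);

            int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror(path); return NULL; }

            char *buf = malloc(CHUNK);
            if (!buf) { close(fd); return NULL; }
            memset(buf, 'x', CHUNK);

            for (unsigned done = 0; done < TOTAL; done += CHUNK)
                if (write(fd, buf, CHUNK) != CHUNK) { perror("write"); break; }

            free(buf);
            close(fd);
            return NULL;
        }

        int main(void)
        {
            pthread_t threads[NTHREADS];

            for (long i = 0; i < NTHREADS; i++)
                pthread_create(&threads[i], NULL, writer, (void *)i);
            for (int i = 0; i < NTHREADS; i++)
                pthread_join(threads[i], NULL);
            return 0;
        }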

    For multiple devices to be used within the same XFS file system, RAID 0 (striping) can be implemented. The more devices in the striped file system, the higher the throughput that can be achieved.

    To provide even higher throughput, XFS supports Direct I/O, which allows data to move between the XFS file system and the application's memory space directly. Because the data bypasses the kernel's page cache and the extra copy that goes with it, reads and writes can be faster for applications that manage their own caching.
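
    A rough C sketch of Direct I/O (the file path is a made-up example): the buffer is allocated with the alignment O_DIRECT requires, and the read lands straight in the application's buffer without passing through the page cache.

        #define _GNU_SOURCE               /* for O_DIRECT */
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void)
        {
            const size_t align = 4096;                /* safe alignment on most devices */
            const size_t len   = 64 * 1024;           /* 64 KiB, a multiple of 4096     */
            void *buf;

            if (posix_memalign(&buf, align, len) != 0) {
                fprintf(stderr, "posix_memalign failed\n");
                return 1;
            }

            int fd = open("/mnt/xfs/bigfile.dat", O_RDONLY | O_DIRECT);
            if (fd < 0) { perror("open"); return 1; }

            ssize_t n = read(fd, buf, len);           /* data goes straight into buf */
            if (n < 0)
                perror("read");
            else
                printf("read %zd bytes without going through the page cache\n", n);

            close(fd);
            free(buf);
            return 0;
        }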

    Another throughput feature is Guaranteed Rate I/O, which provides a way to reserve bandwidth to and from the file system, giving real-time applications predictable access.

    XFS also increases performance and reduces fragmentation through Delayed Allocation: instead of assigning blocks as soon as data is written into memory, the file system waits until the data is flushed to disk, so it can pick a single contiguous region. This works especially well for files whose final size is not known while they are still being written.

    XFS also supports sparse files, which save space when a file contains large runs of zeroes. Instead of writing the zero-filled regions to disk, the file system records them only in metadata as 'holes'. When the file is read, the holes are presented as zeroes again, so the file appears in its normal, full-size state.
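
    A short C sketch of the idea (the path is a made-up example): seeking 1 GiB past the start of a new file and writing a single byte creates a hole that is described only by metadata, so the file reports a large size while occupying almost no disk blocks.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            const char *path = "/mnt/xfs/sparse.dat";

            int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }

            lseek(fd, (off_t)1 << 30, SEEK_SET);   /* jump 1 GiB forward       */
            write(fd, "x", 1);                     /* only this byte hits disk */
            close(fd);

            struct stat st;
            if (stat(path, &st) == 0) {
                printf("apparent size : %lld bytes\n", (long long)st.st_size);
                printf("blocks on disk: %lld (512-byte units)\n",
                       (long long)st.st_blocks);
            }
            return 0;
        }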

    Even with these methods to reduce fragmentation, it can still occur once free space runs low. To help alleviate the issue, Online Defragmentation can move files into contiguous blocks, and it can run while the XFS volume is mounted and in use.

    Space on the file system is allocated in Extents. Free space is tracked with B+ Trees rather than the bitmaps used by many other file systems. Each Allocation Group keeps two B+ Trees: one indexes free extents by their starting block, and the other indexes them by their size (length in blocks). To write a file, the file system can quickly find a free extent large enough to hold the data and then locate its starting block to begin writing.
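
    The following toy C sketch shows why the size index is useful; plain arrays and a linear scan stand in for the real B+ Trees, and the numbers are invented. The by-size lookup answers "where is a free run at least this long?", while the by-starting-block index (not shown) lets neighbouring free extents be merged when space is released.

        #include <stdio.h>

        /* Each free region is an extent: a starting block and a length. */
        struct extent { unsigned long start, len; };

        static struct extent free_space[] = {
            { 100, 8 }, { 300, 64 }, { 1000, 16 }, { 4096, 512 },
        };
        #define NFREE (sizeof(free_space) / sizeof(free_space[0]))

        /* Best fit: the smallest free extent that can still hold the request. */
        static struct extent *find_by_size(unsigned long want)
        {
            struct extent *best = NULL;
            for (size_t i = 0; i < NFREE; i++)
                if (free_space[i].len >= want &&
                    (best == NULL || free_space[i].len < best->len))
                    best = &free_space[i];
            return best;
        }

        int main(void)
        {
            unsigned long want = 50;
            struct extent *e = find_by_size(want);

            if (e)
                printf("allocate %lu blocks starting at block %lu\n",
                       want, e->start);
            else
                printf("no single free extent is large enough\n");
            return 0;
        }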

    NOTE: Bitmaps are used by some file systems to track used and unused space. A bitmap here is not an image but a data structure in which each bit represents one addressable block; the bit is either on (1) or off (0) to indicate whether that block is used or free.
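
    For comparison, a minimal C sketch of such a block bitmap, one bit per block (this is the approach XFS replaces with B+ Trees; it is shown only to illustrate the note above):

        #include <stdio.h>

        #define NBLOCKS 4096
        static unsigned char bitmap[NBLOCKS / 8];   /* all zero = all blocks free */

        static void mark_used(unsigned b) { bitmap[b / 8] |=  (1u << (b % 8)); }
        static void mark_free(unsigned b) { bitmap[b / 8] &= ~(1u << (b % 8)); }
        static int  is_used(unsigned b)   { return bitmap[b / 8] & (1u << (b % 8)); }

        int main(void)
        {
            mark_used(42);
            printf("block 42 is %s, block 43 is %s\n",
                   is_used(42) ? "used" : "free",
                   is_used(43) ? "used" : "free");
            mark_free(42);
            return 0;
        }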

    For space efficiency, block sizes are variable and can be set from 512 bytes to 64 KB. If a system will hold many small files, a smaller block size should be used; if mostly large files are expected, use a larger block size. The block size is set when the file system is created. Wasted space caused by block size is discussed in the article on Extents.
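
    A small worked example in C of the trade-off: the same 3,000-byte file stored with different block sizes, and how many bytes of its last block are wasted in each case.

        #include <stdio.h>

        int main(void)
        {
            unsigned long file_size = 3000;                    /* a 3,000-byte file */
            unsigned long block_sizes[] = { 512, 4096, 65536 };

            for (int i = 0; i < 3; i++) {
                unsigned long bs     = block_sizes[i];
                unsigned long blocks = (file_size + bs - 1) / bs;   /* round up */
                unsigned long wasted = blocks * bs - file_size;
                printf("block size %5lu: %2lu block(s), %5lu bytes wasted\n",
                       bs, blocks, wasted);
            }
            return 0;
        }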

    To provide more storage space than is physically available, the Data Management API (DMAPI) can be used to support offline storage of infrequently used files. Files that are rarely accessed can be migrated to another storage device, freeing their space for other files. When a migrated file is requested, DMAPI moves it back to the hard drive so the application can access it.

    When disk space runs low on an XFS volume, it is possible to perform an Online Resize to increase the available space while the file system stays mounted. Extra space is made available by growing the underlying volume, for example with unused partitions on other hard disks, and the new space is then added to the existing file system.

    If multiple user accounts and groups are set up, disk usage can be tracked per user and/or group. Atomic Disk Quotas allow disk usage to be not only monitored but also limited.

    To provide better backups, Snapshots can be used to create a read-only 'image' of the file system, so it can be backed up even while the live file system is still being used and modified.
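
    XFS itself does not take the snapshot; that usually happens one layer down, for example with an LVM snapshot of the volume. The file system's contribution is that it can be quiesced (frozen) first so the image is consistent, which is what the xfs_freeze utility does. A rough C sketch of the kernel's freeze/thaw ioctls follows; the mount point /mnt/xfs is a made-up example, and the program must run as root.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/fs.h>          /* FIFREEZE, FITHAW */

        int main(void)
        {
            int fd = open("/mnt/xfs", O_RDONLY);       /* the mount point */
            if (fd < 0) { perror("open"); return 1; }

            /* Block new writes and flush everything to disk. */
            if (ioctl(fd, FIFREEZE, 0) < 0) { perror("FIFREEZE"); close(fd); return 1; }

            /* ... take the snapshot here, e.g. at the volume-manager layer ... */

            /* Allow writes again. */
            if (ioctl(fd, FITHAW, 0) < 0) { perror("FITHAW"); close(fd); return 1; }

            close(fd);
            return 0;
        }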

    XFS also provides native backup and restore capabilities. The xfsdump and xfsrestore utilities allow a user to back up and restore files while the file system is in use. This can be done without creating a snapshot and can even be performed in multiple streams to various devices. A backup or restore can be interrupted and later resumed without causing problems.

    Attached Files: slide.jpg
  2. Cristal Skull

    my favorite filesystem!
  3. Don Sturdy

    Correction: the original XFS implementation under IRIX at SGI was NOT derived from, nor heavily influenced by, the EFS of System V fame. In fact, SGI explicitly set out to build a new FS from scratch for the high-end Origin servers, with a completely new design for a journaled FS. The late Jeff Beck was the first manager and cheerleader of the effort, along with the numerous engineers who did the work on the first implementation. The XFS port to Linux was started in 1999, and it took considerable work to "prove" that there was no contaminated code; every single header file and code file was examined line by line. The effort was further complicated by the use of vnodes and the System V and BSD 4.x code and FS framework under IRIX, so the Linux FS interfaces required some rethinking and rework to adapt the stock XFS code.

    The work Dave Chinner (of Red Hat) has done on XFS over the years has greatly improved its performance in numerous areas.

    It is simply too bad that SGI has forgotten XFS, and the original developers are no longer at the company to continue the legacy.
