The FreeBSD File Systems

One of the biggest differences between Linux and FreeBSD is the choice of preferred file systems. While the most widely used Linux file systems are ext2/3/4 and, more recently, XFS and Btrfs, FreeBSD maintains a much shorter list of available file systems, although it supports at least read-only access to most Linux file systems through additional kernel modules.

The two major file systems in FreeBSD are UFS and ZFS.


UFS is the abbreviation for Unix File System and is one of the oldest file systems still in active use. Developed in the early 80s by Bill Joy, who also significantly influenced BSD itself, it superseded the original UNIX file system used up through Version 6.

Today UFS is still the default file system in FreeBSD, but no longer in its original version. Instead, FreeBSD 5 introduced an enhanced and more up-to-date version called UFS2, featuring support for larger partitions, snapshots, additional file attributes and a POSIX-conformant implementation of access control lists.
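Creating a UFS2 file system on a spare disk takes only a few commands. This is a minimal sketch; the device name `/dev/ada1` and the mount point are hypothetical and need adapting to your system:

```shell
# Assumption: an empty spare disk at /dev/ada1 (adjust to your hardware)
gpart create -s gpt ada1          # create a GPT partition table
gpart add -t freebsd-ufs ada1     # add a partition of type freebsd-ufs
newfs -U /dev/ada1p1              # newfs creates UFS2 by default; -U enables soft updates
mount /dev/ada1p1 /mnt            # mount the new file system
```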

Partly due to the lack of journaling support, which is only available through an additional GEOM layer (which we'll discuss later), it seems like FreeBSD is slowly migrating towards ZFS as its primary file system.


With the porting of the ZFS file system, which was initially implemented by Sun, the BSD family obtained a unique feature compared to the mass of Linux distributions on the market. This is due to ZFS' functionality and stability as well as its inherent security mechanisms.

Still, ZFS is not really a "file system" in the classical sense, because it does more than handle the file-system level. Besides that, ZFS implements a volume management system, much like Linux' LVM (Logical Volume Manager). ZFS therefore offers native support for RAID, encryption, disk consolidation, checksumming and much more. And while ZFS has come under fire for violating the strict distinction between file system and volume management, merging both into a single entity comes with lots of advantages.

ZFS first appeared in OpenSolaris, but has since been ported to FreeBSD and became an integral part of the operating system in version 7 (early 2008). Since then, it has constantly been improved and made even more stable.

The initial project had three main design goals:

  1. Data integrity: ZFS checksums every block written to disk. This allows the file system to reliably detect whether a block has become corrupt. If the disk becomes faulty, ZFS will automatically detect and report the error. When using a mirrored setup (RAID 1), the file system can simply read the block from the mirror device and continue operation seamlessly.

  2. Pooled storage: As with LVM, ZFS consolidates the disk space from all devices and makes it available as a storage pool. This allows a file system to grow beyond the boundary of a single physical device. So you might create a pool out of a 1 TB and a 3 TB drive and create two file systems, each 2 TB in size. Additionally, you can subdivide existing file systems into sub-volumes.

  3. Performance: Using a combination of different caching mechanisms like ARC, L2ARC and ZIL, ZFS can achieve relatively high read and write throughput.
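The pooled-storage example from point 2 can be sketched in a few commands. The pool name `tank`, the dataset names and the disk devices `da0`/`da1` are all hypothetical:

```shell
# Assumption: two empty disks, da0 (1 TB) and da1 (3 TB)
zpool create tank da0 da1             # consolidate both disks into a single ~4 TB pool
zfs create -o quota=2T tank/projects  # a 2 TB file system spanning both physical disks
zfs create -o quota=2T tank/backups   # a second 2 TB file system from the same pool
zpool scrub tank                      # verify all checksums in the pool (point 1)
zpool status tank                     # report any detected corruption
```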

But there are many more advantages, e.g. Copy-on-Write, a technique that always copies an existing block before changing it. So when your system crashes, there's still a consistent, unchanged version of the block to roll back to. This also makes it possible to create snapshots that freeze the disk state at a specific point in time, so you can return to it later.
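Snapshots built on Copy-on-Write are nearly instant, because only changed blocks consume additional space. A minimal sketch, assuming a hypothetical dataset `tank/projects`:

```shell
# Assumption: an existing dataset named tank/projects
zfs snapshot tank/projects@before-upgrade   # freeze the current state (instant, initially zero-cost)
# ... modify or delete files ...
zfs rollback tank/projects@before-upgrade   # return the dataset to the frozen state
zfs list -t snapshot                        # list existing snapshots and their space usage
```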

The downside is that you will need a fair amount of RAM: a common rule of thumb is roughly half a gigabyte of memory for every 500 gigabytes of storage.


Even though GEOM itself is not a file system, it fits well here. It's a modular framework for block devices that was introduced in FreeBSD 5 and provides a standardized way to access the storage layer. It also offers additional function blocks called "GEOM modules" to enhance a disk with features not provided by the file system itself. So you might use geom_mirror to create a RAID 1 on UFS drives. You could also combine multiple modules, e.g. create a geom_mirror and, on top of that, a geom_eli which handles disk encryption. Or you might add geom_journal, a module that enables journaling on UFS.
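Stacking GEOM modules looks roughly like this. The disk names `ada1`/`ada2` and the mirror label `gm0` are hypothetical, and the corresponding kernel modules (geom_mirror, geom_eli) must be loaded:

```shell
# Assumption: two empty disks ada1 and ada2, kernel modules loaded via kldload
gmirror label -v gm0 /dev/ada1 /dev/ada2   # geom_mirror: build a RAID 1 from both disks
geli init /dev/mirror/gm0                  # geom_eli: initialize encryption on the mirror
geli attach /dev/mirror/gm0                # attach it, creating /dev/mirror/gm0.eli
newfs -U /dev/mirror/gm0.eli               # put a UFS2 file system on the encrypted mirror
```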

GEOM is rarely used in combination with ZFS, since ZFS already covers most of its features. Usually one would use either ZFS on its own, or UFS together with GEOM.