ZFS took 92 hours to process one billion files in a 2024 George Mason University benchmark — and Btrfs couldn’t finish the read test at all. Those two results capture the gap between what file systems promise and what they deliver under real load. This article covers performance data, structural limits, and RAM requirements for ext4, XFS, Btrfs, and ZFS drawn from peer-reviewed research and independent benchmarks published in 2024.
File System Performance Statistics: Key Numbers
- ZFS required roughly 330,000 seconds (about 92 hours) to complete a billion-file write-and-read cycle in 2024.
- Btrfs ranked last or second-to-last in all four benchmark workloads tested on Linux 6.11 in August 2024.
- A 16 TB ZFS pool with deduplication enabled needs approximately 98 GB of RAM reserved for ZFS alone.
- XFS was the only file system in the arXiv billion-file study that required zero reconfiguration before the test could run.
- Proxmox VE 8.1 capped ZFS’s default ARC memory claim at 10% of physical RAM, down from up to 50%, to prevent VM memory starvation.
File System Performance Comparison: Technical Limits
Maximum file size, volume capacity, and inode handling set hard ceilings before any benchmark runs. ext4 tops out at 16 TiB per file; XFS supports files up to roughly 8 EiB, and Btrfs and ZFS reach 16 EiB. The practical ceiling for ext4 volumes sits around 50 TB despite the 1 EiB theoretical maximum.
One structural difference with real operational consequences: ext4 pre-allocates inodes at filesystem creation. Running out of inodes on a large volume means a full reformat. XFS, Btrfs, and ZFS all allocate inodes dynamically, which removes that failure mode.
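Because the ext4 inode table is fixed at mkfs time, inode headroom is worth monitoring there in a way it isn't on the other three. A minimal check of the kind `df -i` performs, sketched in Python via `os.statvfs` (a sketch for illustration, not tied to any particular tool):

```python
import os

def inode_headroom(path="/"):
    """Report total inodes, free inodes, and used percentage for the
    filesystem containing `path`. On ext4 the total is fixed at mkfs
    time; on XFS, Btrfs, and ZFS inodes are allocated dynamically
    (some such filesystems report f_files as 0, handled below)."""
    st = os.statvfs(path)
    total, free = st.f_files, st.f_ffree
    used_pct = 100 * (total - free) / total if total else 0.0
    return total, free, used_pct

total, free, used_pct = inode_headroom("/")
print(f"inodes: {total:,} total, {free:,} free ({used_pct:.1f}% used)")
```

On an ext4 volume approaching 100% inode usage, this check fires long before `df -h` shows any space pressure, which is exactly the failure mode the dynamic allocators avoid.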
| Attribute | ext4 | XFS | Btrfs | ZFS |
|---|---|---|---|---|
| Max File Size | 16 TiB | ~8 EiB | 16 EiB | 16 EiB |
| Max Volume Size | ~50 TB practical | ~8 EiB | 16 EiB | 256 ZiB |
| Max File Count | ~4.29 billion | Dynamic | 2⁶⁴ | Dynamic |
| Inode Allocation | Static (at mkfs) | Dynamic | Dynamic | Dynamic |
| Native Snapshots | No | No | Yes | Yes |
| Data Checksums | No | No | Yes | Yes |
| Built-in Compression | No | No | Yes | Yes |
| Filesystem Shrink | Yes | No | Yes | No |
| Linux Kernel Native | Yes | Yes | Yes | No (CDDL, out-of-tree) |
Source: Red Hat Enterprise Linux 9 Documentation; Linux Bash; LinuxHint
XFS cannot shrink after creation — only grow. ZFS pools share the same limitation. In storage environments where capacity gets reallocated regularly, that constraint matters. Btrfs and ext4 both support bidirectional resizing.
File System Benchmark Results: I/O Performance on Linux 6.11
Phoronix ran file system benchmarks in August 2024 on an AMD EPYC 8534P server with a Solidigm D7-PS1010 7.6 TB PCIe 5.0 NVMe SSD using Linux 6.11-rc2. All file systems ran at default mount options with no tuning. ZFS was not included in this round; results cover ext4, XFS, Btrfs, F2FS, and Bcachefs.
Btrfs ranked last or second-to-last across all four workloads. The gap was widest for database writes: XFS and ext4 were described as “easily the fastest” in the SQLite concurrent write test, while Btrfs was “by far the slowest.” The copy-on-write mechanism that gives Btrfs its data integrity properties is the direct cause of that write penalty.
| Workload Type | Fastest | Mid-Range | Slowest |
|---|---|---|---|
| SQLite writes (4 concurrent DBs) | XFS / ext4 | Bcachefs | Btrfs |
| FIO 4K random reads | XFS / ext4 / F2FS | Bcachefs | Btrfs |
| FIO random writes (32 jobs) | F2FS | ext4 / XFS | Btrfs (2nd slowest) |
| Sequential writes | F2FS / XFS / ext4 | Btrfs (moderate gap) | Bcachefs |
Source: Phoronix — Linux 6.11 File System Benchmarks, August 9, 2024
Red Hat’s documentation notes one scenario where ext4 holds an advantage over XFS: single-threaded, metadata-intensive workloads, where XFS shows “relatively low performance.” Outside that narrow case, the two are interchangeable at the top of the performance rankings.
File System Performance at Billion-File Scale
A 2024 paper from George Mason University’s Department of Computer Science created and read back one billion files, each between 1 KB and 10 KB in size, using a purpose-built C application on a 14 TB Seagate IronWolf HDD. The hardware was an HP Z820 with two Xeon E5-2670 processors and 256 GB RAM.
ZFS completed the test but took roughly 92 hours. Btrfs wrote the files, but read performance was too slow to capture metrics — the test never finished. ext4 required a full reformat with expanded inode tables before it could even start. XFS ran without any reconfiguration.
| Metric | ext4 | XFS | Btrfs | ZFS |
|---|---|---|---|---|
| 1B file writes completed? | Yes (reformat needed) | Yes | Yes | Yes |
| 1B file reads completed? | Yes | Yes | No — too slow | Yes |
| Approx. time (write + read) | Within normal range | Within normal range | Did not finish (reads) | ~92 hours |
| Reconfiguration required? | Yes (inode tables) | No | No | No |
| CPU at 10M files | ~20% | ~20% | ~20% | ~20% |
| CPU at 100M–1B files | ~10% | ~10% | ~10% | ~10% |
Source: Shaikh, S. — “Billion-files File Systems (BfFS): A Comparison”, arXiv, August 2024
CPU overhead dropped from around 20% at 10 million files to roughly 10% at 100 million and above, consistent across all four file systems. The overhead is front-loaded in structure setup, not proportional to file count.
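The study's workload was generated by a purpose-built C application. A scaled-down Python sketch of the same shape (write N files of random size between 1 KB and 10 KB, then read them all back) makes the access pattern concrete; this is an illustration of the workload, not the authors' tool, and the real study also spread files across directories:

```python
import os, random, tempfile, time

def small_file_cycle(root, n_files, lo=1024, hi=10 * 1024):
    """Write n_files of random size in [lo, hi) bytes under root,
    then read them all back; returns (write_secs, read_secs)."""
    rng = random.Random(42)  # fixed seed for repeatable sizes
    t0 = time.monotonic()
    for i in range(n_files):
        with open(os.path.join(root, f"f{i:09d}"), "wb") as fh:
            fh.write(os.urandom(rng.randrange(lo, hi)))
    t1 = time.monotonic()
    for i in range(n_files):
        with open(os.path.join(root, f"f{i:09d}"), "rb") as fh:
            fh.read()
    t2 = time.monotonic()
    return t1 - t0, t2 - t1

with tempfile.TemporaryDirectory() as d:
    w, r = small_file_cycle(d, 1000)
    print(f"write: {w:.2f}s  read: {r:.2f}s")
```

At a thousand files any filesystem finishes in moments; the billion-file results above show how differently the four scale when the same loop runs six orders of magnitude longer.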
ZFS ARC Memory Requirements by Pool Size
ZFS manages its own read cache — the Adaptive Replacement Cache — in RAM. ext4, XFS, and Btrfs rely on the Linux page cache and require no dedicated memory reservation. The Proxmox VE documentation specifies ZFS minimum RAM as 2 GiB base plus 1 GiB per TiB of raw storage.
Deduplication multiplies that cost dramatically. A 16 TB pool needs about 18 GB of RAM for ARC without dedup. Enable deduplication, and that number jumps to roughly 98 GB reserved for ZFS alone — making dedup impractical on most commodity servers at that scale.
| Pool Size | Min. ARC RAM (Standard) | With ZFS Dedup Enabled |
|---|---|---|
| 1 TB | ~3 GiB | ~8 GiB |
| 4 TB | ~6 GiB | ~26 GiB |
| 8 TB | ~10 GiB | ~50 GiB |
| 16 TB | ~18 GiB | ~98 GiB |
| 32 TB | ~34 GiB | ~194 GiB |
Source: Proxmox VE — ZFS on Linux; MrPlanB — ZFS Configuration Guide, January 2024
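The standard column follows the documented rule of 2 GiB base plus 1 GiB per TiB. The dedup column is consistent with an extra 5 GiB per TiB on top of that; note the 5 GiB/TiB factor is an inference from the table above, not an official formula:

```python
def arc_min_gib(pool_tib, dedup=False):
    """Estimate minimum ARC RAM in GiB for a ZFS pool:
    2 GiB base + 1 GiB per TiB of raw storage, plus roughly
    5 GiB per TiB when deduplication is enabled (rule of thumb
    inferred from published sizing tables, not a guarantee)."""
    gib = 2 + pool_tib
    if dedup:
        gib += 5 * pool_tib
    return gib

for size in (1, 4, 8, 16, 32):
    print(f"{size:>2} TB pool: {arc_min_gib(size)} GiB standard, "
          f"{arc_min_gib(size, dedup=True)} GiB with dedup")
```

Running the loop reproduces the table row for row, which is why a 16 TB dedup pool lands near 98 GiB.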
By default, ZFS claims up to 50% of available system RAM for ARC. Proxmox VE 8.1 changed new installs to cap this at 10% of physical memory — a change made specifically to stop ZFS from consuming memory that hosted VMs and containers need.
Btrfs vs ZFS in HPC and Large-File Workloads
A June 2024 peer-reviewed paper in Computers (MDPI), by researchers at Algebra University Zagreb and the University of Zagreb, tested Btrfs and ZFS across sequential, random, small-file, and large-file HPC scenarios. The researchers explicitly excluded ext4 from HPC testing, calling journal-only file systems “impractical, at times irresponsible” for large-scale virtualised environments.
For large sequential workloads, Btrfs and ZFS performed comparably. Btrfs showed an advantage with large numbers of small files. ZFS handled large block I/O more reliably. One gap the researchers flagged with no ambiguity: Btrfs RAID 5/6 remains experimental and is not recommended for production, while ZFS RAIDZ configurations are considered stable.
| Workload Scenario | Btrfs | ZFS |
|---|---|---|
| Sequential large-file reads/writes | Comparable to ZFS | Comparable to Btrfs |
| Small file management (high file count) | Can outperform ZFS | Slightly slower |
| HPC large block I/O | Competitive | Generally stronger |
| RAID 5/6 stability | Experimental — avoid in production | Stable (RAIDZ1/RAIDZ2) |
| Snapshot maturity | Available, actively developed | More mature |
| Kubernetes/HPC suitability | Tested and documented | Tested and documented |
Source: Dakic, V., Kovac, M., Videc, I. — “HPC Storage Performance and Design Patterns”, Computers, MDPI, June 2024
Which File System Fits Which Workload?
The 2024 benchmark data points to clear use-case splits. ext4 remains the default for general-purpose Linux deployments, with the fastest fsck repair times — up to 6x faster than ext2/ext3 per Red Hat documentation. XFS is the default on RHEL 9 and Rocky Linux for a reason: it leads in SQLite writes and random reads, and it handles a billion files without any setup changes.
Btrfs earns its place for snapshot-heavy workloads — home servers, backup targets, and environments where CoW integrity matters more than raw write speed. ZFS is the right choice for NAS and archival use cases where checksum-based self-healing justifies the RAM cost. Its storage pool commands give administrators granular control that journaling file systems can’t match.
| Use Case | Recommended FS | Key Evidence |
|---|---|---|
| General-purpose Linux | ext4 | Fastest fsck; default on Ubuntu/Debian |
| Large files, streaming I/O | XFS | Default on RHEL 9; top in Phoronix 2024 SQLite and random-read tests |
| Snapshots, home server | Btrfs | Built-in CoW snapshots; avoid RAID 5/6 |
| NAS, archival, data integrity | ZFS | Checksums + self-healing; budget 2 GiB + 1 GiB/TB RAM |
| Billion-file object storage | XFS | Only FS requiring zero reconfiguration at 1B files (arXiv 2024) |
| HPC / Kubernetes storage | Btrfs or ZFS | Both tested; ext4/XFS excluded by HPC practitioners (MDPI 2024) |
Source: Red Hat Enterprise Linux 9 Documentation; Phoronix, August 2024; arXiv, August 2024; MDPI, June 2024
No single file system leads across every category. ext4 and XFS consistently win on raw throughput under default settings. Btrfs pays a measurable write penalty for its integrity features. ZFS offers the most complete data-protection stack outside dedicated enterprise arrays, but that comes with RAM overhead that has to be planned for — not discovered in production. For administrators working directly with ext4 or XFS tooling, the performance differences above translate directly into configuration decisions.
FAQs
Which Linux file system is fastest for database workloads?
XFS and ext4 are the fastest for database-style writes. In Phoronix’s August 2024 Linux 6.11 benchmarks, both ranked as the top performers for SQLite concurrent writes, while Btrfs was the slowest by a clear margin.
How much RAM does ZFS actually need?
At minimum, 2 GiB base plus 1 GiB per TiB of raw storage. A 4 TB pool needs roughly 6 GiB. Enable deduplication and that jumps to around 26 GiB for the same pool size, per Proxmox VE documentation.
Is Btrfs RAID 5/6 safe for production use?
No. Btrfs RAID 5/6 remains experimental as of 2024 and is actively discouraged for production environments. ZFS RAIDZ1 and RAIDZ2 are the stable alternatives for redundant storage setups requiring production reliability.
Can XFS be shrunk after creation?
No. XFS volumes can only grow, never shrink. ext4 and Btrfs both support bidirectional resizing. ZFS pools also cannot shrink. This limitation matters in environments where storage capacity is frequently reallocated.
Which file system handles the most files efficiently?
XFS. In a 2024 arXiv study testing one billion files, XFS was the only file system that required zero reconfiguration. ext4 needed a full reformat for expanded inodes, Btrfs couldn’t complete the read test, and ZFS took roughly 92 hours.