Terabytes on a diet

Authors

Peter Chubb

    School of Computer Science and Engineering
    UNSW,
    Sydney 2052, Australia

Abstract

You can buy a multi-terabyte RAID array off the shelf nowadays. But it's not much use if you can't plug it into your trusty Linux box...

Although the block layer is in flux, there's still a lot of careless coding that means:

  • Even 64-bit platforms are limited to 1 TB or 2 TB filesystems, because a 32-bit type holds the sector number and the sector size is hard-coded to 512 bytes: 2^32 sectors × 512 bytes is 2 TB, or 1 TB if the type is signed. (The sketch after this list spells out the arithmetic.)
  • Even where the partitioning scheme allows partitioning of larger discs (e.g., EFI), other limitations prevent them from being used to their full capacity.
  • Even though the page-cache limit is 16 TB with 4 KB pages (2^32 page indices × 4 KB per page), and indeed if you can create a file this big you can read and write it, you can't have a filesystem that big.
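As a back-of-the-envelope check on that first limit, here is the arithmetic as a standalone C sketch. It is illustrative only, not the kernel's actual sector_t definitions:

  #include <stdio.h>
  #include <stdint.h>

  #define SECTOR_SIZE 512ULL  /* sector size hard-coded to 512 bytes */

  int main(void)
  {
      /* An unsigned 32-bit sector number can index 2^32 sectors... */
      uint64_t cap_unsigned = (1ULL << 32) * SECTOR_SIZE;
      /* ...but only 2^31 if the type is signed. */
      uint64_t cap_signed = (1ULL << 31) * SECTOR_SIZE;

      printf("unsigned 32-bit sector numbers: %llu bytes (%llu TB)\n",
             (unsigned long long)cap_unsigned,
             (unsigned long long)(cap_unsigned >> 40));
      printf("signed 32-bit sector numbers:   %llu bytes (%llu TB)\n",
             (unsigned long long)cap_signed,
             (unsigned long long)(cap_signed >> 40));
      return 0;
  }

Widening the sector number to a 64-bit type is what lifts the cap; the hard part is auditing every place the 32-bit assumption has leaked into.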

So...

I set out to remove these limitations on both 64- and 32-bit platforms.

But how do you test support for huge (>2 TB) filesystems under Linux when the biggest disc you have is 100 GB? Simple: write a simulator, and use a sparse file for the disc contents (a sketch of the trick follows). But... it's not that simple, as I'll explain in my talk.
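Here's a minimal sketch of the sparse-file trick (the file name and size are illustrative, not taken from the paper): ftruncate() extends a file's apparent size without allocating any data blocks, so a multi-terabyte "disc" image fits on a much smaller real disc, and reads of never-written regions simply return zeroes.

  /* Create a 4 TB sparse file to back a simulated disc. */
  #define _FILE_OFFSET_BITS 64  /* 64-bit off_t even on 32-bit platforms */
  #include <stdio.h>
  #include <stdlib.h>
  #include <fcntl.h>
  #include <unistd.h>

  int main(void)
  {
      const off_t size = (off_t)4 << 40;  /* 4 TB apparent size */
      int fd = open("fake-disc.img", O_RDWR | O_CREAT, 0644);

      if (fd < 0) {
          perror("open");
          return EXIT_FAILURE;
      }
      /* ftruncate() sets the length; no data blocks are allocated. */
      if (ftruncate(fd, size) < 0) {
          perror("ftruncate");
          close(fd);
          return EXIT_FAILURE;
      }
      close(fd);
      return 0;
  }

Only the blocks the simulated disc actually writes (superblocks, inode tables, test data) consume real space, which is how a 100 GB disc can host tests of a filesystem dozens of times its size.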

BibTeX Entry

  @inproceedings{Chubb_02b,
    author           = {Peter Chubb},
    title            = {Terabytes on a Diet},
    month            = sep,
    year             = {2002},
    booktitle        = {AUUG Winter Conference},
    address          = {Melbourne, Australia}
  }
