I have 3 120GB IDE drives that I've been using in a server as a software RAID array. Before, I had multiple arrays set up on the disks (for /, /home, /usr, etc.). This morning I wiped the drives and set them back up as a single large array, formatted as ext2. When I mounted the new filesystem and checked the available space on the array, I got this:

root at yura:/mnt/hd# df -h
Filesystem            Size  Used Avail Use% Mounted on
--snip--
/dev/md/0             221G   20K  209G   1% /home

Now, it's been a while since I've had to count high enough to require taking off my socks and counting toes, but I don't think 20K used + 209G available = 221G. I looked around some and found that ext2 supports a 16TB filesystem (with 4K blocks), and googling doesn't seem to turn up any limits imposed by the software RAID. The partition tables for the three drives look like so:

Disk /dev/hdc: 122.9 GB, 122942324736 bytes
255 heads, 63 sectors/track, 14946 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdc1               1       14593   117218241   fd  Linux raid autodetect
/dev/hdc2           14594       14946     2835472+  82  Linux swap

Disk /dev/hdg: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdg1               1       14593   117218241   fd  Linux raid autodetect

Disk /dev/hdh: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdh1               1       14593   117218241   fd  Linux raid autodetect

Disk /dev/md0: 240.0 GB, 240062038016 bytes
2 heads, 4 sectors/track, 58608896 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

---------------------------------------------

and running tune2fs -l /dev/md0 shows the following (after adding a journal with -j):

root at yura:/mnt/hd# tune2fs -l /dev/md0
tune2fs 1.35 (28-Feb-2004)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:
802e74b4-12a3-4d87-9cf6-f0c071f9743b
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal filetype needs_recovery sparse_super
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              29310976
Block count:              58608896
Reserved block count:     2930444
Free blocks:              53683617
Free inodes:              29290515
First block:              0
Block size:               4096
Fragment size:            4096
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16384
Inode blocks per group:   512
Filesystem created:       Thu Sep  8 11:50:05 2005
Last mount time:          Thu Sep  8 13:58:48 2005
Last write time:          Thu Sep  8 13:58:48 2005
Mount count:              2
Maximum mount count:      34
Last checked:             Thu Sep  8 11:50:05 2005
Check interval:           15552000 (6 months)
Next check after:         Tue Mar  7 10:50:05 2006
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            12
Default directory hash:   tea
Directory Hash Seed:      de8a895c-0898-4dad-9a1f-85a2f1a5cc7f

--------------------------------------------

If anyone has any ideas, I'd love to hear them.

Tom Johnson
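P.S. Working the arithmetic on the tune2fs figures above (a quick Python sketch; my guess is the "missing" ~12G is the default 5% root-reserved block pool):

```python
# Numbers copied from the tune2fs -l output above.
block_size = 4096          # "Block size"
block_count = 58608896     # "Block count"
reserved_blocks = 2930444  # "Reserved block count"

GiB = 1024 ** 3
fs_bytes = block_count * block_size
reserved_bytes = reserved_blocks * block_size

print(f"filesystem size:   {fs_bytes / GiB:.1f} GiB")       # 223.6 GiB
print(f"reserved for root: {reserved_bytes / GiB:.1f} GiB")  # 11.2 GiB
print(f"reserved fraction: {reserved_blocks / block_count:.1%}")  # 5.0%
```

That ~11 GiB reserve would account for most of the gap between df's 221G Size and 209G Avail, since df excludes reserved blocks from Avail. If that's the culprit, the percentage can be lowered with tune2fs -m.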