
I'm using btrfs for my home directory, which spans multiple devices. In total I should have around 7.3TB of space - and that's what df shows, but I ran out of space after using only 5.7TB of data:

# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd3       7.3T  5.7T   63G  99% /home

btrfs has this to say for itself:

# btrfs fi df /home
Data, RAID0: total=5.59TB, used=5.59TB
System, RAID1: total=8.00MB, used=328.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=11.50GB, used=8.22GB

Which is weird, because the partitions should add up to 7.3TB (also, the btrfs data profile should have been "single" and not RAID0).

Here is what btrfs fi show says:

# btrfs fi show
Label: none  uuid: 2dd4a2b6-c672-49b1-856b-3abdc12d56a5
    Total devices 9 FS bytes used 5.59TB
    devid    2 size 303.22GB used 303.22GB path /dev/sdb1
    devid    3 size 303.22GB used 303.22GB path /dev/sdb2
    devid    4 size 325.07GB used 324.50GB path /dev/sdb3
    devid    1 size 2.73TB used 1.11TB path /dev/sdc1
    devid    5 size 603.62GB used 589.05GB path /dev/sdd1
    devid    6 size 632.22GB used 617.65GB path /dev/sdd2
    devid    7 size 627.18GB used 612.61GB path /dev/sdd3
    devid    8 size 931.51GB used 931.51GB path /dev/sde1
    devid    9 size 931.51GB used 931.51GB path /dev/sde2

As you can see, devid 1 (which is the last disk I added) has only 1.11TB used out of the 2.73TB available in the partition (it's supposedly a 3TB drive, but only 2.7TB is usable :-[ ).

I've searched far and wide but couldn't figure out how to make btrfs use more of the partition. What am I missing?

Notes:

  1. I'm using Ubuntu 12.04.2 with the current kernel 3.2.0-23.
  2. This is after I ran btrfs fi resize max /home and btrfs fi balance /home (the per-device resize form is sketched below).
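
For reference, a sketch of the per-device resize form (as I understand it, resize acts on one device at a time, devid 1 by default, so on a multi-device filesystem each device may need to be grown explicitly; the devids are the ones from the btrfs fi show output above):

# Grow a single device to the full size of its partition (devid 1 is the new 3TB drive)
btrfs filesystem resize 1:max /home
# Repeat for the other devids reported by btrfs fi show
btrfs filesystem resize 2:max /home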
– Guss
  • 1st, for btrfs, never trust df/du output. You are supposed to use btrfs filesystem df /path.

    2nd, it is important to let others know how the btrfs filesystem was created for your /home: for example, the number of block devices, and how metadata (RAID1, which is the default) and data (RAID0, from what I can see) span across the devices.

    3rd, try to keep a minimum number of snapshots, because they silently consume your disk space (copy-on-write...).
    – Terry Wang Apr 21 '13 at 23:27
  • @TerryWang: 1st+2nd: you can see the output of btrfs fi df in the question. Also, the filesystem in question has no snapshots. – Guss Apr 22 '13 at 10:50
  • Guss, I came across this kernel patch when using ksplice uptrack to patch my VPS. I think this may be related to your issue. Install [3fyotdy2] Btrfs filesystem reports no free space when there is. – Terry Wang May 26 '13 at 23:18
  • @TerryWang - I couldn't find information about this on Google. Is it possible for you to provide a link? – Guss May 27 '13 at 19:22
  • I cannot find any further info either. On that system it was running 3.2.0-41-generic, ksplice uptrack automatically (I set it to be) applied the kernel patch to it. If you are running 3.2.0-44-generic it should have included the fix. – Terry Wang May 27 '13 at 23:01
  • Then it's probably not relevant - I was running 3.5. Anyway, this question has become moot for me - it took me about a month, but I rebuilt the pool on Ubuntu 13.04 with kernel 3.8 and it currently works fine. – Guss May 28 '13 at 06:24
  • @bain, while the scenario in #170044 looks similar, the output from btrfs fi df is completely different, so the answer in #170044 (that relies on that piece of data) is not applicable here. I was familiar with #170044 and still decided to ask this question. – Guss May 11 '14 at 07:53
  • Sorry, you are right it is a different issue. – bain May 11 '14 at 12:18

1 Answer


You're using data raid0, which means striped without parity. A raid0 stripe needs room on more than one disk, so once you fill all but one of the disks in a raid0 array, the array is full: there is no longer anywhere else to write the other pieces of a stripe.

That ~3TB device is so much larger than your other devices that making full use of it in btrfs raid0 just isn't practical. To force the system to use the whole disk, you'd end up needing to partition it and then add both partitions as separate devices. DON'T DO THAT, by the way, as it will do weird and awful things to performance - which I would assume is pretty critical to you if you're using raid0...?
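
If rebuilding from scratch is not an option, kernels 3.3 and newer (not your 3.2) can convert the data profile in place with a balance filter - a rough sketch, assuming /home is the mount point, and untested on your layout:

# Rewrite existing data chunks from raid0 to the single profile (needs kernel >= 3.3)
btrfs balance start -dconvert=single /home
# Check the result - Data should now show as 'single'
btrfs filesystem df /home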

Another note: 3.2 is a pretty ancient kernel to be running btrfs IMO. Btrfs is still in HEAVY development, and you really should be tracking much newer kernels if you're going to run btrfs.

From the btrfs wiki, "Using Btrfs with Multiple Devices - Filesystem creation": when you have drives with differing sizes and want to use the full capacity of each drive, you have to use the single profile for the data blocks, rather than raid0:

# Use full capacity of multiple drives with different sizes (metadata mirrored, data not mirrored and not striped)  
mkfs.btrfs -d single /dev/sdb /dev/sdc
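
If you later grow a filesystem created this way, the same idea applies - something like the following should let btrfs use all of a new device of any size (/dev/sdf is just an example name):

# Add another device and spread the existing data across it
btrfs device add /dev/sdf /home
btrfs filesystem balance /home
# Confirm the data profile is still 'single'
btrfs filesystem df /home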
– Jim Salter (edited by bain)
  • Actually, I'm not using raid0 - I'm using "single". I'm not sure why btrfs df says raid0. Performance is not that important, but as I understand it the options for btrfs setup are "single" (like raid0 but without striping), raid0, raid1 (lose half of your storage to duplicated data) and raid5 (doesn't actually work). So the choice of raid0 isn't that surprising. – Guss Feb 15 '14 at 12:29
  • BTW - I eventually rebuilt the file system on current Ubuntu stable, which uses 3.11 kernel - and now I no longer have this problem. – Guss Feb 15 '14 at 12:30
  • Perhaps you forgot to use mkfs.btrfs -d single to format each drive? – bain May 11 '14 at 12:09
  • I think Jim got it right. To use multiple drives with different sizes you need to format them all together with mkfs.btrfs -d single /dev/sda /dev/sdb /dev/sdc /dev/sdd.... If you do this, btrfs fi df will show single and not RAID0. The fact that the output in the question shows the replication as being RAID0 and not single indicates that this was likely the issue. – bain May 11 '14 at 12:18