btrfs

Since btrfs seems to be becoming the next-gen filesystem, I did some playing with it.

Basics

Creating a btrfs Filesystem

Nothing special about it ;-)

 mkfs.btrfs /dev/sdc /dev/sdd ...
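
It can be handy to give the filesystem a label right away, so it is easier to recognize later in "btrfs filesystem show". A minimal sketch (device name and label are only placeholders):

 mkfs.btrfs -L btrfs-data /dev/sdc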

Mounting

Still nothing special…

 mount /dev/sdc /mnt/sdc
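
Since a multi-device filesystem can be mounted via any one of its member devices, mounting by UUID (as printed by "btrfs filesystem show") is often less confusing. UUID and mountpoint here are just placeholders:

 mount UUID=<filesystem-uuid> /mnt/sdc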

Show disk usage

The "old"-way

If you start using btrfs, you will notice that the output of the “df” command is pretty misleading.

For example, this is the output for a btrfs RAID-1 filesystem where each disk has a size of 250GB.

 # df -hT /mnt/sdc
  Filesystem     Type   Size  Used Avail Use% Mounted on
  /dev/sdc       btrfs  466G  4.2G  460G   1% /mnt/sdc

It shows double the size it should, and the same goes for the used space. Things get similarly confusing once you start using subvolumes or snapshots.

And it gets even worse if we take a look at the default partitions of a modern SUSE Linux Enterprise 12:

# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        23G  1.4G   21G   7% /
/dev/sda2        23G  1.4G   21G   7% /.snapshots
/dev/sda2        23G  1.4G   21G   7% /var/tmp
/dev/sda2        23G  1.4G   21G   7% /var/opt
/dev/sda2        23G  1.4G   21G   7% /var/log
/dev/sda2        23G  1.4G   21G   7% /var/lib/pgsql
/dev/sda2        23G  1.4G   21G   7% /var/spool
/dev/sda2        23G  1.4G   21G   7% /var/lib/mailman
/dev/sda2        23G  1.4G   21G   7% /usr/local
/dev/sda2        23G  1.4G   21G   7% /var/crash
/dev/sda2        23G  1.4G   21G   7% /srv
/dev/sda2        23G  1.4G   21G   7% /boot/grub2/x86_64-efi
/dev/sda2        23G  1.4G   21G   7% /boot/grub2/i386-pc
/dev/sda2        23G  1.4G   21G   7% /tmp
/dev/sda2        23G  1.4G   21G   7% /opt
/dev/sda2        23G  1.4G   21G   7% /var/lib/named

How to do it right

So this is how to really show the size of a btrfs filesystem:

 # btrfs filesystem df /mnt/sdc
  Data, single: total=1.01GiB, used=631.50MiB
  System, DUP: total=8.00MiB, used=16.00KiB
  System, single: total=4.00MiB, used=0.00B
  Metadata, DUP: total=1.00GiB, used=112.00KiB
  Metadata, single: total=8.00MiB, used=0.00B
  GlobalReserve, single: total=16.00MiB, used=0.00B

Note: you always have to specify a mountpoint, not a device.

But we can do even better:

# btrfs filesystem usage -t /btrfs
Overall:
    Device size:                   5.00GiB
    Device allocated:              5.00GiB
    Device unallocated:            1.00MiB
    Used:                        100.47MiB
    Free (estimated):              4.59GiB      (min: 4.59GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:               16.00MiB      (used: 0.00B)

                  Data     Metadata  System
         single   single   DUP       DUP      Unallocated

/dev/sdc        -  4.69GiB 256.00MiB 64.00MiB     1.00MiB
         ======== ======== ========= ======== ===========
Total    16.00MiB  4.69GiB 128.00MiB 32.00MiB     1.00MiB
Used        0.00B 99.94MiB 256.00KiB 16.00KiB

Show all btrfs Filesystems

RAID 1:

# btrfs filesystem show
Label: 'btrfs-raid1'  uuid: 7376200a-b9d2-42c1-9147-748241f6b413
        Total devices 2 FS bytes used 2.05GiB
        devid    1 size 232.88GiB used 4.03GiB path /dev/sdc
        devid    2 size 232.88GiB used 4.01GiB path /dev/sdd

Single disks:

# btrfs fi sh
Label: none  uuid: 45b52c41-9872-43e9-9db4-1964d910d605
        Total devices 1 FS bytes used 692.33MiB
        devid    1 size 232.88GiB used 3.04GiB path /dev/sdc

Label: none  uuid: 3de19c07-cf4d-4bbe-b532-cd7bbb535306
        Total devices 1 FS bytes used 112.00KiB
        devid    1 size 232.88GiB used 2.04GiB path /dev/sdd

Software RAID 1 using btrfs

Creating:

# mkfs.btrfs -d raid1 -m raid1 -f /dev/sdc /dev/sdd

-m = metadata profile; -d = data profile
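
If a single-disk btrfs filesystem already exists, a similar RAID-1 layout can also be reached later by adding a second device and converting the data and metadata profiles via balance. A sketch, assuming /btrfs is the mountpoint and /dev/sdd the new disk:

# btrfs device add /dev/sdd /btrfs
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /btrfs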

Quotas

Enable Quotas on btrfs-Volume

# btrfs quota enable /btrfs/

Set Limit for Subvolume

# btrfs qgroup limit 100M /btrfs/sub

Show Quota of Subvolume

# btrfs qgroup show -r /btrfs/sub
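
Putting it together: the subvolume has to exist before a limit can be set on it. A minimal sequence (paths and the 100M limit are just example values) might look like this:

# btrfs subvolume create /btrfs/sub
# btrfs quota enable /btrfs
# btrfs qgroup limit 100M /btrfs/sub
# btrfs qgroup show /btrfs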

Managing

Add Device to btrfs

# btrfs device add /dev/sdc /btrfs
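
The new device starts out empty; if you want btrfs to spread the already existing data across all devices right away, you can trigger a rebalance afterwards (mountpoint is a placeholder):

# btrfs balance start /btrfs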

Remove Device

Removing isn't as easy as adding, because the data on the disks might be RAID-1:

# btrfs filesystem usage -t /btrfs
...
                  Data       Metadata  System
         single   single     RAID1     RAID1    Unallocated

/dev/sdb        -          - 256.00MiB 32.00MiB     4.72GiB
/dev/sdc        - 1008.00MiB 256.00MiB 32.00MiB     3.73GiB
         ======== ========== ========= ======== ===========
Total    16.00MiB 1008.00MiB 256.00MiB 32.00MiB     8.45GiB
Used        0.00B  320.00KiB 112.00KiB 16.00KiB
...

Convert RAID1 to "single"

# btrfs fi balance start -mconvert=single --force /btrfs
# btrfs fi balance start -sconvert=single --force /btrfs

# and on Data-RAID1:
# btrfs fi balance start -dconvert=single --force /btrfs

Now it should look (a little bit) better:

# btrfs fi usage -t /btrfs
...
                  Data       Metadata  System
         single   single     single    single   Unallocated

/dev/sdb        -          - 256.00MiB 32.00MiB     4.72GiB
/dev/sdc        - 1008.00MiB         -        -     4.02GiB
...

After that we are able to remove the device.

Remove device from btrfs

# btrfs device delete /dev/sdc /btrfs

After btrfs has done all the rebalancing for us, it should look something like this:

# btrfs fi usage -t /btrfs
...
                  Data      Metadata  System
         single   single    single    single   Unallocated

/dev/sdb        - 496.00MiB 256.00MiB 32.00MiB     4.23GiB
         ======== ========= ========= ======== ===========
Total    16.00MiB 496.00MiB 256.00MiB 32.00MiB     4.23GiB
Used        0.00B 128.00KiB 112.00KiB 16.00KiB
...