How to create a ZFS filesystem on two striped disk drives

I ran out of space on my SAN drive on the Solaris 10 SPARC server. Since I had two spare disk drives sitting unused in the server, I decided to create a single striped pool out of the two drives. The drives still contained the factory-installed Solaris 10 image.

Here are the step-by-step instructions for creating a ZFS pool on the two hard disks and then creating a single ZFS filesystem on it.

I first tried to create the ZFS pool but received an error message stating that the drive contains UFS filesystems. These were left over from the factory-installed Solaris image.

bash-3.00# zpool create backup c0t0d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c0t0d0s1 contains a ufs filesystem.
/dev/dsk/c0t0d0s2 contains a ufs filesystem.
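
Before forcing the pool creation, you can double-check what is actually on a slice. On Solaris the fstyp command, run against the raw device (shown here for one slice as an illustrative check), reports the existing filesystem type:

bash-3.00# fstyp /dev/rdsk/c0t0d0s1
ufs

Since these drives only held the unused factory image, it was safe to overwrite them.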

Use the -f option with the zpool create command to force creation of the ZFS pool over the existing UFS slices.

bash-3.00# zpool create -f backup c0t0d0

Now confirm that the ZFS pool was created successfully.

bash-3.00# zpool list
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
backup   136G  76.5K   136G   0%  ONLINE  -
rpool    348G   221G   127G  63%  ONLINE  -

Add the second disk drive to the ZFS pool backup.

bash-3.00# zpool add backup c0t1d0

Confirm that the second disk was added; the size of backup should have doubled.

bash-3.00# zpool list
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
backup   272G  79.5K   272G   0%  ONLINE  -
rpool    348G   221G   127G  63%  ONLINE  -
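
If you want to see exactly how the pool is laid out, zpool status lists every top-level vdev. For a simple stripe like this one, both disks should appear directly under the pool name with no mirror or raidz grouping; the output will look roughly like this:

bash-3.00# zpool status backup
  pool: backup
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        backup      ONLINE       0     0     0
          c0t0d0    ONLINE       0     0     0
          c0t1d0    ONLINE       0     0     0
errors: No known data errors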

Create a ZFS filesystem called archive in the backup pool with the zfs create command.

bash-3.00# zfs create backup/archive

Confirm that the new ZFS filesystem archive was created.

bash-3.00# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
backup                      106K   268G    21K  /backup
backup/archive               21K   268G    21K  /backup/archive
rpool                       223G   120G    97K  /rpool
rpool/ROOT                 4.39G   120G    21K  legacy
rpool/ROOT/s10s_u8wos_08a  4.39G   120G  4.39G  /
rpool/dump                 1.00G   120G  1.00G  -
rpool/export                215G   120G   215G  /export
rpool/export/home            21K   120G    21K  /export/home
rpool/swap                     2G   122G    16K  -

A ZFS filesystem is automatically mounted after you create it, so you can start using the new filesystem archive right away. If you do not want archive to use all of the available disk space in the pool backup, you can define a quota. If you were to create a second filesystem in the backup pool, the available space would be shared dynamically between the two filesystems unless a quota is defined.
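
For example, a quota can be set and verified with the zfs set and zfs get commands; the 50G value below is just an illustrative figure, so pick whatever limit suits your pool:

bash-3.00# zfs set quota=50G backup/archive
bash-3.00# zfs get quota backup/archive
NAME            PROPERTY  VALUE  SOURCE
backup/archive  quota     50G    local

The quota can be changed at any time, or removed entirely with zfs set quota=none backup/archive.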

About Andrew Lin

Hi,
I have always wanted to create a blog site but never had the time. I have been working in Information Technology for over 15 years. I specialize mainly in network and server technologies and dabble a little in programming.

Andrew Lin


2 Comments on “How to create a ZFS filesystem on two striped disk drives”

  1. Hi Andrew,

    Indeed, it’s deceptively simple… But one thing is missing: you’ve apparently chosen to run this pool without any protection. It would have been better to create a mirror:

    zpool create -f backup mirror c0t0d0 c0t1d0

    ZFS is very particular about errors: if a read from a disk does not match the checksum in its parent block, it won’t give you the data unless it can find an alternative source.

    It’s pretty scary that you’ve named this pool ‘backup’ as well.

    It’s very easy to add space to a zpool, but there’s one very important issue: if you want protection for your data, you have to decide the type (mirror, raidz, or raidzN with N=2|3) before adding the storage.

    Once you add the new storage, you have to add the whole protection unit (‘vdev’, or virtual device in ZFS terms) in one go.

    Example:
    ‘zpool add poolname raidz d0 d1 d2 d3’
    or
    ‘zpool add poolname raidz2 d0 d1 d2 d3’

    These commands expand the pool by four disks, with either one or two disks’ worth of parity.

    Cheers,
    Henk

    1. Henk,

      Thank you for the in-depth explanation; I appreciate it. In my situation I merely wanted a temporary filesystem to store some backups, but I do find your comment very helpful.

      Please feel free to comment on my other articles. I am sure my readers will appreciate knowledgeable input as well.

      Regards,
      Andrew
