FreeNAS/ZFS and FreeNAS Expansion

Disclaimer: Use this info at your own risk, don’t come to me if all your data disappears one day. But it works for me.

Background

Want to use 4 different sized drives together as one large storage tank with 1 drive fault tolerance?

Do you also want to be able to upgrade the smallest-capacity drive to a larger one without having to move all of your data to a temporary location, rebuild a new array, and then copy all your data back?

I’ve figured out how you can do it.

I’ve figured out how to make FreeNAS, using ZFS (nightly build), run an array of 4 drives (you could theoretically start with one drive, but I’m not going to talk about that here) and allow the array to grow by replacing the smallest drive in the array with a larger one, similar to how a Drobo works. I’m going to describe the process and then walk through a real-world example.

Summary

This is the basic idea… I will be creating five raid5 sets inside one ZFS pool. I will be creating four partitions on each of the four drives and using these partitions as the storage containers for the raid arrays. This way I can control the size of each partition and get the best use out of the drive space within a raid5 array. ZFS allows you to expand the size of a raid array, and the method I’m describing lets you take advantage of that cool feature.

What I will be doing is slicing up the four hard drives into as many equal sized parts as I can. From there I’ll build five Raid5 arrays (one for each line of partitions in the phase 1 illustration) and create a zpool called tank0 containing all five raid5 arrays.

Details

As you can see below I will start with four drives:

Drive 0 – 80 Gig

Drive 1 – 120 Gig

Drive 2 – 200 Gig

Drive 3 – 200 Gig

The zpool will be five raid5 arrays and will give me about 329 Gigs of protected storage (54% use of the total space of the drives). Then I’ll take the 80 Gig drive out of the picture and put in a 250 Gig and I’ll end up with the following drives:

Drive 0 – 120 Gig

Drive 1 – 200 Gig

Drive 2 – 200 Gig

Drive 3 – 250 Gig

My zpool will increase in size to about 531 Gigs of protected storage (68% use of the total space of the drives). This will be done without any backing up, just swapping of drives. It is a four-step process, and each drive must be installed, wiped, repartitioned, and resilvered (replaced) back into the zpool. I actually moved all the drives around one at a time, but I’m not sure I had to bother. It might be possible, and easier, to just swap out the smallest drive for the largest one and repartition all the drives in place one at a time… I’ll test that in the future. For now, I’ve completed it the long way and it works!

Configurations

See how the configurations change below from phase 1 to phase 2 to get an idea of what I’m about to do.

Phase 1

Phase 1 (partition and array sizes in Megabytes)

                 Drive 0   Drive 1   Drive 2   Drive 3   Partition Size   Array Total
Device           ad0       ad1       ad2       ad3
Drive Size       80 Gig    120 Gig   200 Gig   200 Gig
Actual usable    81918     123482    203884    203884
Raid5a           ad0s1     ad1s1     ad2s1     ad3s1     81726 x 3        245178
Raid5b           ad0s2     ad1s2     ad2s2     -         64 x 2           128
Raid5c           -         ad1s3     ad2s3     ad3s2     41628 x 2        83256
Raid5d           ad0s3     ad1s4     -         ad3s3     64 x 2           128
Raid5e           ad0s4     -         ad2s4     ad3s4     64 x 2           128
Unused Space     0         0         80402     80402

Total Space (sum of all drives):  613168 Megabytes = 613 Gigabytes
Total available within Zpool:     328818 Megabytes = 329 Gigabytes
Percent of usable storage:        54%

Phase 2

Phase 2 (partition and array sizes in Megabytes)

                 Drive 0   Drive 1   Drive 2   Drive 3   Partition Size   Array Total
Device           ad0       ad1       ad2       ad3
Drive Size       120 Gig   200 Gig   200 Gig   250 Gig
Actual usable    123482    203884    203884    250018
Raid5a           ad0s1     ad1s1     ad2s1     ad3s1     123290 x 3       369870
Raid5b           ad0s2     ad1s2     ad2s2     -         64 x 2           128
Raid5c           -         ad1s3     ad2s3     ad3s2     80466 x 2        160932
Raid5d           ad0s3     ad1s4     -         ad3s3     64 x 2           128
Raid5e           ad0s4     -         ad2s4     ad3s4     64 x 2           128
Unused Space     0         0         0         46134

Total Space (sum of all drives):  781268 Megabytes = 781 Gigabytes
Total available within Zpool:     531186 Megabytes = 531 Gigabytes
Percent of usable storage:        68%

Procedure

The procedure using a FreeNAS nightly build (newest experimental version):

Step 1

The first thing you need to do is figure out the best use of your four drives’ space. Use the chart in Phase 1 as a guide; I suggest using a spreadsheet to work out the best sizes for the partitions. Remember, if you are doing this yourself, keep each replacement partition the same size or larger. You can’t shrink them: ZFS only lets you replace part of a raid array with a part of the same size or larger.

Step 2

Using FreeNAS’s GUI, mount the four disks and enable SMART monitoring (Disks|Management| +), with the preformatted file system set as “ZFS storage pool device”.

This allows FreeNAS to monitor the drives using S.M.A.R.T., and it can be set up to email you when a drive starts showing symptoms of problems.

Step 3

Now I create raid5a1 on drive0 (ad0) using fdisk-linux (I removed the original FreeNAS fdisk and renamed fdisk-linux to fdisk on my FreeNAS server). Get the fdisk-linux package [here].

Step 4

Delete or rename the original /sbin/fdisk and copy fdisk-linux into your /sbin folder. I renamed fdisk-linux to fdisk, and that is what I use for the rest of this document.
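For reference, a minimal sketch of that swap from the command line, assuming the fdisk-linux binary has already been downloaded into the current directory (the backup name fdisk.orig is just my choice):

  # mv /sbin/fdisk /sbin/fdisk.orig
  # cp fdisk-linux /sbin/fdisk
  # chmod 555 /sbin/fdisk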

Step 5

From the command prompt:

Partition Drives

Partition Drive 0 (ad0) (delete all partitions on the drives first)

Beginning the Partitioning
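Here is a rough sketch of what the interactive session with the Linux fdisk looks like for ad0. The partition start/end values come out of your own Phase 1 worksheet, so treat the single-letter commands below as the pattern, not the exact keystrokes:

  # fdisk /dev/ad0
    o     create a new, empty partition table (wipes the old partitions)
    n     new primary partition 1 - becomes ad0s1 (the Raid5a slice)
    n     new primary partition 2 - becomes ad0s2 (the Raid5b slice)
    n     new primary partition 3 - becomes ad0s3 (the Raid5d slice)
    n     new primary partition 4 - becomes ad0s4 (the Raid5e slice)
    w     write the new table to disk and exit

Drives ad1, ad2 and ad3 below get the same treatment, each with its own partition sizes from the Phase 1 table.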

Partition Drive 1 (ad1)

Now we Partition Drive 2 (ad2)

Now we partition Drive3 (ad3)

Now let’s create our zpool (tank0), which will contain our five raid5 arrays (Raid5a to Raid5e) built from the partitions.
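In ZFS terms each of these raid5 sets is a raidz vdev, so creating the pool with the first one (Raid5a) would look something like this:

  # zpool create tank0 raidz ad0s1 ad1s1 ad2s1 ad3s1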

Now we add the 2nd raid array to the same pool (tank0).

Now we add the 3rd raid array:

Now we add the 4th raid array:

Now we add the 5th and final raid to our pool (tank0)
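Taken together, the four zpool add commands would look roughly like this, in the order Raid5b through Raid5e (because the vdevs have different widths, zpool add may complain about a mismatched replication level and ask for -f):

  # zpool add tank0 raidz ad0s2 ad1s2 ad2s2
  # zpool add tank0 raidz ad1s3 ad2s3 ad3s2
  # zpool add tank0 raidz ad0s3 ad1s4 ad3s3
  # zpool add tank0 raidz ad0s4 ad2s4 ad3s4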

Before we can share it on the network with SMB (samba) we have to enable writing:
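One way to do that is simply to loosen the permissions on the pool’s mount point so the samba user can write there. Assuming the default mount point of /tank0 (check yours with zfs get mountpoint tank0), that is roughly:

  # zfs get mountpoint tank0
  # chmod -R 775 /tank0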

Now add the tank0 to samba and test the speed.

Testing

Over my 100Mb/s Ethernet connection.

Writing a 1,411,699,808 byte file to tank0 over the network took 152 seconds, which works out to about 74 Mb/s. Not too bad considering all the writing to the different raid5 arrays within the pool; ZFS must do a lot in memory. RAM usage was around 25% on my 1 Gig FreeNAS box, and CPU varied from 25% to 50% on my AMD Sempron(tm) Processor 3000+ running at 1800 MHz.

Reading a 1,411,699,808 byte file from tank0 over the network took 160 seconds, which is about 70 Mb/s. Not too bad either. RAM usage was around 14% on my 1 Gig FreeNAS box, and CPU sat consistently at 25% on my AMD Sempron(tm) Processor 3000+ running at 1800 MHz.

It is surprising that reading from the zpool tank0 was a little slower than writing to it. Either way, these speeds aren’t too shabby and will do well for streaming HD quality movies or doing archival backups of your data.

Swapping Drives On The Fly

Now on to swapping out the 80 Gig drive for a 250 Gig without backing up to another drive and then rebuilding the array and copying it all back.

First scrub the zpool to make sure it’s all good.
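Kick off the scrub and keep an eye on its progress with zpool status:

  # zpool scrub tank0
  # zpool status tank0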

After a little while the scrub completes. This will take longer with more data in the pool.

Let’s export the zpool.
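That is just:

  # zpool export tank0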

Shut down the computer and swap out the 200 Gig drive3 for the new, larger 250 Gig drive3. We work our way down to drive0 in steps, cascading the larger drives along the way.

With the new drive installed we have to import the pool to see it.

During testing I had already used the name tank0 and built a similar raid array, and ZFS is trying to gather all the info it can, so it finds two pools with the same name. That means I must import the real one using its ID.
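Roughly, with the real pool’s numeric ID standing in for the placeholder below (zpool import with no arguments lists every importable pool along with its ID):

  # zpool import
  # zpool import <ID-of-the-real-tank0>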

So far so good, now we partition Drive3 (the new 250 Gig drive) ad3 and replace the partitions into the raid arrays of our zpool.

Now we replace the partitions of old drive3 (200 Gig) with the new Drive3 (250 Gig).
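Because each new partition comes up with the same device name as the one it replaces, the single-argument form of zpool replace is enough; the first one would look something like:

  # zpool replace tank0 ad3s1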

After it completes we replace the other 3 partitions:
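That is, roughly:

  # zpool replace tank0 ad3s2
  # zpool replace tank0 ad3s3
  # zpool replace tank0 ad3s4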

When they are complete we have a nice clean pool again:

Testing

Copy some more files onto tank0 for testing… No problem. I’ve got a 500 Meg .ZIP file on tank0, and I’ll run a test on it to make sure the data is still perfect, just to be 100% sure.

Test found no errors. As expected.

Let’s scrub again; when it completes without error, we export and shut down the computer. Then we swap out drive2 (200 Gig) with the old drive3 (200 Gig). This is probably not needed since they are the same size, but I’m going to do it for completeness.

Shut down, swap out the drives, and reboot with the new drive2 (200 Gig):

Now we repartition drive2 with our new sizes:

Now we replace drive2’s larger partitions in our pool:

After a while the resilvering should complete and we look clean again:

Some more quick tests; everything tests OK, as it should.

The size is still the same:
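A quick way to check is:

  # zpool list tank0
  # zfs list tank0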

The pool will grow only after the last partition of any one raid array has been replaced with a larger one.

So all is good. Now we need to swap out Drive1 (120 Gig) with the old Drive2 (200 Gig) drive, so we scrub and export:

The scrub got to 97.7%, then my system gave some errors (I didn’t have time to read the screen) and rebooted.

I wonder if it’s a problem with FreeNAS’s ZFS or something else?

After the reboot the pool tank0 came up OK by itself. Maybe the scrub did complete before the reboot happened; I’m not 100% sure.

Going to do another scrub…

Completed without any problems this time. Weird!

Now to export and swap out the drives.

Shut down and swap Drive1 (120 Gig) with the old Drive2 (200 Gig).

Reboot and check the zpool

Now we import our zpool

As you can see, the ad1 partitions need to be replaced. But first we have to create the new, larger partitions on our new drive1 (200 Gig).

With the new partitions created, we can now do the zpool replace step.

I don’t wait for each partition to resilver. I type all four replace commands as soon as I can and I let it chug away.
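In other words, something like the following typed back to back, letting ZFS work through the resilvering on its own:

  # zpool replace tank0 ad1s1
  # zpool replace tank0 ad1s2
  # zpool replace tank0 ad1s3
  # zpool replace tank0 ad1s4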

Once it finishes we should look like:

Still the same size:

Our tank0 pool still hasn’t changed size; it should once we swap the last (fourth) drive’s partitions.

So now we get ready to replace the last drive: Drive0 (80 Gig, ad0) is swapped for our old drive1 (120 Gig) drive.

Once it completes (check with #zpool status), we export the pool and shut down the computer.

OK to shut down and swap drives now.

With the last drive replaced, Drive0 (80 Gig) is now Drive0 (120 Gig).

We do it all one last time:

Now repartition drive0

Now we replace our new partitions into the pool:

Once it finishes, we should now have a nice clean pool:

We now have more storage space and we didn’t have to find somewhere to store our old data temporarily, which will get harder to do as the storage pool grows larger and larger. We just expanded it in place.

Old Space:

After one last Reboot:

I also add all the new drives in FreeNAS using the WebGUI under Disk Management and enable SMART on them, so SMART monitoring within FreeNAS can email me if errors start to show up on any of my drives.

The read and write times are still the same. I thought they might have gotten a little better since the new 250 Gig drive is faster than the older 80 Gig drive, but with all the raid work going on I guess so much is being done in RAM that drive read and write speeds aren’t being leaned on too heavily. That’s a good thing!

I hope this method is helpful for others; I will be using it all the time now. One thing to remember: you can’t swap out IDE drives for SATA drives. The reason is that the SATA controllers show up with a new device name, and ZFS needs the replacement to use the exact device name of the old one. So if you are going to do this and want to be future-proof, I suggest starting with four SATA drives.

Good luck, Glen Hewlett.

P.S. Here are a bunch more drive expansion examples (the drive sizes aren’t exact, so to maximize space you may want to verify your own drive sizes).
