Storage capacity is hard to plan when you do not know the exact space requirements in advance. Even with the price of hard drives going down, you do not want to pay up front for space you may never use. Extending the available space after purchase, without rebooting your server, can be a great way to save money now without compromising the future.
Let’s see how to extend a RAID 10 array on a Dell R720xd running Ubuntu Server 12.04, even when Dell says the RAID controller does not support this feature.
Introduction
I just received a wonderful Dell PowerEdge R720xd that I am about to set up for a client. I chose this one because of its large hard disk capacity (12 x 3.5-inch drives, hot swappable). The thing is, most clients do not want to spend too much in advance for their needs. I think this is a wise decision, considering we may never need the extra space before replacing the server. With this server, I had to think ahead about the possibility of extending, live, the space available to the MySQL database that will be hosted on the disks.
I normally use RAID 10 for important data storage and that is what I planned for this baby. But reading forums and Dell documentation, I learned that a RAID 10 cannot be extended on the H710 controller. I then thought about LVM (Logical Volume Manager) and the possibility of creating another RAID 10 with new drives when the need comes. I also knew I had to test my theory on the server before putting it into production.
So here are the steps to extend a 6-disk RAID 10 with another 4- or 6-disk RAID 10 in the future. I plan to purchase another 6 disks and fill the 12-drive bay if we really need extra space during the server’s lifetime.
NOTE: In most command output, I removed unimportant information to leave only the essential.
Machine description
This is a Dell PowerEdge R720xd with 32 GB of RAM and an H710 SAS RAID controller with 512 MB of battery-backed NVRAM. There are two redundant hot-swappable 495 W power supplies, 6 x 1 TB 7.2K RPM Near-Line SAS 3.5" hard drives, and a quad-port Gigabit Broadcom network card.
The server is configured to boot in UEFI mode, allowing a GPT partition table to work with the 3 TB RAID 10.
By having the capacity to add 6 more drives for another RAID 10, I can choose to use larger drives next time, depending on the remaining life of the server and the space needed.
The big picture
I have 6 disks that will be set up in RAID 10 for production. I want to know if I will be able to grow the filesystem where the database resides, without interruption, by creating a new RAID 10 with 6 other disks and adding its space to the current filesystem.
Because I only have 6 disks, I created a RAID 1 (2 disks) with the CTRL-R option at boot. This RAID 1 stands in for the original RAID 10 that will go into production. That leaves 4 disks, outside the server for now, to create a RAID 10 that simulates the future 6-disk RAID 10.
Installation of Ubuntu 12.04.2 LTS Server
I made a normal install of the software on the first RAID. The partition table, in GPT format, is as follows:
Partition Number | Filesystem | Mount point | Size
---|---|---|---
#1 | EFI Boot Partition | N/A | 500M
#2 | ext4 | /boot | 300M
#3 | LVM Physical Volume | N/A | remaining available space
In the LVM configuration, I created a Volume Group named “main” using partition #3 as the Physical Volume. I then created two Logical Volumes:
- “swap” of 16G
- “root” with the remaining free space of the Volume Group
I then selected “swap space” for the main/swap Logical Volume (LV) and the “XFS” filesystem for the main/root LV.
It is important that the main/root LV be formatted with a filesystem that can grow. XFS can do it quickly, online, without any reboot, when we need it.
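The installer did all of this through its menus. Done by hand, the equivalent LVM commands would look roughly like the sketch below (assumptions: /dev/sda3 is partition #3 from the table above, and the sizes match my setup):

```
# Rough equivalent of the installer's LVM steps (sketch only).
pvcreate /dev/sda3                 # declare partition #3 as an LVM Physical Volume
vgcreate main /dev/sda3            # create the "main" Volume Group on it
lvcreate -L 16G -n swap main       # 16G "swap" Logical Volume
lvcreate -l 100%FREE -n root main  # "root" LV takes the remaining space
mkswap /dev/main/swap              # format the swap LV
mkfs.xfs /dev/main/root            # format root with a growable filesystem (XFS)
```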
Install the Dell OpenManage Storage Services software
The OMSA installation procedure on the linux.dell.com website shows how to install the necessary tools to manage your server. In our case, only the storage tools are necessary:
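As a sketch, the repository setup and install on Ubuntu look roughly like this (the repository URL and package name come from the linux.dell.com community repository and may differ for your release):

```
# Add the Dell OpenManage community repository (note the space between "latest" and "/").
echo 'deb http://linux.dell.com/repo/community/deb/latest /' | \
    sudo tee /etc/apt/sources.list.d/linux.dell.com.sources.list

# Import the repository signing key as described on linux.dell.com, then install
# only the storage management tools.
sudo apt-get update
sudo apt-get install srvadmin-storageservices
```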
(Yes, there really is a space between “latest” and “/” in the deb line.)
Important: you need to reboot after installing the OMSA software. Only then will it detect your hardware.
Check out the RAID configuration and create a new RAID 10
Now that you have rebooted and have the OMSA storage tools, you can list the current RAID 1 configuration and the two disks in the array:
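The listing commands are along these lines (controller ID 0 is an assumption; check yours with “omreport storage controller”):

```
# Show the existing virtual disks (the RAID 1) on controller 0.
omreport storage vdisk controller=0

# Show the physical disks attached to controller 0, with their IDs and states.
omreport storage pdisk controller=0
```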
I then added 4 disks to simulate the new disks I would add in a few years to grow our available space. We can see them by repeating the “pdisk” command:
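A sketch of that check:

```
# List the physical disks again to see the newly inserted drives.
omreport storage pdisk controller=0
```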
Note that the new drives are in state “Ready” instead of “Online” like the first two because they are not part of any RAID yet.
I then created the new RAID 10 with the 4 disks. You have to specify the IDs of the disks to add; these IDs are shown in the previous “pdisk” output.
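As a sketch, the creation command looks like this (the pdisk IDs below are examples only; use the ones reported on your controller):

```
# Create a RAID 10 virtual disk using all the space of the four new drives.
# The pdisk IDs (connector:enclosure:slot) are examples; substitute yours.
omconfig storage controller controller=0 action=createvdisk raid=r10 size=max \
    pdisk=0:1:2,0:1:3,0:1:4,0:1:5
```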
On this system, the newly created RAID 10 was assigned the device name /dev/sdd. It will be used in the other commands of this procedure; just substitute it with yours.
At this point, you may want to wait for the newly created RAID 10 to be fully initialized before using it, so you get its maximum performance before adding data to it.
You can follow the state of the initialization with the “vdisk” command. In this example, it was 66% complete, and all the LEDs of the newly inserted disks were blinking.
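A sketch of the check:

```
# The vdisk report shows the state of the new virtual disk and a progress
# percentage while the background initialization is running.
omreport storage vdisk controller=0
```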
For this test, I did not wait and continued before the background initialization was completed.
Make a GPT partition on the new RAID 10
When the new RAID 10 is ready, it does not contain any partition. We have to create one with the “parted” command-line tool.
As you see in the example below, we use the “print” command to find the total size of the RAID 10 (1999G in our case). In the “mkpart” command, the “end” parameter of the partition should be this value if you want to use all the disk space as an LVM Physical Volume (PV). We then use the “set” command to change the partition type to LVM Physical Volume.
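A sketch of the parted commands, assuming the new RAID 10 appears as /dev/sdd and reports a total size of 1999GB:

```
# Label the new virtual disk with a GPT partition table.
sudo parted /dev/sdd mklabel gpt

# Print the disk information to confirm the total size (1999GB here).
sudo parted /dev/sdd print

# Create one partition spanning the whole disk, then flag it as an LVM PV.
sudo parted /dev/sdd mkpart primary 1MiB 1999GB
sudo parted /dev/sdd set 1 lvm on
```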
Create the new Physical Volume and extend the Volume Group
In LVM, a Volume Group (VG) is composed of one or more Physical Volumes (PV). When I installed Ubuntu, I created a VG called “main” that uses one PV on the RAID 1 as its space.
The following commands create a new PV with the new partition on the RAID 10 and add it to the “main” VG.
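A sketch of those commands, still assuming the new partition is /dev/sdd1:

```
# Declare the new partition as an LVM Physical Volume.
sudo pvcreate /dev/sdd1

# Add the new PV to the existing "main" Volume Group.
sudo vgextend main /dev/sdd1

# Check the result: the VG now shows the RAID 10 space as VFree.
sudo vgs
```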
You can see the 1.82 terabytes of free space with the last “vgs” command. This is the space from the new RAID 10. It is not available to the filesystem and the database yet. Let’s continue…
Extend the “root” Logical Volume
Now I want all the free space of the “main” VG to be available to the “root” Logical Volume (LV). The LV is where the filesystem resides.
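A sketch of the extension:

```
# Give all remaining free space in the "main" VG to the "root" LV.
sudo lvextend -l +100%FREE /dev/main/root

# VFree should now be back to zero.
sudo vgs
```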
As you can see from the last “vgs” command, there is no VFree left after extending the “main/root” LV. The filesystem now has the space to grow, but it does not take the newly available space automatically.
Extend the XFS filesystem
The last step consists in growing the filesystem on the root LV (or wherever the database resides in your case) so it uses all the space available in the LV.
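A sketch of the final step (xfs_growfs works on the mounted filesystem and takes the mount point as argument; “/” here, since root is the filesystem being grown in this test):

```
# Grow the mounted XFS filesystem, online, to fill the resized logical volume.
sudo xfs_growfs /

# Confirm the new size.
df -h /
```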
And you now have a filesystem spanning two RAID arrays (in my test, a RAID 1 plus a RAID 10) without rebooting your server.
Conclusion
I am happy that my theory about using LVM to extend the RAID 10 worked. I knew it was possible to extend a VG, an LV, and the XFS filesystem. What I doubted was the creation of a new RAID 10 with the Dell OpenManage tools, without rebooting.
I also rebooted the server just to be sure the Volume Group works properly and all the PVs are detected. Everything went fine, even if the newly created /dev/sdd1 Physical Volume is now named /dev/sdb1 after the reboot. This does not cause any boot issue, and the filesystem stays at its newly extended size.