I have a lot of digital data to store: like most people I have photos, music, home movies, email and lots of other random data. Being a programmer I also tend to have huge piles of source code and builds lying about. If all that was not enough, I work from home so I have copious mountains of work data too.
Many years ago I decided I wanted a single robust, backed-up file server for all of this. So I slapped together a machine from leftovers, stuffed some drives into a software RAID array, served it over NFS and CIFS, and never looked back.
Over time the hardware has changed and the system has been upgraded, but the basic approach of a custom-built server has remained. When I needed a build engine to churn out hundreds of kernels a day for the ARM Linux autobuilder the system was expanded to cope, and in mid-2009 the current instantiation was created.
The current system is a huge tower case (courtesy of Mark Hymers) containing a Core 2 Quad 2.33GHz (8 threads) with 8 Gigabytes of memory and 13 drives across four SATA controllers split into several RAID arrays. Despite buying new drives at higher capacities I have tended to keep the old drives around for extra storage, resulting in what you see here.
I recently looked at the power usage of this monster and realised I was paying a lot of money to spin rust, which was simply uneconomic. Seriously, why did I have six sub-200 Gigabyte drives running when a single 2TB drive to replace them would pay for itself in power saved in under a month? In addition I no longer required the compute power either; most definitely time for a downsize!
Several friends suggested an HP MicroServer might be just the thing. After examining and evaluating some other options (Thecus and QNAP NAS units) I decided the HP route was most definitely the best value for money.
The HP ProLiant MicroServer is a dual-core Athlon II 1.3GHz system with a Gigabyte of memory, space for four SATA hard drives and a single 5¼ inch bay for an optical drive. All this in a cube roughly 250mm on a side.
I went out and bought the server from ebuyer for £235 with free shipping and £100 cashback. I immediately sent off the cashback paperwork so I would not forget (what an odd way to get a discount), so the total cost for the unit was £135. I then used Crucial to select a suitable memory upgrade to take the total to 2 Gigabytes of RAM for £14.
The final piece of the solution was the drives for the storage. I decided the best capacity-to-cost ratio could be had from 2 TB drives; with four bays available that gives a raw capacity of 8 TB or, more usefully for this discussion, roughly 7.3 TiB (4 × 2 × 10¹² bytes ÷ 2⁴⁰ bytes per TiB ≈ 7.28 TiB).
I did an experiment with 3x1 TB 7200 RPM drives from the existing server and determined that the overall system would not really benefit enough to justify the 50% price premium of 7200 RPM drives over 5400 RPM devices. I ended up getting four Samsung Spinpoint F4EG 2 TB drives for £230.
I also bought a black LG DVD-RW drive for £16. I would also have required a SATA data cable and a Molex-to-SATA power cable if I had not already got them.
Putting the components together was really simple. The internal layout and design of the enclosure mean it is easy to work with, and it has the feel of build quality I usually associate with HP and IBM server kit, not something this small and inexpensive.
The provided documentation is good but largely unnecessary as most operations are obvious. They even provide the bolts to attach all the drives, along with a wrench, in the lockable front door; how thoughtful is that!
I then installed the system with Debian squeeze from the optical drive, principally because I happened to have a network installer CD to hand, although the BIOS does have network boot capability.
I used the installer to put the whole initial system together and did not have to resort to the command line even once; I am very impressed with how far D-I (the Debian Installer) has come.
After asking several people for advice, the general consensus was that I should create two partitions on each drive: one for a RAID 1 /boot and one for a RAID 5 LVM area.
I did have to perform the entire install a second time because there is a gotcha with GUID Partition Table (GPT) drives, RAID 1 boot partitions and GRUB. You must have a small BIOS boot partition at the front of each drive or GRUB has nowhere to embed its core image and your system will not boot!
The partition layout I ended up with looks like:
    Model: ATA SAMSUNG HD204UI (scsi)
    Disk /dev/sda: 2000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt

    Number  Start   End     Size    File system  Name  Flags
     1      17.4kB  32.0MB  32.0MB                     bios_grub
     3      32.0MB  1000MB  968MB                      raid
     2      1000MB  2000GB  1999GB                     raid
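For reference, a similar layout could be created by hand with parted along these lines; this is only a sketch (the device name, sizes and resulting partition numbers are illustrative, and the Debian installer's partitioner did the real work for me):

    # Sketch only: lay out one drive like the above (repeat for each of the four).
    parted -s /dev/sda mklabel gpt
    parted -s /dev/sda mkpart primary 17.4kB 32MB     # tiny partition for GRUB to embed into
    parted -s /dev/sda set 1 bios_grub on
    parted -s /dev/sda mkpart primary 32MB 1000MB     # small partition for the RAID 1 /boot
    parted -s /dev/sda set 2 raid on
    parted -s /dev/sda mkpart primary 1000MB 2000GB   # the rest for the RAID 5 + LVM area
    parted -s /dev/sda set 3 raid on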
The small one-Gigabyte partition was configured as a RAID 1 across all four drives and formatted ext2 with a mount point of /boot. The large space was configured as RAID 5 across all four drives with LVM on top. Logical volumes were allocated and formatted ext3 (on advice from several people about ext4 instability they had observed) for a 50 GiB root, 4 GiB swap and 1 TiB home space.
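The installer handled all of that too, but the manual equivalent would look roughly like the following sketch (the md device names, the volume group name vg0 and the mkfs choices are my assumptions, not what D-I actually used; partition 3 is the small one and partition 2 the large one on my drives, as in the parted output above):

    # RAID 1 across the four small partitions for /boot
    mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]3
    mkfs.ext2 /dev/md0

    # RAID 5 across the four large partitions with LVM on top
    mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[abcd]2
    pvcreate /dev/md1
    vgcreate vg0 /dev/md1
    lvcreate -L 50G -n root vg0
    lvcreate -L 4G  -n swap vg0
    lvcreate -L 1T  -n home vg0
    mkfs.ext3 /dev/vg0/root
    mkswap /dev/vg0/swap
    mkfs.ext3 /dev/vg0/home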
The normal Debian install proceeded and after the post-install reboot I was presented with a login prompt. Absolutely no surprises at all: no additional drivers required and a correctly running system.
Over the next few days I did the usual sysadmin stuff and rsynced data from the old fileserver, including creating logical volumes for the various arrays from the old server, none of which presented much of a problem. The 5.5TiB RAID 5 did, however, take a day or so to sync!
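The copy itself was nothing more exotic than rsync over SSH, plus keeping an eye on the initial array resync; something along these lines, where the hostname and paths are made up for illustration:

    # Pull a tree across from the old server, preserving hard links, ACLs and xattrs
    rsync -aHAX --numeric-ids oldserver:/srv/home/ /srv/home/

    # Watch the initial RAID 5 resync grind along
    cat /proc/mdstat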
I used the MicroServer's eSATA port to attach the external drives I use for backup purposes, which has also not been an issue so far.
I am currently running both the new and old systems side by side for a few days, rsyncing data to the MicroServer until I am sure of it. I will make the switch this weekend, shut the old system down, and leave it until the following weekend before I scrub the old drives.
Before I made it live I decided to run some benchmarks and gather some data just for interest.
Bonnie++ (version 1.96) was run in the root logical volume (I repeated the tests in other volumes; there is sub-1% variation). The test used a 4GiB size and 16 files. Entries reported as +++++ simply mean the operation completed too quickly for bonnie to produce a reliable figure.
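For the record, the invocation was along these lines (the target directory is just an example; -s sets the test file size and -n the number of files, in multiples of 1024, for the create tests):

    # Roughly the bonnie++ invocation used for the local tests (directory is an example)
    bonnie++ -d /srv/test -s 4g -n 16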
| Test | /sec | %CPU | Latency |
|---|---|---|---|
| Sequential Output, Per Chr | 378K | 97 | 109ms |
| Sequential Output, Block | 41M | 11 | 681ms |
| Sequential Output, Rewrite | 37M | 8 | 324ms |
| Sequential Input, Per Chr | 2216K | 91 | 116ms |
| Sequential Input, Block | 330M | 30 | 93389µs |
| Random Seeks | 412.8 | 15 | 250ms |
| Sequential Create, Create | 11697 | 24 | 29021µs |
| Sequential Create, Read | +++++ | +++ | 814µs |
| Sequential Create, Delete | 18330 | 28 | 842µs |
| Random Create, Create | 14246 | 29 | 362µs |
| Random Create, Read | +++++ | +++ | 51µs |
| Random Create, Delete | 14371 | 22 | 61µs |
There do not seem to be any notable issues there; the write speeds are a little lower than I might like, but that is the cost of RAID 5 and 5400 RPM drives.
The rsync operations used to sync up the live data seem to manage just short of 20MiB/s for the home partition, comprising 250GiB in two and a half million files with the expected mix of file sizes. The video partition managed 33MiB/s on 1TiB of data in nine thousand files.
The bonnie tests were then repeated accessing the server over NFS, with a 24GiB size and 16 files.
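The NFS setup itself is nothing special; on the server side it is just an entry in /etc/exports and a mount on the client over the Gigabit link, roughly like this sketch (the path and client subnet are illustrative, not my actual configuration):

    # /etc/exports on the MicroServer (example path and client subnet)
    /srv/home  192.168.1.0/24(rw,no_subtree_check)

    # mounted on the client machine
    mount -t nfs microserver:/srv/home /mnt/home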
| Test | /sec | %CPU | Latency |
|---|---|---|---|
| Sequential Output, Per Chr | 1733K | 98 | 10894µs |
| Sequential Output, Block | 29M | 2 | 23242ms |
| Sequential Output, Rewrite | 19M | 4 | 69159ms |
| Sequential Input, Per Chr | 4608K | 93 | 49772µs |
| Sequential Input, Block | 106M | 10 | 224ms |
| Random Seeks | 358.3 | 10 | 250ms |
| Sequential Create, Create | 1465 | 8 | 148ms |
| Sequential Create, Read | 3714 | 10 | 24821µs |
| Sequential Create, Delete | 2402 | 9 | 157ms |
| Random Create, Create | 1576 | 9 | 108ms |
| Random Create, Read | 4082 | 9 | 2074µs |
| Random Create, Delete | 1529 | 7 | 719ms |
Or, alternatively, expressed as percentages of the previous direct-access values:
| Test | /sec (%) | %CPU (%) | Latency (%) |
|---|---|---|---|
| Sequential Output, Per Chr | 464 | 101 | 9 |
| Sequential Output, Block | 68 | 18 | 2512 |
| Sequential Output, Rewrite | 51 | 50 | 13248 |
| Sequential Input, Per Chr | 213 | 104 | 93 |
| Sequential Input, Block | 32 | 34 | 227 |
| Random Seeks | 87 | 71 | 79 |
| Sequential Create, Create | 12 | 33 | 509 |
| Sequential Create, Read | +++ | +++ | 3049 |
| Sequential Create, Delete | 13 | 32 | 18646 |
| Random Create, Create | 11 | 31 | 29834 |
| Random Create, Read | +++ | +++ | 4066 |
| Random Create, Delete | 10 | 31 | 1178688 |
Not that this tells us much aside from the fact that writes are a bit slower over the network, reads are limited by the Gigabit network bandwidth, and disc latency over the network is generally poorer than direct access.
In summary, the total cost was £395 for a complete, ready-to-use system with 5.5TiB of RAID 5 storage which can be served over NFS at nearly 900Mbit/s. Overall I am happy with the result; my only real issue is that the write performance is a little disappointing, but it is good enough for what I need.