Thursday, 27 October 2011

Software that adds another dimension to things

I think that it will come as no surprise to my fellow software engineers if I note that I almost never write new software any more. I maintain, I augment, I refactor, I debug but very, very rarely do I start something new.

This probably has something to do with my maturity as a code monkey: my immediate reaction is to seek out an existing solution to a problem and perhaps extend it to fulfil my requirements.

Partially this comes from my innate laziness, but over time I have also discovered that I am a "finisher": the role I invariably end up in involves doing all the final bits needed to make the client accept a project. Because I know I will always finish something once I start it, I avoid starting things.

Anyhow, enough introspection. A couple of months ago I was talking on IRC about my 3D printer and was asked "can you print the Debian logo?". So I hunted around for software that would let me convert a bitmap into a suitable 3D format. The result was rather disappointing; the few tools I could find were generally python scripts which simply generated a matrix of cuboids, one per pixel, with heights corresponding to the pixel values.

I used one such script to generate a file for the Debian swirl and imported it into the OpenSCAD 3D modelling application. I got an inkling of the issues involved after the scene render took over half an hour. The resulting print was blocky and overall I was not terribly happy with the outcome.

So I decided I would write a program to convert images into a 3D representation. I asked myself, how hard can it be?

Sometimes starting from an utterly naive approach with no knowledge of a problem can lead to new insights. In this case I have spent all my free coding time for a month producing a program which I am now convinced has barely even scratched the surface of the possible solutions.

Having said that, I have gone from a blank editor window and a manual gcc command line to an actually useful tool and, arguably of more importance to me, I have learned and implemented a load of new algorithms, which has been mentally stimulating and fun!

The basic premise of the tool is to take a PNG image, quantise it into a discrete number of levels, convert that as a height map into a triangle mesh, index that mesh (actually a very hard problem to solve efficiently), simplify the indexed mesh and output the result in a selection of 3D file formats.
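
To make the first step concrete, here is a minimal sketch of quantising an 8-bit greyscale sample into a fixed number of height levels. It is my own illustration rather than the actual PNG23D code, and the function name and parameters are invented:

#include <stdint.h>

/* Quantise an 8-bit greyscale sample into one of `levels` discrete
 * height steps, 0 being the lowest. A real tool would also scale the
 * result by the desired model height.
 */
static unsigned int quantise_sample(uint8_t value, unsigned int levels)
{
        return ((unsigned int)value * levels) / 256;
}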

The mesh generation alone is a complex field which, it appears, often devolves into the marching cubes algorithm simply out of despair of anything better ;-) I have failed to implement marching cubes so far (though I have partially implemented marching squares, an altogether simpler algorithm).
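
For a flavour of the truly naive end of the spectrum, the sketch below (again my own illustration, not PNG23D's generator) turns a height map into two triangles per pixel quad; correct winding and the walls and base needed for a printable solid are conveniently ignored:

struct vertex { float x, y, z; };
struct triangle { struct vertex v[3]; };

/* Emit the two triangles covering the quad whose corners are (x,y) and
 * (x+1,y+1) in a height map `width` samples wide. Always writes two
 * triangles to `out`.
 */
static void quad_to_triangles(const float *hmap, int width,
                              int x, int y, struct triangle *out)
{
        struct vertex a = { x,     y,     hmap[y * width + x] };
        struct vertex b = { x + 1, y,     hmap[y * width + x + 1] };
        struct vertex c = { x,     y + 1, hmap[(y + 1) * width + x] };
        struct vertex d = { x + 1, y + 1, hmap[(y + 1) * width + x + 1] };

        out[0].v[0] = a; out[0].v[1] = b; out[0].v[2] = c;
        out[1].v[0] = b; out[1].v[1] = d; out[1].v[2] = c;
}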

The mesh indexing creates an indexed list of vertices from the generated mesh and back annotates it with which faces are connected to which vertices. This effectively generates a useful representation of the mesh's topology, which can then be used to reduce the complexity of the mesh, or at least describe it. To gain efficiency I implemented my first ever bloom filter as part of my solution. I also learned that generating a plausible hash for said filter is a lot harder than it would seem. In the end I simply used the FNV hash, which produces excellent results for very little computation cost.
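
The FNV hash itself is pleasingly small. A sketch of the 32-bit FNV-1a variant follows; the constants are the published ones, though whether PNG23D uses exactly this variant I will not promise here. The hash of a vertex's packed coordinates can then be folded down to pick the bit positions set and tested in the bloom filter.

#include <stddef.h>
#include <stdint.h>

/* 32-bit FNV-1a hash over an arbitrary buffer, e.g. a packed vertex. */
static uint32_t fnv1a_32(const void *data, size_t len)
{
        const uint8_t *bytes = data;
        uint32_t hash = 2166136261u;   /* FNV offset basis */
        size_t i;

        for (i = 0; i < len; i++) {
                hash ^= bytes[i];
                hash *= 16777619u;     /* FNV prime */
        }
        return hash;
}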

The mesh simplification area is awash with academic research, most of which I ended up skipping; I simply went for the absolute simplest edge removal algorithm. Implementing even this while maintaining a valid mesh topology was challenging.
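
To give an idea of what "absolute simplest" means here, this is a naive sketch (my own, and deliberately stupid) of collapsing a single edge of an indexed mesh by welding one endpoint onto the other and discarding triangles that become degenerate; it makes no attempt to preserve the overall shape well:

#include <string.h>

/* An indexed mesh: each triangle is three vertex indices. */
struct idxmesh {
        unsigned int *tri;        /* 3 indices per triangle */
        unsigned int tri_count;
};

/* Collapse the edge (keep, drop): every use of `drop` becomes `keep`,
 * and triangles left with fewer than three distinct vertices are
 * removed. Returns the new triangle count.
 */
static unsigned int collapse_edge(struct idxmesh *m,
                                  unsigned int keep, unsigned int drop)
{
        unsigned int src, dst = 0;

        for (src = 0; src < m->tri_count; src++) {
                unsigned int *t = m->tri + src * 3;
                unsigned int i;

                for (i = 0; i < 3; i++)
                        if (t[i] == drop)
                                t[i] = keep;

                if (t[0] == t[1] || t[1] == t[2] || t[0] == t[2])
                        continue;  /* degenerate after the collapse */

                memmove(m->tri + dst * 3, t, 3 * sizeof(*t));
                dst++;
        }
        m->tri_count = dst;
        return dst;
}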

By comparison the output of the various formats was positively trivial, mainly littered with head scratching over the bizarre de-facto "extensible" formats where only one trivial corner is ever actually implemented.
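
As an example, the ASCII variant of STL (one format a tool like this would plausibly emit; reusing the triangle structure from the sketch above) needs little more than:

#include <stdio.h>

/* Write triangles out as ASCII STL. Normals are left at zero; many
 * consumers recompute them from the vertex winding anyway.
 */
static void write_stl(FILE *f, const struct triangle *tris, size_t count)
{
        size_t i, j;

        fprintf(f, "solid png23d\n");
        for (i = 0; i < count; i++) {
                fprintf(f, " facet normal 0 0 0\n  outer loop\n");
                for (j = 0; j < 3; j++)
                        fprintf(f, "   vertex %f %f %f\n",
                                tris[i].v[j].x, tris[i].v[j].y,
                                tris[i].v[j].z);
                fprintf(f, "  endloop\n endfacet\n");
        }
        fprintf(f, "endsolid png23d\n");
}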

All in all I have had fun creating the PNG23D project and have actually used it to generate some useful output. I have even printed some of it to generate a lithophane of Turing. I now look forward to several years of maintaining and debugging it and doing all the other things I do instead of writing new software ;-)

Wednesday, 12 October 2011

I do not want anything NASty to happen

I have a lot of digital data to store, like most people I have photos, music, home movies, email and lots of other random data. Being a programmer I also tend to have huge piles of source code and builds lying about. If all that was not enough I work from home so I have copious mountains of work data too.

Many years ago I decided I wanted a single robust, backed up, file server for all of this. So I slapped together a machine from leftovers, stuffed some drives into a software RAID array, served it over NFS and CIFS and never looked back.

Over time the hardware has changed and the system upgraded, but the basic approach of a custom built server has remained. When I needed a build engine to churn out hundreds of kernels a day for the ARM Linux autobuilder the system was expanded to cope, and in mid-2009 the current instantiation was created.

[Image: the current full height tower fileserver]

The current system is a huge tower case (courtesy of Mark Hymers) containing a Core 2 Quad 2.33GHz (8 threads) with 8 Gigabytes of memory and 13 drives across four SATA controllers, split into several RAID arrays. Despite buying new drives at higher capacities I have tended to keep the old drives around for extra storage, resulting in what you see here.

I recently looked at the power usage of this monster and realised I was paying a lot of money to spin rust, which was simply uneconomic. Seriously, why did I have six sub-200 Gigabyte drives running when a single 2 TB replacement would pay for itself in power saved in under a month? In addition I no longer required the compute power either; most definitely time for a downsize!

Several friends suggested an HP MicroServer might be just the thing. After examining and evaluating some other options (Thecus and QNAP NAS units) I decided the HP route was most definitely the best value for money.

The HP ProLiant MicroServer is a dual-core 1.3GHz Athlon II system with a Gigabyte of memory, space for four SATA hard drives and a single 5¼ inch bay for an optical drive. All this in a cube roughly 250mm on a side.

[Image: my HP ProLiant MicroServer]

I went out and bought the server from ebuyer for £235 with free shipping and £100 cashback. I immediately sent off the cashback paperwork so I would not forget (what an odd way to get a discount), so the total cost for the unit was £135. I then used Crucial to select a suitable memory upgrade to take the total to 2 Gigabytes of RAM for £14.

The final piece of the solution was the drives for the storage. I decided the best capacity to cost ratio could be had from 2 TB drives, and with four bays available that would give a raw capacity of 8 TB or, more usefully for this discussion, just under 7.3 TiB.

I did an experiment with 3 × 1 TB 7200 RPM drives from the existing server and determined that the overall system would not really benefit enough to justify the 50% price premium of 7200 RPM drives over 5400 RPM devices. I ended up getting four Samsung Spinpoint F4EG 2 TB drives for £230.

I also bought a black LG DVD-RW drive for £16. I would also have required a SATA data cable and a Molex to SATA power cable if I had not already got them.

[Image: my HP MicroServer with the front door open]

Putting the components together was really simple. The internal layout and design of the enclosure mean it is easy to work with, and it has the feel of build quality I usually associate with HP and IBM server kit, not something this small and inexpensive.

The provided documentation is good but unnecessary as most operations are obvious. They even provide the bolts to attach all the drives along with a wrench in the lockable front door, how thoughtful is that!

I then installed the system with Debian squeeze from the optical drive, principally because I happened to have a network installer CD to hand, although the BIOS does have network boot capability.

I used the installer to put the whole initial system together and did not have to resort to the command line even once; I am very impressed with how far D-I has come.

After asking several people for advice, the general consensus was that I should create two partitions on each drive: one for a RAID 1 /boot and one for a RAID 5 LVM area.

I did have to perform the entire install a second time because there is a gotcha with GUID Partition Table, RAID 1 boot drives and GRUB. You must have a small "BIOS" partition on the front of the drive or GRUB cannot install in the MBR and your system will not boot!

The partition layout I ended up with looks like:
Model: ATA SAMSUNG HD204UI (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  32.0MB  32.0MB                     bios_grub
 3      32.0MB  1000MB  968MB                      raid
 2      1000MB  2000GB  1999GB                     raid

The small Gigabyte partition was configured as a RAID 1 across all four drives, formatted ext2 and given a mount point of /boot. The large space was configured as RAID 5 across all four drives with LVM on top. Logical volumes were allocated and formatted ext3 (on advice from several people about ext4 instability they had observed): 50 GiB for root, 4 GiB for swap and 1 TiB for home space.

The normal Debian install proceeded and after the post-install reboot I was presented with a login prompt. Absolutely no surprises at all: no additional drivers required and a correctly running system.

Over the next few days I did the usual sysadmin stuff and rsynced data from the old fileserver, including creating logical volumes corresponding to the various arrays on the old server, none of which presented much of a problem. The 5.5 TiB RAID 5 did, however, take a day or so to sync!

I used the microserver's eSATA port to attach the external drives I use for backup purposes, which has also not been an issue so far.

I am currently running both the new and old systems side by side for a few days and rsyncing data to the microserver until I am sure of it. I will make the switch this weekend, shut the old system down and leave it until the following weekend before I scrub the old drives.

Before I made it live I decided to run some benchmarks and gather some data just for interest.
Bonnie (version 1.96) was run in the root logical volume (I repeated the tests in other volumes; there was sub-1% variation). The test used a 4 GiB size and 16 files:

         Sequential Output          Sequential Input  Random Seeks  Sequential Create          Random Create
         Per Chr  Block    Rewrite  Per Chr  Block                  Create   Read     Delete   Create   Read     Delete
/sec     378K     41M      37M      2216K    330M     412.8         11697    +++++    18330    14246    +++++    14371
%CPU     97       11       8        91       30       15            24       +++      28       29       +++      22
Latency  109ms    681ms    324ms    116ms    93389µs  250ms         29021µs  814µs    842µs    362µs    51µs     61µs

There do not seem to be any notable issues there; the write speeds are a little lower than I might like, but that is the cost of RAID 5 and 5400 RPM drives.

The rsync operations used to sync up the live data seem to manage just short of 20MiB/s for the home partition, comprising 250GiB in two and a half million files with the expected mix of file sizes. The video partition managed 33MiB/s on 1TiB of data in nine thousand files.

The same bonnie tests were then performed accessing the server over NFS, with a 24 GiB size and 16 files:
         Sequential Output          Sequential Input  Random Seeks  Sequential Create          Random Create
         Per Chr  Block    Rewrite  Per Chr  Block                  Create   Read     Delete   Create   Read     Delete
/sec     1733K    29M      19M      4608K    106M     358.3         1465     3714     2402     1576     4082     1529
%CPU     98       2        4        93       10       10            8        10       9        9        9        7
Latency  10894µs  23242ms  69159ms  49772µs  224ms    250ms         148ms    24821µs  157ms    108ms    2074µs   719ms

Or, alternatively, expressed as percentages of the previous direct access values:

         Sequential Output          Sequential Input  Random Seeks  Sequential Create          Random Create
         Per Chr  Block    Rewrite  Per Chr  Block                  Create   Read     Delete   Create   Read     Delete
/sec     464      68       51       213      32       87            12       +++      13       11       +++      10
%CPU     101      18       50       104      34       71            33       +++      32       31       +++      31
Latency  9        3413     21345    42       239      100           509      3049     18646    29834    4066     1178688

Not that this tells us much, aside from the fact that writes are a bit slower over the network, reads are limited by gigabit network bandwidth, and disc latency over the network is generally poorer than direct access.

In summary, the total cost was £395 for a complete, ready to use system with 5.5 TiB of RAID 5 storage which can be served over NFS at nearly 900 Mbit/s. Overall I am happy with the result; my only real issue is that the write performance is a little disappointing, but it is good enough for what I need.

Tuesday, 11 October 2011

Sometimes I am just dumb

I have recently been working on some code for NetSurf which builds up an output buffer by repeatedly calling snprintf(). No great shock there; a well understood, trivial pattern that has been used for ages.

Of course I discovered a buffer overflow, which to be fair had already been pointed out to me by another developer and I just failed to see it... Can I blame my old age? No? Bah, get off my lawn!

Basically it boils down to me not seeing where C helpfully let me subtract one size_t typed value from another to get a length, completely forgetting that a negative result would simply become a huge positive value...

I (erroneously) believed snprintf took a (signed) int as the buffer length; it does of course return one, but it takes a size_t, which is, of course, unsigned.
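
For reference, the prototype from the snprintf(3) manual page is:

int snprintf(char *str, size_t size, const char *format, ...);
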
Gosh I feel silly now; in fact I was so convinced I was right that I wrote a program to "prove" it.
/* snprintf.c
 *
 * snprintf example
 *
 * Public domain (really I do not think it is even copyrightable anyway)
 *
 * cc -Wall -o snprintf-ex snprintf-ex.c
 */

#include <stdio.h>
#include <string.h>

#define SHOW printf("%3ld %.*s*%s\n", (long)(string_len - slen), (int)slen, string, string + strlen(string) + 1)


int main(int argc, char **argv)
{
        char string[64];
        size_t string_len = sizeof(string) / 2;
        size_t bloop;
        size_t slen = 0;

        /* initialise string with a repeating digit pattern */
        for (bloop = 0; bloop < (sizeof(string) - 1); bloop++) {
                string[bloop] = '0' + (bloop % 10);
        }
        string[bloop] = 0; /* null terminate */

        printf("%3ld %s\n", (long)string_len, string);

        /* try an empty string */
        slen += snprintf(string + slen, string_len - slen, "%s", "");

        SHOW;

        /* blah blah */
        slen += snprintf(string + slen, string_len - slen, "Lorem ipsum dolor");

        SHOW;

        /* this one should exceed the allowed length */
        slen += snprintf(string + slen, string_len - slen, "Lorem ipsum dolor");

        SHOW;

        /* should not call snprintf at all if slen exceeds string_len as the
         * subtraction results in a negative length and snprintf takes an
         * unsigned size!
         */

        /* this one starts exceeding the allowed length */
        slen += snprintf(string + slen, string_len - slen, "Lorem ipsum dolor");

        SHOW;

        return 0;
}

Of course all this really proved was that I was wrong and I needed to clean up the original code as soon as possible.
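
For what it is worth, the cleaned-up pattern needs a guard before the subtraction is ever performed. A minimal sketch of the idea (my own illustration, not the actual NetSurf fix; the helper name is invented):

#include <stdarg.h>
#include <stdio.h>

/* Append formatted output to buf (total size buf_len) at *offset,
 * refusing to compute buf_len - *offset once the buffer is full so the
 * size_t subtraction can never wrap. Returns what vsnprintf returned,
 * or zero when nothing was attempted.
 */
static int append_fmt(char *buf, size_t buf_len, size_t *offset,
                      const char *fmt, ...)
{
        va_list ap;
        int ret;

        if (*offset >= buf_len)
                return 0;

        va_start(ap, fmt);
        ret = vsnprintf(buf + *offset, buf_len - *offset, fmt, ap);
        va_end(ap);

        if (ret > 0)
                *offset += ret;
        return ret;
}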

A good lesson to learn here is that no matter how experienced you are, you can be mistaken. Perhaps some redemption I can take from this is that I have matured enough as a programmer to write a test program to prove myself wrong!

Thursday, 6 October 2011

Introduction to printing in another dimension

Mankind, it would appear, owes a great deal of its evolutionary advantage to using tools. This ability appears to have been massively amplified by our creation of machine tools.

A machine tool is widely defined to be a machine where the movement of the tool (the tool path) is not directly controlled by a human. One of the first known examples is a late 15th century lathe used to cut screw threads. The Industrial Revolution was intimately interconnected with the creation of new machine tools, and arguably by the mid 19th century all the distinct subtractive machine tool types had been discovered.

I ought to explain the word subtractive in this context; it is a pretty simple and rather arbitrary distinction (but important for this discussion). Traditional machining removes, or subtracts, material to obtain a finished item, akin to a sculptor revealing the statue within a block of stone using a chisel and hammer. The counterpart to this is, unsurprisingly, the additive process, where material is added to create the finished item.

The machine tools of the 19th century were primarily single-use devices controlled by gears and link mechanisms. Although the Jacquard loom was well known, because of the physical engineering difficulties combining the concept with a machine tool to create a programmable tool path was not fully realised until the opening of the 20th century.

In the late 1940s electric motors and punched cards/tape made machine tools Numerically Controlled (NC), and when computers arrived in the 60s we gained Computer Numerical Control (CNC) and, with it, the opportunity to completely screw things up with software.

With the advent of CNC, additive systems became practical, and by the late 1980s these machines were being widely used for Rapid Prototyping.

The first additive system in general use was the simple pen plotter, which added ink on top of paper and became popular in draughting offices for producing blueprints and the like. Though more generally thought of as a computer printing technique, plotters owe their heritage to CNC machines.

Next came prototyping systems based on layered object manufacture, which cut shapes in a thin flat material (paper or plastic) and glued them together. These systems were expensive compared to casting processes (use a subtractive machine to make a mould and cast the part), extremely wasteful of source material, and the results could be of variable quality. Systems based on this process are still manufactured and used.

Then came the stereolithography approach, which scans a focused UV laser to cure resin and build up an object. There are several commercial machines available and even some home-built systems, but the cost of the resin means this approach is not yet generally cost effective.

Currently the most common commercial rapid prototyping additive systems are selective sintering processes, where either an electron beam or a high power laser melts a layer of powdered material on a bed; the bed is then lowered, more powder is added and the process repeated. This approach can use many different types of material and is very flexible, as the powder used can be plastic or metal. The quality is very high and fine resolutions are available. Unfortunately these machines are expensive, generally starting around £20,000, which puts them out of most individuals' reach.

If anyone is still reading here is the summary of what we have covered so far:
  • Humans have used tools since they stopped being monkeys.
  • More than a century back we figured out how to make machines control the tools.
  • Fifty years back we made computers control the tools; before this all machine tools were subtractive.
  • In the last twenty years we have discovered several expensive ways to make objects with additive methods.
Now we get to the promise of the title: in the last few years Fused Filament Fabrication has become a viable option for the hobbyist. This method extrudes a thermoplastic through a nozzle and constructs an object one layer at a time from the bottom up.

The RepRap project at Bath University helped kickstart the development of a plethora of practical, operational 3D printers that can be built or bought. These machines are relatively inexpensive (starting from £400 if you build one yourself) and the feedstock is also reasonably inexpensive.

In another post I will discuss the actual practicalities of building and running one of these devices and look at their software.