Thursday, 27 October 2011

Software that adds another dimension to things

I think that it will come as no surprise to my fellow software engineers if I note that I almost never write new software any more. I maintain, I augment, I refactor, I debug but very, very rarely do I start something new.

This probably has something to do with my maturity as a code monkey: my immediate reaction is to seek out an existing solution to a problem and perhaps extend it to fulfil my requirements.

Partially this comes from my innate laziness, but over time I have also discovered that I am a "finisher": the role I invariably end up in involves doing all the final bits to make the client accept a project. Because I know my reaction is always to finish something I start, I avoid starting things.

Anyhow, enough introspection. A couple of months ago I was talking on IRC about my 3D printer and was asked "can you print the Debian logo?". So I hunted around for software that would let me convert a bitmap into a suitable 3D format. The result was rather disappointing; the few tools I could find were generally Python scripts which simply generated a matrix of cuboids, one for each pixel, with heights corresponding to the pixel values.

I used one such script to generate a file for the Debian swirl and imported it into the OpenSCAD 3D modelling application. I got an inkling of the issues involved when the scene render took over half an hour. The resulting print was blocky and overall I was not terribly happy with the outcome.

So I decided I would write a program to convert images into a 3D representation. I asked myself, how hard can it be?

Sometimes starting from an utterly naive approach with no knowledge of a problem can lead to new insights. In this case I have spent all my free coding time for a month producing a program which I am now convinced has barely even scratched the surface of the possible solutions.

Having said that, I have started from a blank editor window and a manual gcc command line and progressed to an actually useful tool. Arguably of more import to me, I have learned and implemented a load of new algorithms, which has been mentally stimulating and fun!

The basic premise of the tool is to take a PNG image, quantise it into a discrete number of levels, convert that as a height map into a triangle mesh, index that mesh (actually a very hard problem to solve efficiently), simplify the indexed mesh and output the result in a selection of 3D file formats.
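The quantisation step at least is as simple as it sounds; something along these lines maps an 8-bit greyscale pixel onto one of a handful of levels which later become layers of height in the mesh (an illustrative sketch only, not the actual PNG23D code):

#include <stdint.h>

/* Map an 8-bit greyscale value onto one of `levels` discrete steps;
 * each step later becomes one layer of height in the generated mesh.
 * (Illustrative sketch; the names here are made up, not PNG23D's.)
 */
static unsigned quantise(uint8_t pixel, unsigned levels)
{
        return ((unsigned)pixel * levels) / 256; /* yields 0 .. levels-1 */
}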

The mesh generation alone is a complex field which, it appears, often devolves into the marching cubes algorithm simply out of despair of finding anything better ;-) I have failed to implement marching cubes so far (though I have partially implemented marching squares, an altogether simpler algorithm).
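For anyone wondering what marching squares actually involves, the heart of it is nothing more than classifying each 2x2 block of thresholded pixels into one of sixteen cases, which then selects the contour edges to emit for that cell. A minimal sketch of that classification step (my illustration, not the PNG23D implementation) looks something like:

#include <stdint.h>

/* Classify one marching squares cell: each corner of the 2x2 block
 * contributes a bit, giving a case index of 0..15 which selects the
 * contour segments to output for that cell.
 */
static unsigned cell_case(const uint8_t *img, unsigned width,
                          unsigned x, unsigned y, uint8_t threshold)
{
        unsigned idx = 0;

        if (img[ y      * width + x    ] >= threshold) idx |= 1; /* top left */
        if (img[ y      * width + x + 1] >= threshold) idx |= 2; /* top right */
        if (img[(y + 1) * width + x + 1] >= threshold) idx |= 4; /* bottom right */
        if (img[(y + 1) * width + x    ] >= threshold) idx |= 8; /* bottom left */

        return idx;
}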

The mesh indexing creates an indexed list of vertices from the generated mesh and back-annotates it with which faces are connected to which vertices. This effectively generates a useful representation of the mesh's topology which can then be used to reduce the complexity of the mesh, or at least describe it. To gain efficiency I implemented my first ever bloom filter as part of my solution. I also learned that generating a plausible hash for said filter is a lot harder than it would seem. In the end I simply used the FNV hash, which produces excellent results for very little computational cost.
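For the curious, FNV-1a really is only a handful of lines; a sketch along these lines (using the published 32-bit FNV parameters, and not necessarily exactly what PNG23D does) is enough to feed a simple bloom filter that lets most lookups skip the expensive exact vertex comparison:

#include <stddef.h>
#include <stdint.h>

/* 32-bit FNV-1a over an arbitrary buffer (offset basis 2166136261,
 * prime 16777619). Cheap to compute and mixes well enough for a filter.
 */
static uint32_t fnv1a(const void *data, size_t len)
{
        const uint8_t *p = data;
        uint32_t hash = 2166136261u;

        while (len--) {
                hash ^= *p++;
                hash *= 16777619u;
        }
        return hash;
}

/* A bloom filter then just sets (and later tests) a couple of bits
 * derived from the hash: a vertex that "might be present" falls through
 * to the exact search, one that "definitely is not" skips it entirely.
 * Here the filter is a fixed 65536 bit (8KiB) array purely for illustration.
 */
static void bloom_add(uint8_t bitmap[8192], const float vertex[3])
{
        uint32_t h = fnv1a(vertex, sizeof(float) * 3);

        bitmap[(h & 0xffff) >> 3] |= 1u << (h & 7);
        bitmap[(h >> 16) >> 3]    |= 1u << ((h >> 16) & 7);
}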

The mesh simplification area is awash with academic research, most of which I ended up skipping; I simply went for the absolute simplest edge removal algorithm. Implementing even this while maintaining a valid mesh topology was challenging.

By comparison the output of the various formats was positively trivial, mainly littered with head scratching over the bizarre de-facto "extensible" formats where only one trivial corner is ever actually implemented.

All in all I have had fun creating the PNG23D project and have actually used it to generate some useful output. I have even printed some of it to generate a lithophane of Turing. I now look forward to several years of maintaining and debugging it and doing all the other things I do instead of writing new software ;-)

Wednesday, 12 October 2011

I do not want anything NASty to happen

I have a lot of digital data to store, like most people I have photos, music, home movies, email and lots of other random data. Being a programmer I also tend to have huge piles of source code and builds lying about. If all that was not enough I work from home so I have copious mountains of work data too.

Many years ago I decided I wanted a single robust, backed-up file server for all of this. So I slapped together a machine from leftovers, stuffed some drives into a software RAID array, served it over NFS and CIFS and never looked back.

Over time the hardware has changed and the system been upgraded, but the basic approach of a custom built server has remained. When I needed a build engine to churn out hundreds of kernels a day for the ARM Linux autobuilder the system was expanded to cope, and in mid 2009 the current instantiation was created.

Current full height tower fileserver
The current system is a huge tower case (courtesy of Mark Hymers) containing a 2.33GHz Core 2 Quad (four cores) with 8 Gigabytes of memory and 13 drives across four SATA controllers split into several RAID arrays. Despite buying new drives at higher capacities I have tended to keep the old drives around for extra storage, resulting in what you see here.

I recently looked at the power usage of this monster and realised I was paying a lot of money to spin rust, which was simply uneconomic. Seriously, why did I have six sub-200 Gigabyte drives running when a single 2TB drive to replace them would pay for itself in power saved in under a month! In addition I no longer required the compute power available either; most definitely time for a downsize!

Several friends suggested an HP MicroServer might be just the thing. After examining and evaluating some other options (Thecus and QNAP NAS units) I decided the HP route was most definitely the best value for money.

The HP ProLiant MicroServer is a dual core Athlon II 1.3GHz system with a Gigabyte of memory, space for four SATA hard drives and a single 5¼ inch bay for an optical drive. All this in a cube roughly 250mm on a side.

My HP ProLiant MicroServer
I went out and bought the server from ebuyer for £235 with free shipping and £100 cashback. I immediately sent off the cashback paperwork so I would not forget (what an odd way to get a discount), so the total cost for the unit was £135. I then used Crucial to select a suitable memory upgrade to take the total to 2 Gigabytes of RAM for £14.

The final piece of the solution was the drives for the storage. I decided the best capacity to cost ratio could be had from 2 TB drives, and with four bays available that would give a raw capacity of 8 TB, or more usefully for this discussion about 7.3 TiB.

I did an experiment with 3x1 TB 7200 RPM drives from the existing server and determined that the overall system would not really benefit enough to justify the 50% price premium of 7200 RPM drives over 5400 RPM devices. I ended up getting four Samsung Spinpoint F4EG 2 TB drives for £230.

I also bought a black LG DVD-RW drive for £16. I would have also required a SATA data cable and a molex to SATA power cable if I had not already got them.

My HP microserver with the front door open
Putting the components together was really simple. The internal layout and design of the enclosure mean it is easy to work with, and it has the feel of build quality I usually associate with HP and IBM server kit, not something this small and inexpensive.

The provided documentation is good but unnecessary as most operations are obvious. They even provide the bolts to attach all the drives along with a wrench in the lockable front door, how thoughtful is that!

I then installed the system with Debian squeeze from the optical drive, principally because I happened to have a network installer CD to hand, although the BIOS does have network boot capability.

I used the installer to put the whole initial system together and did not have to resort to the command line even once; I am very impressed with how far D-I has come.

After asking several people for advice the general consensus was that I should create two partitions on each drive: one for a RAID 1 /boot and one for a RAID 5 LVM area.

I did have to perform the entire install a second time because there is a gotcha with GUID Partition Table (GPT), RAID 1 boot drives and GRUB. You must have a small "BIOS boot" partition at the front of the drive or GRUB has nowhere to embed itself and your system will not boot!

The partition layout I ended up with looks like:
Model: ATA SAMSUNG HD204UI (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  32.0MB  32.0MB                     bios_grub
 3      32.0MB  1000MB  968MB                      raid
 2      1000MB  2000GB  1999GB                     raid

The small Gigabyte partition was configured as RAID 1 across all four drives, formatted ext2 and given a mount point of /boot. The large space was configured as RAID 5 across all four drives with LVM on top. Logical volumes were allocated and formatted ext3 (on advice from several people about ext4 instability they had observed) for a 50 GiB root, 4 GiB swap and 1 TiB home space.

The normal Debian install proceeded and after the post install reboot I was presented with a login prompt. Absolutely no surprises at all: no additional drivers required and a correctly running system.

Over the next few days I did the usual sysadmin stuff and rsynced data from the old fileserver, including creating logical volumes to match the various arrays from the old server, none of which presented much of a problem. The 5.5 TiB RAID 5 did however take a day or so to sync!

I used the microserver's eSATA port to attach the external drives I use for backup purposes, which has also not been an issue so far.

I am currently running both the new and old systems for a few days and rsyncing data to the microserver until I am sure of it. Actually I will make the switch this weekend and shut the old system down and leave it till next weekend before I scrub the old drives.

Before I made it live I decided to run some benchmarks and gather some data just for interest.
Bonnie (version 1.96) was run in the root logical volume (I repeated the tests in other volumes; there is sub-1% variation); the test used a 4GiB size and 16 files.

             ---Sequential Output---   -Sequential Input-  -Random-   --Sequential Create--    ----Random Create----
             Per Chr  Block   Rewrite  Per Chr  Block      Seeks      Create   Read    Delete  Create   Read    Delete
/sec         378K     41M     37M      2216K    330M       412.8      11697    +++++   18330   14246    +++++   14371
%CPU         97       11      8        91       30         15         24       +++     28      29       +++     22
Latency      109ms    681ms   324ms    116ms    93389µs    250ms      29021µs  814µs   842µs   362µs    51µs    61µs

There do not seem to be any notable issues there; the write speeds are a little lower than I might like, but that is the cost of RAID 5 and 5400 RPM drives.

The rsync operations used to sync up the live data seem to manage just short of 20MiB/s for the home partition, comprising 250GiB in two and a half million files with the expected mix of file sizes. The video partition managed 33MiB/s on 1TiB of data in nine thousand files.

The bonnie tests were then repeated accessing the server over NFS, with a 24GiB size and 16 files.
             ---Sequential Output---   -Sequential Input-  -Random-   --Sequential Create--    ----Random Create----
             Per Chr  Block   Rewrite  Per Chr  Block      Seeks      Create   Read    Delete  Create   Read    Delete
/sec         1733K    29M     19M      4608K    106M       358.3      1465     3714    2402    1576     4082    1529
%CPU         98       2       4        93       10         10         8        10      9       9        9       7
Latency      10894µs  23242ms 69159ms  49772µs  224ms      250ms      148ms    24821µs 157ms   108ms    2074µs  719ms

or alternatively as percentages against the previous direct access values

             ---Sequential Output---   -Sequential Input-  -Random-   --Sequential Create--    ----Random Create----
             Per Chr  Block   Rewrite  Per Chr  Block      Seeks      Create   Read    Delete  Create   Read    Delete
/sec         464      68      51       213      32         87         12       +++     13      11       +++     10
CPU          101      18      50       104      34         71         33       +++     32      31       +++     31
Latency      9        251     21324    89       322        779        509      3049    18646   29834    4066    1178688

Not that that tells us much, aside from the fact that writes are a bit slower over the network, reads are limited by gigabit network bandwidth, and disc latency over the network is generally poorer than direct access.

In summary the total cost was £395 for a complete, ready to use system with 5.5TiB of RAID 5 storage which can be served over NFS at nearly 900Mbit/s. Overall I am happy with the result; my only real issue is that the write performance is a little disappointing, but it is good enough for what I need.

Tuesday, 11 October 2011

Sometimes I am just dumb

I have recently been working on some code for NetSurf which builds up an output buffer by repeatedly calling snprintf(). No great shock there; a well understood, trivial pattern that has been used repeatedly for ages.

Of course I discovered a buffer overflow, which to be fair had already been pointed out to me by another developer and I just failed to see it... Can I blame my old age? No? Bah, get off my lawn!

Basically it boils down to me simply not seeing where C helpfully let me subtract one size_t typed value from another for a length, and me completely forgetting that a negative result would simply become a large positive value...

I (erroneously) believed snprintf took a (signed) int as the buffer length; of course it returns one, but it takes a size_t which is, of course, unsigned.
Gosh I feel silly now; in fact I was so convinced I was right I wrote a program to "prove" it.
/* snprintf.c
 *
 * snprintf example
 *
 * Public domain (really I do not think its even copyrightable anyway)
 *
 * cc -Wall -o snprintf-ex snprintf-ex.c
 */

#include <stdio.h>
#include <string.h>

#define SHOW printf("%3ld %.*s*%s\n", string_len - slen, (int)slen, string, string + strlen(string) + 1)

int main(int argc, char **argv)
{
        char string[64];
        size_t string_len = sizeof(string) / 2;
        int bloop;
        size_t slen = 0;

        /* initialise string */
        for (bloop = 0; bloop < (sizeof(string) - 1); bloop++) {
                string[bloop] = '0' + (bloop % 10);
        }
        string[bloop] = 0; /* null terminate */

        printf("%3ld %s\n", string_len, string);

        /* try an empty string */
        slen += snprintf(string + slen, string_len - slen, "%s", "");

        SHOW;

        /* blah blah */
        slen += snprintf(string + slen, string_len - slen, "Lorem ipsum dolor");

        SHOW;

        /* this one should exceed the allowed length */
        slen += snprintf(string + slen, string_len - slen, "Lorem ipsum dolor");

        SHOW;

        /* should not call snprintf if slen exceeds string_len as the
         * subtraction results in a negative length which becomes a huge
         * unsigned size!
         */

        /* this one starts exceeding the allowed length */
        slen += snprintf(string + slen, string_len - slen, "Lorem ipsum dolor");

        SHOW;

        return 0;
}

Of course all this really proved was that I was wrong and that I needed to clean up the original code as soon as possible.
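For reference, the fix amounts to never letting the subtraction go negative; a guarded version of the append pattern looks something like this (a sketch of the idea rather than the actual NetSurf change):

#include <stddef.h>
#include <stdio.h>

/* Append a string to buf, tracking how much *would* have been written.
 * The remaining space is only computed while slen is genuinely less
 * than buf_len, so the size_t argument passed to snprintf can never
 * wrap around to a huge positive value.
 */
static size_t append(char *buf, size_t buf_len, size_t slen, const char *s)
{
        size_t space = (slen < buf_len) ? (buf_len - slen) : 0;
        int res;

        res = snprintf(buf + (buf_len - space), space, "%s", s);
        if (res < 0)
                return slen; /* output error; leave the length alone */

        /* may exceed buf_len, which lets the caller detect truncation */
        return slen + (size_t)res;
}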

A good lesson to learn here is that no matter how experienced you are, you can be mistaken. Perhaps the one piece of redemption I can take from this is that I have matured enough as a programmer to write a test program to prove myself wrong!

Thursday, 6 October 2011

Introduction to printing in another dimension

Mankind, it would appear, owes a great deal of its evolutionary advantage to using tools. This ability appears to have been massively amplified by our creation of machine tools.

A machine tool is widely defined to be a machine where the movement of the tool (the tool path) is not directly controlled by a human. One of the first known examples is a late 15th century lathe used to cut screw threads. The Industrial Revolution was intimately interconnected with the creation of new machine tools, and arguably by the mid 19th century all the distinct subtractive machine tool types had been discovered.

I ought to explain the word subtractive in this context; it is a pretty simple and rather arbitrary distinction (but important for this discussion). Traditional machining removes or subtracts material to obtain a finished item, akin to a sculptor revealing the statue from within a block of stone by using a chisel and hammer. The corollary to this is, unsurprisingly, the additive process where material is added to create the finished item.

The machine tools of the 19th century were primarily single use devices controlled by gears and link mechanisms. Although the Jacquard loom was well known, because of the physical engineering difficulties, combining the concept with a machine tool to create a programmable tool path was not fully realised until the opening of the 20th century.

In the late 1940s electrical motors and punch cards/tape made machine tools Numerically Controlled (NC) and when computers arrived in the 60s we gained Computer Numerical Control (CNC) and the opportunity to completely screw things up with software became available.

With the advent of CNC, additive systems became practical and by the late 1980s these machines were being widely used for Rapid Prototyping.

The first additive system in general use was the simple pen plotter, which added ink on top of paper and became popular in draughting offices for producing blueprints etc. Though more generally thought of as a computer printing technique, plotters owe their heritage to CNC machines.

Next came prototyping systems based on layered object manufacture, which cut shapes in a thin flat material (paper or plastic) and glued them together. These systems were expensive compared to casting processes (use a subtractive machine to make a mould and cast the part), extremely wasteful of source material, and the results can be of variable quality. Systems based on this process are still manufactured and used.

Then came the stereolithography approach, which scans a focused UV laser to cure resin and build up an object. There are several commercial machines available and even some home built systems, but the cost of the resin means this approach is not yet generally cost effective.

Currently the most common commercial rapid prototyping additive systems are selective sintering processes, where either an electron beam or a high power laser melts a layer of powdered material on a bed; the bed is lowered, more powder added and the process repeated. This process can use many different types of material and is very flexible, as the powder used can be plastic or metal. The quality is very high and high resolutions are available. Unfortunately these machines are expensive, generally starting around £20,000, which puts them out of most individuals' reach.

If anyone is still reading here is the summary of what we have covered so far:
  • Humans have used tools since they stopped being monkeys.
  • More than a century back we figured out how to make machines control the tools.
  • Fifty years back we made computers control the tools; before this all tools were subtractive.
  • In the last twenty years we have discovered several expensive ways to make objects with additive methods.
Now we get to the promise of the title: in the last few years Fused Filament Fabrication has become a viable option for the hobbyist. This method extrudes a thermoplastic through a nozzle and constructs an object one layer at a time from the bottom up.

The RepRap project at Bath University helped kickstart development of a plethora of practical, operational 3D printers that can be built or bought. These machines are relatively inexpensive (starting from £400 if you build it yourself) and the feedstock is also reasonably inexpensive.

In another post I will discuss the actual practicalities of building and running one of these devices and look at their software.



Tuesday, 13 September 2011

Electricity is really just organized lightning.

I have recently been working on a project that requires a 12V supply. Ordinarily this is no problem; my selection of bench supplies is generally more than a match for anything I throw at them.

My TS3022S Bench Supply
This project however needed a little more "oomph" than usual, specifically 200W more. Funnily enough, precision variable output bench supplies capable of supplying 20A are rare and *very* expensive beasties.

So we turn to a fixed output supply; after all, I will want to run my project without hogging my bench supplies anyway. These can be bought from various electronics suppliers like Farnell from around the £50 mark, and Chinese imports from eBay sellers start around the £20 mark.

All very well and good, but that is money I was not planning on spending and possibly a month of waiting for an already badly delayed project. So I decided to convert an old ATX PSU into a 12V source. This is not a new idea and a quick search revealed many suitable guides online. I had a quick skim, decided I did understand the general idea and ploughed ahead.

Wikipedia has a very useful page on the ATX standard complete with pinout diagrams and colour codes. The pile of grey box ATX supplies available on my shelf was examined and one was helpfully labelled with a sticker proclaiming 22A@12V and we had a winner.

Opening the case of the donor 450W CIT branded supply revealed a mostly empty enclosure with the usual basic switching arrangement. I removed most of the wire loom aside from two of each output voltage (3.3V, 5V and 12V; I figured the other voltages might be useful in future) and three commons; the 3.3V and 5V sense lines were also kept. Each of these pairs was cut to length and the leads were wired to 4mm sockets.

The "PWR_EN" line was wired via a toggle switch to ground so the output can be switched on and off easily. The 5V standby and a 5V output line were wired to a green/red bi-colour LED (via 270Ω current limit resistors) to give indication that mains is present and when the output is on.

Holes were drilled for four 4mm sockets, an indicator LED and a switch. The connectors and switches were all mounted in the PSU casework. I plugged it all in, put an 8.2Ω load resistor on the 5V line with an ammeter in line and a voltmeter across the 12V rail.

ATX bench power supply
I turned the mains on and the LED lit up green (5V standby worked), and when I flicked the output switch the LED turned orange, the 12V line went to 12V and the expected 0.6A flowed through the load resistor.

Basically, Success!

I have since loaded the supply up to the 200W operating load and nothing unexpected has happened so I am happy. Seems converting an ATX PSU is a perfectly good way of getting a 200W 12V supply and I can recommend it for anyone as cheap as me willing to put an hour or so into such a project.

Sunday, 21 August 2011

A year of entropy

It has been a couple of years now since the release of the Entropy Key. Around a year ago we finally managed to have enough stock on hand that I obtained a real production unit and installed it in my border router.

I installed the Debian packages, configured the ekeyd into EGD server mode and installed the EGD client packages on my other machines and pretty much forgot about it.

The recent new release of the ekey host software (version 1.1.4) reminded me that I had been quietly collecting statistics for almost a whole year and had some munin graphs to share.

The munin graphs of the generated output are pretty dull. Aside from the minor efficiency improvement from the 1.1.3 release installed mid December, the generated rate has been a flat 3.93 kilobytes a second.
The temperature sensor on the Entropy key shows a good correlation with the on-board CPU thermal sensors within the host system.
The host border router/server is a busy box which provides most network services, including secure LDAP and SSL web services; it shows no sign of having run short of entropy at any point in the year.
The site's main file server and compile engine is a 4 core, 8 gigabyte system with 12 drives. This system is heavily used, with high load almost all the time, but without the EGD client running it has almost no entropy available.
The next system is my personal workstation. This machine often gets rebooted and is usually turned off overnight which is why there are gaps in the graph and odd discontinuities. Nonetheless entropy is always available just like the rest of my systems ;-)
And almost as a "control", here is a file server on the same network which has not been running the EGD client (OK, OK, it was misconfigured and I am an idiot ;-)
In conclusion it seems an entropy key can keep at least this small network completely filled up with all the entropy it needs without much fuss. YAY!

Tuesday, 31 May 2011

Can you just...

I should have learned by now, no sentence that starts "Can you just" ever ends well. In my experience it means someone else has misunderstood the problem at hand. Then we proceed to the part of the project where (according to my lovely wife) I end up using my condescending voice.

I work through what I have been asked for and eventually, if it goes well, we end up defining what the actual, real job to be done is. And almost inevitably the "Can you just" has become a major job.

Most of us, I fear, recognise this "pattern" from our working lives with software. Well, I am glad to report this pattern exists in the real, physical world too.

Last week we took a trip to my parents in law, two thoroughly nice people (I lucked out, no evil mother in law here). I had been asked before I went "Can you just fix the garage door, it sticks". So I took along some basic tools expecting to lubricate a hinge or something.

It turns out it was the garage back door (for humans to get in and out) and... well, there were bigger issues. The door frame was rotten and the door had pulled it away from the wall. So a new door frame, you say? Ah, well, yes.

At some point in the past someone had fitted a double glazed window and had, kinda, removed the lintel above the door and window! Yes, there were several courses of brick masonry wall resting on top of a uPVC window frame. The door frame had provided some support till it rotted and fell apart.

The window was under a huge strain and was actually 5cm shorter at one end than the other. The brickwork was no longer mortared and could better be described as a pile of bricks held together with caulking.

Vincent fitting the latch to the new door frame
So my bank holiday weekend was spent removing those bricks, making good, building a frame from 44x97mm planed timber bolted into the walls and covering it with weatherboard. OK, it is not masonry, but on the other hand it will not be falling on anyone's head any time soon.

And before anyone comments, yes, that frame is true; the spirit level says so. Alas the window frame is very, very wonky indeed and the wall it is sitting on is 4cm out too, so it looks a bit off.

Possibly not my best work, but you can hang a couple of hundred kilos from the frame without it budging, so I think it is solid enough for this purpose.

Providing my father in law keeps treating it with the wood preserver every couple of years it will not go rotten either and should last a long time.

Friday, 18 February 2011

Shedding

For some time now Melodie has wanted more outside storage.

The current outhouse is a 3 foot by 8 foot converted outside toilet. Due to its age (built 1884) this building is no longer watertight and is generally disintegrating at an alarming rate. One day soon it will have to be demolished. That day has not yet arrived; instead we purchased a plastic shed.

Unfortunately the only viable place for the new shed was next to the old one; this required removing a six foot section of flower bed complete with ivy, bamboo and an old sink.

Last Saturday I completed this removal and laid a concrete base ready to take the new shed. You would not think such a small area (2.8m square) would require so much material and effort to concrete over. 300Kg of 3:2:1 aggregate:sand:cement concrete mix went into the hole along with 100Kg of instant set concrete (for a rapid surface in the changeable weather).

Thursday afternoon Geoff (my nice helpful neighbour) offered to assist me in the assembly of the shed. I re-arranged my work schedule (yay home working) and after three hours the shed was assembled.

This morning it occurred to me that my webcam had recorded a time-lapse movie of the construction. I uploaded it to YouTube and present it here for your amusement.

Monday, 7 February 2011

It is a bit breezy

The weather has been a bit odd round here for a while now. The snow storms in December and early January were a mild inconvenience for me but as I work from home the advice not to travel was not too much of a problem.

It seems however that now February is here and the snow is gone we are in for some pretty strong storms. This actually affected me today when my neighbour's garden wall was blown over!

As you can see my nice new IP camera captured the event, well OK the frame before and after but you get the idea. Unfortunately for my neighbour the wall collapsed onto his pickup causing extensive damage.

A short time later when my weather station was recording gusts well over 50mph nearby drains started flowing the wrong way and it became a case of water, water everywhere!

It seems that when the new buildings were erected a few years ago the architect, while maximising the used space on the building plot, may have inadvertently created something of a wind tunnel.

The gap between our properties runs parallel (north to south) to the valley below. The wind seems to travel along the crest of the valley and be funnelled through any spaces between the houses. Fortunately the rest of the properties on Green Lane are pretty old and the spacing between them is very generous, so the funnelling effect is minimal.

I wonder if we could fit a wind turbine in there? Alas it was too much for my secondary anemometer which is now smashed in three parts.

Also, gaining access to the gable end wall of my property has become somewhat perilous (hence the wonky APT antenna I cannot get fixed). Yes, that really is a guy balancing on a ladder 10m up in a strong wind. And indeed the platform the ladder is resting on is built from scaffolding boards wedged between the houses.

I guess the hospital emergency room being 300m away means medical assistance is on hand, even so he is braver than I am. So my weather satellite imagery will just have to come from the internet like everyone else's for a while.

Monday, 6 December 2010

New Video Camera

Last week the boys were playing with their remote control car in the snow (which was fun) and Alex wanted to record what his car saw. I immediately dissuaded him from the idea that he could use the family's DV camcorder taped to his car!

The camera and a UK penny
Later on that day though I saw a rocket project on LMR which used a micro camera and suggested such cameras were available from eBay very cheaply. I did a quick search, ordered one from a UK seller at £15 plus £2.99 P&P and thought no more of it.

This afternoon the camera arrived and it really is tiny and Alex is already scheming of ways to use it in addition to attaching it to his RC car.

The video output is low quality (very blurry in low light) and I have yet to figure out how to disable the time stamp (which is wrong), but it does indeed record video to the storage, which can be downloaded via USB and played using VLC.

So if you want a tiny video camera (and an 8Gig micro SD card) which is so cheap you do not care if it gets broken, I can recommend these.

Monday, 29 November 2010

Mobile Telecommunication Luddite

Actually I can not really be called a Luddite because I am not really against telecommunication progress, nor do I fear it will negatively affect my employment... but the title sounded good? ;-)
Anyhow, I have a strange relationship with mobile phones. My inability to keep a functioning device has historically limited their usefulness to me.

Because of my low usage and odd attitude for a techie I have always used PAYG for my personal phone. Work may have provided me with a device on a contract for being on-call etc. but in general it has been PAYG all the way. My first phone was a Nokia 1610 back in the late nineties, second user after my employer at the time upgraded their contract and had a load of "leftovers". I paid 50 quid for it and bought a ten quid SIM and ten pounds of credit.

My phones
The standing joke among my friends for the next decade was that whatever provider I moved to would go out of business within the year. I went through about eight providers in ten years, and for the first six the same tenner of credit went with me! Each time the PAYG provider folded I would be moved on to someone new with a tenner's credit, a new SIM and a new number.

After Easy Mobile closed there was no option of a new provider with credit and the "recommended" provider was very poor, so this time I shopped around and went with Tesco Mobile, but remembered to take my number, which did at least stop my colleagues making fun of me for another move.

During this period my phones were no better than my providers. I bought a Nokia 1100 and used other people's leftovers, culminating in Daniel Silverstone taking pity on me and giving me his Sony Ericsson K800 at the end of his contract; despite acquiring an ADP1 and a G1 (both of which have dropped dead) this is the phone I have been using for three years now.

Due to this dreadful relationship I did not get the most from the technology and felt like I was missing out. Over the last few years, to try and address this, I have set myself a target of having a phone physically with me, turned on and in credit at all times. This I have finally managed for a whole six month stretch, and as a reward I have bought myself a nice Android based smartphone on contract with T-Mobile.

I have done all the administrative things to port the number so no-one will need to alter their address books :-)

After only a few days of usage I have already discovered why the combination of smart phone and decent contract is so appealing. The freedom to just call and text and use the internet wherever you are, without stopping to worry if you have enough credit, is a wonderful thing. And decent hardware with the guarantee that if it breaks all I have to do is go into the store and they give me a new one.

I went for the HTC Wildfire instead of the Desire on cost grounds (100 pounds up front instead of 290), and it seems perfectly reasonable hardware performance wise. My one and only niggle is that T-Mobile have nobbled the media player so it only plays some MP3s and not Oggs or FLACs. No real challenge, just a bit disappointing that vendors seem to think they need to fiddle.


Friday, 5 November 2010

Keeping kindling dry

I, along with a great number of people I know, now possess a 3rd generation Kindle. It seems Amazon have found a feature set and price point which make this device a winning solution.

My bookshelf complete with covered kindle
I did look at a huge number of alternatives like the Sony PRS600 and others but they were all more expensive than the £110 for the Kindle and did not have enough features to make a compelling argument for spending more.

Yes it has DRM. Yes it "only" supports PDF, MOBI and MP3. Yes it will not win any style or usability awards. But I went into this with eyes open; the device is "good enough".

The device lets me read books from a reasonable display. The integration with amazon.com is so seamless it poses a serious danger to my bank account. I should expand on that last point :-) Amazon have got the whole spending-money-on-a-book thing executed so well that you do not think twice about a couple of pounds here and there, and this soon adds up. I have set myself a rigid budget.

My main complaints are really just niggles:

  • Another different USB connector! Wahh, I thought everyone had agreed on mini USB? It seems that I now have to have yet another lead for micro USB.

  • The commercial book selection is a bit limited and missing a surprising number of popular titles. Some of this appears to be the publishers and authors simply clinging to their old business model. I fear some of them might not survive, and early indications are they are behaving like the music industry did... Guys, you are selling an infinite good; a scarcity model is going to fail!

  • The price of some of the books is absurd... they are asking hardback prices for the electronic edition! Seriously? How on earth can that possibly be justified? I can see that a hardback book with its print run could cost £5 per physical item (going from Lulu print on demand prices as a worst case) plus shipping and stocking fees. So how can you possibly justify charging the same price for a pile of bits where none of that applies? Also the pile of bits cannot be lent or sold. Not impressed.

  • eBook formatting is generally dreadful. I do not know who is mastering these books but they need to do a better job. If they tried to pull this in the physical editions they would get a seriously large number of returns.

  • I still have to pay whispernet delivery fees even though, because it is the wi-fi model, I am providing the bandwidth myself. I can see that differentiating between 3G and wi-fi delivery is a bit hard for them though.
However my one and only real complaint with the offering as a whole is the astronomical asking price for the leather cover. The cover is currently over a quarter of the price of the Kindle itself (£30 cover, £110 Kindle), which is just silly. It is a pretty nice cover, and the clever clip attachment means it does offer an integrated solution to protecting your Kindle, but not £30 nice.

Kindle in a sock cover
So my lovely wife (her kindle was bought with the cover) made me a sock for mine. This is great for casual round the house usage to stop me scuffing the screen but was a bit lightweight for protecting the kindle when out and about.

One day last week I had an idea. I would make my own protective cover by crafting something I had wanted to do for ages. And the (unoriginal I am sure) project of a hollowed out book for housing my kindle was implemented.


My hollowed out book kindle cover
A quick Google later and I had a set of plausible instructions to follow. I used possibly the most out of date book ever (published 1981) on electronic test equipment, partly because it was an ex-library sell off which cost 10 pence back in 1995, but mainly because it was the right size to just enclose the kindle without adding too much size.

I learnt a couple of things doing this:
  • Do not let your PVA (white) glue mix get too runny; you want it fluid enough to be easily absorbed but not watery - this is important because otherwise the paper absorbs too much water and crinkles.
  • Do not use a book where the binding has already gone bad; select a "clean" book. The spine of this book was yellowed and cracking before I started, which means the book spine simply cracks open at the hollowed out bit and it is very obvious.
  • Work out where the "solid" part at the back is going to be and treat that separately so you get a nice solid base at the back of the hole. In mine it is not all stuck together and is a bit wavy. Do be sure you leave enough depth for the kindle though.
  • Take your time and be careful with the glue; it is amazing how obvious even a simple splash of glue in the wrong place is. Use a small brush for this - a paint brush is fast but sloppy.
  • Measure carefully and cut only a few pages at a time; it takes a bit longer but looks much better. Also I did not drill the corners of my hole, which means they are a little scruffy.
  • Use the sharpest, thinnest knife you can; this really helps. I started with a small Stanley knife but switching to my hobby scalpel gave much better results.
  • If you have some, use woodworking clamps to clamp a bit of timber (I had some offcuts of shelving) around the book to compress it while the glue dries. Do not clamp the spine if you can avoid it. This method ensures:
    1. Heavy things do not fall off the book while it dries.
    2. An even strong pressure is applied.
    3. The book does not warp or bend while the glue dries
All in all I kinda like the results and I think I will try again with a more modern book where the spine is not so broken to begin with.

Saturday, 16 October 2010

Coming to terms

Yesterday was not a great day. One of the family's cats died.

Molly just after she arrived 26th September 2000
Molly, for that was her name, came to us the week Melodie and I moved into our first house together. September 15th 2000, in the middle of the fuel protests which were raging at the time, we hired a van and moved in.

Most people would have considered that enough for one week! We happened to be in Halifax on the Sunday afternoon at the supermarket and decided to visit the RSPCA because somehow we decided what our new home needed was a cat.

On that day a decade ago we saw Molly and Lucy, a pair of cats who needed a new home as their previous owners had a baby who was allergic to them. I still recall them skulking at the back of their cage in the cattery, Lucy reluctant to come down off her perch to be petted and Molly looking generally unhappy. No-one else wanted this misfit pair because they were already both over 5 years old, and the warden was despairing of ever finding them a home.

Of course they were obviously the right animals for us! ;-) So by the end of the week we had taken them home. Molly immediately showed how things were going to be by shredding her way out of the RSPCA provided double walled cardboard carrier. To the day she died she detested being placed inside a carrier, funny that.

Melodie and Molly in the snow 28th December 2000
By the end of the year, when the snow came, the cats ruled the house and we were all happy together.

Little did we realise that soon in the summer of 2001 there would be another arrival to the family.

Molly with Melodie and Alex 2001
The arrival of our first child in June 2001 was a complete change in all our lives but we all managed to settle back into a routine. Although the banishment of the cats from upstairs remained a point of disagreement for a long time.

As Alex grew up the cats learned that feeding times for Alex could result in all manner of things falling from above. Soon Alex was mobile which resulted in different lessons on cats being sharp objects if not treated with respect. In February 2003 our family grew once more with the arrival of Joshua. The cats, now used to infants, took this in their stride.

Molly in February 2004 hiding on a windowsill
While still about day to day, the cats were not captured on camera so often from this point on.

Over the forthcoming months they ensured they were out of reach of the newborn and the precocious toddler.

By the time of Joshua's first birthday in 2004 Molly had taken to hiding "out of the way" as much as possible but remained as affectionate as ever.

Molly in a box March 2007
As the children grew up and life progressed Molly became ever more at home and developed the odd aggravating trait like taking clean washing off airers and dragging it out the window, through the cat flap and upstairs so she could sleep on it!

She still enjoyed participating in claiming boxes and defending them vigorously though!

The kids took on the job of feeding the cats which made their bond closer ensuring they were greeted by happy cats sitting lookout as they came home from school each day.

Molly continued to be a good companion and an infuriating self centred animal like all good cats.
Molly asleep in the kitchen sink
Then on Tuesday evening our Neighbour knocked on our front door. They gave us the news that one of our cats had been run over and taken to the Vet. I immediately went to the back door and called and shook the treat box.

Lucy came running, but Molly did not. We rang the vets, who confirmed they had Molly (we had them electronically tagged back in Halifax) and advised that we did not come and see her until they had had a chance to assess her and deal with the shock.

As the days progressed her prognosis improved and then sank. She had a broken jaw, broken teeth, a dislocated hip, extensive bruising and something was definitely wrong with her kidney function.

Thursday evening the whole family went to see her and she purred and seemed happy to see us; she looked as bad as I feared though, and somehow I knew there and then that this was probably goodbye. The children stroked her and petted her for a while and we left with an odd sadness - oh, and to the sound of Molly trying to gouge the veterinary nurse.

Yesterday she was due to have surgery to fix her jaw and hip...Alas when she was anaesthetised and x-rayed once more it became evident the hip was not just dislocated but her femur was fractured and there was additional damage. So just after midday came the call to ask what we wanted to do. The vet could attempt the repairs but due to her age and the other complications it was probably futile. So with heavy heart we agreed the best thing was not to revive her and she died a short time later.

I shall miss her morning greetings, her demands for attention, her sleeping in odd places and her companionship. I keep calling for her at meal times forgetting she will never come. I think Lucy is upset too, after spending their lives together her friend is gone and I cannot explain that to her.

So that is the end of Molly, a good cat.


Tuesday, 5 October 2010

Compiling!

When I am writing software sometimes XKCD is accurate!

Alas I can only fully participate in that activity when the boys get home from school.

The rest of the time I have to make do with other distractions.

I am currently participating in a "higher" speed broadband trial (I already have a 50Mbit service). This appears to involve the drastic step of remotely reconfiguring my Cable Modem :-)

In the last week there has been the odd request from the trial organisers to test throughput using various website based testing applications. These applications seem completely unable to cope with these 50Mbit+ connections and the results are as unreliable as expected.

To address this the trial organisers asked us to time downloading of a gigabyte file from one of their servers. I was surprised to discover that it took over 350 seconds to download the example file giving a less than stellar 3Megabyte/second rate.

So I used some "compiling" time today to look at what was going on. Firstly I went looking for an iperf-like tool for HTTP. It turns out there isn't one, which came as a bit of a surprise... oh well, with a little help from my friends I came up with
curl -o /dev/null http://target.domain/1GB.bin 2>&1 | tr "\r" "\n" |awk '{print $12 }' >test1.dat
This gets a file with a "current transfer speed" sample for each second of the transfer. OK, so let's do the transfer a few times and collect the output so we have a reasonable data set.

So we have a pile of numbers... not terribly useful; let's visualise them! To the gnuplot mobile!

We need a gnuplot script something like say this:
set terminal png nocrop enhanced font arial 8 size 1024,600 xffffff
set output 'xfer.png'
set style data linespoints
set title "1Gigabyte file transfer throughput"
set ylabel "Throughput in Kilobytes per second"
set y2label "Speed in Megabits per second"
set xlabel "Seconds of transfer"
set ytics 1024
set y2tics ("10" 1220, "20" 2441, "50" 6102, "100" 12204)
set grid noxtics y2tics
set yrange [0:13000]
set datafile missing "-"
plot 'test1.dat' using 1 title 'Test1', \
'test2.dat' using 1 title 'Test2', \
'test3.dat' using 1 title 'Test3'
Once run through gnuplot I extracted a lovely graph which shows a couple of things.

Mainly that even with a nice fat downstream you are unlikely to realise the maximum throughput very often even from a server on your ISP local network.

On the other hand I now have a way to examine throughput of downloads ;-)

Wednesday, 22 September 2010

I like driving in my car. It is not quite a Jaguar

I work from home, this is a good thing. I benefit from a 20 metre commute, comfortable working environment and generally low carbon lifestyle.

Except on Wednesdays: on Wednesday I have to get up early and go to the office, which is not usually too much of a chore and takes around 90 minutes each way.

Today was different. I made a small five minute diversion to collect a colleague to whom I was giving a lift and then, due to a little problem near the M6/M61 junction, spent a fun-filled three hours sitting in traffic crawling along the M56. At least I had company instead of being on my own.

Before I did the return journey I decided to check the traffic news sites. Oh dear, now the M60 was stuffed. I altered the route and only had to queue on the M56 for twenty minutes or so. I dropped my colleague off at his place (avoiding the worst bits of the M62/M66 junctions by use of a rather convoluted back route) and proceeded to queue on the M62 for a while for no apparent reason.

Basically I have spent almost seven hours in the car today to do 150 miles, a little over 20 miles per hour average. I was just going to rant about the dreadful lack of any redundancy or resilience in the UK road system, which often grinds to a complete and utter halt if there is a single failure.

However a different thought has wandered across my travel weary mind. It has occurred to me that this average speed is faster than anyone could reasonably expect to do this trip for the majority of human existence.

In 1810, and indeed for all time before, your best possible speed by good horse for 150 miles would have been two days (and your horse would probably have been very poorly afterwards). This assumes your horse could do the 75 miles (120km) each way in times consistent with modern world endurance trials... across a mountain range! Yes, the Pennines are only tiny, but even so!

A hundred years later, in 1910, the British railway network was nearing its zenith in most measurable terms. The influence across the north of England was profound and pushed the industrial revolution ever faster towards its climax before the first world war. Even at this point in time my best reading of the available timetables says I would have needed to change trains four times each way, purchased eight separate tickets from six different companies and taken around nine hours to make the journey allowing for hanging around on platforms.

Another fifty years on, in 1960, the trans-Pennine car journey would have been on poorly maintained trunk routes through the decaying cores of the declining post-industrial northern cities. The route would probably have involved the A646, A59 or the A58, which at this time were not the well maintained (if slightly shabby) roads of today but instead were dangerous, twisty and, from the looks of the archive photographs, positively heaving with traffic. On these pre-motorway strips of tarmac the 150 mile round trip would have taken in excess of seven hours (even today's mapping systems suggest over four and a half hours would be needed).

So instead of being frustrated that my commute took an extended period today I have instead decided that I shall enjoy the fact it was faster and certainly more comfortable than at any time in the past. Well that and I need to get the cars air-conditioning fixed ;-)



Thursday, 16 September 2010

The turmoil of an entropy key release.

Last week we released 1.1.3 of the Entropy Key software. Poor Daniel struggled for days to get this out the door but finally he managed to build all the various debs, rpms and tars for the supported platforms and Rob got it all uploaded and announced.

The release is kinda strange in that it was the first in which the main changes were for performance. OK there is an improvement to resilience in the face of failed re-keying which some users were seeing in high load situations, but that high load was (in some cases) being caused by the daemon itself.

The process was mainly driven by one of our users, Nix, who was experiencing ekeyd using "too much" CPU on his system.

Of course on our servers during testing ekeyd had used around a percent of CPU, certainly nothing that flagged as a problem in our own use (yes, we eat our own dogfood ;-) Alas for this user on a 500MHz Geode it was guzzling down 10% of his CPU, which was clearly unacceptable.

This user, however, instead of guessing what the problem might be or simply leaving it up to us, did something about it. He instrumented ekeyd, located the garbage collector tuning parameters as being incorrectly set and supplied a patch. Did he stop there? No! He then went on to profile the code further and clean up the hotspots. This resulted in ekeyd falling to less than 1% of the runtime of his system.

By reducing the CPU usage of ekeyd to this level it became more apparent where a previously reported bug was coming from, which enabled me to address it.

I know sometimes I complain about Open Source software, but at times like this it makes me happy that we released the ekeyd software freely. This is how it is supposed to be! Everyone working to make better software and benefiting together.

It has not just been on this occasion either; throughout the last year since our very first 1.0 release there has been helpful and useful feedback, patches from several users and even the odd thank-you mail. This project then has been a positive Open Source experience and I look forward to another constructive year maintaining this software.




Thursday, 2 September 2010

You shall go to the ball!

Contrary to my last post I was able to attend the Debian UK BBQ at the weekend. My wonderful wife ditched me at Portsmouth station with permission to go play with my friends ;-)

Perhaps a bit more explanation is warranted about that last statement! We travelled back from France last Saturday. We were on the 12:15 (CET) ferry so had to be awake and on the road for the five hour drive across France at "oh my gosh it's early" time. The crossing to Portsmouth was slow as it was very choppy and we were leaving the port at 15:30, at which point Melodie was good enough to let me go play with my friends while she drove home.

I did have the "fun" of doing the Portsmouth->London->Cambridge trip on UK public transport but it went pretty smoothly. Walking from Cambridge station to the BBQ location was a bit dumb, next time I am taking a cab!

The BBQ was excellent fun and big thanks to Steve for holding it again. It is always fun to meet the usual suspects. We also got to set a new occupancy record at Steve's house on Saturday night and discovered that certain members of Debian UK snore rather loudly (I think at one point we could measure it on the Richter scale).

Back home now of course. Work is the same as when I left so no change there, and the boys' first day back at school seems to have gone smoothly too.

Thursday, 26 August 2010

Sunny Brittany

Alas I did not go to DebConf 10, which by the looks of it was a blast for everyone; congratulations to the organisers. Nor will I be able to attend the traditional Steve McIntyre BBQ at the weekend; I hope everyone has fun.

On the other hand I have managed to take a family holiday in sunny Brittany...

OK, perhaps sunny is pushing it. We did have several nice days last week, which we spent on the beaches at Le Pouldu, but this week has been more challenging.

Fortunately the camp site where we are staying has reasonable bandwidth so I can continue to waste time online.

This has given me time to look at some Debian packaging. Specifically the mingw32-runtime packages. Their maintainer seems to be unwilling to allow an updated version to be uploaded despite there being numerous upstream releases since the last packaged release in 2007.

The packaging manual makes it clear that hijacking is not permitted, and I discover my desire for having a huge, unhelpful argument about maintaining a package is non-existent.

I guess when I have my updated packages available I will maybe announce them, but it is not the same. I guess this is one of those problems with being a Debian maintainer: we all have to rub along, even with decisions we disagree with. Hmm, I thought I had more to say on the subject... perhaps next time.

Anyway must go and entertain the kids for an hour or two, maybe go to the beach in the rain, hell they cannot get any soggier ;-)


Wednesday, 30 June 2010

Programmers are suckers for a meme

Many Open Source projects have an IRC channel for developers. The NetSurf project is no different. During a discussion someone jokingly suggested that one contributor should be asked to take the FizzBuzz test. Can you guess what happened next?

Ten minutes later Michael Drake posted this solid example in C which is where it all ought to have ended.
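For anyone who has never met it, the FizzBuzz exercise amounts to nothing more than the following (a stand-in sketch of my own, not Michael's actual posting):

#include <stdio.h>

/* The classic FizzBuzz: count 1 to 100, printing Fizz for multiples of
 * three, Buzz for multiples of five and FizzBuzz for multiples of both.
 */
int main(void)
{
        int i;

        for (i = 1; i <= 100; i++) {
                if ((i % 15) == 0)
                        printf("FizzBuzz\n");
                else if ((i % 3) == 0)
                        printf("Fizz\n");
                else if ((i % 5) == 0)
                        printf("Buzz\n");
                else
                        printf("%d\n", i);
        }

        return 0;
}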


Being programmers, of course this had a predictable result. The original question, the reason for asking it and any serious point being made in the original article were discarded, just so everyone (including myself) could play silly buggers over our lunch break. Coders, it would seem, simply like to produce a solution even if it is only for fun.

None of these programs took more than ten minutes (except the Java monstrosity), they are reproduced with permission, and I am to blame for none of them (OK, maybe just the one ;-).

First up was Rob Kendrick with the classic solution in C (his day job is as a support team lead, which makes those programmers who cannot do this seem even more scarily bad).


Next was Daniel Silverstone who turned this Lua solution out very quickly and berated the rest of us for not following the rules ;-)


The final C solution was my own uber silly sieve implementation


Peter Howkins decided the world required a solution in PHP


When it was pointed out that his solution stopped at 50 he presented this vastly superior and obviously idiomatic solution


Finally after a long time James Shaw caused mental anguish and wailing with this abomination unto Nuggin.


With Luck everyone has now got it out of their system and we will never have to put up with this again (yeah right). And now you also know why Open Source projects sometimes take ages to release ;-)

Thursday, 27 May 2010

Ex Phone

I went to the Linux Plumbers Conference (LPC) last year (which was very interesting and productive). While I was there Qualcomm were in attendance giving out penguin mints and running a competition. The lucky winners of this draw were to receive an ADP1 Android phone. So I dutifully filled in the entry form, handed it in and thought no more about it.

In the break after attending a particularly interesting workshop on the last day of the conference several other people congratulated me on winning the Qualcomm draw; this was the first I knew about it! I went in search of the nice people from Qualcomm and sure enough they handed over an ADP1 after getting a couple of photos.

Once I returned home I switched to using the ADP1 as my phone and started experimenting with Android kernel stuff. Then one day the Wi-Fi stopped working, which, while odd, could be overcome by repeatedly unloading and reloading the driver until it worked once more. Then one day the USB stopped working: no more ADB, no more console, no more debugging, no more hacking.

Then one day it turned itself off and never came back. I have been forced to return to using a hand me down, very kindly given to me by Daniel Silverstone. Alas it is not a smart phone of any kind and my finances do not allow for me to spend the money to replace it.

I have repeatedly tried to contact Qualcomm open source to see if there is any kind of warranty I might be able to use to get the phone repaired; alas, all the contact addresses I have are now simply returning SMTP errors.

So this is pretty much a tale without a happy ending, unless anyone out there knows the right people to contact? Perhaps someone at Google maybe? I only revisit the subject at all because of the recent announcement of an Android 2.1 based edition for these devices which reminded me I like to play with these things.