Saturday, 24 October 2015
I think Regina Brett has a point, although having now experienced being a juror in a British Crown Court I have a much better understanding of both the process and the effectiveness of the jury system.
The actual process of becoming a juror on a case was something I had not been aware of previously. You simply receive a letter telling you to be at the court on a specific date and that you are required to be available for at least ten days, possibly more. The only qualification for receiving the letter is being on the electoral roll, and it is an invitation with few options to refuse without serious repercussions.
When you arrive at the court you are directed to the jury lounge (practical hint: take a book), where you notice there are over forty people. This would seem odd until you realise there are three courtrooms and each needs a jury; even then there is an excess of people, which is down to the selection process.
The process of jury selection is fairly simple: an usher for a court comes and calls fourteen names, and this group forms the jury in waiting. The group is taken up to the court waiting room (a room that gets terribly familiar over the forthcoming weeks) and then twelve names are called.
As each person is called they enter the jury box, in an order which persists for the entire trial (practical hint: remember your juror number). Before each person is sworn or affirmed there is the possibility they will be found unsuitable and replaced by one of the previously unselected jurors. Any unselected jurors are then sent back to the jury lounge and become available for forming another jury in waiting.
Anyone unselected at the end of the process has to remain available to return to the court, to form a jury in waiting when a previous trial ends, until they have exhausted their duty. There were a few of these unfortunate people who were kept in a state of limbo for several days, and I am relieved this did not happen to me.
Being a juror, from a purely practical perspective, felt like working an office job for ten days. Duties consisted of attending a series of meetings with strange rules, and the typically understated British approach to mentioning the result of breaking them.
I participated in two cases, both of which were (almost by definition) unpleasant happenings: a case of grievous bodily harm and an offence under section 7 of the Sexual Offences Act 2003.
Both cases were challenging in their own ways: the first because of the way the case was presented, and the second because of its subject matter. One of the most important rules is "Do not discuss anything with anyone", on pain of contempt of court, so going along with that I will not be discussing any details. Because I cannot be specific this post has become a little impersonal; you will have to forgive me, as I found I had to remove a great deal more content which was not appropriate.
An important thing to note is that the trials bore no resemblance to TV courtroom drama. They proceed in a professional manner with very few theatrics. The prosecution barrister commences by outlining the case against the accused, calling witnesses and reading uncontested material into evidence. The defence then gets to present their case, again calling witnesses and placing documents into evidence.
One of the striking things about this process is that if the barristers do not call a witness or present evidence that would seem to be pertinent, the jury must not draw any inference from that omission. This is especially bizarre when a central witness, referred to by almost everyone involved with the case, is not called.
Once the case is presented the jury is sequestered in a room and must come to a unanimous decision on each of the charges. This was, for me, the most challenging part of the whole process. Twelve people with unique views on the information presented have to attempt to discuss the evidence and not simply go with their first impressions based on their preconceptions.
The jury is allowed to ask for some evidence to be repeated, and if deliberations take some time the jury may be instructed that a majority of ten to two will be accepted. I imagine at some point a jury can simply run out of time to reach a decision, presumably resulting in a hung jury, but I did not experience this.
Overall the experience was enlightening, if not enjoyable. I understand the process a lot more, am happy to have discharged my duty, and am equally glad the responsibility will not come around again for at least a few years.
Saturday, 17 October 2015
Raspberries are not the only fruit
I have worked with ARM based systems for longer than I care to admit to myself. From the Acorn Archimedes 305 in 1987 through to modern 64-bit systems, I have seen many, many changes in the ARM community. One big change has been the rise of the inexpensive single board computer (SBC).
Arguably the Raspberry Pi (RPi) was responsible for starting this trend. Before the RPi there were small development boards available (I was even involved in producing some of them) but none of these really became a big thing; they were principally vehicles for silicon vendors to showcase their SoC in an accessible way. By accessible I mean that silicon vendors who were previously used to charging many thousands of pounds for their development boards were now only charging hundreds.
In my opinion the RPi was a complete disruption to the SBC market. In early 2012 a complete ARM computer system could suddenly be purchased for $35 (£25), substantially cheaper than the best contemporary competitor, the BeagleBone at $89 (£60).
To be clear, the reason the RPi succeeded (millions sold, household name) was not price alone but also the large amount of good supporting software and how easy it was made for teachers and makers to use.
There were several issues raised about the original RPi, which it managed to turn into opportunities for using what was already available or for third parties to provide:
- no case
- no integrated storage
- not having an x86 processor
- being a slow processor
- limited peripheral support
- no real time clock
- not supplying keyboards, mice
- not providing displays.
With that market acceptance there have been many, many new SBCs coming to market with better peripherals and increasingly competitive pricing. These are technically not clones, as none of them use the Broadcom processor of the RPi, but they often share many features and possibly even a compatible expansion header.
The Banana Pi was one of these copycats, which I acquired for a similar price to an RPi in 2014. The main processor of this system was a 1GHz dual core Allwinner A20 (a considerable advance on the 700MHz single core of the original RPi) coupled to a gigabyte of memory. Additionally the board benefited from having SATA and a gigabit Ethernet MAC, which made for a much more versatile system. Various third parties filled in the missing peripherals, including my own attempt at a case.
I acquired a Cubietruck for the NetSurf project to use as a build node in their CI system. This is again based on the A20, but with more memory and somewhat better peripheral support, though at a substantial cost over the Banana Pi.
The most recent addition to this form factor is the introduction by Xunlong Software of the Orange Pi PC. This little board is the same footprint as the RPi2 B+ design but with differing connector placement. The processor is a quad core 1.6GHz Allwinner H3 with a gigabyte of memory, and the board has Ethernet and USB but no SATA.
The big news about this board, though, is the price: at $15 (£10) it resets price expectations just as the RPi did before it. I was initially sceptical of the quality of the product (or whether it would arrive at all) but I have acquired five of these boards and every one of them came well packaged, boxed and in a static bag, just like the RPi, and they all worked.
I created a case design based on my RPi slimline case so they would be protected when piled up with all the other boards. The use of a DC barrel jack instead of micro USB for power is better in that the connector is more robust and intended for higher current draws, but it does mean additional leads are needed. There are, however, two flies in the ointment; neither is a showstopper but both make the board a little more difficult to use.
The first is the simple necessity of a substantial heatsink on the H3 processor. Initially I used a small 20 x 20mm copper heatsink (around 800mm² of surface area) but this was insufficient under full load. I did not want to have to use a fan, so I milled some slots into copper round bar, then cut off sections and faced them on a lathe. The completed design had more than 2200mm² of surface area and cost around $2.50 (£1.50) in material (and a couple of hours in the workshop, but that was fun and I made something).
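As a rough sanity check on those area figures, a quick back-of-envelope calculation suffices; the dimensions below are assumptions for illustration, since the real parts were machined to fit rather than to a written spec.

```c
/* Back-of-envelope estimate of the slotted round-bar heatsink's
 * surface area. All dimensions are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const double pi = 3.141592653589793;
    double d = 20.0;     /* bar diameter in mm (assumed) */
    double h = 12.0;     /* section height in mm (assumed) */
    double slot_d = 7.0; /* slot depth in mm (assumed) */
    int slots = 6;       /* slots milled across the bar (assumed) */

    double side = pi * d * h;                /* cylindrical side wall */
    double top = pi * d * d / 4.0;           /* top face, ignoring slot openings */
    double walls = slots * 2.0 * slot_d * d; /* each slot exposes two new walls */

    printf("approximate area: %.0f mm^2\n", side + top + walls);
    return 0;
}
```

With those guessed dimensions this comes out around 2700mm², comfortably consistent with the 2200mm² the finished parts exceeded.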
The second issue, and arguably the more serious, is software. Let me be honest: it is dreadful, I mean very bad indeed. The images provided from the Orange Pi website are some of the worst examples of "do something quick" I have experienced.
Fortunately a user on the forums named loboris decided to create scripts that generate Debian (and Ubuntu and Fedora) distribution images that can be installed from SD card. He relies on a somewhat patched 3.4 kernel, full of Allwinner vendor changes and the inevitable binary blobs for the Mali GPU, but the result does work.
I have had a few units acting as distcc compiler slaves for two weeks now, at 100% CPU load, and they are still running. The processor does not get overly warm with the heatsink installed and Debian behaves just fine. The main caveat is that it is definitely not going to work if you try to update the kernel through packaging.
The state of the software support, and Xunlong Software relying on a forum user to complete their product, does tarnish an otherwise impressive and possibly market-changing SBC.
Perhaps I expect too much for fifteen bucks? I guess when the cost of the case, heatsink, cables and memory card is similar to the rest of the computer there is simply no margin left for anything else.
In conclusion, hopefully this brief overview has provided some insight into what is available in this market and shown that the Raspberry Pi is indeed not the only option.
I finish with an image of my ARM build farm, consisting of every SBC I mention here (including a couple of RPis).
Friday, 24 July 2015
NetSurf developers and the Order of the Phoenix
Once more the NetSurf developers gathered to battle the forces of darkness or, as they are more commonly known, web specifications.
The fifth developer weekend was an opportunity for us to gather in a pleasant setting and work together in person. We were graciously hosted, once again, by Codethink in their Manchester offices.
Four developers managed to attend in person from around the UK: Michael Drake, John-Mark Bell, Daniel Silverstone and Vincent Sanders.
The main focus of the weekend's activities was to address two areas that have become overwhelmingly important: JavaScript and layout.
Although the browser obviously already has both of these, they are somewhat incomplete and incapable of supporting the modern web.
JavaScript
The discussion started with JavaScript and its implementation. We had previously looked at the feasibility of changing our JavaScript engine from Spidermonkey to Duktape, and had decided this was a change we wanted to make once Duktape was mature enough to support the necessary features.
The main reasons for the change are that Spidermonkey is a poor fit for NetSurf: it is relatively large and does not provide a stable API guarantee. The lack of a stable API requires extensive engineering to update to new releases. Additionally, support for compiling on minority platforms is very challenging, meaning that most platforms are stuck using version 1.7 or 1.8.5 (the current release is version 31 with 38 due).
We started the move to Duktape by creating a development branch, integrating the Duktape library and open coding a minimal implementation of the core classes as a proof of concept. This work was mostly undertaken by Daniel with input from myself and John-Mark. This resulted in a build that was substantially smaller than using Spidermonkey with all the existing functionality our tests cover.
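To give a flavour of why Duktape suits this kind of embedding, here is roughly the minimal program its documentation starts you with; this is a generic sketch, not NetSurf's actual integration code.

```c
/* Minimal Duktape embedding: create a heap, evaluate a script and
 * read the result off the value stack. A generic sketch, not NetSurf. */
#include <stdio.h>
#include "duktape.h"

int main(void)
{
    duk_context *ctx = duk_create_heap_default();
    if (ctx == NULL)
        return 1;

    duk_eval_string(ctx, "1 + 2");
    printf("result: %d\n", (int) duk_get_int(ctx, -1));
    duk_pop(ctx); /* drop the evaluation result */

    duk_destroy_heap(ctx);
    return 0;
}
```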
The next phase of this work is to take the prototype implementation and turn it into something that can be reliably used and covers the entire JavaScript DOM interface. This is no small job as there are at least 200 classes and 1500 methods and properties to implement.
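To illustrate what each of those properties costs, a single read-only property bound through Duktape's C API looks something like the hypothetical sketch below; the names are invented for the example and this is not NetSurf's real binding code.

```c
/* Hypothetical sketch: exposing one read-only DOM-style property via a
 * native getter. Names are invented for the example, not NetSurf code. */
#include <stdio.h>
#include "duktape.h"

static duk_ret_t window_get_location(duk_context *ctx)
{
    duk_push_string(ctx, "https://example.invalid/");
    return 1; /* one value returned on the value stack */
}

int main(void)
{
    duk_context *ctx = duk_create_heap_default();

    duk_push_global_object(ctx);
    duk_push_string(ctx, "location");               /* property key */
    duk_push_c_function(ctx, window_get_location, 0 /* nargs */);
    duk_def_prop(ctx, -3, DUK_DEFPROP_HAVE_GETTER); /* accessor property */
    duk_pop(ctx); /* the global object */

    duk_eval_string(ctx, "location"); /* runs the native getter */
    printf("location = %s\n", duk_get_string(ctx, -1));

    duk_destroy_heap(ctx);
    return 0;
}
```

Multiply that boilerplate by some 1500 methods and properties and the scale of the job becomes clear.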
Layout
The layout library design discussion was extensive and very involved. The layout engine is a browser's most important component: it takes all the information processed by the CSS and DOM libraries, applies a vast number of involved rules, and produces a list of operations that can be rendered.
This reimplementation of our rendering engine has been in planning for many years. The existing engine stems from the browser's earliest days more than a decade ago and has many shortcomings in architecture and implementation that we hope to address.
The work has finally started on libnslayout with Michael taking the lead and defining the initial API and starting the laborious work of building the test harness, a feature the previous implementation lacked!
The second war begins
For a war you need people, and it is a little unfortunate that this was our lowest ever turnout for the event. This is true of the project overall, with declining numbers of commits and interest outside our core group. If anyone is interested we are always happy to have new contributors, and there are opportunities to contribute in many areas: from image assets, through translations, to C programming.
We discussed some ways to encourage new developers and to try to get committed developers, especially for the minority platform frontends. The RISC OS frontend, for example, has needed a maintainer since the previous one stepped down. There was some initial response from its community, culminating in a total of two patches, when we announced the port was under threat of not being releasable in future. Unfortunately nothing further came of this and it appears our oldest frontend may soon become part of our history.
We also covered some issues from the bug tracker, mostly to see if there were any patterns we needed to address before the forthcoming 3.4 release.
There was discussion about recent improvements to the CI system which generate distribution packages from the development branch and how this could be extended to benefit more users. This also included authorisation to acquire storage and other miscellaneous items necessary to keep the project infrastructure running.
We managed over 20 hours of work in the two days and addressed our current major shortcomings. Now it just requires a great deal of programming to complete the projects started here.
Monday, 16 February 2015
To a child, often the box a toy came in is more appealing than the toy itself.
I think Allen Klein might not have been referring to me when he said that but I do seem to like creating boxes for my toys.
My Lenovo laptop has an Ultrabay, a way to easily swap optical and hard drives. This allows me to carry around additional storage and, providing I remembered to pack the drive, access optical media.
Over time I have acquired several additional hard drives housed in Ultrabay caddies. Generally I only need to access one at a time but increasingly I want to have more than one available.
Lenovo used to sell docking stations with multiple Ultrabays, but since the Series 3 was introduced this is no longer the case, as the docks have been reduced to port replicators.
One solution is to buy a SATA to USB converter, which lets you use a drive externally. However, once you have more than one drive this becomes somewhat untidy, not to mention that all those unhoused drives on your desk become something of a hazard.
Recently, after another close call, I decided what I needed was a proper external enclosure to house all my drives. After some extensive googling I found nothing suitable ready to buy. Most normal people would give up at this point; I appear to be an abnormal person, so I got the CAD package out.
A few hours of design and a load of laser cutting later I came up with a four bay enclosure that now houses all my Ultrabay caddies.
The design was slightly evolved to accommodate the features of some older caddies and to allow a pencil to be used to eject the drives (I put a square hole in the back).
The completed unit uses about £10 of plastic and takes 30 minutes to lasercut.
The only issue with the enclosure as manufactured is that Makespace ran out of black plastic stock and I had to use transparent to finish, so it is not in classic black as Lenovo intended.
As usual all the design files are publicly available from my design repo.
Monday, 17 November 2014
NetSurf Developer workshop IV
Over the weekend the NetSurf developers met to make a concentrated effort on improving the browser. This time we were kindly hosted by Codethink in their Manchester office in a pleasant environment with plenty of refreshments.
Five developers managed to attend in person from around the UK: Michael Drake, John-Mark Bell, Daniel Silverstone, Rob Kendrick and Vincent Sanders. We also had Chris Young providing some bug fixes remotely.
We started the weekend by discussing all the thorny core issues that had been put on the agenda and ensuring the outcomes were properly noted. We also held the society AGM which was minuted by Daniel.
The emphasis of this weekend was very much on planning and doing the disruptive changes we had been putting off until we were all together.
John-Mark and myself managed to change the core build system, as used by all the libraries, to using standard triplets to identify systems and the GNU autoconf style of naming for parameters (i.e. HOST, BUILD and CC being used correctly, so that, for example, setting HOST to a triplet such as arm-unknown-riscos selects a cross build).
This was accompanied by improvements and configuration changes to the CI system to accommodate the new usage.
Several issues from the bug tracker were addressed and we put ourselves in a stronger position to address numerous other usability problems in the future.
We managed to pack a great deal into the 20 hours of work on Saturday and Sunday, although, because we were concentrating much more on planning and infrastructure than on a release, the metrics of commits and files changed were lower than at previous events.
Thursday, 13 November 2014
The care of open source creatures
A mini Debian conference happened at the weekend in Cambridge at which I was asked to present. Rather than go with an old talk I decided to try something new. I attempted to cover the topic of application life cycle for open source projects.
The presentation abstract tried to explain this:
A software project that is developed by more than a single person starts requiring more than just the source code. From revision control systems through to continuous integration and issue tracking, all these services need deploying and maintaining. This presentation takes a look at what services a project ought to have, what options exist to fulfil those requirements, and takes a practical look at an open source project's actual implementation.
I presented on Sunday morning but got a good audience, and I am told I was not completely dreadful. The talk was recorded and is publicly available along with all the rest of the conference presentations.
Unfortunately, due to other issues in my life right now, I did not prepare well enough in advance and my slide deck was only completed on Saturday, so I was rather less familiar with the material than I would have preferred.
The rest of the conference was excellent and I managed to see many of the presentations on a good variety of topics, without an overwhelming attention to Debian issues. My youngest son brought himself along on both days and "helped" with front desk. He was also the only walk-out in my presentation; he insists it was just because he "did not understand a single thing I was saying", but perhaps he just knew who the designated driver was.
I would like to thank everyone who organised and sponsored this event for an enjoyable weekend, and I look forward to the next one.
Wednesday, 1 October 2014
It is a bad plan that admits of no modification
I find it somewhat interesting that, thousands of years later, our society still uses Publilius Syrus' sententiae, though I imagine the tendency to leave well enough alone means such phrases stay in usage.
One weekend Steve McIntyre asked me if I could find a source of some 40mm fans for some systems, with some pretty strict requirements. They needed to be long life and shift a lot of air to combat a persistent overheating issue.
I sat with him and went through Farnell's utterly hateful parametric web interface and eventually came up with a couple of options, all of which were very expensive. Only then did I stop and ask what the actual problem was.
Steve showed me one of the Debian ARM buildd boxes, which are Marvell development machines. These systems are powerful quad core machines housed in compact steel enclosures.
There is a single 40mm fan trying to provide cooling for the entire enclosure. When the units are placed horizontally and used intermittently this proves adequate. Unfortunately, when the systems are arranged vertically in a rack and run at full load continuously, they often overheat and have to be restarted. In addition, the small high speed fans need replacing frequently as their bearings wear out quickly.
This was obviously causing some issues for the Debian ARM ports, which Steve wanted to rectify. After talking the problem through for a while we came to the conclusion that we could use much larger 60mm fans to blow air directly through the top of the case onto the CPU heatsink.
Larger fans can be run much more slowly while moving a similar volume of air to the smaller 40mm fans, which gives a much longer service life.
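The intuition can be made concrete with the standard fan affinity laws, under the simplifying assumption that the fans are geometrically similar: flow scales with speed times the cube of diameter. The figures below are illustrative, not measurements of the actual fans.

```c
/* Fan affinity sketch: for geometrically similar fans, flow Q scales
 * as speed times diameter cubed (Q ~ n * d^3). Illustrative only. */
#include <stdio.h>

int main(void)
{
    double d_small = 40.0; /* original fan frame size in mm */
    double d_big = 60.0;   /* replacement fan frame size in mm */

    /* speed ratio for the 60mm fan to match the 40mm fan's airflow */
    double r = d_small / d_big;
    double speed_ratio = r * r * r;

    printf("the 60mm fan needs ~%.0f%% of the 40mm fan's rpm "
           "for similar airflow\n", speed_ratio * 100.0);
    return 0;
}
```

Roughly 30% of the rotational speed for the same airflow is what buys the longer bearing life (and quieter running).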
Steve proceeded to order enough parts to allow us to modify all the Debian systems; this worked out cheaper than a single "special" 40mm high volume fan.
I acquired a rather large steel hole punch. I chose this tool because it produces a much superior finish to a hole cutter, and this project demanded a high level of finish (not to mention I loved having a valid excuse to own and use a huge Allen key!).
If we had simply been modifying a single case I would have measured and marked up by hand. With the prospect of altering at least eight I laser cut a template from plywood which Andy Simpkins took great glee in excessively annotating.
We also used the opportunity to add bolt holes to securely attach the 2.5 inch SATA drives instead of using sticky pads.
Steve and I modified a single system to begin with, both to check our alignment and the efficacy of the change. We were pleasantly surprised to discover that hoiby could now repeatedly do kernel compiles with all four cores flat out, which was not possible before. The measured CPU temperature, which had previously been around 90°C, did not rise above 40°C.
Steve, Andy and I then arranged a day where we took all the remaining units out of the rack at ARM, modified and returned them. We used the facilities at the Cambridge Makespace where I am a member to do the modifications.
I broke two 3mm drill bits and dulled a 4mm bit drilling all the holes. Roger Smith was good enough to loan us his "Christmas tree bit" to ream the fan hole out to 16mm so we could thread the hole punch through and cut the 60mm fan aperture.
We managed to get quite an assembly line going and, in my opinion, the results look pretty professional.
It has been several months since we did this work and these systems continue to run without issue. To complete the story we can see some graphs courtesy of the DSA munin instance.
You can clearly see the huge drop in temperature at the end of week 25 despite the continuously high CPU load. There is also only a single gap in the data after the changes (gaps indicate crashes where data was not recorded), where before there were frequent and extensive periods when the systems were simply unusable.
One reason I continue to enjoy Debian so much is the wide variety of ways in which I can contribute, not only by maintaining my packages. Sometimes this kind of work does not receive the credit it deserves, and hopefully this post highlights a small part of the frantic paddling that goes on under the serene surface of the Debian project to keep things "just working".