Vincents Random Waffle (Vincent Sanders)<br />
<h2>
Twice and thrice over, as they say, good is it to repeat and review what is good.</h2>
2019-11-27<br />
Three years ago I <a href="http://vincentsanders.blogspot.com/2016/08/down-rabbit-hole.html">wrote</a> <a href="https://vincentsanders.blogspot.com/2016/10/rabbit-of-caerbannog.html">about</a> using the <a href="http://lcamtuf.coredump.cx/afl/">AFL fuzzer</a> to find bugs in several NetSurf libraries. I have repeated this exercise a couple of times since then and thought I would summarise what I found with my latest run.<br />
<br />
I started by downloading the latest version of AFL (2.52b) and compiling it. This went as smoothly as one could hope for and I experienced no issues, although having done this several times before probably helps.<br />
<h2>
libnsbmp</h2>
I started with <a href="http://source.netsurf-browser.org/libnsbmp.git/">libnsbmp</a>, which is used to render Windows <a href="https://en.wikipedia.org/wiki/BMP_file_format">BMP</a> and <a href="https://en.wikipedia.org/wiki/ICO_(file_format)">ICO</a> files; the ICO format remains very popular for website <a href="https://en.wikipedia.org/wiki/Favicon">favicons</a>. The library was built with AFL instrumentation enabled, some output directories were created for results, and one master and four subordinate fuzzer instances were started.<br />
<br />
<div style="background: #f8f8f8; background: #f8f8f8; border-width: 0.1em 0.1em 0.1em 0.8em; border: none; overflow: auto; overflow: auto; padding: 0.2em 0.6em; width: auto; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: navy; font-weight: bold;">vince@workshop:libnsbmp$</span> <span style="color: #19177c;">LD</span><span style="color: #666666;">=</span>afl-gcc <span style="color: #19177c;">CC</span><span style="color: #666666;">=</span>afl-gcc <span style="color: #19177c;">AFL_HARDEN</span><span style="color: #666666;">=</span>1 make <span style="color: #19177c;">VARIANT</span><span style="color: #666666;">=</span>debug <span style="color: green;">test</span>
<span style="color: #888888;">afl-cc 2.52b by <lcamtuf@google.com></span>
<span style="color: #888888;">afl-cc 2.52b by <lcamtuf@google.com></span>
<span style="color: #888888;">afl-cc 2.52b by <lcamtuf@google.com></span>
<span style="color: #888888;"> COMPILE: src/libnsbmp.c</span>
<span style="color: #888888;">afl-cc 2.52b by <lcamtuf@google.com></span>
<span style="color: #888888;">afl-as 2.52b by <lcamtuf@google.com></span>
<span style="color: #888888;">[+] Instrumented 633 locations (64-bit, hardened mode, ratio 100%).</span>
<span style="color: #888888;"> AR: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/libnsbmp.a</span>
<span style="color: #888888;"> COMPILE: test/decode_bmp.c</span>
<span style="color: #888888;">afl-cc 2.52b by <lcamtuf@google.com></span>
<span style="color: #888888;">afl-as 2.52b by <lcamtuf@google.com></span>
<span style="color: #888888;">[+] Instrumented 57 locations (64-bit, hardened mode, ratio 100%).</span>
<span style="color: #888888;"> LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp</span>
<span style="color: #888888;">afl-cc 2.52b by <lcamtuf@google.com></span>
<span style="color: #888888;"> COMPILE: test/decode_ico.c</span>
<span style="color: #888888;">afl-cc 2.52b by <lcamtuf@google.com></span>
<span style="color: #888888;">afl-as 2.52b by <lcamtuf@google.com></span>
<span style="color: #888888;">[+] Instrumented 71 locations (64-bit, hardened mode, ratio 100%).</span>
<span style="color: #888888;"> LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_ico</span>
<span style="color: #888888;">afl-cc 2.52b by <lcamtuf@google.com></span>
<span style="color: #888888;">Test bitmap decode</span>
<span style="color: #888888;">Tests:1053 Pass:1053 Error:0</span>
<span style="color: #888888;">Test icon decode</span>
<span style="color: #888888;">Tests:609 Pass:609 Error:0</span>
<span style="color: #888888;"> TEST: Testing complete</span>
<span style="color: navy; font-weight: bold;">vince@workshop:libnsbmp$</span> mkdir findings_dir graph_output_dir
<span style="color: navy; font-weight: bold;">vince@workshop:libnsbmp$</span> afl-fuzz -i <span style="color: green;">test</span>/ns-afl-bmp/ -o findings_dir/ -S f02 ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null > findings_dir/f02.log 2>&1 &
<span style="color: navy; font-weight: bold;">vince@workshop:libnsbmp$</span> afl-fuzz -i <span style="color: green;">test</span>/ns-afl-bmp/ -o findings_dir/ -S f03 ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null > findings_dir/f03.log 2>&1 &
<span style="color: navy; font-weight: bold;">vince@workshop:libnsbmp$</span> afl-fuzz -i <span style="color: green;">test</span>/ns-afl-bmp/ -o findings_dir/ -S f04 ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null > findings_dir/f04.log 2>&1 &
<span style="color: navy; font-weight: bold;">vince@workshop:libnsbmp$</span> afl-fuzz -i <span style="color: green;">test</span>/ns-afl-bmp/ -o findings_dir/ -S f05 ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null > findings_dir/f05.log 2>&1 &
<span style="color: navy; font-weight: bold;">vince@workshop:libnsbmp$</span> afl-fuzz -i <span style="color: green;">test</span>/ns-afl-bmp/ -o findings_dir/ -M f01 ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null
</pre>
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
The number of subordinate fuzzer instances was selected to allow the system in question (an AMD Ryzen 5 2600X) to keep all its cores in use at a clock of 4GHz, which gave the highest number of <br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtCA-NYzD2miFfEIM0jvidosMfJjpOpP06b_TIoByH7kKWIyWI-ILMR7HIWMIoQ84DdIs_Kg3oHeMZz7be2pseMR6eUwPDOcCV931P1jjE1tU16jpiwh8wxNQRPK6owaPplIBW3b9-fR6c/s1600/afl-libnsbmp-20191107.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="AFL master instance after six days" border="0" data-original-height="535" data-original-width="803" height="213" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtCA-NYzD2miFfEIM0jvidosMfJjpOpP06b_TIoByH7kKWIyWI-ILMR7HIWMIoQ84DdIs_Kg3oHeMZz7be2pseMR6eUwPDOcCV931P1jjE1tU16jpiwh8wxNQRPK6owaPplIBW3b9-fR6c/s320/afl-libnsbmp-20191107.png" title="AFL master instance after six days" width="320" /></a></div>
executions per second. This might be improved with better cooling but I have not investigated this.<br />
<br />
After five days and six hours the "cycle count" field on the master instance had changed to green, which the AFL documentation suggests means the fuzzer is unlikely to discover anything new, so the run was stopped.<br />
<br />
Just before stopping, the afl-whatsup tool was used to examine the state of all the running instances.<br />
<div style="background: #f8f8f8; background: #f8f8f8; border-width: 0.1em 0.1em 0.1em 0.8em; border: none; overflow: auto; overflow: auto; padding: 0.2em 0.6em; width: auto; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: navy; font-weight: bold;">vince@workshop:libnsbmp$</span> afl-whatsup -s ./findings_dir/
<span style="color: #888888;">status check tool for afl-fuzz by <lcamtuf@google.com></span>
<span style="color: #888888;">Summary stats</span>
<span style="color: #888888;">=============</span>
<span style="color: #888888;"> Fuzzers alive : 5</span>
<span style="color: #888888;"> Total run time : 26 days, 5 hours</span>
<span style="color: #888888;"> Total execs : 2873 million</span>
<span style="color: #888888;"> Cumulative speed : 6335 execs/sec</span>
<span style="color: #888888;"> Pending paths : 0 faves, 0 total</span>
<span style="color: #888888;"> Pending per fuzzer : 0 faves, 0 total (on average)</span>
<span style="color: #888888;"> Crashes found : 0 locally unique</span>
</pre>
</div>
<br />
For completeness, here is the graph of how the fuzzer performed over the run.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiM3Cvr1LAwfcg-pRzfd-ZwCN_kXVHFECVRcxEKln64ILc0hM4H-lSOIo_TxPkY_vhmeYYoQ6C4vhtH0c8wal-AyNZ18ZPSOcs0NPswVVrqGV7bHiFgiMN_daDox6UFvC_-YtLKLr_19tTT/s1600/afl-libnsbmp-20191107-graph.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="AFL fuzzer performance over libnsbmp run" border="0" data-original-height="300" data-original-width="1000" height="96" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiM3Cvr1LAwfcg-pRzfd-ZwCN_kXVHFECVRcxEKln64ILc0hM4H-lSOIo_TxPkY_vhmeYYoQ6C4vhtH0c8wal-AyNZ18ZPSOcs0NPswVVrqGV7bHiFgiMN_daDox6UFvC_-YtLKLr_19tTT/s320/afl-libnsbmp-20191107-graph.png" title="AFL fuzzer performance over libnsbmp run" width="320" /></a></div>
<br />
There were no crashes at all (and none have been detected through fuzzing since the original run) and the 78 reported hangs were checked; all actually decode in a reasonable time. It seems the fuzzer's default "hang" detection is simply a little aggressive for larger images.<br />
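Were I to repeat the run, the spurious hang reports could be avoided by relaxing that timeout. The following is a sketch rather than a command from the original session: afl-fuzz's -t option takes the hang limit in milliseconds, and a trailing '+' tells it to tolerate, rather than abort on, seed inputs that exceed the limit.

```shell
# Sketch only: the master invocation again, but with a five second
# hang timeout.  The command is built and echoed here rather than
# executed.
afl_cmd="afl-fuzz -t 5000+ -i test/ns-afl-bmp/ -o findings_dir/ -M f01 \
./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null"
echo "$afl_cmd"
```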
<h2>
libnsgif</h2>
I went through a similar setup with libnsgif, which is used to render the <a href="https://en.wikipedia.org/wiki/GIF">GIF image format</a>. The run was performed on a similar system for five days and eighteen hours. The outcome was similar to libnsbmp, with no hangs or crashes.<br />
<br />
<div style="background: #f8f8f8; background: #f8f8f8; border-width: 0.1em 0.1em 0.1em 0.8em; border: none; overflow: auto; overflow: auto; padding: 0.2em 0.6em; width: auto; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: navy; font-weight: bold;">vince@workshop:libnsgif$</span> afl-whatsup -s ./findings_dir/
<span style="color: #888888;">status check tool for afl-fuzz by <lcamtuf@google.com></span>
<span style="color: #888888;">Summary stats</span>
<span style="color: #888888;">=============</span>
<span style="color: #888888;"> Fuzzers alive : 5</span>
<span style="color: #888888;"> Total run time : 28 days, 20 hours</span>
<span style="color: #888888;"> Total execs : 7710 million</span>
<span style="color: #888888;"> Cumulative speed : 15474 execs/sec</span>
<span style="color: #888888;"> Pending paths : 0 faves, 0 total</span>
<span style="color: #888888;"> Pending per fuzzer : 0 faves, 0 total (on average)</span>
<span style="color: #888888;"> Crashes found : 0 locally unique</span>
</pre>
</div>
<h2>
libsvgtiny</h2>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXl2I49whBukCaRAytE3QZ797mmg_yvOJ-n22H0gOMfT5q10bbG7b-pvBY_7vrkKgjri5G2rBoR_l9wxoVNZ87QTDwyjZRVeA2OX8taB9wjAl_kfiq303yUioiluOvQd6g-izxv48GBmMw/s1600/high_freq.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="AFL fuzzer results for libsvgtiny" border="0" data-original-height="300" data-original-width="1000" height="96" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXl2I49whBukCaRAytE3QZ797mmg_yvOJ-n22H0gOMfT5q10bbG7b-pvBY_7vrkKgjri5G2rBoR_l9wxoVNZ87QTDwyjZRVeA2OX8taB9wjAl_kfiq303yUioiluOvQd6g-izxv48GBmMw/s320/high_freq.png" title="AFL fuzzer results for libsvgtiny" width="320" /></a></div>
I then ran the fuzzer on the <a href="https://en.wikipedia.org/wiki/Scalable_Vector_Graphics">SVG</a> render library, using a dictionary to help the fuzzer cope with a sparse textual input format. The run was allowed to continue for almost fourteen days with no crashes or hangs detected.<br />
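For those unfamiliar with the feature: an AFL dictionary is a plain text file of name="value" tokens handed to afl-fuzz with its -x option, and the quoted values get spliced into mutated inputs, which helps a great deal with keyword-heavy textual formats. The file below is an illustrative sketch, not the dictionary actually used for this run.

```shell
# A made-up miniature SVG dictionary; the token names are arbitrary
# labels, only the quoted values matter to the fuzzer.
cat > svg.dict <<'EOF'
tag_svg_open="<svg"
tag_svg_close="</svg>"
tag_path="<path"
tag_rect="<rect"
attr_width="width="
EOF

# The fuzzer is then started with the dictionary, along the lines of:
#   afl-fuzz -x svg.dict -i test/svg/ -o findings_dir/ -M f01 ...
wc -l < svg.dict
```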
<br />
In an ideal situation this run would have been allowed to continue but the system running it required a restart for maintenance.<br />
<h2>
Conclusion</h2>
The aphorism "<a href="https://en.wikipedia.org/wiki/Evidence_of_absence">absence of evidence is not evidence of absence</a>" seems to apply to these results. While the new fuzzing runs revealed no additional failures, that does not mean there are no defects in the code to find. All I can really say is that the AFL tool was unable to find any failures within the time available.<br />
<br />
Additionally, the test corpus AFL produced did not significantly change the code coverage metrics, so the existing set was retained.<br />
<br />
Will I spend the time to re-run these tests in future? Perhaps, but I think more would be gained from enabling fuzzing of the other NetSurf libraries and picking the low-hanging fruit there than from expending thousands of hours performing these runs again.<br />
<h2>
We can make it better than it was. Better...stronger...faster.</h2>
2019-07-11<br />
It is not a novel observation that computers have become so powerful that a reasonably recent system has a relatively long life before obsolescence. This is in stark contrast to the period between the nineties and the teens, when it was not uncommon for users with even moderate needs to upgrade their computers every few years.<br />
<div>
<br /></div>
<div>
This upgrade cycle was mainly driven by huge advances in processing power, memory capacity and ballooning data storage capability. Of course software engineers used up more and more of the available resources, and with each new release ensured users needed to upgrade to have a reasonable experience.</div>
<div>
<br /></div>
<div>
And then sometime in the early teens this cycle slowed almost as quickly as it had begun, as systems had become "good enough". I experienced this at a time when I was relocating for a new job and had moved most of my computer use to my laptop, which was just as powerful as my desktop but far more flexible.<br />
<div>
<br /></div>
<div>
As a software engineer I used to have a pretty good computer for myself, but I was never prepared to spend the money on "top of the range" equipment because it would soon be obsolete, and generally I had access to much more powerful servers if I needed more resources for a specific task.</div>
<div>
<br />
<div>
To illustrate, the system specification of my desktop PC at the opening of the millennium was:</div>
<div>
<ul>
<li>Single core <a href="https://en.wikipedia.org/wiki/Pentium_III">Pentium 3 running at 500MHz</a></li>
<li>Socket 370 motherboard with 100 MHz Front Side Bus</li>
<li>128 Megabytes of memory</li>
<li>A 25 Gigabyte Deskstar hard drive</li>
<li><a href="https://en.wikipedia.org/wiki/RIVA_TNT2">150 MHz TNT 2</a> graphics card</li>
<li>10 Megabit network card</li>
<li>Unbranded 150W PSU</li>
</ul>
</div>
<div>
But by 2013 the specification had become:</div>
<div>
<ul>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyldgCpa5VWDbbPGKXjePp3j7ERLRb7fkuROCAgILPJD3nKhG7lJnfgKZRk9jje7xgmFae0BoTxPMbf45cB79oAHFt5Z8ajLihdyzT6uVD-piBnfF6NvblPzBHcpdoDx4yl0lUbQFDmPlZ/s1600/20190711_173627.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="2013 PC build still using an awesome beige case from 1999" border="0" data-original-height="1146" data-original-width="1600" height="143" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyldgCpa5VWDbbPGKXjePp3j7ERLRb7fkuROCAgILPJD3nKhG7lJnfgKZRk9jje7xgmFae0BoTxPMbf45cB79oAHFt5Z8ajLihdyzT6uVD-piBnfF6NvblPzBHcpdoDx4yl0lUbQFDmPlZ/s200/20190711_173627.jpg" title="2013 PC build still using an awesome beige case from 1999" width="200" /></a>
<li>Quad core <a href="https://en.wikipedia.org/wiki/List_of_Intel_Core_i5_microprocessors#%22Ivy_Bridge%22_(quad-core,_22_nm)">i5-3330S Processor running at 2700MHz</a></li>
<li>FCLGA1155 motherboard running memory at 1333 MHz</li>
<li>8 Gigabytes of memory</li>
<li>Terabyte HGST hard drive</li>
<li>1,050 MHz integrated graphics</li>
<li>Integrated Intel Gigabit network</li>
<li>OCZ 500W 80+ PSU</li>
</ul>
</div>
<div>
The performance change between these systems was more than tenfold in fourteen years, with an upgrade roughly once every couple of years.<br />
<br />
I recently started using that system again in my home office, mainly for <a href="https://en.wikipedia.org/wiki/Computer-aided_design">Computer Aided Design</a> (CAD), <a href="https://en.wikipedia.org/wiki/Computer-aided_manufacturing">Computer Aided Manufacture</a> (CAM) and <a href="https://en.wikipedia.org/wiki/Electronic_design_automation">Electronic Design Automation</a> (EDA). The one addition was a widescreen monitor, as there was not enough physical space for my usual dual display setup.</div>
</div>
</div>
<div>
<br /></div>
<div>
To my surprise I increasingly returned to this setup for programming tasks. Firstly, being at my desk acts as an indicator to family members that I am concentrating, an effect the laptop no longer had. Secondly, I really like the ultra-wide display for coding; it has become my preferred display and I had been saving for a UWQHD monitor.</div>
<div>
<br /></div>
<div>
Alas, last month the system started freezing. Sometimes it would be stable for several days and then, without warning, the mouse pointer would stop, my music would cease and a power cycle was required. I tried several things to rectify the situation: replacing the thermal compound and the CPU cooler, and trying different memory, all to no avail.</div>
<div>
<br /></div>
<div>
As fixing the system cheaply appeared unlikely I began looking for a replacement, and was immediately troubled by the size of the task. Somewhere in the last six years, while I was not paying attention, the world had moved on; after a great deal of research I managed to come to an answer.</div>
<div>
<br /></div>
<div>
AMD have recently staged something of a comeback with their Ryzen processors after almost a decade of very poor offerings when compared to Intel. The value for money when considering the processor and motherboard combination is currently very much weighted towards AMD.</div>
<div>
<br /></div>
<div>
My timing also seems fortuitous as the new Ryzen 2 processors have just been announced, which has resulted in the current generation being available at a substantial discount. I was also encouraged to see that the new processors use the same <a href="https://en.wikipedia.org/wiki/Socket_AM4">AM4 socket</a> and are supported by current motherboards, allowing for future upgrades if required.<br />
<br />
I purchased a complete new system for under five hundred pounds, comprising:<br />
<ul><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgixP4M4bHrddg7t4OiwWoJwvolp_qEsbsJzmo1q5bFcxeSs3FSd31YLSZsorzeM3KMCon5iQrKSGesQX8o4shNr7ZFGAQ9u7jf2TO-vCrojRh7lFoTi_BO1mHDxmjMJPiLZ2MIixc0R7nZ/s1600/20190703_153913.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="New PC assembled and wired up" border="0" data-original-height="1240" data-original-width="1600" height="154" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgixP4M4bHrddg7t4OiwWoJwvolp_qEsbsJzmo1q5bFcxeSs3FSd31YLSZsorzeM3KMCon5iQrKSGesQX8o4shNr7ZFGAQ9u7jf2TO-vCrojRh7lFoTi_BO1mHDxmjMJPiLZ2MIixc0R7nZ/s200/20190703_153913.jpg" title="New PC assembled and wired up" width="200" /></a>
<li>Hex core Ryzen 5 2600X Processor running at 3600MHz</li>
<li>MSI B450 TOMAHAWK AMD Socket AM4 Motherboard</li>
<li>32 Gigabytes of DDR4-3200 memory</li>
<li>Aero Cool Project 7 P650 80+ platinum 650W Modular PSU</li>
<li>Integrated RTL Gigabit networking</li>
<li>Lite-On iHAS124 DVD Writer Optical Drive</li>
<li>Corsair CC-9011077-WW Carbide Series 100R Silent Mid-Tower ATX Computer Case</li>
</ul>
to which I added some recycled parts:<br />
<ul>
<li>250 Gigabyte SSD from laptop upgrade</li>
<li>GeForce GT 640 from a friend</li>
</ul>
I installed a fresh copy of Debian and all my CAD/CAM applications and have been using the system for a couple of weeks with no obvious issues.<br />
<br />
An example of the performance difference: compiling NetSurf from clean with an empty ccache used to take 36 seconds and now takes 16, which is a nice improvement. However a clean build with the results cached has gone from 6 seconds to 3, which is far less noticeable, and during development a normal edit, build, debug cycle affecting only a small number of files has gone from 400 milliseconds to 200, which simply feels instant in both cases.<br />
<br />
My conclusion is that the new system is completely stable but that I have gained very little in common usage. Objectively the system is over twice as fast as its predecessor but aside from compiling large programs or rendering huge CAD drawings this performance is not utilised. Given this I anticipate this system will remain unchanged until it starts failing and I can only hope that will be at least another six years away.</div>
<h2>
A very productive weekend</h2>
2019-02-19<br />
I just hosted a NetSurf developer weekend, which is an opportunity for us to meet up and make use of all the benefits of working together. We find the ability to plan work and discuss solutions without losing the nuances of body language generally results in better outcomes for the project.<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJiAv-E27V0j9O3JPMJSpFge7b8HyPBRH0b-fm5Y6p_EJh5FAniWus1OcLXVtxWcQ8SeFT_mPEh0uP_u7LF7mAB5JL3NXJ9LA34hY-w2rBK6P6p1Wmjec7014OdvH65lqlUHtjnGqFexyh/s1600/netsurfdev3.9.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="NetSurf Development build" border="0" data-original-height="733" data-original-width="1014" height="231" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJiAv-E27V0j9O3JPMJSpFge7b8HyPBRH0b-fm5Y6p_EJh5FAniWus1OcLXVtxWcQ8SeFT_mPEh0uP_u7LF7mAB5JL3NXJ9LA34hY-w2rBK6P6p1Wmjec7014OdvH65lqlUHtjnGqFexyh/s320/netsurfdev3.9.png" title="NetSurf Development build" width="320" /></a></div>
<div>
Due to other commitments on our time the group has not been able to do more than basic maintenance in the last year, which has resulted in the developer events becoming a time to catch up on maintenance rather than to make progress on features.</div>
<div>
<br /></div>
<div>
Because of this the <a href="http://wiki.netsurf-browser.org/developer-weekend/jul-2018/">July</a> and <a href="http://wiki.netsurf-browser.org/developer-weekend/nov-2018/">November</a> events last year did not feel terribly productive: there were discussions about what we should be doing and bugs considered, but a distinct lack of committed code.</div>
<div>
<br /></div>
<div>
As can be seen from our notes this time was a <a href="http://wiki.netsurf-browser.org/developer-weekend/feb-2019/">refreshing change</a>. We managed to complete a good number of tasks and actually add some features while still having discussions, addressing bugs and socialising.</div>
<div>
<br /></div>
<div>
We opened on the Friday evening by creating a list of topics to look at over the following days and updating the wiki notes. We also reviewed the cross compiler toolchains, which had been updated to include the most recent releases of components such as OpenSSL and curl.</div>
<div>
<br /></div>
<div>
As part of this review we confirmed the decision to remove the Atari platform from active support as its toolchain builds have remained broken for over two years with no sign of any maintainer coming forward.</div>
<div>
<br /></div>
<div>
While it is a little sad to see a platform removed, it had become a burden on our strained resources by requiring us to maintain a CI worker with a very old OS using tooling that can no longer be replicated. The tooling issue meant a developer could not test changes locally before committing, so testing changes that affected all frontends was difficult.</div>
<div>
<br /></div>
<div>
Saturday saw us clear all the topics from our list which included:</div>
<div>
<ul>
<li><a href="http://source.netsurf-browser.org/libwapcaplet.git/commit/?id=57a0bd85416ede86191dd2aed1b18e3899eb7323">Fixing</a> a bug preventing compiling our reference counted string handling library.</li>
<li>Finishing the sanitizer work started the previous <a href="http://wiki.netsurf-browser.org/developer-weekend/jul-2018/">July</a>.</li>
<li>Fixing several bugs in the Framebuffer frontend installation.</li>
<li>Making the Framebuffer UI use the configured language for resources.</li>
</ul>
<div>
The main achievement of the day, however, was implementing automated <a href="https://en.wikipedia.org/wiki/Software_testing#System_testing">system testing</a> of the browser. This was a project started by Daniel some eight years ago but worked on by all of us, so seeing it completed was a positive boost for the whole group.</div>
<div>
<br /></div>
<div>
The implementation consisted of a frontend named <a href="https://en.wikipedia.org/wiki/Infinite_monkey_theorem">monkey</a>. This frontend to the browser takes textual commands to perform operations (e.g. open a window or navigate to a URL) and generates results in a structured text format. Monkey is driven by a python program named monkeyfarmer, which runs a test plan and ensures the results are as expected.</div>
</div>
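The interaction can be pictured as a plan of commands fed to the monkey binary, with the driver checking each structured reply. The command vocabulary and file names below are invented for illustration; the real syntax lives in the monkeyfarmer source.

```shell
# A hypothetical three step plan: open a window, fetch a page, quit.
cat > plan.txt <<'EOF'
WINDOW NEW
WINDOW GO https://example.com/
QUIT
EOF

# The driver would then run something along the lines of:
#   ./nsmonkey < plan.txt > results.txt
# and assert that the structured results match the plan's expectations.
wc -l < plan.txt
```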
<div>
<br /></div>
<div>
This allows us to run a complete browsing session in an automated way; previously someone would have to manually build the browser and check the tests by hand. This manual process was tedious and rarely completed across our entire test corpus, generally concentrating on just those areas that had been changed, such as JavaScript output.</div>
<div>
<br /></div>
<div>
We have combined the monkey tools and our test <a href="http://source.netsurf-browser.org/netsurf-test.git/">corpus</a> into a <a href="https://ci.netsurf-browser.org/jenkins/view/Categorized/job/test-site-run/">CI job</a> which runs the tests on every commit giving us assurance that the browser as a whole continues to operate correctly without regression. Now we just have the task of creating suitable plans for the remaining tests. Though I remain hazy as to why, we became inordinately <a href="https://www.youtube.com/watch?v=szhs8BjgYH8">amused</a> by the naming scheme for the tools.</div>
<div>
<br /></div>
<div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOjh4IvSx5QAy2mISn226dOXlv_AfxaJ68sJrc4PX7yN_Z3TjkjN2nVdoKsCg9axVGgHhcTJc67bP6ili3PQ2IyH9faIj0ABEx1grRedPmpwMFujExU3tpxWAvtLFmFPT5Ubxw0f4_aaoz/s1600/webpgallery.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Google webp library gallery rendered in NetSurf" border="0" data-original-height="825" data-original-width="1045" height="251" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOjh4IvSx5QAy2mISn226dOXlv_AfxaJ68sJrc4PX7yN_Z3TjkjN2nVdoKsCg9axVGgHhcTJc67bP6ili3PQ2IyH9faIj0ABEx1grRedPmpwMFujExU3tpxWAvtLFmFPT5Ubxw0f4_aaoz/s320/webpgallery.png" title="Google webp library gallery rendered in NetSurf" width="320" /></a>We rounded the Saturday off by going out for a very pleasant meal with some mutual friends. Sunday started by adding a bunch of additional topics to consider and we made good progress addressing these. </div>
<div>
<br /></div>
<div>
We performed a bug triage and managed to close several issues and commit to fixing a few more. We even managed to create a statement of work of things we would like to get done before the next meetup.</div>
<div>
<br /></div>
<div>
My main achievement on the Sunday was to add WebP image support. This uses the <a href="https://developers.google.com/speed/webp/">Google libwebp library</a> to do all the heavy lifting, and adding a new image content handler to NetSurf is pretty straightforward.</div>
<h2>
All I wanted to do was check an error code</h2>
2018-09-30<br />
I was feeling a little <a href="https://dictionary.cambridge.org/dictionary/english/under-the-weather">under the weather</a> last week and did not have enough concentration to work on developing a new NetSurf feature as I had planned. Instead I decided to look at <a href="https://bugs.netsurf-browser.org/mantis/view.php?id=2595">a random bug</a> from our worryingly large collection.<br />
<br />
This led me to consider the HTML form submission function, at which point it was "<a href="https://dictionary.cambridge.org/dictionary/english/can-of-worms">can open, worms everywhere</a>". The code in question has a fairly simple job to explain:<br />
<ol>
<li>A user submits a form (by clicking a button or such) and the <a href="https://en.wikipedia.org/wiki/Document_Object_Model">Document Object Model</a> (DOM) is used to create a list of the information in the web form.</li>
<li>The list is then converted to the appropriate format for sending to the web site's server.</li>
<li>An HTTP request containing the correctly formatted information is made to the web server.</li>
</ol>
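As a sketch of what step two produces for the common application/x-www-form-urlencoded case (the field names here are invented, and NetSurf's real implementation is of course C, not shell):

```shell
# Percent-encode a single value: unreserved characters pass through,
# spaces become '+', everything else becomes %XX.
urlencode() {
    s=$1 out=
    while [ -n "$s" ]; do
        c=${s%"${s#?}"}          # first character of $s
        s=${s#?}                 # remainder
        case $c in
            [A-Za-z0-9.~_-]) out=$out$c ;;
            ' ') out=$out+ ;;
            *) out=$out$(printf '%%%02X' "'$c") ;;
        esac
    done
    printf '%s' "$out"
}

# Join the successful controls into the request body used in step three.
body="name=$(urlencode 'Vincent Sanders')&msg=$(urlencode 'hello world')"
echo "$body"   # name=Vincent+Sanders&msg=hello+world
```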
<div>
However the code I was faced with, while generally functional, was impenetrable, having accreted over a long time.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcTIjVrM2PI7mh0FHEs6LIIFjK7Q0ZmmPqEBcONANO9EP1AwwGnlmiGvDUJzAdHWvkyRSVtzbk6gLju0R7otEQZ8lx8rF555f4E3oV7Yq0yotuacQOb7vaFO1nAv4nUHbg57ArggJ3AG2C/s1600/nsform.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="screenshot of NetSurf test form" border="0" data-original-height="672" data-original-width="835" height="256" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcTIjVrM2PI7mh0FHEs6LIIFjK7Q0ZmmPqEBcONANO9EP1AwwGnlmiGvDUJzAdHWvkyRSVtzbk6gLju0R7otEQZ8lx8rF555f4E3oV7Yq0yotuacQOb7vaFO1nAv4nUHbg57ArggJ3AG2C/s320/nsform.png" title="screenshot of NetSurf test form" width="320" /></a></div>
At this point I was forced into a diversion to <a href="http://git.netsurf-browser.org/netsurf.git/commit/?id=9100fcb4095cf8858d4cd2c613bff69ceb4f71ec">fix up the core URL library</a> handling of query strings (used when form data is submitted as part of the requested URL), which was necessary to simplify some complicated string handling and make the implementation more compliant with <a href="https://url.spec.whatwg.org/">the specification</a>.<br />
<br />
My next step was to <a href="http://git.netsurf-browser.org/netsurf.git/commit/?id=5c96acd6f119b71fc75e5d48465afca9fd13e87f">add some basic error reporting</a> instead of warning the user the system was out of memory for every failure case, which was making debugging somewhat challenging. I was beginning to think I had discovered a <a href="https://en.wiktionary.org/wiki/yak_shaving">series of very hairy yaks</a>, although at least I was not <a href="https://www.youtube.com/watch?v=AbSehcT19u0">trying to change a light bulb</a>, which can get very complicated.<br />
<br />
At this point I ran into the <tt>form_successful_controls_dom()</tt> function, which performs step one of the process. This function had six hundred lines of code, hundreds of conditional branches, 26 local variables and five levels of indentation in places. These properties combined resulted in a <a href="https://en.wikipedia.org/wiki/Cyclomatic_complexity">cyclomatic complexity metric</a> (CCM) of 252. For reference, programmers generally try to keep a single function to no more than a hundred lines of code with as few local variables as possible, resulting in a CCM of no more than 20.<br />
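For intuition, in structured code the metric comes out at roughly the number of decision points plus one, which is why even a crude count of branch keywords gets you in the right ballpark. The function below is invented for illustration, and a real measurement should use a proper tool such as pmccabe; this is only a sketch of the idea.

```shell
# Write out a small example function...
cat > example.c <<'EOF'
static int find(int a, int b)
{
        if (a > 0) {
                while (b--) {
                        if (b == a)
                                return 1;
                }
        }
        return 0;
}
EOF

# ...then approximate McCabe's metric as branch keywords plus one.
# Three decision points here, so the estimate is 4.
branches=$(grep -cE '(if|while|for|case) *\(' example.c)
echo $((branches + 1))
```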
<br />
I now had a choice:<br />
<br />
<ul>
<li>I could abandon investigating the bug, because even if I could find the issue, changing such a function without adequate testing would be likely to introduce several more bugs.</li>
<li>I could refactor the function into multiple simpler pieces.</li>
</ul>
<div>
I slept on this decision and decided to at least try to refactor the code in an attempt to pay back a little of the technical debt in the browser (and maybe let me fix the bug). After several hours of work <a href="http://git.netsurf-browser.org/netsurf.git/commit/?id=7a61c957243f8f4fe4d8b89dc19e90aa98e98a25">the refactored source</a> has the desirable properties of:</div>
<br />
<div>
<ul>
<li>multiple straightforward functions</li>
<li>no function much more than a hundred lines long</li>
<li>resource lifetime is now obvious and explicit</li>
<li>errors are correctly handled and reported</li>
</ul>
</div>
<br />
<div>
I carefully examined the change in generated code and was pleased to see the compiler output had become more compact. This is an important point that less experienced programmers sometimes miss: if your source code is written such that a compiler can reason about it easily, you often get much better results than with the compact alternative. However, even if the resulting code had been larger the improved source would have been worth it.</div>
<div>
<br /></div>
<div>
After spending over ten hours working on this bug I have not resolved it yet; indeed one might suggest I have not even directly considered it yet! I wanted to use this to explain to users who have to wait a long time for their issues to get resolved (in any project, not just NetSurf) just how much effort can be involved in an apparently simple bug.</div>
<div>
<br /></div>
</div>
Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com6tag:blogger.com,1999:blog-3711269760993993197.post-72733371347125455302018-08-07T16:01:00.000+01:002018-08-07T18:49:21.006+01:00The brain is a wonderful organ; it starts working the moment you get up in the morning and does not stop until you get into the office.I fear that I may have worked in a similar office environment to <a href="https://en.wikipedia.org/wiki/Robert_Frost">Robert Frost</a>. Certainly his description is familiar to those of us who have been subjected to modern "<a href="https://en.wikipedia.org/wiki/Open_plan">open plan</a>" offices. Such settings may work for some types of job but for me, as a programmer, they have a huge negative effect.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4wFQDTx6Bxsm336WwrcajLFnGiAu8hSRYhTni-Tlc9hWrJ8ERmJHTWryT4hR4niCp7_auVceSf3szwH9DVndDJjyLYkOXiBO0N0bJeAB_njyFIp-qyth0n6d4b3Q-FaxOxYddHQXlF_pG/s1600/IMG_0921.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="My old basement office" border="0" data-original-height="1600" data-original-width="1184" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4wFQDTx6Bxsm336WwrcajLFnGiAu8hSRYhTni-Tlc9hWrJ8ERmJHTWryT4hR4niCp7_auVceSf3szwH9DVndDJjyLYkOXiBO0N0bJeAB_njyFIp-qyth0n6d4b3Q-FaxOxYddHQXlF_pG/s200/IMG_0921.JPG" title="My old basement office" width="147" /></a>When I decided to move on from my previous job my new position allowed me to work remotely. I have worked from home before so knew what to expect. My experience led me to believe the main aspects to address when home working were:<br />
<dl>
<dt>Isolation</dt>
<dd>This is difficult to mitigate but frequent face to face meetings and video calls with colleagues can address it, providing you are aware that some managers have a terrible habit of "out of sight, out of mind" management.</dd>
<dt>Motivation</dt>
<dd>You are on your own a lot of the time which means you must motivate yourself to work. Mainly this is achieved through a routine. I get dressed properly, start work the same time every day and ensure I take breaks at regular times.</dd>
<dt>Work life balance</dt>
<dd>This is often more of a problem than you might expect and not in the way most managers assume. A good motivated software engineer can have a terrible habit of suddenly discovering it is long past when they should have finished work. It is important to be strict with yourself and finish at a set time.</dd>
<dt>Distractions</dt>
<dd>In my previous office testers, managers, production and support staff were all mixed in with the developers, resulting in a lot of distractions; however, working at home brings its own distractions. It can be difficult to stop friends and family assuming you are available during working hours to run errands. I find I need to budget time for such tasks carefully and take it out of my working time as if I were actually in an office.</dd>
<dt>Environment</dt>
<dd>My previous office had "tired" furniture and decoration in an open plan which often had a negative impact on my productivity. When working from home I find it beneficial to partition my working space from the rest of my life and ensure family know that when I am in that space I am unavailable. You inevitably end up spending a great deal of time in this workspace and it can have a surprisingly large effect on your productivity.</dd>
</dl>
Being confident I was aware of what I was letting myself into, I knew I required a suitable place to work. In our previous home the only space available for my office was a four by ten foot cellar room with artificial lighting. Despite its size I was generally productive there as there were few distractions and the door let me "leave work" at the end of the day.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfTo44YVWLkAuEm0lQBIRIMdd1QkNHqPwDJA3LyrXke7HdP8mrmVmFJxRKJ3zz_TMkLQt7CwJZElMnJiRFXcY5kw6G3buYDt_VkhJrxvsV82ukOZiiSmJuS1UHD_UjQxg1HwEXJ-XwRe51/s1600/IMG_20170608_083300.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="Garden office was assembled June 2017" border="0" data-original-height="1096" data-original-width="1600" height="136" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfTo44YVWLkAuEm0lQBIRIMdd1QkNHqPwDJA3LyrXke7HdP8mrmVmFJxRKJ3zz_TMkLQt7CwJZElMnJiRFXcY5kw6G3buYDt_VkhJrxvsV82ukOZiiSmJuS1UHD_UjQxg1HwEXJ-XwRe51/s200/IMG_20170608_083300.jpg" title="Garden office was assembled June 2017" width="200" /></a></div>
This time my resources to create the space were larger and I wanted a place I would be comfortable spending a lot of time in. Initially I considered using the spare bedroom which my wife was already using as a study. This was quickly discounted as it would be difficult to maintain the necessary separation of work and home.<br />
<br />
Instead we decided to replace the garden shed with a garden office. The contractor ensured the structure selected met all the local planning requirements while remaining within our budget. The actual construction was surprisingly rapid. The previous structure was removed and a concrete slab base was placed in a few hours on one day and the timber building erected in an afternoon the next.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhM9rORyqIkyJzd5JQehhyQFlcgSJahbk8G3ZeT202YjAqno0qEEcShjwzP6tdxV7YK2Oi4WqC8LRBBqK-t9BJ_ZplsINKL1T2kQcuC0kP5rdGCSGsUk5tGRCF2YVv4vW_XAXLlbyZ182MU/s1600/20180807_115648.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Completed office in August 2018" border="0" data-original-height="1040" data-original-width="1600" height="130" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhM9rORyqIkyJzd5JQehhyQFlcgSJahbk8G3ZeT202YjAqno0qEEcShjwzP6tdxV7YK2Oi4WqC8LRBBqK-t9BJ_ZplsINKL1T2kQcuC0kP5rdGCSGsUk5tGRCF2YVv4vW_XAXLlbyZ182MU/s200/20180807_115648.jpg" title="Completed office in August 2018" width="200" /></a></div>
The building arrived separated into large sections on a truck which the workmen assembled rapidly. They then installed wall insulation, glazing and roof coverings. I had chosen to have the interior finished in hardwood plywood, which is hard wearing and easy to finish as required.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvlxjCv1YIPdQdv_l-fVrjI5I1wXGn989-qmwOOCUuRjFStm021CFmdGXX7pM1GRvZ6TTK3CdtggKZJWzhPjQT34kPtO4VmuY7AIk60A8wDahq9AcxVYUG_m1X2rhTDLCamba60fx7hN-A/s1600/IMG_20170727_115033.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="Work desk in July 2017" border="0" data-original-height="768" data-original-width="1024" height="150" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvlxjCv1YIPdQdv_l-fVrjI5I1wXGn989-qmwOOCUuRjFStm021CFmdGXX7pM1GRvZ6TTK3CdtggKZJWzhPjQT34kPtO4VmuY7AIk60A8wDahq9AcxVYUG_m1X2rhTDLCamba60fx7hN-A/s200/IMG_20170727_115033.jpg" title="Work desk in July 2017" width="200" /></a></div>
Although the structure could have been painted at the factory, Melodie and I did this ourselves to keep the project in budget. I laid a laminate floor suitable for high moisture areas (the UK is not generally known as a dry country) and Steve McIntyre and Andy Simpkins assisted me with various additional tasks to turn it into a usable space.<br />
<br />
To begin with I filled the space with furniture I already had; for example, the desk was my old IKEA Jerker which I have had for over twenty years.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXWIYpP4QTPSwQtWEoVEFcJe4DfjgOLcvY0Tm-7XPWksnKboaAUoiH4aD-gYGzdmfGKWDQFQrfv_bO5Zo9MT_vL2GPDvMBF_3DusauyCGfQfANv7IjeDnfrg2ud0YFZVB7McXFl7-6YJqN/s1600/20180807_123336.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Work desk in August 2018" border="0" data-original-height="808" data-original-width="1024" height="157" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXWIYpP4QTPSwQtWEoVEFcJe4DfjgOLcvY0Tm-7XPWksnKboaAUoiH4aD-gYGzdmfGKWDQFQrfv_bO5Zo9MT_vL2GPDvMBF_3DusauyCGfQfANv7IjeDnfrg2ud0YFZVB7McXFl7-6YJqN/s200/20180807_123336.jpg" title="Work desk in August 2018" width="200" /></a></div>
Since then I have changed the layout a couple of times but have finally returned to having my work desk in the corner looking out over the garden. I replaced the Jerker with a new <a href="https://www.ikea.com/gb/en/products/desks/desk-computer-desks/skarsta-desk-sit-stand-white-spr-49084965/">IKEA Skarsta</a> standing desk, PEXIP bought me a nice work laptop and I <a href="https://www.etsy.com/uk/shop/SiyahKediPhotography">acquired a nice print from Lesley Mitchell</a> but overall little has changed in my professional work area in the last year and I have a comfortable environment.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEit9PWa4nCHeIgtX_OVEvmoLFhCFeEFlN2MY0JolChdpLFhmNOyDKRnKWEInaB61ls9IHuTngvphh0Bvq9Jdmh0y0oUs7PoiPMFEm46REThRpCmdNGgMwfU5O9ujygn0ZRspIYaVjwlBsqP/s1600/20180807_131514.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="Cluttered personal work area" border="0" data-original-height="900" data-original-width="1600" height="112" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEit9PWa4nCHeIgtX_OVEvmoLFhCFeEFlN2MY0JolChdpLFhmNOyDKRnKWEInaB61ls9IHuTngvphh0Bvq9Jdmh0y0oUs7PoiPMFEm46REThRpCmdNGgMwfU5O9ujygn0ZRspIYaVjwlBsqP/s200/20180807_131514.jpg" title="Cluttered personal work area" width="200" /></a></div>
In addition the building is large enough that there is space for my electronics bench. The bench itself was given to me by Andy. I purchased some inexpensive kitchen cabinets and worktop (white is cheapest) to obtain a little more bench space and storage. Unfortunately all those flat surfaces seem to accumulate stuff at an alarming rate and it looks like I need a clear out again.<br />
<br />
In conclusion I have a great work area which was created at a reasonable cost.<br />
<br />
There are a couple of minor things I would do differently next time:<br />
<ul>
<li>Position the building better with respect to the boundary fence. I allowed too much distance on one side of the structure which has resulted in an unnecessary two foot wide strip of unusable space.</li>
<li>Ensure the door was made from better materials. The first winter in the space showed that the door was a poor fit as it was not constructed to the same standard as the rest of the building.</li>
<li>The door should have been positioned on the end wall instead of the front. Use of the building showed moving the door would make the internal space more flexible.</li>
<li>Plan the layout more effectively ahead of time, ensuring I knew where services (electricity) would enter and where outlets would be placed.</li>
<li>Ensure I have an electrician on site for the first fix so electrical cables could be run inside the walls instead of surface trunking.</li>
<li>Budget for air conditioning as so far the building has needed heating in winter and cooling in summer.</li>
</ul>
<div>
In essence my main observation is that better planning of the details matters. If I had been more aware of this a year ago perhaps I would not now be budgeting to replace the door and fit air conditioning.</div>
Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com3tag:blogger.com,1999:blog-3711269760993993197.post-10082420136826121652018-08-01T10:20:00.000+01:002018-08-01T10:20:07.095+01:00Irony is the hygiene of the mindWhile <a href="https://en.wikipedia.org/wiki/Elizabeth_Bibesco">Elizabeth Bibesco</a> might well be right about the mind, software cleanliness requires a different approach.<br />
<br />
Previously I have <a href="http://vincentsanders.blogspot.com/2017/03/a-rose-by-any-other-name-would-smell-as.html">written about code smells</a> which give a programmer hints where to clean up source code. A different technique, which has recently become readily available, is using tool-chain based instrumentation to perform run time analysis.<br />
<br />
At a recent <a href="http://wiki.netsurf-browser.org/developer-weekend/jul-2018/">NetSurf developer weekend</a> Michael Drake mentioned a <a href="https://2018.guadec.org/pages/talks-and-events.html#abstract-10-simple_tricks_to_assess_and_improve_the_security_o">talk he had seen at the Guadec</a> conference which referenced the use of sanitizers for improving the security and correctness of programs.<br />
<br />
Sanitizers differ from other code quality tools such as compiler warnings and static analysis in that they detect issues when the program is executed rather than by examining the source code. There are currently two commonly used instrumentation types:<br />
<dl>
<dt><a href="https://github.com/google/sanitizers/wiki/AddressSanitizer">address sanitizer</a></dt>
<dd>This instrumentation detects several common memory errors such as "use after free".</dd>
<dt>undefined behaviour sanitizer</dt>
<dd>This instruments computations where the language standard does not clearly specify the behaviour, for example left shifts of negative values (ISO 9899:2011 6.5.7 Bit-wise shift operators).</dd>
</dl>
As these are runtime checks it is necessary to actually execute the instrumented code. Fortunately most of the NetSurf components have good unit test coverage so Daniel Silverstone used this to add a build target which runs the tests with the sanitizer options.<br />
<br />
The previous investigation of this technology had been unproductive because of the immaturity of support in our CI infrastructure. This time the tool chain could be updated to be sufficiently robust to implement the technique.<br />
<div>
<br /></div>
Jobs were then added to the CI system to build this new target for each component in a similar way to how the existing coverage reports are generated. This resulted in failed jobs for almost every component, which we proceeded to correct.<br />
<br />
An example of how most issues were addressed is provided by Daniel <a href="http://git.netsurf-browser.org/libnsbmp.git/commit/?id=de65417b28d4b9fe4bf23cc57fd56450755878f8">fixing the bitmap library</a>. Most of the fixes ensured correct type promotion in bit manipulation, however the address sanitizer did find a real out of bounds access when a malformed BMP header is processed. This is despite this library being <a href="http://vincentsanders.blogspot.com/2016/08/down-rabbit-hole.html">run with a fuzzer</a> and electric fence for many thousands of CPU hours previously.<br />
<br />
Although we did find a small number of real issues, the majority of the fixes were to tests which failed to clean up the resources they used correctly. This parallels what I observed with other run time testing tools, like AFL and Valgrind: initially the test environment itself often has the largest impact on the number of detected issues.<br />
<br />
In conclusion it appears that an instrumented build combined with our existing unit tests gives another tool to help us improve our code quality. Given the very low amount of engineering time the NetSurf project has available, automated checks like these are a good way to help us avoid introducing issues.Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com1tag:blogger.com,1999:blog-3711269760993993197.post-36368704349995721672018-06-01T13:27:00.000+01:002018-06-01T13:27:30.442+01:00You can't make a silk purse from a sow's ear<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUrD2OHdGcy6acgEFY3ij18c2JkA2txkZddbtvROhVayaxcOCAqPoJuBCLWI-bwmgtDaKs9edxoMlkgRIpk_iQxQDmJZwzU1Vtc5NvLFnE0IbpMCYafboxFh4yaAZ20ZiipNTrUx8XecBt/s1600/20180202_165821.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Pile of network switches" border="0" data-original-height="900" data-original-width="1600" height="112" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUrD2OHdGcy6acgEFY3ij18c2JkA2txkZddbtvROhVayaxcOCAqPoJuBCLWI-bwmgtDaKs9edxoMlkgRIpk_iQxQDmJZwzU1Vtc5NvLFnE0IbpMCYafboxFh4yaAZ20ZiipNTrUx8XecBt/s200/20180202_165821.jpg" title="Pile of network switches" width="200" /></a></div>
I needed a small Ethernet network switch in my office so I went to my pile of devices and selected an old Dell PowerConnect 2724 from the stack. This seemed the best candidate as the others were intended for data centre use and known to be very noisy.<br />
<br />
I installed it into place and immediately ran into a problem: the switch was not quiet enough; in fact I could not concentrate at all with it turned on.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQ19ry4hr1bjY1NjOtu9l78ARJuMKGPL9-YBN0qlZbQw0FbKdRtb7-RbVIHDvoyegx0syBbTJKoyWIqGXV2ClMUjhH_6yhKrboeXRKY_gshHGpPd4OFsYssbjJiaax6noe-1KX46Q09why/s1600/Screenshot_20180316-091308.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="Graph of quiet office sound pressure" border="0" data-original-height="1027" data-original-width="1059" height="193" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQ19ry4hr1bjY1NjOtu9l78ARJuMKGPL9-YBN0qlZbQw0FbKdRtb7-RbVIHDvoyegx0syBbTJKoyWIqGXV2ClMUjhH_6yhKrboeXRKY_gshHGpPd4OFsYssbjJiaax6noe-1KX46Q09why/s200/Screenshot_20180316-091308.png" title="Graph of quiet office sound pressure" width="200" /></a></div>
Believing I could not fix what I could not measure, I decided to download an app for my phone that measured raw <a href="https://en.wikipedia.org/wiki/Sound_pressure">sound pressure</a>. This would allow me to <a href="https://en.wikipedia.org/wiki/Empirical_evidence">empirically</a> examine what effects any changes to the switch made.<br />
<br />
The app is not calibrated so it can only be used to examine relative changes, meaning a reference level is required. I took a reading in the office with the switch turned off but all other equipment operating to obtain a baseline measurement.<br />
<br />
All measurements were made with the switch and phone in the same positions about a meter apart. The resulting yellow curves are the average for a thirty second sample period with the peak values in red.<br />
<br />
The peak between 50Hz and 500Hz initially surprised me but after researching how a human perceives sound it appears we must apply the <a href="https://en.wikipedia.org/wiki/Equal-loudness_contour">equal loudness curve</a> to correct the measurement.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieT7tViD9XvFfsbacHd3eimUcBiqA-y_n07-3Yy0uS691FprYa6ISpLxTa1G8HI6Liga9FJMp0bEg7BBHj54hNzgX0xG4Ulj6hfb5z5dscTkALgdaHIaIYkBq8BlI_dFqmB8E_UpvcncEl/s1600/Screenshot_20180202-150538.png" imageanchor="1" style="clear: right; display: inline !important; float: right; margin-bottom: 1em; margin-left: 1em; text-align: center;"><img alt="Graph of office sound pressure with switch turned on" border="0" data-original-height="1031" data-original-width="1058" height="194" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieT7tViD9XvFfsbacHd3eimUcBiqA-y_n07-3Yy0uS691FprYa6ISpLxTa1G8HI6Liga9FJMp0bEg7BBHj54hNzgX0xG4Ulj6hfb5z5dscTkALgdaHIaIYkBq8BlI_dFqmB8E_UpvcncEl/s200/Screenshot_20180202-150538.png" title="Graph of office sound pressure with switch turned on" width="200" /></a>With this in mind we can concentrate on the data between 200Hz and 6000Hz as the part of the frequency spectrum with the most impact. So in the reference sample we can see that the audio pressure is around the -105dB level.<br />
<br />
I turned the switch on and performed a second measurement which showed a level around the -75dB level with peaks at the -50dB level, a difference of some 30dB. If we assume our reference is a "<a href="https://en.wikipedia.org/wiki/Sound_pressure#Examples_of_sound_pressure">calm room</a>" at 25dB(<a href="https://en.wikipedia.org/wiki/Sound_pressure#Sound_pressure_level">SPL</a>) then the switch is raising the ambient noise level to be similar to a "<a href="https://en.wikipedia.org/wiki/Sound_pressure#Examples_of_sound_pressure">normal conversation</a>" at 55dB(SPL).<br />
<br />
Something had to be done if I were to keep using this device so I opened the switch to examine the possible sources of noise.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_ksZuAfdY69vegGFXf9KMsiNLt1a9BsHCqbqwSdV6JPVOO244dCz1h3NEBptuowkmtaOJKA10otQaI66aPwKOY1fgQdHvIhgMMs2EwiPuRxA1OzFTjmBuIWv3HD9p20ZwsZtTra5IUG2L/s1600/20180202_144752.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="Dell PowerConnect 2724 with replacement Noctua fan" border="0" data-original-height="900" data-original-width="1600" height="112" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_ksZuAfdY69vegGFXf9KMsiNLt1a9BsHCqbqwSdV6JPVOO244dCz1h3NEBptuowkmtaOJKA10otQaI66aPwKOY1fgQdHvIhgMMs2EwiPuRxA1OzFTjmBuIWv3HD9p20ZwsZtTra5IUG2L/s200/20180202_144752.jpg" title="Dell PowerConnect 2724 with replacement Noctua fan" width="200" /></a></div>
There was a single 40x40x20mm 5V high capacity <a href="http://www.sunon.com/">Sunon</a> brand fan in the rear of the unit. I unplugged the fan and the noise level immediately returned to ambient, indicating that all the noise was being produced by this single device; unfortunately the switch soon overheated without the cooling fan operating.<br />
<br />
I thought the fan might be defective so I purchased a high quality "quiet" NF-A4x20 replacement from <a href="https://noctua.at/">Noctua</a>. The fan has rubber mounting fixings to further reduce noise and I was hopeful this would solve the issue.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgaPVmrZfuEytD5rtoC2hr9XtXugZggpv0Qu-Tbt2hDXDkRL6IhnELp6z8ECOf7W7eZAH9IrOmqmQKkb5Wlf8R3ue0JTKAKpIuhiPLcz6XtBIgkAOpmWch59PRrd0rTZo5GMk5RGalPknm/s1600/Screenshot_20180202-135342.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Graph of office sound pressure with modified switch turned on" border="0" data-original-height="1031" data-original-width="1058" height="194" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgaPVmrZfuEytD5rtoC2hr9XtXugZggpv0Qu-Tbt2hDXDkRL6IhnELp6z8ECOf7W7eZAH9IrOmqmQKkb5Wlf8R3ue0JTKAKpIuhiPLcz6XtBIgkAOpmWch59PRrd0rTZo5GMk5RGalPknm/s200/Screenshot_20180202-135342.png" title="Graph of office sound pressure with modified switch turned on" width="200" /></a></div>
The initial results were promising with noise above 2000Hz largely being eliminated. However the way the switch enclosure was designed caused airflow to make sound which produced a level around 40dB(SPL) between 200Hz and 2000Hz.<br />
<br />
I had the switch in service in this configuration for several weeks but eventually the device proved impractical on several points:<br />
<br />
<ul>
<li>The management interface was dreadful to use.</li>
<li>The network performance was not very good especially in trunk mode.</li>
<li>The lower frequency noise became a distraction for me in an otherwise quiet office.</li>
</ul>
<br />
In the end I purchased an <a href="https://www.zyxel.com/uk/en/products_services/8-10-16-24-48-port-GbE-Smart-Managed-Switch-GS1900-Series/">8 port Zyxel switch</a> which is passively cooled and otherwise silent in operation and has none of the other drawbacks.<br />
<br />
From this experience I have learned some things:<br />
<br />
<ul>
<li>Higher frequency noise (2000Hz and above) is much more difficult to ignore than other types of noise.</li>
<li>As I have become older my tolerance for equipment noise has decreased and it actively affects my concentration levels.</li>
<li>Some equipment has a design which means its audio performance cannot be improved sufficiently.</li>
<li>Measuring and interpreting noise sources is quite difficult.</li>
</ul>
Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com23tag:blogger.com,1999:blog-3711269760993993197.post-84217504188440313052017-03-18T13:01:00.000+00:002017-03-18T13:01:15.926+00:00A rose by any other name would smell as sweetOften I end up dealing with code that works but might not be of the highest quality. While quality is subjective I like to use the idea of <a href="https://en.wikipedia.org/wiki/Code_smell">"code smell"</a> to convey what I mean: a list of indicators that, taken together, help to identify code that might benefit from some improvement.<br />
<br />
Such smells may include:<br />
<ul>
<li>Complex code lacking comments on intended operation</li>
<li>Code lacking API documentation comments especially for interfaces used outside the local module</li>
<li>Not following style guide</li>
<li>Inconsistent style</li>
<li>Inconsistent indentation</li>
<li>Poorly structured code</li>
<li>Overly long functions</li>
<li>Excessive use of pre-processor</li>
<li>Many nested loops and control flow clauses</li>
<li>Excessive numbers of parameters</li>
</ul>
I am most certainly not alone in using this approach and <a href="http://martinfowler.com/bliki/CodeSmell.html">Fowler</a> et al have covered this subject in the literature much better than I can here. One point I will raise though is that some programmers dismiss code that exhibits these traits as "legacy" and immediately suggest a fresh implementation. There are varying opinions on when a rewrite is the appropriate solution, ranging from <a href="https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/">never</a> to always, but in my experience making the old <b>working</b> code smell nice is almost always less effort and risk than a rewrite.<br />
<h3>
Tests</h3>
When I come across smelly code, and I decide it is worthwhile improving it, I often discover the biggest smell is lack of test coverage. Do remember this is just one code smell and on its own might not be indicative; my experience is that smelly code seldom has effective test coverage while fresh code often does.<br />
<br />
<a href="https://en.wikipedia.org/wiki/Code_coverage">Test coverage</a> is generally understood to be the percentage of source code lines and decision paths used when instrumented code is exercised by a set of tests. Like many metrics developer tools produce, "coverage percentage" is often misused by managers as a proxy for code quality. Both <a href="https://martinfowler.com/bliki/TestCoverage.html">Fowler</a> and <a href="http://www.exampler.com/testing-com/writings/coverage.pdf">Marick</a> have written about this but suffice it to say that for a developer test coverage is a useful tool which should not be misapplied.<br />
<br />
Although refactoring without tests is possible, the chances of unintended consequences are proportionally higher. I often approach such a refactor by enumerating all the callers and constructing a description of the used interface beforehand, then checking that that interface is not broken by the refactor. At that point it is probably worth writing a unit test to automate the checks.<br />
<br />
Because of this I have changed my approach to such refactoring to start by ensuring there is at least basic API code coverage. This may not yield the fashionable 85% coverage target but is useful and may be extended later if desired.<br />
<br />
It is widely known and equally widely ignored that for maximum effectiveness unit tests must be run frequently and developers take action to rectify failures promptly. A test that is not being run or acted upon is a waste of resources both to implement and maintain which might be better spent elsewhere.<br />
<br />
For projects I contribute to frequently I try to ensure that the CI system is running the coverage target, and hence the unit tests, which automatically ensures any test breaking changes will be highlighted promptly. I believe the slight extra overhead of executing the instrumented tests is repaid by having the coverage metrics available to the developers to aid in spotting areas with inadequate tests.<br />
<h3>
Example</h3>
A short example will help illustrate my point. When a web browser receives an object over HTTP the server can supply a MIME type in a content-type header that helps the browser interpret the resource. However this meta-data is often problematic (sorry that should read "a misleading lie") so the actual content must be examined to get a better answer for the user. This is known as mime sniffing and of course there is a living <a href="https://mimesniff.spec.whatwg.org/">specification</a>.<br />
<br />
The <a href="http://source.netsurf-browser.org/netsurf.git/tree/content/mimesniff.c?id=6b645664fe9c8f8d8a46493a6e00ef32b753a642">source code that provides this API</a> (linked to rather than included for brevity) has a few smells:<br />
<ul>
<li>Very few comments of any type</li>
<li>The API are not all well documented in its header</li>
<li>A lot of global context</li>
<li>Local static strings which should be in the global string table</li>
<li>Pre-processor use</li>
<li>Several long functions</li>
<li>Exposed API has many parameters</li>
<li>Exposed API uses complex objects</li>
<li>The git log shows the code has not been significantly updated since its implementation in 2011 but the spec has.</li>
<li>No test coverage</li>
</ul>
While some of these are obvious, spotting the non-use of the global string table and the API complexity needed detailed knowledge of the codebase, which highlights just how subjective the sniff test can be. There is also one huge air freshener in all of this, which definitely comes from experience, and that is the module's author. Their name at the top of this would ordinarily be cause for me to move on, but I needed an example!<br />
<br />
The first thing to check is the API usage<br />
<!-- HTML generated using hilite.me --><br />
<div style="background: #f8f8f8; border-width: 0.1em 0.1em 0.1em 0.8em; border: none; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #408080; font-style: italic;">$ git grep -i -e mimesniff_compute_effective_type --or -e mimesniff_init --or -e mimesniff_fini</span>
content<span style="color: #666666;">/</span>hlcache.c<span style="color: #666666;">:</span> error <span style="color: #666666;">=</span> mimesniff_compute_effective_type(handle, <span style="color: green;">NULL</span>, <span style="color: #666666;">0</span>,
content<span style="color: #666666;">/</span>hlcache.c<span style="color: #666666;">:</span> error <span style="color: #666666;">=</span> mimesniff_compute_effective_type(handle,
content<span style="color: #666666;">/</span>hlcache.c<span style="color: #666666;">:</span> error <span style="color: #666666;">=</span> mimesniff_compute_effective_type(handle,
content<span style="color: #666666;">/</span>mimesniff.c<span style="color: #666666;">:</span>nserror mimesniff_init(<span style="color: #b00040;">void</span>)
content<span style="color: #666666;">/</span>mimesniff.c<span style="color: #666666;">:</span><span style="color: #b00040;">void</span> mimesniff_fini(<span style="color: #b00040;">void</span>)
content<span style="color: #666666;">/</span>mimesniff.c<span style="color: #666666;">:</span>nserror mimesniff_compute_effective_type(llcache_handle <span style="color: #666666;">*</span>handle,
content<span style="color: #666666;">/</span>mimesniff.h<span style="color: #666666;">:</span>nserror mimesniff_compute_effective_type(<span style="color: green; font-weight: bold;">struct</span> llcache_handle <span style="color: #666666;">*</span>handle,
content<span style="color: #666666;">/</span>mimesniff.h<span style="color: #666666;">:</span>nserror mimesniff_init(<span style="color: #b00040;">void</span>);
content<span style="color: #666666;">/</span>mimesniff.h<span style="color: #666666;">:</span><span style="color: #b00040;">void</span> mimesniff_fini(<span style="color: #b00040;">void</span>);
desktop<span style="color: #666666;">/</span>netsurf.c<span style="color: #666666;">:</span> ret <span style="color: #666666;">=</span> mimesniff_init();
desktop<span style="color: #666666;">/</span>netsurf.c<span style="color: #666666;">:</span> mimesniff_fini();
</pre>
</div>
<br />
This immediately shows me that this API is used in only a very small area; this is often not the case but the general approach still applies.<br />
<br />
After a little investigation the usage is effectively that the mimesniff_init API must be called before the mimesniff_compute_effective_type API and that mimesniff_fini releases the initialised resources.<br />
<br />
A simple test case was added to cover the API; it exercised the behaviour both when init was called before the computation and when it was not, along with some simple tests for a limited number of well behaved inputs.<br />
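The ordering requirement can be sketched with stubs. The function names and error values here are stand-ins, not the real NetSurf signatures, which take more parameters; the stubs only model the lifecycle the tests needed to cover:

```c
#include <stdbool.h>

/* Hypothetical stand-ins for mimesniff_init /
 * mimesniff_compute_effective_type / mimesniff_fini. */
typedef enum { NSERROR_OK, NSERROR_INIT_UNDONE } nserror;

static bool initialised = false;

static nserror stub_mimesniff_init(void)
{
        initialised = true;
        return NSERROR_OK;
}

static void stub_mimesniff_fini(void)
{
        initialised = false;
}

static nserror stub_compute_effective_type(void)
{
        /* computing before init must fail cleanly, not crash */
        if (!initialised)
                return NSERROR_INIT_UNDONE;
        return NSERROR_OK;
}
```

The test simply drives this sequence: compute before init must return an error, compute between init and fini must succeed, and compute after fini must fail again.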
<br />
By changing to using the global string table the initialisation and finalisation API can be removed altogether along with a large amount of global context and pre-processor macros. This single change removes a lot of smell from the module and raises test coverage both because the global string table already has good coverage and because there are now many fewer lines and conditionals to check in the mimesniff module.<br />
<br />
I stopped the refactor at this point but were this more than an example I probably would have:<br />
<ul>
<li>made the compute_effective_type interface simpler with fewer, simpler parameters</li>
<li>ensured a solid set of test inputs</li>
<li>examined using a fuzzer to get a better test corpus</li>
<li>added documentation comments</li>
<li>updated the implementation to the 2017 specification</li>
</ul>
<h3>
Conclusion</h3>
The approach examined here reduces code smell in an incremental, <b>testable</b> way to improve the codebase going forward. This is mainly necessary on larger, complex codebases where technical debt and bit-rot are real issues that can quickly overwhelm a codebase if not kept in check.<br />
<br />
This technique is subjective but helps a programmer to quantify and examine a piece of code in a structured fashion. However it is only a tool and should not be over applied nor used as a metric to proxy for code quality.Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com4tag:blogger.com,1999:blog-3711269760993993197.post-40472451724863616042017-02-13T23:01:00.002+00:002017-02-13T23:01:29.288+00:00The minority yields to the majority!<a href="https://en.wikipedia.org/wiki/Deng_Xiaoping" rel="nofollow">Deng Xiaoping</a> (who succeeded Mao) expounded this view and obviously did not depend on a minority to succeed. In open source software projects we often find ourselves implementing features of interest to a minority of users to keep our software relevant to a larger audience.<br />
<br />
As previously mentioned I contribute to the NetSurf project and the browser natively supports numerous toolkits for numerous platforms. This produces many challenges in development to obtain the benefits of a more diverse user base. As part of the recent NetSurf developer weekend we took the opportunity to review all the frontends to make a decision on their future sustainability.<br />
<br />
Each of the nine frontend toolkits was reviewed in turn and the results of that <a href="https://listmaster.pepperfish.net/pipermail/netsurf-dev-netsurf-browser.org/2017-February/003886.html">discussion published</a>. This task was greatly eased because we were able to hold the discussion face to face; over time I have come to the conclusion that some tasks in open source projects greatly benefit from this form of interaction.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7FgVJUohgBEQ-5ci4nneOXy9_L1VYBmRir4JIGNezpL9ikhyphenhyphenPzFAy-9RoOi5m_HRxfjdjk03y86y7pJBNu8HOThrgjrnT_5Q61WHPGtDXfVg3NVT2rwV7ZrqTFGAF1aLhbDGV0Q9RKavK/s1600/nswin32-minority.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Netsurf running on windows showing this blog post" border="0" height="151" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7FgVJUohgBEQ-5ci4nneOXy9_L1VYBmRir4JIGNezpL9ikhyphenhyphenPzFAy-9RoOi5m_HRxfjdjk03y86y7pJBNu8HOThrgjrnT_5Q61WHPGtDXfVg3NVT2rwV7ZrqTFGAF1aLhbDGV0Q9RKavK/s200/nswin32-minority.png" title="Netsurf running on windows showing this blog post" width="200" /></a></div>
Coding and the day to day discussions around it can be easily accommodated via IRC and email. Decisions affecting a large area of code are much easier with the subtleties of direct interpersonal communication. An example of this is our decision to abandon the cocoa frontend (the toolkit used on Mac OS X) as against the decision to keep the windows frontend.<br />
<br />
The cocoa frontend was implemented by Sven Weidauer in 2011; unfortunately Sven did not continue contributing to this frontend afterwards and it has become the responsibility of the core team to maintain. Because NetSurf has a comprehensive CI system that compiles the master branch on every commit any changes that negatively affected the cocoa frontend were immediately obvious.<br />
<br />
Thus issues with the compilation were fixed promptly, but these fixes were only ever compile tested, and at some point the Mac OS X build environments changed, resulting in an application that crashes when used. Despite repeatedly asking for assistance to fix the cocoa frontend over the last eighteen months no one had come forward.<br />
<br />
When the topic was discussed amongst the developers it quickly became apparent that no one had any objection to removing the cocoa support. In contrast we decided to keep the windows frontend, despite it having many issues similar to cocoa. There was almost immediate consensus on both decisions, even though prior to the discussion no individual had advocated either position.<br />
<br />
This was a single example but it highlights the benefits of a disparate development team having a physical meeting from time to time. However that was not the main point I wanted to discuss; the incident also highlights that supporting a feature only useful to a minority of users can have a disproportionate cost.<br />
<br />
The cost of a feature for an open source project is usually a collection of several factors:<br />
<dl>
<dt>Developer time</dt>
<dd>Arguably the greatest resource of a project is the time its developers can devote to it. Unless it is a very large, well supported project like the Linux kernel or LibreOffice almost all developer time is voluntary.</dd>
<dt>Developer focus</dt>
<dd>Any given developer is likely to work on an area of code that interests them in preference to one that does not. This means that if a developer must do work which does not interest them they may lose focus and not work on the project at all.</dd>
<dt>Developer skillset</dt>
<dd>A given developer may not have the skillset necessary to work on a feature; this is especially acute when considering minority platforms, which often have very, very few skilled developers available.</dd>
<dt>Developer access</dt>
<dd>It should be obvious that software that only requires commodity hardware and software to develop is much cheaper than that which requires special hardware and software. To use our earlier example, the cocoa frontend required an Apple computer running Mac OS X to compile and test; this resource was very limited and the project only had access to two such systems via remote desktop. These systems also had to serve as CI builders and required physical system administration as they could not be virtualised.</dd>
<dt>Support</dt>
<dd>Once a project releases useful software it generally gains users outside of the developers. Supporting users consumes developer time and generally causes them to focus on things other than code that interests them.<br />
<br />
While most developers have enough pride in what they produce to fix bugs, users must always remember that the main freedom they get from OSS is that they received the code and can change it themselves, <a href="http://jeremyckahn.github.io/blog/2014/10/19/open-source-does-not-mean-free-labor/">there is no requirement for a developer to do anything for them</a>.</dd>
<dt>Resources</dt>
<dd>A project requires a website, code repository, wiki, CI systems etc. which must all be paid for. NetSurf, for example, is fortunate to have Pepperfish look after our website hosting at favourable rates, Mythic Beasts provide exceptionally good rates for the CI system virtual machine along with hardware donations (our Apple Macs were donated by them) and Collabora provide physical hosting for our virtual machine server.<br />
<br />
Despite these incredibly good deals the project still spends around 200gbp (250usd) a year on overheads. These services obviously benefit the whole project, including minority platforms, but are generally donated by users of the more popular platforms.</dd></dl>
The benefits of a feature are similarly varied:<br />
<dl>
<dt>Developer learning</dt>
<dd>A developer may implement a feature to allow them to learn a new technology or skill.</dd>
<dt>Project diversity</dt>
<dd>A feature may mean the project gets built in a new environment which reveals issues or opportunities in unconnected code. For example the Debian OS is built on a variety of hardware platforms and sometimes reveals issues in software by compiling it on big endian systems. These issues are often underlying bugs that are causing errors which are simply not observed on a little endian platform.</dd>
<dt>More users</dt>
<dd>Gaining users of the software is often a benefit and although most OSS developers are contributing for personal reasons having their work appreciated by others is often a factor. This might be seen as the other side of the support cost.</dd>
</dl>
<br />
In the end the maintainers of a project often have to consider all of these factors and more to arrive at a decision about a feature, especially those only useful to a minority of users. Such decisions are rarely taken lightly as they often remove another developer's work and the question is often: what would I think about my contributions being discarded?<br />
<br />
As a postscript, if anyone is willing to pay the costs to maintain the NetSurf cocoa frontend I have not removed the code just yet.Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com8tag:blogger.com,1999:blog-3711269760993993197.post-83823224014377180562016-10-23T22:27:00.000+01:002016-10-23T22:27:56.002+01:00Rabbit of CaerbannogSubsequent to <a href="http://vincentsanders.blogspot.co.uk/2016/08/down-rabbit-hole.html">my previous use of American Fuzzy Lop (AFL</a>) on the NetSurf bitmap image library I applied it to the <a href="http://source.netsurf-browser.org/libnsgif.git/">gif library</a> which, after fixing the test runner, failed to produce any crashes but did result in a better test corpus, improving coverage above 90%.<br />
<br />
I then turned my attention to the <a href="http://source.netsurf-browser.org/libsvgtiny.git/">SVG processing library</a>. This was different to the bitmap libraries in that it required parsing a much lower density text format and performing operations on the resulting tree representation.<br />
<br />
The test program for the SVG library needed some improvement but is very basic in operation. It takes the test SVG, parses it using libsvgtiny and then uses the parsed output to write out an <a href="http://www.imagemagick.org/script/magick-vector-graphics.php">imagemagick mvg</a> file.<br />
<br />
The libsvg processing uses the <a href="http://source.netsurf-browser.org/libdom.git/">NetSurf DOM library</a> which in turn uses an expat binding to parse the SVG XML text. To process this with AFL required instrumenting not only the SVG library but also the DOM library. I did not initially understand this and my first run resulted in a "map coverage" warning indicating an issue. Helpfully the AFL docs do cover this so it was straightforward to rectify.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfW14sfxxoLM19Uz27Jo5xR-GRLiVlXzo_OcdNIeylrdB-_IOM0noRYsT4er1i7k_QUU4CLmEsXm9S7D6iQPYl2oCyL0z9ZnSuMfYZT3A2uV0cKO6I19nKuwMBZ8qndqHc6-mwAvCiC_hm/s1600/decode-svg-nodict.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="187" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfW14sfxxoLM19Uz27Jo5xR-GRLiVlXzo_OcdNIeylrdB-_IOM0noRYsT4er1i7k_QUU4CLmEsXm9S7D6iQPYl2oCyL0z9ZnSuMfYZT3A2uV0cKO6I19nKuwMBZ8qndqHc6-mwAvCiC_hm/s320/decode-svg-nodict.png" width="320" /></a></div>
Once the test program was written and environment set up an AFL run was started and left to run. The next day I was somewhat alarmed to discover the fuzzer had made almost no progress and was running very slowly. I asked for help on the AFL mailing list and got a polite and helpful response, basically I needed to RTFM.<br />
<br />
I must thank the members of the AFL mailing list for being so helpful and tolerating someone who ought to know better asking dumb questions.<br />
<br />
After reading the fine manual I understood I needed to ensure all my test cases were as small as possible and further that the fuzzer needed a dictionary as a hint to the file format because the text file was of such low data density compared to binary formats.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://en.wikipedia.org/wiki/Rabbit_of_Caerbannog" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="Rabbit of Caerbannog. Death awaits you with pointy teeth" border="0" height="123" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDtvuK95tkFt3zZgUomoyCwumO2sh365Bt074u0joTj7_8oiPT0Nm9DPjlHBaY9Qkc5lpFxGUTS5e2iNBxEBRH86Cdj77y8biPb6Lwc8Fd2N362i-OtzVih3l0nXHtscvbO7eJ0OqwqQFw/s200/The_Rabbit_of_Caerbannog.jpg" title="Rabbit of Caerbannog. Death awaits you with pointy teeth" width="200" /></a></div>
I crafted an SVG dictionary based on the XML one, ensured all the seed SVG files were as small as possible and tried again. The immediate result was thousands of crashes; nothing like being savaged by a rabbit to cause a surprise.<br />
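For anyone unfamiliar with the format: an AFL dictionary is simply a text file of name="token" entries which the fuzzer splices into its mutations. A hypothetical excerpt of an SVG dictionary might look like this (the one I actually derived from the XML dictionary differs; these entries are illustrative only):

```
# illustrative excerpt of an SVG fuzzing dictionary
tag_svg="<svg"
tag_path="<path"
tag_rect="<rect"
attr_d="d=\""
attr_viewbox="viewBox=\""
attr_transform="transform=\""
val_matrix="matrix("
```

With tokens like these available the fuzzer no longer has to rediscover the element and attribute names byte by byte, which matters enormously for a low density text format.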
<br />
Not being in possession of the appropriate holy hand grenade I resorted instead to GDB and electric fence. Unlike the bitmap library crashes, memory bounds issues simply did not feature here. Instead the crashes mainly centred around actual logic errors when constructing and traversing the data structures.<br />
<br />
For example Daniel Silverstone <a href="http://source.netsurf-browser.org/libdom.git/commit/?id=a1cb751bb8579a9071b255aa3c89abce0394b206">fixed an interesting </a>bug where the XML parser binding would try and go "above" the root node in the tree if the source closed more tags than it opened which resulted in wild pointers and NULL references.<br />
<br />
I found and squashed several others including <a href="http://source.netsurf-browser.org/libsvgtiny.git/commit/?id=f293a45e808b444b5a3b8b989b76fdc20566d3c9">dealing with SVG which has no valid root element</a> and <a href="http://source.netsurf-browser.org/libsvgtiny.git/commit/?id=988e0d0819c7e6b068b1c1741a50b547f8414cf7">division by zero errors</a> when things like colour gradients have no points.<br />
<br />
I find it interesting that the type and texture of the crashes completely changed between the SVG and binary formats. Perhaps it is just the nature of the textual formats that causes this although it might be due to the techniques used to parse the formats.<br />
<br />
Once all the immediately reproducible crashes were dealt with I performed a longer run. I used my monster system as previously described and ran the fuzzer for a whole week.<br />
<br />
<pre>Summary stats
=============
Fuzzers alive : 10
Total run time : 68 days, 7 hours
Total execs : 9268 million
Cumulative speed : 15698 execs/sec
Pending paths : 0 faves, 2501 total
Pending per fuzzer : 0 faves, 250 total (on average)
Crashes found : 9 locally unique</pre>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWJKr4NYYACMidc7TPC8TV5UFWrPSIBSJwuYe2RXYzo7oSWmITo5a45b7WovBGziEUbHUixVE0FM6s3asCmS7F7Yq15BO8uf49EsiLq7Hln-T5cRzhCe8Ml3W8o6EZW_zHLxTkbQNRxO5G/s1600/high_freq.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="120" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWJKr4NYYACMidc7TPC8TV5UFWrPSIBSJwuYe2RXYzo7oSWmITo5a45b7WovBGziEUbHUixVE0FM6s3asCmS7F7Yq15BO8uf49EsiLq7Hln-T5cRzhCe8Ml3W8o6EZW_zHLxTkbQNRxO5G/s400/high_freq.png" width="400" /></a></div>
After burning almost seventy days of processor time AFL found me another nine crashes and possibly more importantly a test corpus that generates over 90% coverage.<br />
<br />
A useful tool that AFL provides is afl-cmin. This reduces the number of test files in a corpus to only those that are required to exercise all the code paths reached by the test set. In this case it reduced the number of files from 8242 to 2612.<br />
<br />
<pre>afl-cmin -i queue_all/ -o queue_cmin -- test_decode_svg @@ 1.0 /dev/null
corpus minimization tool for afl-fuzz by &lt;lcamtuf@google.com&gt;
[+] OK, 1447 tuples recorded.
[*] Obtaining traces for input files in 'queue_all/'...
Processing file 8242/8242...
[*] Sorting trace sets (this may take a while)...
[+] Found 23812 unique tuples across 8242 files.
[*] Finding best candidates for each tuple...
Processing file 8242/8242...
[*] Sorting candidate list (be patient)...
[*] Processing candidates and writing output files...
Processing tuple 23812/23812...
[+] Narrowed down to 2612 files, saved in 'queue_cmin'.</pre>
<br />
Additionally the actual information within the test files can be minimised with the afl-tmin tool. This must be run on each file individually and can take a relatively long time. Fortunately with <a href="https://www.gnu.org/software/parallel/">GNU parallel</a> one can run many of these jobs simultaneously which merely required another three days of CPU time to process. The resulting test corpus weighs in at a svelte 15 Megabytes or so against the 25 Megabytes before minimisation.<br />
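For the record, the parallel minimisation was along these lines; this is a sketch using the paths and harness arguments from the cmin run above, which may differ from what you need:

```
# run one afl-tmin job per core across the minimised corpus
mkdir -p queue_tmin
ls queue_cmin/ | parallel \
  afl-tmin -i queue_cmin/{} -o queue_tmin/{} \
  -- test_decode_svg @@ 1.0 /dev/null
```

GNU parallel substitutes each input file name for the {} placeholders and keeps all the cores busy, which is what turned a serial multi-week job into a three day one.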
<br />
The result is yet another NetSurf library significantly improved by the use of AFL both from finding and squashing crashing bugs and from having a greatly improved test corpus to allow future library changes with a high confidence there will not be any regressions.Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com2tag:blogger.com,1999:blog-3711269760993993197.post-79282515282321907622016-10-11T13:19:00.001+01:002016-10-11T13:19:11.171+01:00The pine stays green in winter... wisdom in hardship.In December 2015 I saw the <a href="https://www.kickstarter.com/projects/pine64/pine-a64-first-15-64-bit-single-board-super-comput/">kickstarter for the Pine64</a>. The project seemed to have a viable hardware design and after my <a href="http://vincentsanders.blogspot.co.uk/2016/03/hope-is-tomorrows-veneer-over-todays.html">experience with the hikey</a> I decided it could not be a great deal worse.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3kI2i3BxLxPXtM0Cl8OU8T3mcEK87kT-5EDE4aVGQ9_PmOZ2r1AT9Pbi1oyBlPJvwF3HNkD1eQzn-bFW86yO5IXOGfRUm20ObtWpyQefEiEXOi6-OVAKZp1uZnvocTu4_-oWa9_ehEF7Y/s1600/casedpine54.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Pine64 board in my case design" border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3kI2i3BxLxPXtM0Cl8OU8T3mcEK87kT-5EDE4aVGQ9_PmOZ2r1AT9Pbi1oyBlPJvwF3HNkD1eQzn-bFW86yO5IXOGfRUm20ObtWpyQefEiEXOi6-OVAKZp1uZnvocTu4_-oWa9_ehEF7Y/s200/casedpine54.JPG" title="Pine64 board in my case design" width="180" /></a></div>
The system I acquired comprises:<br />
<ul>
<li>Quad core Allwinner A64 processor clocked at 1.2GHz </li>
<li>2 Gigabytes of DDR3 memory</li>
<li>Gigabit Ethernet</li>
<li>two 480Mbit USB 2.0 ports</li>
<li>HDMI type A</li>
<li>micro SD card for storage.</li>
</ul>
Hardware based kickstarter projects are susceptible to several issues and the usual suspects occurred causing delays:<br />
<ul>
<li>Inability to scale, several thousand backers instead of the hundred they were aiming for</li>
<li>Issues with production</li>
<li>Issues with shipping</li>
</ul>
My personal view is that PINE 64 inc. handled it pretty well, much better than several other projects I have backed and as my Norman Douglas quotation suggests I think they have gained some wisdom from this.<br />
<br />
I received my hardware at the beginning of April only a couple of months after their initial estimated shipping date which as these things go is not a huge delay. I understand some people who had slightly more complex orders were just receiving their orders in late June which is perhaps unfortunate but still well within kickstarter project norms.<br />
<br />
As an aside: I fear that many people simply misunderstand the crowdfunding model for hardware projects and fail to understand that they are not buying a finished product, on the other side of the debate I think many projects need to learn expectation management much better than they do. Hyping the product to get interest is obviously the point of the crowdfunding platform, but over promising and under delivering <b>always </b>causes unhappy customers.<br />
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://raw.githubusercontent.com/kyllikki/designs/master/Pine64_Board/pine64-board.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="Pine64 board dimensions" border="0" height="150" src="https://raw.githubusercontent.com/kyllikki/designs/master/Pine64_Board/pine64-board.png" title="Pine64 board dimensions" width="200" /></a></div>
Despite the delays in production and shipping the <a href="http://wiki.pine64.org/index.php/Main_Page">information available for the board</a> was (and sadly remains) inadequate. As usual I wanted to case my board and there were no useful dimension drawings so I had to <a href="https://github.com/kyllikki/designs/tree/master/Pine64_Board">make my own from direct measurements</a> together with a STL 3D model.<br />
<br />
Also a mental sigh for "yet another poor form factor decision" meaning yet another <a href="https://github.com/kyllikki/designs/tree/master/Pine64_Slim">special case size and design</a>. After putting together a design and fabricating it with the laser cutter I moved on to the software.<br />
<br />
This is where, once again, the story turns bleak. We find a very pretty website but no obvious link to the software (hint: scroll to the bottom and find the "support" wiki link). Once you find <a href="http://wiki.pine64.org/index.php/Main_Page">the wiki</a> you will eventually discover that the provided software is either an Android 5.1.1 image (which failed to start on my board) or relies on some random guy from the <a href="http://forum.pine64.org/index.php">forums</a> who has put together his own OS images using a hacked up Allwinner Board Support Package (BSP) kernel.<br />
<br />
Now please do not misunderstand me, I think the work by <a href="https://www.stdin.xyz/">Simon Eisenmann</a> (longsleep) to get a <a href="https://github.com/longsleep/linux-pine64">working kernel</a> and <a href="http://forum.pine64.org/showthread.php?tid=497">Lenny Raposo</a> to get viable OS images is outstanding and useful. I just feel that Allwinner and vendors like Pine64 Inc. should have provided something much, much better than they have. Even the <a href="https://linux-sunxi.org/Pine64">efforts to get mainline support</a> for this hardware are all completely volunteer community efforts and are making slow progress as a result.<br />
<br />
Assuming I wanted to run a useful OS on this hardware and not just use it as a modern work of art I installed a basic Debian arm64 using Lenny Raposo's <a href="https://www.pine64.pro/">pine64 pro site</a> downloads. I was going to use the system for compiling and builds so used the "Debian Base" image to get a minimal setup. After generating unique ssh keys, renaming the default user and checking all the passwords and permissions I convinced myself the system was reasonably trustworthy.<br />
<br />
The standard Debian Jessie OS runs as expected with few surprises. The main concern I have is that there are a number of unpackaged scripts installed (prefixed with pine64_) which perform several operations from reporting system health (using sysfs entries) to upgrading the kernel and bootloader.<br />
<br />
While I understand these scripts have been provided for novice users to reduce the support burden, doing even more of the vendor's job, I would much rather have had proper packages for these scripts, kernel and bootloader which apt could manage. This would have reduced image creation to a simple debootstrap, giving much greater confidence in the images' provenance.<br />
<br />
The 3.10 based kernel is three years old at the time of writing and lacks a great number of features for the aarch64 ARM processors introduced since release. However I was pleasantly surprised at kvm apparently being available.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: "andale mono" , "lucida console" , "monaco" , "fixed" , monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;"># dmesg|grep -i kvm
[ 7.592896] kvm [1]: Using HYP init bounce page @b87c4000
[ 7.593361] kvm [1]: interrupt-controller@1c84000 IRQ25
[ 7.593778] kvm [1]: timer IRQ27
[ 7.593801] kvm [1]: Hyp mode initialized successfully</code></pre>
<br />
I installed the libvirt packages (and hence all their dependencies like qemu) and created a bridge ready for the virtual machines.<br />
<br />
I needed access to storage for the host disc images and while I could have gone the route of using USB attached SATA as with the hikey I decided to try and use network attached storage instead. Initially I investigated iSCSI but it seems the Linux target (iSCSI uses initiator for client and target for server) support is either <a href="http://stgt.sourceforge.net/">old</a>, <a href="http://linux-iscsi.org/wiki/Main_Page">broken</a> or <a href="http://scst.sourceforge.net/">unpackaged</a>.<br />
<br />
I turned to <a href="http://nbd.sourceforge.net/">network block device</a> (nbd) which is packaged and seems to have reasonable stability out of the box on <a href="https://packages.qa.debian.org/n/nbd.html">modern distributions</a>. This appeared to work well; indeed over the gigabit Ethernet interface I managed to get a sustained 40 megabytes a second read and write rate in basic testing. This is better performance than a USB 2.0 attached SSD on the hikey.<br />
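The setup is pleasantly simple; a sketch of what I mean (the host name, export name and image path here are made up for illustration):

```
# /etc/nbd-server/config on the storage server (illustrative export)
[generic]
[pine64disc]
    exportname = /srv/nbd/pine64disc.img

# on the pine64, attach the export as a local block device
modprobe nbd
nbd-client storagehost -N pine64disc /dev/nbd0
```

Once attached, /dev/nbd0 behaves like any other block device and can be handed to libvirt as backing storage for a guest.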
<br />
I fired up the guest and perhaps I should have known better than to expect a 3.10 vendor kernel to cope. The immediate hard crashes despite tuning many variables convinced me that virtualisation was not viable with this kernel.<br />
<br />
So abandoning that approach I attempted to run the CI workload directly on the system. To my dismay this also proved problematic. The processor has the bad habit of throttling due to thermal issues (despite a substantial heatsink) and because the storage is network attached throttling the CPU also massively impacts I/O.<br />
<br />
The limitations meant that the workload caused the system to move between high performance and almost no progress on a roughly ten second cycle. This caused a simple NetSurf recompile CI job to take over fifteen minutes. For comparison the same task takes the armhf builder (CubieTruck) four minutes, while a 64 bit x86 build takes around a minute.<br />
<br />
When the workload was tuned to a single core, which does not trip the thermal throttling, the build took seven minutes, almost identical to the existing single core virtual machine instance running on the hikey.<br />
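Restricting the job to one core needs no special tooling; something like taskset is sufficient (the exact make invocation will depend on the build in question):

```
# pin the build to CPU 0 so only a single core generates heat
taskset -c 0 make -j1
```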
<br />
In conclusion the Pine64 is an interesting bit of hardware with a fatally flawed software offering. Without Simon and Lenny providing their builds to the community the device would be practically useless rather than just performing poorly. There appears to have been no progress whatsoever on the software offering from Pine64 in the six months since I received the device and no prospect of mainline Allwinner support for the SoC either.<br />
<br />
Effectively I have spent around 50usd (40 for the board and 10 for the enclosure) on a failed experiment. Perhaps in the future the software will improve sufficiently for it to become useful but I do not hold out much hope that this will come from Pine64 themselves.Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com6tag:blogger.com,1999:blog-3711269760993993197.post-77190819623186827482016-10-01T13:39:00.001+01:002016-10-01T13:39:21.093+01:00Paul Hollywood and the pistoris stoneThere has been a great deal of comment among my friends recently about a particularly British cookery program called "<a href="https://en.wikipedia.org/wiki/The_Great_British_Bake_Off">The Great British Bake Off</a>". There has been some controversy as the program is moving from the BBC to a commercial broadcaster.<br />
<br />
Part of this discussion comes from all the presenters, excepting Paul Hollywood, declining to sign with the new broadcaster and part from speculation that the BBC might continue with a similar format show under a new name.<br />
<br />
Rob Kendrick provided the start to this conversation by passing on a satirical link suggesting Samuel L Jackson might host "<a href="http://newsthump.com/2016/09/26/bbc-to-launch-bake-off-rival-with-samuel-l-jackson-called-cakes-on-a-plain/">cakes on a plane</a>".<br />
<br />
This caused a large number of suggestions for alternate names, which I report below; Rob Kendrick, Vivek Das Mohapatra, Colin Watson, Jonathan McDowell, Oki Kuma, Dan Alderman, Dagfinn Ilmari Mannsåke, Lesley Mitchell and Daniel Silverstone are the ones to blame.<br />
<br />
<br />
<ul>
<li>Strictly come baking</li>
<li>Stars and their pies</li>
<li>Baking with the stars</li>
<li>Bake/Off.</li>
<li>Blind Cake</li>
<li>Cake or no cake?</li>
<li>The cake is a lie</li>
<li>Bake That.</li>
<li>Bake Me On</li>
<li>Bake On Me</li>
<li>Bakin' Stevens.</li>
<li>The Winner Bakes It All</li>
<li>Bakerloo</li>
<li>Bake Five</li>
<li>Every breath you bake</li>
<li>Every bread you bake</li>
<li>Unbake my heart</li>
<li>Knead and let prove</li>
<li>Bake me up before you go-go</li>
<li>I want to bake free</li>
<li>Another bake bites the dust</li>
<li>Cinnamon whorl is not enough</li>
<li>The pie who loved me</li>
<li>The yeast you can do.</li>
<li>Total collapse of the tart</li>
<li>Bake and deliver</li>
<li>You Gotta Bake</li>
<li>Bake's Seven</li>
<li>Natural Born Bakers</li>
<li>Bake It Or Leaven It</li>
<li>Driving the last pikelet</li>
<li>Pie crust on the dancefloor</li>
<li>Tomorrow never pies</li>
<li>Murder on the pie crust</li>
<li>The pie who came in from the cold.</li>
<li>You only bake twice (Every body has to make one sweet and one savoury dish).</li>
</ul>
<br />
<br />
So that is our list, anyone else got better ideas?Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com4tag:blogger.com,1999:blog-3711269760993993197.post-25700691151733878402016-09-26T00:23:00.002+01:002016-09-26T00:23:10.485+01:00I'll huff, and I'll puff, and I'll blow your house inSometimes it really helps to have a different view on a problem and after <a href="http://vincentsanders.blogspot.co.uk/2016/09/if-i-see-ending-i-can-work-backward.html">my recent writings</a> on my Public Suffix List (PSL) <a href="http://source.netsurf-browser.org/libnspsl.git/">library</a> I was fortunate to receive a suggestion from my friend <a href="https://www.enricozini.org/">Enrico Zini</a>.<br />
<br />
I had asked for suggestions on reducing the size of the library further and Enrico simply suggested <a href="https://en.wikipedia.org/wiki/Huffman_coding">Huffman coding</a>. This was a technique I had learned about long ago in connection with data compression, but the intervening years had made all the details fuzzy, which explains why it had not immediately sprung to mind.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAc-oX1XSeX_vfGh9i5eCL4sfHwra31fs4FhgblwSZ9LM_8wMTFtOM-WQJYN6-95QVboWwIraG3mJBMzm7dilxIWh1y3RdwW49ugHSnwaSmRyGClmGxCoianzUTZ7QBVGMWW2qaoSYHoiH/s1600/psltree.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="A small subset of the Public Suffix List as stored within libnspsl" border="0" height="220" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAc-oX1XSeX_vfGh9i5eCL4sfHwra31fs4FhgblwSZ9LM_8wMTFtOM-WQJYN6-95QVboWwIraG3mJBMzm7dilxIWh1y3RdwW49ugHSnwaSmRyGClmGxCoianzUTZ7QBVGMWW2qaoSYHoiH/s400/psltree.png" title="A small subset of the Public Suffix List as stored within libnspsl" width="400" /></a>Huffman coding named for <a href="https://en.wikipedia.org/wiki/David_A._Huffman">David A. Huffman</a> is an algorithm that enables a representation of data which is very efficient. In a normal array of characters every character takes the same eight bits to represent which is the best we can do when any of the 256 values possible is equally likely. If your data is not evenly distributed this is not the case for example if the data was english text then the value is fifteen times more likely to be that for e than k.<br />
<br />
<a href="http://source.netsurf-browser.org/libnspsl.git/plain/docs/huffing.svg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="every step of huffman encoding tree build for the example string table" border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCTF04jjY_ybpocelSAqt3R-xawT2eSqi5icnH6t6bd_zbDeLebX5yuv8iYPqtGr6uqJem05t1aMq5ubLTFeOBj5CQRDZi79kmV2fUzBPn0S1kfnS1yPvYFePFHmctpy3wfZceyBVaKSJJ/s320/huffing.png" title="every step of huffman encoding tree build for the example string table" width="28" /></a>So if we have some data with a non uniform distribution of probabilities we need a way the data be encoded with fewer bits for the common values and more bits for the rarer values. To be efficient we would need some way of having variable length representations without storing the length separately. The term for this data representation is a <a href="https://en.wikipedia.org/wiki/Prefix_code">prefix code</a> and there are several ways to generate them.<br />
<br />
Such is the influence of Huffman on the area of prefix codes that they are often called Huffman codes even when they were not created using his algorithm. One can dream of becoming immortalised like this; to join the ranks of those whose names are given to units or whole ideas in a field must be immensely rewarding. However, given that Huffman invented his algorithm, and proved it optimal, to answer a question on a term paper in his early twenties, I fear I may already be a bit too late.<br />
<br />
The algorithm itself is relatively straightforward. First a frequency analysis is performed, a fancy way of saying we count how many times each character appears in the input data. Next a binary tree is created by using a priority queue initialised with the nodes sorted by frequency.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkfD1hLgAkhyphenhyphenAUhi8jDCWLiuZffpBeYO9EvnFleMVl-AZNCz3YPIJcVGVwds1SBImqp38E99foDV_ZqN_yu6SzGpygneBKxo1I-Kuqm51e4iKpri8Obo3Aj_hU6c5AKIjEBD414BlwzBRV/s1600/completetree.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="The resulting huffman tree and the binary representation of the input symbols" border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkfD1hLgAkhyphenhyphenAUhi8jDCWLiuZffpBeYO9EvnFleMVl-AZNCz3YPIJcVGVwds1SBImqp38E99foDV_ZqN_yu6SzGpygneBKxo1I-Kuqm51e4iKpri8Obo3Aj_hU6c5AKIjEBD414BlwzBRV/s320/completetree.png" title="The resulting huffman tree and the binary representation of the input symbols" width="103" /></a></div>
The counts of the two least frequent items are summed together and a node placed in the tree with the two original entries as child nodes. This step is repeated until a single node remains with a count equal to the length of the input.<br />
<br />
To encode a value one simply walks the tree, outputting a 0 for a left branch or a 1 for a right branch, until the original value is reached. This generates a mapping of values to bit sequences and the input is then converted value by value into that bit output. To decode, the data is used bit by bit to walk the tree back to the values.<br />
<br />
If we perform this algorithm on the example string table <span style="font-size: 16px; line-height: 20px; white-space: pre;"><span style="font-family: inherit;">*!asiabvcomcoopitamazonawsarsaves-the-whalescomputebasilicata</span></span> we can reduce the 488 bits (61 * 8 bit characters) to 282 bits, a roughly 40% reduction. Obviously in a real application the Huffman tree would also need to be stored, which would probably exceed this saving, but for larger data sets it is probable this technique would yield excellent results on this kind of data.<br />
<br />
Once I proved this to myself I implemented the encoder within the existing conversion program. Although my Perl encoder is not very efficient it can process the entire PSL string table (around six thousand labels using 40KB or so) in less than a second, so unless the table grows massively an inelegant approach will suffice.<br />
<br />
The resulting bits were packed into 32 bit values to improve decode performance (most systems prefer to deal with larger memory fetches less frequently) and resulted in 18KB of output, or 47% of the original size. This is a great improvement in size and means the statically linked test program is now 59KB, actually smaller than the gzipped source data.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: "andale mono" , "lucida console" , "monaco" , "fixed" , monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;">$ ls -alh test_nspsl
-rwxr-xr-x 1 vince vince 59K Sep 25 23:58 test_nspsl
$ ls -al public_suffix_list.dat.gz
-rw-r--r-- 1 vince vince 62K Sep 1 08:52 public_suffix_list.dat.gz</pre>
<br />
To be clear, the statically linked program can determine if a domain is in the PSL with no additional heap allocations and includes the entire PSL ordered tree, the domain label string table and the Huffman decode table to read it.<br />
<br />
An unexpected side effect is that because the decode loop is small it sits in the processor cache. This appears to cause the performance of the string comparison function huffcasecmp() (which is not locale dependent because we know the data is limited to ASCII) to be close to that of strcasecmp(); indeed on ARM32 systems there is a very modest improvement in performance.<br />
<br />
I think this is as much work as I am willing to put into this library but I am pleased to have achieved a result which is on par with the best of breed (libpsl still has a data representation 20KB smaller than libnspsl but requires additional libraries for additional functionality) and I got to (re)learn an important algorithm too.Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com2tag:blogger.com,1999:blog-3711269760993993197.post-82102008877421875932016-09-20T22:12:00.001+01:002016-09-20T22:12:37.675+01:00If I see an ending, I can work backward.Now while I am sure <a href="https://en.wikipedia.org/wiki/Arthur_Miller">Arthur Miller</a> was referring to writing a play when he said those words they have an oddly appropriate resonance for my topic.<br />
<br />
In the early nineties <a href="https://en.wikipedia.org/wiki/Lou_Montulli">Lou Montulli</a> applied the idea of <a href="https://en.wikipedia.org/wiki/Magic_cookie">magic cookies</a> to HTTP to make the web stateful; I imagine he had no idea of the issues he was going to introduce for the future. Like most web technology it was a solution to an immediate problem which it has never been possible to subsequently improve.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgd_nRzv0so9JJgRAQQe7XoVWZ9zn2f9PnsT1HSVjkJ4hVmxL1DjQC6lqOVnYktYhU_88pktOQeVidSoc0Ytg5zxlnI7i8Ls4fy5ddFSC3SbBFcu5eCakDszKv3vLZ9FXHgyqI-Qvbwei2i/s1600/800px-Choc-Chip-Cookie.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Chocolate chip cookie are much tastier than HTTP cookies" border="0" height="128" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgd_nRzv0so9JJgRAQQe7XoVWZ9zn2f9PnsT1HSVjkJ4hVmxL1DjQC6lqOVnYktYhU_88pktOQeVidSoc0Ytg5zxlnI7i8Ls4fy5ddFSC3SbBFcu5eCakDszKv3vLZ9FXHgyqI-Qvbwei2i/s200/800px-Choc-Chip-Cookie.jpg" title="Chocolate chip cookie are much tastier than HTTP cookies" width="200" /></a>The <a href="https://en.wikipedia.org/wiki/HTTP_cookie">HTTP cookie</a> is simply a way for a website to identify a connecting browser session so that state can be kept between retrieving pages. Due to shortcomings in the design of cookies and implementation details in browsers this has lead to a selection of unwanted side effects. The specific issue that I am talking about here is the <a href="https://en.wikipedia.org/wiki/HTTP_cookie#Supercookie">supercookie</a> where the super prefix in this context has similar connotations as to when applied to the word villain.<br />
<br />
Whenever the browser requests a resource (web page, image, etc.) the server may return a cookie along with the resource, which your browser remembers. The cookie has a domain name associated with it and when your browser requests additional resources, if the cookie domain matches the requested resource's domain name, the cookie is sent along with the request.<br />
<br />
As an example, the first time you visit a page on <tt>www.example.foo.invalid</tt> you might receive a cookie with the domain <tt>example.foo.invalid</tt>, so the next time you visit a page on <tt>www.example.foo.invalid</tt> your browser will send the cookie along. Indeed it will also send it along for any page on <tt>another.example.foo.invalid</tt>.<br />
<br />
A supercookie is simply one where, instead of being limited to one sub-domain (<tt>example.foo.invalid</tt>), the cookie is set for a <a href="https://en.wikipedia.org/wiki/Top-level_domain">top level domain</a> (<tt>foo.invalid</tt>) so visiting any such domain (I used the invalid name in my examples but one could substitute <tt>com</tt> or <tt>co.uk</tt>) causes your web browser to give out the cookie. Hackers would love to be able to set up such cookies and potentially control and hijack many sites at a time.<br />
<br />
This problem was noted early on and browsers were not allowed to set cookie domains with fewer than two parts, so <tt>example.invalid</tt> or <tt>example.com</tt> were allowed but <tt>invalid</tt> or <tt>com</tt> on their own were not. This works fine for top level domains like <tt>.com</tt>, <tt>.org</tt> and <tt>.mil</tt> but not for countries where the domain registrar has rules about second levels, like the uk domain (uk domains must have a second level like <tt>.co.uk</tt>).<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinLQtWkF-8-C7Sgqhxw01CmN2pu7E6MdBLFNyM5QPgf5yFD2m0sqSL2TN04KWyAYJoLsxgLoxili3fzposJB5zhwYAcj0E-AL-cDqnwPkXmbwrQalCLSgGKn5AeXH1oXNGltDvQomCf4yu/s1600/baddomaincookie.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="NetSurf cookie manager showing a supercookie" border="0" height="145" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinLQtWkF-8-C7Sgqhxw01CmN2pu7E6MdBLFNyM5QPgf5yFD2m0sqSL2TN04KWyAYJoLsxgLoxili3fzposJB5zhwYAcj0E-AL-cDqnwPkXmbwrQalCLSgGKn5AeXH1oXNGltDvQomCf4yu/s200/baddomaincookie.png" title="NetSurf cookie manager showing a supercookie" width="200" /></a>There is no way to generate the correct set of top level domains with an algorithm so a database is required and is called the <a href="https://en.wikipedia.org/wiki/Public_Suffix_List">Public Suffix List</a> (PSL). This database is a simple text formatted list with wildcard and inversion syntax and is at time of writing around 180Kb of text including comments which compresses down to 60Kb or so with deflate.<br />
<br />
A few years ago, with ICANN allowing a great expansion of top level domains, the existing NetSurf supercookie handling was found to be wanting and I decided to implement a solution using the PSL. At that point in time the database was only 100Kb source or 40Kb compressed.<br />
<br />
I started by looking at the limited choice of existing libraries. In fact only the <a href="https://github.com/usrflo/registered-domain-libs/">regdom</a> library was adequate, but it used 150Kb of heap to load the pre-processed list. This would have had the drawback of increasing NetSurf heap usage significantly (we still have users on 8Mb systems). Because of this, and the need to run a PHP script to generate the pre-processed input, it was decided the library was not suitable.<br />
<br />
Lacking other choices I came up with my own implementation which used a Perl script to construct a tree of domains from the PSL in a static array with the label strings in a separate table. At the time my implementation added 70Kb of read only data which I thought reasonable and allowed for direct lookup of answers from the database.<br />
<br />
This solution still required a pre-processing step to generate the C source code, but Perl is much more readily available, is a language already used by our tooling, and we could always simply ship the generated file. As long as the generated file was updated at release time, as we already do for our fallback SSL certificate root set, this would be acceptable.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_07a8XOVKMc2IwSOhJpvtpYhXhIbBxD0Dqt1j1yd90imIO_weR6Q-ccIkwbZOf2kr12mE2YM0E7m4CN7tKIEMtSmr1GZI_5QnQ3AX755vwP3oh6Xfp_UoNDgifOokakLWigb9zR1x_YhE/s1600/wiresharkbaddomaincookie.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="wireshark session shown NetSurf sending a co.uk supercookie to bbc.co.uk" border="0" height="146" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_07a8XOVKMc2IwSOhJpvtpYhXhIbBxD0Dqt1j1yd90imIO_weR6Q-ccIkwbZOf2kr12mE2YM0E7m4CN7tKIEMtSmr1GZI_5QnQ3AX755vwP3oh6Xfp_UoNDgifOokakLWigb9zR1x_YhE/s200/wiresharkbaddomaincookie.png" title="wireshark session shown NetSurf sending a co.uk supercookie to bbc.co.uk" width="200" /></a></div>
I put the solution into NetSurf, was <a href="https://www.youtube.com/watch?v=edCqF_NtpOQ">pleased no-one seemed to notice</a> and moved on to other issues. Recently, while fixing a completely unrelated issue in the display of session cookies in the management interface, I realised I had some test supercookies present in the display. After the initial "that's odd" I realised with horror there might be a deeper issue.<br />
<br />
It quickly became evident the PSL generation was broken and had been for a long time; even worse, somewhere along the line the "redundant" empty generated source file had been removed and the ancient fallback code path was all that had been used.<br />
<br />
This issue had escalated somewhat from a trivial display problem. I took a moment to assess the situation a bit more broadly and came to the conclusion that there were a number of interconnected causes, centred around the lack of automated testing, which could be solved by extracting the PSL handling into a "support" library.<br />
<br />
NetSurf has several of these support libraries which can be used separately from the main browser project but are principally oriented towards it. These libraries are shipped and built in releases alongside the main browser codebase and mainly serve to make the API more obvious and modular. In this case my main aim was to have the functionality segregated into a separate module which could be tested, updated and monitored directly by <a href="http://ci.netsurf-browser.org/jenkins/view/Libraries/">our CI system</a>, meaning the embarrassing failure I had found could never occur again.<br />
<br />
Before creating my own library I did consider <a href="https://github.com/rockdaboot/libpsl">libpsl</a>, a library which had been created since I wrote my original implementation. Initially I was very interested in using this library given it managed a data representation within a mere 32Kb.<br />
<br />
Unfortunately the library integrates a great deal of IDN and punycode handling which was not required in this use case. NetSurf already has to handle IDN and punycode translations and uses punycode encoded domain names internally, only translating to unicode representations for display, so duplicating this functionality using other libraries would require a great deal of resource above the raw data representation.<br />
<br />
I put <a href="http://source.netsurf-browser.org/libnspsl.git/">the library</a> together based on the existing code generator Perl program and integrated the test set that comes along with the PSL. I was a little alarmed to discover that the PSL had almost doubled in size since the implementation was originally written and now the trivial test program of the library was weighing in at a hefty 120Kb.<br />
<br />
This stemmed from two main causes:<br />
<ol>
<li>there were now many more domain label strings to be stored</li>
<li>there were now many, many more nodes in the tree.</li>
</ol>
To address the first cause the length of each domain label string was moved into the unused padding space within each tree node, removing a byte from each domain label and saving 6Kb. Next it occurred to me while building the domain label string table that if the label to be added already existed as a substring within the table it could be elided.<br />
<br />
The domain labels were sorted from longest to shortest and added in order, searching for substring matches as the table was built; this saved another 6Kb. I am sure there are ways to reduce this further that I have missed (if you see them let me know!) but a 25% saving (47Kb to 35Kb) was a good start.<br />
<br />
The second cause was a little harder to address. The structure I started with to represent nodes in the tree looked reasonable at first glance.<br />
<br />
<div style="background: #f8f8f8; border: none; overflow: auto; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: green; font-weight: bold;">struct</span> pnode {
<span style="color: #b00040;">uint16_t</span> label_index; <span style="color: #408080; font-style: italic;">/* index into string table of label */</span>
<span style="color: #b00040;">uint16_t</span> label_length; <span style="color: #408080; font-style: italic;">/* length of label */</span>
<span style="color: #b00040;">uint16_t</span> child_node_index; <span style="color: #408080; font-style: italic;">/* index of first child node */</span>
<span style="color: #b00040;">uint16_t</span> child_node_count; <span style="color: #408080; font-style: italic;">/* number of child nodes */</span>
};
</pre>
</div>
<br />
I examined the generated table and observed that the majority of nodes were leaf nodes (had no children), which makes sense given the type of data being represented. By allowing two types of node, one for labels and a second for the child node information, the node size would be halved in most cases while requiring only a modest change to the tree traversal code.<br />
<br />
The only issue with this was finding a way to indicate that a node has child information. It was realised that domain labels have a maximum length of 63 characters, meaning their length can be represented in six bits, so a uint16_t was excessive. The space was split into two uint8_t parts, one for the length and one for a flag to indicate a child data node follows.<br />
<br />
<div style="background: #f8f8f8; border: none; overflow: auto; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: green; font-weight: bold;">union</span> pnode {
<span style="color: green; font-weight: bold;">struct</span> {
<span style="color: #b00040;">uint16_t</span> index; <span style="color: #408080; font-style: italic;">/* index into string table of label */</span>
<span style="color: #b00040;">uint8_t</span> length; <span style="color: #408080; font-style: italic;">/* length of label */</span>
<span style="color: #b00040;">uint8_t</span> has_children; <span style="color: #408080; font-style: italic;">/* the next table entry is a child node */</span>
} label;
<span style="color: green; font-weight: bold;">struct</span> {
<span style="color: #b00040;">uint16_t</span> node_index; <span style="color: #408080; font-style: italic;">/* index of first child node */</span>
<span style="color: #b00040;">uint16_t</span> node_count; <span style="color: #408080; font-style: italic;">/* number of child nodes */</span>
} child;
};
<span style="color: green; font-weight: bold;">static</span> <span style="color: green; font-weight: bold;">const</span> <span style="color: green; font-weight: bold;">union</span> pnode pnodes[<span style="color: #666666;">8580</span>] <span style="color: #666666;">=</span> {
<span style="color: #408080; font-style: italic;">/* root entry */</span>
{ .label <span style="color: #666666;">=</span> { <span style="color: #666666;">0</span>, <span style="color: #666666;">0</span>, <span style="color: #666666;">1</span> } }, { .child <span style="color: #666666;">=</span> { <span style="color: #666666;">2</span>, <span style="color: #666666;">1553</span> } },
<span style="color: #408080; font-style: italic;">/* entries 2 to 1794 */</span>
{ .label <span style="color: #666666;">=</span> {<span style="color: #666666;">37</span>, <span style="color: #666666;">2</span>, <span style="color: #666666;">1</span> } }, { .child <span style="color: #666666;">=</span> { <span style="color: #666666;">1795</span>, <span style="color: #666666;">6</span> } },
...
<span style="color: #408080; font-style: italic;">/* entries 8577 to 8578 */</span>
{ .label <span style="color: #666666;">=</span> {<span style="color: #666666;">31820</span>, <span style="color: #666666;">6</span>, <span style="color: #666666;">1</span> } }, { .child <span style="color: #666666;">=</span> { <span style="color: #666666;">8579</span>, <span style="color: #666666;">1</span> } },
<span style="color: #408080; font-style: italic;">/* entry 8579 */</span>
{ .label <span style="color: #666666;">=</span> {<span style="color: #666666;">0</span>, <span style="color: #666666;">1</span>, <span style="color: #666666;">0</span> } },
};
</pre>
</div>
<br />
This change reduced the node array size from 63Kb to 33Kb, almost a 50% saving. I considered using bitfields to try and pack the label length and has_children flag into a single byte, but such packing would not reduce the size of a node below 32 bits because it is unioned with the child structure.<br />
<br />
The possibility of using the spare uint8_t gained by bitfield packing to store an additional label node in three other nodes was considered, but it added a great deal of complexity to node lookup and table construction for a saving of around 4Kb, so it was not incorporated.<br />
<br />
With the changes incorporated the test program was a much more acceptable 75Kb, reasonably close to the size of the compressed source but with the benefits of direct lookup. Integrating the library's single API call into NetSurf was straightforward and resulted in correct operation when tested.<br />
<br />
This episode just reminded me of the dangers of code that can fail silently. It exposed our users to a security problem that we thought had been addressed almost six years ago and squandered the limited resources of the project. Hopefully it is a lesson we will not have to learn again any time soon. If there is a positive to take away it is that the new implementation is more space efficient, automatically built and, importantly, <b>tested</b>.Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com0tag:blogger.com,1999:blog-3711269760993993197.post-17394254050144882412016-08-22T13:13:00.001+01:002016-08-22T13:24:23.702+01:00Down the rabbit holeMy descent began with a user <a href="http://bugs.netsurf-browser.org/mantis/view.php?id=2446">reporting a bug</a> and I fear I am still on my way down.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBoRILWj_xvj1IBXPd3iMMT3dZSDu61V4n_0pJGQLKwaZd9fgyCEHmPksrFDvhqLBdWw0LQU1l5KNMFNdGVuhuoFe3J_JEnjXTguPb3zQLSV7aXhnYL-dmn4GsOUj9sntPz6H65tAHn8Rv/s1600/Rabbit_burrow_entrance.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Like Alice I headed down the hole. https://commons.wikimedia.org/wiki/File:Rabbit_burrow_entrance.jpg" border="0" height="178" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBoRILWj_xvj1IBXPd3iMMT3dZSDu61V4n_0pJGQLKwaZd9fgyCEHmPksrFDvhqLBdWw0LQU1l5KNMFNdGVuhuoFe3J_JEnjXTguPb3zQLSV7aXhnYL-dmn4GsOUj9sntPz6H65tAHn8Rv/s200/Rabbit_burrow_entrance.jpg" title="Like Alice I headed down the hole. https://commons.wikimedia.org/wiki/File:Rabbit_burrow_entrance.jpg" width="200" /></a></div>
The bug was simple enough: a windows bitmap file caused NetSurf to crash. Pretty quickly this was tracked down to the libnsbmp library attempting to decode the file. As to why we have a heavily used library for bitmaps? I am afraid they are part of every <a href="https://en.wikipedia.org/wiki/ICO_(file_format)">icon file</a> and many websites still have <a href="https://en.wikipedia.org/wiki/Favicon">favicons</a> using that format.<br />
<br />
Some time with a hex editor and the file format specification soon showed that the image in question was malformed and had a bad offset header entry. So I was faced with two issues, firstly that the decoder crashed when presented with badly encoded data and secondly that it failed to deal with incorrect header data.<br />
<br />
This is typical of bug reports from real users: the obvious issues have already been encountered by the developers and unit tests written to prevent them; what remains is harder to reproduce. After a debugging session with Valgrind and electric fence I discovered the crash was actually caused by running off the front of an allocated block due to an incorrect bounds check. Fixing the <a href="http://source.netsurf-browser.org/libnsbmp.git/commit/?id=6dadfdcac3331d8f0a56342b973c59872f954e3c">bounds check</a> was simple enough, as was <a href="http://source.netsurf-browser.org/libnsbmp.git/commit/?id=b019dacb93b82c43e9f521af496fdf45cd091469">working round the bad header value</a>, and after <a href="http://source.netsurf-browser.org/libnsbmp.git/commit/?id=b017792feaa0c28ef22d0d60e11612846e8e1db5">adding a unit test for the issue</a> I almost moved on.<br />
<br />
Almost...<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsEXu1zJH_S24FhZarsXb9i9Y0Q3fatoacJYOJJ_46Yp9Bwe_YpUqlLqK8EttwxgiRTxL-k7FLNC3dk7VpfFWR-CMDFKOIbsy61ZR_ZhaCLLgF35KeyR_MwDrG6HG3EM_xs6X8-eCAxKIj/s1600/Rabbit_american_fuzzy_lop_buck_white.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="american fuzzy lop are almost as cute as cats https://commons.wikimedia.org/wiki/File:Rabbit_american_fuzzy_lop_buck_white.jpg" border="0" height="104" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsEXu1zJH_S24FhZarsXb9i9Y0Q3fatoacJYOJJ_46Yp9Bwe_YpUqlLqK8EttwxgiRTxL-k7FLNC3dk7VpfFWR-CMDFKOIbsy61ZR_ZhaCLLgF35KeyR_MwDrG6HG3EM_xs6X8-eCAxKIj/s200/Rabbit_american_fuzzy_lop_buck_white.jpg" title="american fuzzy lop are almost as cute as cats https://commons.wikimedia.org/wiki/File:Rabbit_american_fuzzy_lop_buck_white.jpg" width="200" /></a></div>
We already used the <a href="http://entropymine.com/jason/bmpsuite/bmpsuite/html/bmpsuite.html">bitmap test suite of images</a> to check the library decode, which was giving us a good 75% or so line coverage (I long ago added <a href="http://ci.netsurf-browser.org/jenkins/view/Categorized/job/coverage-libnsbmp/">coverage testing to our CI system</a>), but I wondered if there was a test set that might increase the coverage and perhaps exercise some more of the bounds checking code. A bit of searching turned up the <a href="http://lcamtuf.coredump.cx/afl/">american fuzzy lop</a> (AFL) project's <a href="http://lcamtuf.coredump.cx/afl/demo/">synthetic corpora</a> of bmp and ico images.<br />
<br />
After checking with the AFL authors that the images were usable in our project I <a href="http://source.netsurf-browser.org/libnsbmp.git/commit/?id=f04838b04eda130197c66a5ccccd9b4420557b95">added them to our test corpus</a> and discovered a whole heap of trouble. After fixing more bounds checks and signedness issues I finally had a library I was pretty sure was solid, with over 85% test coverage.<br />
<br />
Then I had the idea of actually running AFL on the library. I had been avoiding this because my previous experimentation with other fuzzing utilities had been utterly frustrating with a very poor return on the time invested. Following the quick start guide looked straightforward enough, so I thought I would spend a short amount of time and maybe learn a useful tool.<br />
<br />
I downloaded the AFL source and built it with a simple make which was an encouraging start. The library was compiled in debug mode with AFL instrumentation simply by changing the compiler and linker environment variables.<br />
<br />
<pre>$ LD=afl-gcc CC=afl-gcc AFL_HARDEN=1 make VARIANT=debug test
afl-cc 2.32b by <lcamtuf@google.com>
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: src/libnsbmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 751 locations (64-bit, hardened mode, ratio 100%).
AR: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/libnsbmp.a
COMPILE: test/decode_bmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 52 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: test/decode_ico.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 65 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_ico
afl-cc 2.32b by <lcamtuf@google.com>
Test bitmap decode
Tests:606 Pass:606 Error:0
Test icon decode
Tests:392 Pass:392 Error:0
TEST: Testing complete</pre>
<br />
I stuffed the AFL build directory on the end of my PATH, created a directory for the output and ran afl-fuzz<br />
<br />
<pre>afl-fuzz -i test/bmp -o findings_dir -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null</pre>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLLHBvQGEeaW-ubwBnbANcW9IUAtYxD_v0_8D7Il4bmw1brWB1NTXX_0OrMg5JVW3N5uQ3xOOaKScZBJeTsP9bRT2RP8dUe-qqwE3rla-gY59XK9AjdHG61xnGv5MpzIJycm-cGfU2mjZ-/s1600/aflrun.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="182" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLLHBvQGEeaW-ubwBnbANcW9IUAtYxD_v0_8D7Il4bmw1brWB1NTXX_0OrMg5JVW3N5uQ3xOOaKScZBJeTsP9bRT2RP8dUe-qqwE3rla-gY59XK9AjdHG61xnGv5MpzIJycm-cGfU2mjZ-/s320/aflrun.png" width="320" /></a></div>
The result was immediate and not a little worrying: within seconds there were crashes, and lots of them! Over the next couple of hours I watched as the unique crash total climbed into the triple digits.<br />
<br />
I was forced to abort the run at this point as, despite clear warnings in the AFL documentation of the demands of the tool, my laptop was clearly not cut out to do this kind of work and had become distressingly hot.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipXlIoEP1fw1DgggBaX8Y2r6-OoKlMualgASFtx7hLJRxR-LKp7vw53KKIJ9KHtfZjkKD_IowEJTLJJJ-XK-kfTYFfgLs5jb_K8xTBKRFTKNU7FMrCE3lFu0aCGsO3q55kpfCAa_dnpPmA/s1600/low_freq.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="64" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipXlIoEP1fw1DgggBaX8Y2r6-OoKlMualgASFtx7hLJRxR-LKp7vw53KKIJ9KHtfZjkKD_IowEJTLJJJ-XK-kfTYFfgLs5jb_K8xTBKRFTKNU7FMrCE3lFu0aCGsO3q55kpfCAa_dnpPmA/s320/low_freq.png" width="320" /></a></div>
AFL has a visualisation tool so you can see what kind of progress it is making. This produced a graph that showed just how fast it managed to produce crashes and how much the return plateaus after just a few cycles, although it was still finding a new unique crash every ten minutes or so when I aborted the run.<br />
<br />
I dove in to analyse the crashes and it immediately became obvious the main issue was caused when the test tool attempted allocations of absurdly large bitmaps. The browser itself uses a heuristic to determine the maximum image size based on used memory and several other values, but for the test tool I simply applied an upper bound of 48 megabytes per decoded image, which fits easily within the fuzzer's default heap limit of 50 megabytes.<br />
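The harness fix amounts to a size sanity check in the bitmap allocation path. A minimal sketch, assuming a simple width/height callback shape rather than the actual libnsbmp API:

```c
#include <stdint.h>
#include <stdlib.h>

/* Cap decoded images at 48MB so even hostile header dimensions stay
 * inside the fuzzer's 50MB heap limit. The callback shape here is an
 * assumption for illustration, not the real NetSurf test code. */
#define MAX_IMAGE_BYTES (48u * 1024u * 1024u)
#define BYTES_PER_PIXEL 4u

void *bitmap_create(uint32_t width, uint32_t height)
{
        /* widen before multiplying so the size calculation cannot wrap */
        uint64_t size = (uint64_t)width * height * BYTES_PER_PIXEL;
        if (size == 0 || size > MAX_IMAGE_BYTES)
                return NULL; /* decode fails cleanly instead of OOMing */
        return calloc((size_t)width * (size_t)height, BYTES_PER_PIXEL);
}
```

Rejecting the allocation makes the decode fail through a graceful error path, so the fuzzer no longer reports out-of-memory aborts as crashes.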
<br />
The main source of "hangs" also came from large allocations, so once the test was fixed afl-fuzz was re-run with a timeout parameter set to 100ms. This time, after several minutes, no crashes and only a single hang had been found, which came as a great relief. At that point my laptop had a hard shutdown due to a thermal event!<br />
<br />
Once the laptop cooled down I spooled up a more appropriate system to perform this kind of work: a 24-way 2.1GHz Xeon. A Debian Jessie guest VM with 20 processors and 20 gigabytes of memory was created and the build was replicated and instrumented.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjD1sy8-b10_JUCER-fdLhwS7kA4sL_-zrHHkMXBLVa6dde5a12k0xu6L66Iz8YDyKcl18Xi2ItCbLsDLFgmeN3h3fkEMiDp_u7IasDS7LZu5d_m7v4gJSPKBL37_XzimF30kfiOjzA-5C6/s1600/afl-master.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="AFL master node display" border="0" height="188" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjD1sy8-b10_JUCER-fdLhwS7kA4sL_-zrHHkMXBLVa6dde5a12k0xu6L66Iz8YDyKcl18Xi2ItCbLsDLFgmeN3h3fkEMiDp_u7IasDS7LZu5d_m7v4gJSPKBL37_XzimF30kfiOjzA-5C6/s320/afl-master.png" title="AFL master node display" width="320" /></a></div>
To fully utilise this system the next test run would use AFL in parallel mode. In this mode there is a single "master" instance running all the deterministic checks and many "secondary" instances performing random tweaks.<br />
<br />
If I have one tiny annoyance with AFL, it is that breeding and feeding a herd of rabbits by hand is annoying and something I would like to see a convenience utility for.<br />
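Such a convenience utility is easy enough to sketch in shell. The instance naming and count here are my own invention, while -M and -S are afl-fuzz's real master/secondary flags:

```shell
# Print the afl-fuzz command lines for one master (-M, deterministic
# stages) and N secondaries (-S, random havoc stages) sharing a sync
# directory; pipe the output to sh to actually launch the warren.
gen_afl_fleet() {
    target=$1; corpus=$2; sync=$3; n=$4
    printf 'afl-fuzz -i %s -o %s -M fuzzer00 -- %s @@ /dev/null &\n' \
        "$corpus" "$sync" "$target"
    i=1
    while [ "$i" -le "$n" ]; do
        printf 'afl-fuzz -i %s -o %s -S fuzzer%02d -- %s @@ /dev/null &\n' \
            "$corpus" "$sync" "$i" "$target"
        i=$((i + 1))
    done
}

gen_afl_fleet ./test_decode_bmp test/bmp sync_dir 18
```

All instances write into the same output directory, which is what lets afl-whatsup summarise the whole fleet later.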
<br />
The warren was left overnight with 19 instances and by morning had generated crashes again. This time though the crashes actually appeared to be real failures.<br />
<br />
<pre>$ afl-whatsup sync_dir/
Summary stats
=============
Fuzzers alive : 19
Total run time : 5 days, 12 hours
Total execs : 214 million
Cumulative speed : 8317 execs/sec
Pending paths : 0 faves, 542 total
Pending per fuzzer : 0 faves, 28 total (on average)
Crashes found : 554 locally unique</pre>
<br />
All the crashing test cases are available, and a simple file command immediately showed that all the crashing test files had one thing in common: the height of the image was -2147483648. This seemingly odd number is meaningful to a programmer; it is the most negative number that can be stored in a 32bit integer (INT32_MIN). I immediately examined the source code that processes the height in the image header.<br />
<br />
<pre>if ((width <= 0) || (height == 0))
        return BMP_DATA_ERROR;
if (height < 0) {
        bmp->reversed = true;
        height = -height;
}</pre>
<br />
The bug is in the negation that makes the height positive: for INT32_MIN the result cannot be represented, so height ends up as 0 after the existing check for zero has already passed, causing a crash later in execution. A <a href="http://source.netsurf-browser.org/libnsbmp.git/commit/?id=4fd92297e0a144881f37ffdb1c19fab6b0d3e47d">simple fix was applied</a> and a test case added, removing the crash and any possible future failure from this cause.<br />
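The shape of the fix can be sketched as follows (my simplification, not the actual libnsbmp patch): reject INT32_MIN alongside zero before negating, since -INT32_MIN is not representable in a 32-bit int.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified sketch of the corrected check. The enum and function name
 * are illustrative, not the exact libnsbmp interface. */
typedef enum { BMP_OK, BMP_DATA_ERROR } bmp_result;

bmp_result check_dimensions(int32_t width, int32_t *height, bool *reversed)
{
        /* INT32_MIN has no positive counterpart, so it must be
         * rejected here along with the other invalid dimensions */
        if ((width <= 0) || (*height == 0) || (*height == INT32_MIN))
                return BMP_DATA_ERROR;
        *reversed = false;
        if (*height < 0) {
                *reversed = true;
                *height = -*height; /* safe: *height > INT32_MIN here */
        }
        return BMP_OK;
}
```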
<br />
Another AFL run has been started and after a few hours has yet to find a crash or a non-false-positive hang, so it looks like any remaining crashes will be much harder to uncover.<br />
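Incidentally, inputs of the shape AFL converged on are tiny and can be built by hand for a regression corpus. This is my own reconstruction of a minimal BMP whose height field is INT32_MIN, not one of the actual AFL-generated files:

```shell
# Build a 58-byte 1x1 24-bit BMP with height 0x80000000 (INT32_MIN).
# The octal printf escapes spell out the little-endian header bytes.
make_badbmp() {
    out=$1
    printf 'BM\072\000\000\000\000\000\000\000\066\000\000\000' > "$out"  # file header: size 58, data offset 54
    printf '\050\000\000\000\001\000\000\000\000\000\000\200' >> "$out"   # info header: size 40, width 1, height INT32_MIN
    printf '\001\000\030\000\000\000\000\000\004\000\000\000' >> "$out"   # 1 plane, 24bpp, uncompressed, 4-byte image
    printf '\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000' >> "$out"  # resolution/palette fields
    printf '\377\377\377\000' >> "$out"                                   # one white pixel plus row padding
}
make_badbmp bad.bmp  # file(1) should report the -2147483648 height, as above
```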
<br />
Main lessons learned are:<br />
<ul>
<li>AFL is an easy to use, immensely powerful and effective tool. The state of the art has taken a massive step forward.</li>
<li>The test harness is part of the test! Make sure it does not behave poorly and cause issues itself.</li>
<li>Even a library with extensive test coverage and real-world users can benefit from this technique, though it remains to be seen how quickly the rate of return will diminish after the initial fixes.</li>
<li>Use the right tool for the job! Ensure you heed the warnings in the manual, as AFL uses a lot of resources including CPU, disc and memory.</li>
</ul>
<div>
I will of course be debugging any new crashes that occur and perhaps turning my sights to all the project's other unit-tested libraries. I will also be investigating the generation of our own custom test corpus from AFL to replace the demo set, which will hopefully increase our unit test coverage even further.<br />
<br />
Overall this has been my first successful use of a fuzzing tool and a very positive experience. I would wholeheartedly recommend using AFL to find errors and perhaps even integrating it as part of a CI system.</div>
Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com1tag:blogger.com,1999:blog-3711269760993993197.post-41876953171974755412016-03-13T23:07:00.002+00:002016-03-13T23:07:33.795+00:00I changed my mind, Erase and rewindMy <a href="http://vincentsanders.blogspot.co.uk/2016/02/stack-em-pack-em-and-rack-em.html">recent rack design</a> turned out to simply not be practical. It did not hold all the SBC I needed it to and, most troubling, accessing the connectors was impractical. I was forced to remove the enclosure from the rack and go back to piles of SBC on a shelf.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQG_OiTNp_cwfzNvlyXtTs98HqxeTG3uNSjH2FIDIvZs8y1mBkuuP5lYzhEFiqZnhHwc9agMeke7dXdpRAzqHASrRLo0ZUI4wOob9FUZI84VTHTuBwcIhBmrRQA7-G_L742IYzfwEj4oBU/s1600/IMG_5744.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="View of the acrylic being laser cut through the heavily tinted window" border="0" height="116" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQG_OiTNp_cwfzNvlyXtTs98HqxeTG3uNSjH2FIDIvZs8y1mBkuuP5lYzhEFiqZnhHwc9agMeke7dXdpRAzqHASrRLo0ZUI4wOob9FUZI84VTHTuBwcIhBmrRQA7-G_L742IYzfwEj4oBU/s200/IMG_5744.JPG" title="View of the acrylic being laser cut through the heavily tinted window" width="200" /></a></div>
This sent me back to the beginning of the design process. The requirement for easy access to connectors had been compromised on in my first solution because I wanted a compact 1U size. This time I returned to my initial toast rack layout but retaining the SBC inside their clip cases.<br />
<br />
By facing the connectors downwards and providing basic cable management the design should be much more practical.<br />
<br style="clear: both;" />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKjKqPBJs-86CX3x9orpus2xn3Q5mjIKy8N58OusWe7XO5-aWowNryw61v3LYyHm6AJrBABUzDs2JDIPTXpmJzuBMtAF0XKfaeVoRXqyaGwnCRqDao93tfMhTetcm8V4MenR0pyDto_qEf/s1600/IMG_5748.JPG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="118" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKjKqPBJs-86CX3x9orpus2xn3Q5mjIKy8N58OusWe7XO5-aWowNryw61v3LYyHm6AJrBABUzDs2JDIPTXpmJzuBMtAF0XKfaeVoRXqyaGwnCRqDao93tfMhTetcm8V4MenR0pyDto_qEf/s200/IMG_5748.JPG" width="200" /></a>My design process is to use the <a href="http://www.ribbonsoft.com/en/">QCAD package</a> to create layered 2D outlines which are then converted from DXF into toolpaths with Lasercut CAM software. The toolpaths are then uploaded to the <a href="http://wiki.makespace.org/Equipment/Laser_Cutter">laser cutter</a> directly from the PC running Lasercut.<br />
<br style="clear: both;" />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipsTkyGo8D9cCSRbnEk3YefKFaxpG3sdm7Ub2a2K8-jqz2FD952VMPuqPZpgX-OE6y5JxwZNtDrgm6_mqPVRknHpqmWUXh2POyG5d2OTabb_lKAetrRrasi9g0OrsFioEUicuGb2Fb2Q7m/s1600/vertical_rack_slots.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Assembled sub rack enclosure" border="0" height="113" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipsTkyGo8D9cCSRbnEk3YefKFaxpG3sdm7Ub2a2K8-jqz2FD952VMPuqPZpgX-OE6y5JxwZNtDrgm6_mqPVRknHpqmWUXh2POyG5d2OTabb_lKAetrRrasi9g0OrsFioEUicuGb2Fb2Q7m/s200/vertical_rack_slots.jpg" title="Assembled sub rack enclosure" width="200" /></a>Despite the laser cutters being professional grade systems, the Lasercut software is a continuous cause of issues for many users: it is the only closed source piece of software in the production process and it has a pretty poor user interface. On this occasion my main issue was that my design was quite large at 700mm by 400mm, which caused the software to crash repeatedly. I broke the design down into two halves and this allowed me to continue.<br />
<br style="clear: both;" />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixPvQfulD7jQfURT7d3dh6aaDQ_7uMTSZI12nSuGLeeULfZtGPIWil1BBBKFEQDlaommYSmCBkuwvN0tJWrHmyCPJasO5UJFXxRELXbufy0Em0j2n8z__TwgvTV6X_GfQ6wNpdUGC_D7hL/s1600/IMG_5770.JPG" imageanchor="1" style="clear: both; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="87" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixPvQfulD7jQfURT7d3dh6aaDQ_7uMTSZI12nSuGLeeULfZtGPIWil1BBBKFEQDlaommYSmCBkuwvN0tJWrHmyCPJasO5UJFXxRELXbufy0Em0j2n8z__TwgvTV6X_GfQ6wNpdUGC_D7hL/s200/IMG_5770.JPG" width="200" /></a>Once I had defeated the software the design was laser cut from 3mm clear extruded acrylic. The assembled enclosure is secured with 72 off M3 nuts and bolts. The resulting construction is very strong and probably contains much more material than necessary.<br />
<br />
One interesting thing I discovered is that in going from a 1U enclosure holding 5 units to a 2U design holding 11 units I had increased the final weight from 320g to 980g and when all 11 SBC are installed that goes up to a whopping 2300g. Fortunately this is within the mechanical capabilities of the material but it is the heaviest thing I have ever constructed from 3mm acrylic.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYSZhgQKiA7PSWl2JKgKv3kqvytnV_8CvVLvZXhHYvwIZZJBNPNIaTiXGrpsp9k_d46pysxvKvKvwVDYRyD8YgWwg8p2ThKil83QWr3lW5B3VX7rGxGXcutdSBTie_LyuooDRZ4HrUbD-6/s1600/IMG_5771.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="bolted into the rack and operating" border="0" height="106" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYSZhgQKiA7PSWl2JKgKv3kqvytnV_8CvVLvZXhHYvwIZZJBNPNIaTiXGrpsp9k_d46pysxvKvKvwVDYRyD8YgWwg8p2ThKil83QWr3lW5B3VX7rGxGXcutdSBTie_LyuooDRZ4HrUbD-6/s200/IMG_5771.JPG" title="bolted into the rack and operating" width="200" /></a>Once installed in the rack with all SBC inserted and connected this finally works and provides a practical solution. The shelf is finally clear of SBC and has enough space for all the other systems I need to accommodate for various projects.<br />
<br />
As usual the <a href="https://github.com/kyllikki/designs/tree/master/vertical_rack_slots">design files</a> are all freely available though I really cannot see anyone else needing to replicate this.Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com1tag:blogger.com,1999:blog-3711269760993993197.post-85293939203894972772016-03-01T20:20:00.001+00:002016-03-01T20:20:56.014+00:00Hope is tomorrow's veneer over today's disappointment.Recently I have been very hopeful about the <a href="https://www.96boards.org/products/ce/hikey/">96boards Hikey</a> SBC and as <a href="https://en.wikipedia.org/wiki/Evan_Esar">Evan Esar</a> predicted I have therefore been very disappointed. I was given a Hikey after a Linaro connect event some time ago by another developer who could not get the system working usefully and this is the tale of what followed.<br />
<h2>
The Standard Design</h2>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5NDh1G4EDevf8kWJ8saDzbOe_I9k1sLy78wnFVJ3wB8EEm3t36DlZape9ypFyzPaxKBzHlBT8R7PQtHHsRI2O7sLYLkhYhAXQVMIOONB2Cgfr2rsNDFi516EPuK7lArp58isLt8PeyZd6/s1600/jikey.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="137" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5NDh1G4EDevf8kWJ8saDzbOe_I9k1sLy78wnFVJ3wB8EEm3t36DlZape9ypFyzPaxKBzHlBT8R7PQtHHsRI2O7sLYLkhYhAXQVMIOONB2Cgfr2rsNDFi516EPuK7lArp58isLt8PeyZd6/s200/jikey.jpg" width="200" /></a></div>
This board design was presented as Linaro creating a standard for the 64bit Single Board Computer (SBC) market. So I had expected that a project with such lofty goals would have considered many factors and provided at least as good a solution as the existing 32bit boards.<br />
<br />
The lamentable hubris of creating a completely new form factor unfortunately sets a pattern for the whole enterprise. Given the aim towards "makers" I would have accepted that the system would not be an ATX PC size motherboard, but <a href="https://en.wikipedia.org/wiki/Computer_form_factor">mini/micro/nano and pico ITX</a> have been available for several years.<br />
<br />
If opting for a smaller "credit card" form factor why not use one of the common ones that have already been defined by systems such as the Raspberry Pi B+? Instead now every 96Board requires special cases and different expansion boards.<br />
<br />
Not content with defining their own form factor, the design also uses an 8-18V supply; this is the only SBC I own that is not fed from a 5V supply. I understand that a system might require more current than a micro USB connector can provide, but for example the Banana Pi manages with a <a href="https://en.wikipedia.org/wiki/Coaxial_power_connector">DC barrel jack</a> readily capable of delivering 25W, which would seem more than enough.<br />
<br />
The new form factor forced the I/O connectors to be placed differently to other SBC. Given the opportunity to concentrate all connectors on one edge (like ATX designs) and avoid the issues where two or three sides are used, the 96board design instead puts connectors on two edges, removing any possible benefit this might have given.<br />
<br />
The actual I/O connectors specified are rather strange. There is a mandate for HDMI, removing the possibility of future display technology changes. The odd USB arrangement of two single sockets instead of a stacked pair seems to be an attempt to keep height down, but the expansion headers and CPU heatsink mean this is largely moot.<br />
<br />
The biggest issue though is mandating WIFI but not Ethernet (even as an option); everything else in the design I could logically understand, but this makes no sense. It means the design is not useful for many applications without adding USB devices.<br />
<br />
Expansion is presented as a 2mm pitch DIL socket for "low speed" signals and a high density connector for "high speed" signals. The positioning and arrangement of these connectors proffered an opportunity to improve upon previous SBC designs which was not taken. The use of 2mm pitch and low voltage signals instead of the more traditional 2.54mm pitch 3.3v signals means that most maker type applications will need adapting from the popular Raspberry Pi and Arduino style designs.<br />
<br />
In summary the design appears to have been a Linaro project to favour one of their members: it took a HiSilicon Android phone reference design and put it onto a board with no actual thought beyond getting it done, then afterwards attempted to turn that into a specification. This has simply not worked as an approach.<br />
<br />
My personal opinion is that this specification is fatally flawed and is a direct cause of the bizarre situation where the "consumer" specification exists alongside the "enterprise" edition, which itself has an option of microATX form factor anyhow!<br />
<h2>
The Implementation</h2>
If we ignore the specification appearing to be nothing more than a codification of the original HiKey design we can look at the HiKey as an implementation.<br />
<br />
Initially the board required modifying to add headers to attach a USB to 1.8V LVTTL serial adaptor on the UART0 serial port. Once Andy Simpkins had made this change for me I was able to work through the instructions and attempt to install a bootloader and OS image.<br />
<br />
The initial software was essentially HiSilicon vendor code using the Android fastboot system to configure booting. There was no source and the Ubuntu OS images were an obvious afterthought to the Android images. Just getting these images installed required a great deal of effort, repetition and debugging. It was such a dreadful experience it signalled the commencement of one of the repeated hiatuses throughout this project; the allure of 64 bit ARM computing has its limits even for me.<br />
<br />
When I returned to the project I attempted to use the system from the on-board eMMC but the pre-built binary-only kernel and OS image were very limited. Building a replacement kernel, or even modules for the existing one, proved fruitless and the system was dreadfully unstable.<br />
<br />
I wanted to use the system as a builder for some Open Source projects but the system instability ruled this out. I considered attempting to use virtualisation which would also give better system isolation for builder systems. By using KVM running a modern host kernel and OS as a guest this would also avoid issues with the host systems limitations. At which point I discovered the system had no virtualisation enabled apparently because the bootloader lacked support.<br />
<br />
In addition to these software issues there were hardware problems; despite forcing the use of USB for all additional connectivity, the USB implementation was dreadful. For a start, all USB peripherals have to run at the same speed! One cannot mix full (12Mbit) and high speed (480Mbit) devices, which makes adding a USB Ethernet and SATA device challenging when you cannot use a keyboard.<br />
<br />
And because I needed more challenges only one of the USB root hubs was functional. In effect this made the console serial port critical, as it was the only reliable way to reconfigure the system without a keyboard or network link (and WIFI was not reliable either).<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_63JsoCvVu2Wup-eN5fJN5cKIh3uhc-JIRhUhLBmxGHGXwKG_e3AEjeT06fTmyeapWWKg9jr7zJpBWGF0XqrfrSo-yDOlTnO9G8ZM_cR1ihAFmezqE2WaPD7ckwiGp2qd-Vw8n-c-pzI-/s1600/HiKey_box.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="191" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_63JsoCvVu2Wup-eN5fJN5cKIh3uhc-JIRhUhLBmxGHGXwKG_e3AEjeT06fTmyeapWWKg9jr7zJpBWGF0XqrfrSo-yDOlTnO9G8ZM_cR1ihAFmezqE2WaPD7ckwiGp2qd-Vw8n-c-pzI-/s200/HiKey_box.jpg" width="200" /></a>After another long pause in proceedings I decided that I should house all the components together and that perhaps being out on my bench might be the cause of some instability. I purchased a powered Amazon basics USB 2 hub, an Ethernet adaptor and a USB 2 SATA interface in the hope of accessing some larger mass storage.<br />
<br />
The USB hub power supply was 12V DC which matched the Hikey requirements, so I worked out I could use a single 4A capable supply and run a 3.5inch SATA hard drive too. <a href="https://github.com/kyllikki/designs/tree/master/96boards-ce">I designed a laser cut enclosure</a> and mounted all the components. As it turned out I only had a 2.5inch hard drive handy so the enclosure is a little oversize. If I were redoing this design I would attempt to make it fit in 1U of height and be mountable in a 19inch rack; instead it is an 83mm high (under 2U) box.<br />
<br />
A new software release had also become available which purported to use a UEFI bootloader. I struggled to install this version without success, hampered by the undocumented change of console from UART0 (unpopulated header) to UART3 on the low speed 2mm pitch header. The system seemed to start the kernel, which panicked and hung whether booting from eMMC or SD card. Once again the project went on hold after I had spent tens of hours trying to make progress.<br />
<h2>
Third time's a charm</h2>
As the year rolled to a close I once again was persuaded to look at the hikey, I followed the much <a href="https://github.com/96boards/documentation/wiki/HiKeyGettingStarted">improved instructions </a>and installed the shiny new <a href="https://builds.96boards.org/releases/hikey/linaro/debian/latest/">November software release</a> which appears to have been made for the <a href="http://www.lemaker.org/page/hikey.html">re-release of the Hikey through LeMaker</a>. This time I obtained a Debian "jessie" system that booted from the eMMC.<br />
<br />
Having a booted system meant I could finally try and use it. I had decided to try and use the system to host virtual machines used as builders within the NetSurf CI system.<br />
<br />
The basic OS uses a mixture of normal Debian packages with some replacements from Linaro repositories. I would have preferred to see more use of Debian packages, even if they were from the backports repositories, but on the positive side it is good to see the use of Debian instead of Ubuntu.<br />
<br />
The kernel is a heavily patched 3.18 built in a predominantly monolithic (without modules) manner with the usual exceptions such as the mali and wifi drivers (both of which appear to be binary blobs). The use of a non-mainline capable SoC means the standard generic distribution kernels cannot be used and unless Linaro choose to distribute a kernel with the feature built in it is necessary to compile your own from sources.<br />
<br />
The default install has a linaro user which I renamed to my user, and I ensured all the ssh keys and passwords on the system were changed. This is an important step when using these pre-supplied images, as often a booted system is identical to every other copy.<br />
<br />
To access mass storage my only option was via USB; indeed to add any additional expansion that is the only choice. The first issue here is that the USB host support is compiled in, so when the host ports are initialised it is not possible to select a speed other than 12MBit. The speed is changed to 480Mbit by using a user space application found in the user's home directory (why this is not a tool provided by a package and held in sbin I do not know).<br />
<br />
When the usb_speed tool is run there is a chance that the previously enumerated devices will be rescanned, so what was /dev/sda becomes /dev/sdb. If this happens there is a high probability that the system must be rebooted to prevent random crashes due to the "zombie" device.<br />
<br />
Because the speed change operation is unreliable it cannot be reliably placed in the boot sequence so this must be executed by hand on each boot to get access to the mass storage.<br />
<br />
The NetSurf project already uses an x86_64 virtual host system which runs an LVM physical volume from which we allocate logical volumes for each VM. I initially hoped to do this with the hikey but as soon as I tried to use a logical volume with a VM the system locked up with nothing shown on the console. I did not really try very hard to discover why and instead simply used files on disc as virtual drives, which seemed to work.<br />
<br />
To provide reliable network access I used a USB attached Ethernet device, this like the mass storage suffered from unreliable enumeration and for similar reasons could not be automated requiring manually using the serial console to start the system.<br />
<br />
Once the system was started I needed to install the guest VM. I had hoped I might be able to install locally from Debian install media as I do for x86 using the libvirt tools. After a great deal of trial and error I finally was forced to abandon this approach when I discovered the Linaro kernel is lacking iso9660 support so installing from standard media was not possible.<br />
<br />
Instead I used the <a href="http://blog.eciton.net/uefi/qemu-aarch64-jessie.html">instructions provided by Leif Lindholm</a> to create a virtual machine image on my PC and copied the result across. These instructions are great except I used version 2.5 of Qemu instead of 2.2 which had no negative effect. I also installed the Debian backports for Jessie to get an up to date 4.3 kernel.<br />
<br />
After copying the image to the Hikey I started it by hand from the command line as a four core virtual machine and was successfully able to log in. The guest would operate for up to a day before stopping with output such as<br />
<br />
<pre>$
Message from syslogd@ciworker13 at Jan 29 07:45:28 ...
kernel:[68903.702501] BUG: soft lockup - CPU#0 stuck for 27s! [mv:24089]
Message from syslogd@ciworker13 at Jan 29 07:45:28 ...
kernel:[68976.958028] BUG: soft lockup - CPU#2 stuck for 74s! [swapper/2:0]
Message from syslogd@ciworker13 at Jan 29 07:47:39 ...
kernel:[69103.199724] BUG: soft lockup - CPU#3 stuck for 99s! [swapper/3:0]
Message from syslogd@ciworker13 at Jan 29 07:53:21 ...
kernel:[69140.321145] BUG: soft lockup - CPU#3 stuck for 30s! [rs:main Q:Reg:505]
Message from syslogd@ciworker13 at Jan 29 07:53:21 ...
kernel:[69192.880804] BUG: soft lockup - CPU#0 stuck for 21s! [jbd2/vda3-8:107]
Message from syslogd@ciworker13 at Jan 29 07:53:21 ...
kernel:[69444.805235] BUG: soft lockup - CPU#3 stuck for 22s! [swapper/3:0]
Message from syslogd@ciworker13 at Jan 29 07:55:21 ...
kernel:[69570.177600] BUG: soft lockup - CPU#1 stuck for 112s! [systemd:1]
Timeout, server 192.168.7.211 not responding.</pre>
<br />
After this output the host system would not respond and had to be power cycled, never mind the guest!<br />
<br />
Once I changed to single core operation the system would run for some time until the host suffered from the dreaded kernel OOM killer. I was at a loss as to why the oom killer was running as the VM was only allocated half the physical memory (512MB) allowing the host what I presumed to be an adequate amount.<br />
<br />
By adding a 512MB swapfile the system was able to push out the few hundred kilobytes it wanted to swap and was now stable! The swapfile of course has to be started by hand, as the external storage is unreliable and unavailable at boot.<br />
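The by-hand swap setup amounts to the standard util-linux commands; this sketch is my own, not lifted from the Linaro image, and swapon must run as root once the external storage has been attached:

```shell
# Create a swapfile of the given size and write the swap signature.
make_swapfile() {
    path=$1; size_mb=$2
    dd if=/dev/zero of="$path" bs=1M count="$size_mb" 2>/dev/null
    chmod 600 "$path"        # swap files must not be world readable
    mkswap "$path" >/dev/null
}
# As root on the HiKey, after manually attaching the USB storage:
#   make_swapfile /swapfile 512 && swapon /swapfile
```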
<br />
I converted the qemu command line to a libvirt config using the virsh tool<br />
<pre>virsh domxml-from-native qemu-argv cmdln.args</pre>
The converted configuration required manual editing to get a working system but now I have a libvirt based VM guest I can control along with all my other VM using the virt-manager tool.<br />
<br />
This system is now stable and has been in production use for a month at time of writing. The one guest VM is a single core 512MB aarch64 system which takes over 1100 seconds (19 minutes) to do what a Banana Pi 2 dual core 1GB memory 32bit native ARM system manages in 300 seconds.<br />
<br />
It seems the single core limited memory system with USB SATA attached storage is very, very slow.<br />
<br />
I briefly attempted to run the CI system job natively within the host system but within minutes it crashed hard and required a power cycle to retrieve; it had also broken the UEFI boot. I must thank Leif for walking me through recovering the system, otherwise I would have needed to start over.<br />
<h2>
Conclusions</h2>
I must stress these conclusions and observations are my own and do not represent anyone else.<br />
<br />
My main conclusions are:<br />
<br />
<ul>
<li>My experience is of a poorly conceived, designed and implemented product rushed to market before it was ready.</li>
<li>It was the first announced 64bit ARM single board computer to market but that lead was squandered with issues around availability, reliability and software.</li>
<li>Value for money appears poor. The product is £70 plus an additional £50 for USB hubs, power supplies, USB Ethernet and USB SATA. Other comparable SBCs are around the £30 mark and require fewer extras.</li>
<li>The limited I/O within the core product results in a heavy reliance on USB.</li>
<li>The USB system is poorly implemented resulting in large additional administrative burdens.</li>
<li>The limited memory of 1 gigabyte reduces the scope for making use of the benefits of a 64bit processor.</li>
<li>The recent change to a UEFI bootloader is very welcome and something I would like to see across all aarch64 platforms. This is the time for a single unified boot method; having to deal with multiple bad copies of bootloaders in the 32bit world was painful, and perhaps that mistake can be avoided this time.</li>
<li>The kernel provision is abysmal and nothing like the quality I would expect from Linaro. The non-upstream kernel combined with missing features after almost a year of development is inexplicable.</li>
<li>The Pine64 with 2G of memory and the Raspberry Pi 3 with its de-facto standard form factor are both preferable despite their limitations in other areas.</li>
</ul>
<div>
This project has taken almost a year to get to its final state and has been one of the least enjoyable within that time. The only reason I have a running system at the end of it is sheer bloody-mindedness: after spending hundreds of hours of my free time I was not prepared to see it all go to waste.</div>
<div>
<br /></div>
<div>
To be fair, the road I travelled is now much smoother and if the application is suited to having a mobile phone without a screen then the Hikey probably works as a solution. For me, however, the Hikey product with the current hardware and software limitations is not something I would recommend in preference to other options.</div>
Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com1tag:blogger.com,1999:blog-3711269760993993197.post-2002046596652143042016-02-21T10:29:00.000+00:002016-02-21T10:29:15.019+00:00Stack 'em, pack 'em and rack 'em.As you may be aware I have <a href="http://vincentsanders.blogspot.co.uk/2015/10/raspberries-are-not-only-fruit.html">a bit of a problem with Single Board Computers</a> in that I have a lot of them, and keeping them organised has turned into something of a challenge.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEildSPiP2X_zKdbzR0tO6r_RsntnaHjgYU3RpEurYASKsvBhLJ08Th9SfG39hJvJzoQHDjlU0LWYnm5tweWBXYFCltesB9xqNiGNCREXmWVH6M6oNFyHk-bzqOdr5povoPDE1c_T7SVAVBU/s1600/sbc-psu-inuse.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="cluttered shelf of SBC" border="0" height="97" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEildSPiP2X_zKdbzR0tO6r_RsntnaHjgYU3RpEurYASKsvBhLJ08Th9SfG39hJvJzoQHDjlU0LWYnm5tweWBXYFCltesB9xqNiGNCREXmWVH6M6oNFyHk-bzqOdr5povoPDE1c_T7SVAVBU/s200/sbc-psu-inuse.jpg" title="cluttered shelf of SBC" width="200" /></a></div>
I designed clip cases for many of these systems giving me a higher storage density on my rack shelves and <a href="http://vincentsanders.blogspot.co.uk/2016/01/ampere-was-newton-of-electricity.html">built a power supply</a> to reduce the cabling complexity. These helped but I still ended up with a cluttered shelf full of SBC.<br />
<br />
I decided I would make a rack enclosure to hold the SBCs. I was restricted to materials I could easily CNC machine, which meant acrylic plastic or wood.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0gimsR20gy5ikBEOTgruQ0HuxPB_qFR_IW4yglPL3qSFPVMQ5hvmArTNdZ2w0cASZhaYeo441S1s8jeAc33X4McpZjh4wc7nbg2u4PgDqF6tJALQKvKojcMPPw0meV1u7JoqnvYWmsVCk/s1600/IMG_5721.JPG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="laser cutting the design, viewed through heavily tinted filter" border="0" height="158" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0gimsR20gy5ikBEOTgruQ0HuxPB_qFR_IW4yglPL3qSFPVMQ5hvmArTNdZ2w0cASZhaYeo441S1s8jeAc33X4McpZjh4wc7nbg2u4PgDqF6tJALQKvKojcMPPw0meV1u7JoqnvYWmsVCk/s200/IMG_5721.JPG" title="laser cutting the design, viewed through heavily tinted filter" width="200" /></a>Initially I started with the idea of housing the individual boards in a <a href="https://en.wikipedia.org/wiki/Toast_rack">toast rack</a> arrangement. This would mean the enclosure would have to be at least <a href="https://en.wikipedia.org/wiki/Rack_unit">2U</a> high to fit the boards, and all the existing cases would have to be discarded. This approach was dropped when the drawbacks of having no flexibility and only being able to fit the units selected at design time (connector cutouts and mounting hole placement) became apparent.<br />
<br />
Instead I changed course to try to incorporate the existing cases, which already solved the differing connector and mounting placement problem and gave me a uniform size to work with. Once I had settled on this approach the design came together pretty quickly. I used a <a href="https://en.wikipedia.org/wiki/Box_girder">tube girder</a> construction 1U in height to get as much strength as possible from the 3mm acrylic plastic I would use.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCbcSAX5LBeOeaIyhyphenhyphenKBxfFeJUQm9V9J_V7asK9YnpU5o4IMc2ox0tB9s7DkBet4USV1Vw5JgWyca_HkkVQ7a_1C69lNxshROpg6Js4eB6KYBmB-nqdZTDTJWwQatWbbJA8GGcap-Pcb0x/s1600/IMG_5724.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="laser cut pieces arranged for assembly still with protective film on" border="0" height="126" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCbcSAX5LBeOeaIyhyphenhyphenKBxfFeJUQm9V9J_V7asK9YnpU5o4IMc2ox0tB9s7DkBet4USV1Vw5JgWyca_HkkVQ7a_1C69lNxshROpg6Js4eB6KYBmB-nqdZTDTJWwQatWbbJA8GGcap-Pcb0x/s200/IMG_5724.JPG" title="laser cut pieces arranged for assembly still with protective film on" width="200" /></a></div>
The design was simply laser cut from sheet stock and fastened together with M3 nuts and bolts. Once I corrected the initial design errors (I managed to get almost every important dimension wrong on the first attempt) the result was a success.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEix8u4Bw0R1olYnGTVAr0yOnOffImzb6AH6GBMGnnePcbaBnIK3jagua-98NsHGPstZ89va_5ZofHQwoCdM5QhvrsXG967x7ioEdnVd3g4U_x3TeuWYYBLB-lOHHZhMbzZGyZ4TcszyzISA/s1600/rack_slots.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="working prototype resting on initial version" border="0" height="133" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEix8u4Bw0R1olYnGTVAr0yOnOffImzb6AH6GBMGnnePcbaBnIK3jagua-98NsHGPstZ89va_5ZofHQwoCdM5QhvrsXG967x7ioEdnVd3g4U_x3TeuWYYBLB-lOHHZhMbzZGyZ4TcszyzISA/s200/rack_slots.jpg" title="working prototype resting on initial version" width="200" /></a></div>
The prototype is a variety of colours because <a href="http://makespace.org/">makespace</a> ran out of suitably sized clear acrylic stock, but the colouring has no effect on the result other than aesthetic. The structure gives a great deal of rigidity and there is no sagging or warping; indeed, testing on the prototype reached almost 50kg loading without a failure (one end clamped and the other end loaded at 350mm distance).<br />
<br />
I added some simple rotating latches at the front which keep the modules held in place and allow units to be removed quickly if necessary.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiF2T8AxLXYx2XIkY_5qJv9ZcSfzcQ75Yp_bs-_YQjtBqRR01mfKp4kz0_Z7BqlZB_ZoZeGX7rvSKsAdqSPAKhgpaSHHIIgaf8316NjkpouxxsaXkwVjQnS12C04XTA_H8E3qs4T-hVc7V8/s1600/IMG_5735.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="rack slots installed and in use" border="0" height="50" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiF2T8AxLXYx2XIkY_5qJv9ZcSfzcQ75Yp_bs-_YQjtBqRR01mfKp4kz0_Z7BqlZB_ZoZeGX7rvSKsAdqSPAKhgpaSHHIIgaf8316NjkpouxxsaXkwVjQnS12C04XTA_H8E3qs4T-hVc7V8/s200/IMG_5735.JPG" title="rack slots installed and in use" width="200" /></a></div>
Overall this project was successful and I can now pack five SBCs per U neatly. It does limit me to systems cased in my "slimline" designs (68x30x97mm), which currently means the <a href="https://github.com/kyllikki/designs/tree/master/RPi2B_Bplus_Slim">Raspberry Pi B+</a> style and the <a href="https://github.com/kyllikki/designs/tree/master/OPi_PC_Slim">Orange Pi PC</a>.<br />
<br />
One small drawback is access to I/O and power connectors. These need to be right angled and must be unplugged before unit removal, which can be a little fiddly. Perhaps a toast rack design of cases would have given easier connector access, but I am happy with this trade-off of space for density.<br />
<br />
As usual the design files are <a href="https://github.com/kyllikki/designs/tree/master/rack_slots">freely available</a>; perhaps they could be useful as a basis for other laser cut rack enclosure designs.Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com0tag:blogger.com,1999:blog-3711269760993993197.post-21874236215588717312016-01-26T00:20:00.001+00:002016-01-26T00:38:54.251+00:00Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.It seems Scott Adams's insights sometimes reach beyond his disturbingly accurate satire. I have written before about my iterative approach to designing the things I make, such as my attempts at furniture and more recently enclosures for various projects.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRbiOZhuFpesbc8hysJyAPF34LQnLhtsSONhibmE8OrW2s0-VH60dONMNzrZkGJ1xeXb8lAiC83drpfYf5UIrCKsErjZrR9iDFk60_Y0l-Ag-mm2P0WI9-HX3l6Iss2svY7z-pwjKQ0Rhv/s1600/IMG_20160124_164533.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Raspberry Pi B case design with the failures" border="0" height="150" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRbiOZhuFpesbc8hysJyAPF34LQnLhtsSONhibmE8OrW2s0-VH60dONMNzrZkGJ1xeXb8lAiC83drpfYf5UIrCKsErjZrR9iDFk60_Y0l-Ag-mm2P0WI9-HX3l6Iss2svY7z-pwjKQ0Rhv/s200/IMG_20160124_164533.jpg" title="Raspberry Pi B case design with the failures" width="200" /></a></div>
In the workshop today I had a selection of freshly laser cut, completed cases for several single board computers out on the desk. I was asked by a new member of the space how I was able to produce these with no failures.<br />
<br />
I was slightly taken aback at the question and had to explain that the designs I was happily running off on the laser cutter are all the result of mistakes, lots of them. The person I was talking to was surprised when I revealed that I was not simply generating fully formed working designs first time.<br />
<br />
We chatted for a while and it became apparent that they had been holding themselves back from actually making something because they were afraid the result would be wrong. I went to my box and retrieved the failures from my most recent <a href="https://github.com/kyllikki/designs/tree/master/RPiB">case design for a Raspberry Pi model B</a> to put alongside the successful end product to try and encourage them.<br />
<br />
I explained that my process was fairly iterative: sure, I attempted to get it right first time by reusing existing working solutions as a basis, but when the cost of iterating is relatively small it is sometimes worthwhile to just accept the failures.<br />
<br />
For example in this latest enclosure:<br />
<br />
<ul>
<li>My first attempt (in the semi opaque plastic) resulted in a correct top and bottom, but the height was a couple of mm short and the audio connector cutout was too small.</li>
<li>The second attempt was in clear acrylic, omitting the top and bottom. I stuffed up the laser cutter setup and the resulting cutouts would not actually fit together properly.</li>
<li>The third attempt went together ok but my connector cutouts were 0.5mm too high, so the board did not sit properly. This case would have been usable, but I like to publish refined designs so I fixed all the small issues.</li>
<li>The fourth version is pretty much correct and I have tried all three different Raspberry Pi model B boards (mine and the space's); they all fit, so I am confident I have a design I can now use any time I want a case for this SBC.</li>
</ul>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyLg7wYPWSZOOdMRfHH9UumDZyWbamiZUxk2sKCirNjsyRMybPK1y4KYku2jxMOxAEAawvZCAIlucyemrZYMpo0n7NqXVdpnr94DfeI_QS1FUNV6UVHgWaMRSQuJoPmnaAOhQnEUEC-4G3/s1600/IMG_5698.JPG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="My collection of failed cases" border="0" height="143" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyLg7wYPWSZOOdMRfHH9UumDZyWbamiZUxk2sKCirNjsyRMybPK1y4KYku2jxMOxAEAawvZCAIlucyemrZYMpo0n7NqXVdpnr94DfeI_QS1FUNV6UVHgWaMRSQuJoPmnaAOhQnEUEC-4G3/s200/IMG_5698.JPG" title="My collection of failed cases" width="200" /></a></div>
<div>
Generally I do not need this many iterations and get it right second time; however, experience caused me to use offcuts and scrap material for the initial versions, expecting to have issues. The point is that I was willing to make the iterations and not see them as failures.</div>
<div>
<br /></div>
<div>
The person I was talking to could not get past the possibility of having a pile of scrap material; it was wasteful in their view and my expectation of failure was unfathomable. They left with a somewhat bad view of me and my approach.</div>
<div>
<br /></div>
<div>
I pondered this turn of events for a time and they did have a point in that I have a collection of thirty or so failures from all my various designs, most of which are unusable. I then realised I have produced over fifty copies of those designs, not just for myself but for other people, and published them for anyone else to replicate, so on balance I think I am doing ok on wastage.</div>
<div>
<br /></div>
<div>
The stronger argument for me personally is that I have made something. I love making things, be that software, electronics or physical designs. It may not always be the best solution but I usually end up with something that works.</div>
<div>
<br /></div>
<div>
That makespace member may not like my approach but in the final reckoning I have made something; their idea is still just an idea. So Scott, I may not be an artist, but I am at least creative and that is halfway there.</div>
Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com0tag:blogger.com,1999:blog-3711269760993993197.post-36336262982600023702016-01-14T00:08:00.001+00:002016-01-14T00:08:39.048+00:00Ampere was the Newton of Electricity.I think <a href="https://en.wikipedia.org/wiki/James_Clerk_Maxwell">Maxwell</a> was probably right; certainly the unit of current to which <a href="https://en.wikipedia.org/wiki/Andr%C3%A9-Marie_Amp%C3%A8re">Ampere</a> gives his name has been a concern of mine recently.<br />
<br />
Regular readers may have noticed my <a href="http://vincentsanders.blogspot.co.uk/2015/10/raspberries-are-not-only-fruit.html">unhealthy obsession with single board computers</a>. I have recently rehomed all the systems into <a href="http://vincentsanders.blogspot.co.uk/2015/12/i-said-it-was-wired-like-christmas-tree.html">my rack</a>, which threw up a small issue of powering them all. I had been using an ad-hoc selection of USB wall warts and adapters, but this ended up needing nine mains sockets and, short of purchasing a very expensive PDU for the rack, would have needed a lot of space.<br />
<br />
Additionally, having nine separate convertors from mains AC to low voltage DC was consuming over 60W for just 20W of load! The majority of these supplies were simply delivering 5V, either via micro USB or a DC barrel jack.<br />
<br />
Initially I considered using a ten port powered USB hub, but this seemed expensive as I was not going to use the data connections; it also had a limit of 5W per port and some of my systems could potentially use more power than that, so I decided to build my own supply.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZWAhukdNRMfE6PTx9IPakWE1cEaRxWbCiZRbfH1P5gekEhiwcOmr88AbvwVpFqpw5Tz_4qBa9S9H2fphneXR5w8hsi_gUQyk6ex2_H-7uSVYMG0CsfYsXbp6Ky5aelLaTzsHSbz3VC0vX/s1600/psu-module.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="PSU module from ebay" border="0" height="135" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZWAhukdNRMfE6PTx9IPakWE1cEaRxWbCiZRbfH1P5gekEhiwcOmr88AbvwVpFqpw5Tz_4qBa9S9H2fphneXR5w8hsi_gUQyk6ex2_H-7uSVYMG0CsfYsXbp6Ky5aelLaTzsHSbz3VC0vX/s200/psu-module.jpg" title="PSU module from ebay" width="200" /></a></div>
A quick look on ebay revealed that a 150W (30A at 5V) switching supply could be had from a UK vendor for £9.99, which seemed about right. An enclosure, a fused and switched IEC inlet, an ammeter/voltmeter with shunt and suitable cables were acquired for another £15.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeTSK6pPMvRSUhsKG8ja4KfRWyc1TCuKn8ThUChCAYe16elRlARQIk4tIHrrkscS6jPE2TT9b_NlfiteIY7T7vLvsfJeZsB3oKoBNrB351QSkaRw1UL9Rf9swXPKa_MjhlBR6oextXIHpF/s1600/psu-top-open.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Top view of the supply all wired up" border="0" height="95" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeTSK6pPMvRSUhsKG8ja4KfRWyc1TCuKn8ThUChCAYe16elRlARQIk4tIHrrkscS6jPE2TT9b_NlfiteIY7T7vLvsfJeZsB3oKoBNrB351QSkaRw1UL9Rf9swXPKa_MjhlBR6oextXIHpF/s200/psu-top-open.jpg" title="Top view of the supply all wired up" width="200" /></a></div>
A little careful drilling and cutting of the enclosure made openings for the inlets, cables and display. These were then wired together with crimped and insulated spade and ring connectors. I wanted this build to be safe and reliable so care was taken to get the neatest layout I could manage with good separation between the low and high voltage cabling.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBlZv_GfyVxUtK6uWF1DwcP2FQ4EpVPxZy0-1JvX-5x_4f5HuWorDO7C8-Df7yOrVXp1H9hzeK_ehNjeachFgBNkS0hK8NOOiLybP0M_cgzLriVnEbBv8eEGqsjc3teQec7mnLDD0zG-Wq/s1600/psu-front-open.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="Completed supply with all twelve outputs wired up" border="0" height="108" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBlZv_GfyVxUtK6uWF1DwcP2FQ4EpVPxZy0-1JvX-5x_4f5HuWorDO7C8-Df7yOrVXp1H9hzeK_ehNjeachFgBNkS0hK8NOOiLybP0M_cgzLriVnEbBv8eEGqsjc3teQec7mnLDD0zG-Wq/s200/psu-front-open.jpg" title="Completed supply with all twelve outputs wired up" width="200" /></a></div>
The result is a neat supply with twelve outputs which I can easily extend to eighteen if needed. I was pleasantly surprised to discover that even with twelve SBC connected generating a 20W load, the power drawn by the supply was 25W, or about 80% efficiency instead of the 33% previously achieved.<br />
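The efficiency figures quoted are easy to check with a little integer arithmetic on the numbers above:

```shell
# efficiency (%) = 100 * load delivered / power drawn from the mains
echo "wall warts:       $((100 * 20 / 60))%"   # 20W of load from over 60W drawn, ~33%
echo "switching supply: $((100 * 20 / 25))%"   # 20W of load from 25W drawn, 80%
```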
<br />
The inbuilt meter allows me to easily see the load on the supply, which so far has not risen above 5A even at peak draw, despite the Cubietruck and BananaPi having spinning rust hard drives attached, so there is plenty of room for my SBC addiction to grow (I have already pledged for a <a href="https://www.kickstarter.com/projects/pine64/pine-a64-first-15-64-bit-single-board-super-comput">Pine64</a>).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_WGRY16ET_g8DFbHzxYo1Vfc5_QmRaeS5U6FwfWTE19QRYxW_DwHO1zrIGSwxaT5ImXFmR-_IM87SzreCS9ERhqjjcXBSnTHUNs_IM441bkKOzqlgRSopzD9sl-bW8lV1DcBgOHIA4nB5/s1600/sbc-psu-inuse.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Supply installed in the rack with some of the SBC connected" border="0" height="97" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_WGRY16ET_g8DFbHzxYo1Vfc5_QmRaeS5U6FwfWTE19QRYxW_DwHO1zrIGSwxaT5ImXFmR-_IM87SzreCS9ERhqjjcXBSnTHUNs_IM441bkKOzqlgRSopzD9sl-bW8lV1DcBgOHIA4nB5/s200/sbc-psu-inuse.jpg" title="Supply installed in the rack with some of the SBC connected" width="200" /></a></div>
Overall I am pleased with how this turned out and, while there are no detailed design files for this project, it should be easy to follow if you want to repeat it. One note of caution though: this project involves mains wiring and, while I am confident in my own capabilities dealing with potentially lethal voltages, I cannot be responsible for anyone else, so caveat emptor!Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com4tag:blogger.com,1999:blog-3711269760993993197.post-89847559968931749612015-12-27T16:31:00.001+00:002015-12-27T16:31:13.881+00:00The only pleasure I get from moving house is stumbling across books I had forgotten I ownedI have to agree with <a href="https://en.wikipedia.org/wiki/John_Burnside">John Burnside</a> on that statement; after having recently moved house again, rediscovering our book collection has been a salve for an otherwise exhausting undertaking. I returned to Cambridge four years ago, initially on my own, and subsequently the family moved down to be with me.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsVNLc6tRc0lHypfPPIukclid6mr_nkDzKg9s0gZ8-cd-Gg9WNZZikx9cIC7UN2QgO3hXQzsc94kLe1unJVlWAxyIgSiHl4EJDETM3XmHW-e6FqmKT7Yr3h_8Qeh7rVvhOjp6bOv2PSoWp/s1600/IMG_5678.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="150" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsVNLc6tRc0lHypfPPIukclid6mr_nkDzKg9s0gZ8-cd-Gg9WNZZikx9cIC7UN2QgO3hXQzsc94kLe1unJVlWAxyIgSiHl4EJDETM3XmHW-e6FqmKT7Yr3h_8Qeh7rVvhOjp6bOv2PSoWp/s200/IMG_5678.JPG" width="200" /></a>We rented a house but, with two growing teenagers, the accommodation was becoming a little crowded. Melodie and I decided the relocation was permanent and started looking for our own property, eventually finding something to our liking in <a href="https://en.wikipedia.org/wiki/Cottenham">Cottenham village</a>.<br />
<br />
Melodie took the opportunity to have the house cleaned and decorated while it was empty, thanks to our time overlapping with the rental property. This meant we had to be a little careful while moving in as there was still wet paint in places.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrAV76LT-96jDfWWGNJX9o6nUkL69CrQ7A3patbtdn1e6nM4eQnsrTqEowBT4G6T2EGq6OOSbRlAu5BsktbSZUqNwHDgOmXCfRw2uv_rZRGr69yppEAdfFJBe5WRC2UZZHmc3OB8AuQQ4_/s1600/IMG_5682.JPG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="Some of our books" border="0" height="159" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrAV76LT-96jDfWWGNJX9o6nUkL69CrQ7A3patbtdn1e6nM4eQnsrTqEowBT4G6T2EGq6OOSbRlAu5BsktbSZUqNwHDgOmXCfRw2uv_rZRGr69yppEAdfFJBe5WRC2UZZHmc3OB8AuQQ4_/s200/IMG_5682.JPG" title="Some of our books" width="200" /></a></div>
Moving weekend was made bearable by Steve, Jonathan and Jo lending a hand especially on the trips to Yorkshire to retrieve, amongst other things, the aforementioned book collection. We were also fortunate to have Andy and Jane doing many other important jobs around the place while the rest of us were messing about in vans.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFdORxQX8wYOv3t5dCAoB-VcHz3V0TznGEPaHfweDtgU0TlpRSfSGndc6povAIPtGXNhSrw127xGX-UfDJGssDLYJaMywvImJMoUIrtOcWgbpl5ez9B3jrBnT594QZGiOurNTnway84ZXh/s1600/IMG_5668.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="The desk in the study" border="0" height="141" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFdORxQX8wYOv3t5dCAoB-VcHz3V0TznGEPaHfweDtgU0TlpRSfSGndc6povAIPtGXNhSrw127xGX-UfDJGssDLYJaMywvImJMoUIrtOcWgbpl5ez9B3jrBnT594QZGiOurNTnway84ZXh/s200/IMG_5668.JPG" title="The desk in the study" width="200" /></a></div>
The seemingly obligatory trip to IKEA to acquire furniture was made much more fun by trying to park a Luton van, which was only possible because Steve and Jonathan helped me. Though it turns out IKEA ship mattresses rolled up so tightly they can be moved in an estate car, so taking the van was unnecessary.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh88fiuFstTLjd6JBSduYDNw0_nRN-NgERZ-VMFA8sCXNKhFzZVshE8plh1pirnQq-xAD6P3bn0rkE41KiU7URNy3IermJeknEsLbfyhAL_OfI1RvTJWpOYObpX0Gi1Jr_tWO7kF1plKsq6/s1600/IMG_5679.JPG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="Alex under his loft bed" border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh88fiuFstTLjd6JBSduYDNw0_nRN-NgERZ-VMFA8sCXNKhFzZVshE8plh1pirnQq-xAD6P3bn0rkE41KiU7URNy3IermJeknEsLbfyhAL_OfI1RvTJWpOYObpX0Gi1Jr_tWO7kF1plKsq6/s200/IMG_5679.JPG" title="Alex under his loft bed" width="79" /></a></div>
Having moved in, it seems like every weekend is filled with a never ending "todo" list of jobs, from clearing gutters to building a desk in the study. Eight weeks on, the list seems to be slowly shrinking, meaning I can even do some lower priority things like the <a href="http://vincentsanders.blogspot.co.uk/2015/12/i-said-it-was-wired-like-christmas-tree.html">server rack</a>, which was actually a fun project.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqGM5yT4CsbPayoRtSoPzAcKu-8l5IQ6w7GyL8IQU-K-9T9p-EwBrboMq1j4Yg9hWTRvS8d17tQHf2wNxurxqU2m0MvtqBn8Eus9DjOnhEEmFVfClIq8Epwq49mo2AbOS8N3HTbsV3DojT/s1600/IMG_5680.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Joshua in his completed room" border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqGM5yT4CsbPayoRtSoPzAcKu-8l5IQ6w7GyL8IQU-K-9T9p-EwBrboMq1j4Yg9hWTRvS8d17tQHf2wNxurxqU2m0MvtqBn8Eus9DjOnhEEmFVfClIq8Epwq49mo2AbOS8N3HTbsV3DojT/s200/IMG_5680.JPG" title="Joshua in his completed room" width="103" /></a>The holidays this year afforded me some time to finish the boys' bedrooms. They both got <a href="https://en.wikipedia.org/wiki/Bunk_bed">loft beds</a> with a substantial area underneath. This allows them both to have double beds along with a desk and plenty of storage. Completing the rooms required the construction of some flat pack furniture which, rather than simply doing myself, I supervised the boys building themselves.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAnLSr9gYeIUJ4vLzZIPjYLGmR1a2KIc7y-xY2Hh7rlpGxfKmWkGgIi_ZbjlOIwexcYs6eykuldCLI9isKrePa5AmRaBbegdp-w2l0w2t3L-f1hDKEBK9WcSwYTSl8qwtBlM2YxA8lpJ62/s1600/IMG_5672.JPG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="Alexander building flat pack furniture" border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAnLSr9gYeIUJ4vLzZIPjYLGmR1a2KIc7y-xY2Hh7rlpGxfKmWkGgIi_ZbjlOIwexcYs6eykuldCLI9isKrePa5AmRaBbegdp-w2l0w2t3L-f1hDKEBK9WcSwYTSl8qwtBlM2YxA8lpJ62/s200/IMG_5672.JPG" title="Alexander building flat pack furniture" width="150" /></a></div>
Teaching them by letting them get on with it was surprisingly effective and both of them got the hang of the construction method pretty quickly. There were only a couple of errors, from which they learned immediately and did not repeat (drawer bottoms have a finished side, and front becomes back when you are constructing upside down).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqGM5yT4CsbPayoRtSoPzAcKu-8l5IQ6w7GyL8IQU-K-9T9p-EwBrboMq1j4Yg9hWTRvS8d17tQHf2wNxurxqU2m0MvtqBn8Eus9DjOnhEEmFVfClIq8Epwq49mo2AbOS8N3HTbsV3DojT/s1600/IMG_5680.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghmQCYq8onYn8tTevsD3wEqgdXCKZSfWY_pQdJdrfNrajtfggBdeoNQkFMzghyphenhyphenyCz9tTvxmKL6zeWoPg6Cb8DsOX-EkBAPmAkMY_4Il4_AsiGz0HH1uFoF3Y3ZgwyotVOoyiJBnZonL64x/s1600/IMG_5673.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Joshua assembling flat pack furniture" border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghmQCYq8onYn8tTevsD3wEqgdXCKZSfWY_pQdJdrfNrajtfggBdeoNQkFMzghyphenhyphenyCz9tTvxmKL6zeWoPg6Cb8DsOX-EkBAPmAkMY_4Il4_AsiGz0HH1uFoF3Y3ZgwyotVOoyiJBnZonL64x/s200/IMG_5673.JPG" title="Joshua assembling flat pack furniture" width="150" /></a></div>
The house is starting to feel like home and soon all the problems will fade from memory while the good will remain. Certainly our first holiday season has been comfortable here and I look forward to many more re-reading our books.Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com13tag:blogger.com,1999:blog-3711269760993993197.post-32786420959468967092015-12-08T11:33:00.000+00:002015-12-08T11:33:02.955+00:00I said it was wired like a Christmas tree<div>
I have recently acquired a 27U high <a href="https://en.wikipedia.org/wiki/19-inch_rack">19 inch rack</a> in which I hope to consolidate all the computing systems in my home that do not interact well with humans.</div>
<div>
<br /></div>
<div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXBP2s0qCp7F3FiNyEjl1XUFI9zflo__R3yuD9i4p_Z2m02CaSdzD6b6QkZBxXwcSF55nPI7897zNHFMEgcr8AYYRNhmVWFcemsuFQskgBHZC1YFsik1Ya2rfFaeD919VjXmRRo2yjSKHE/s1600/new-rack.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXBP2s0qCp7F3FiNyEjl1XUFI9zflo__R3yuD9i4p_Z2m02CaSdzD6b6QkZBxXwcSF55nPI7897zNHFMEgcr8AYYRNhmVWFcemsuFQskgBHZC1YFsik1Ya2rfFaeD919VjXmRRo2yjSKHE/s320/new-rack.jpg" width="216" /></a></div>
My main issue is that modern systems are just plain noisy, often with multiple small fans whining away. I have worked to reduce this noise by using quieter components as replacements but in the end it is simply better to be able to put these systems in a box out of the way.<br />
<br /></div>
<div>
The rack was generously given to me by Andy Simpkins and, aside from being a little dirty from having been stored for some time, was in excellent condition. While the proverbs "<a href="https://en.wiktionary.org/wiki/don%27t_look_a_gift_horse_in_the_mouth">never look a gift horse in the mouth</a>" and "<a href="https://en.wiktionary.org/wiki/beggars_can%27t_be_choosers">beggars cannot be choosers</a>" were very firmly at the front of my mind, there were a few minor obstacles to overcome to make it fit its new role on a very small budget.</div>
<div>
<br /></div>
<div>
The new home for the rack was to be a space under the stairs where, after careful measurement, I determined it would just fit. After an hour or two attempting to manoeuvre a very heavy chunk of steel into place, I determined this was simply not possible while it was assembled, so I ended up disassembling and rebuilding the whole rack in a confined space.</div>
<div>
<br /></div>
<div>
The rack is an 800mm wide IMRAK 1400 rather than the more common 600mm width, which means it employs "cable reducing channels" to allow the mounting of standard width rack units. Most racks these days come with four posts in the corners so that longer kit can be supported at both front and back. This particular rack was not fitted with the rear posts, and a brief call to the supplier indicated that any spares from them would be <a href="https://en.wiktionary.org/wiki/eyewatering#English">eyewateringly</a> expensive (almost twice the cost of purchasing a new rack from a different supplier), so I had to get creative.</div>
<div>
<br /></div>
<div>
Shelves that did not require the rear rails were relatively straightforward: I bought two 500mm deep cantilever-type shelves from <a href="http://www.rackcabinets.co.uk/accessories/shelves/19-modem-shelf.html">Orion</a> (I have no affiliation with them beyond being a satisfied customer).<br />
<br />
I took a trip to the local hardware store and purchased some angle brackets and <a href="http://www.diy.com/departments/varnished-steel-square-tube-h16mm-w16mm-l2m/254116_BQ.prd">16mm steel square tube</a>. From these I made support rails, which means the racked kit is supported at the rear rather than relying solely on its rack ears.<br />
<br />
The next problem was the huge hole in the bottom of the rack where I was hoping to put the UPS and power switching. This hole is intended for use with raised flooring where cables enter from below; when not required it is filled in with a "bottom gland plate". Once again the correct spares for the unit were not within my budget.<br />
<br />
Around a year ago I built several systems for open source projects from <a href="https://www.youtube.com/watch?v=tSDPOvtPi5s">parts generously donated by Mythic Beasts</a> (yes I did recycle servers used to build a fort). I still had some leftover casework from one of those servers, so after ten minutes with an angle grinder and a drill I had made myself a suitable plate.<br />
<br />
The final problem I faced is that it is pretty dark under the stairs and while putting kit in the rack I could not see what I was doing. After some brief Googling I decided that all real rack lighting solutions were pretty expensive and not terribly effective.<br />
<br />
At this point I was interrupted by my youngest son trying to assemble the Christmas tree and the traditional "none of the lights work", so we went off to the local supermarket to buy some bulbs. Instead we bought a 240 LED string for £10 (US$15) in the vague hope that next year they will not be broken.<br />
<br />
I immediately had <a href="https://en.wikipedia.org/wiki/Epiphany_(feeling)">a light bulb moment</a> and thought how a large number of efficient LED bulbs at a low price would be ideal for lighting a rack. So my rack is indeed both wired like and as a Christmas tree!</div>
<br />
Now I just have to finish putting all the systems in there and I will be able to call the project a success.Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com1tag:blogger.com,1999:blog-3711269760993993197.post-86271586870352113332015-12-01T09:39:00.001+00:002015-12-01T09:39:49.448+00:00HTTP to screenI recently presented a talk at the <a href="https://wiki.debian.org/DebianEvents/gb/2015/MiniDebConfCambridge">Debian miniconf in Cambridge</a>. This was a new talk explaining what goes on in a web browser to get a web page on screen.<br />
<br />
The presentation was <a href="http://meetings-archive.debian.net/pub/debian-meetings/2015/mini-debconf-cambridge/webm/http_screen.webm">filmed</a> and <a href="http://www.kyllikki.org/presentations/httptoscreen.pdf">my slides</a> are also available. I think it went over pretty well despite the venue's lighting adding a strobe ambience to part of the proceedings.<br />
<br />
I thought the conference was a great success overall and enjoyed participating. I should like to thank <a href="http://www.cosworth.com/">Cosworth</a> for allowing me time to attend and for providing some sponsorship.Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com0tag:blogger.com,1999:blog-3711269760993993197.post-32813629953728171362015-11-04T13:58:00.001+00:002015-11-04T14:08:37.700+00:00I am not a number I am a free manOnce more the NetSurf developers tried to escape from a mysterious village by writing web browser code.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgw1GN1i2k9Ge-ANa92HT4T4Q9T8MEx2H4kn3aDE_Ks5_fuWebJodKu8TZ0_T1taS1jw3KZTp9a-45_fwUEtbMGIrKeJAFnrOJpVzHTuLzXBsq3EQ_3Bmsa42R7xGUCxCNWcTB8S4VjUiAX/s1600/ns-dev.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Michael Drake, Daniel Silverstone, Dave Higton and Vincent Sanders at NetSurf Developer workshop" border="0" height="273" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgw1GN1i2k9Ge-ANa92HT4T4Q9T8MEx2H4kn3aDE_Ks5_fuWebJodKu8TZ0_T1taS1jw3KZTp9a-45_fwUEtbMGIrKeJAFnrOJpVzHTuLzXBsq3EQ_3Bmsa42R7xGUCxCNWcTB8S4VjUiAX/s320/ns-dev.JPG" title="Michael Drake, Daniel Silverstone, Dave Higton and Vincent Sanders at NetSurf Developer workshop" width="320" /></a></div>
The sixth developer workshop was an opportunity for us to gather together in person to contribute to NetSurf.<br />
<br />
We were hosted by <a href="http://www.codethink.co.uk/">Codethink</a> in their Manchester offices which provided a comfortable and pleasant space to work in.<br />
<br />
Four developers managed to attend in person from around the UK: Michael Drake, Daniel Silverstone, Dave Higton and Vincent Sanders.<br />
<br />
The main focus of the weekend's activities was to work on improving our JavaScript implementation. At the previous workshop we had laid the groundwork for a shift to the <a href="http://duktape.org/">Duktape</a> JavaScript engine and have since put several hundred hours of time into completing this transition.<br />
<br />
During this weekend Daniel built upon this previous work and managed to get DOM events working. This was a major missing piece of the implementation and means NetSurf will be capable of interpreting JavaScript-based web content in a more complete fashion. This work revealed several issues with our DOM library, which were also resolved.<br />
<br />
We were also able to merge several improvements provided by the Duktape upstream maintainer Sami Vaarala which addressed performance problems with regular expressions that were causing reports of "hangs" on slow processors.<br />
<br />
The responsiveness of Sami and the Duktape project has been a pleasant surprise, making our switch to the library look like an increasingly worthwhile effort.<br />
<br />
Overall some good solid progress was made on JavaScript support. Around half of the DOM interfaces in the specifications have now been implemented, leaving around <a href="http://ci.netsurf-browser.org/jenkins/job/docs-netsurf/doxygen/unimplemented.html">fifteen hundred methods and properties remaining</a>. The aim is to have this under the thousand mark before the new year, which should result in a generally useful implementation of the basic interfaces.<br />
<br />
Once the DOM interfaces have been addressed our focus will move onto the dynamic layout engine necessary to allow rendering of the changing content.<br />
<br />
The 3.4 release is proposed to occur sometime early in the new year and depends on getting the JavaScript work to a suitable stable state.<br />
<br />
Dave joined us for the first time; he was principally concerned with dealing with bugs and the bug tracker. It was agreeable to have a new face at the meeting and some enthusiasm for the RISC OS port, which has lacked an active maintainer for some time.<br />
<div>
<br /></div>
<div>
The turnout for this workshop was the same as the <a href="http://vincentsanders.blogspot.co.uk/2015/07/netsurf-developers-and-order-of-phoenix.html">previous one</a> and the issues raised then still hold. We still have a very small active core team who can commit only limited time, which makes progress very slow, and several frontends are lacking significant maintenance.</div>
<div>
<br /></div>
<div>
Overall we managed to pack 16 hours of work into the weekend and addressed several significant problems.</div>
Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com2tag:blogger.com,1999:blog-3711269760993993197.post-69657056241875912272015-10-24T15:36:00.002+01:002015-10-24T17:30:43.133+01:00It takes courage to sit on a jury. How many of us want to decide the fate of another person's life or freedom?I think <a href="http://en.wikipedia.org/wiki/Regina_Brett">Regina Brett</a> has a point although having now experienced being a juror in a British crown court I have a much better understanding of both the process and effectiveness of the jury system.<br />
<br />
The actual process of becoming a juror on a case is something I had not been aware of previously. You simply receive a letter telling you to be at the court on a specific date and that you are required to be available for at least ten days, possibly more. The only <a href="http://en.wikipedia.org/wiki/Juries_in_England_and_Wales#Eligibility_for_jury_service">qualification to receive the letter</a> is to be on the electoral roll and it is an invitation with few options to refuse without serious repercussions.<br />
<br />
When you arrive at the court you are directed to the jury lounge (practical hint: take a book) where you notice there are over forty people. This seems odd until you realise there are three courtrooms and each needs a jury; even then there is an excess of people, which is because of the selection process.<br />
<br />
The process of jury selection is fairly simple: an usher for a court comes and calls fourteen names, which form the jury in waiting. The group is taken up to the court waiting room (this room becomes terribly familiar over the forthcoming weeks) and then twelve names are called.<br />
<br />
As each person is called they enter the jury box, in an order which persists for the entire trial (practical hint: remember your juror number). Before each person is sworn or affirmed there is the possibility they will be found unsuitable and replaced by one of the previously unselected jurors. Any unselected jurors are then sent back to the jury lounge and become available for forming another jury in waiting.<br />
<br />
Anyone unselected at the end of the process has to remain available, returning to the court to form a jury in waiting whenever a trial ends, until their period of duty is exhausted. There were a few of these unfortunate people who were kept in a state of limbo for several days and I am relieved this did not happen to me.<br />
<br />
Being a juror, from a purely practical perspective, felt like working an office job for ten days, with duties consisting of attending a series of meetings with strange rules and a typically understated British approach to mentioning the result of breaking them.<br />
<br />
I participated in two cases, both of which were (almost by definition) unpleasant happenings, these were a case of <a href="http://en.wikipedia.org/wiki/Grievous_bodily_harm">Grievous bodily harm</a> and an <a href="http://www.legislation.gov.uk/ukpga/2003/42/section/7">offence under section 7 of the Sexual Offences Act 2003</a>.<br />
<br />
Both cases were challenging in their own ways: the first because of the way the case was presented and the second because of its subject matter. One of the most important rules is "Do not discuss anything with anyone as it might be perjury", so in keeping with that I will not be discussing any details. Because I cannot be specific this post has become a little impersonal; you will have to forgive me, as I found I had to remove a great deal of content which was not appropriate.<br />
<br />
An important thing to note is that the trials bore no resemblance to TV courtroom drama. The trials proceed in a professional manner with very little theatrics. The prosecution barrister commences by outlining the case against the accused, calling witnesses and reading uncontested material into evidence. The defence then presents their case, again calling witnesses and placing documents into evidence.<br />
<br />
One of the striking things about this process is that if the barristers do not call a witness or present evidence that would seem pertinent, the jury must not draw any inference from the omission, which is especially bizarre when a central witness referred to by almost everyone involved with the case is not called.<br />
<br />
Once the case is presented the jury is sequestered in a room and must come to a unanimous decision on each of the charges. This was, for me, the most challenging part of the whole process. Twelve people with unique views on the information presented have to attempt to discuss the evidence and not simply go with their first impressions based on their preconceptions.<br />
<br />
The jury is allowed to ask for some evidence to be repeated, and if deliberations take some time the jury may be instructed that a majority of 10 to 2 may be accepted. I imagine at some point the jury would run out of time to make a decision and something else would happen, but I did not experience this.<br />
<br />
Overall the experience was enlightening if not enjoyable; I understand the process a lot more, am happy to have discharged my duty, and am equally glad the responsibility will not come around again for at least a few years.Vincent Sandershttp://www.blogger.com/profile/02686407477776093281noreply@blogger.com3