Thursday, 7 March 2013

The way to get started is to quit talking and begin doing

When Walt Disney said that, he almost certainly did not have software developers in mind. However, it is still good advice, especially if you have no experience with a piece of software you want to change.

Others have written extensively about software being more than engineering, comparing its creative aspects to a craft, which is the description I am personally most comfortable with. As with any craft, though, you have to understand the material you are working with, and an existing codebase is often a huge amount of material.

While developing NetSurf we hear from a lot of people about what we "ought" or "should" do to improve the browser, but few of them actually get off their collective posteriors and contribute. In fact, according to the Ohloh analysis of the revision control system, there are roughly six of us who contribute significantly on a regular basis, and of those only four have interests beyond a specific frontend.

A couple of people who have recently compiled the browser from source have mentioned that it is somewhat challenging to get started. To address this criticism I intend to give a whirlwind introduction to getting started with the NetSurf codebase; perhaps we can even gain some more contributors!

This first post will cover the mechanics of acquiring and building the source; the next will look at working with the codebase and the NetSurf community.

This is my personal blog and the other developers might disagree with my approach, which is why this is on my blog and not on the NetSurf website. That being said, comments are enabled and I am sure they will correct anything I get wrong.

Resources

NetSurf has a selection of resources which are useful to a new developer:

Build environment

The first thing a new developer has to consider is their build environment. NetSurf supports nine frontends on several Operating Systems (OS) but is limited in the build environments that can be used.

The developer will require a Unix-like system but, let's be honest, we have not tried anything other than Linux distributions in some time, plus Mac OS X for the Cocoa frontend because it's a special snowflake.

At this point in this kind of introduction it is traditional to provide the command lines for various packaging systems to install the build environment and external libraries. We do have documentation that does this, but no one reads it, or at least it feels that way. Instead we have chosen to provide a shell fragment that encodes all the bootstrap knowledge in one place; it is kept in the revision control system so it can be updated.

To use it: download it, read it (hey, running random shell code is always a bad idea), source it into your environment and run ns-apt-get-install on a Debian based system or ns-yum-install on Fedora. The rest of this post will assume the functionality of this script is available; if you want to do it the hard way please refer to the script for the relevant commands and locations.

For Example:
$ wget http://git.netsurf-browser.org/netsurf.git/plain/Docs/env.sh
$ less env.sh
$ source env.sh
$ ns-apt-get-install

Historically NetSurf built on more platforms natively but the effort to keep these build environments working was extensive and no one was prepared to do the necessary maintenance work. This is strictly a build setup decision and does not impact the supported platforms.

Since the last release NetSurf has moved from SVN to the git version control system. This has greatly improved our development process and allows for the proper branching and merging we previously struggled to implement.

In addition to the core requirements, the external libraries NetSurf depends on will need to be installed. Native frontends, where the compiled output is run on the same system it was built on, are pretty straightforward in that the native package management system can be used to install the libraries for that system.

For cross building to the less common frontends we provide a toolchain repository which will build the entire cross toolchain and library set (we call this the SDK) direct from source. This is what the CI system uses to generate its output so is well tested.

External Libraries

NetSurf depends upon several external development libraries for image handling, network fetching etc. The libraries for the GTK frontend are installed by default if using the development script previously mentioned.

Generally a minimum of libcurl, libjpeg and libpng are necessary along with whatever libraries are required for the toolkit.

Project Source and Internal Libraries

One important feature of NetSurf is that a lot of functionality is split out into libraries. These are internal libraries and although technically separate projects, releases bundle them all together and for development we assume they will all be built together.

The development script provides the ns-clone function which clones all the project sources directly from their various git repositories. Once cloned the ns-make script can be used to build and install all the internal libraries into a local target ready for building the browser.

For Example:

$ source env.sh
$ ns-clone
$ ns-make-libs install

Frontend selection

As I have mentioned, NetSurf supports several windowing environments (toolkits if you like); however, on some operating systems there is only one toolkit, so the two get conflated together.

NetSurf currently has nine frontends to consider:
  • amiga
    This frontend is for Amiga OS 4 on the PowerPC architecture and is pretty mature. It is integrated into the continuous integration (CI) system and has an active maintainer. Our toolchain repository can build a functional cross build environment; the target is ppc-amigaos.
  • atari
    This frontend is for the m68k and m5475 (ColdFire) architectures. It has a maintainer but is still fairly limited, principally because of the target hardware platform. It is integrated into the continuous integration system. Our toolchain repository can build a functional cross build environment for both architectures.
  • beos
    This frontend is for BeOS and its clone, Haiku. It does have a maintainer, although they are rarely active. It is little more than a proof of concept port and there is no support in the CI system because there is currently no way to run the Jenkins slave client or to construct a viable cross build environment. This frontend is unusual in that it is the only one written in C++.
  • cocoa
    NetSurf Mac OS X build boxes for PPC and x86
    This frontend supports Cocoa, the windowing system of Mac OS X, on both PPC (version 10.5) and x86 (10.6 or later). The port is usefully functional and is integrated into the CI system, built natively on Mac mini systems acting as Jenkins slaves. The port is written in Objective-C and currently has no active maintainer.
  • framebuffer
    This frontend is different from the others in that it does not depend on a system toolkit; it allows the browser to run anywhere the project's internal libnsfb library can present a linear framebuffer. It is maintained and integrated into the CI system.
  • gtk
    This frontend uses the GTK+ toolkit library and is probably the frontend most heavily used by the core developers. The port is usefully functional and is integrated into the CI system; there is no official maintainer.
  • monkey
    This frontend is a debugging and test framework. It can be built with no additional library dependencies but has no meaningful user interface. It is maintained and integrated into the CI system.
  • riscos
    This frontend is the oldest from which the browser evolved. The port is usefully functional and is integrated into the CI system. There is an official maintainer for this frontend although they are not active very often. Our toolchain repository can build a functional cross build environment for this target.
  • windows
    This frontend would more accurately be called the win32 frontend as it specifically targets that Microsoft API. The port is functional but suffers from a lack of a maintainer. The port is integrated into the CI system and the toolchain repository can build a functional cross build environment for this target.

Building and running NetSurf

For a developer new to the project I recommend that the gtk version be built natively which is what I describe here.

Once the internal libraries have been installed, building NetSurf itself is as simple as running make.

For Example:
$ source env.sh
$ ns-make -C ${TARGET_WORKSPACE}/${NS_BROWSER} TARGET=gtk

Though generally most developers simply change into the netsurf source directory and run make there. The target (frontend) selection defaults to gtk on Linux systems so that can also be omitted. Once the browser is built it can be run from the source tree for testing.

For Example:
$ source env.sh
$ cd ${TARGET_WORKSPACE}/${NS_BROWSER}
$ ns-make
$ ./nsgtk

The build can be configured by editing a Makefile.config file. An example Makefile.config.example can be copied into place and the configuration settings overridden as required. The default values can be found in Makefile.defaults which should not be edited directly.

Logging is enabled with the command line switch -v, and user options can be specified on the command line. Options given on the command line override those sourced from a user's personal configuration, generally found in ~/.netsurf/Choices, although this location can be configured at compile time.

Wednesday, 6 February 2013

Two years of 3D printing

Almost two years ago my good friend Phil Hands invited me to attend a workshop at Bath University to build a 3D printer. I had previously looked at the RepRap project and considered building the Darwin model; alas, lack of time and funds had prevented me from proceeding.

Jo Prusa and Phil Hands watching a heart print.
The workshop was to build a new, much simpler, design called the Prusa. The workshop was booked and paid for well in advance, which left me looking forward to the event with anticipation. Of course I would not be taking the results of the workshop home, as Phil had paid for it, so I started investigating what I would need for my own machine.

Of course at this point I muttered the age-old phrase "how hard can it be" and started acquiring parts for my own printer. By the time the workshop happened I already had my machine working as a plotter. I learned a lot from the Bath masterclass, and a few days afterwards my own machine was complete.

First print
The results were underwhelming to say the least. There then came months and months of trial and error to fix various issues:
  • The filament feed bolt had to be replaced with a better one with sharper teeth. 
  • The thermistor which reads the extruder temperature needed replacing (it still reads completely the wrong temperature even now).
  • The Y axis was completely inverted and needed re-wiring and the limit switches moving.
  • Endlessly replacing the printer firmware with new versions because every setting change requires a complete recompile and re-flash.
  • The bushings on the Y axis were simply not up to the job and the entire assembly needed replacing with ball bearings; a heated bed also had to be added, otherwise prints were completely warped.
  • The Z axis couplings kept failing until I printed some alternates that worked much better.
Once these issues had been fixed I started getting acceptable levels of output, though the software in the workflow used to produce toolpaths (Skeinforge) was exceptionally difficult to use and prone to producing poor results.

Alas the fundamental design issues of the Prusa remain. The A-frame design provides exceptional rigidity in one plane... the other two? Not so much. This, coupled with an exceptionally challenging calibration to get the frame parallel and square, means the printer is almost never true.

Prototype iMX53 dev board eurocard carrier printed on my printer
In operation the lack of rigidity in the X axis means the whole frame vibrates badly, even with extra struts added to try to improve it. I am not the first to notice these design flaws, and indeed Chris has done something about it by creating a much superior design.

I do however have a working printer and have developed a workflow and understanding of what I can expect to work.

Improvements in the software mean that Slic3r has replaced Skeinforge and gives superior results, and the CAD software is continuously improving.

Currently I mainly use the printer to generate prototypes and simple profiles and then send the resulting designs to shapeways for final production though simpler designs are usable directly from the machine.

Because I am away from home a lot and moving the machine is simply not a workable option, the printer does not get used for "fun" anywhere near as much as I had hoped, and the workflow limitations mean I have not been able to make it available to my friends as a communal device.


Recommending a 3D printer

In a previous entry I wrote about the technology of additive manufacture and the use in printing three dimensional objects.

My Prusa Reprap printer, not recommended for new builds
It is now almost two years since I built my own 3D printer and I keep getting asked by colleagues and friends about the technology and often what printer to buy.

I will answer the purchase question first and then, in another post, describe the experiences which led me to that conclusion. This may seem a bit backwards, but the explanation is long and is not necessary if you are happy to learn from my mistakes.

Of all the options available right now, and there are many, I would choose a Mendel90 kit from Chris Palmer. The complete kit, including everything needed to build the machine, is £499 plus shipping. If I could afford it, this is what I would buy to replace my current machine.

This is a fused deposition modelling (FDM) printer similar to my Prusa RepRap, but better engineered to produce repeatable results without the numerous issues of the other models. In Europe I would also recommend Faberdashery as a materials source, as their product is first rate every time.

Yes, the kit requires some assembly, but the commonly available commercial printers either cost many times more to deliver equivalent results, use SLS or another print strategy requiring very expensive consumables, or come from a company with a dubious track record with the community.

If forced to recommend a commercial unit, the 3DTouch from Bits From Bytes is not awful, but really, do not be afraid of the kit; you will learn more about how it all fits together and save lots of money for your materials.

A 20mm high pink dump truck toy
One thing anyone buying a 3D printer right now should understand is that this technology is nowhere near as polished as its 2D equivalent. With the exception of the SLS systems that Shapeways and the like use (which have price tags to match), the output will have clear "layering", and some objects simply cannot be created using the FDM process.

I guess what I am saying is do not expect a thousand pound machine to produce output that looks like that of a hundred thousand pound printer. To be clear, you will not be printing complex moving machines with an FDM process, but rather simpler things that need assembly.

Having said that, I have had some pretty good results. My favourite has to be the working recorder; I might have said the whistles, except my sons have them and they are way too loud.

You will spend a lot of time designing your things in 3D CAD packages and, fair warning, they all SUCK, and I mean really, really badly. Add to that that all the rest of the tools in the workflow are also iffy, and I do wonder how anyone ever gets anything printed.

My (open source) workflow is:


Which is probably a case of "least bad" tool selection, though I warn you now that OpenSCAD is effectively a bad editor (I wish I could use emacs) for a 3D solid macro language with visualisation attached, and definitely not a graphical tool.





Tuesday, 1 January 2013

Gource

I have used the Gource tool for a few years now to produce visualisations of project history. The results are pretty but not especially informative, and mainly serve to show how well maintained a project's revision control history is.

The results do however provide something pretty to put on projectors and screens at shows when there is nothing better to be displayed.

Recently I noticed that the Gource tool had been updated, so I decided to compile it and give it a try. After the usual building of the dependencies (including all of Boost!) the new version (0.38) gives much better results than the previous edition I had been using (0.27).

I tested it on the NetSurf git repository, generating an overview of the whole ten years the project has been running. This produced a six minute video which I shall be using on the NetSurf stand at our next show.

Overall, if you need a historical visualisation of your project's revision history, Gource is a pretty good tool. I have also used the alternative "code swarm" tool in the past, but it seems to have bitrotted to death so I cannot recommend it.

I don't drink coffee I take tea my dear, I like my toast done on one side...

Choices, options, selections if you rather. These are what set us all apart from our fellow man, perhaps it is only the appearance of free will and individuality but our world is full of choice.

In software it might be argued programming is nothing more than making thousands perhaps millions of choices which result in a list of instructions for a machine incapable of making decisions of its own.

On a more mundane level, sometimes a programmer cannot make a choice suitable for all expected use cases, so a user option is created. These options have become something of a cause for argument within certain sections of the open source software community, especially amongst the groups that influence the graphical user experience.

The discussion should be nuanced and varied (perhaps that is my age, or maybe I am more diplomatic than I thought?) but there seems to be little compromise in this discussion, which (from an outsider's point of view) splits into two viewpoints:

  • On one side of the argument, which I shall label reductionist, the position is that all options should be removed, with the software just doing the correct thing.
  • On the other, which I shall label maximalist, the position is that users should be presented with options to customise everything.

The reductionist group is currently winning the argument in the popular graphical environments and seems to be removing functionality which requires user choice wherever it can be found. This results in the absurd "joke" that the UI will eventually become a single button, which they are trying to remove.

Personally, my view is that an option should be present only when a choice cannot be satisfactorily made by the computer, and even then a default suitable for as many users as possible should be picked.

You may ask why I have raised this topic at all. Well, over the last few days I have been trying to fix the preferences dialog for the GTK port of NetSurf. The NetSurf project follows my view on this subject pretty closely, but being a browser it is very difficult to do the right thing for everyone.

NetSurf has numerous frontends for the core browser. I use the term frontend because in some places the toolkit and OS are conflated (Windows, Cocoa) and in others not (GTK). For each frontend NetSurf is a native application; this is an important distinction: the windows and widgets a user interacts with are produced by that platform's toolkit.

Old NetSurf Preferences Dialog
This is a deliberate choice, unlike other browsers which render their UI themselves as web content. That is a beguiling solution when the authors wish there to be a single browsing "experience" with a common look and feel; NetSurf, however, looks and feels like a native browser on each frontend, which is what the project decided it wanted to achieve.

Given that the GTK frontend (is it a GNOME application? I am not sure of the distinction, to be honest) has had no dedicated maintainer for some time, it has suffered a bad case of bitrot from both GTK library changes and general neglect.

I have slowly been improving this situation: the browser can now be compiled with GTK version 2 (2.12 onwards) or 3, the menus and other UI elements have been translated for the supported languages, and now the turn has come for the options dialog.

Oh, right, it's "Preferences", not options. Fair enough: a common vocabulary throughout all applications does give a degree of homogeneity, but the word choice does seem to indicate that the user's control has been reduced. At this point I gained an education in just how unfriendly GTK and its tools have become towards the casual programmer.

The dialog I wanted to construct was a pretty standard tabbed layout which reflected the current options and allowed the user to change them in an obvious way. Given that I have constructed an equivalent interface in an idiomatic manner for Cocoa and Windows, I thought this would be straightforward. I was very wrong.

The interface construction tool is called Glade, which used to be the name of both the tool and the UI "resource" file format. The tool is still called Glade, but the interface description is now GtkBuilder, which has a different (but similar) XML-based file format. Then we discover that, despite it being an extensible file format, the UI files are specifically versioned against the GTK library. Also, why on earth can these resources not be compiled into my program? OK, make them over-ridable perhaps, but generally it is yet another file to distribute and update in step with the executable.

New NetSurf Preferences dialog open on the Main tab
So, because I want to support back to version 2.12 of GTK, I do not get to use any of the features from 2.16 in my UI unless I load a different UI builder file... oh, and GTK 3? It requires a completely different version of Glade, and its UI files are incompatible with GTK 2. Once this was worked around by having multiple UI files, I moved on to the next issue.

The GTK model uses function callbacks on events; these are known as signals. Perfectly reasonable, but because the UI files are loaded at runtime and not compiled in, there must be a way for GtkBuilder to map the textual signal names to function pointers.

The way the GTK developers have chosen to do this is to search the global function table, which means any signal handler function symbol also has to be global, adding unnecessary overhead for the ELF loader and increasing load times.

This lookup could have been confined to a single object, or even placed in an alternative section, to avoid these issues. That would not have seemed especially challenging to implement, as all callback handlers already have to be decorated with a preprocessor define (G_MODULE_EXPORT).

Another thing that makes developing GTK applications worse is the documentation. This seems to be a continuous complaint about many open source projects: documentation that is taciturn or simply missing. GTK suffers dreadfully from having multiple API versions, all subtly different, resulting in a lot of work on the developer's part simply to find what they want.

A specific example of this is the signals and the circumstances under which they occur. I wanted to update all the widgets with the current configuration state whenever the options window is shown (using gtk_widget_show()), so I figured that would be the "show" signal... right? Nope. I never did find the right one to use and ended up with "realize", which occurs once when the dialog is created. Not what I wanted, but it is at least consistent and works.

NetSurf with its Preferences dialog open on the Content tab
Overall, my impression from developing just one small dialog (60 widgets in total, excluding labels and containers) for a GTK program is that the toolkit and its tooling are missing the facilities to make a developer's life easier when doing the drudge work that makes up a good proportion of graphical program development.

It is not the case that one cannot do things, just that everything has to be done manually instead of having the tools do the work, and because of the documentation that tedium is magnified.

I did eventually reach the stage where the thought of writing the boilerplate to add another check button to enable "do not track" had me thinking "do they really need this, or can I avoid the work?" Perhaps that is why they are all reductionists?



Monday, 17 December 2012

In fact, we started off with two or three different shells and the shell had a life of its own.

With apologies to Ken Thompson, this is a list derived from the wafflings of the residents of a certain UK IRC channel. The conversation happened several years ago, but I was reminded of it again today by a conversation on that same channel.

No names have been kept, to protect the guilty.


/bin/bush - a shell that steals and lies?
/bin/cash - displays adverts before each prompt
/bin/crash - Microsoft shell
/bin/mash - requires libsausage.so
/bin/hush - terse output
/bin/irish - only shell that finds jerkcity funny
/bin/rush - über-optimised for speed, but might not actually work correctly
/bin/flash - proprietary shell that displays adverts and cartoons and hangs periodically
/bin/welsh - whinges about people not using it, but then steals features from other shells in order to actually make sense
/bin/rehash - never actually runs any program, just uses markov chains to construct output from stuff it has seen before.
/bin/wash - will remove lint in your scripts if you let it
/bin/parish - prays for the successful exit of every command
/bin/punish - symlink to /bin/csh
/bin/sheepish - apologises when $? is not zero
/bin/diminsh - decrements all result codes by one
/bin/lavish - and you thought bashisms were bad...
/bin/brainwash - once you've tried it, you'll believe it's the only shell in existence
/bin/hoggish - written in Java
/bin/reversepolish - arguments order different in go must
/bin/ganesh - no subprocess dares exit zero for fear of being removed from the system
/bin/roguish - every day is April 1st as far as it is concerned
/bin/macintosh - it's shiny, has *loads* of things you think you can poke at, yet it only actually responds to a single key on the keyboard and has no useful features for fear of confusing the user
/bin/lush - garbles output from processes like a drunkard
/bin/snobbish - you're not good enough to use it
/bin/thrush - it itches
/bin/vanquish - kills processes mercilessly
/bin/tush - pert
/bin/fetish -> /bin/zsh
/bin/skirmish - multi-user shell
/bin/whiplash - gets invoked if you brake too hard
/bin/newsflash - BREAKING NEWS pid 1234 terminated
/bin/mulish - DJB does Shell
/bin/hsilop - arguments order different in go must
/bin/whitewash - government approved
/bin/trish - for tri-state hardware
/bin/jdsh - it sucks...
/bin/flesh - optimised for viewing p0rn

Wednesday, 12 December 2012

What man's mind can create, man's character can control.

I have a project that required me to programmatically control power to several devices. I have done this before using a Velleman VM8090 board, which is relatively easy to control. However, these boards are relatively expensive.

I turned to eBay and found a similar module at a substantially reduced cost. Upon receipt, however, I discovered that instead of presenting a simple USB serial interface it presented USB HID. The Debian system I was running it on loaded the hiddev driver for me, but the device did not implement any of the standard HID usage pages, leaving me with no way to control it.
I did the obligatory
sudo lsusb -d 12bf:ff03 -vvv

Bus 003 Device 019: ID 12bf:ff03  
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               1.10
  bDeviceClass            0 (Defined at Interface level)
  bDeviceSubClass         0 
  bDeviceProtocol         0 
  bMaxPacketSize0         8
  idVendor           0x12bf 
  idProduct          0xff03 
  bcdDevice            1.00
  iManufacturer           1 Matrix Multimedia Ltd.
  iProduct                2 Flowcode USB HID
  iSerial                 0 
  bNumConfigurations      1
  Configuration Descriptor:
    bLength                 9
    bDescriptorType         2
    wTotalLength           41
    bNumInterfaces          1
    bConfigurationValue     1
    iConfiguration          0 
    bmAttributes         0x80
      (Bus Powered)
    MaxPower               50mA
    Interface Descriptor:
      bLength                 9
      bDescriptorType         4
      bInterfaceNumber        0
      bAlternateSetting       0
      bNumEndpoints           2
      bInterfaceClass         3 Human Interface Device
      bInterfaceSubClass      0 No Subclass
      bInterfaceProtocol      0 None
      iInterface              0 
        HID Device Descriptor:
          bLength                 9
          bDescriptorType        33
          bcdHID               1.10
          bCountryCode            0 Not supported
          bNumDescriptors         1
          bDescriptorType        34 Report
          wDescriptorLength      54
         Report Descriptors: 
           ** UNAVAILABLE **
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x81  EP 1 IN
        bmAttributes            3
          Transfer Type            Interrupt
          Synch Type               None
          Usage Type               Data
        wMaxPacketSize     0x0008  1x 8 bytes
        bInterval               5
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x01  EP 1 OUT
        bmAttributes            3
          Transfer Type            Interrupt
          Synch Type               None
          Usage Type               Data
        wMaxPacketSize     0x0008  1x 8 bytes
        bInterval               5
Device Status:     0x0980
  (Bus Powered)
This simply showed me what I already knew, and it surprised me that lsusb did not dump the HID report descriptor items. Some searching revealed that the device had to be unbound from the driver so lsusb could access the descriptor.

Thus a simple
echo -n 3-1.1.4:1.0 | sudo dd of=/sys/bus/usb/drivers/usbhid/unbind
resulted in lsusb dumping the descriptor items:

Item(Global): Usage Page, data= [ 0xa0 0xff ] 65440
                (null)
Item(Local ): Usage, data= [ 0x01 ] 1
                (null)
Item(Main  ): Collection, data= [ 0x01 ] 1
                Application
Item(Local ): Usage, data= [ 0x02 ] 2
                (null)
Item(Main  ): Collection, data= [ 0x00 ] 0
                Physical
Item(Global): Usage Page, data= [ 0xa1 0xff ] 65441
                (null)
Item(Local ): Usage, data= [ 0x03 ] 3
                (null)
Item(Local ): Usage, data= [ 0x04 ] 4
                (null)
Item(Global): Logical Minimum, data= [ 0x00 ] 0
Item(Global): Logical Maximum, data= [ 0xff 0x00 ] 255
Item(Global): Physical Minimum, data= [ 0x00 ] 0
Item(Global): Physical Maximum, data= [ 0xff ] 255
Item(Global): Report Size, data= [ 0x08 ] 8
Item(Global): Report Count, data= [ 0x08 ] 8
Item(Main  ): Input, data= [ 0x02 ] 2
                Data Variable Absolute No_Wrap Linear
                Preferred_State No_Null_Position Non_Volatile Bitfield
Item(Local ): Usage, data= [ 0x05 ] 5
                (null)
Item(Local ): Usage, data= [ 0x06 ] 6
                (null)
Item(Global): Logical Minimum, data= [ 0x00 ] 0
Item(Global): Logical Maximum, data= [ 0xff 0x00 ] 255
Item(Global): Physical Minimum, data= [ 0x00 ] 0
Item(Global): Physical Maximum, data= [ 0xff ] 255
Item(Global): Report Size, data= [ 0x08 ] 8
Item(Global): Report Count, data= [ 0x08 ] 8
Item(Main  ): Output, data= [ 0x02 ] 2
                Data Variable Absolute No_Wrap Linear
                Preferred_State No_Null_Position Non_Volatile Bitfield
Item(Main  ): End Collection, data=none
Item(Main  ): End Collection, data=none

By consulting the device class definitions document I determined the device was using the "Vendor defined" usage page range (0xff00 to 0xffff), so I would definitely have to write a program to control the device.

Linux provides a really easy interface for dealing with HID devices called hiddev (gosh, such adventurous naming), which I had already had to unbind to get my descriptors decoded, so I am fairly sure it works ;-)

The kernel documentation and header for hiddev provide the absolute basic mechanics of the interface but no example code or guidance. The obligatory web search turned up very little, and even that had to be retrieved from the Internet Archive. So it seems I would be forced to work it out myself.

It seems the hiddev interface is oriented around HID devices generating reports which the program is expected to read. Numerous ioctl() calls are provided so the program can obtain the descriptor information necessary to control and process the received reports.

However, in this case we need to be able to send reports to the device. All the descriptor information revealed was that there are eight values (Report Count = 8) of eight bits each (Report Size = 8), with logical and physical ranges covering the whole octet.

Fortunately the seller provided a website with some control programs and even source code. After some time rummaging through the Visual Basic program I finally found (in FrmMain.vb:2989) that the eight bytes are largely unused: the first is simply a bitmask of the eight relay coil states, set for energised and clear for off, with bit 0 controlling the relay labelled 1 through to bit 7 for relay 8.

To send a report to a HID device the hiddev interface uses the HIDIOCSREPORT ioctl, where the report data is first set using HIDIOCSUSAGE.

The HIDIOCSUSAGE ioctl is passed a hiddev_usage_ref structure which must be initialised with the report descriptor identifier (constructed from the Usage Page and Usage as set by the items in the descriptor), the index of the item (named usage) we wish to set within the report (in this case the first, which is 0) and the value we actually want to set.

After a great deal of debugging the final program is very short indeed, but it does the job. My main problem now is that if I switch too many relays at once (more than one) the whole device resets. The scope shows the supply rails behaving very badly when this happens; it looks like I need to add a load of capacitance to the power rail to stabilise it during the switching events.

Add in the fact that the relay 1 LED doesn't work unless you push on it, and I do wonder about the wisdom of the economy in this case. Though yet again Linux makes the software side easy.