Thursday saw all the Collabora employees at the Cambridge office go out and socialise at the beer festival. They seemed to have selected a wonderful day for it: the sun was shining and it was a warm, blue-sky day.
Alas, I had to attend some customer conference calls and work on some time-sensitive research, so I could not go to the ball, as it were. At about eight my brain had run out of steam, so I decided to call it a day and go and meet up with people at the festival for an hour or two.
The queue when I arrived dissuaded me from that notion. I asked one of the stewards and they indicated it would take at least an hour from where the queue finished.
So I decided to wend my way home along the bank of the Cam. I proceeded slowly along and to my utter surprise bumped into Ben Hutchings and his Solarflare work colleagues having their own soiree. I was immediately invited to sit and converse. Pretty quickly I was inveigled into accepting a glass of wine by John Aspden from his floating bar (AKA houseboat).
From here on my evening was a pleasant one of amusing new people, easy conversation and definite pondering as to whether the host would discover the delights of swimming in the Cam as he became progressively more inebriated!
So although I missed the festival I did manage to have an enjoyable time. A big thanks to the Solarflare guys and especially John, who was the consummate host and provided me with far too much alcohol.
Saturday, 26 May 2012
Thursday, 24 May 2012
Interrupt Service Routines
Something a little low level for this post. I have been asked recently how to "test" for the maximum duration of an Interrupt Service Routine (ISR) in Linux.
To do this I probably ought to explain what the heck an ISR is!
A CPU executes one instruction after another and runs your programs. However, early in the history of the electronic computer it became apparent that sometimes there were events, generally caused by a hardware peripheral, that required some other code to be executed without waiting for the running program to check for the event.
This could have been solved by having a second processor to look after those exceptional events, but that would have been expensive and difficult to synchronize, and the designers took the view that there was a perfectly good processor already sat there just running some user's program. This interruption in the code flow became known as, well, an Interrupt (and the other approach as polling).
The hardware for supporting interrupts started out very simply: the processor would complete execution of the current instruction and, when the Program Counter (PC) was about to be incremented, if an Interrupt ReQuest (IRQ) was pending the PC would be stored somewhere (often a "special" IRQ stack or register) and execution would start at some fixed address.
The interrupting event would be dealt with by some code and execution returned to the original program without it ever knowing the CPU just wandered off to do something else. The code that deals with the interrupt is known as the Interrupt Service Routine (ISR).
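To make that concrete, here is roughly what registering an ISR looks like in a Linux driver. This is a minimal sketch; the IRQ line number and device name are invented for illustration:

```c
#include <linux/module.h>
#include <linux/interrupt.h>

#define MY_IRQ 42 /* hypothetical interrupt line, for illustration only */

/* The ISR: the kernel's IRQ machinery calls this when the line fires. */
static irqreturn_t my_isr(int irq, void *dev_id)
{
	/* ... deal with the interrupting event here ... */
	return IRQ_HANDLED; /* tell the kernel we serviced the interrupt */
}

static int __init my_init(void)
{
	/* Ask the kernel to call my_isr() whenever MY_IRQ is raised. */
	return request_irq(MY_IRQ, my_isr, 0, "my-device", NULL);
}

static void __exit my_exit(void)
{
	free_irq(MY_IRQ, NULL);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");
```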
Now I have glossed over a lot of issues here (sufficient to say there are a huge number of details in which to hide the devil) but the model is good enough for my purpose. A modern CPU has an extraordinarily complex system of IRQ controllers to deal with numerous peripherals requesting that the CPU stop what it's doing and look after something else.
This system of controllers will ultimately cause program execution to be delivered to an ISR for that device. If we were living in the old single-thread-of-execution world we could measure how long execution remains within an ISR, perhaps by using a physical I/O line as a semaphore and an external oscilloscope to monitor the line.
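If you did want to try the oscilloscope approach, the handler itself can drive the line. A sketch, assuming a spare GPIO (the number here is made up) has already been requested and configured as an output; the pulse width seen on the scope is the handler's execution time:

```c
#include <linux/gpio.h>
#include <linux/interrupt.h>

#define SCOPE_GPIO 17 /* hypothetical spare GPIO wired to the scope probe */

static irqreturn_t timed_isr(int irq, void *dev_id)
{
	gpio_set_value(SCOPE_GPIO, 1); /* line high: ISR entry */

	/* ... service the device as normal ... */

	gpio_set_value(SCOPE_GPIO, 0); /* line low: ISR exit */
	return IRQ_HANDLED;
}
```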
You may well ask "why measure this?" Well, historically, while the ISR was running nothing else could interrupt its execution, which meant that even if there was an event that was more important it would not get the CPU until the first ISR was complete. This delay is known as IRQ latency, which was undesirable if you were doing something that required an IRQ to be serviced in a timely manner (like playing audio).
This is no longer how things are done. While the top half still runs with IRQs disabled, many handlers are now threaded interrupt handlers and are preemptible (i.e. can be interrupted themselves). This leads to the first issue with measuring ISR time: the ISR may be executed in multiple chunks if something more important interrupts, so an ISR may appear to have taken many times longer on one run than another simply because the CPU was off servicing multiple other IRQs.
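The threaded split is explicit in the kernel API. A sketch of how a driver might use request_threaded_irq(), with the device specifics elided:

```c
#include <linux/interrupt.h>

/* Hard handler: runs in interrupt context with the line disabled. */
static irqreturn_t my_quick_check(int irq, void *dev_id)
{
	/* ... just quiesce the hardware ... */
	return IRQ_WAKE_THREAD; /* defer the real work to the thread */
}

/* Thread handler: runs in a kernel thread and is preemptible. */
static irqreturn_t my_thread_fn(int irq, void *dev_id)
{
	/* ... the bulk of the servicing happens here, and may sleep ... */
	return IRQ_HANDLED;
}

static int my_setup(int irq, void *dev)
{
	return request_threaded_irq(irq, my_quick_check, my_thread_fn,
				    0, "my-device", dev);
}
```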
Then we have the issue that Linux kernel drivers often do as little as possible within their ISR, often only as much as is required to clear the physical interrupt line. Processing is then continued in a "bottom half" handler. This leads to ISRs which take practically no time to execute while the real processing is still being performed elsewhere in the system.
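The classic shape of that split, sketched here with a workqueue as the bottom half (tasklets are the other traditional option); the register offset and device mapping are invented for illustration:

```c
#include <linux/interrupt.h>
#include <linux/workqueue.h>
#include <linux/io.h>

#define REG_IRQ_ACK 0x04 /* invented register offset */

static void __iomem *dev_regs; /* assumed mapped elsewhere in the driver */

/* Bottom half: the real processing, run later from a kernel worker. */
static void dev_work_fn(struct work_struct *work)
{
	/* ... time-consuming processing happens here ... */
}
static DECLARE_WORK(dev_work, dev_work_fn);

/* Top half (the ISR): only enough to silence the interrupt line. */
static irqreturn_t dev_isr(int irq, void *dev_id)
{
	writel(1, dev_regs + REG_IRQ_ACK); /* clear the physical line */
	schedule_work(&dev_work);          /* defer everything else */
	return IRQ_HANDLED;                /* measured ISR time: almost nil */
}
```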
The next issue is that the world is not uniprocessor any more. How many processors does a machine have these days? Even a small ARM SoC can often have two or even four cores. This makes our timing harder because it is now possible to be servicing multiple interrupts from a single peripheral on separate cores at the same time!
In summary, measuring ISR execution time is not terribly enlightening and almost certainly not what you are interested in. Much more likely, you really want to examine something for which ISR time was a historical proxy, like IRQ latency or the system overhead of locking.
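If IRQ latency is what you are really after, one crude in-kernel approach is to timestamp the moment you provoke an interrupt and again at handler entry. A sketch, assuming a device that can be made to raise its interrupt on demand; the trigger register is invented:

```c
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/ktime.h>
#include <linux/io.h>

#define REG_FORCE_IRQ 0x08 /* invented: writing here raises the IRQ */

static void __iomem *dev_regs; /* assumed mapped elsewhere */
static ktime_t trigger_time;

static irqreturn_t latency_isr(int irq, void *dev_id)
{
	/* latency = time from raising the line to handler entry */
	s64 latency_ns = ktime_to_ns(ktime_sub(ktime_get(), trigger_time));

	pr_info("IRQ latency: %lld ns\n", (long long)latency_ns);
	return IRQ_HANDLED;
}

static void measure_latency(void)
{
	trigger_time = ktime_get();
	writel(1, dev_regs + REG_FORCE_IRQ); /* provoke the interrupt */
}
```

In practice the kernel's tracing infrastructure (ftrace, and in particular its irqsoff tracer) will give you this sort of information with far less effort than rolling your own instrumentation.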
Linux kernel presentation
Recently I was asked to present a short introduction to the Linux kernel for our project managers. I put together a short slide deck for the presentation, which I have decided to share.
I feel it's important to note that I had a lot more to say about each section and the slides were more an aid for my memory to cover the important points. Of special note would be the diagram showing the "hierarchy" of contributors, which is of course nowhere near as well stratified as portrayed.
Tuesday, 8 May 2012
NetSurf at a show
The Wakefield RISC OS show is an event the NetSurf project has attended for a long time; in fact, since 2005, when the "stand" was a name on an A4 sheet, through 2006, 2007, 2008, 2009, 2010 and 2011, we have always been present.
The event has changed in that time from a large affair with many exhibitors to a small specialist interest event with a handful of stands. I took some pictures this year which give a fair impression of the event.
We were seriously considering not attending this year, as 2011 had seen us barely break even on donations versus the expenses of attending. However, we decided that the project's annual Grey Ox Inn post-event dinner was probably worth making the effort for.
So we all met up in a hotel just off the M1 near Wakefield and set up our table. And although NetSurf as a project now has much more usage on other platforms, we still represent the principal browser for the RISC OS platform!
We had a pleasant time, talked to a lot of users and made our expenses back in donations. Overall an amusing Saturday, though based on the size of the event and the number and age of the attendees, I fear RISC OS may be destined for the history books.
Repaying a debt
Some debts are merely financial and some are easily repaid, but some require repayment in kind. Few debts are more important to me personally than a favour earned by a good friend.
Several years ago, before I started this blog, I replaced the kitchen in my house. Finances were tight at the time and I had to do the entire refit with only limited professional help. Because of this I imposed upon Mark Hymers and Steve Gran to come and assist me. They worked tirelessly for three days over a bank holiday for no immediate reward.
This weekend I had the opportunity to assist Mark with his own kitchen refit and repay my debt.
Although the challenges on this build were different, they were nonetheless present, including walls which were most definitely not square and cabinets affixed 10mm too high so the doors could not close.
We also got to make a hole for a 125mm extractor, which was physically demanding and not a little tiring (Steve, actually wielding the drill, had fabulous aim).
I took some photos to document the process which has resulted in an image which is positively threatening, though the two of them are nice people really!
All in all a pleasant weekend with friends; the whole favour thing was really moot, as I would have done it for a friend anyway.
Mark and Steve with a drill