
InterruptPort - OnInterrupt time parameter


I’m doing some work with a Domino. On other .NET MF platforms I’ve worked with, interrupts are queued, and the time parameter in the OnInterrupt event handler is the actual time of the interrupt, not the time that the event was fired.

I’m referring to the ‘time’ parameter in this little snippet of code…

using System;
using System.Threading;

using Microsoft.SPOT;
using Microsoft.SPOT.Hardware;

using GHIElectronics.NETMF.FEZ;

using RC6_Remote;

namespace DominoRC6Decoder
{
    public class Program
    {
        // Declare the interrupt port and its handler
        public static InterruptPort IR_in = new InterruptPort((Cpu.Pin)FEZ_Pin.Digital.Di7, true, Port.ResistorMode.Disabled, Port.InterruptMode.InterruptEdgeBoth);

        public static void Main()
        {
            // Initialise the IR remote
            RC6_Decoder.RemoteInputPin = IR_in;
            RC6_Decoder.CodeReceived += new CodeReceivedEventHandler(RC6_Decoder_CodeReceived);
            IR_in.OnInterrupt += new NativeEventHandler(IR_in_OnInterrupt);

            // Now stop the main thread
            Thread.Sleep(Timeout.Infinite);
        }

        static void IR_in_OnInterrupt(uint data1, uint data2, DateTime time)
        {
            RC6_Decoder.Record_Pulse(data1, data2, time);
        }

        static void RC6_Decoder_CodeReceived(int mode, ulong data)
        {
            Debug.Print("Received command. Mode = " + mode.ToString() + " ... Command = " + data.ToString());
        }
    }
}



On the Domino it seems that the events are queued but the time parameter in the event handler is the time that the event handler was called. Am I correct? Without the source I can’t prove that in any way, but logic analyser timings seem to indicate that this is the case.

If so, that’s a real pain as it rules out using the devices to decode Manchester encoded pulse streams (unless I write a C code interrupt handler). That should not be needed … for example, the same code works fine on a much slower NetDuino board.


Every single NETMF device handles it the same way … it is the time when the interrupt occurred.


Phillip, if you use the ‘code’ tags around your code it will make it MUCH easier for others to read. After you paste in your code, select it all and then use the ‘Code’ button (looks like a bunch of 1s and 0s).

            Debug.Print("(1) result = " + result.ToString());
            result *= -1;
            Debug.Print("(2) result = " + result.ToString());


That is why I always ignore code that is not tagged and only answer the question!


Thanks. Code tags noted.

In this case it’s definitely not the time the interrupt occurred (or else the interrupt handler in the MF is insanely slow on this device, which I don’t believe). I’ve tagged it with a logic analyser and scope and the timing values are the handler times, not the interrupt times.

Attached are a scope trace and the resultant stored values in an array from this code. Note that the timing array is converted to milliseconds. The first two array values match the analyser cursors. The top trace is the pin driving the interrupt, the bottom is the Di0 pin being toggled when the interrupt event handler starts.

The trace shows 9.284ms between event handler calls, and the locals shows that the program recorded an interval of 9297us … effectively the same.

Also, it only seems to queue 3 or 4 interrupt events, which is far fewer than comparable MF devices (usually they queue 16 or 32 events, to allow timing based events such as these to be analysed on what is, of course, a non-RT OS).

Again, the exact same code (just the pin definitions changed) runs on other devices. I just retested it on a NetDuino (48Mhz Atmel AT91SAM7X).

This is the latest firmware on the Domino … running in debug mode, obviously!

Thanks again.

        public static InterruptPort IR_in = new InterruptPort((Cpu.Pin)FEZ_Pin.Digital.Di7, true, Port.ResistorMode.Disabled, Port.InterruptMode.InterruptEdgeBoth);

            RC6_Decoder.RemoteInputPin = IR_in;

            IR_in.OnInterrupt += new NativeEventHandler(IR_in_OnInterrupt);

        static void IR_in_OnInterrupt(uint data1, uint data2, DateTime time)
            pulser.Write(data2 == 1);
            RC6_Decoder.Record_Pulse(data1, data2, time);

        public static void Record_Pulse(uint data1, uint data2, DateTime time)
            intervals[pos] = time.Ticks / 10;       // record the interrupt time in microseconds
            signalStates[pos] = (data2 == 1);       // record the state of the pin at interrupt


I’m still unclear how you come to the conclusion that the Ticks property is not when the interrupt occurred. The Ticks property is the number of 100ns intervals that have elapsed since January 1, 0001 at midnight (although I seem to recall that this offset value is wrong). I don’t see how your code or traces can prove that Ticks is right or wrong. I’m not saying you’re not right, but I see no way to prove your conclusion.

As a general tip on interrupt handlers: do only what you need to do in one and get out. In your example you’re scaling the ticks value before saving it (‘time.Ticks / 10’). That is a big waste of time inside the handler (division is expensive in terms of time); it might be better to just save the raw ticks value and worry about scaling it outside the event handler.
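A minimal sketch of that approach, assuming a NETMF project (the names rawTicks and pos are illustrative, not from the original code): the handler does a single array store, and the division to microseconds happens later.

        // Illustrative names; not from the thread's original code.
        static long[] rawTicks = new long[64];
        static int pos = 0;

        static void IR_in_OnInterrupt(uint data1, uint data2, DateTime time)
        {
            // Cheapest possible work: store the raw 100 ns tick count and return.
            if (pos < rawTicks.Length)
                rawTicks[pos++] = time.Ticks;
        }

        // Later, outside the handler, convert at leisure:
        // long micros = rawTicks[i] / 10;   // 1 tick = 100 ns, so 10 ticks = 1 µs
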


The code measures the difference between two edges of a signal (the top trace on the analyser). If the time parameter were the interrupt time, then the difference between the first two interrupt event handler runs should have been in the realm of 2.7ms.

It is actually 9.297ms, which is the time between the first two edges of the second trace.

As that trace is a record of the pulses sent on digital pin 0, which is changing state when the interrupt handler is running, the time difference is the difference between the event handler executions, not that between the original interrupt edges.

The same code on another MF device returns around 2.8ms for the same signal edges (and queues interrupts for all those edges, not just the first four).

I’m aware that the handler is not optimal. I’ve been writing embedded code since about 1981, starting on 8085s, so I get it. That’s a whole different discussion, though (it was only demonstration code for another purpose but it happens to work right on the edge of the capabilities of a lot of MF devices so it is useful for that purpose).


You have the glitch filter enabled. This isn’t good. Disable the glitch filter and see if it works.


I set up a little program to output a 50% duty cycle square wave on a PWM channel. I tried it first with a 2ms period and then a 1ms period. Below is the output for the 1ms period; the time is microseconds between edges seen. Interestingly, every time I run it, all but the first reading makes sense.

[quote]Time: 1289
Time: 525
Time: 513
Time: 499
Time: 499
Time: 499
Time: 499
Time: 499
Time: 507
Time: 499[/quote]


Aren’t you calculating the difference between two edges? What’s the first edge based on?


I’m just saving the Ticks property each time the interrupt handler is entered, and after recording 20 interrupts I print out time[1] - time[0], time[3] - time[2], etc.

            // The pin will generate an interrupt on each edge; add an interrupt handler to the pin
            InterruptPort IntButton = new InterruptPort((Cpu.Pin)FEZ_Pin.Interrupt.Di0, false,
                                             Port.ResistorMode.PullUp, Port.InterruptMode.InterruptEdgeBoth);
            IntButton.OnInterrupt += new NativeEventHandler(IntButton_OnInterrupt);
            PWM servoL = new PWM(PWM.Pin.PWM1);

            // 1 ns = 0.000000001 s, so 1 s = 1,000,000,000 ns
            servoL.SetPulse(ms2ns(1), ms2ns(0.5));



            for (int i = 0; i < 20; i += 2)
                Debug.Print("Time: " + ((time[i + 1] - time[i]) / 10).ToString());


        static void IntButton_OnInterrupt(uint port, uint pstate, DateTime itime)
        {
            if (index < 20)
                time[index++] = itime.Ticks;
        }


Before you start, you should set the first measurement equal to DateTime.Now.Ticks.


[quote]Before you start, you should set the first measurement equal to DateTime.Now.Ticks.[/quote]

Why? That would give me the current time, which I don’t want. I’m saving the times when each interrupt occurs, starting with the first interrupt.



Glitch filter it was - thanks. The GHI implementation of the MF has a huge default glitch window time (8 milliseconds). I’ve never seen such a long default time! I changed it to a much more reasonable 100us (for other readers of this thread, the relevant code, which goes in the setup part of the program, is below) and everything was fine.

            Debug.Print("Glitch filter time = " + Cpu.GlitchFilterTime.Ticks.ToString());
            Cpu.GlitchFilterTime = new TimeSpan(1000);   // 1 tick = 100 ns, so 1000 ticks = 100 µs

I don’t usually disable glitch filters - they’re there for a good reason.

Actually I was quite impressed by the performance. New analyser trace attached if anyone is interested.

A side question - is there a define to turn off the annoying garbage collector statistics from being dumped out to the debug port, even in release mode?


Glitch filters are usually used for mechanical switches. I suspect that using a glitch filter on a digital signal would have an effect on the measurements.

The only time I would use one on a digital signal would be when the signal is very noisy.

I believe there is a method in the Debug class to turn off the GC messages.
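If it helps, a minimal sketch of that, assuming the Microsoft.SPOT Debug.EnableGCMessages method (worth verifying against your SDK version):

using Microsoft.SPOT;

public class Program
{
    public static void Main()
    {
        // Stop the runtime from printing GC statistics to the debug port.
        Debug.EnableGCMessages(false);

        // ... rest of the program ...
    }
}
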



In fact, if you put a scope on the system I suspect you would find many of the first pulses are lost. The framework seems to do some housekeeping on the first interrupt. The code I have been discussing in this thread actually loses the whole first packet from the remote control, so the domino is missing at least the first 50ms of incoming pulses.

But it’s not just the GHI devices that do this. Every .NET MF device I’ve seen does it (I have a stack of about a dozen different ones on my desk right now). It should not be a showstopper for any reasonable .NET MF program. MF is [italic]not[/italic] a real time OS, so you have to allow for quirks like these. You do expect the timing to be accurate, though, so your example above suggests that the MF is doing something with the first interrupt before it records the time on that very first pulse, which is presumably delaying the device’s response to the second edge and thus resulting in the extended pulse.

The other possibility is that the PWM’s first pulse really is long. The only way to work out which is which is with an oscilloscope or logic analyser. I can’t recommend such a tool highly enough. If you don’t have either, and you are mainly working with MF devices, then I would suggest that you start with a low cost logic analyser … I have a couple of the Saleae units and I highly recommend them for general work, especially as they can debug SPI and I2C streams for you, and are ridiculously cheap at $150 (that’s just crazy).


Good info, Phillip. Thanks for exploring that for us.


In my case it was only the first interrupt that was goofed up; since I record only the first 20 interrupts, it was easy to see. I suspect it has to do with the framework getting things set up. There is some time between when you first create the interrupt input object and when you add the handler. It could also be that the first PWM pulse is long; I did not put it on the scope to see.
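One pragmatic workaround (an illustrative sketch, not code from this thread) is to capture one extra sample and discard the first interval when reporting:

        // Illustrative: skip the first captured timestamp, which seems to be
        // distorted by one-time framework setup on the first interrupt.
        static long[] time = new long[21];   // one extra slot for the throwaway sample
        static int index = 0;

        static void IntButton_OnInterrupt(uint port, uint pstate, DateTime itime)
        {
            if (index < time.Length)
                time[index++] = itime.Ticks;
        }

        // When reporting, start the pairwise differences at time[1]:
        // for (int i = 1; i + 1 < time.Length; i += 2)
        //     Debug.Print("Time: " + ((time[i + 1] - time[i]) / 10).ToString());
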


The glitch filter is intended to be like a debounce for buttons… Its default value should be 7.6 ms if you look in the Microsoft documentation. It does not make sense to have less for a button, really. So the other devices seem not to work as they should :wink:



That seems right to me. I’m just getting used to the differences between the GHI devices and the others I’ve got on hand. The glitch filter is useful on just about any interrupt source - if you look at it from the engineering angle, it’s a hysteresis facility. In my current work it’s useful as an additional noise rejection tool on an IR remote control receiver (above and beyond that provided by the IR device itself).

However, I should be clear that by comparing I’m not criticising the GHI devices. In fact, so far, I’m really pleased and impressed by them. The interrupt handling - and specifically the timestamping - is the best of all the devices I have, by a long way, and by a much wider margin than simple differences in clock speed would suggest. That points to a quality firmware implementation on GHI’s part. GHI also has the best support for chip hardware of any of the MF manufacturers I’ve used.

I’ve been doing embedded systems for so long now, I’m not sure if I’m a cranky old engineer, or just cranky and old, but I guess I just speak plainly. I learned on 8085s, for Lord’s sake, and before that my first PC was home-built using a Nat Semi SC/MP CPU clocked at 750 [italic]kilo[/italic]hertz! You can see one here:

Actually, I’ve got to say that the Micro Framework is quite a revelation in embedded systems terms. I’m doing this work for some commercial implementations, and the USBizi (and probably some ChipWorx modules) are at the top of my list right now. I’m just getting my head around the systems and delving quite deeply.

Cheers, and thanks for both the help and the patience.