Help on RLP G120 Interrupt Jittering

While testing an RLP installed on my G120 HDR board, I found that every time the interrupt signal fires, it takes a random amount of time to service the ISR installed in the unmanaged code; the latency ranges from 2 us to ~30 us and is not stable at all.

I was expecting some lag between the interrupt signal and the ISR being serviced, but something more stable with less jitter, since the ISR is serviced using unmanaged code.

My code is based on the following example: http://www.ghielectronics.com/community/codeshare/entry/724

Basically the RLP service routine just generates a small pulse on an output: the pin is set high on the first line of the service routine and back to low on the last line.

I was thinking the unmanaged code would execute the ISR right away, but it seems to be doing something else first, and that something is not constant (timewise).

To install the ISR I’m using the RLPext->Interrupt.Install function. I’m wondering if I can install the ISR directly using the NVIC_EnableIRQ(EINT1_IRQn) function (I have a reference to core_cm3.h) and renaming my ISR to void EINT1_IRQHandler(void).
I have been trying to install it this way, but the system halts when I try to enable the interrupt; maybe the RLP library already has those functions defined.

Any ideas of how I can install this interrupt in order to get a more stable response time?

Thanks in advance to all RLP gurus!

@ Mogollon - MF is not a real-time system, and having any deterministic expectations is unrealistic.

I’m not using the MF to handle the interrupt, I’m using the lower level code to handle this specific interrupt.

I understand completely that MF is not an RTOS; that is why I went to native code to install this interrupt and get a fast response, which works fine with the sole exception of that jitter.

If this is simply the way the processor handles the interrupt, then there is nothing to be done, but I have a feeling it has something to do with the RLPext->Interrupt.Install function. That is why I want to know whether there is a way to install it using the native processor instructions, and whether there are side effects. So far the processor halts if I try to install it using the NVIC functions.

My problem is not even that it takes some time to service the interrupt; my problem is that this time is not stable. It jumps over a broad range for a 120 MHz processor, so I think there may be some sort of task scheduling going on to service this low-level interrupt.

Maybe the RLP library is also doing some task scheduling at low level?
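To quantify what I mean by “not stable”: given a set of measured response latencies (e.g. scope captures of the pulse delay), the jitter is the spread between the fastest and slowest response. A minimal sketch, with made-up sample values:

```cpp
#include <algorithm>
#include <cassert>

// Jitter = spread between the fastest and slowest measured latency.
// Sample values below are illustrative, not real measurements.
int jitterMicros(const int* samples, int count)
{
    int lo = samples[0], hi = samples[0];
    for (int i = 1; i < count; i++) {
        lo = std::min(lo, samples[i]);
        hi = std::max(hi, samples[i]);
    }
    return hi - lo;   // e.g. 30 us - 2 us = 28 us of jitter
}
```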

You are using RLP which is an extension to MF. But MF is still running and there are times where it runs with interrupts disabled. I am surprised the jitter is only in the 2 to 30 usec range.

Yes, I got the picture @ andre.m, but at the lower level (native code) this “as soon as possible” should be driven by something else, not just the native processor’s interrupt handling.

I guess the TinyCLR (or the framework) is handling the internal interrupt and then switching tasks until my native code gets served?

That is what I want to make sure of: that there is no other workaround to get a more stable service time for the interrupt. Not fast, just consistent (like installing my own ISR directly on the processor and avoiding the task scheduling).

Maybe other active IRQs with higher priority?

I’m working with Ext Interrupt 1. I will take a look, but I suspect it would be the same scenario unless there is an interrupt not handled by the TinyCLR or the framework.

I don’t use RLP (I use interops because I find them to be better structured and easier to manage), but I assume that RLP is calling TinyHAL code in some fashion. For example, I bet RLPext->Interrupt.Install really just calls CPU_GPIO_EnableInputPin(). In this case, you’re really dependent on how the G120 NETMF port is written.

On the STM32F4 port that Oberon did, interrupts are disabled as soon as a GPIO interrupt occurs, and aren’t re-enabled until after the callback function is executed. So, in that case, what others have said about possible interrupts occurring is completely bogus.

Perhaps Gus or someone else from GHI can chime in and confirm that on the G120 they’re disabling all interrupts immediately at the beginning of their ISR, and re-enabling them as soon as user callbacks have been executed. If they don’t do that, you may get other weird behavior going on.

One more thing: the high-level access we get comes at the expense of lots of switch/case/conditional statements that execute between the interrupt firing and our user code actually running. That’s why interrupt timing isn’t consistent across ports and pins. For example, an interrupt occurring on pin A0 will be handled with a different delay than one occurring on B0. However, it should be consistent between calls on the same pin.
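To illustrate why the delay differs per pin but stays consistent for the same pin, here’s a hypothetical sketch (not the actual G120 firmware) of a shared GPIO ISR that scans status bits until it finds the pin that fired:

```cpp
#include <cassert>

// Hypothetical sketch (not actual firmware code): a shared GPIO ISR that
// scans each port's interrupt-status bits to find the pin that fired.
// The number of checks depends on which port/pin triggered, so the delay
// differs between pins but is repeatable for any one pin.
const int PORTS = 4;
const int PINS_PER_PORT = 32;

// Count the status checks the dispatcher performs before it reaches the
// callback for (port, pin).
int checksBeforeDispatch(int port, int pin)
{
    int checks = 0;
    for (int p = 0; p < PORTS; p++) {
        checks++;                    // read this port's status register
        if (p != port) continue;     // no pending bits on this port
        for (int b = 0; b < PINS_PER_PORT; b++) {
            checks++;                // test one status bit
            if (b == pin)
                return checks;       // found it: invoke the pin's callback
        }
    }
    return checks;                   // spurious interrupt: nothing pending
}
```

In this model A0 (port 0, bit 0) is reached after 2 checks and B0 (port 1, bit 0) after 3, so their latencies differ, but each pin’s count never changes between interrupts.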

Thanks for that information, jay. It would be good to have this explained by a GHI expert, just to understand how things are really happening.

Could you explain a little bit about interops, or point me to some documents to read about them and compare with RLP?

I’ve noted this strange approach… Looking through the whole STM32 porting, I’m not very impressed, but it works. I’m going to implement the internal Ethernet driver for an external PHY, but looking at the code I’m having some trouble figuring out how to manage the DMA interrupt without creating/getting hang-ups…

PS: This doc is a very good starting point to acquire some interop knowledge.
http://www.ghielectronics.com/docs/130/firmware-custom-build-fez-hydra

Interesting, dobova. I’m wondering whether this could work for a G120 board. It’s a premium product, and the GHI repository doesn’t have a G120 version.

@ Mogollon - I’m working on an STM32F407 MCU; the G120 is an NXP MCU, and the Premium library is not public domain.
I’m not convinced that the internal eth is a good idea (lots of pins wasted), but my board has a PHY chip I would like to use (if I can get something done in a short time).

You can’t do interops with GHI’s premium products since the source code for their port is unavailable. That’s why I stick to the open-source ports. Dobova posted a great link about interops on the FEZ Hydra. Definitely read through that for the details, however, here’s a high-level idea of what’s going on:

A core part of NETMF is the ability for managed and native code to interact – hence “interops”. People talk about writing “interop code” to do this or that, so we often think of interops as third-party library code. However, it’s important to understand that, actually, interop code is used all over the place inside of NETMF.

Look at the serial port code, for example. When you call the Send() function on a serial port object in NETMF, it’s actually a wrapper for a lower-level Send() function (still managed), which calls an interop function written in C++. This interop function calls the PAL function USART_Write(). This PAL function calls the HAL function CPU_USART_WriteCharToTxBuffer().

It sounds complicated, but the advantage is that you, as the developer, only need to implement CPU_USART_WriteCharToTxBuffer() for your particular processor, and then call “SerialPort.Send()” in your managed application – all the gross code sitting between those two things has already been implemented generically.
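The layering can be sketched with stand-in implementations. These stubs are illustrative only; the real NETMF functions take more parameters (port numbers, buffers, lengths, timeouts):

```cpp
#include <cassert>
#include <string>

// Stand-in sketch of the managed -> interop -> PAL -> HAL layering.
// Only the HAL function is port-specific; everything above it is generic.

static std::string g_txBuffer;       // stands in for the hardware TX FIFO

// HAL: the only layer a porter implements for a new processor. Real code
// would write a peripheral register instead of appending to a string.
bool CPU_USART_WriteCharToTxBuffer(char c)
{
    g_txBuffer += c;
    return true;
}

// PAL: generic, already written; pushes a buffer through the HAL.
int USART_Write(const char* data, int size)
{
    int written = 0;
    while (written < size && CPU_USART_WriteCharToTxBuffer(data[written]))
        written++;
    return written;
}

// Interop layer: what a managed SerialPort.Send() call ultimately reaches.
int SerialPort_Send(const std::string& s)
{
    return USART_Write(s.c_str(), (int)s.size());
}
```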

Now, if you go digging through the code, you’ll see that there’s a lot of extra code required to do method checking, generate function tables, grab parameters off the stack, and put results back on the stack. Luckily, when you enable stub generation in Visual Studio, declare a function extern, and decorate it appropriately, it will do all that work for you. For example, if you want to implement a singleton class named MyLib with a Transmit function, you’d declare it:


[MethodImpl(MethodImplOptions.InternalCall)]
public static extern void Transmit(byte[] data);

If you ticked the “Generate native stubs” checkbox when you built the library, you’d get a stubs folder in the build output. This stubs folder would contain a function like this:


void MyLib::Transmit( CLR_RT_TypedArray_UINT8 param0, HRESULT &hr )
{
  
}

That’s easy enough – your byte array turned into a CLR_RT_TypedArray_UINT8 object named param0. You’ll have to look at the specific documentation for it, but this object can be accessed just like any other array, with the advantage of having some extra functionality; i.e., you can do:


void MyLib::Transmit( CLR_RT_TypedArray_UINT8 param0, HRESULT &hr )
{
    for(int i=0; i<param0.GetSize(); i++)  // GetSize() returns the size of the array
    {
        MyTransmitFunction(param0[i]);  // we can use [] accessor methods, just like C++ arrays
    }
}

So, there are some data types with enhanced functionality, but all in all, there’s not a whole lot new to learn here.

One thing you won’t have to mess with, but should still be aware of: the function above isn’t actually the function that gets called directly. Rather, in your stubs folder, you’ll see some other files that have to do with interop signatures and a method lookup table, and then a marshal function that looks like this:


HRESULT Library_MyLib::Transmit___STATIC__VOID__SZARRAY_U1( CLR_RT_StackFrame& stack )
{
    TINYCLR_HEADER(); hr = S_OK;
    {
        CLR_RT_TypedArray_UINT8 param0;
        TINYCLR_CHECK_HRESULT( Interop_Marshal_UINT8_ARRAY( stack, 0, param0 ) );
        MyLib::Transmit( param0, hr );
        TINYCLR_CHECK_HRESULT( hr );
    }
    TINYCLR_NOCLEANUP();
}

That gross piece of code is what TinyCLR actually calls when you call your Transmit function from C#. It grabs the current stack frame, pulls the array off the stack, and calls your function with it. It also passes around the hr parameter which your native code can use to throw exceptions in the managed environment (very useful!).

Basically, with interops, you can do things as “nicely” as the core NETMF functions work – since you’re using the same technology.

Typically, you use native interops to speed up processing of stuff. That sort of stuff – algorithms – is really easy to implement, since it’s just straight-ahead C++. But when you want to start doing hardware stuff, you need to get the PK manual out and figure out the names of functions. For example, if you want to set a GPIO pin high inside of an interop, you have to do:


CPU_GPIO_EnableOutputPin(3, 0); // set A3 as an output pin, and set its initial state to 0.
...
CPU_GPIO_SetPinState(3, 1); // later on, turn the pin on.

Not hard, but definitely underdocumented. For me, the most confusing part was figuring out how to generate an event (like an interrupt) in the managed framework from interop code.

You end up writing what’s called an interrupt driver, which has Initialize, Enable/Disable, and Cleanup functions. You’d typically have a fourth function that actually generates the interrupt. The trick is to call

SaveNativeEventToHALQueue( g_Context, param0, param1 );

where param0 and param1 are ints. Once that is called, you can hook up to that event in the managed environment using NativeEventDispatcher:

NativeEventDispatcher m_evtDataEvent = new NativeEventDispatcher("MyEvent", 0);
m_evtDataEvent.OnInterrupt += m_evtDataEvent_OnInterrupt;

If you need to pass a byte array to the managed environment, the only way to do that is to trigger an interrupt which causes the managed environment to call a “GetData(byte[])” sort of function to retrieve the data.
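That buffering pattern can be sketched like this (names such as IsrPushByte, and the plain-pointer GetData signature, are illustrative rather than the NETMF API; a real interop would receive a CLR_RT_TypedArray_UINT8):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the pattern: the native side buffers incoming bytes, raises
// an event, and the managed OnInterrupt handler then calls a GetData-style
// interop to drain the buffer into a managed byte array.
const int BUF_SIZE = 64;
static uint8_t g_ring[BUF_SIZE];
static int g_head = 0, g_tail = 0;

// Called from the native ISR: store a byte, then (in real code) call
// SaveNativeEventToHALQueue() to wake the managed event handler.
bool IsrPushByte(uint8_t b)
{
    int next = (g_head + 1) % BUF_SIZE;
    if (next == g_tail) return false;   // buffer full: drop the byte
    g_ring[g_head] = b;
    g_head = next;
    return true;
}

// The GetData-style interop: copies buffered bytes into the caller's
// array and returns how many were retrieved.
int GetData(uint8_t* dest, int maxLen)
{
    int n = 0;
    while (g_tail != g_head && n < maxLen) {
        dest[n++] = g_ring[g_tail];
        g_tail = (g_tail + 1) % BUF_SIZE;
    }
    return n;
}
```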


@ jay - Thanks for this really good information!

@ jay - Thank you for the explanation. Specifically, I’m trying to figure out how to generate an interrupt to managed code, and your description is pointing me in the right direction.

Is there some example of that in the firmware code?

Yes, actually. Check out \Product\Sample\InteropSample. It contains a “callback” sample. The way they wrote it is very confusing (they derive a class from NativeEventDispatcher instead of just instantiating it directly, and it’s not immediately clear what’s going on), but basically, on the managed side, they initialize a NativeEventDispatcher-derived object with the driver name “InteropSample_TestDriver” – the driver name is what NETMF uses to figure out which native events get wired to which driver.

Inside InteropNativeCode, you’ll find Spot_InteropSample_Native_Microsoft_SPOT_Interop_TestCallback.cpp. This is the native driver you’ll need to copy. Notice that it contains this global variable:


const CLR_RT_NativeAssemblyData g_CLR_AssemblyNative_Microsoft_SPOT_InteropSample_DriverProcs =
{
    "InteropSample_TestDriver", 
    DRIVER_INTERRUPT_METHODS_CHECKSUM,
    &g_InteropSampleDriverMethods
};

You’ll modify that variable by changing the name (replace “Microsoft_SPOT_InteropSample” with your module name) and the string (change “InteropSample_TestDriver”). You’ll also probably want to rename g_InteropSampleDriverMethods to something different.

You only need three functions: Initialize, EnableDisable, and Cleanup. These are the three functions inside the DriverMethods variable. However, you’ll probably have a fourth function that actually generates the interrupt. To generate the interrupt, call SaveNativeEventToHALQueue() with the NativeEventDispatcher HeapBlock that the driver was initialized with.

Also, the Porting Kit manual is actually not terrible at all – it documents this functionality pretty well. Look up “Supporting Asynchronous Method Calls” in the porting kit help file.

One thing that isn’t explained is how to get the driver into your solution. Add a reference to a .proj file that contains references to the files to compile (this is similar to any other interop feature), and then add an InteropFeature tag with the name of the interop feature.

Remember, the name of the InteropFeature is what NETMF uses to look for the global variable that contains the assembly data. In other words, adding an interop named MyInterop will cause the build process to look for a
g_CLR_AssemblyNative_MyInterop_DriverProcs variable – so make sure you name things correctly!

@ jay - lol
10^12 thanks for the clarification, I got the example… So the Shuttle Orbital Autopilot PID processor is for noobs compared to the event-callback PK stuff!

Haha, it’s actually not that bad if you step back and make sure you understand what’s going on. If you just try to mindlessly modify configuration files, you’ll get lost in the woods very quickly!

Basically, when you include an interop feature using the

<InteropFeature Include="MyFeature_ModuleName" />

what that’s really doing is telling the builder to add a reference to a variable named g_CLR_AssemblyNative_MyFeature_ModuleName to NETMF’s array of native functions. Without that, there’s no way for NETMF to get a reference to your functions.

This is the same for everything – whether it’s an interrupt driver or just a regular interop function. The only difference is that Visual Studio will automatically stub out regular interop functions, so you usually don’t have to mess with that stuff by hand. Unfortunately, there’s no way to stub out interrupt (asynchronous) drivers, so you’ll have to do those by hand.

The only other difference is the checksum. For native interops, the checksum is a unique value that (should) change every time the code changes (so that you don’t attempt to call different versions of functions). However, for interrupt drivers, the checksum is always set to the constant DRIVER_INTERRUPT_METHODS_CHECKSUM. This wasn’t very clear in the manual, and I initially tried setting it to a different value. Didn’t work. Must be set to that constant.

One other thing to note is that there’s really no way of integrating the two together. So, you’ll always have to treat your interrupt/async driver as a separate module from the rest of your interop code.

Good luck!

Thank you again. I learned most of that the hard way, doing the interop for an LCD driver… But I will need callbacks for another peripheral driver (an ADXL accelerometer), and I was in the dark, really.