Can NETMF scheduler time slice be reduced from 20ms


I am attempting to use thread priorities to sort out a problem I have in a system using vehicle LIN comms. I need to respond to the LIN headers received in less than 20ms. I have the LINResponder routine running in its own thread. My main thread listens for button presses, updates values for LIN etc. and then updates a 2.2" SPI display. I have kept the screen update to a minimum, but it still takes an age compared to the speed I need to respond to LIN, so it uses its time slices pretty fully when the screen is updated.

I have downgraded the main thread to lowest priority and the LINResponder to the highest priority.

However, from what I have read, the NETMF scheduler time slice is set to 20ms, so my display thread takes the full 20ms before it relinquishes the CPU and the LINResponder gets a chance to respond to the new message.

Is there any way to reduce the scheduler time slice from 20ms? I considered peppering the display update routines with Thread.Sleep(0) to keep handing control back, but that seems a bit clumsy.

Or is there another way to ensure the LINResponder gets all the CPU time it wants/needs?



Add Thread.Sleep(0) to release the CPU to a pending thread.
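For illustration (this sketch is not from the original posts), the Thread.Sleep(0) suggestion applied to the display thread might look something like the following. DrawRow() is a hypothetical helper standing in for one row's worth of SPI transfer; the point is only the chunk-then-yield structure:

```csharp
using System.Threading;

public static class Display
{
    // Sketch only: split the long 320x240 SPI update into per-row chunks
    // and yield between them, so the higher-priority LINResponder thread
    // can preempt within one row's worth of work instead of waiting out
    // a full 20 ms time slice. DrawRow() is a hypothetical helper.
    public static void UpdateScreenCooperatively()
    {
        for (int row = 0; row < 240; row++)
        {
            DrawRow(row);        // short SPI burst for this one row
            Thread.Sleep(0);     // hand the CPU back if anyone is ready
        }
    }

    static void DrawRow(int row) { /* SPI transfer for one display row */ }
}
```

With this shape the worst-case latency added to the LIN response is roughly one chunk of SPI work rather than the remainder of a 20ms slice. This is NETMF device code, so it only runs on the target hardware.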

You could also use a mutex to get exclusive access to the CPU.

If you need to respond in time intervals that short, NETMF might not be the best option for you.

Have to agree with @ godefroi - a quick look at the spec and the strict timing makes me wonder if you could ever pull it off reliably. There are just some things that NETMF is not designed to do, and deterministic execution is one of them.

You might be able to pull it off in firmware (not RLP), but that’s a substantial task, and probably more substantial than just writing something for another platform (e.g., mbed, etc).


Thanks for the feedback. Yes - LIN needs a pretty quick response (nominal response time for my test message is 13ms). The Cobra III I am using is quick enough for me to make it “deterministic enough” (am I allowed to say that!!) for LIN to be happy in nearly all cases.
Everything is happy running the main loop serving LIN with buttons being sensed to change data etc. The only issue comes when I need to do a big update of the 320x240 SPI screen - that takes the full 20ms time slice, and a message coming in then (on a 40ms period) usually misses the response timing.

I am a bit tied in to this hardware for now, so I will try and make a background version of my display update routines that uses sleep(0) regularly during SPI display data transmission to allow the LIN serial transmission to continue. That might get me past this issue for now.

As an alternative, the LIN comes in via a COM port - is there any way to make the DataReceived event interrupt everything immediately? Or must it always wait until the current 20ms time slice is up before it can be scheduled?



If you want to handle it in the receive interrupt, you need to use RLP and write your own C receive and send logic there, using the CPU registers directly. Look for Simon's introduction to RLP in CodeShare for a quick start.
I did this with some SPI stuff already, and UART (also for LIN and Modbus) is next on my list, but I'm not sure when exactly I will get to it.


By ‘Simons introduction to RLP in CodeShare’, do you mean the ‘RLP in depth 1.1.pdf’?

It’s the only Codeshare RLP search item posted by a Simon.


I read ‘RLP in depth 1.1.pdf’ and some general stuff on RLP. I have only dabbled with RLP briefly a long time ago on a Panda far far away.

I can see how to use RLP functions to execute specific tasks. Is it possible to invoke an RLP function (perhaps in its own thread) to sit permanently at low level, monitoring a serial port for LIN headers and replying to them with the data as soon as they come in, while the rest of the NETMF managed code still runs in parallel? That way I could still take user input and update the LIN data (I will have to work out how to exchange data between the running RLP routine and the managed code). Or does the NETMF code stop while RLP is running?

Thanks for any pointers,


@ cyberh0me - Thanks for the info.

Hmmm… I wonder if I should do this backwards and use RLP to try and update the screen via SPI really quickly, instead of trying to move the ongoing LIN handling to RLP!!

I feel some experimentation is needed :slight_smile:


@ HalfGeek - that is the right guide for RLP. It's a really good guide.

You could use the native LCD driver on the G120 (Cobra III) and ditch SPI altogether.

@ hagster - I "could" if I didn't like the price of those SPI LCDs so much :slight_smile:

The supported displays were relatively expensive by comparison.

There was a plea for native SPI display support for G120 a while back, but I'm not sure if anything has come of it.


For UART I guess you need to set up the UART interrupt registers so it will generate an interrupt, and then implement the interrupt routine according to the CPU manufacturer's specs.
The stock RLP C functions will not take you where you need to be.
In fact, you have to write a classic low-level C-style UART driver with interrupts.
If you need to know in managed code that (and what) you have received, you can use RLP events into managed code.
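To make the advice above concrete, here is a minimal sketch of the kind of C receive path you would write for RLP. Everything hardware-specific is stubbed out: on a real G120 (LPC1788) the ISR would be registered for the UART interrupt and read the memory-mapped receive buffer register (RBR) after enabling the receive interrupt in IER; here the received byte is passed in as a parameter so the buffering logic can be shown on its own:

```c
#include <stdint.h>

#define RX_BUF_SIZE 64

/* Simple ring buffer filled by the ISR and drained by foreground code. */
static volatile uint8_t  rx_buf[RX_BUF_SIZE];
static volatile unsigned rx_head = 0;   /* next write position (ISR)  */
static volatile unsigned rx_tail = 0;   /* next read position (main)  */

/* On real hardware this is the UART interrupt handler; `rbr` stands in
   for a read of the memory-mapped receive buffer register. */
void uart_rx_isr(uint8_t rbr)
{
    unsigned next = (rx_head + 1u) % RX_BUF_SIZE;
    if (next != rx_tail) {      /* drop the byte if the buffer is full */
        rx_buf[rx_head] = rbr;
        rx_head = next;
    }
}

/* Non-blocking read for the foreground code; returns -1 if empty. */
int uart_read_byte(void)
{
    if (rx_tail == rx_head)
        return -1;
    int b = rx_buf[rx_tail];
    rx_tail = (rx_tail + 1u) % RX_BUF_SIZE;
    return b;
}
```

In a real LIN responder the ISR would additionally match the incoming header and write the response bytes straight to the transmit register, so the reply never has to wait on the managed scheduler at all.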

On SPI display support for the G120/G400: there is a Task Tracker post for it, so go upvote it, folks!

Another option is to dump the threads. Use an architecture that uses the built-in .Net MF event handlers for the LIN serial port, and write all your methods so they're non-blocking. That's what I do for one of my programs with a lot of IO, and it seems to work well for me. It would be worth timing how fast the event handler fires in the simplest possible no-threads test program. You do have to be careful about using .Net MF functionality that spins up threads under the hood. For example, my understanding is that timers all get handled in one thread that .Net MF manages for us under the hood.

By the way, I have a minor issue with the complaints I hear about .Net MF's real-time capabilities. I find it misleading to say that .Net MF isn't deterministic or isn't a real-time operating system. Every OS is deterministic to a certain degree; the real question is how much jitter there is in the response time to a critical event. As long as I follow a few basic design principles, I've found that .Net MF can respond quite nicely in the tens-of-milliseconds time frame (maybe better; I just haven't needed better, so I haven't tested for it), which is plenty real time for most applications. I'm not a software engineer, so please take this with a grain of salt, but the guidelines I use are: 1) don't create enough garbage that the garbage collector runs during parts of your program that need to be deterministic (better yet, don't create any garbage); 2) make sure all the stuff that has to happen with fast response time runs in the same thread and there aren't any other threads running; 3) make sure all the methods in the fast-response-time part of your code are non-blocking.


I couldn't help myself; I wrote a "simplest possible no-threads test program" like I mentioned in my earlier post. The program sets up a serial port with TX looped back to RX and a DataReceived event handler, then writes a simple message, logs the time after the SerialPort.Write command, and waits for the event handler to fire. When the event handler fires, it logs the time and prints out the time delay.

In order to really understand the results, I'm hoping someone can answer the following question. Does the .Write command wait until the write is complete before moving on to the next line, or does it just start the write and move on? If the latter, then I have to subtract the time it takes to write the message to get the actual time delay. 9600 baud is about 0.1 millisecond per bit, one character is about 10 bits, and 12 characters in my test message means it takes about 12 milliseconds to write the message. If this is really the case, then the response time is of order 2 milliseconds (see results below). If it isn't, the response time is of order 14 milliseconds. Both are pretty good for a supposedly non-real-time OS, at least in my opinion.

Here’s the results. I’m not sure why the first time delay is so much longer than the others and I have no idea why one of the time responses is so much shorter than all the others.

Program Started
Received message: Test Message with time delay = 00:00:00.0460634
Received message: Test Message with time delay = 00:00:00.0137845
Received message: Test Message with time delay = 00:00:00.0137396
Received message: Test Message with time delay = 00:00:00.0138242
Received message: Test Message with time delay = 00:00:00.0138073
Received message: Test Message with time delay = 00:00:00.0139156
Received message: Test Message with time delay = 00:00:00.0140003
Received message: Test Message with time delay = 00:00:00.0141037
Received message: Test Message with time delay = 00:00:00.0140214
Received message: Test Message with time delay = 00:00:00.0141220
Received message: Test Message with time delay = 00:00:00.0142044
Received message: Test Message with time delay = 00:00:00.0143044
Received message: Test Message with time delay = 00:00:00.0143005
Received message: Test Message with time delay = 00:00:00.0001140
Received message: Test Message with time delay = 00:00:00.0142669
Received message: Test Message with time delay = 00:00:00.0139056
Received message: Test Message with time delay = 00:00:00.0140010
Received message: Test Message with time delay = 00:00:00.0140071
Received message: Test Message with time delay = 00:00:00.0141077
Received message: Test Message with time delay = 00:00:00.0140923
Received message: Test Message with time delay = 00:00:00.0141913
Received message: Test Message with time delay = 00:00:00.0142103

For reference, here’s the program. It would be great if someone more knowledgeable about .Net MF than me (that would be just about everybody) would check to see if this makes sense.

using System;
using Microsoft.SPOT;

using System.Text;
using System.Threading;

using Microsoft.SPOT.Hardware;
using System.IO.Ports;

namespace UARTLoopBack
{
    public class Program
    {
        public static SerialPort serialPort = new SerialPort("COM6", 9600, Parity.None, 8, StopBits.One);
        public static TimeSpan startTime = new TimeSpan();
        public static TimeSpan responseTime = new TimeSpan();

        public static void Main()
        {
            Debug.Print("Program Started");

            serialPort.ReadTimeout = 5000;
            serialPort.Open();
            serialPort.DataReceived += new SerialDataReceivedEventHandler(dataReceived);

            const string message = "Test Message";

            while (true)
            {
                serialPort.Write(UTF8Encoding.UTF8.GetBytes(message), 0, message.Length);
                startTime = Microsoft.SPOT.Hardware.Utility.GetMachineTime();
                Thread.Sleep(1000);     // pace the loop so each response is logged separately
            }
        }

        private static int numBytesAtPort = 0;
        private const int portBytesBufferSize = 256;
        private static byte[] portBytes = new byte[portBytesBufferSize];

        private static StringBuilder sbTemp = new StringBuilder(portBytesBufferSize);
        private static TimeSpan timeDiff = new TimeSpan();

        private static void dataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            responseTime = Microsoft.SPOT.Hardware.Utility.GetMachineTime();
            numBytesAtPort = serialPort.BytesToRead;

            if (numBytesAtPort > 0)
            {
                try
                {
                    serialPort.Read(portBytes, 0, numBytesAtPort);

                    byte[] msgBytes = new byte[numBytesAtPort];
                    Array.Copy(portBytes, msgBytes, numBytesAtPort);

                    sbTemp.Append("Received message: ");
                    sbTemp.Append(new string(Encoding.UTF8.GetChars(msgBytes)));
                    sbTemp.Append(" with time delay = ");
                    timeDiff = responseTime - startTime;
                    sbTemp.Append(timeDiff.ToString());
                    Debug.Print(sbTemp.ToString());
                    sbTemp.Length = 0;          // reset for the next message
                }
                catch (Exception)
                {
                    Debug.Print("Serial Port Read Exception");  // probably should do something smarter here
                }
            }
        }
    }
}


Well, I think I answered my own question about whether SerialPort.Write blocks until it is done. I moved the startTime = Microsoft.SPOT.Hardware.Utility.GetMachineTime(); line ahead of the SerialPort.Write line, and the timeDiff is only a few tenths of a millisecond longer than with the startTime line after the .Write line. So I think that means SerialPort.Write is non-blocking. Probably everyone else knew this, but I like to prove these things to myself.

@ Gene - Now, it would be interesting for you to do that test again, but write the data to a string instead of using Debug.Print, as that uses a lot of time/resources, and then display the data after, say, 20 iterations…

@ michaelb - Thanks for taking a look. Since I store the responseTime first thing when the event handler fires I don’t think it matters how long all the sbTemp and Debug.Print stuff takes. It shouldn’t affect the value of timeDiff. Does that seem right to you?

@ Gene - Yes & No, remember, I’m not a professional at this, but just figuring it out as I go along. I’ve been digging into the multi threading as a result of needing VERY specific timing for 1-wire devices, if you search thru here, you will see references to that, and a couple of responses from Microsoft that have me still reading back thru code to understand.

Now that said, it being a Sunday night, and without digging through reference manuals to remember how long each thread normally runs: simply having the debugger connected (or a USB connection, even without the debugger running) makes your program a multi-threaded application, as it takes time out for the debug routine (or your breakpoints wouldn't work; you're not using a hw debugger). So, after reading your response, I would change my question to the same, but instead of outputting to the screen, write to a text file with the debugger disconnected…

Secondly, after using Debug.Print extensively over the last 18 months as I've been learning, I've seen it get interrupted while it is displaying. So depending on when the hw interrupt fires and when the thread switches, I'm not convinced that it isn't halfway through your next iteration before the rug is pulled out from underneath it and the .Print routine runs. This comes from troubleshooting sending 600 or so individual Ethernet posts to a server (they didn't want to spend the time to parse a single file, but that will get corrected soon) as fast as their server can take them. I don't have proof, other than what I'm willing to commit to: just a general feeling that the data included in the Debug.Print isn't always in sync with what is actually happening…

I think you're seeing USB "packets" and output buffering at play here. Debug.Print writes the content out, but the data never arrives at the PC side because it's sitting in a send queue that requires a full payload. I suspect you'll find that the string itself was all written as a single operation; it just needed a second transfer to get to you. (As an aside, I reckon we could validate that without looking at the code by sending strings of known sizes, as I bet that would show how large each transfer was.)