RLP to handle RTC interrupt

Our product uses the G400 with the NETMF SDK 2016 R1. We need to implement an RTC interrupt (triggering every second) within RLP.

The SAM9X35 datasheet suggests the RTC interrupt is shared with the other system module interrupts, so I would first need to configure the interrupt controller to be able to use it.

My question is: has anyone tried it? Will configuring the interrupt controller negatively affect stability? Thanks.

Why not just use a Timer in C# and set its TimeSpan to one second?
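
For reference, a minimal sketch of what that looks like (untested; the class and names are just for illustration):

using System;
using System.Threading;
using Microsoft.SPOT;

class Ticker
{
    static Timer _oneSecond;

    public static void Start()
    {
        // Fire the callback after one second, then every second thereafter.
        _oneSecond = new Timer(OnTick, null, new TimeSpan(0, 0, 1), new TimeSpan(0, 0, 1));
    }

    static void OnTick(object state)
    {
        Debug.Print("tick @ " + DateTime.Now.ToString());
    }
}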

Because I am trying to get more accurate timestamps for logging purposes, since the NETMF clock has a much larger drift than the RTC. (My board drifts about 10 minutes in an hour, and syncing the clock at short intervals is not an option.)

I am thinking of using the RTC interrupt to reload a hardware timer; this way, by combining the RTC time with the timer count, I can get a relatively accurate time.

You can have a timer/thread that reads the RTC once a second and updates the system time. Adding RLP for the RTC is going to be very complicated for little benefit, in my opinion.

Agreed! Just move the RTC time over to the system time, even every 10 seconds, and use that.

Sadly this is very hard to get done. I know NETMF is not a real-time operating system, but if I could at least get a reliable time down to the millisecond, it would be a lot more useful.

I don’t understand your point?

Any time you read the RTC and then go to use it, you’re already well past that actual time. So why does millisecond accuracy matter?

As Gus and I said, just take the RTC time into the system time, and then use the system time. Periodically update the system time from the RTC as often as you think is necessary (my take would be a frequency of minutes, not seconds). Code snippet below (untested, cut from someone else’s project, but it has the necessary calls):

// Assumes the GHI RealTimeClock class plus Microsoft.SPOT (Debug)
// and Microsoft.SPOT.Hardware (Utility).
DateTime lastRebootTime = DateTime.MinValue;
bool rtcIsWorking = false;

try
{
    // Reading the RTC can throw if it has never been set.
    lastRebootTime = RealTimeClock.GetTime();
}
catch { }

if (lastRebootTime.Year > 2010)
{
    // The RTC holds a plausible date, so copy it into the system clock.
    rtcIsWorking = true;
    Utility.SetLocalTime(RealTimeClock.GetTime());
    Debug.Print(constAppName + ": System time set: " + RealTimeClock.GetTime().ToString(constDateFormat));
}
else
{
    Debug.Print(constAppName + ": Error setting system time (RealTimeClock battery may be low).");
}

The RTC ticks once per second and doesn’t have a millisecond part. If I use the RTC to correct the system time, it could be off from real time by 0–999 milliseconds. This could make the situation worse if I am updating every 10 seconds based on the system timer.

The goal is to sync the G400 with the host computer at most once per hour while still keeping the drift within a second.
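
You can remove most of that 0–999 ms offset by waiting for an RTC second boundary before copying the time over. A rough, untested sketch, using the same RealTimeClock and Utility calls as the snippet above:

using System;
using Microsoft.SPOT.Hardware;

// Sync the system clock on an RTC second boundary to avoid the 0-999 ms offset.
static void SyncOnSecondBoundary()
{
    DateTime t = RealTimeClock.GetTime();
    int startSecond = t.Second;

    // Spin until the seconds field changes, i.e. a fresh RTC second begins.
    do
    {
        t = RealTimeClock.GetTime();
    } while (t.Second == startSecond);

    // At this instant the RTC reading is only stale by the read latency,
    // so copy it straight into the system clock.
    Utility.SetLocalTime(t);
}

The busy-wait can last up to a second, so this only makes sense for infrequent syncs (e.g. hourly).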

OK, so why don’t you tell us why you think you need millisecond accuracy? Actually, I don’t care, but if you explain this kind of thing it may help others come up with potential solutions that suit you. My take is that using the RTC interrupt isn’t going to improve your situation; I believe you will still have the same issue with drift.

How about using a GPS chip with a one second pulse? Or external RTC?

In response to your original question: on the SAM9X, no, I haven’t, but I have done similar things on the STM32 platform. The first thing to realize is that the very best you can hope for is about 2% accuracy (NETMF task switching runs on a 20 ms quantum, so with an interrupt every 1000 ms there can be up to a 20 ms delay between the hardware interrupt and the first NETMF opcode executing in your handler). That’s true whether you use the onboard RTC or an external interrupt source.

If you really, really need 1 ms accuracy, then you need to configure the timer and interrupt (probably in native code), collect your data in native code in response to the interrupt, and then hand the already-collected data off to managed (C#/VB) code to be stored, sent, or whatever. For timing accuracy, you can’t transition to NETMF to collect the data.

In my own opinion and practice, NETMF code simply doesn’t belong in sense or control loops that need sub-second accuracy. There’s too much jitter and delay introduced by the interpreter. If you can tolerate 2% (or more, under some conditions) of jitter (which is not the same as drift), then what you want to do is doable in pure NETMF code.

The kind of drift you are talking about makes me wonder if you are delaying by your desired interval after each periodic task, rather than calculating a variable delay to the next event. That would make your schedule drift by the duration of the periodic task’s execution time (plus that pesky 2% jitter). You can avoid that by setting your timer at the end of the periodic task, based on the RTC value (delay = Tdesired - Tnow, where Tnow comes from the RTC chip). Even if you set a fixed delay right at the start of your handler, the scheduling jitter and delay will accumulate, so you need to calculate a variable delay each time, as described here, rather than using a fixed value.
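
A minimal sketch of that variable-delay scheduling (untested; the names are just for illustration, and in practice Tnow should come from the RTC rather than DateTime.Now, which drifts):

using System;
using System.Threading;

class PeriodicTask
{
    const int PeriodMs = 1000;          // desired interval, Tdesired
    static DateTime _nextDue;
    static Timer _timer;

    public static void Start()
    {
        _nextDue = DateTime.Now.AddMilliseconds(PeriodMs);
        _timer = new Timer(OnTick, null, PeriodMs, Timeout.Infinite); // one-shot
    }

    static void OnTick(object state)
    {
        DoWork();

        // Reschedule against the absolute next-due time instead of a fixed
        // delay, so the work's execution time does not accumulate as drift.
        _nextDue = _nextDue.AddMilliseconds(PeriodMs);
        int delay = (int)((_nextDue - DateTime.Now).Ticks / TimeSpan.TicksPerMillisecond);
        if (delay < 0) delay = 0;       // we overran; fire again immediately
        _timer.Change(delay, Timeout.Infinite);
    }

    static void DoWork() { /* log a sample, etc. */ }
}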

Thank you everyone!