In response to your original question: on the SAM9X, no, I haven't, but I have done similar things on the STM32 platform. The first thing to realize is that the very best you can hope for is about 2% accuracy (a 20 ms NETMF task-switch quantum against a 1000 ms interval, so up to a 20 ms delay between the HW interrupt and the first NETMF opcode executing in your handler). That's true whether you use the onboard RTC or an external interrupt source.
If you really, really need exact 1 s intervals, then you need to configure the timer and interrupt (probably in native code), collect your data in native code in response to the interrupt, and then hand the already-collected data off to managed (C#/VB) code to be stored, sent, or whatever. For timing accuracy, you can't transition to NETMF before collecting the data.
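The split I'm describing looks something like this. This is a language-agnostic sketch of the pattern, not NETMF API: the names (`isr`, `managed_handler`, `sample_queue`) are my own placeholders, the fast path captures each sample with its timestamp immediately, and the slow consumer drains the queue whenever it gets scheduled, so consumer latency never corrupts the recorded timing.

```python
# Sketch: collect at interrupt time, process at managed time.
# In NETMF the fast path would be native code and the slow path C#/VB;
# here both are plain Python just to show the handoff.
from collections import deque

sample_queue = deque()  # filled by the fast path, drained by the slow path

def isr(timestamp_ms, raw_value):
    """Fast path ('native' side): just capture and enqueue, nothing else."""
    sample_queue.append((timestamp_ms, raw_value))

def managed_handler():
    """Slow path ('managed' side): runs whenever the scheduler gets around
    to it; the timing accuracy was already locked in by the fast path."""
    processed = []
    while sample_queue:
        ts, value = sample_queue.popleft()
        processed.append((ts, value * 2))  # placeholder processing
    return processed

# The ISR fires on time; the managed handler runs late but loses nothing.
isr(1000, 7)
isr(2000, 9)
print(managed_handler())  # → [(1000, 14), (2000, 18)]
```

The point is that the only latency-sensitive work is the enqueue; everything after it can tolerate the interpreter's jitter.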
In my own opinion and practice, NETMF code simply doesn't belong in sense or control loops that need sub-second accuracy; there's too much jitter and delay introduced by the interpreter. If you can tolerate 2% (or more, under some conditions) of jitter (which is not the same thing as 'drift'), then what you want to do is doable in pure NETMF code.
The kind of drift you are describing makes me wonder whether you are delaying by your desired interval after each periodic task rather than calculating a variable delay to the next event. That approach drifts by the duration of the periodic task's execution time (plus that pesky 2% jitter) on every cycle. You can avoid it by setting your timer at the end of your periodic task, based on the RTC value: delay = Tdesired - Tnow, where Tnow comes from the RTC chip. Even if you set a fixed delay right at the start of your handler, the scheduling jitter and delay will still accumulate, so you need to recalculate that variable delay each time rather than using a fixed value.
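To make the difference concrete, here is a small simulation of the two strategies. The numbers are illustrative assumptions (30 ms task time, 20 ms scheduling jitter, 1000 ms period); on a real board `now` would come from the RTC, not a counter.

```python
# Simulated comparison: fixed post-task delay vs. computed delay to the
# next due time (delay = Tdesired - Tnow). All times in ms; task_time and
# jitter are illustrative stand-ins for handler execution time and NETMF
# scheduling jitter.

PERIOD = 1000  # desired interval, ms

def run(schedule, ticks=10, task_time=30, jitter=20):
    """Return the simulated wall-clock times at which the task fires."""
    now = 0
    next_due = 0
    fire_times = []
    for _ in range(ticks):
        now += jitter            # scheduling delay before the handler runs
        fire_times.append(now)
        now += task_time         # handler body executes
        if schedule == "fixed":
            now += PERIOD        # naive: sleep a full period after the task
        else:                    # computed: delay = Tdesired - Tnow
            next_due += PERIOD
            now += max(0, next_due - now)
    return fire_times

fixed = run("fixed")
computed = run("computed")
print(fixed[-1] - fixed[0])     # → 9450: drift of task_time+jitter per tick
print(computed[-1] - computed[0])  # → 9000: error stays bounded per tick
```

With the fixed delay, nine intervals that should span 9000 ms span 9450 ms, and the error grows forever; with the computed delay, each tick lands within one jitter-window of its due time and the error never accumulates.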