Musings on garbage collection, performance, timing jitter and power consumption

I help develop hardware and software for battery-powered instruments and moorings that remain in the ocean (surfaced and submerged) for several years, collecting data for scientific research.

Although I’m not a “noob” developing with ARM processors and embedded C/C++, I’m a complete novice with C# and the Gadgeteer environment. I’m investigating the use of GHI’s products to minimize development time and effort for oceanographic instruments. So far, our results have been very encouraging.

Several of my organization’s other instruments or platforms use Java running on top of embedded Linux. Java’s garbage collection has been very problematic for devices that are deployed for long periods of time: either storage was completely exhausted, or the JVM spent a long time reclaiming storage at unpredictable intervals. All of the garbage-collection-related problems have since been reduced to nuisance level, but they were severe enough to give us pause about using any software environment that relies on GC.

Has anyone used NETMF and Gadgeteer controllers to build long-running apps? Any comments about the impact of NETMF’s garbage collection on long-running apps would be welcome.

The NETMF CLR seems to be an interpreted environment driven by “byte codes”. The NETMF documentation mentions “Just In Time” compilation to machine code. Do GHI’s products do any JIT translation? If not, are there any rough rules of thumb for the overhead of the byte code interpreter? Our apps don’t require much performance, but I like to know what things “cost” to some degree.

Our apps log data to a local file system, use TCP/IP jiggery-pokery to telemeter data to shore (often through satellite modems), and have several PID loops for control of velocity and position. The control systems are rather slow, operating at 10 Hz, but some instruments would benefit from certain control loops running at 100 Hz or more. More or less, everything is soft “real time”.

The PID controllers must be executed at a fixed rate with reasonably small timing jitter (< 10 ms is tolerable, < 1 ms would be great); any jitter just introduces additional phase lag into the control system. There must be a dozen different ways to execute code at a fixed rate in C#, using a thread with “sleep” or something driven from a timer. Any suggestions about an approach that will minimize timing jitter?
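For reference, the naive pattern I would start from is a dedicated thread that sleeps for whatever is left of the period after doing the work. A rough sketch only — the 10 Hz rate and the UpdatePid placeholder are mine, and whether this behaves well under NETMF is exactly what I’m asking:

[code]
using System;
using System.Threading;

public class ControlLoop
{
    const int PeriodMs = 100;   // 10 Hz control rate (placeholder)

    public void Run()
    {
        long periodTicks = PeriodMs * TimeSpan.TicksPerMillisecond;
        long next = DateTime.Now.Ticks + periodTicks;

        while (true)
        {
            UpdatePid();   // the actual control-law update (not shown)

            // Sleep only for the time remaining in this period, so a slow
            // iteration doesn't push every later iteration out in time.
            int remainingMs = (int)((next - DateTime.Now.Ticks) / TimeSpan.TicksPerMillisecond);
            if (remainingMs > 0)
                Thread.Sleep(remainingMs);

            next += periodTicks;
        }
    }

    void UpdatePid()
    {
        // placeholder for the PID calculation and actuator output
    }
}
[/code]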

Finally, is there any way to place a G120 or similar GHI controller into a truly “deep sleep” mode? Browsing the GHI literature revealed that some controllers can be placed into a “deep sleep” mode that still requires several milliamps of current. Low power for our requirements means less than 50 uA of current consumed in “deep sleep”.
Is the relatively high current consumption driven by NETMF, or is it just hardware dependent?

Thanks for any insights that members of the forum can provide,
Wayne

Wayne, for the deep sleep, have you considered external hardware to power cycle the device? If it doesn’t need to be awake for extended periods, this might be your best bet for keeping power consumption at a minimum.

GC on NETMF can be problematic at times as well, but you can control it reasonably well by forcing a collection with Debug.GC(true) after high-cost operations. With the G120 you can also benefit from using RLP, which will allow you to have tighter control over the timing of your reads.
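For example, something along these lines (a minimal sketch — the FlushBatch call is hypothetical, but Debug.GC(true) is the real NETMF call and also returns the free memory, which is handy for watching for leaks over long deployments):

[code]
using Microsoft.SPOT;

public static class GcHelper
{
    // Call after an allocation-heavy operation (string parsing, building a
    // telemetry packet, etc.) so the collection happens at a time we choose
    // rather than in the middle of a control-loop iteration.
    public static void CollectNow(string tag)
    {
        uint freeBytes = Debug.GC(true);           // force a full collection
        Debug.Print(tag + ": free RAM = " + freeBytes + " bytes");
    }
}

// Usage, e.g. after writing a batch of records to the SD card:
//   logger.FlushBatch();
//   GcHelper.CollectNow("post-flush");
[/code]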

Welcome to the forum!

I would like to add that unfortunately there is no JIT at this point.

[ul]The GC itself doesn’t become a problem over long times, but usage patterns can be. I don’t know if NETMF’s GC does compaction, but if it doesn’t, fragmentation could conceivably become a problem. Other than that, I don’t foresee the GC being an issue.
There is no JIT compilation in NETMF. The “bytecode” (MSIL) is interpreted at runtime. The overhead is quite large, and can be anywhere from unnoticeable to catastrophic, depending on your requirements.
The fastest PID loop that I’ve heard of was running at (I believe) 50 Hz. 100 Hz is almost certainly out of reach, even with nothing else going on at all, especially on the G120. The fastest GHI board is the Hydra, after that the Cerberus family of boards, and after that the G120. 10 Hz shouldn’t be a problem, I wouldn’t think.
Timing jitter is an inescapable problem, given the mechanics of the GC and threading. The thread quantum is 20 ms, which means a timer with a delay of 50 ms could execute its code anywhere from 50 to 70 milliseconds later, and Thread.Sleep has the same problem. On top of that, there is the GC to consider, which runs at unpredictable times. (A small sketch for measuring the jitter on your own board follows this list.)
Gus or someone else would have to comment on power-down states. I’m not sure they’re completely implemented in the 4.2 firmwares yet.[/ul]
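To put a number on the jitter point above, here is a rough way to measure it on your own board (a minimal sketch; the 50 ms period is arbitrary):

[code]
using System;
using System.Threading;
using Microsoft.SPOT;

public class JitterProbe
{
    const int PeriodMs = 50;          // nominal timer period (arbitrary)
    long _lastTicks;
    Timer _timer;

    public void Start()
    {
        _lastTicks = DateTime.Now.Ticks;
        _timer = new Timer(OnTick, null, PeriodMs, PeriodMs);
    }

    void OnTick(object state)
    {
        long now = DateTime.Now.Ticks;
        long elapsedMs = (now - _lastTicks) / TimeSpan.TicksPerMillisecond;
        _lastTicks = now;

        // Anything much beyond the nominal 50 ms is jitter from the
        // scheduler quantum, other threads, or a GC pass.
        Debug.Print("period = " + elapsedMs + " ms");
    }
}
[/code]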

@ farangutan -

JIT was considered at one time for the Micro Framework (MF) but never implemented/released. Originally, the target processors had minimum RAM, and could not support the compiled code.

MF is not real-time. If periodic polling is done using a timer, there can be several milliseconds of jitter, especially if GC is in progress.

GC issues can be controlled by careful use of dynamic allocation of objects. Using one of the devices with megabytes of memory should help avoid any long-term running issues.
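A concrete example of what careful use of dynamic allocation means in practice (a generic sketch; the serial-sensor read and sample format are hypothetical): allocate the buffers once and reuse them, rather than newing them up on every pass.

[code]
public class SampleReader
{
    // Allocated once at startup and reused for the life of the deployment,
    // so the steady-state loop generates no garbage at all.
    readonly byte[] _rxBuffer = new byte[32];
    readonly double[] _samples = new double[16];

    public void Poll(System.IO.Ports.SerialPort port)
    {
        // Read into the pre-allocated buffer instead of creating a new
        // array (or, worse, building strings) on every iteration.
        int count = port.Read(_rxBuffer, 0, _rxBuffer.Length);

        for (int i = 0; i + 1 < count; i += 2)
        {
            // hypothetical 16-bit big-endian signed samples
            _samples[i / 2] = (short)((_rxBuffer[i] << 8) | _rxBuffer[i + 1]);
        }
    }
}
[/code]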

It should not take you long to understand the limitations of MF.

[quote=“Mike”]
JIT was considered at one time for the Micro Framework (MF) but never implemented/released. Originally, the target processors had minimum RAM, and could not support the compiled code.[/quote]

Specifically, AOT compilation was shelved because the compiled native code was larger than the MSIL, leading to problems fitting it in the flash. JIT was shelved because it consumed a lot of RAM to store the compiled versions of methods, and there was too much contention and recompilation going on to provide good performance.

@ godefroi -

I am not sure, but the ChipWorkX might be faster. Given the “legacy” status of the ChipWorkX module, however, I would not consider using it for a new project.

Erm, I have one running on my balance robot at a few hundred Hz…

The ChipWorkX and Hydra both use AT91SAM9, same core, same max frequency (240 MHz), and they both run at 200 MHz (unless Hydra runs faster on newer firmwares). They should be fairly comparable speedwise, from a NETMF perspective, even given the different compilers used.

I stand corrected. That’s good information to have. What board are you running it on?

On a Cerbuino I am reading a serial stream from an IMU at 50 Hz.
The PID loop happily runs at over 700 Hz with a few other things going on, but I turned down the wick as 700 Hz is kind of pointless.
A G120 would still run at a few hundred Hz easily.
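For a sense of how little work one iteration actually is, the core of a PID update is just a few multiplies and adds (a generic sketch, not the robot’s actual code):

[code]
public class Pid
{
    // Gains and state; tune for the actual plant.
    public double Kp, Ki, Kd;
    double _integral, _lastError;

    // dt is the loop period in seconds (e.g. 0.01 for 100 Hz).
    public double Update(double setpoint, double measured, double dt)
    {
        double error = setpoint - measured;
        _integral += error * dt;
        double derivative = (error - _lastError) / dt;
        _lastError = error;

        return Kp * error + Ki * _integral + Kd * derivative;
    }
}
[/code]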

A big factor in PID rate determination is the actual processing required. An accurate answer can only be determined after the development of a prototype.

Thanks for the prompt and insightful replies… it was a great help.

My balance bot is using a Cerberus to read from my IMU at 120 Hz. My main loop can run at 10–20 times that, but as you say it’s pointless if you don’t have newer data to act upon. My main limiting factor is how fast I can get data out of the IMU, not how fast it can be consumed.

The only places where I’ve noticed GC issues are with COM interop and performance on smaller forms. Our mobile group switched to C++ because GC was getting in the way.

So that tells me that you and Juzzer need to get multiple IMUs!