System.OutOfMemoryException - runs fine on Hydra, blows up on Cerberus after one and a half cycles of "blue"

Write it like this and the timer timeout will not matter:

```csharp
using System;
using Microsoft.SPOT;
using GT = Gadgeteer;

namespace TimerMemoryLeakTest
{
    public partial class Program
    {
        int i = 0;
        int direction = 1;

        // RunOnce: only one Tick can ever be pending, so the event queue cannot grow.
        GT.Timer timer = new GT.Timer(50, GT.Timer.BehaviorType.RunOnce);

        void ProgramStarted()
        {
            multicolorLed.GreenBlueSwapped = true;
            timer.Tick += new GT.Timer.TickEventHandler(timer_Tick);
            timer.Start();
        }

        void timer_Tick(GT.Timer timer)
        {
            if (i >= 255)
            {
                direction = -5;
            }
            else if (i <= 0)
            {
                Debug.Print("Free memory: " + Debug.GC(true));
                direction = 5;
            }
            i += direction;
            multicolorLed.SetBlueIntensity(i); // do the per-tick work (fade the LED)

            // Restart only after the work is done - the next tick can't queue up behind this one.
            timer.Start();
        }
    }
}
```


When you say the interval value should be a function of the memory - don’t all of those events get shoved into memory (in the queue)? So really, both are a limitation, correct?

I definitely understand that if you avoid filling the queue, neither one is a factor, but depending on both factors, you could run into similar issues.


I didn’t think about setting the timer to a run once instance and having it restart. That’s a good optimization trick. I know for longer running loops, in older code, we could stop the timer at the beginning of the tick event, run the code and then restart it at the bottom. You don’t even have to fool with it this way as it will only tick once, you do your business, then just restart it. Basically the same thing, but much less messy.
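For comparison, here's a rough sketch of the two patterns side by side. This is a minimal sketch assuming the GT.Timer API from the code above; DoWork is a placeholder for whatever the tick handler actually does, not something from the original code:

```csharp
// Old pattern: a periodic timer, manually stopped and restarted
// around the work so ticks can't pile up while the handler runs.
GT.Timer periodic = new GT.Timer(50); // fires every 50 ms
void periodic_Tick(GT.Timer t)
{
    t.Stop();   // guard against ticks queuing up behind a slow handler
    DoWork();   // placeholder for the per-tick work
    t.Start();  // resume the periodic schedule
}

// RunOnce pattern: the timer fires a single tick and stops itself,
// so there's nothing to guard - just restart when the work is done.
GT.Timer once = new GT.Timer(50, GT.Timer.BehaviorType.RunOnce);
void once_Tick(GT.Timer t)
{
    DoWork();   // placeholder for the per-tick work
    t.Start();  // schedule the next tick only now
}
```

Both guarantee at most one pending Tick event at a time; the RunOnce version just gets that guarantee from the timer itself instead of from stop/start bookkeeping.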

Still trying to get my head around the differences between desktop/mobile and embedded programming. Haven’t done much embedded stuff since 2001, so taking a bit to get back up to speed :wink:

I said it should not be a function of memory. :slight_smile:

Yes, events do get pushed into the queue. But if you keep up with the events, then the queue should not be growing. Whether you keep up depends on your application and the processor speed. If the size of memory matters at all, it's only that a bigger memory allows more time to pass before the crash.

Fundamentally, if your queue is growing because of a periodic event like a timer, then you're doing your timer wrong. No matter how much memory you have, you'll fill it eventually and never catch up. If the event is "user" driven, there's a chance of breathing room to catch up eventually, so you may be OK; but in cases like this, where you're doing it to yourself, you need a better approach like the one Gralin has given you.
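To put rough numbers on it: with a periodic timer, the backlog grows linearly whenever the handler takes longer than the interval. The 50 ms interval comes from the code above; the 80 ms handler time is a made-up figure for illustration:

```csharp
// Hypothetical numbers: 50 ms tick interval, 80 ms of work per tick.
int intervalMs = 50;  // timer period
int handlerMs  = 80;  // time spent inside the Tick handler (assumed)

// Each tick adds (80 - 50) = 30 ms of backlog, so after N ticks there
// are roughly N * 30 / 50 pending Tick events sitting in the queue,
// each holding memory. More RAM just delays the crash; it never
// prevents it.
int backlogPerTickMs = handlerMs - intervalMs;

// A RunOnce timer restarted at the end of the handler makes the
// effective period (handlerMs + intervalMs) instead: slower ticks,
// but never more than one pending event.
```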

Yeah, sorry, I’m on antibiotics - you indeed say that. And your latter point was my latter point - the two are related, but avoiding it is the best tactic!

I guess what I didn’t realize is that the queue was indeed growing. I was assuming that it was processing the SetIntensity function as fast as I could throw them at it and had a memory leak, not a queue overrun (which in essence was a memory “leak” as it ate up more and more until it was out).

Remembering that you have to do things slightly differently than you would on a 24-core, 64-bit server with 256GB of RAM (which was the core of one of the projects I've been working on for the past 10 years) - they're a little more tolerant of things like that :wink:

Definitely appreciate the advice from all - will correct my code on my blog and teach that pattern when I give talks from now on.