Size of an elf (non-North-Pole kind)


I’m getting “Failed allocation for 1234 blocks, 14808 bytes” printed to the debug console when loading an elf file from a resource. Commenting out a single method makes it go away. I only have about 150 LOC in the compiled file, yet the elf file is 15K. Is this to be expected, or am I doing something wrong?

Is there any way to load it as a stream, or split it into multiple elf files, or am I already at the limit of the whole device when this occurs? If I can load multiple elf files, can they interact with each other if they share header files, or would they have to go through managed code to interact?


Looks like you need a FEZ Cobra soon :slight_smile:

What we usually do is load the elf file at the very beginning of program execution and then release the buffer. Also wrap it all with a forced GC to compact the heap:


Debug.GC(true); // compact the heap
byte[] elf = …load resource
// use the elf
// discard the elf:
elf = null;
Debug.GC(true); // compact the heap again


LOC is kind of a red herring; you have 150 and are hitting the limit, while I have about 400 and am at around 5 KiB. So it depends on several additional factors, as you probably already suspect.

Watch out for things like, say, floating point, which will easily add several K to your binary once the libraries are linked in. If you are using it, you could scale the numbers and work with integers only. Buffered IO can hit you hard too. Also look out for too much stuff in the heap. The MAP file is your friend.


I’m including the header files for PWM, so maybe that is why. I guess I can just yank out the constants I’m using and plug them into my own source.


Oh, and make sure to compile with -g0 to avoid mucking up your binary with debugging info you’ll never use.


Pulling in the header constants didn’t help at all, so apparently the compiler is smart enough to only pull in what it needs. Using -g0 got me down to 9K, and removing the single constant I had with a .0 on the end got me down to 5K.

Taking out the .0 makes the line below no longer work correctly, though… Any tips for how to do this without doubling the size of the compiled binary? (duty is an int.)

long rate = CYCLE_WIDTH * (duty / 100.0); // works, but links in the float library

long rate = CYCLE_WIDTH * (duty / 100); // broken: duty / 100 is integer division, so it is 0 for any duty below 100



long rate = (CYCLE_WIDTH * ((duty*1000) / 100.0))/1000;

I know this can be done by upscaling the values… I am just too tired to think about whether this is right or wrong :slight_smile:


Yup, that works, but only if you drop the .0; as written it still divides by 100.0 and pulls the float library right back in… :slight_smile:


Late to the party, but what about this:

long rate = (CYCLE_WIDTH * duty) / 100;


Generous inlining will increase the size of your code, as will other factors such as loop unrolling (done by compiler optimizations), so LOC really is a very poor judge of compiled code size. If you aren’t concerned with shaving nanoseconds, and your compiler supports optimizing for speed or size, optimize for size. The code is running on an embedded chip anyway, so it will be fast as long as it is well written. Macros and #defines also increase code size, since the first stage of the compiler substitutes the long form for the shorthand at every use.

Depending on how the Yagarto chain does optimizations, setting the highest optimization level may result in larger code than the un-optimized version, although depending on what you are doing it may still run just as fast. Take care when selecting optimizations, and do a before/after test at each optimization level to find the best performance/code-size trade-off.


If you are trying to optimize for size, gcc (including the Yagarto build) has the following parameter:

-Os -> optimize for size. This enables all optimizations enabled by -O2, except those that tend to increase code size.