Memory Management

Hi,

I’m looking for a way to monitor memory usage on a G30 using NETMF 4.3 in debug mode.

Debug.EnableGCMessages(true);
Debug.GC(false);

does not print anything.

Also, when I initialize a byte array of 2497 bytes at a certain part of my program, I get the message “Failed allocation for 210 blocks, 2520 bytes”.

There is no exception thrown and a valid reference to the array is returned. Any thoughts?

After this I get many similar messages, and eventually an OutOfMemoryException is thrown at varying places in my program.

I mostly just want to be able to understand my memory usage so I can tackle the parts that consume the most.

Debug.Print(Debug.GC(false).ToString());
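
If you want to see where the memory goes, you can wrap that in a little helper and call it at checkpoints. A minimal sketch (the LogFreeMem name is just for illustration):

// Prints the free managed heap, tagged with a label.
// Debug.GC(false) returns the free byte count without
// forcing a collection.
private static void LogFreeMem(string label)
{
    Debug.Print(label + ": " + Debug.GC(false).ToString() + " bytes free");
}

Then call LogFreeMem("before parse"), LogFreeMem("after parse"), and so on, to find the most expensive stages.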

“Failed allocation for xxx blocks, xxxx bytes” means you tried to create an object that is larger than the largest contiguous block of free RAM, due to memory fragmentation.
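
Incidentally, the numbers in that message line up with the NETMF heap’s 12-byte block granularity: 210 blocks × 12 bytes = 2520 bytes, i.e. your 2497-byte array rounded up to whole blocks plus roughly one block of overhead for the array header (the header part is my inference from the message, not something I have confirmed in the docs).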

For me, I pretty much use it like the yellow warning light on your car’s fuel gauge.

Forcing a GC before the allocation might help.
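
In code, that would look something like this sketch:

// Force a full collection right before the large allocation so any
// unreferenced objects are reclaimed first. Whether the heap also
// gets compacted is up to the runtime.
Debug.GC(true);
byte[] buffer = new byte[2497];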


Oh wow, I forgot to check it for a ToString. Thanks!

I still don’t see why I get that message and not an exception as well. Does it run a GC and then try again?

I also can’t seem to catch the actual OutOfMemoryException. Is it because it is a first-chance exception?

Is this accurate? It is reporting about 20 kB free and then failing to allocate about 2.5 kB.

It’s not out of RAM, but it doesn’t have a contiguous free block of RAM big enough for the object.

Interesting. I have a bunch of little objects that are encapsulated by slightly bigger objects, which are in turn encapsulated by one big object. So maybe I will try to make it just a bunch of little objects.

Hmm. This doesn’t seem to make much of a difference. But that makes sense, right? The large objects just hold references to the smaller objects, so there wouldn’t be a need for enough contiguous memory to store everything in one place, correct?
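
If I understand correctly, each individual array still needs its own contiguous run of blocks, so it’s only the single large allocations that fragmentation can break. A sketch of the contrast (sizes made up for illustration):

// One big allocation: needs ~2500 contiguous bytes and can fail on
// a fragmented heap even when plenty of total RAM is free.
byte[] big = new byte[2500];

// Chunked: each row is its own small allocation, so only ~100
// contiguous bytes are needed at a time.
byte[][] chunks = new byte[25][];
for (int i = 0; i < chunks.Length; i++)
{
    chunks[i] = new byte[100];
}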

My plan is to study as best I can how memory works in this system and try to find a better way to allocate everything. This would work, I believe, in the event that my issue is memory fragmentation.

I will also try to determine the total size of my objects, assuming no fragmentation, to see whether my data structure is simply too large for my embedded system.

Just a couple questions if anyone has any insight to help me:

  1. When I print Debug.GC(false), do I see a number that is affected by fragmentation? Or is the value the total number of free bytes, even if they are so spread out that I won’t find much use for them?

  2. Is there a better reference for me to study memory within NETMF 4.3 than the full .NET documentation?

  1. It shows free RAM regardless of fragmentation. Try running Debug.GC(true) to force a collection of unused objects, which can help with the fragmentation (see the snippet after this list).

  2. As you have no doubt learnt already, NETMF references are few and far between and slowly disappearing :(
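
For the snippet mentioned above, something like this (just a sketch):

// Debug.GC(true) forces a collection and returns the number of
// free bytes afterwards.
uint freeAfterGC = Debug.GC(true);
Debug.Print("Free after GC: " + freeAfterGC.ToString() + " bytes");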

Thanks, Justin.

I do force the GC before every single allocation and I’m not seeing improvement. It could be because this portion of my program is the only portion that allocates memory after startup (all other objects are known about beforehand and never die).

It’s possible that my objects are simply bigger than I think they are. I’m just unsure because, in the GC printout, same-sized objects appear to vary by as much as 30%.
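
One possible explanation I’ve come across (I haven’t verified it against the CLR source) is that the NETMF managed heap is organized in 12-byte blocks and each field of an object occupies a full block regardless of its declared type. If that’s right, a class with five fields would occupy roughly 12 × (5 + 1) = 72 bytes including its header, no matter how few bytes those fields take in the serialized stream, and rounding to whole blocks would also make similar objects report different sizes.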

It may be necessary to move these data structures to the PC and add more detail to my wireless commands to achieve the correct results. This would be a massive overhaul for me and deserves some analysis, so I am going to follow this up with a description of my data structure and some pseudocode showing how I allocate and fill the structure.

My hope is that if I provide more detail then it may be clear to someone how I am misusing the memory allocation system.

Or perhaps I am not misusing it, and someone could see why my objects are so much larger than the serial stream I use to build them. In that case, putting more strain on the wireless system, by sending the needed result over wireless instead of a pointer to where to start traversing the data structure, would be justified as the only feasible path to success.

Okay, so as I said I’m going to put down more information on my Memory Management problem.

The goal:

  1. Take into a byte array a serialized version of some data
  2. Deserialize the array and reconstruct the fleshed out data structure
  3. Use the data structure in processing future events

The problem:

When the serialized data is > ~2 kB, the reconstruction of the data structure results in OutOfMemoryExceptions, even though there is a reported 20 kB of free memory AFTER the serialized array is already established in memory.

This is the way I will state the problem. I understand there are other ways to explore in order to reach my actual goal as prescribed by my job, but I feel a strong need to understand what is happening in my current implementation. Why does deserializing this data cost so much memory?

I will give:

  1. A diagram of the data structure.
  2. An example of what a fleshed-out data structure might look like at the end of the process, drawn with each array element as a child of the object that holds the array reference.
  3. Code representing my implementation

The data structure consists of 7 classes. A class may contain primitives, an array of primitives, a reference to an object of another class, or a reference to an array of objects of another class.

Below is a diagram of the data structure.

Below is an example of how the structure might flesh out at runtime:

Finally, a code example of how the structure is deserialized and reconstructed is below.

// Assume that RawData is a filled byte array containing the serialized data

// reference to the top level object
private static A objectA;

// Calling Code
objectA = CreateStructure();

// Deserializes RawData and returns reference to the top
// level of the data structure
private static A CreateStructure()
{
    A returnObj = new A();

    ushort NumberOf_B_Objects = GetUShortFromRawData();
    returnObj.ArrayOf_B_Objects = new B[NumberOf_B_Objects];

    for (int i = 0; i < returnObj.ArrayOf_B_Objects.Length; i++)
    {
        returnObj.ArrayOf_B_Objects[i] = Create_B_Object();
    }

    ushort NumberOf_G_Objects = GetUShortFromRawData();
    returnObj.ArrayOf_G_Objects = new G[NumberOf_G_Objects];

    for (int i = 0; i < returnObj.ArrayOf_G_Objects.Length; i++)
    {
        returnObj.ArrayOf_G_Objects[i] = Create_G_Object();
    }

    return returnObj;
}

// Create and return a B object
private static B Create_B_Object()
{
    B returnObj = new B();

    // example of one primitive datum
    returnObj.data = Get_B_Data_FromRawData();

    ushort NumberOf_C_Objects = GetUShortFromRawData();
    returnObj.ArrayOf_C_Objects = new C[NumberOf_C_Objects];

    for (int i = 0; i < returnObj.ArrayOf_C_Objects.Length; i++)
    {
        returnObj.ArrayOf_C_Objects[i] = Create_C_Object();
    }

    return returnObj;
}

// Create and return a C Object
private static C Create_C_Object()
{
    C returnObj = new C();

    returnObj.Data = GetDataFromRawData();

    bool Has_D_Obj = GetBoolFromRawData();

    if (Has_D_Obj)
    {
        returnObj.RefTo_D_Obj = Create_D_Object();
    }

    return returnObj;
}

I will stop there. I hope it is clear that the remaining functions are very similar to the ones above.

I would greatly appreciate anyone who takes the time to understand this and can offer insight into how I should allocate memory differently, or how I can confirm that the structure is simply much larger than the serialized data.
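
One experiment I plan to try in the meantime: measure the per-instance cost empirically by checking free memory around a batch of allocations. A sketch (C is the class from the code above; the arithmetic assumes nothing else allocates in between):

// Rough cost of an empty C instance, measured as the drop in free
// memory across 100 allocations. Forcing a GC first reduces noise
// from garbage waiting to be collected. The sample array itself
// adds a few bytes of overhead to the estimate.
private static void Measure_C_Cost()
{
    uint before = Debug.GC(true);

    C[] samples = new C[100];
    for (int i = 0; i < samples.Length; i++)
    {
        samples[i] = new C();
    }

    uint after = Debug.GC(true);
    Debug.Print("~" + ((before - after) / 100).ToString() + " bytes per empty C");
}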

Thanks.