Hydra serial memory leak issue?

I have a basic data logger system that I originally had running on a Fez Rhino, where it would log successfully for hours. It reads incoming data on a serial port, arriving at 100Hz in 80-byte packets, and stores every 5th packet to an SD card and transmits it out another serial port at 20Hz, with some analog sampling going on at the same time.

I’m using the same code now on the Hydra, with the only difference being that I’m using the relevant Gadgeteer classes for opening the serial ports etc.

What I’ve noticed is that even when I pare the data logging code right down to simply opening a single serial port and reading the 100Hz input, without logging to the SD card and without transmitting out the other serial port, I see a memory leak and get an out of memory exception after running for a while.

So in ProgramStarted I create and open the serial port and create a thread to perform the data logging:


_rockwellComPort = new Serial(GetSocket(6), 115200, Serial.SerialParity.None, Serial.SerialStopBits.One, 8, Serial.HardwareFlowControl.NotRequired, null);
_rockwellComPort.Open();

_loggerThread = new Thread(new ThreadStart(StartLogging));
_loggerThread.Start();

The code for the data logging thread simply sits in a loop reading from the serial port; for now it doesn’t write anything to the SD card and doesn’t transmit anything out another serial port:


void StartLogging()
{
	byte[] buf = new byte[BUFFER_SIZE];

	Debug.Print(Debug.GC(true).ToString());

	while (true)
	{
		_rockwellComPort.Read(buf, 0, 1);
		if (buf[0] == SYNC_BYTE)
		{
			_rockwellComPort.Read(buf, 1, 6);
			if (buf[5] == INS_PACKET_TYPE)
			{
				int cDataBytes = buf[6];
				// Now read rest of packet including 16bit CRC
				int bytesLeft = cDataBytes + 2;
				int totalPacketSize = HEADER_SIZE + bytesLeft;

				if (totalPacketSize < BUFFER_SIZE)
				{
					while (bytesLeft > 0)
					{
						int bytesRead = _rockwellComPort.Read(buf, totalPacketSize - bytesLeft, bytesLeft);
						bytesLeft -= bytesRead;
					}
				}
				else
				{
					Debug.Print("Bogus size");
				}
			}

			// Reset packet
			for (int i = 0; i < buf.Length; i++)
				buf[i] = 0;
		}
	}
}

As you can see, a single buffer is created at the start of the thread routine and reused for each incoming packet, i.e. I’m not allocating a new buffer per packet.

The GC seems to kick in roughly once a minute, and looking at the GC report it’s BINARY_BLOB_HEAD that keeps increasing: it starts off at 378,804 bytes at startup and grows by just under 300KB each minute.


GC: 2msec 446076 bytes used, 5845056 bytes available
Type 0F (STRING              ):   1680 bytes
Type 11 (CLASS               ):  17532 bytes
Type 12 (VALUETYPE           ):   1512 bytes
Type 13 (SZARRAY             ):   5880 bytes
  Type 01 (BOOLEAN             ):     36 bytes
  Type 03 (U1                  ):    432 bytes
  Type 04 (CHAR                ):    780 bytes
  Type 06 (U2                  ):    576 bytes
  Type 07 (I4                  ):   1044 bytes
  Type 08 (U4                  ):    108 bytes
  Type 0C (R8                  ):    360 bytes
  Type 0F (STRING              ):     60 bytes
  Type 11 (CLASS               ):   2400 bytes
  Type 12 (VALUETYPE           ):     84 bytes
Type 15 (FREEBLOCK           ): 5845056 bytes
Type 16 (CACHEDBLOCK         ):     72 bytes
Type 17 (ASSEMBLY            ):  31260 bytes
Type 18 (WEAKCLASS           ):     96 bytes
Type 19 (REFLECTION          ):    168 bytes
Type 1B (DELEGATE_HEAD       ):    720 bytes
Type 1D (OBJECT_TO_EVENT     ):    576 bytes
Type 1E (BINARY_BLOB_HEAD    ): 378804 bytes
Type 1F (THREAD              ):   1536 bytes
Type 20 (SUBTHREAD           ):    192 bytes
Type 21 (STACK_FRAME         ):   1104 bytes
Type 22 (TIMER_HEAD          ):     72 bytes
Type 27 (FINALIZER_HEAD      ):    504 bytes
Type 31 (IO_PORT             ):    612 bytes
Type 34 (APPDOMAIN_HEAD      ):     72 bytes
Type 36 (APPDOMAIN_ASSEMBLY  ):   3684 bytes


GC: 40msec 696732 bytes used, 5594400 bytes available
Type 0F (STRING              ):   1512 bytes
Type 11 (CLASS               ):  17172 bytes
Type 12 (VALUETYPE           ):   1512 bytes
Type 13 (SZARRAY             ):   5880 bytes
  Type 01 (BOOLEAN             ):     36 bytes
  Type 03 (U1                  ):    432 bytes
  Type 04 (CHAR                ):    780 bytes
  Type 06 (U2                  ):    576 bytes
  Type 07 (I4                  ):   1044 bytes
  Type 08 (U4                  ):    108 bytes
  Type 0C (R8                  ):    360 bytes
  Type 0F (STRING              ):     60 bytes
  Type 11 (CLASS               ):   2400 bytes
  Type 12 (VALUETYPE           ):     84 bytes
Type 15 (FREEBLOCK           ): 5594400 bytes
Type 16 (CACHEDBLOCK         ):     48 bytes
Type 17 (ASSEMBLY            ):  31260 bytes
Type 18 (WEAKCLASS           ):     96 bytes
Type 19 (REFLECTION          ):    168 bytes
Type 1B (DELEGATE_HEAD       ):    684 bytes
Type 1D (OBJECT_TO_EVENT     ):    576 bytes
Type 1E (BINARY_BLOB_HEAD    ): 628560 bytes
Type 1F (THREAD              ):   1920 bytes
Type 20 (SUBTHREAD           ):    192 bytes
Type 21 (STACK_FRAME         ):   2124 bytes
Type 22 (TIMER_HEAD          ):     72 bytes
Type 23 (LOCK_HEAD           ):     60 bytes
Type 24 (LOCK_OWNER_HEAD     ):     24 bytes
Type 26 (WAIT_FOR_OBJECT_HEAD):     48 bytes
Type 27 (FINALIZER_HEAD      ):    456 bytes
Type 31 (IO_PORT             ):    612 bytes
Type 34 (APPDOMAIN_HEAD      ):     72 bytes
Type 36 (APPDOMAIN_ASSEMBLY  ):   3684 bytes

As I mentioned, this same code runs fine for hours on the Fez Rhino, which has a whole lot less memory, and never hits an out of memory exception.

When this happens it is usually because you did not read all data from the serial drivers.

Try to simplify your code and monitor the GC output.

But surely the serial drivers have a smallish fixed-size buffer, so even if my code can’t keep up with the incoming data the driver buffer won’t grow unbounded until the device runs out of memory?

Instead my code should just miss some of the incoming serial data.

Also, is there any difference in how the serial driver buffer is handled between the Fez Rhino and the Fez Hydra? As I mentioned, this is the exact same code that ran on the Fez Rhino without any issues.

I tried calling Serial.DiscardInBuffer() after every 5th packet I detected, thinking it might free up the serial driver buffer, but it made no difference.


_rockwellComPort.DiscardInBuffer();

I’m not using a DataReceived event handler; as I did on the Rhino, I sit in a loop simply reading from the serial port. It’s not the case that the serial driver is queuing up data waiting for a DataReceived handler to consume it, is it?

Those devices can’t be compared as you have 4.1 on one and 4.2 on the other.

So do the serial drivers on the 4.2 Hydra have unbounded buffers, as opposed to, say, a 4KB or 8KB circular buffer?

There are buffers at multiple layers.

Simplifying the code a bit will either fix the problem or let you show us where the problem is.

Well I thought the code was really simple already :wink:

It sits in a loop reading a byte from the serial port until it sees the sync byte (0x5), which marks the start of a packet from the AHRS unit, then reads the next 6 bytes and checks the packet type. Those 6 bytes include the packet type and a single byte specifying the packet’s data length, which is fixed for this packet type.

It then reads the remaining bytes making up the packet.

So I’m not sure what sort of simplification you’re looking for?

Thanks

4.1 blocked till data was received. 4.2 returns right away like big .NET. This may be why.

So a sleep in your thread may be needed.

Which component are you referring to? Serial.Read() still blocks until data arrives or until Serial.ReadTimeout is exceeded, right?

To give the GC more of a chance to run? The GC seems to run roughly once a minute without any sleeps in my data logging thread.

I did try a Thread.Sleep(20) after every 5 packets and it didn’t seem to make any difference.

Thanks

Serial.Read() will block until the first data arrives, but can return before all the requested data has arrived. For example, if the source sends 10 bytes and you ask for 10 bytes, the read will not necessarily block until all 10 have arrived; it might return after receiving only the first 3, and you will need to call Read() again, potentially multiple times, to receive the full 10 bytes.

Here is a post where I had a similar issue with the serial comms code used to read from the DL40; it contains the code I used to resolve the problem.
http://www.tinyclr.com/forum/topic?id=9195
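
For illustration, a minimal sketch of that kind of loop (ReadExact is just my own name, not a framework method; written here against System.IO.Ports.SerialPort, but the same idea applies to the Gadgeteer Serial wrapper):


// Keep calling Read() until 'count' bytes have landed in buf, because a single
// Read() may legitimately return fewer bytes than requested.
int ReadExact(System.IO.Ports.SerialPort port, byte[] buf, int offset, int count)
{
	int total = 0;
	while (total < count)
	{
		int read = port.Read(buf, offset + total, count - total);
		if (read <= 0)
			break;		// timed out or nothing more coming - return what we have
		total += read;
	}
	return total;
}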

Hi taylorza from Cape Town ZA.

I do already handle Serial.Read() returning fewer bytes than I requested; see the original code above or the relevant snippet below. It’s still not clear to me why this would cause a memory leak.


while (bytesLeft > 0)
{
	int bytesRead = _rockwellComPort.Read(buf, totalPacketSize - bytesLeft, bytesLeft);
	bytesLeft -= bytesRead;
}

And to clarify the difference between 4.1 and 4.2: is it the case that in 4.1 Serial.Read(cBytes) would only return once all cBytes had been received (unless the read timeout was exceeded, in which case an exception was thrown), whereas in 4.2 Serial.Read(cBytes) can return anywhere from 1 to cBytes bytes before the read timeout expires?

Thanks

I think I’ve narrowed down the reason for the memory leak. I took a look at the source code for the Gadgeteer.Interfaces.Serial class -

http://gadgeteer.codeplex.com/SourceControl/changeset/view/24955#200062

It’s mostly a simple wrapper over the System.IO.Ports.SerialPort class. However, I noticed that in its ctor it subscribes to the SerialPort’s DataReceived event, and it does this irrespective of whether the code using Gadgeteer.Interfaces.Serial subscribes to the Serial.DataReceived event or not.

So my hunch was that the firing of this event was causing the memory leak, at least in the case where I wasn’t subscribing to the Serial.DataReceived event.
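
To make that hunch concrete, here is a hypothetical sketch of the kind of pattern that would produce this growth - this is NOT the actual GTI.Serial source, just an illustration of what an always-subscribed DataReceived handler could be doing behind your back:


// Hypothetical illustration only - not the real Gadgeteer code.
using System.Collections;
using System.IO.Ports;

class LeakySerialWrapper
{
	private readonly SerialPort _serialPort;
	private readonly ArrayList _internalBuffer = new ArrayList();

	public LeakySerialWrapper(string portName, int baudRate)
	{
		_serialPort = new SerialPort(portName, baudRate);

		// Subscribed unconditionally in the ctor, whether or not the caller
		// ever hooks the wrapper's own DataReceived/LineReceived events.
		_serialPort.DataReceived += new SerialDataReceivedEventHandler(OnDataReceived);
		_serialPort.Open();
	}

	private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
	{
		int available = _serialPort.BytesToRead;
		if (available <= 0)
			return;

		byte[] chunk = new byte[available];		// a fresh allocation on every event at 100Hz
		_serialPort.Read(chunk, 0, available);

		// If whatever the handler produces is only ever consumed via the wrapper's
		// own event path, and the caller reads the port directly instead, it just
		// keeps piling up.
		_internalBuffer.Add(chunk);
	}
}

If something roughly like that is going on, reading through the wrapper’s Read() while never touching its event path would leave those per-event allocations to accumulate.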

So I modified my code to use System.IO.Ports.SerialPort instead of Gadgeteer.Interfaces.Serial but left the rest of the code as is in terms of how it reads from the serial port etc.


//_rockwellComPort = new Serial(GetSocket(6), 115200, Serial.SerialParity.None, Serial.SerialStopBits.One, 8, Serial.HardwareFlowControl.NotRequired, null);
_rockwellComPort = new System.IO.Ports.SerialPort("COM4", 115200, System.IO.Ports.Parity.None, 8, System.IO.Ports.StopBits.One);
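
The rest of the setup stayed the same as before, i.e. along these lines:


_rockwellComPort.Open();

_loggerThread = new Thread(new ThreadStart(StartLogging));
_loggerThread.Start();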

And instead of the GC kicking in once a minute with memory increasing by just under 300KB per minute, the code has now run for 20+ mins without the GC kicking in at all.

So I don’t think the memory leak is related to any serial driver changes between 4.1 and 4.2, but rather to the DataReceived handling in the Serial wrapper class.


Thanks for the details.

What’s a bit weird is the performance difference I saw between using System.IO.Ports.SerialPort and Gadgeteer.Interfaces.Serial.

When using SerialPort directly my code was only detecting every 2nd packet being transmitted on average, whereas when using Serial it was detecting every packet @ 100Hz.

I don’t see anything obvious in the way the Serial class creates and sets up the SerialPort in its ctor that would explain this massive performance difference.

I also made a copy of the Serial class and simply commented out the subscription to the SerialPort.DataReceived event, and sure enough the memory leak disappears with this modified version: it’s been running for 20+ mins without the GC kicking in, and the code is detecting every packet @ 100Hz.


//this._serialPort.DataReceived += new SerialDataReceivedEventHandler(this._serialPort_DataReceived);

@ Sean - Excellent detective work.

I was about to post the same issue, although my implementation is slightly different. I am using the LineReceivedEvent from the GTI.Serial class and have the same issue of running out of memory. I am also getting a Buffer OverFlow message on the Hydra screen and very bad performance. Looks like you have sorted it all out.

I will have to forgo the convenience of the LineReceived event as System.IO.Ports.SerialPort does not support it, but I can cope with that as long as the performance is there.

Some questions. In the line:

_rockwellComPort = new System.IO.Ports.SerialPort("COM4", 115200, System.IO.Ports.Parity.None, 8, System.IO.Ports.StopBits.One);

why do you use "COM4"? Is this setting the name? How does the software know which hardware socket the serial port is on?

Thanks in advance.

Chris

If you take a look at the Hydra page on the GHI Electronics Wiki you’ll see a section titled “Non Gadgeteer serial ports” which lists the mapping between socket numbers and the standard COM port numbers, e.g.

COM4 = Socket 6 -->> On Schematic it is TXD2\RXD2

Hi Chris

COM4 is the logical serial port name. It essentially maps back to the UART definition that identifies the specific processor pins used. What device do you have? You should be able to identify the COMx name to use for your chosen device and socket by looking on the Wiki.

If you look at the source code for GTI.Serial:

http://gadgeteer.codeplex.com/SourceControl/changeset/view/24955#200062

You’ll see that to implement its LineReceivedEvent it essentially spins up a new thread which sits in a while(true) loop reading 1 byte at a time from the SerialPort and firing the event when it recognizes a line.

Meanwhile it has also subscribed to the SerialPort.DataReceived event, even though you aren’t subscribing to Serial.DataReceived, and that subscription seems to be what causes this memory leak.

If you want to stick with GTI.Serial and its LineReceivedEvent, your other option is to do what I did, i.e. make a local copy of the GTI.Serial class in your project, comment out the line in its ctor where it subscribes to the SerialPort.DataReceived event, and carry on using the LineReceivedEvent.
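
Or, if you do go the raw System.IO.Ports.SerialPort route and give up LineReceivedEvent, a rough sketch of reading lines yourself might look something like this (the '\n' delimiter, the UTF8 encoding and the OnLineReceived name are just assumptions on my part - adjust to your protocol):


// Accumulate bytes from the SerialPort and hand off complete lines.
void ReadLines(System.IO.Ports.SerialPort port)
{
	byte[] readBuf = new byte[64];
	string pending = string.Empty;	// string concat keeps the sketch short, not the most GC-friendly

	while (true)
	{
		int read = port.Read(readBuf, 0, readBuf.Length);
		if (read <= 0)
		{
			System.Threading.Thread.Sleep(1);	// nothing available yet, avoid a busy spin
			continue;
		}

		// Copy out exactly what was read and append it to the pending text.
		byte[] exact = new byte[read];
		System.Array.Copy(readBuf, exact, read);
		pending += new string(System.Text.Encoding.UTF8.GetChars(exact));

		// Hand off every complete line found so far.
		int newline;
		while ((newline = pending.IndexOf('\n')) >= 0)
		{
			OnLineReceived(pending.Substring(0, newline));	// your handler - hypothetical name
			pending = pending.Substring(newline + 1);
		}
	}
}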

Cheers