Large MemoryStream causes CLR_E_OUT_OF_MEMORY

I am receiving an error while trying to transfer 700 KB over TCP:

It seems that the board runs out of memory. Maybe memory is fragmented and there is not enough space in a single block to hold the memory stream. This happens in a method that updates the board over Ethernet (a TCP connection). App.hex is currently about 691,624 bytes. I read the socket in 1024-byte chunks.

I am using a MemoryStream instead of a FileStream directly because I think it speeds up the process (writing several hundred 1024-byte chunks is faster to a MemoryStream than to a FileStream).

Does anyone have any advice to give?

COM_PC.tcpread_raw(ref cur_raw);
while (ms.Position < req_size)
{
    // braces needed: without them only ms.Write() is inside the loop,
    // so the same buffer is written forever and the next read never happens
    ms.Write(cur_raw, 0, cur_raw.Length);
    COM_PC.tcpread_raw(ref cur_raw);
}

What board is it?

NETMF heap has a limit of about 700KB.

@ leforban - Since you mentioned “App.hex” it sounds like you’re trying to do in-field update. If this is the case, I would pass each packet directly to the SystemUpdate.Load() method so it can be applied immediately. I do not recommend storing the IFU file in an intermediate data structure that might grow and eventually exceed a device’s memory. My application performs IFU over serial instead of TCP, but the idea of consuming the file piece-by-piece is the same.
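A minimal sketch of that piece-by-piece approach, reusing the COM_PC.tcpread_raw helper and req_size from the snippet above; the exact SystemUpdate.Load() signature (here assumed to take the chunk's byte array) should be checked against the GHI documentation:

```csharp
// Sketch only: feed each received chunk straight to SystemUpdate.Load()
// instead of accumulating the whole file in RAM. COM_PC.tcpread_raw and
// req_size come from the earlier snippet; the Load() overload is assumed.
byte[] cur_raw = new byte[1024];
long received = 0;

COM_PC.tcpread_raw(ref cur_raw);
while (received < req_size)
{
    SystemUpdate.Load(cur_raw);      // apply the chunk immediately
    received += cur_raw.Length;
    COM_PC.tcpread_raw(ref cur_raw);
}
```

This way the largest allocation is the 1024-byte chunk itself, so heap fragmentation never becomes an issue.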

Hi everyone.

@ Architect: The board is an EMX-based one.

@ Gus: Yep, I know; that is why I had a look at the custom heap and LargeBuffer, but I still do not know if I am looking in the right direction.

@ andre.m: Yes, dumping the TCP stream in 1024-byte chunks into a MemoryStream is faster than into a FileStream for sizes greater than 100 KB. I receive the TCP stream, put the data into the MemoryStream, and when all the data has been received I copy the MemoryStream into the FileStream. This works well for 200 KB files, but not for App.hex, which is about 700 KB.

@ Iggmoe: Yes, this could be a solution. However, my boss also wants a backup of the last App.hex on SD, in order to launch IFU again from the backup copy in case of ???

My last idea is to check the size of the MemoryStream and, as soon as it becomes large (let's say 100 KB), dump the MemoryStream into the FileStream…
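A minimal sketch of that threshold-flush idea, assuming fs is the already-open FileStream and reusing COM_PC.tcpread_raw and req_size from the snippet above (ToArray() is used for the flush because I am not certain NETMF's MemoryStream exposes WriteTo()):

```csharp
// Sketch: flush the MemoryStream to the FileStream whenever it grows past
// ~100 KB, so the heap never holds more than one threshold-sized buffer.
const int FlushThreshold = 100 * 1024;
byte[] cur_raw = new byte[1024];
long received = 0;
MemoryStream ms = new MemoryStream();

COM_PC.tcpread_raw(ref cur_raw);
while (received < req_size)
{
    ms.Write(cur_raw, 0, cur_raw.Length);
    received += cur_raw.Length;

    if (ms.Length >= FlushThreshold)
    {
        byte[] buffered = ms.ToArray();
        fs.Write(buffered, 0, buffered.Length); // one big sequential write
        ms.SetLength(0);                        // reuse the same stream
        ms.Position = 0;
    }
    COM_PC.tcpread_raw(ref cur_raw);
}
byte[] tail = ms.ToArray();                     // flush the remainder
fs.Write(tail, 0, tail.Length);
fs.Flush();
```

Note that a bytes-received counter replaces the ms.Position test from the earlier snippet, since the position resets on every flush.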

Speed is important because this is the same method used to update or modify the configuration of our application. People modify parameters through XML files that are sent to the board. Currently the XML files are about 200 KB, and the transfer of two files takes several seconds (maybe 4-5 seconds). For IFU it does not really matter whether the transfer takes ten seconds or twenty; it is something that won't happen very often, but I will not create another method just to transfer App.hex. I would prefer to adapt the one I have and be able to receive files larger than 700 KB if needed.

Edit: The method also works over CDC transfer, for which the maximum speed is 64 kB/s. So there is already a speed limitation imposed by the hardware link…

No, there is no acknowledgement sent by the board to confirm that the transfer was successful (and no acknowledgement of a successful write to the SD card); this may be added later.

Why 1024? As I said before, the method is a generic one and may use CDC or TCP for data transfer. I did a lot of experiments, and the best trade-off in terms of reliability and speed was 1024. (On CDC you cannot know how many bytes are available, so if you use a larger buffer and only 64 bytes arrived, the rest of your byte array is filled with 0x00, and you spend more time detecting the padding than handling useful data.)

Why would the MemoryStream save time? Maybe I am wrong, but my feeling was that the FileStream involves overhead on every write, resulting in the following pseudo-inequality:

time_to_do(200 × ms.Write(1 kB)) + time_to_do(1 × fs.Write(200 kB)) < time_to_do(200 × fs.Write(1 kB))
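That pseudo-inequality is easy to measure on a desktop first. A small sketch (file names and sizes are illustrative; on NETMF, Stopwatch would have to be replaced with DateTime ticks):

```csharp
// Desktop C# sketch: compare 200 small MemoryStream writes plus one big
// FileStream flush against 200 small writes straight to the FileStream.
using System;
using System.Diagnostics;
using System.IO;

class WriteBenchmark
{
    static void Main()
    {
        const int chunkSize = 1024;
        const int chunks = 200;                 // ~200 KB, as in the thread
        byte[] chunk = new byte[chunkSize];

        var sw = Stopwatch.StartNew();
        using (var ms = new MemoryStream())
        using (var fs = File.Create("via_ms.bin"))
        {
            for (int i = 0; i < chunks; i++)
                ms.Write(chunk, 0, chunk.Length);
            ms.WriteTo(fs);                     // one large write at the end
        }
        Console.WriteLine("MemoryStream + 1 flush: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        using (var fs = File.Create("via_fs.bin"))
        {
            for (int i = 0; i < chunks; i++)
                fs.Write(chunk, 0, chunk.Length);
        }
        Console.WriteLine("FileStream direct:      " + sw.ElapsedMilliseconds + " ms");
    }
}
```

Whether the inequality actually holds on the EMX will depend on the SD driver's per-write overhead, so the numbers should be taken on the board itself before committing to either design.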

Ok now I understand your point.

I do not have time to do this experiment at the moment, but I will add it to the to-do list. I think it could be useful to have a graph that shows the achieved speed as a function of length.