I am quite new to NETMF, and I would like to know whether execution of the GC can somehow interrupt an RS232 transmission.
Normally, when a message is sent over RS232 (in an interrupt-driven way, which I hope is the case in NETMF), one byte follows the next until the whole message is sent, without any gap between characters (bytes).
Is there any risk in NETMF (e.g. during GC execution) that Tx is interrupted significantly enough that the transmission stream is not continuous?
Thanks for any reply.
Execution of managed code is suspended while the GC is running.
The length of the disruption depends on how long the GC takes to run: create and free lots of objects, and GC time grows.
@ mhstr -
Sure. To avoid that, you should control the GC manually.
So let's say we would like to send a message of, e.g., 100 bytes. Transmission starts, 10 bytes are sent, and then the GC starts its activity, which takes ca. 200 ms.
Does that mean that for 200 ms no byte is sent, and the remaining 90 bytes are sent after 200 ms (i.e., with a 200 ms gap)?
"You should control GC manually"
Is it possible to disable execution of GC manually for a while or there is only possible to start GC manually?
I am asking because some proprietary RS232 protocols(and I have to communicate through one of this) are also limited by timeouts - e.g. if there is no more bytes(from the point of last received byte) longer then X msec. then reception is regarded as finished.
So do you think is it possible(it should be) to communicate with such a devices?
@ mhstr -
I am not sure, but if that were my case, I would try to call the GC right before sending those bytes. It depends on what device you are using, but a few hundred bytes or a few KB (except on the Cerberus) are not large enough to make the system trigger a GC in the middle.
Also call Debug.EnableGCMessages(false) to disable the GC debug output; that helps the GC run faster.
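A minimal sketch of that idea, assuming NETMF's Microsoft.SPOT.Debug class and System.IO.Ports.SerialPort (the SafeSend helper name is illustrative, not a NETMF API):

```csharp
using System.IO.Ports;
using Microsoft.SPOT;

// Hypothetical helper: force a collection just before a time-critical send,
// so the GC is unlikely to fire mid-transmission.
public static class SafeSend
{
    public static void Send(SerialPort port, byte[] message)
    {
        Debug.EnableGCMessages(false); // suppress GC debug output (faster collections)
        Debug.GC(true);                // force a full collection now, at a "good" time
        port.Write(message, 0, message.Length);
    }
}
```

This does not disable the GC - NETMF has no API for that - it only makes a collection during the following Write unlikely by emptying the garbage beforehand.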
I personally don’t think GC will be a problem.
If it takes 200 ms to GC on the Raptor, then you have allocated a large number of objects and/or have been allocating and releasing many, many objects.
NETMF is not real-time, so deterministic requirements such as yours are going to be difficult, if not impossible, to achieve.
You can reduce GC times by controlling your memory usage and forcing the GC to run at "good" times, but you can never be 100% sure that serial transmission will not be interrupted.
Are you sending the entire message with one serial send?
I am sending an approximately 120-byte-long message from time to time (when some binary input changes state), in one serial send.
In my application I use Queues for communication between tasks. Queues are dynamic entities, so they probably create some work for the GC. But on the other hand, this type of inter-task communication should be very safe, which is why I am using Queues.
I have tried forcing the GC to run every ca. 500 ms, and during this period each collection takes, according to the debug output, about 3 ms, so it should be OK (I am guessing an even longer period between GC calls would also be fine here - maybe several seconds or so).
But is it safe to call the GC that frequently - are there any side effects?
Forcing the GC isn’t a problem. All you’re doing is asking it to do something it was going to do later anyway, but on your demand.
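The periodic forcing described above could be sketched like this (assuming NETMF's System.Threading.Timer and Microsoft.SPOT.Debug; the 500 ms period is the value from the post above):

```csharp
using System.Threading;
using Microsoft.SPOT;

public class PeriodicGc
{
    private readonly Timer _timer;

    public PeriodicGc(int periodMs)
    {
        // Force a collection every periodMs so individual collections stay short
        // (~3 ms here) instead of one long pause at an unpredictable moment.
        _timer = new Timer(ForceGc, null, periodMs, periodMs);
    }

    private static void ForceGc(object state)
    {
        Debug.GC(true);
    }
}

// usage: keep a reference so the timer is not collected
// var gcTimer = new PeriodicGc(500);
```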
Queues: you could potentially get leaner memory usage if you chose to optimise for re-use of objects rather than simply freeing them. It’s a tradeoff, though; only you can decide where to put your effort.
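One way to apply that re-use advice, sketched under the assumption that the ~120-byte messages have a known maximum size (the MessageBuffer class, the 128-byte cap, and the framing byte are all illustrative): keep one pre-allocated buffer and fill it in place instead of allocating a new byte[] for every message, so each send produces no new garbage for the GC.

```csharp
public class MessageBuffer
{
    // One buffer allocated up front; re-used for every message.
    private readonly byte[] _buffer = new byte[128]; // max message size, illustrative

    public byte[] Buffer { get { return _buffer; } }

    // Fill the buffer in place and return the number of bytes to send.
    public int Build(byte inputState)
    {
        _buffer[0] = 0x02;       // example framing byte (hypothetical protocol)
        _buffer[1] = inputState;
        // ... fill the rest of the payload here ...
        return 2;
    }
}
```

The same idea applies to the queued items themselves: dequeue, refill, and re-enqueue a fixed pool of message objects rather than allocating fresh ones.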