SerialPort (UART) character overruns on 921600 baud (EMX)

Hello Community,

I have a BT module (Bluegiga WT12) sitting on COM3 running at 921600 baud (no flow control, but I expect only small data bursts for now). Characters frequently go missing from the SerialPort input, and SerialError.Overrun fires for each one.

For debugging I’ve disabled my reading thread, so the scheduler isn’t locking anything. The device only sends about 100 bytes, so it shouldn’t overrun the input buffer, yet the character-buffer overrun (according to MSDN) still fires.

What is causing this, and how can I solve it?

Thanks in advance.

In order to answer your questions, we would need to see some code. Can you please post a small program that demonstrates the problem?

That rate might be a bit too fast for the system to handle. Can you try a lower rate?

The EMX UART drivers have a 4 KB buffer, IIRC. If your bursts are small, it may work.

Hmm, the funny thing is that it’s not about the input buffer, which would produce RXOver, wouldn’t it? It also only happens when data comes in bursts of more than a few characters.

I’ve reduced the baud rate to the slowest one that should be acceptable (460800), and there are no errors any more for now… Let’s see how it performs once it gets a steady data stream…

OK, now I’m in real trouble: with a steady data stream at 460800, I constantly get both character-buffer overruns and RXOvers… But I can’t go lower than this. I pick up data from the SerialPort in the biggest possible chunks (!) to write it to SD, but I instantly get a black screen with “Buffer OVFLW” on my EMX.

Is the NETMF SerialPort using GPDMA to pick up data? (Does it make sense to go for RLP to implement a more efficient buffering system, or am I doomed?)

I will try RLP.

Only if you process your data in RLP and pass just the results up to the managed side. What exactly are you trying to do? What is the application?

At this baud rate, to have any hope of keeping up (if it is even possible), the code must be organized in a highly efficient manner.

I have seen no example of the architecture being used to read the data. Without an understanding of how the data is actually being received
and processed, it is very difficult to determine a solution.

The OP has not taken the time to provide any profiling information either, so it is really difficult to judge the proper context.

I didn’t publish the source because it’s quite a lot of code. It could definitely be optimized by sacrificing architecture, but not in the context of the full program.

Basically, it goes like this:

var thread = new Thread(Reader) { Priority = ThreadPriority.High };
thread.Start();

void Reader()
{
    while (true)
    {
        // Wait until at least one full packet is buffered.
        while (serial.BytesToRead < PACKET_SIZE)
            ;

        // Pull the largest whole number of packets currently in the buffer.
        var data = serial.Read((serial.BytesToRead / PACKET_SIZE) * PACKET_SIZE);
        int lastPacketOffset = data.Length - PACKET_SIZE;

        DoDataAnalysis(data, lastPacketOffset); // Does simple syncword and flag checks on ONLY THE LAST packet
    }
}

static byte[] Read(this SerialPort port, int length)
{
    var buf = new byte[length];
    port.Read(buf, 0, length);
    return buf;
}

It’s very simplified, but it fits the principle. Note that it always pulls as many packets as are in the buffer but processes only the last one, which should increase throughput under a high backlog.
I know that re-creating buf each time is not as efficient as reusing it, but in theory, with plenty of RAM, it should only slow down once the GC kicks in, and at this bit rate it would take over a minute for RAM to fill up. That can’t be the only reason, considering the system crashes instantly.

Getting into trouble with such a baud rate on a 74 MHz ARM7 is such a fail! The data stream is actually sent by a little 8-bit AVR, which also has time to communicate over I2C with multiple chips and calculate CRCs!

I suggest you separate the serial reading from the processing into two threads with a queue between them. Get the data out of the serial buffers as fast as possible. Pre-allocate the queue elements.
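Something like the following sketch of the two-thread/queue idea, perhaps. This is hypothetical, not the OP's code: the serial read is simulated (in the real thing, the marked line would be `serial.Read(buf, 0, PACKET_SIZE)`), and names like `PumpDemo`, `PACKET_SIZE`, and `POOL_SIZE` are made up. The non-generic `Queue` is used because early .NETMF has no generics.

```csharp
using System;
using System.Collections;
using System.Threading;

public class PumpDemo
{
    const int PACKET_SIZE = 100;              // assumed packet size
    const int POOL_SIZE = 16;                 // buffers allocated up front

    static readonly Queue Full = new Queue(); // filled buffers -> worker
    static readonly Queue Free = new Queue(); // empty buffers  -> reader
    static int _processed;

    public static int Run(int packets)
    {
        _processed = 0;
        for (int i = 0; i < POOL_SIZE; i++)
            Free.Enqueue(new byte[PACKET_SIZE]);   // allocate once, reuse forever

        var reader = new Thread(delegate() { Reader(packets); })
                         { Priority = ThreadPriority.Highest };
        var worker = new Thread(delegate() { Worker(packets); });
        reader.Start();
        worker.Start();
        reader.Join();
        worker.Join();
        return _processed;
    }

    // Stands in for the tight loop that drains the UART.
    static void Reader(int packets)
    {
        for (int n = 0; n < packets; n++)
        {
            byte[] buf;
            lock (Free)
            {
                while (Free.Count == 0) Monitor.Wait(Free);
                buf = (byte[])Free.Dequeue();
            }
            buf[0] = (byte)n;   // real code: serial.Read(buf, 0, PACKET_SIZE);
            lock (Full)
            {
                Full.Enqueue(buf);
                Monitor.Pulse(Full);
            }
        }
    }

    // Processing thread: does the slow work, then recycles the buffer.
    static void Worker(int packets)
    {
        for (int n = 0; n < packets; n++)
        {
            byte[] buf;
            lock (Full)
            {
                while (Full.Count == 0) Monitor.Wait(Full);
                buf = (byte[])Full.Dequeue();
            }
            _processed++;       // DoDataAnalysis(buf, 0) would go here
            lock (Free)
            {
                Free.Enqueue(buf);
                Monitor.Pulse(Free);
            }
        }
    }

    public static void Main()
    {
        Console.WriteLine(Run(64)); // 64
    }
}
```

The point of the free/full buffer pool is that the reader thread never blocks on `new` or GC while the UART is filling up; the worker absorbs the slow processing at its own pace.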

Pre-allocate the buffer used for receiving. A new operation and the later GC are expensive. The GC may be what is causing your problems.
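For instance, something along these lines: one buffer allocated once and reused for every read, instead of a fresh `byte[]` per call. This is a sketch, not the OP's code; a `MemoryStream` stands in for the port here (its `Read(buffer, offset, count)` has the same shape as `SerialPort.Read`), and `PreallocDemo`/`CountPackets` are invented names.

```csharp
using System;
using System.IO;

public class PreallocDemo
{
    const int PACKET_SIZE = 100;   // assumed packet size

    // Counts whole packets, reading every one into the same buffer.
    public static int CountPackets(Stream port)
    {
        var buf = new byte[PACKET_SIZE];     // allocated once, outside the loop
        int packets = 0;
        while (ReadFully(port, buf) == PACKET_SIZE)
            packets++;                       // DoDataAnalysis(buf, 0) would go here
        return packets;
    }

    // Read can return fewer bytes than requested; loop until the
    // buffer is full or the stream is exhausted.
    static int ReadFully(Stream port, byte[] buf)
    {
        int offset = 0;
        while (offset < buf.Length)
        {
            int n = port.Read(buf, offset, buf.Length - offset);
            if (n <= 0) break;
            offset += n;
        }
        return offset;
    }

    public static void Main()
    {
        // A MemoryStream stands in for the UART: 5 packets worth of data.
        var port = new MemoryStream(new byte[5 * PACKET_SIZE]);
        Console.WriteLine(CountPackets(port)); // 5
    }
}
```

Note the read loop as well: `SerialPort.Read` is allowed to return fewer bytes than asked for, so reading until the buffer is full avoids processing half-filled packets.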


Try Debug.EnableGCMessages(true) to see garbage collection in your debug output. You might be surprised. I certainly was on my first .NETMF project when I saw how much GC was going on and how much more efficient pre-allocating my buffers was.