Panda II SerialPort reads bytes in chunks unless I put a non-blocking breakpoint

This is a totally weird problem.

I have an event handler for SerialPort.DataReceived:

void clientPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    digits.DisplayNumber(12); // simply writes to some output ports to enable a few LEDs, nothing else
    var count = clientPort.BytesToRead;
    while (count > 0)
    {
        var buffer = new byte[count];
        clientPort.Read(buffer, 0, count);
        // ...process/forward the bytes here...
        count = clientPort.BytesToRead; // re-check so the loop can exit
    }
}

If the client sends 97 bytes, no matter what I do, DataReceived is always called twice:

Written 64 bytes to the led
Written 33 bytes to the led

Always 64 bytes followed by 33 bytes. Always!!!

If, however, I add a breakpoint in VS2010 on the digits.DisplayNumber(12) line
and enable the “When Hit” option so that it prints a message to the debug window
but does NOT block/interrupt the application, I ALWAYS get all 97 bytes in one go.
Every single time!

And this is what I want: I want to get them all in one go.
I have tried adding delays (Thread.Sleep(…)) to simulate whatever it is that happens
when that breakpoint is hit, but nothing helps.

I just can’t figure out what is going on.

Please note that 97 bytes is just an example; some messages are only 16 bytes, etc.

Does anybody know why and how a non-blocking breakpoint can influence the serial port,
and how I can reproduce that behavior?

(Note: you can replace digits.DisplayNumber() with any other method call; the result is the same.)

Not a problem… it is working perfectly.

The serial port is a stream device. You should make no assumptions as to how many bytes will be available in the DataReceived event. Doing so will only get you in trouble.

You will have to develop a way to tell the start of your message and its length.
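
As a minimal sketch of what that framing could look like (the length-in-the-second-header-byte layout, the 256-byte buffer, and ProcessMessage are assumptions for illustration, not your actual protocol):

using System;
using System.IO.Ports;

byte[] assembly = new byte[256]; // assumed maximum message size
int assembled = 0;

void clientPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    int available = clientPort.BytesToRead;
    if (available > assembly.Length - assembled)
        available = assembly.Length - assembled; // overflow guard; error handling elided
    assembled += clientPort.Read(assembly, assembled, available);

    // Hypothetical framing: the second header byte holds the total message length.
    if (assembled >= 2)
    {
        int length = assembly[1]; // assumed to include the header and to be >= 2
        if (assembled >= length)
        {
            ProcessMessage(assembly, length); // hypothetical handler: route or forward

            // Shift any bytes of the next message to the front of the buffer.
            Array.Copy(assembly, length, assembly, 0, assembled - length);
            assembled -= length;
        }
    }
}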

I know, but that’s just it.
This system needs to be integrated with existing hardware made by a Chinese company
that for some reason cannot deal with the data being received in two chunks…
When it is, the whole system breaks down. And that company is long gone; I’m just cleaning up the mess.

One solution would be to hook a desktop computer running Visual Studio up to all 150 systems out there, but that’s not really an option.

So if I can learn/understand what it is that VS2010 does that makes it work every single time,
perhaps I can emulate that behavior.

I am confused… :wink:

The DataReceived event handler is in your code. What does it have to do with other equipment?

OK, data streams in.
Based on the first 15 bytes I decide where the data should go: to the Panda itself for processing, or forwarded to the other device. If it’s for the other device, I write it to the second serial port as it comes in on the first serial port.

If I receive the data in chunks, then of course I write it in chunks…
and then the 3rd-party device chokes on it and doesn’t reply anymore.

If I enable the breakpoint, I receive the data in one chunk, so I forward it in one chunk.
And the 3rd-party device replies.

Note: I cannot change the system that sends the data to the Panda, and I cannot change the system to which the Panda has to forward the data. They are both legacy systems made by different vendors.
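
For reference, the shape of that routing logic, as a sketch (serverPort, IsForOtherDevice, and HandleLocally are hypothetical names; the real header check depends on the protocol):

// Route a received message based on its 15-byte header.
void Route(byte[] message, int length)
{
    if (length < 15) return; // header not complete yet; real code would keep accumulating

    if (IsForOtherDevice(message))            // hypothetical: inspects the 15-byte header
        serverPort.Write(message, 0, length); // forward downstream in one write
    else
        HandleLocally(message, length);       // hypothetical: the Panda processes it itself
}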

Setting a minimum number of bytes before the event is triggered won’t help me, as some messages are much shorter. But I wish it wouldn’t trigger the event as soon as it sees a small interruption in the stream of data…

I found that removing the breakpoint and adding

Thread.Sleep(200);

after the call to digits.DisplayNumber(…) does seem to improve the situation,
as if that gives the Panda time to receive additional data before trying to read it.
200 ms has a huge impact on the entire system, but it might be the only option?
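
A possible middle ground (a sketch only; the 10 ms quiet window is a guess you would tune per link, and this assumes the NETMF SerialPort API): instead of a fixed 200 ms sleep, poll BytesToRead until it stops growing, then read everything in one go.

using System.IO.Ports;
using System.Threading;

// Block until no new bytes have arrived for `quietMs`, then drain the buffer.
static byte[] DrainAfterQuiet(SerialPort port, int quietMs)
{
    int previous = -1;
    int current = port.BytesToRead;
    while (current != previous) // still growing: keep waiting
    {
        previous = current;
        Thread.Sleep(quietMs);  // quiet window; tune to the radio link's pacing
        current = port.BytesToRead;
    }
    var buffer = new byte[current];
    if (current > 0) port.Read(buffer, 0, current);
    return buffer;
}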

OK.

I understand.

You could play all sorts of timing games to achieve atomic reads, but I doubt you will ever get it 100% reliable.

Using the VS debugger causes the interrupt handler dispatcher to be suspended, allowing enough time for the full message to be read into the serial buffers. But even this could be unreliable with larger messages.

The obvious thing to do is change your code not to send in chunks. Wait for a complete message and then send.

Yep,
the problem is the Panda doesn’t have enough memory to buffer some of the messages.
It sucks having to work with other hardware :stuck_out_tongue:

I assume that the external equipment is not using a serial interface?

PC --> RS232 --> RADIO WAVE MODEM --> radio waves --> RADIO WAVE MODEM --> RS232 --> PANDA --> RS232 --> 3RD PARTY CONTROLLER

But I’m not even sure that the 3rd-party controller sticks to the RS-232 specification…

Now, the Panda controllers are deployed in the field and we can update the settings remotely,
so perhaps, if all else fails, I can add the “delay” to the settings so that it can be configured
for each location, depending on the largest message sent from the client system to that specific controller.

If it’s the only way, that’s the way it’ll have to be.

I find it hard to believe that the 3rd-party controller, which uses an RS-232 interface, is sensitive to timing. ???

By nature, RS-232 is asynchronous. In all the years I have been doing serial programming, going back to when it counted as high-speed communications, I have never come across a device that was sensitive to the starting and stopping of the data stream.

I think you need to look at what the Panda is actually sending to the controller and make sure the data stream is correct.

If we connect another controller from another vendor that uses the same protocol… it works fine.

But we can’t upgrade the existing project to that other vendor’s controller because of legal issues.

You could stop using the event handler and try using a thread.

The thread would use a blocking read of one byte, and then when forwarding, the delay between bytes would be much less.
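
A minimal sketch of that thread (port names are hypothetical; assumes Read blocks until at least one byte is available):

using System.Threading;

// Reader thread: blocking one-byte reads, forwarded as soon as they land.
void PumpBytes()
{
    var one = new byte[1];
    while (true)
    {
        if (clientPort.Read(one, 0, 1) == 1) // blocks until a byte arrives
            serverPort.Write(one, 0, 1);     // forward immediately
    }
}

// Started once at boot, instead of subscribing to DataReceived:
// new Thread(PumpBytes).Start();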

Yes, you just need to get away from the thought that the DataReceived event is anything more than a way to take an amount of data off the queue.

As Mike says, you can unbuffer it by polling, and you’ll always know exactly where you’re up to. Or you can use some other buffer construct to take the data off the serial port and into your app, and then let your main code wait for the buffer to contain its pre-defined “message”.

You either want events, or you don’t. In the case where you have very sequential processing requirements, events aren’t giving you anything. That said, you could still have the DataReceived event handler signal another event when it sees the terminator character you’re looking for, so your main app can then do its bit.

All seems totally doable in NETMF, and I can’t see anything that would stop you from doing what you want.
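
For instance, the terminator-signalling variant could look roughly like this (a sketch; 0x0D as the terminator, AppendToBuffer, and ForwardBufferedMessage are assumptions):

using System.Threading;

static readonly AutoResetEvent messageReady = new AutoResetEvent(false);

void clientPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    int count = clientPort.BytesToRead;
    var chunk = new byte[count];
    clientPort.Read(chunk, 0, count);

    for (int i = 0; i < count; i++)
    {
        AppendToBuffer(chunk[i]);    // hypothetical: accumulate into the message buffer
        if (chunk[i] == 0x0D)        // assumed terminator character
            messageReady.Set();      // wake the main thread
    }
}

// Main loop:
// messageReady.WaitOne();
// ForwardBufferedMessage();        // hypothetical: send the complete message in one shot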

Depending on the chip architecture and driver code, the chunking may not be avoided by polling.
For example, on the good old 16750 UART controller for the PC, there is a 64-byte hardware FIFO (only 16 bytes on the 16550). The controller could be configured to generate an interrupt when the FIFO reached a level of X% or when a timeout of Y character times had elapsed.

So if you are working at the application level, without control over how the driver works, you have no control over how many bytes you get per call. Everything is a combination of your own real-time behavior and the timing of the bytes arriving at the input.

I am surprised that you can buffer a whole message to retransmit it once it is fully received. Your example lists 97 bytes.

However, if the incoming byte stream has regular timing, your sending should be almost identical to the receiving; the chunk size only adds buffering, i.e. a delay on the total stream, unless your own real-time behavior breaks it.
What I mean is that when you have received N bytes, you send them through. But this occurs in the background, so in the meantime you will receive other chars, which you will push to the output FIFO.
In the end, all bytes will be sent out regularly at the serial baud rate.

If the way it works natively is not “smooth enough”, you can try to implement a jitter-removal buffer.
If you can’t have a buffer large enough to hold the maximum message size, you can use a limited circular buffer. This buffer would be written by the DataReceived handler and emptied by the sending thread. If you ensure that the sending thread only starts once the circular buffer has reached a certain fill level, the output stream will be continuous.
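
As a sketch of that idea (the ring size, the half-full threshold, and the port names are arbitrary choices for illustration):

using System.Threading;

byte[] ring = new byte[128];      // limited circular buffer
int head, tail, fill;
readonly object sync = new object();

// Producer: the DataReceived handler pushes incoming bytes into the ring.
void clientPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    int count = clientPort.BytesToRead;
    var chunk = new byte[count];
    clientPort.Read(chunk, 0, count);
    lock (sync)
    {
        for (int i = 0; i < count; i++)
        {
            ring[head] = chunk[i];
            head = (head + 1) % ring.Length;
            fill++;               // overflow handling elided for brevity
        }
    }
}

// Consumer: starts draining once the ring is half full, then runs until empty.
void SenderThread()
{
    bool draining = false;
    while (true)
    {
        bool haveByte = false;
        byte b = 0;
        lock (sync)
        {
            if (!draining && fill >= ring.Length / 2) draining = true; // pre-fill reached
            if (draining && fill > 0)
            {
                b = ring[tail];
                tail = (tail + 1) % ring.Length;
                fill--;
                haveByte = true;
            }
            else if (fill == 0) draining = false; // drained; wait for the next pre-fill
        }
        if (haveByte) serverPort.Write(new byte[] { b }, 0, 1);
        else Thread.Sleep(1);
    }
}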

… Not sure if I am clear, but I’ve already done such an implementation in the past, not on NETMF unfortunately…


I have worked on similar scenarios with similar hardware. What worked for me is:
use your Panda to do the messaging work, meaning don’t send chunks; wait until you have the full data and then forward it to the right port in one shot. You will probably need to flush the TX buffer after your message is written to make sure nothing stays behind.
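
In code, that final step might look like this (a sketch; assumes the NETMF SerialPort’s Flush() drains the transmit buffer, and that message/length come from your own assembly logic):

// Forward the fully assembled message in one shot, then drain the TX buffer.
serverPort.Write(message, 0, length);
serverPort.Flush(); // make sure nothing stays behind in the output buffer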

And don’t rely on timing when serial is involved; different pieces of hardware behave differently, so assume nothing!

Hope this helps.