I’m doing some work with a serial port that is time-critical. I need data sent when I send it, not at some point in the future.
When I use .Flush() on the serial port, not only does it take a very long time to execute, it doesn’t actually seem to flush the buffer at that point.
I don’t have an input buffer on the receiving side. It’s using an FT232RL on COM2.
If I don’t flush, data comes over in blocks of just over 1 KB (anywhere between 900 and 1600 bytes). I can’t seem to find a property for setting the transmit buffer size.
My testing consistently shows that a Flush() takes approximately 3 ms.
Attached are results with Flush() enabled. Data rate is 115200.
Packets are numbered; the missed-packet count includes ones with CRC errors.
Data making it from the FEZ to the PC seems to suffer from errors, hence the CRC failures. I can see this while debugging.
Missed packets are ones that couldn’t be decoded because the header was corrupt (the same issue as a CRC error, except the decoder couldn’t find the packet length or the header itself).
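To make the two failure modes above concrete, here is a minimal decoder sketch in Python. The frame layout (one sync byte, a length byte, the payload, a CRC-16) and the CRC polynomial are my assumptions for illustration, not the poster’s actual packet format: a CRC mismatch discards the frame, and a missing header forces a byte-by-byte resync.

```python
HEADER = 0xAA  # hypothetical sync byte, not the poster's real value

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT-FALSE (polynomial 0x1021)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def encode(payload: bytes) -> bytes:
    """Frame a payload: sync byte, length, payload, CRC-16 over length+payload."""
    body = bytes([len(payload)]) + payload
    return bytes([HEADER]) + body + crc16_ccitt(body).to_bytes(2, "big")

def decode(stream: bytes) -> list:
    """Scan for frames; drop CRC failures and resync byte-by-byte on bad headers."""
    frames, i = [], 0
    while i + 4 <= len(stream):           # minimum frame: sync + len + CRC16
        if stream[i] != HEADER:
            i += 1                        # "missed packet": header not found
            continue
        length = stream[i + 1]
        end = i + 2 + length + 2
        if end > len(stream):
            break                         # incomplete frame at end of stream
        body = stream[i + 1:i + 2 + length]
        crc = int.from_bytes(stream[end - 2:end], "big")
        if crc == crc16_ccitt(body):
            frames.append(stream[i + 2:i + 2 + length])
            i = end
        else:
            i += 1                        # CRC error: discard and resync
    return frames
```

With this scheme, a single corrupted header byte costs the whole frame, which matches the “couldn’t find packet length or the header” symptom described above.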
Without Flush(), the data quality decreases considerably; however, the packets come more frequently (or rather, more are sent in an average span of time). This happens consistently during repeated testing.
Have you tried flow control?
Flush time is not fixed. You can’t say it is 3 ms, as it depends on how many bytes are in the FIFO.
If you are losing data then you need to look into hardware handshaking like Mike suggested.
I haven’t tried flow control; I don’t have those bytes available. My data stream is around 6800 bps and each packet is 22 bytes. I shouldn’t need flow control to a computer through an FT232RL (which can handle much, much more than 115200). Production would only be about 200 bps, over radios at 2400 baud. I want to get the firmware sorted before I start wireless testing.
I’m not losing any bytes; the “lost” packets are actually packets where the bytes are wrong, so they are discarded.
For me, the flush time is very clearly determined. It doesn’t really matter how much data I dump; it delays the code by 3 ms. That is unacceptably slow for a commercial firmware build, IMO. 3 ms is about 345 bit-times (roughly 34 bytes) at 115200; I can’t believe an ARM7 is this slow to flush a buffer.
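For reference, the transmission-time arithmetic behind that figure (assuming the common 8N1 framing, i.e. ten bit-times per byte on the wire) works out as:

```python
BAUD = 115200
BITS_PER_BYTE = 10                 # 8N1 framing: 1 start + 8 data + 1 stop bit
bytes_per_ms = BAUD / BITS_PER_BYTE / 1000

# Bytes that could have been transmitted during a 3 ms Flush() stall:
stalled_bytes = bytes_per_ms * 3   # roughly 34.6 bytes
```

So a fixed 3 ms stall costs about a packet and a half of wire time per flush at this rate.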
Chris Seto and I have been noticing issues with data quality coming out of the serial ports for a while now, but this is the first time I’ve actually confirmed it, since I have a data packet that is identical every time except for 24 bits of information (which includes a timestamp) and 8 bits for the packet ID.
I’ve tried with multiple FT232RLs, and I’ve tried with 10 ms delays between sends. I still get corrupt bytes, and Flush() is still slow.
Serial is one of the most commonly used interfaces. I doubt there is a problem there. It is even used for the PPP cell-phone modems that our commercial customers use, with thousands of bytes going back and forth.
Well, I am experiencing a problem, which I can clearly see occurring. Rather than telling me you doubt my problem exists, perhaps you have some ideas on how I can debug or resolve it?
It is nice to know that it is used by your commercial customers, but that doesn’t help with the problems I am experiencing. The wonderful thing about TCP is that it has a way to re-request packets that have errors. My wireless configuration (ultra-high-gain RX antenna) doesn’t allow me that luxury. I’m using the standard TCP checksum for my packets, so I wonder how many of your customers are losing 1–2% of their packets and having to re-request them?
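Assuming “TCP standard CRC” refers to the 16-bit ones’-complement Internet checksum that TCP and IP headers use (RFC 1071), a minimal sketch looks like this. The nice verification property is that checksumming the data with the checksum field appended yields zero:

```python
def inet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum (RFC 1071), as used by TCP/IP headers."""
    if len(data) % 2:                   # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return (~total) & 0xFFFF
```

Note this is a checksum, not a true CRC: it misses some error patterns (e.g. swapped 16-bit words) that a CRC-16 would catch, which may matter on a noisy radio link.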
First, please try this on the beta SDK coming out very soon.
Then, is it possible to write as small a program as possible that reproduces the error (perhaps with Tera Term on the other side)?
That way we can immediately look into any possible issues.
OK, so the beta has serial port fixes?
I was planning on writing an app today that sends bytes 1 to 255 to a PC app, which reads bytes 1 to 255 and reports any discrepancies. Tera Term is not well suited to visually checking 300+ bytes/second of data (assuming a 3 ms delay from Flush).
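A sketch of the PC-side check described above (Python; the function name is mine). It assumes the stream starts aligned at byte 1 and that no bytes are dropped outright; a dropped byte would shift every later comparison, so a real tool would also want a resync step:

```python
def check_sequence(received: bytes) -> list:
    """Compare a received stream against the repeating 1..255 test pattern,
    returning a list of (offset, expected, got) discrepancies."""
    pattern = bytes(range(1, 256))
    errors = []
    for i, got in enumerate(received):
        want = pattern[i % 255]
        if got != want:
            errors.append((i, want, got))
    return errors
```

Feed it the raw bytes read from the COM port; an empty list means a clean run.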
Hi MarkH. There are a lot of balls in the air here: Flush, SerialPort, the FT232, your code, etc. I would try a simple app via a real serial port or the CDC serial port first to see whether you get the same issue; if so, we can drill down to a software problem. In MFConsole I was sending megabytes at 115K, never using Flush at all, and saw no corruption. The corrupt-packet issue smells like a code issue with overlapping buffer writes and/or reused buffers. Why is your code so dependent on Flush timing?
I’m using serial communication too, constantly sending and receiving requests, and I’ve never felt I needed Flush() to get the data out.