I have realized that buffers bigger than 2048 bytes are sent and received in 2048-byte chunks. For this reason I have written the following code block to read data from a TCP socket:
ArrayList buffer = new ArrayList();
int totalBytesAvailable = 0;
while (socket.Available != 0)
{
    // Allocate a chunk sized to whatever the driver reports as available.
    byte[] tempBuff = new byte[socket.Available];
    totalBytesAvailable += socket.Receive(tempBuff);
    buffer.Add(tempBuff);
}
In some cases, when the data available to read is bigger than 2048 bytes, I can read the first 2048-byte part successfully, but the second part cannot be read, since the while loop terminates when socket.Available returns 0. While I am debugging, the problem does not occur, but when not debugging it does, and I only get the first 2048 bytes of the incoming data.
The code you supplied prints how much data is available in the buffer each time it runs through the loop. Are you seeing that there is data available? Have you tried longer delays than 10ms? I would start with 1000ms, and work my way down until it worked 100% of the time.
It seems to me that you are running into some latency issues here. I’m picking up some W5100 chips to play with; I might find out more in the next week or two and be able to help further.
TCP/IP is a stream protocol. There is no concept of messages or records. It is the programmer’s responsibility to process the stream and extract the messages.
Unless all of your messages are a fixed length, there is usually something in the data stream for determining message boundaries. This could be start and stop characters, or starting each message with a length field.
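As a rough sketch of the length-field approach (the helper names Frame/Extract are mine, the 2-byte big-endian header is just one possible convention, and I am using desktop .NET generic collections, which the Micro Framework may not have):

```csharp
using System;
using System.Collections.Generic;

static class LengthPrefixFraming
{
    // Prepend a 2-byte big-endian length header to the payload.
    public static byte[] Frame(byte[] payload)
    {
        byte[] framed = new byte[payload.Length + 2];
        framed[0] = (byte)(payload.Length >> 8);
        framed[1] = (byte)(payload.Length & 0xFF);
        Array.Copy(payload, 0, framed, 2, payload.Length);
        return framed;
    }

    // Pull every complete message out of the receive buffer. Leftover
    // bytes (a partially received message) stay in the buffer until
    // the next read completes them.
    public static List<byte[]> Extract(List<byte> buffer)
    {
        var messages = new List<byte[]>();
        while (buffer.Count >= 2)
        {
            int length = (buffer[0] << 8) | buffer[1];
            if (buffer.Count < 2 + length)
                break; // header arrived, but the body is not complete yet

            byte[] msg = new byte[length];
            buffer.CopyTo(2, msg, 0, length);
            buffer.RemoveRange(0, 2 + length);
            messages.Add(msg);
        }
        return messages;
    }
}
```

The point is that Extract does not care how the stream was chunked in transit: whether a message arrives in one 2048-byte read or split across several, the same complete messages come out.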
I never use the Available method to determine how much data is ready to be read.
Generally, I read a socket in a separate thread. I use a fixed-length input buffer of typically 2048 bytes. Within a while loop, I then issue a Read into this buffer with a length of 2048. When the read operation completes, it returns the number of bytes that were read. Using the number of bytes read, I then process the bytes received. When I have assembled a complete message, I usually put it into a queue for processing by another thread, and then reissue the Read, looking for the next message.
You also have to consider error conditions. I always surround the read with try/catch to catch SocketExceptions. You also have to be ready to receive a zero length from a Read, which indicates that the remote end has closed their socket.
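A minimal sketch of that receive loop (using the desktop System.Net.Sockets API; the Micro Framework API is similar but not identical, and the processBytes callback is a stand-in for whatever message assembly you do):

```csharp
using System;
using System.Net.Sockets;

static class Receiver
{
    // Read from a connected socket until the peer closes or an error
    // occurs, handing each chunk to the supplied callback as it arrives.
    public static void ReadLoop(Socket socket, Action<byte[], int> processBytes)
    {
        byte[] inputBuffer = new byte[2048]; // fixed-length receive buffer

        while (true)
        {
            int bytesRead;
            try
            {
                // Blocks until at least one byte arrives; may return fewer
                // bytes than requested -- normal for a stream socket.
                bytesRead = socket.Receive(inputBuffer, 0, inputBuffer.Length,
                                           SocketFlags.None);
            }
            catch (SocketException)
            {
                return; // connection error: leave the read loop
            }

            if (bytesRead == 0)
                return; // zero-length read: remote end closed its socket

            processBytes(inputBuffer, bytesRead);
        }
    }
}
```

Note that the loop never consults Available: it simply blocks in Receive and trusts the returned byte count, which is what makes it indifferent to how the stream was chunked.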
Using timing delays to get all the parts of a message is dangerous. You could tune the delay on a clean network, only to have it fail on a less reliable network, due to TCP error recovery. All the pieces will arrive, in the correct order, but the timing is not deterministic.
Yes, it prints how much data is available in the buffer, and it prints 0 although there might still be data on the way. Because of that, there is no next iteration of the while loop, so I cannot read all of the data that came in over the network.
Actually, the delay idea comes from the following observation: under the same conditions, when I am debugging (a slow execution of that code block) there is no problem, but without debugging (a rapid execution of that code block) the buffer problem appears. I think there is a bug in GHI’s W5100 drivers.
I have tried a 4000ms delay (I was curious about when it would work 100% of the time), and the same problem still occurs about ~2% of the time.
I have given up using the Available method and have implemented another way to read the data coming in over the network. The method I implemented is nearly the same as yours. It now seems to work properly, but I am currently working on a different part of the project and haven’t run the test cases yet. I will let you know about the results.