Network socket software problem

Matt:

Associated with each socket are large transmit and receive buffers. I do not know the exact size of these buffers, but I think they are around 10K each?

I am a bit confused about your 128 socket count. From what you said last, the Cobra would be the client, connecting to a PC which is the server. In this situation, I don’t see why you would have 128 sockets open. You should only use one socket between each device. Does the Cobra communicate with 128 PCs? Tell me a bit more about your application.

I don’t really know about different types of client/server structures. There are different protocols used, but the architecture is the same. A client calls the server to send and/or receive info. Generally, it is a multiple client to one server relationship, forgetting about load sharing configurations.

Generally it is not a good idea to have resources (sockets) sitting idle. But if you had a source/client sending data every 10 seconds, you might want to keep the socket open. Every minute, maybe not. The decision has to be based upon your application and the available resources.

There is no asynchronous socket support with MF, so synchronous with threading is the way to go. Makes that decision easy. :smiley: Or maybe not…

My typical design for a server is to have a thread which does the accepts for new calls/sessions. I don’t believe TcpListener is implemented on the MF. You will have to use the bind, listen and accept methods.

I instantiate a session object, which I build, with a Start method that is passed the socket for the session. In the Start method I create a thread which, when it runs, places a read on the socket. It can then receive messages from the client. The session object provides the context for the session.

When the client disconnects, which usually results in a completed read with a length of zero, the session closes the socket and terminates the thread.
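That accept-thread/session-object structure can be sketched in Python (the thread itself is about .NET Micro Framework C#, so this is only an illustration of the pattern; the `Session` class and `serve` function names are my own):

```python
import socket
import threading

class Session:
    """Per-client context: owns the socket and the thread that reads from it."""

    def __init__(self, conn, addr):
        self.conn = conn
        self.addr = addr

    def start(self):
        # Each session gets its own thread, so a blocking read on one
        # client never stalls the others.
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        try:
            while True:
                data = self.conn.recv(1024)
                if not data:           # zero-length read: client disconnected
                    break
                self.handle(data)
        finally:
            self.conn.close()          # session closes the socket on exit

    def handle(self, data):
        print("from", self.addr, ":", data)

def serve(host="0.0.0.0", port=5000):
    # The accept thread: bind, listen, accept, then spawn a Session per call.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        Session(conn, addr).start()
```

The zero-length `recv` in `_run` is the same "completed read with a length of zero" disconnect signal described above.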

And, you have to do all of this while standing on one foot. :stuck_out_tongue:

My client machine (Cobra?) will be collecting up to 120 data streams–12 different parameters (pH, temperature, conductivity, water level, CO2 concentration, etc.) on 10 different reactor vessels. The client will also be controlling each parameter of those reactor vessels, i.e., maintaining feedback control loops by monitoring sensors and adjusting the reactor environments via pumps and valves. Every time a data stream acquires new data (by reading a sensor), which is done on the data stream’s own thread on its own timing, the data will be sent to the server for storage. Notice that there is a known number of parameters and clients; I’m not opening this up to the internet with an unknown or variable number of clients.

My first-stage thinking, since I know how to send data from a data stream to a server through a socket connection, was to simply establish a socket connection for each data stream. That would be how I would get to 128 sockets. I am certainly not stuck on that; for me it was just natural first-stage thinking–just multiply what I have already done. However, you have said that the right way to think of this is to have one socket per client machine. I can certainly do that by building a queue within the client to line up the data to be sent through its socket to the server. Then data would be moving through the socket about once a second and the connection would remain open. The queue would be read and the data sent to the server by a separate thread so the rest of the client doesn’t wait while the server is reading. I wouldn’t be surprised if this is ultimately a simpler design and/or perhaps the only way it can be done, not to mention saving memory. Hope this makes some flavour of sense.
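That queue-plus-sender-thread idea might look something like this (a Python sketch of the pattern, not MF code; `DataUplink` and its methods are hypothetical names):

```python
import queue
import socket
import threading

class DataUplink:
    """Funnels readings from many data-stream threads through one socket.

    Producer threads call send(); a single sender thread drains the queue
    and writes to the server, so data acquisition never blocks on I/O.
    """

    def __init__(self, host, port):
        self.sock = socket.create_connection((host, port))
        self.q = queue.Queue()
        threading.Thread(target=self._sender, daemon=True).start()

    def send(self, reading):
        # Called from any data-stream thread; returns immediately.
        self.q.put(reading)

    def _sender(self):
        while True:
            reading = self.q.get()
            if reading is None:        # sentinel: shut down the uplink
                self.sock.close()
                break
            self.sock.sendall(reading)

    def close(self):
        self.q.put(None)
```

Each data-stream thread just calls `send()` with its encoded reading; only the sender thread ever touches the socket, so there is no contention on the connection.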

A single socket sounds like an ideal solution. Remember, sockets use resources on server machines as well, it’s just that they’re typically more well-endowed than our humble microcontrollers :wink:

When you say “reactor vessels”, certainly you don’t mean nuclear reactors? :o

Well, cell nuclei are involved…

Yes, one socket per machine makes a whole lot of sense. I just put a “virtual data storage” object on the client which in reality just shovels the data out the back door (socket) and over the network and through the woods to the real data storage living on the server.

BTW, I haven’t given up on the idea of using more than one socket per client machine. When I set up commands coming from a remote user interface to the client, with associated notifications from the client back to the remote UI, I get to use a different socket as part of a “virtual UI/command centre” in the client.

Yes, multiple sockets there are an excellent idea; it means you don’t have to repurpose an existing one for another task and hope the first task can wait until you’ve finished with its socket.

Matt, I think you have zeroed in on the solution. One socket for data transfer and one for commands.

If a solution sounds simple, it is usually the right one. ;D

There’s generally little use in keeping 128 sockets open and only using one at any given time. There is no reason (aside from how you wrote your software) that any given socket must be “dedicated” to a single purpose, so there’s no re-purposing going on, except your own.

There are well-known examples of the two-socket command/data style of interface (FTP being the most common one), but there are gotchas there that you need to watch out for.

There are even protocols that create virtual “channels” over a single socket connection for you (see BEEP, though as far as I can tell, it was never widely implemented, as it is quite complex).

If it were me, I’d use a single socket, and each time data was to be transferred, I’d send down a “header” of sorts, saying, “hey, here comes some of data X”, and then I’d send the “body” of the data. That way, I could add or remove data types or even whole reactors without needing to alter my socket management code.
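One minimal way to do that header-then-body framing (a sketch only; the type codes and JSON encoding are my own choices, not anything from this thread):

```python
import json
import struct

# Hypothetical message type codes for this example.
MSG_DATA = 1
MSG_COMMAND = 2

def pack_message(kind, payload):
    """Frame a message as: 1-byte type + 4-byte big-endian body length + JSON body."""
    body = json.dumps(payload).encode()
    return struct.pack(">BI", kind, len(body)) + body

def unpack_message(buf):
    """Inverse of pack_message; returns (kind, payload, bytes_consumed)."""
    kind, length = struct.unpack_from(">BI", buf)
    body = buf[5:5 + length]
    return kind, json.loads(body.decode()), 5 + length
```

Because the receiver switches on the type byte, adding a new parameter or even a whole new reactor only means a new payload, not new socket-management code.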

Rest assured the data (which are simply floating point values) have associated id headers which include data type, reactor id, and more. The header structure could be expanded to include system commands so as to run one central socket for all network communications, but then I’m back to the question of the drawbacks of maintaining multiple sockets vs. adding that level of generalization. Right now I’m happy to hog the middle of the road by using different sockets for different objects/tasks, maybe 2 or 3 total for each physical machine. Thanks all for the input; certainly I now have some preliminary knowledge of socket resources.