Open source TCP/IP client/server APIs

I’m interested in learning whether there are any third-party open source class libraries out there that provide bidirectional, high-performance messaging between the .NET Framework and the .NET Micro Framework.

One could rely on some form of web services for this, which is reasonable, but these often carry overhead due to their reliance on HTTP, text encodings, JSON/XML, and so on.

The kind of overall system I’m looking at here would provide:

  1. Bidirectional - that is, each device/machine acts as both a server and a client (not a big deal though; just run both at each end)
  2. Support for dense, small messages - packing binary data tightly into messages and perhaps using store-and-forward to minimize socket IO (see the sketch after this list)
  3. Support for large or lengthy messages and sequences of messages.
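
For point 2, here’s a minimal sketch of the store-and-forward idea using desktop .NET APIs (the MessageCoalescer name and the 1400-byte threshold are my own invention for illustration; .NET MF’s stream and socket APIs are similar but not identical):

```csharp
using System;
using System.IO;
using System.Net.Sockets;

// Sketch only: accumulate many small, length-prefixed messages in memory
// and push them out with a single socket write once a threshold is reached.
class MessageCoalescer
{
    private readonly MemoryStream _buffer = new MemoryStream();
    private readonly NetworkStream _stream;
    private readonly int _flushThreshold;

    // 1400 bytes roughly fits one ethernet frame's payload.
    public MessageCoalescer(NetworkStream stream, int flushThreshold = 1400)
    {
        _stream = stream;
        _flushThreshold = flushThreshold;
    }

    // Length-prefix each message so the receiver can split the
    // coalesced buffer back into individual messages.
    public void Enqueue(byte[] payload)
    {
        byte[] lengthPrefix = BitConverter.GetBytes(payload.Length);
        _buffer.Write(lengthPrefix, 0, lengthPrefix.Length);
        _buffer.Write(payload, 0, payload.Length);
        if (_buffer.Length >= _flushThreshold)
            Flush();
    }

    // One socket write for however many messages have accumulated.
    public void Flush()
    {
        if (_buffer.Length == 0) return;
        byte[] data = _buffer.ToArray();
        _stream.Write(data, 0, data.Length);
        _buffer.SetLength(0);
    }
}
```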

The binary message approach aims to send ONLY what’s actually needed and avoid the burdensome overhead we get with HTTP and the like - tons of text strings, headers, JSON, etc. Just think about what actually goes over the wire when you want to send, say, three 4-byte integers…
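
For instance, here’s a rough sketch of that comparison in desktop .NET (the JSON string is just a plausible minimal body I made up; real HTTP traffic adds a request line and headers on top of it):

```csharp
using System;
using System.IO;
using System.Text;

class WireSizeDemo
{
    static void Main()
    {
        int a = 1, b = 2, c = 3;

        // Tight binary framing: exactly 12 bytes on the wire.
        var ms = new MemoryStream();
        using (var writer = new BinaryWriter(ms))
        {
            writer.Write(a);
            writer.Write(b);
            writer.Write(c);
        }
        Console.WriteLine("Binary: {0} bytes", ms.ToArray().Length);      // 12

        // The same three values as a minimal JSON body.
        string json = "{\"a\":1,\"b\":2,\"c\":3}";
        Console.WriteLine("JSON body: {0} bytes", Encoding.UTF8.GetByteCount(json)); // 19

        // And HTTP would add a request line plus headers on top of the
        // body - typically a few hundred bytes more.
    }
}
```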

I’ve already developed something that does this (used for high-speed securities processing) and more, and it delivers throughput around 10x that of protocol buffers, BUT it is designed for .NET only (not “open”, I mean - Java couldn’t use it without losing performance) AND it leverages some unsafe features for low-level data manipulation, so it can’t be ported (easily) to .NET MF.

I’d love to be able to use this kind of system with .NET MF/Gadgeteer because of the power it affords. To take just one example: when a server processes a message from a client, the handler (it’s a state machine) can throw exceptions. These are caught by the server framework and converted into client ‘Exception’ messages, and the client thread (if it’s in synchronous mode) will eventually see that exception. Being able to catch exceptions in client logic that were originally thrown by a server’s handler is a very neat programming model.
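
In outline the pattern looks something like this - a sketch with invented names (Frame, FrameKind, Dispatch, Receive), not the actual API of what I built:

```csharp
using System;

// FrameKind/Frame are invented for this sketch.
enum FrameKind : byte { Result = 0, Exception = 1 }

class Frame
{
    public FrameKind Kind;
    public string TypeName;  // e.g. "System.InvalidOperationException"
    public string Message;
    public byte[] Payload;   // result bytes when Kind == Result
}

static class ServerFramework
{
    // The server runs the handler and, on failure, converts the
    // exception into a wire message instead of letting it escape.
    public static Frame Dispatch(Func<byte[]> handler)
    {
        try
        {
            return new Frame { Kind = FrameKind.Result, Payload = handler() };
        }
        catch (Exception ex)
        {
            return new Frame
            {
                Kind = FrameKind.Exception,
                TypeName = ex.GetType().FullName,
                Message = ex.Message
            };
        }
    }
}

static class ClientFramework
{
    // A synchronous client rethrows locally, so caller code can simply
    // wrap the call in try/catch as if the handler ran in-process.
    public static byte[] Receive(Frame frame)
    {
        if (frame.Kind == FrameKind.Exception)
            throw new Exception(frame.TypeName + ": " + frame.Message);
        return frame.Payload;
    }
}
```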

Anyway, I’m interested to see what’s out there already…

Thx

One thing I find myself asking myself (a habit that causes my friends and family some concern) is whether this is a hammer in search of a nail (and those are another hobby of mine).

The reason I say that is because at the desktop computing scale, your endpoints are several orders of magnitude faster at generating and processing information than the pipelines between them are at transporting that data. Therefore, it makes sense to spend processor time at the endpoints to optimize the traffic to get as close to 100% utility factor out of the network pipe as you can.

With MCU-scale processors though, that disparity in speed doesn’t exist. The cycles spent marshaling data might create costs on the MCU that exceed any benefit on the wire.

I don’t have hard numbers, but I have a gut feeling that spending cycles to create highly optimized wire packages probably won’t result in the same sorts of payoffs due to marshaling cost and other bottlenecks. I would be happy to be proven wrong.

I think you’re right to express skepticism about this and I agree with your assessment of the various factors here.

I did quite a bit of research some years ago when I began to draft the requirements for the API that was eventually developed (it was also fully design-reviewed and refactored three times - time consuming but worthwhile).

One of many things I discovered was that the CPU time spent sending/receiving data between two Windows machines over ethernet is heavily influenced by the number of IO operations one does.

Sending 100 MB (just dumb empty binary blobs) as 1,000 100 KB operations costs much less CPU than sending it as 100,000 1 KB operations - almost a linear relationship, if I recall. So when sending a lot of data (for example video) it pays to actively minimize the number of IO operations, that is, to build chunks of data that are as large as possible before sending.
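
A crude sketch of that kind of measurement is below; it writes to Stream.Null rather than a real socket, so the absolute numbers mean little - a proper test needs a NetworkStream between two machines - but the per-operation overhead still shows up as the call count grows:

```csharp
using System;
using System.Diagnostics;
using System.IO;

class ChunkSizeDemo
{
    static void Main()
    {
        const long total = 100L * 1024 * 1024; // 100 MB either way

        Measure(100 * 1024, total); // 1,000 writes of 100 KB
        Measure(1024, total);       // 102,400 writes of 1 KB
    }

    static void Measure(int chunk, long total)
    {
        byte[] buffer = new byte[chunk];
        Stream sink = Stream.Null; // stand-in for a NetworkStream
        var sw = Stopwatch.StartNew();
        for (long sent = 0; sent < total; sent += chunk)
            sink.Write(buffer, 0, chunk);
        sw.Stop();
        Console.WriteLine("{0,7}-byte chunks: {1} ms", chunk, sw.ElapsedMilliseconds);
    }
}
```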

By using serialization that strives to minimize the size of the data, together with store-and-forward mechanisms, one can transfer a great deal very quickly, far beyond what’s possible with something like HTTP - that is, by absolutely minimizing CPU one can ramp up transfer speeds, which is ideal for dealing with video or very high rates of diverse sensor data.

Ideally (I’m in hobby mode here) I’d love to have a platform for my desktop to “manage” multiple devices, robots, etc., each of which could be generating lots of data - leveraging the desktop to do heavy processing work on behalf of the devices.

Of course, on a desktop there’s DMA, which offloads a ton of work to the network adapter at almost no local CPU cost; the same isn’t true on the Gadgeteer boards (I suppose).

Korp