Embedded Devices should not be used as Servers

@ Mike - I would think that every Intel Xeon or AMD Opteron platform running Windows Server would have this capability. Do you have different information?

In this post-Fukushima world, with Japanese cars arriving in South America carrying ionizing radiation sources, I think that single-event upsets are not so rare after all:
http://www.autoblog.com/2014/07/15/fukushima-radiation-in-used-cars/

Does Windows Server recover in case of an ECC error? I had a server farm with 100+ servers, and all that happened when a memory chip went bad was that an error showed up on the front of the server. The server still went dead…

Indeed, it’s a question of definition. For me, a server is hardware or software that receives requests from clients, processes them and then returns responses.

Whether or not a device has ECC memory has nothing to do with being a server, client, or both.

How high a degree of availability you need to ensure depends on your requirements. If you need high-availability hardware and software, then ECC is only part of that. A dual-core processor running in lockstep and cross-checking that the results of both cores are identical could be one such element; there are automotive Cortex-R processors capable of that (but their vendors probably don’t talk to you unless you want to buy millions per year). Then there are high-availability (“HA”) embedded operating systems. Because ECC doesn’t help if the rest of the system, in particular the OS, cannot deal with the other hardware faults that may occur.
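To make the lockstep idea concrete, here is a toy Python sketch (not real lockstep hardware, where the comparison happens in silicon on every cycle): run the same computation redundantly and cross-check the results, flagging a mismatch as a suspected transient fault. The function name `checked` is my own invention for illustration.

```python
def checked(compute, *args):
    """Toy software analogue of lockstep execution: run the same
    computation twice and cross-check that the results agree."""
    a = compute(*args)
    b = compute(*args)  # redundant second execution
    if a != b:
        # In hardware, a mismatch would trigger a fault response
        # (e.g. reset or safe state); here we just raise.
        raise RuntimeError("lockstep mismatch: transient fault suspected")
    return a
```

Of course, in software both runs share the same memory and CPU, so this catches far fewer fault classes than true dual-core lockstep; it only illustrates the cross-checking principle.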

Normally, you don’t need this kind of availability, and thus don’t want to pay for it. Then the trick is to be able to deal with rare failures. A watchdog that reboots the system after a fatal (but transient) problem may already help. Or good old-fashioned manual intervention. Usually, software or transient network failures are the more important issues.

And there, making a device a RESTful server can actually increase robustness. An HTTP (or CoAP) server can simply wait and answer requests, without caring whether clients fail or not. It simply responds, perhaps with just an error message if it cannot do more than that. A client in turn does not need to care much if a server fails: it can just repeat a failed request, with no inconsistent state to worry about. (The client may need to keep track of some application-level state in a state machine, if it does more than trivially ask for a current sensor value.) Anyway, embedded REST servers can often be simpler than their clients, which is good for microcontrollers as servers. But here we are definitely not talking about fault-tolerant hardware, but about how to make the implementation of a distributed system simple and robust, which boils down to how “partial failures” are handled.
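The “just repeat a failed request” point can be sketched in a few lines of Python. This is only an illustration under my own assumptions (the name `retry_request` and the use of `OSError` for network-style failures are mine, not from any particular library); the key property is that because the request is idempotent, repeating it after a failure leaves no inconsistent state behind.

```python
import time

def retry_request(send, retries: int = 3, delay: float = 0.0):
    """Repeat a failed request up to `retries` times.

    Safe for idempotent requests (e.g. a GET of a sensor value):
    retrying leaves no inconsistent state to clean up.
    """
    last_error = None
    for _ in range(retries):
        try:
            return send()               # e.g. an HTTP GET to the device
        except OSError as err:          # network-style transient failure
            last_error = err
            time.sleep(delay)           # wait, then simply try again
    raise last_error
```

A real client would wrap the actual HTTP/CoAP call in `send`; the retry logic itself stays oblivious to the transport.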

Also, when using other communication technologies, a microcontroller as a server is often a good choice. For example, with BLE a device is usually a server (“peripheral”), although it sometimes makes sense to program it as a client (“central”). Increasingly, it makes sense to be both, e.g. for implementing a mesh topology.

My preferred setup of a device in the foreseeable future is as both client and server, using HTTP/2 and firmly staying with a “stateless” REST architecture (HTTP/2 - The New IoT Protocol? - Oberon's IoT blog). I still much prefer this simple and robust approach over MQTT, AMQP, DDS and the like, in spite of all the claimed (but to me not convincingly proven) scalability advantages of such schemes :wink:
