This looks very very interesting…
Wait, so I pay Amazon so I can run my software on my device?
Nice racket, eh?
It’s like the worst of all worlds. I mean, having any infrastructure depend on outside systems (i.e. “the cloud”) is already a hassle when your connectivity isn’t 100%, but now you have to pay for “the cloud” even when you’re not using it. Except, sometimes you are, so you’re still dependent on it.
So I have to pay my AWS bill now to turn my lights on and off even when the DSL is down!
At $1.49/yr they might as well just give it away…
This is exactly why I recently ditched my existing home automation system, which relied on a cloud connection to allow interaction with certain devices. Just switching a light on, for example, took anywhere from 5-10 seconds at times, which is not Wife Approved.
I now have the whole system running internally, with only sensor data being uploaded to the cloud, but the cloud is not required for the system to work. If the internet connection is lost, it all still works. Much happier wife now too.
I only skimmed over this Greengrass announcement, but the impression I got was that this is the main scenario they’re trying to solve. It didn’t sound to me like they were planning it to be used for real-time home automation, but more for IoT sensor networks, where the “app” basically acts as a queue: it handles uploading data to the cloud when it can and isolates your device from the transient nature of the cloud when it can’t. It can work the other way around too, if you have a device that needs to respond to data/events created elsewhere, but I don’t think there’s any expectation that it will always be real-time.
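The store-and-forward behaviour described above can be sketched in a few lines of Python. This is just an illustration of the pattern, not Greengrass’s actual API; the class and method names are made up:

```python
from collections import deque

class StoreAndForwardQueue:
    """Buffer readings locally; ship them to an uplink only when connected.

    The local device keeps working regardless of connectivity; the backlog
    drains automatically once the link comes back.
    """
    def __init__(self, uplink):
        self.uplink = uplink      # callable that ships one reading to the cloud
        self.buffer = deque()
        self.connected = False

    def publish(self, reading):
        self.buffer.append(reading)
        if self.connected:
            self.flush()

    def set_connected(self, up):
        self.connected = up
        if up:
            self.flush()          # link restored: drain the backlog

    def flush(self):
        while self.buffer:
            self.uplink(self.buffer.popleft())

# Usage: readings queue up while offline, then drain on reconnect.
sent = []
q = StoreAndForwardQueue(sent.append)
q.publish({"temp": 21.5})   # offline: stays buffered, nothing blocks
q.set_connected(True)       # DSL comes back: backlog is uploaded
q.publish({"temp": 21.7})   # online: ships immediately
```

The point is that `publish()` never blocks on the network, so local behaviour (lights, sensors) is unaffected by cloud outages.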
MQTT server on the internal network, forward relevant events to somewhere else if you really can’t live without giving Amazon/Google/MS your data and money.
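The “forward only relevant events” part is just a whitelist in front of the upstream publisher. A minimal sketch in Python (topic names are hypothetical, and the two callbacks stand in for a real MQTT client’s publish calls):

```python
# Local-first event handling: everything is acted on locally; only a
# whitelist of topics gets copied upstream to the external broker/cloud.
RELEVANT_TOPICS = {"sensors/temperature", "sensors/power"}  # made-up names

def on_local_event(topic, payload, act_locally, forward_upstream):
    act_locally(topic, payload)            # lights, heating, etc. stay local
    if topic in RELEVANT_TOPICS:
        forward_upstream(topic, payload)   # optional cloud copy

# Usage: a light-switch event stays local; a sensor reading is also forwarded.
local_log, upstream_log = [], []
on_local_event("lights/kitchen", "on",
               lambda t, p: local_log.append((t, p)),
               lambda t, p: upstream_log.append((t, p)))
on_local_event("sensors/temperature", 21.5,
               lambda t, p: local_log.append((t, p)),
               lambda t, p: upstream_log.append((t, p)))
```

In a real setup the local side would be a broker like Mosquitto on the LAN, with a bridge rule doing the selective forwarding, so the house keeps working when the WAN link drops.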
HaD’s “Minimal MQTT” series was really an eye-opener to me. A wifi-equipped module for $4, running a Lua interpreter, is right there at the level where everything can be networked now.
Edit: and this, to me, reveals the real problem that NETMF faces. NETMF can’t (currently) compete in the “simple highly-integrated” market against NodeMCU and ESP8266, and it can’t compete in the “extremely low-power” market against 8-bit micros, and it can’t compete in the “high-power GHz+” market against Pi/Odroid/CHIP/etc because they can run .NET Core and Mono.
The only market left to NETMF is “absolutely must be able to deploy and debug via Visual Studio and not run an OS”, and that’s a pretty small market. That’s why I believe that something like Llilum (or even better, Steve Maillet’s new system described in “Future of .NETMF (take #999999)” · Issue #527 · NETMF/netmf-interpreter · GitHub) is the way forward, because at least then you could run on low-power hardware. Even if you had to give up VS debugging, you’d be no worse off than NodeMCU and Arduino (and clearly, the lack of hardware debugging on those platforms hasn’t resulted in their demise).