What is the better Gateway device?

IOT project:

1-5 sensor devices (PIR, temp, etc.).
433 MHz or nRF24L01 communication between device and gateway.
Gateway: CerbNet or RPi.

I will be putting 50 of these systems into testers' houses, and then hopefully selling a commercial product.

I want to get your opinion on what would be a better gateway.

The price is about the same.

The Pi has a larger attack surface and may need Windows Update, blah, blah, blah - everything that comes with a full Windows install.

But the Pi can do many things (that I probably don't need) that the CerbNet can't do.

What are the pros and cons of using a gateway with a full-blown OS vs. a NETMF device?

Thanks.

You’re probably going to need to provide more detail about exactly what your product does/needs in order to get a useful response here. We could go on all day long about the generic differences between a solution with an OS vs a microcontroller project.

However, one thing I will point out now is that if you plan for this to be a commercial product then you should not be looking at the CerbNet, as it is not approved for commercial applications and is being retired soon. You should probably look at the G80/G30 boards, and specifically the FEZ Lemur or Panda III if you need that form factor. I imagine there will be a "Net" version of one of these boards available in the very near future to replace the CerbNet.

Personally, my general rule is if you don’t have a really good reason to go with a full-blown OS solution then default to the microcontroller solution.

OK, I want to piggyback on this one to add my own angle, since I am looking at this too.

I want to build a system where I have a gateway with a touch screen, sensors detecting soil moisture, and a unit to replace a sprinkler controller. All connected with XBee.

I am starting to look toward W10 IoT because the connected touchscreen option seems appealing, along with the UI being able to be done in XAML. Ideally I'd love to just do the UI in HTML5/CSS3/JS and have it render; I'm not sure whether that's doable with XAML and IoT.

I know there is Glide, but I’d rather build a UI once for web/gateway and tweak the local copy.
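For what it's worth, a UWP app can host local HTML/CSS/JS inside a XAML WebView control, so the build-the-UI-once idea might be workable that way. I haven't verified how well WebView is supported on the IoT Core images, so treat this as a rough sketch; the page class, the "Ui" folder, and index.html are placeholders of my own:

```csharp
// Rough UWP sketch: host a packaged HTML/CSS/JS UI inside a XAML WebView.
// The class name and the "Ui/index.html" path are made-up placeholders.
using System;
using Windows.UI.Xaml.Controls;

public sealed class WebUiPage : Page
{
    public WebUiPage()
    {
        // Build the page in code so no XAML markup is needed for the sketch.
        var browser = new WebView();
        Content = browser;

        // ms-appx-web:/// resolves to content packaged with the app.
        browser.Navigate(new Uri("ms-appx-web:///Ui/index.html"));
    }
}
```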

Thoughts?

@ Squeebee - are you familiar with the OpenSprinkler system board for the Pi? I looked into this a while back when I thought I wanted a sprinkler system… It would be cool if you ported it over to Win10 on Pi2.

http://rayshobby.net/ospi/

@ ianlee74 - Not familiar, but that is interesting. My main goal is to get past a sprinkler with a web UI; I want to sense when the soil is dry and then turn the water on. I also want to be able to control latching hose valves as well as in-ground sprinklers, so you can do things like control a soaker hose in the garden as well as the in-ground sprinklers, or rig up an above-ground automated sprinkler system. That, and I'd like to see a nice touchscreen controller/gateway.

I will be harvesting ideas there though, thanks! :wink:

@ ianlee74 - Thanks for your input

Personally, my general rule is if you don’t have a really good reason to go with a full-blown OS solution then default to the microcontroller solution.

This is good advice.

I was hoping that the rest of the pros on the board would chime in and discuss the pros and cons of each type of gateway. I am sure further responses will open my eyes to situations I had not considered before. With Win10 IoT being brand new, this seems like a very timely topic.

The IoT architecture intent is pretty clear in the Win10 release - a gateway running Win10 IoT on an RPi or other supported board, with lower-level devices running within the controlled network and reporting to the gateway. Whether that should also extend to a significant UI component on the gateway is a decision I think becomes more interesting when you consider how you may want to present an interface in multiple locations - for example, needing a status display upstairs to know what the CO2 level is on that sensor, or to lock the garage door when you go to bed. Is having a UI device up there running Win10 IoT warranted, or is a NETMF device with an LCD and Glide better, or is web presentation everywhere, using any browser and a generic tablet, better still?

First, I agree completely with what Brett said about Win10 IoT, but now I will blather on about the social aspect of automation command and control and why I dislike all of those options (TV, mobile, wall/desktop display). This is a topic very near and dear to my heart…

Tablets, phones, and wearables are too heavily overloaded with functionality and too 'modal', meaning that you have to navigate too much to get to what you want to do (see also the recent NETMF Fob discussion). TV-based interfaces suffer from the same problem. Wall and tabletop control points are never quite close enough to where you are (unless you overpopulate a space, eating too much power and polluting the ambiance), so while they can be less modal, they are still not quite convenient and still overloaded (doing everything, and requiring too much navigation).

Personally, I think mixed-initiative dialog agents combined with a few control/display points are the answer. The problem there is the complexity around the audio processing. Kinect and Amazon can make it work well with game sounds and music because they own the audio output stream too (so they can cancel it out), but ambient noise is still an accuracy killer. Also, both Kinect and Amazon rely on array-mic subsystems that are only just becoming available and aren't backed with great SDKs yet. You need those in order to do beam-forming and increase the selectivity. Kinect gets an added boost because it knows where people are in the room from its 3D camera and can beam-form to listen just to them.

All of that is before you even get to the point where you try to parse the speech, do higher-order dialog or do speech generation (generating meaningful speech from data - a surprisingly hard problem).

So, I think speech is by far the most efficient (results for a given effort), adaptable, and socially compatible mode for automation systems (especially at home), but it is also very much an open problem. I think speech is the preferable command channel, and speech plus displays are the preferred output channels.


@ mcalsyn - I agree with you in the case of a lot of people/personalities. However, I personally love a completely silent room except for maybe some music. Even having to talk on the phone feels like it robs me of energy. So, I'm not really eager to talk to my computer and have it talk back to me. I love being able to talk to my phone while I'm driving and have it generate texts or whatever when it is just unsafe for me to use a touch pad. But, if I have a choice, I prefer to communicate with my fingers (also while driving occasionally… :wink: )

I think about it like having a butler in the room, with both you and him having a phone in hand, equipped with any set of apps you can imagine. There are just times when speaking to the butler will win out in convenience and efficiency (and ambiance preservation) over using any screen-based mode of communication. Other times, like accessing a weather forecast, just touching your tablet may be the best (and least distracting) way to get the info you want, since a visual presentation will be richer than any spoken description anyway.

Now, to be a good butler, he has to have a good model of your current focus of attention and activity. If you are watching TV, a small ‘toast’ on the TV might be the right way to notify you of an incoming call, rather than him shouting over the soundtrack. Or maybe, based on your movie-watching activity and a variety of other heuristics (like, who is calling), maybe the right action is not to notify you of the call at all and just queue the notification for later.

That model sounds better to me. I think speech input is a great supplement to visual or touch when it happens to be easier but I don’t see it ever replacing visual/touch interfaces.

In my opinion, I would use neither for a commercial product :-[. I suggest you take a look at the Intel Edison. The learning curve can be steep and the documentation can be awful, though it is getting better. Forget about using wiring for controlling sensors; look at the MCU. Why use 433 MHz for communication?

@ kiwi_stu - My first stab at this project used the Intel Edison. I love the little board, but using C/C++ is not my idea of where I want to spend time.
The other reason I moved away from the Edison is that it has WiFi and not RJ-45. There doesn't seem to be an easy way for customers to tell the Edison about their WiFi credentials. Doing so requires a bunch of infrastructure (phone apps, web servers on the board) that I don't want to take on (maybe later, if the product succeeds). So getting an IP address by just plugging an Ethernet cable in is the quickest, easiest way to get the gateway connected. (Please tell me a quicker solution for a WiFi-enabled gateway.)
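For comparison, the wired "just plug it in" path on a NETMF gateway is only a few lines. A rough sketch, assuming the board exposes its onboard Ethernet port as the first NETMF network interface:

```csharp
// Rough NETMF sketch: bring the wired interface up with DHCP so the
// gateway gets an address just by plugging in an Ethernet cable.
// Assumes the first interface returned is the onboard Ethernet port.
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Net.NetworkInformation;

public static class GatewayNetwork
{
    public static void WaitForAddress()
    {
        NetworkInterface nic = NetworkInterface.GetAllNetworkInterfaces()[0];

        if (!nic.IsDhcpEnabled)
        {
            nic.EnableDhcp();   // ask the customer's router for an address
        }

        // NetworkInterface is a snapshot, so re-read it until the router
        // has actually handed out an address.
        while (nic.IPAddress == "0.0.0.0")
        {
            Thread.Sleep(500);
            nic = NetworkInterface.GetAllNetworkInterfaces()[0];
        }

        Debug.Print("Gateway IP: " + nic.IPAddress);
    }
}
```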

As far as 433 goes, I am not really married to it; I just need a cheap, reliable, easy (BLE is not easy) way to connect sensors to the gateway. I am open to your opinion on what would be best.
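If the 433 MHz side ends up being a UART-style transceiver (an HC-12-type module, which is purely my assumption, not something from your design), the gateway end is little more than reading bytes off a serial port. A minimal NETMF sketch:

```csharp
// Minimal NETMF sketch: receive bytes from a UART-style 433 MHz module
// wired to one of the gateway's serial ports. The port name, baud rate,
// and one-reading-per-message framing are assumptions for illustration.
using System.IO.Ports;
using System.Text;
using Microsoft.SPOT;

public static class SensorLink
{
    public static void Start()
    {
        var radio = new SerialPort("COM1", 9600);   // port name depends on the board
        radio.DataReceived += OnDataReceived;
        radio.Open();
    }

    private static void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        var radio = (SerialPort)sender;
        var buffer = new byte[radio.BytesToRead];
        radio.Read(buffer, 0, buffer.Length);

        // Just dump whatever arrived; a real driver would reassemble
        // complete frames before handing them to the cloud uploader.
        Debug.Print(new string(Encoding.UTF8.GetChars(buffer)));
    }
}
```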


@ Brett - My current solution does not require a UI, as the gateway is just passing the motion-detected datetime on to the cloud. So I think Win10 is way overkill.
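Just to put the "passing the datetime on to the cloud" part in concrete terms, on a NETMF gateway that is a single small HTTP request. A sketch, where the endpoint URL and JSON shape are placeholders of mine:

```csharp
// Sketch: push a motion-detected timestamp from the gateway to a cloud
// endpoint over HTTP. The URL and JSON shape are made-up placeholders.
using System;
using System.IO;
using System.Net;
using System.Text;

public static class CloudReporter
{
    public static void ReportMotion(DateTime detectedAtUtc)
    {
        byte[] body = Encoding.UTF8.GetBytes(
            "{\"event\":\"motion\",\"utc\":\"" +
            detectedAtUtc.ToString("yyyy-MM-dd HH:mm:ss") + "\"}");

        var request = (HttpWebRequest)WebRequest.Create("http://example.com/api/motion");
        request.Method = "POST";
        request.ContentType = "application/json";
        request.ContentLength = body.Length;

        using (Stream s = request.GetRequestStream())
        {
            s.Write(body, 0, body.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // Anything in the 2xx range means the cloud accepted the event;
            // retries and buffering are left out of this sketch.
        }
    }
}
```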

I understand your point about C/C++ and Linux. It is a steep learning curve, especially when you start building your own versions of Yocto with BitBake. In the Intel architecture, the Intel Edison is not the gateway solution; there is a specific device for doing this task, called a "gateway". The Intel Edison communicates its sensor data to the "gateway" device, so the Edison(s) are configured to talk to the "gateway", and the "gateway" device is configured with the customer's WiFi credentials, etc.

My biggest issue with building solutions this way is the cost of these "gateway" devices. In my experience of a single industrial implementation, the architecture works great, is secure and scalable, and adding new (Intel Edison) sensors to the environment is relatively painless. But the cost can be high when there are multiple sites.