MBED compared to .NET Micro

I’m interested in how MBED compares to .NET Micro. I’ll value any comments but I’m particularly interested in the following questions:

Q1) How does the hardware compare to the GHI .NET Micro offerings? For example, is there an MBED controller board similar to the G400, with 1+ MByte of program memory, 90+ MBytes of RAM and 6 (or more) serial ports?
Q2) How does the development environment compare to Visual Studio?
Q3) How mature are the drivers? For example, the GHI implementation of the .NET Micro serial driver is now a mostly painless way to get a buffered, interrupt driven serial port driver. There were a few really painful hiccups in the past, but my scars from those are mostly healed. Is there something similar in MBED?
Q4) How helpful is the user community? I’m continually amazed at how quickly I can get most of my sometimes complex, sometimes naive questions answered on the GHI Forum. Is the MBED forum as effective?
Q5) How stable and bug free is the code base?


You could give it a try with this inexpensive product


There are all sorts of mbed boards available. Do you need all that RAM and Flash?

The online editor has its own strengths and drawbacks (read: no debugger), but you could use the mbed offline options.

mbed has a decent community, tons of code libraries, and is supported by ARM and several silicon and board vendors.

C++ versus C#… if that is an issue for you.

Check the mbed site. It should tell you what boards are available.

I’ve been looking at the mbed site and haven’t found anything like the G400 except for a couple of high-end boards that need too much power. I don’t need 90+ MB of RAM, but I do need 2 or 3 MBytes. I got the impression from reading the mbed website that there was a debugger, but maybe I got it wrong. If there isn’t a debugger, that is pretty much a deal breaker. And the thought of having to deal with C++ gives me a rash. I don’t have any compelling need for mbed, but I do want to know what my options might be.

Even a slow mbed board (Cortex-M0, <50 MHz) is going to run circles around a G400, performance-wise. The overhead of the NETMF interpreter on performance-critical code can conservatively be described as “crippling”.

Consider that most 3D printers out there run on 8-bit, 16 MHz CPUs with 8K of RAM. With that (and without an FPU), they do all the path planning, movement calculation, and real-time control of every function of a 3D printer (and generally drive an LCD display at the same time).

You can get a Nucleo board right now with a built-in hardware debugger, and even Ethernet (jack and all) for <$25.

I definitely agree with Godefroi (which is strange, but true), though maybe not with the ‘crippling’ comment. You’ll only find it crippling if you made poor choices. You choose netmf as a way to simplify and accelerate development. But you choose mbed, FreeRTOS, and other similar native-code options to improve runtime performance. You either spend your money on development-time resources or on run-time performance.

Until/unless Llilum comes to be, we aren’t going to see the C# language and the rich and easy to use API surface combined with native-code speed, all in one package.

Until then, you need to amortize the cost of increased development time (C/C++) or increased hardware cost (.netmf) over the full run of devices you expect to make/sell.


I am going to have to disagree here. On a full-blown design that uses graphics and displays, the NETMF system will run well. Maybe slightly slower than native, but the development savings will more than make up for the performance difference. If we are talking about blinking an LED, then yes, native is faster, but then assembly is even faster still!

It is all about finding the right tool for the job. If I were to design a 3D printer, for example, I would use a small micro that I program natively to control the steppers, but then have a NETMF system for networking, graphics and the user interface. NETMF can’t handle hard real time, and getting anything serious done in native code (mbed) is much more difficult than in NETMF.


I think we’re saying pretty much the same thing. Choose the right tool for the job, and performance vs cost vs ease of development is core to that choice.

I was clear.

When it comes to networking, however, remember that if SSL is important to you, NETMF might not (currently) be your best bet. Or is it working now?

You are replying to yourself, so I assume you (however many there are of you) are in agreement.

If netmf is crippling, in any way whatsoever, then you chose poorly.

Folks who cannot choose technologies fit to the requirements of the work should seek work in some other field. Sure, there are limitations. Knowing and correctly applying the available tools is part of the job.


Gus disagreed. Back when GHI sold mbed boards, he’d probably have agreed with me, but now, he has to disagree :slight_smile:


I thought these are still in production :slight_smile:

I thought the GHI-mbed collaboration fizzled out? Something about ARM wanting to go a different direction, or something? Either way, I’ll modify my statement thus, then: “back when mbed boards were the new GHI hotness, Gus would’ve agreed with me, but now that Octavo stuff is, he disagrees.”

@ godefroi - you must know everything about me :o

Open Source Gus!


Thanks for the input. Much of this thread was really helpful. There is no debate about why I got involved in .NET MF: I needed the most effective embedded systems development ecosystem for a relatively inexperienced programmer like me. Even with the hiccups I’ve experienced along the way, there is no doubt in my feeble little mind that .NET MF met this requirement. Based on what I know now, I don’t think there is anything on the planet in the same league as far as effective development ecosystems go.

All that being said, if there were an MBED G400 look-alike, I could probably port my code to C++ and get the performance benefits. It does look like MBED provides robust IO drivers and libraries similar to .NET MF, so that doesn’t seem to be an issue. However, I still haven’t found an MBED board that comes close to the features of the G400, particularly with respect to memory. Having a ton of flash and RAM makes it way easier for me to write robust, maintainable, extensible, easy-to-understand code and not worry about allocating, deallocating and patching memory leaks.

Thanks again for all the input.

@ Gene - Having lots of memory only delays your memory leak crash; you still can’t ignore them.

@ godefroi - I didn’t mean to imply I was ignoring memory leaks. But having lots of memory allows me to do things like 1) declare a lot of stuff static to reduce/eliminate running the GC, 2) have a large, deep queue that my various drivers can all write to without my having to service it very often, 3) keep track of a lot of system status information to help prevent or post-diagnose problems, and 4) do some fairly serious signal processing when the need arises. If .NET Micro has memory leaks under the hood, I’m still very much up the proverbial creek.