The biggest reason to go with NETMF is the development speed. You get the best IDE that exists, an integrated debugger on every chip, multithreading, events, a RAM manager, lambda functions and more. Besides, it’s all in C# (or Visual Basic), so if you are a C# (or Visual Basic) programmer, you get going immediately. There’s no barrier between MCU and PC programming: you can use the same IDE and the same plugins; I even use the exact same code for DLLs (the DLLs still have to be compiled separately).
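To illustrate how familiar it feels, here is a minimal NETMF sketch using a background thread and a lambda, exactly as you would on the desktop (assuming the standard `Microsoft.SPOT` and `System.Threading` namespaces from the NETMF SDK):

```csharp
using System.Threading;
using Microsoft.SPOT;

public class Program
{
    public static void Main()
    {
        // Spin up a worker thread with a lambda, just like desktop .NET.
        new Thread(() =>
        {
            while (true)
            {
                Debug.Print("tick");   // shows up in the Visual Studio output window
                Thread.Sleep(1000);
            }
        }).Start();

        // Keep the main thread alive.
        Thread.Sleep(Timeout.Infinite);
    }
}
```

You can set a breakpoint inside that lambda and single-step it on the chip from Visual Studio, which is exactly the point.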
However, all that development speed is paid for in execution speed. NETMF is very slow compared to bare-metal ARM. So if your program calculates something sophisticated, and does so all the time, you’d better not choose NETMF. If your program needs heavy calculation only occasionally (say, less than 10% of the time), then RLP+NETMF is a very convenient choice: offloading the intense processing to RLP is not easy, but once it’s done, it’s used as a simple function and you can forget about it.
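The managed side of that pattern ends up looking roughly like this. This is a rough sketch only: the class and method names are my approximation of GHI’s `GHIElectronics.NETMF.Native.RLP` API, and the exact signatures differ between SDK versions, so treat everything here as an assumption to check against the GHI documentation.

```csharp
// Hypothetical wrapper around a native routine loaded via RLP.
// The ELF image is compiled separately from C with GCC for the target MCU.
using GHIElectronics.NETMF.Native;

public static class NativeFilter
{
    static RLP.Procedure _filter;   // handle to the native function

    public static void Init(byte[] elfImage)
    {
        // Load the native image and look up the exported symbol.
        // (Approximate API; names may differ in your SDK version.)
        RLP.LoadELF(elfImage);
        RLP.InitializeBSSRegion(elfImage);
        _filter = RLP.GetProcedure(elfImage, "fir_filter");
    }

    // After the one-time setup, the native code is called
    // like any ordinary C# method.
    public static int Process(byte[] samples)
    {
        return _filter.Invoke(samples);
    }
}
```

The painful part is the setup and the C side; once `Init` has run, the rest of the program just calls `NativeFilter.Process` and never thinks about native code again.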
Flash and RAM are not an issue, I think; there are plenty of chips with external storage around. Besides, some chips already have 2 MB of flash, so that leaves plenty of space. Even the 300 KB left on the Cerberus is a huge amount for an embedded application, isn’t it?
Oh, and another tricky thing. In the NETMF world, every programmer is heavily dependent on code written somewhere else, much of it closed-source. So if you encounter a bug, you cannot fix it, even theoretically (and sure, digging through a TCP stack for a bug is not something everybody does, even where the code is available). Sometimes support for features is simply dropped: GHI did that for application protection, TCP debugging, and PPP in 4.2. They are going to reintroduce some of those features soon, but that doesn’t change the fact that it happened. So, be prepared.