Introducing TinyCLR OS: a new path for our NETMF devices

I wonder if the next announcement is TinyCLR OS running on the Octavo Module…

Why on Earth would you want that???

@ ianlee74 - Total Control.

Question:

Can I stream data to a GHI F20-uSD module from a GHI G80TH using the TinyCLR OS? (i.e. is support for the GHI filesystem modules already there?)

(I’m currently downloading VS2017 RC and the TinyCLR files, so I haven’t explored what is available yet in the TinyCLR namespaces.)

F20 accepts simple serial commands that you can send from any system, including TinyCLR.
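To make that concrete, here is a minimal, hedged sketch of pushing bytes out a UART from C#. It assumes the NETMF-style `System.IO.Ports.SerialPort` API (TinyCLR's own UART API may differ), and the port name "COM1" and 115200 baud are placeholder assumptions. The actual F20 command strings must come from the F20 datasheet; the payload here is deliberately a placeholder, not a real command.

```csharp
// Hedged sketch: streaming bytes to a serial module such as the F20.
// Assumptions (not from this thread): NETMF-style System.IO.Ports.SerialPort,
// port "COM1", 115200 baud. Consult the F20 datasheet for its real command set;
// the string passed in below is a placeholder.
using System.IO.Ports;
using System.Text;

public static class F20Logger
{
    public static void Send(string placeholderCommand)
    {
        using (var uart = new SerialPort("COM1", 115200, Parity.None, 8, StopBits.One))
        {
            uart.Open();
            byte[] bytes = Encoding.UTF8.GetBytes(placeholderCommand);
            uart.Write(bytes, 0, bytes.Length);
        }
    }
}
```

Because the module is driven by plain serial traffic, the same approach works from NETMF today and from TinyCLR once you swap in its UART API.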

@ Gus - Great :slight_smile: If only I had one, I could use that to log telemetry on my rocket :smiley:

Edit:

I succeeded in getting VS2017 RC installed, along with TinyCLR, and I can happily build TinyCLR apps. (No usable hardware for now, but that will change sometime soon.)

In other news, VS2017 looks amazing. At least the very clear integration with so many *fun* things looks amazing. The Visual Studio Installer tool is a nice upgrade.

I think I will just be exploring these for the next few “boss-isn’t-coming-into-work” days :smiley:

I’m excited about the TinyCLR/VS2017 combination.

Has anyone performed any performance benchmarks comparing TinyCLR with the latest NETMF/GHI SDK?

@ leforban - I wouldn’t do that yet. You would be comparing a NETMF production release to an experimental TinyCLR release.

It would still be interesting :thinking:

I wouldn’t expect that there would be any significant differences. It’s still NETMF, still running the interpreter, just some changed-up APIs. Gus, correct me if I’m wrong here.

@ godefroi - different native compiler, different managed compiler… Just for a start.

That’s why it is of great interest for those who need low latency and/or have more or less DSP-oriented applications.

@ leforban - “different managed compiler” is probably not actually a net performance gain, not at this point anyway. Roslyn is still young, where the old compiler had more than a decade of work on it. The new native compiler would be a safer bet for a performance gain, but it’s evolutionary, not revolutionary. DSP wasn’t remotely possible before, and that’s not going to change until and unless the interpreter goes away.

@ godefroi - Minor correction: I do all kinds of real time digital signal processing (DSP) on the G400 and previously on the Spider board. Luckily, the signals I have to deal with are all less than about 10 Hz. You can do all kinds of real time DSP on any processor you care to use if the signal bandwidth is low enough.

The only reason I bring it up is that I’m hoping .NET Micro and/or TinyCLR take over the entire embedded world (or at least 75% of it), and I don’t want people to get the wrong impression that TinyCLR “can’t do DSP” or “can’t do real time”. It can do both quite nicely for a vast array of real-world tasks. Not sonar array processing or real-time video image processing or microsecond interrupt handling, but those kinds of tasks are a pretty small fraction of the whole embedded world.
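As an illustrative sketch (not code from this thread), the kind of lightweight real-time DSP that stays comfortable on a managed runtime is something like a single-pole low-pass IIR filter; at, say, a 10 Hz signal sampled at 100 Hz, the per-sample cost is a multiply and two adds. The cutoff and sample rates below are assumptions chosen for the example.

```csharp
// Sketch: single-pole low-pass IIR filter, cheap enough for real-time use
// on low-bandwidth signals even under an interpreter.
public class LowPassFilter
{
    private readonly double _alpha; // smoothing factor in (0, 1]
    private double _state;
    private bool _primed;

    // cutoffHz and sampleHz are caller-chosen assumptions, e.g. (10.0, 100.0).
    public LowPassFilter(double cutoffHz, double sampleHz)
    {
        double rc = 1.0 / (2.0 * System.Math.PI * cutoffHz); // filter time constant
        double dt = 1.0 / sampleHz;                          // sample period
        _alpha = dt / (rc + dt);
    }

    // Call once per sample; returns the filtered value.
    public double Update(double sample)
    {
        if (!_primed) { _state = sample; _primed = true; } // seed on first sample
        else _state += _alpha * (sample - _state);         // exponential smoothing
        return _state;
    }
}
```

Feeding it a constant input converges to that constant, and the per-sample work is fixed, so at 100 samples per second even an interpreted runtime keeps up with plenty of headroom.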

Happy New Year

4 Likes

@ Gene - Thank you :slight_smile:

@ Gene - You’re correct, of course, that you can do anything if a) you’re willing to wait long enough, b) you’re willing to use a 400 MHz microcontroller where a 16 MHz 8-bit part might otherwise be sufficient, or c) you’re doing things so slowly that just about any hardware will be sufficient.

I doubt 75% of the embedded world is willing to operate under those constraints, though, and that’s why I think that native compilation is unquestionably the way forward.

1 Like

@ godefroi - I disagree. .NET on PCs was a bad idea to many a few years ago; now it is the way to go.

Similarly, using C on micros was a bad idea, because you should use assembly!!!

Lose performance, gain everything else. I do native development every day, and I do hate it every single time I use it. This is my real-life experience, not a theory.

3 Likes

@ Gus - The only arguments against .NET I ever heard were from those who believed it was interpreted. Of course, it never was; it was JIT compiled from the very beginning. If it were interpreted, the criticisms would’ve been correct; there would be whole classes of performance-sensitive problems for which it would not have been an appropriate solution (unless, again, you’re willing to solve the problem very slowly, or you’re willing to overspec your hardware by an order of magnitude or two).

You’re not doing your customers any favors by keeping hopes alive on significant performance increases. The reality is that, until native compilation is implemented, performance may vary by a few percentage points one way or another (or may even shift dramatically when functionality currently implemented in managed code is moved into native code), but no revolutionary changes are coming. You know that the interpreter is the limiting factor in performance, and until it’s gone, performance generally is going to look pretty close to how it does today. You owe it to your customers to be honest about that.

@ godefroi - I already said: lose performance, gain everything else. Our customers, including commercial customers, would rather be more productive than anything else. In case you didn’t notice, you are the only one here against this community’s vision. We all want to see a future for TinyCLR and are very excited about it. But you can use whatever fits your needs.

@ Gus - Absolutely. NETMF (and therefore even more so TinyCLR) has enormous strengths, and they should be emphasized, because while not all of them are unique in isolation, they are unique in this specific combination. It’s an immensely useful platform, but performance is not one of its particular strengths. However, when one of your customers asks whether TinyCLR will outperform the latest NETMF SDK,

The honest thing to say is, “No, because we don’t expect there to be significant differences in performance. We do, however, plan to provide X feature and Y feature and Z feature that were not previously available.” When you instead hint at big performance gains, you’re giving false hope and setting people up for disappointment. We’re all grown-ups here; we can handle the truth. If we were looking for bleeding-edge performance, we’d be using other hardware and software combinations, after all.