Ranges and Indices

I'm curious whether we might ever see Ranges and Indices available for use.
I see they were introduced in C# 8 but don't quite understand the prerequisites that keep them out of TinyCLR. This would be nice for some memory-management work I'm doing.

Could it be unavailable due to a requirement for generics? Or is it just something that hasn't been implemented?

The range functionality appears to rely on generics (GetSubArray<T>()), which are not supported by the NETMF/TinyCLR interpreter.
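For context, a minimal sketch of why that dependency exists: on full .NET, range syntax applied to an array is lowered by the C# compiler to a call to a generic BCL helper, so the runtime must understand generic instantiation even if your own code never declares a generic type. (This shows the desktop .NET lowering; it is illustrative, not TinyCLR code.)

```csharp
using System;
using System.Runtime.CompilerServices;

class RangeLowering
{
    static void Main()
    {
        int[] data = { 10, 20, 30, 40, 50 };

        // What you write with C# 8 range syntax:
        int[] slice = data[1..4]; // { 20, 30, 40 }

        // Roughly what the compiler emits under the hood:
        // a call to a generic helper, which is where the
        // generics dependency comes from.
        int[] same = RuntimeHelpers.GetSubArray<int>(data, new Range(1, 4));

        Console.WriteLine(slice.Length == 3 && slice[0] == same[0]); // True
    }
}
```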

And, in my humble opinion, generics and a host of other language features are ‘syntactic sugar’ that bloat your compiled code, which is exactly the opposite of what you want on a resource-constrained system. We can invest a bit more care and effort in the coding on the front end to allow for a smaller interpreter and lower runtime memory and cycle costs. Syntactic sugar is fine on larger scale systems where the cost/benefit equation is different.
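In that spirit, a slice can be hand-rolled with the non-generic Array.Copy, which the interpreter already supports. A minimal sketch — the SubArray helper name here is mine, not a TinyCLR API:

```csharp
using System;

static class Slices
{
    // Hypothetical helper: copies elements [start, start + length)
    // into a fresh array using the non-generic Array.Copy,
    // so no generics support is required of the runtime.
    public static int[] SubArray(int[] source, int start, int length)
    {
        var result = new int[length];
        Array.Copy(source, start, result, 0, length);
        return result;
    }

    static void Main()
    {
        int[] data = { 10, 20, 30, 40, 50 };
        int[] slice = SubArray(data, 1, 3); // { 20, 30, 40 }
        Console.WriteLine(slice.Length); // 3
    }
}
```

The cost is a little front-end effort per element type, which is exactly the trade-off described above.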


I concur; adding high-level stuff is just going to bloat the codebase. We’re writing embedded systems here, not web apps. :cry:

Perhaps I'm not understanding something here, but setting generics aside: say a feature such as this were implemented but you chose not to use it, how would that affect your compiled size?

What I'm asking is: is the implementation baked into the code size somehow even when the feature goes unused? If it is not, then what would be the harm in including it for those who don't care that their code size gets bloated?

The issue with generics is that they need to be baked into the type system of the interpreter, and there are a bunch of new CIL (Common Intermediate Language) opcodes that the interpreter would have to understand. All of that represents native (C/C++) code that has to live in the interpreter whether you use it or not.

Anything that grows the interpreter has a permanent cost. Compiled .NET code only costs you if you include it in your program. But what we are talking about here is an increased size of the interpreter, which costs everyone, whether they use generics or not.

You said ‘notwithstanding generics’, but Range requires generics support. Also the Range code appears to be part of mscorlib, which also costs you whether you call it or not because assemblies are pulled in in their entirety.


I see; I wasn't sure there was an interpreter here, as I vaguely recalled that C# doesn't have one and everything gets compiled to IL.

Now… if only someone would explain to me what the ‘native’ layer is :smirk:

C# does not require an interpreter. C# (and F#, etc.) compilers generate CIL code, which is a kind of machine-independent assembly language. Most .NET runtimes then compile and further optimize that code right before running it for the first time. But another option, instead of compiling the CIL, is to run it directly with an interpreter. That's what TinyCLR does.
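To make that concrete, here is a trivial C# method with, in a comment, roughly the CIL the compiler produces for it. A JIT runtime compiles those opcodes to native machine instructions; an interpreter like TinyCLR's walks them one at a time:

```csharp
using System;

class CilDemo
{
    // The C# compiler turns this method into CIL roughly like:
    //   ldarg.0   // push first argument onto the evaluation stack
    //   ldarg.1   // push second argument
    //   add       // pop two values, push their sum
    //   ret       // return the top of the stack
    // A JIT compiles these opcodes to native ARM/x64 code;
    // an interpreter executes them directly, one by one.
    static int Add(int a, int b) => a + b;

    static void Main() => Console.WriteLine(Add(2, 3)); // 5
}
```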

The interpreter and all the low-level security, driver, and bootloader code is referred to as ‘native’ code because it was written in C/C++ and compiled directly to native ARM binary code.

Huh, TIL. I just assumed that someone a lot smarter than me was cross-compiling the CLR to ARM.

I’m assuming that this is why including our own C++ libraries is not supported?

Dunno if they're smarter or just have more time on their hands, but there is a dotnet project that does AOT compilation through LLVM to x64 and macOS ARM64 here

Some folks (myself included) have tried to use this LLVM tie-in to also cross-compile to ARM32 with some success. There's a similar dotnet runtimelabs project to compile to webasm, and there are ARM32-compatible webasm engines, but they are memory-hungry at runtime.

The dotnet AOT compilation would give you the full language and blazing runtime speed, plus binary extensibility, but I think it’s a year or more of work by a half-dozen dedicated folks, and I have neither the other five folks nor a year of free time (nor a business plan).

I got STM32 chips to run webasm code, but again, one proof-of-concept run is a long way from a fleshed-out build and debug ecosystem. The attractive things about webasm are the support for multiple languages and the ability to debug without special hardware, but it is still interpreted bytecode. This is the engine that I was using, plus ZephyrOS under that.

Bottom line is that TinyCLR strikes a pretty good balance of features, and getting to true native execution is a huge mountain to climb.

If you want another data point, look at Wilderness Labs and how long they have struggled to make headway on these same problem spaces (AOT, debugging, etc).

Thanks for the information; this is interesting. That being said, I'm happy with where TinyCLR is at, and I'm not knowledgeable enough to do better; I just like learning about this stuff. I do have a need to do high-speed vector and matrix math at some point, but as a stepping stone from regular software to full-blown embedded development, TinyCLR is great. It may not be the fastest, but it's fast enough for what I need it for (usually).

Knowing this, is there a TL;DR version of how the interpreter works that I could read somewhere? It'd be good to get my head around it so I can engineer around it and avoid the things it would struggle with.