Argon and Accelerated C#

James at Love Electronics has started a discussion on implementing accelerated C# for some near-native speed capability.
He is looking for community feedback…

[quote] Be able to mark certain methods or operations as Accelerated, for example complex algorithms such as those involved in DSP, IMU processing, and AI (SLAM).
Only require a subset of the .NET Micro Framework, basically working with value types (structures, integers, bytes, floats; String could possibly be implemented as a value type).
Threading would be a plus, but much more complex to implement (for example, allowing the Accelerated method to create threads internally, separate from the NETMF threads).
I am only thinking about creating atomic operations (call the Accelerated C# operation, return) rather than having long-running async operations.
Device drivers (such as SPI, I2C, etc.) should be implemented using existing Interop methods; Accelerated C# is only to accelerate business logic (application-specific logic).[/quote]

Anyone interested in adding their input can go here: http://www.loveelectronics.co.uk/forum/index.php?/topic/45-accelerated-c-discussion/

Cheers, Hugh

We can discuss here if you guys prefer. We do not mind.

It's amazing how open GHI is to this.

Either or both, I guess; it's a community question after all. It may be better to discuss on the Love forum and post a synopsis here, since the discussion started over there. I don't know what the etiquette is, though :slight_smile:

I think more people will participate here.

Here or there doesn't matter; what is important is that there is community input. So if anyone has thoughts to add, read the post on the Love forum and then comment here.

Are we talking about a selective JIT or AOT compiler? Marking specific methods as JITable?

I think it’s a fantastic idea. It solves the major problems with the JIT and AOT approaches that were experimented with in the early days of NETMF.

Corey Kosak over at the Netduino community has done some extremely interesting work in this area as well.

Gus,

Firstly thank you for the welcome.

Godefroi,

We are talking about performing AOT compilation on specific methods marked as [Accelerated]. At the moment I am working only on static methods, meaning they are designed to take a piece of logic (such as AHRS code) and remove the overhead of the interpreter for that specific piece of code.

The reason I am only looking at allowing value types is that when you start including things like System.IO objects and other CLR functionality, there is a lot more that has to be AOT compiled, and much of it is unsuitable for AOT compilation. In particular, code that calls from C# into Interop code will definitely not be supported.
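To make that restriction concrete, here is a hedged sketch of what would and would not qualify. The `[Accelerated]` attribute name comes from the proposal above; the `Magnitude` and `CountBytes` methods are hypothetical illustrations, not part of the actual design:

```csharp
// Fine to accelerate: operates purely on value types (floats),
// so the AOT compiler only has to translate arithmetic IL.
[Accelerated]
public static float Magnitude(float x, float y, float z)
{
    return (float)System.Math.Sqrt(x * x + y * y + z * z);
}

// NOT a candidate: allocates a CLR reference type and calls into
// framework code backed by Interop, so the interpreter and runtime
// services would still be needed.
public static int CountBytes(string path)
{
    using (var stream = new System.IO.FileStream(path, System.IO.FileMode.Open))
    {
        return (int)stream.Length;
    }
}
```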

For instance:


public struct EulerAngles
{
  public float X;
  public float Y;
  public float Z;
}

public static class AHRS
{
  [Accelerated]
  public static EulerAngles Calculate(float accX, float accY, float accZ)
  {
    EulerAngles myAngles = new EulerAngles();
    // Complex AHRS code that is very slow in NETMF.
    return myAngles;
  }
}

You would use this as normal:


public void Program()
{
  while(true)
  {
    GetSensorReadings();
    EulerAngles orientation = AHRS.Calculate(parameters...);
  }
}

You will see no change in your application, other than losing the ability to step into the Calculate method with Visual Studio.
The Calculate method can use EulerAngles because it is a struct (a value type).

I hope this makes sense and at least gives points for discussion.


Wow, despite the limitations of this approach I would love to have this capability.

Please post some time benchmarks comparing non-accelerated code to accelerated code when you get a suitable point in your experimentation.
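A minimal timing harness for such a comparison might look like the sketch below. It assumes NETMF's `DateTime.Now.Ticks` for timing (NETMF has no Stopwatch class), and `AHRS.Calculate` with the hypothetical argument values stands in for whichever method gets the `[Accelerated]` treatment:

```csharp
// Hypothetical micro-benchmark: run the same method many times and
// report elapsed ticks. Run once with [Accelerated] applied and once
// without, then compare the two totals.
public static long TimeCalculate(int iterations)
{
    long start = System.DateTime.Now.Ticks;
    for (int i = 0; i < iterations; i++)
    {
        AHRS.Calculate(0.1f, 0.2f, 0.3f);
    }
    return System.DateTime.Now.Ticks - start; // ticks of 100 ns
}
```

A large iteration count helps keep the coarse NETMF clock resolution from dominating the measurement.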

NETMF really needs this. While it would not be faster than RLP, it would be easier to use.

Amen!

Yes, it is different from RLP. RLP allows you to compile C libraries that you can call from C# (for instance, building any number of existing C libraries and using them off the shelf), but the main use of RLP that I’ve seen (at least on this forum) is people wanting to accelerate a bottleneck in their code, or a piece of code that does not need to be managed.

This is the perfect solution for those kinds of scenarios.


Are you planning to contribute this to the core NETMF distribution?

I’d be interested in benchmarks for this… a couple of questions:

- Is this something that happens at run time or at compile time?
- Doesn’t the JIT compile IL code into native code on ‘first sight’ anyway? NGen’d code is not necessarily faster than JIT’d code; only the initial startup is faster.
- You are using markup/attributes, which I think means you’ll be invoking code that uses reflection. How costly will that be?
- What about dependencies? This seems as though it would only apply to the most basic code without other .NET library dependencies, correct?

If this is something that happens at run time, it seems like it will come down to who is faster, the JIT or interpreting the attributes; both lead to native code compilation/execution.

Interested in hearing more and seeing some benchmarks.

.NETMF does not currently implement a JIT; if you look at the firmware there are indications of an earlier JIT implementation, but it is not used.

Ah… well then, now I see the rationale for something like this! (…still learning the nuances of .NET MF)

Also James (James? I think Love Electronics is James?) said it’s AOT (Ahead Of Time), so there’s no reflection here. The attribute is simply there so the AOT compiler knows what to compile and what not to compile.
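If that’s right, the attribute itself can be a plain marker with no runtime behaviour: the AOT tool scans the assembly for it at build or deploy time, and nothing on the device ever reads it via reflection. A minimal sketch (the exact class name and allowed targets are assumptions, not confirmed by James):

```csharp
using System;

// Marker only: consumed by the AOT compiler when it scans the
// assembly; never inspected at run time on the device.
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public sealed class AcceleratedAttribute : Attribute
{
}
```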

Hi Guys,

Lots of great questions here; however, what I’m really after is input on what people want from this functionality, rather than how to implement it.

You need to start with what people require, rather than just going head first into implementation with no plan as to what you want to achieve.

At this stage I’m unsure of the route to use to improve the performance, only some vague ideas.

This technology will be similar to GHI’s RLP feature in terms of being a premium feature for their boards.

Cheers,
James

I think that’s great, but unfortunately, it puts it way outside my price range :frowning:

How do you mean?