Making GHI libraries more versatile

Extensions for NETMF provided by the GHI libraries are really great, but I wish I could modify them. I cannot recompile the GHI libraries myself because they are closed-source. But maybe the GHI team could allow inheriting their classes and making the desired modifications that way? For example, you could get rid of the NotInheritable modifiers and use Protected instead of Private (I use VB.NET terminology here). That way we could write derived classes that optimize, adjust, and extend functionality.

Most libraries are native code with thin managed wrappers, so even if the libraries were open, it would not be very beneficial in most cases.

@ Gus - I think it would be. For example:

  1. I could avoid any unnecessary checks (if I know what I am doing, all they do is waste CPU cycles and hence slow down the main application) like these:
if (this.clockPin == Cpu.Pin.GPIO_NONE)
{
	throw new ArgumentException("clockPin cannot be Cpu.Pin.GPIO_NONE.", "clockPin");
}
if (this.dataPin == Cpu.Pin.GPIO_NONE)
{
	throw new ArgumentException("dataPin cannot be Cpu.Pin.GPIO_NONE.", "dataPin");
}
if (writeOffset < 0)
{
	throw new ArgumentOutOfRangeException("writeOffset", "writeOffset must be non-negative.");
}
if (writeLength < 0)
{
	throw new ArgumentOutOfRangeException("writeLength", "writeLength must be non-negative.");
}
if ((writeOffset + writeLength) > writeBuffer.Length)
{
	throw new ArgumentOutOfRangeException("writeBuffer", "writeOffset + writeLength must be no more than writeBuffer.Length.");
}
if (readOffset < 0)
{
	throw new ArgumentOutOfRangeException("readOffset", "readOffset must be non-negative.");
}
if (readLength < 0)
{
	throw new ArgumentOutOfRangeException("readLength", "readLength must be non-negative.");
}
if ((readOffset + readLength) > readBuffer.Length)
{
	throw new ArgumentOutOfRangeException("readBuffer", "readOffset + readLength must be no more than readBuffer.Length.");
}
  2. I could rewrite methods that use the ParamArray modifier. https://www.ghielectronics.com/docs/26/memory-usage recommends: “Also avoid creating and freeing objects (…) it introduces a lot of memory fragmentation and is not suitable for frequently called code paths”. A ParamArray parameter allocates a new array every single time the method is called.
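To illustrate the allocation cost (a hypothetical sketch — the class and method names below are made up, not an actual GHI API): a params (ParamArray in VB) method makes the compiler allocate a fresh array at every call site, while fixed-arity overloads avoid that allocation entirely.

```csharp
using System;

public static class ParamArraySketch
{
    // Every call like Write(0x01, 0x02, 0x03) compiles to
    // Write(new byte[] { 0x01, 0x02, 0x03 }) -- a new heap array per call.
    // The array is returned here only so the allocation can be observed.
    public static byte[] Write(params byte[] data)
    {
        return data;
    }

    // Fixed-arity overloads avoid the per-call allocation; overload
    // resolution prefers them over the params overload when they match.
    public static void Write(byte b0) { /* transfer one byte */ }
    public static void Write(byte b0, byte b1) { /* transfer two bytes */ }
}
```

Two calls to the params overload with the same arguments receive two distinct arrays, which is exactly the fragmentation-inducing churn the memory-usage document warns about.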

I could think of more examples, but I am pretty sure a lot of speed-related optimization would be possible if deriving from these classes were allowed.

I wonder how much time is spent on these checks compared to the rest of the method, and whether removing them is a good idea.

My point here is: it would make your libraries more versatile. Developers could decide whether or not to use that option, but at least they would have one.

For example, I would primarily be interested in reducing memory fragmentation, and hence the frequency of garbage collection, so I would go the extra step and derive classes wherever I feel it would help achieve that goal.

2 Likes

+1. Very rarely is it a good idea for a vendor to give less control to the developer. This definitely isn’t one of those cases.

@ iamin - Making the libraries more kind to derivation is something we have been thinking about on and off for a while now. It is more complicated than just marking things as virtual (Overridable in VB), so it isn’t something we can just quickly do for the next release. We would need to document them, document their interaction with other members, review any assumptions we make on visibility and call order, impose restrictions on when we can change them, and so on. That said, most, if not all, Dispose methods should properly implement the IDisposable pattern, so they at least can be overridden by you if you want to add functionality only.
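For reference, the pattern being alluded to is the standard .NET dispose pattern, where the public Dispose() delegates to a protected virtual Dispose(bool) that derived classes can override (the class name below is a made-up placeholder, not a GHI type):

```csharp
using System;

public class DeviceSketch : IDisposable
{
    private bool disposed;

    public bool IsDisposed { get { return disposed; } }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    // The protected virtual overload is the extension point: a derived
    // class overrides it to release its own resources, then calls
    // base.Dispose(disposing) so the base cleanup still runs.
    protected virtual void Dispose(bool disposing)
    {
        if (disposed)
        {
            return;
        }
        if (disposing)
        {
            // release managed resources owned by this class here
        }
        disposed = true;
    }
}
```

Because Dispose(bool) guards against repeated calls, disposing twice is harmless, and a subclass gets a safe, documented hook without the base class exposing any other internals.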

Given that it is a public API, we cannot remove parameter validation (outside of redundant ones of course), but allowing the developer to manually override them is an option.
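One possible shape for such an opt-out (an assumption on my part, not an announced design — the class names are hypothetical): route all parameter checks through a protected virtual hook, which a derived class that trusts its callers can override with an empty body.

```csharp
using System;

public class SoftwareI2CBusSketch
{
    public void WriteRead(byte[] writeBuffer, byte[] readBuffer)
    {
        ValidateArguments(writeBuffer, readBuffer);
        // ... perform the actual transfer ...
    }

    // Hypothetical extension point: all validation lives here, so the
    // public API stays checked by default.
    protected virtual void ValidateArguments(byte[] writeBuffer, byte[] readBuffer)
    {
        if (writeBuffer == null) throw new ArgumentNullException("writeBuffer");
        if (readBuffer == null) throw new ArgumentNullException("readBuffer");
    }
}

// Opt-out, at the developer's own risk: skips every check.
public class UncheckedI2CBusSketch : SoftwareI2CBusSketch
{
    protected override void ValidateArguments(byte[] writeBuffer, byte[] readBuffer) { }
}
```

The default behavior stays safe for everyone, while the cost of the checks becomes a deliberate, visible choice in the derived type.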

I can’t think of any classes off the top of my head that were marked as sealed or NotInheritable (I can look Monday). Which did you come across?

It isn’t a very clean solution, but you can always use reflection to call the private native stub functions and essentially reimplement the library. We obviously don’t support that method since the internal implementation can change from release to release and there may be assumptions we make in our implementation that you don’t know about when calling them.
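For completeness, invoking a private method via reflection looks roughly like this (desktop .NET shown; NETMF’s reflection subset is more limited, and the stub class and method here are stand-ins — the real GHI stub names and signatures are not known to me):

```csharp
using System;
using System.Reflection;

public class WrapperSketch
{
    // Stand-in for a private managed-to-native stub.
    private int NativeReadStub(int register)
    {
        return register + 1; // placeholder behavior
    }
}

public static class ReflectionCaller
{
    // Locate and invoke a private instance method by name.
    public static object CallPrivate(object target, string name, params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(
            name, BindingFlags.Instance | BindingFlags.NonPublic);
        return method.Invoke(target, args);
    }
}
```

As noted above, this bypasses whatever assumptions the wrapper makes before calling its stubs, so anything built this way can silently break on the next SDK release.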

1 Like

That would be really great, but I don’t think it will happen. What you have described is a tremendous amount of work that requires a lot of human resources.

I completely understand that parameter validation is a must in general cases. How can you allow developers to manually override parameter validation?

I think I made a mistake here. By default, properties and methods are NotOverridable, so you would need to add the “virtual” (Overridable) modifier where necessary.

Yeah, a possible but not “elegant” solution.

I think developers would be willing to take a risk here. As you plan to release only one or two stable SDKs per year, I don’t think it could be an issue at all.

[line]
Let’s take your class [em]RuntimeLoadableProcedures[/em] for instance. IDisposable is implemented. It looks like all you would have to do to let us override the method [em]Invoke[/em] is to add the “virtual” modifier and change “private” to “protected”.

Agreed, and I think that all of that documentation is unnecessary. The only thing that matters is the interface (what goes in and what is expected to come out). A developer knows that once he starts changing things in between, he needs to test that everything else still works as expected. If there are side effects beyond changing the output of the function, then the function is doing too much.

Not quite. In many cases this works well, but it is not possible to say where you cross the line and unexpected and difficult-to-debug behavior creeps in.

Basically, this approach uses subclassing for “unplanned code reuse”: you have something, and you modify it in ways that were never intended. For quick maker-style hacks by people who know the code base very well, and where the resulting problems don’t have consequences, this is OK. But it is not the engineering mindset, nor the kind of developer, for which .NET was developed.

Many years ago, we coined the term “semantic fragile base class problem”. My friend, former colleague, cofounder of Oberon microsystems, and developer lead for Microsoft Azure Stream Analytics, Clemens Szyperski (Microsoft Research – Emerging Technology, Computer, and Software Research), has analyzed this problem in depth in his book “Component Software - Beyond Object-Oriented Programming”.

Basically, objects can have state (otherwise they are pure function collections), so at least some methods are expected to change the state of the object itself, not just produce output (as result values, or via ref or out parameters). This state (often private) would have to be documented well enough to prevent any re-entrance surprises. For example, a method A may internally call another method B of the same object (this is usually not documented), which may have been overridden. Before the call of B, A may violate the object’s invariants - temporarily of course, as it will have to reestablish them before terminating. But what state (preconditions) can B rely on? If you override B, you need to know this - the mutable object state unfortunately [em]is[/em] part of the interface of the object (in the sense of “the set of assumptions that two objects make about each other”).

The typical effect of such problems: everything works at first; then the library that implements the object comes out in a new release, with some changes to the implementation; poof, applications behave differently in hard-to-debug ways. You need to study the library implementation to understand where the sudden problems come from. In fact, the entire implementation has become the interface…
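The trap described above can be sketched in a few lines (hypothetical classes, purely for illustration): method A (here, Clear) temporarily breaks an invariant, and the overridden method B (here, OnCleared) runs right in the middle of that window.

```csharp
using System;

public class BaseCollection
{
    protected int count; // documented invariant: count >= 0

    public void Clear()
    {
        count = -1;   // invariant temporarily violated (sentinel value)...
        OnCleared();  // ...but this virtual call happens in the middle...
        count = 0;    // ...and the invariant is only restored here
    }

    protected virtual void OnCleared() { }
}

public class ObservingCollection : BaseCollection
{
    public int ObservedCount;

    // The override runs while the base object is in an undocumented
    // intermediate state and sees the "impossible" value -1.
    protected override void OnCleared()
    {
        ObservedCount = count;
    }
}
```

Nothing in the public interface of BaseCollection hints that OnCleared can observe a negative count; whether it does depends entirely on the current implementation of Clear, which is exactly the semantic fragile base class problem.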

It’s really about re-entrance problems where you don’t expect them, because there need not even be multiple threads to cause them.

Our conclusion was that code reuse only really works in a solid way if code reuse is done in a planned way, and that interface inheritance is much less subtle and critical than implementation inheritance.

With my hat on as a maker, I like quick hacks that solve a problem painlessly. But with my hat on as a software engineer, I try to avoid such hacks (and library designs that ask for them). If in doubt, I’d always vote for erring towards the engineering side when further developing a .NET variant.

Hmpf, sorry for the long sermon. Clemens has described this much better than I did here, so I recommend reading his book, which is muuuch longer though :slight_smile:

4 Likes

@ Cuno - Thanks for the great reply. I realize I oversimplified my response as far as real-world implementations go. But I still believe that in the modern world of robust programming IDEs (e.g., VS), the actual [em]documentation[/em] of these things shouldn’t be considered a prerequisite to releasing the functionality, when the source is available and the IDE is very capable of generating such documentation on demand. Of course, in the context of proprietary source, those owners are inflicting that work upon themselves. Yet another reason to open source as much as possible.

I haven’t read Clemens’ book but I’m adding it to my list. Thanks for the suggestion.

1 Like