Why HttpsAuthentCert should go away

I just posted issue 509, but I want to advocate for it a bit here. It’s an important change that will greatly increase the utility of your TLS support.

Right now, for any given https site, you need to provide the CA root cert as part of the request. That means you have to have the right cert available, matched to the URL, at compile time. But certs and providers churn all the time, which means you would have to keep updating your program.

If, instead of HttpsAuthentCert, the request accepted a delegate that was handed identifying data (say, a CN and thumbprint) and returned an X.509 cert, then developers could implement their own trusted-root collection and select the correct CA public-key cert at runtime.
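In rough C# terms, the shape I have in mind is something like this. To be clear, every name here is hypothetical - nothing like this exists in TinyCLR today; it's just a sketch of the proposal:

```csharp
using System.Security.Cryptography.X509Certificates;

// Hypothetical delegate: given identifying data for the root of the server's
// claimed chain of trust, return the matching trusted root cert, or null to
// reject the connection.
public delegate X509Certificate RootCertificateSelector(string commonName, byte[] thumbprint);

// A request would then carry the selector instead of a baked-in cert, e.g.:
//   request.RootCertificateSelector = (cn, print) => myRegistry.Find(cn, print);
```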

For instance, you might export the entire CA root cert collection onto an SD card, and then you could hit any TLS web site without needing to know the URL and which cert it needs ahead of time. And if sites changed their trust chain, it wouldn’t break your program. At most, you just need to maintain a directory on an SD card, which you could update OTA from a cloud URL.

I think this is an important change that would greatly improve the utility of the network stack.


I am no crypto guy, but…

I think the well-established model works, and that’s what TinyCLR should adopt. That means root certs need to be validated at runtime, rather than taken from a static store (like the SD card). Maybe it’s OK to cache them (I have no idea how long they’re actually valid for, or how often they change), but I’d steer clear of anything the user could do wrong that compromised the trust in a security chain. That probably also needs a “no longer trusted” list of intermediate CAs/certs that changes more often than the roots - we’ve all seen a CA whose poor handling of security and trust leads them to be shown the door and lose their business.

So I absolutely agree with @mcalsyn that we shouldn’t need to pre-load the cert at compile/deploy time, but if possible I think things need to be more dynamic, and not require me to go grab some certs and dump them on a card myself.

I am suggesting using the existing model - same as Windows, Mac, and Linux - a local cache of trusted root CA certs. How you implement and update that is up to you, but the current model requires compiling in the relationship between a web request and the cert used to validate it.

If you want, you can respond to the delegate by returning the same cert you use now. If you need more flexibility, then you can use the delegate to implement a real cert registry.

This isn’t a weakening of anything - it’s a broadening of functionality to more closely match what OSes do.

EDIT: If you want to obtain certs at runtime, you could respond to the delegate call by reaching out and pulling the cert from the web, but generally that’s insecure. That’s why OSes keep trusted root registries local to the machine, updated only by trusted OTA update processes.


What I meant was more that the “default” position should be that the TinyCLR OS deals with this (like Windows, Linux, etc.), and that way numpties like me can’t screw it up by forking up the delegate.

Unfortunately, I don’t think that’s tenable. Windows, Mac, and Linux maintain their own secure cloud-based update infrastructure, and I don’t think that GHI would want to undertake that, and not every TinyCLR device will have a pathway to the open internet for such updates anyway.

On top of that, there’s no reliable place to store such things (there’s no consistently available storage architecture of sufficient size), and baking them into the firmware would be a huge load of data that not everyone needs.

For those and a few other reasons, I am advocating that they just give us a delegate callback that we can fill in with the most appropriate implementation for our particular app’s needs.

FWIW, I will open-source my trusted-root registry implementation if and when we get a delegate in place.

How about introducing an additional overload that enables the use of delegates? That way old NETMF code wouldn’t break, and the advantages of the delegate pattern would still be available.

In Martin’s proposal, I don’t understand the advantage of using a thumbprint to find the right certificate. Isn’t it the case that the thumbprint would then have to be hardcoded at compile time?

Regarding the thumbprint … what happens is that the web site returns a claimed chain of trust. You (as the client) must verify that you trust the root cert in that chain. The root of that chain will be identified with a CN and thumbprint (and other data). You need to take that identifying data and validate that you hold a public-key cert which you trust and which matches the root of the chain of trust claimed by the server.

So, what you are doing is taking a claim made by the web site and validating it against your own data, and what links the two together are the CN and thumbprint of the root cert.
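As a concrete illustration of that matching step - the store path, class, and method name are my own invention, and this uses the desktop .NET `X509Certificate2` API rather than anything TinyCLR ships - the delegate body could look roughly like this:

```csharp
using System;
using System.IO;
using System.Security.Cryptography.X509Certificates;

static class RootRegistry
{
    // Search a local store (e.g. a directory of .cer files on an SD card) for
    // a cert whose thumbprint matches the one claimed by the server's chain.
    public static X509Certificate2 FindTrustedRoot(string storePath, string claimedThumbprint)
    {
        foreach (var file in Directory.GetFiles(storePath, "*.cer"))
        {
            var candidate = new X509Certificate2(file);

            // The thumbprint is the SHA-1 hash of the DER-encoded cert, so a
            // match means we hold the very cert the chain claims as its root.
            if (string.Equals(candidate.Thumbprint, claimedThumbprint,
                              StringComparison.OrdinalIgnoreCase))
                return candidate;
        }

        return null; // no trusted root matches - don't trust the connection
    }
}
```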

Regarding supporting both for backward compatibility, that works. You could use HttpsAuthentCert if present, and defer to the delegate hook only if HttpsAuthentCert is null. If both are null, then you are making a secured but unvalidated connection (same as a null HttpsAuthentCert today).
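In pseudo-C# (again, the property and delegate names are illustrative, not a real TinyCLR API), that resolution order might read:

```csharp
// Backward-compatible resolution order (sketch):
// 1. the legacy compile-time cert wins if it is set;
// 2. otherwise, ask the delegate to resolve a root at runtime;
// 3. if neither yields a cert, the connection is encrypted but unvalidated,
//    exactly as a null HttpsAuthentCert behaves today.
X509Certificate root = request.HttpsAuthentCert;

if (root == null && request.RootCertificateSelector != null)
    root = request.RootCertificateSelector(serverRootCn, serverRootThumbprint);
```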

Ah… now I understand. The callback would have the thumbprint or CN as a parameter, and return the root certificate from my machine matching that thumbprint or CN.
