Code Security

A thought occurred to me:

Once my product is out in the field (with .NETMF), what would it take to extract the program from the microcontroller? Is it possible to run a program from other bootable media (e.g. an SD card) which then reads the flash data sections of my device and stores them back on that same SD card?

Is that possible? Can an attacker extract my .NETMF program by “convincing” the uC to boot from other media (e.g. an SD card)?

I know it’s possible to tell the device not to allow reading of its stored memory from external debuggers and JTAG stuff, but how do you shut down external booting?
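
To make the worry concrete: if the part will boot arbitrary code from the SD card and no read-out protection is set, the dump itself is trivial, because internal flash is just memory-mapped. A minimal sketch of such a dumper, assuming a FatFs-style SD driver underneath; the 1 MB size, the base address 0x08000000 (STM32-style flash mapping), and the function name are illustrative:

```c
#include "ff.h"   /* FatFs (an SD block driver is assumed underneath) */

/* Sketch of the feared attack: boot this from SD, then copy the
 * internal flash out to a file. Base address and size are for a
 * 1 MB STM32F4 part and are illustrative only. With read protection
 * off, flash reads like any other memory-mapped address range. */
int dump_internal_flash(void)
{
    const unsigned char *flash = (const unsigned char *)0x08000000;
    const UINT flash_size = 1024 * 1024;
    FATFS fs;
    FIL f;
    UINT written;

    if (f_mount(&fs, "", 1) != FR_OK)
        return -1;
    if (f_open(&f, "dump.bin", FA_WRITE | FA_CREATE_ALWAYS) != FR_OK)
        return -1;
    f_write(&f, flash, flash_size, &written);
    f_close(&f);
    return (written == flash_size) ? 0 : -1;
}
```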

Lost since 4.2, huh? Nuts.

GHI has always promised that Application Protection will come back (it has been missing since the beginning of 4.2), but that it’s a lower priority because only a few people are requesting it.
Recently GHI said on the forum that it’s not that easy to implement Application Protection in 4.3 (because of the core, … I can’t remember exactly).
I believe it’s at least delayed until the next bigger NETMF update.

If you search for “Application Protection” on the forum, you will find quite a lot of hits.

IoT is going to introduce a lot of folks to all sorts of realities, including a bunch around security. How to protect your code is just one of the problems that won’t get an ironclad solution; the reasons include things like the lack of physical security, so I think we might have to accept some limitations. What do you think would be reasonable security items for code security? Do we go for a Trusted Computing sort of solution and make SoCs where, say, the device will only boot from its onboard memory? Some people would argue that this isn’t acceptable either (cue up Richard). If someone really, really, really wanted your code, there are other possible ways to get it, but it becomes dependent on skills, tools, time, and resources. In a highly secure system, there are some uses for things like thermite (melted blobs tell no tales). So, accepting that 100% secure isn’t possible, what is acceptable, or is it an all-or-nothing sort of thing?

Well, my standard for security is either encrypted boot of the firmware from an SD card (since SD cards already support encryption), or some sort of “production mode” wherein the chip (e.g. the STM32F4) will not allow a different boot method once the production-mode boot method is enabled. You could still tell the chip to wipe and boot, which would get it out of production mode for development needs, but it should be impossible to run another program on the chip that was loaded from an external source.
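
As a rough sketch of what the encrypted-SD-boot half could look like: a boot stub reads the image from the card and decrypts it with a device key before jumping to it. This assumes mbedTLS for AES-128-CTR, an image format with a 16-byte nonce prefix, and a pre-provisioned key; the layout and the function name are illustrative, not any vendor’s actual boot format, and a real loader would also authenticate the image (decryption alone proves nothing):

```c
#include <string.h>
#include <mbedtls/aes.h>

/* Illustrative sketch: decrypt a firmware image (read from SD) into
 * RAM before jumping to it. Assumed format: 16-byte CTR nonce
 * followed by the AES-128-CTR ciphertext. */
int decrypt_boot_image(const unsigned char key[16],
                       const unsigned char *image, size_t image_len,
                       unsigned char *plain_out)
{
    if (image_len < 16)
        return -1;

    mbedtls_aes_context aes;
    unsigned char nonce_counter[16];
    unsigned char stream_block[16];
    size_t nc_off = 0;

    memcpy(nonce_counter, image, 16);        /* nonce is prefixed to the image */

    mbedtls_aes_init(&aes);
    mbedtls_aes_setkey_enc(&aes, key, 128);  /* CTR mode uses the encrypt schedule */
    mbedtls_aes_crypt_ctr(&aes, image_len - 16, &nc_off,
                          nonce_counter, stream_block,
                          image + 16, plain_out);
    mbedtls_aes_free(&aes);
    return 0;
}
```

Note the catch raised later in this thread: the plaintext has to land somewhere (here, RAM) to execute, so the key storage and the decrypted image are themselves attack surface.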

I was looking at the Microsemi line of FPGAs once upon a blue moon. Their product lines that include a hard Cortex microprocessor allow “booting” the FPGA data in its encrypted state. Further, you can’t load a new program without first wiping the old one.

I’ve read the chip manuals over and over, yet I still don’t understand how they implement code security. I’m beginning to think that they have none.

If they get access to the boot program, then they have everything and we’re right back at problem one.

If this is for the STM32F4 (Cerb family), there is a register you can set to enable “Read Protection”. See page 93 of the ST reference manual RM0090. It looks like you can mark the flash so that it can no longer be read (other than by your own program, I think). Also, it may not be reversible, so be careful.
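
For anyone who wants to experiment, here is a minimal sketch of setting that option byte, assuming a CMSIS device header. Per RM0090, the RDP byte in FLASH_OPTCR works like this: 0xAA is Level 0 (no protection), 0xCC is Level 2 (permanent, disables debug and the system bootloader, and cannot be undone), and any other value is Level 1 (reads via debug are blocked; reverting to Level 0 triggers a mass erase). The function name is illustrative:

```c
#include "stm32f4xx.h"  /* CMSIS device header (assumed available) */

/* Sketch: raise read protection to RDP Level 1 on an STM32F4.
 * Level 1 blocks flash reads from the debugger / system bootloader;
 * it can be reverted (at the cost of a mass erase) by writing 0xAA
 * back. Writing 0xCC (Level 2) is PERMANENT -- never do it casually. */
void enable_rdp_level1(void)
{
    if (((FLASH->OPTCR >> 8) & 0xFF) == 0xAA) {   /* currently Level 0 */
        /* Unlock the option bytes with the RM0090 key sequence */
        FLASH->OPTKEYR = 0x08192A3B;
        FLASH->OPTKEYR = 0x4C5D6E7F;

        while (FLASH->SR & FLASH_SR_BSY) { }

        /* Set the RDP byte (OPTCR bits 15:8) to a value that is
         * neither 0xAA nor 0xCC -> Level 1 */
        FLASH->OPTCR = (FLASH->OPTCR & ~0xFF00U) | (0x55U << 8);
        FLASH->OPTCR |= FLASH_OPTCR_OPTSTRT;      /* commit option bytes */

        while (FLASH->SR & FLASH_SR_BSY) { }

        FLASH->OPTCR |= FLASH_OPTCR_OPTLOCK;      /* relock option bytes */
    }
}
```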

Has anyone ever tried this?

It’s on my roadmap to try this eventually, but not at the moment.

-Valkyrie-MT

Encryption is expensive in terms of CPU cycles and hence power usage (toss in, of course, a performance hit or a CPU upgrade), unless you use uncle Bob’s weak-ass encryption scheme, but then you need to ask yourself just how much security you’re gaining. The next question is: where does the code get decrypted to so it can run, and can it be pulled out in its unencrypted form from there? That physical access thing is a real bitch for security, even if you use encryption.

@ andre.m - If someone loads code onto the board that is unknown to you, you might still get blamed for malfunctions of the device.

You have absolutely no idea how stupid customers can be.

And think about hackers:
They hack your device, put their code on it, and you get blamed for sending out sensitive information.

Even if you can prove afterwards that you did not cause the problem/leak/whatever, the damage to your name is done.

Especially if there are three parties, like: OEM, supplier, and you. You sell a device to the supplier. The supplier thinks he can “optimize” your device, and because of this the OEM gets a problem. Now the supplier will blame you first, claiming that your device caused the problem.
Now you have to prove that this is not the case, but it’s already in the OEM’s head that you caused the trouble.

I could tell you about several actual cases that went exactly like this.

@ andre.m - I work in the automotive sector. You can find a lot of stupidity here. And since the OEMs are very powerful compared to the suppliers and the small third parties like the one I work for, everyone tries not to be the root cause of any problem that arrives at the OEM.

@ andre.m - Both are bad. So far they have not changed anything source-code related, but they do with configurations, scripts, and even hardware.
I can imagine, though, that in some cases they would try to modify the software too (or use different software).
Once in China we had a project with a partner.
A couple of years later someone told us that right after the installation of the equipment (2 stations), they took apart 1 station. A couple of weeks later they had 5 stations. At least the part from our partner they couldn’t copy that easily.

If they can extract the software from the device, they could clone it more easily, or modify it in a way you wouldn’t approve of.

It has happened, it will happen again, and bad things will happen along with it.

Conclusion:
At some point, code protection and read/write protection are important.