My latest task is writing a driver for the Bosch BME280 combined humidity, temperature, and pressure sensor over I2C on the Cobra III. I intend to make this driver available to the community.
The BME280 stores device-unique factory calibration data in read-only registers; this data is used to compensate the raw ADC counts for humidity, pressure, and temperature to obtain a calibrated (more accurate) result from the device. The Bosch documentation provides at least two different sets of software routines for converting the raw ADC samples into calibration-compensated results: one set uses floating-point mathematics, the other integer math. The documentation suggests the floating-point routines only for processors with hardware floating-point support. I do not know whether the G120's processor has a hardware floating-point unit, or only integer math. I assume that, if there is no hardware floating point, floating-point calculations are emulated in software in the Microsoft part of the NETMF library.
I know I can use the routines with floating-point calculations, but I don't know whether that is a good idea, speed-wise, on typical NETMF devices. It all depends on whether hardware floating-point capability is present.
Should I implement the floating point calculations, or the integer math calculations?