Hello, I tried the following simple program on a FEZ Panda II (SolutionReleaseInfo.solutionVersion: 4.1.7.0):
using System;
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Hardware;
using GHIElectronics.NETMF.Hardware;
using GHIElectronics.NETMF.FEZ;
using GHIElectronics.NETMF.System;

namespace FEZ_Panda_II_Application1
{
    public class Program
    {
        public static void Main()
        {
            double t = 0.1;
            for (; t <= 1.0; t += 0.1)
            {
                Debug.Print(t.ToString() + " ;;; " + (1.0 / MathEx.Sin(60 * 2 * MathEx.PI * t)).ToString());
                //Debug.GC(true);
            }
            Thread.Sleep(Timeout.Infinite);
        }
    }
}
Enter a value of “.1”, then copy the Binary64 string and paste it back in as the value. You’ll see that it actually represents this number instead: “1.000000000000000055511151231257827021181583404541015625E-1”
Round that to the nearest representable value and you get “1.0000000000000001E-1”.
If you do not want that, use an integer loop counter and increment by 1 instead of 0.1, then compute “6 * 2 * MathEx.PI * t” instead of “60 * 2 * MathEx.PI * t”.
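Here is a minimal sketch of that integer-counter variant, assuming the same NETMF 4.1 setup and the MathEx/Debug calls from the program above:

// Loop over whole tenths with an integer counter so no rounding error accumulates in the loop variable.
for (int i = 1; i <= 10; i++)
{
    // i stands in for t * 10, so 60 * 2 * PI * t becomes 6 * 2 * PI * i.
    double result = 1.0 / MathEx.Sin(6 * 2 * MathEx.PI * i);
    Debug.Print(i.ToString() + " ;;; " + result.ToString());
}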
Nope. Visual Studio will show a decimal type in IntelliSense, but it’s either barely implemented or not implemented at all. It’s useless; it doesn’t even support addition.
I studied and mentioned this several months ago. As far as I can tell, the precision of float and double is the same, and neither comes close to the precision defined in the .NET docs. What I can’t figure out is whether this is actually the case or whether Visual Studio is just making it appear this way through the debugger. I’ve found several cases where people have converted code written with doubles to use floats instead and gained significant performance increases, so my guess is that there is a difference and doubles may in fact be real 64-bit floating-point values. But it’s hard to know for sure. I searched the netmf.codeplex.com source and couldn’t find anything that defines the base types, so I assume this is done at a lower level that isn’t quite open source. I’d really love to see someone at Microsoft write up something about this.
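One experiment I’d try on the device itself (just a sketch, assuming Debug.Print and plain arithmetic behave the way they do on the desktop CLR): add an increment that fits in a 64-bit double but is far below float’s ~7-digit precision, and see whether it survives.

double d = 1.0 + 1e-10;   // representable as a 64-bit double
float f = 1.0f + 1e-10f;  // far below float's precision, so this should collapse back to 1.0f
Debug.Print("double kept the increment: " + ((d - 1.0 != 0.0) ? "yes" : "no"));   // "yes" if double is really 64-bit
Debug.Print("float kept the increment:  " + ((f - 1.0f != 0.0f) ? "yes" : "no")); // expect "no"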
On embedded devices, I always try to avoid float or double and stick with integer math. Of course, when calculating Sin and Cos it is much easier to use float or double.
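For example, something as simple as keeping readings in scaled integer units avoids fractions entirely (the names and values here are purely illustrative, not from the program above):

// Store an analog reading in millivolts instead of volts so all the math stays integer.
int voltage_mV = 3300;    // 3.3 V
int threshold_mV = 2500;  // 2.5 V
if (voltage_mV > threshold_mV)
{
    Debug.Print("Above threshold by " + (voltage_mV - threshold_mV).ToString() + " mV");
}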
How did you determine this? Through the docs or experimentation? Through experimentation I haven’t been able to show anything close to 64-bit precision.
My bad. According to the Microsoft docs a double has 15-16 digits of precision (the ±3.4 × 10^38 range I quoted is actually float’s; a double goes out to roughly ±1.7 × 10^308). I had forgotten the precision part and was trying to get something closer to 38 decimal places. :-[
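A quick way to sanity-check that 15-16 digit figure on the board (again just a sketch: 1e-16 sits below double’s unit round-off near 1.0, so it should vanish, while 1e-15 should survive):

double keeps = 1.0 + 1e-15;  // within ~15-16 significant digits, should differ from 1.0
double drops = 1.0 + 1e-16;  // beyond that, should round back to exactly 1.0
Debug.Print("1e-15 survived: " + ((keeps != 1.0) ? "yes" : "no")); // expect "yes"
Debug.Print("1e-16 survived: " + ((drops != 1.0) ? "yes" : "no")); // expect "no"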