We have an existing EMX based board, and a G120 replacement under development. Both currently run from the same source, with the only difference being compile-time definition of IO mapping.
In order to run the same executable on each, I want to detect the platform and map the IO at run-time. Some methods to do this include:
Allocate one or more IO pins that are currently unused on the EMX board and tie them down on the G120, as LMODE does. E.g. IO72 and IO73 would work. This can detect the board, and could also be used if a later board revision adds or moves features.
Read a pin that doesn’t exist on the EMX (e.g. Cpu.Pin 157): this will succeed on the G120; on the EMX, trap the resulting exception and map the IO accordingly.
Read CPU registers to determine the board.
Decode SystemInfo, which probably has the board type encoded in it somewhere.
No doubt there are other methods too. But is there a preferred, definitive method that should be used?
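A minimal sketch of the first option (the strap pin below is purely illustrative, not a real mapping; it assumes a pin that reads high via a pull-up on the EMX board and is tied to ground on the G120 layout):

```csharp
using Microsoft.SPOT.Hardware;

public static class BoardDetect
{
    // Sketch only: assumes the chosen strap pin is left floating on the EMX
    // board (so it reads high via the internal pull-up) and is tied to ground
    // on the G120 board. The pin number is a placeholder.
    public static bool IsG120()
    {
        using (InputPort sense = new InputPort((Cpu.Pin)72, false,
                                               Port.ResistorMode.PullUp))
        {
            return sense.Read() == false;   // pulled low = tied down = G120
        }
    }
}
```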
Yes, I’ve been looking at using one of those on the new board. We need some sort of unique ID to track boards out in the field, and there doesn’t seem to be anything built into the modules (e.g. a fixed MAC address or CPU register) that we can use.
The DS28E05 looks good, low cost and it also has 112 bytes of EEPROM that we can use for calibration and board-configuration information. It doesn’t seem to have hit the local suppliers here (RS and Element14) as yet though.
However the existing EMX based boards don’t have unique ID chips, so it won’t work as a way to identify which board type the code is running on (except by its failure to work!).
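If a 1-Wire ID chip were fitted, reading its factory-lasered 64-bit ROM ID might look something like this (a sketch against the NETMF 4.2 OneWire class; the data pin is an assumption, and 0x33 is the standard 1-Wire Read ROM command, valid only with a single device on the bus):

```csharp
using Microsoft.SPOT.Hardware;

public static class UniqueId
{
    // Sketch only: reads the 64-bit ROM ID (family code, 48-bit serial, CRC8)
    // of a single 1-Wire device such as the DS28E05. The data pin is an
    // assumption for illustration.
    public static byte[] ReadOneWireId(Cpu.Pin dataPin)
    {
        OneWire ow = new OneWire(new OutputPort(dataPin, false));
        byte[] id = new byte[8];
        if (ow.TouchReset() > 0)        // presence pulse detected?
        {
            ow.WriteByte(0x33);         // Read ROM (single-drop bus only)
            for (int i = 0; i < 8; i++)
                id[i] = (byte)ow.ReadByte();
        }
        return id;
    }
}
```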
[quote]We have an existing EMX based board, and a G120 replacement under development.
Both currently run from the same source, with the only difference being compile-time definition of IO mapping. [/quote]
I am curious about code portability from EMX to G120. I didn’t know it was possible to have the same source for both modules! My understanding was that porting code from EMX to G120 is not fast and easy, because the EMX works with plain .NETMF and the G120 with Gadgeteer, so the code needs a lot of adaptation… So my guess is that there is more to do than just IO remapping… Tell me if I am wrong…
In my particular case, we have developed a prototype board with the EMX and we are considering the G120 (more power, cheaper), but we would like to spend as little time as possible redesigning all the code (it includes the Glide MMI, I2C and SPI sensors, Ethernet, serial…)
Some processors (e.g. the STM32F4) have a serial number stored in a read-only sector of memory. By simply reading the memory at that specified address, you’ll get the unique ID.
That is exactly the information I was looking to use, but couldn’t find for the EMX. It is the way we have done it on other projects in assembler and C, but the TinyCLR abstracting us away from the hardware has some downsides as well as upsides. According to the User Manuals the device serial is not available on the LPC2478, but is on the LPC1788.
I too would like to see GHI make it easily accessible where available.
You can have not only source but binary compatibility. This works for me:

using System;
using Microsoft.SPOT.Hardware;

namespace IOMapdemo
{
    public static class IOMap
    {
        // LCD Display (defaults to the EMX mapping)
        public static Cpu.Pin RS = GHI.Hardware.EMX.Pin.IO21;
        public static Cpu.Pin Enable = GHI.Hardware.EMX.Pin.IO20;
        public static Cpu.Pin LCD_Data_4 = GHI.Hardware.EMX.Pin.IO15;
        public static Cpu.Pin LCD_Data_5 = GHI.Hardware.EMX.Pin.IO16;
        public static Cpu.Pin LCD_Data_6 = GHI.Hardware.EMX.Pin.IO17;
        public static Cpu.Pin LCD_Data_7 = GHI.Hardware.EMX.Pin.IO18;

        // PWM Channels
        public static Cpu.PWMChannel BackLight = Cpu.PWMChannel.PWM_0;

        public static void SetG120()
        {
            // LCD Display
            RS = GHI.Hardware.G120.Pin.P1_0;
            Enable = GHI.Hardware.G120.Pin.P1_1;
            LCD_Data_4 = GHI.Hardware.G120.Pin.P4_29;
            LCD_Data_5 = GHI.Hardware.G120.Pin.P4_28;
            LCD_Data_6 = GHI.Hardware.G120.Pin.P0_4;
            LCD_Data_7 = GHI.Hardware.G120.Pin.P0_5;

            // PWM Channels
            BackLight = Cpu.PWMChannel.PWM_2;
        }

        public static bool IsG120()
        {
            bool b = false;
            try
            {
                // P1_19 doesn't exist on the EMX, so the constructor throws there
                OutputPort p = new OutputPort((Cpu.Pin)GHI.Hardware.G120.Pin.P1_19, false);
                b = true;
            }
            catch (Exception)
            {
                b = false;
            }
            return b;
        }
    }
}
...
public class Program
{
    public static void Main()
    {
        PWM BackLight;
        if (IOMap.IsG120())
            IOMap.SetG120();
        BackLight = new PWM(IOMap.BackLight, 10000, 0.3, false);
...
where IsG120() can contain any of the various test methods mentioned.
In my case G120.Pin.P1_19 isn’t connected on the G120 board, and the IO doesn’t exist on the EMX so it takes the exception.
I see a clock of 18000000 on the EMX and 120000000 on the G120, so this could be used too, although Gus says it may change.
@ Gus - While the above code works, it throws an exception during garbage collection, on a Dispose of the non-existent OutputPort (on the EMX) which was never actually created, because new() threw when the port didn’t exist!
This is a bit confusing; is there a cleanup failure when the new() fails?
I see:
GC: 295msec 209196 bytes used, 7130472 bytes available
Type 0F (STRING ): 13548 bytes
Type 11 (CLASS ): 87336 bytes
Type 12 (VALUETYPE ): 10812 bytes
Type 13 (SZARRAY ): 23988 bytes
Type 03 (U1 ): 2556 bytes
Type 04 (CHAR ): 336 bytes
Type 07 (I4 ): 72 bytes
Type 08 (U4 ): 156 bytes
Type 0C (R8 ): 72 bytes
Type 0F (STRING ): 264 bytes
Type 11 (CLASS ): 20316 bytes
Type 12 (VALUETYPE ): 216 bytes
Type 15 (FREEBLOCK ): 7130472 bytes
Type 16 (CACHEDBLOCK ): 264 bytes
Type 17 (ASSEMBLY ): 31944 bytes
Type 18 (WEAKCLASS ): 96 bytes
Type 19 (REFLECTION ): 192 bytes
Type 1B (DELEGATE_HEAD ): 1404 bytes
Type 1D (OBJECT_TO_EVENT ): 840 bytes
Type 1E (BINARY_BLOB_HEAD ): 23640 bytes
Type 1F (THREAD ): 4224 bytes
Type 20 (SUBTHREAD ): 528 bytes
Type 21 (STACK_FRAME ): 4872 bytes
Type 22 (TIMER_HEAD ): 360 bytes
Type 27 (FINALIZER_HEAD ): 624 bytes
Type 31 (IO_PORT ): 468 bytes
Type 33 (I2C_XACTION ): 48 bytes
Type 34 (APPDOMAIN_HEAD ): 72 bytes
Type 36 (APPDOMAIN_ASSEMBLY ): 3936 bytes
#### Exception System.Exception - CLR_E_WRONG_TYPE (21) ####
#### Message:
#### Microsoft.SPOT.Hardware.Port::Dispose [IP: 0000] ####
#### Microsoft.SPOT.Hardware.NativeEventDispatcher::Finalize [IP: 0005] ####
A first chance exception of type 'System.Exception' occurred in Microsoft.SPOT.Hardware.dll
I can carry on after the exception, but for now I’ve changed the test to use your cpu clock number:
public static bool IsG120()
{
    // Gus' suggestion. Clock is 18000000 on the EMX, and 120000000 on the G120 (currently)
    return Microsoft.SPOT.Hardware.Cpu.SystemClock != 18000000;
}
The pin test succeeds on the G120 and fails on the EMX, which doesn’t have that IO; I catch the exception and return a bool identifying the module type.
That all worked nicely; however at GC time (on the EMX) the garbage collector throws an exception when disposing of the IO port, which had never actually been created. So it appears something was left behind after the failed
new OutputPort((Cpu.Pin)GHI.Hardware.G120.Pin.P1_19, false)
that upset the GC.
It points to a problem with the implementation of the constructor for the OutputPort class.
However it is no big thing, as you can see I easily worked around it by using your suggested Cpu clock speed test to differentiate host modules.
I don’t have a problem now, as I used your suggested method!
However the demo project below shows the problem, when run on an EMX, as the GC takes an exception during an internal Dispose() of the port-that-never-was.
Doesn’t happen on the G120, of course, or even on the emulator.
using System;
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Hardware;
using GHI.Hardware;

namespace EMX_GC_Demo
{
    public class Program
    {
        public static void Main()
        {
            try
            {
                OutputPort p = new OutputPort((Cpu.Pin)GHI.Hardware.G120.Pin.P1_19, false);
                p.Dispose();
            }
            catch (Exception ex)
            {
                Debug.Print("Exception: " + ex.Message + " (expected when running on EMX)");
            }

            Thread.Sleep(300);

            // Force GC (takes an exception on the leftover from the failed attempt to create an OutputPort)
            Microsoft.SPOT.Debug.GC(true);

            for (; ; )
                Thread.Sleep(1000);
        }
    }
}