G120 vs PIC32

I’m new to .NET MF and the G120 module.

I’m planning a commercial application that will need powerful processing, for reasons I’ll explain.

Until now I have done my projects on MCUs like the PIC and PIC32.

Now I have a project that will demand very robust processing, because of the number of devices that will be connected over serial ports and other interfaces.

I’ll need an MCU/SoM to do the following:

  • Capture and decode data from a GPS receiver at 57600 baud, with fixes arriving at 10 Hz.
  • Capture and decode data from a distance sensor at 9600 baud, at a variable rate of up to 10 Hz.
  • Capture and decode data from an inertial sensor at 19200 baud, at a fixed rate of 20 Hz.
  • Continuously measure pulse width / frequency on a digital input.
  • Encode all the sensor data into a protocol to be sent over a serial interface.
  • Capture and decode commands from an HMI that will be used to command a motor and that will display the values of all these sensors.
  • Forward all the sensor data over a single 57600-baud serial interface to the HMI.
  • Perform real-time calculations to control a PID actuator. That will demand one more timer interrupt (at roughly 100 ms intervals) and some floating-point math.
  • Control a DC motor (the actuator) using PWM for speed control.
  • Recalculate parameters in real time according to the commands received from the HMI, and send the new parameters to the sensors.

Well, all of these things can be done with dedicated peripherals on MCUs like the PIC32, or on the MCU used in the G120 module.
Any ARM Cortex-M3 based MCU should also have them, but I’m more familiar with the PIC32 (MIPS).
I’m going to program in a high-level language, so assembly is not a factor in this comparison.

It is very clear that the real challenge is dealing with the interrupts and managing them correctly, so that none of these inputs loses data to buffer overflows.
It is also important to mention that this is a “real-time” system: the user will be watching what happens and should see no noticeable delay in the communication.
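Whichever chip handles the serial ports, the usual defence against those overflows is an interrupt-driven ring buffer sized for the worst-case burst. A minimal sketch in C; the size and names are illustrative, not from any particular firmware:

```c
#include <stdint.h>
#include <stdbool.h>

/* Power-of-two size lets us wrap with a cheap mask.
 * At 57600 baud (8N1) a port delivers at most ~5760 bytes/s,
 * so 256 bytes buys the main loop roughly 44 ms of slack. */
#define RB_SIZE 256u
#define RB_MASK (RB_SIZE - 1u)

typedef struct {
    volatile uint16_t head;  /* written by the RX interrupt */
    volatile uint16_t tail;  /* written by the main loop    */
    uint8_t buf[RB_SIZE];
} ringbuf_t;

/* Called from the UART RX interrupt: never blocks. */
static bool rb_put(ringbuf_t *rb, uint8_t byte)
{
    uint16_t next = (rb->head + 1u) & RB_MASK;
    if (next == rb->tail)
        return false;            /* full: count overflows here */
    rb->buf[rb->head] = byte;
    rb->head = next;
    return true;
}

/* Called from the main loop / decoder task. */
static bool rb_get(ringbuf_t *rb, uint8_t *byte)
{
    if (rb->head == rb->tail)
        return false;            /* empty */
    *byte = rb->buf[rb->tail];
    rb->tail = (rb->tail + 1u) & RB_MASK;
    return true;
}
```

The same pattern applies whether the consumer is native C on a PIC32 or a .NET MF thread draining the driver’s own buffer; what matters is that the interrupt side never waits.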

Now, the question about performance:

The G120 uses an ARM CPU running at 120 MHz at 1.25 DMIPS/MHz, which means a performance of 150 DMIPS.
The PIC32 uses a MIPS32 M4K CPU running at 80 MHz at 1.56 DMIPS/MHz, which means a performance of about 125 DMIPS.

The G120 module seems to have slightly better raw performance; on the other hand, it runs .NET MF, which can cost considerable CPU time in some cases.
The PIC32 would be running native code, without an RTOS or any other management layer.

Using the peripherals on the G120 may require direct register access, and that would defeat the purpose of .NET MF, which is to simplify development. Or not…
On the other hand, manipulating strings and performing floating-point operations is much more comfortable with the .NET MF libraries.

Keeping in mind all the tasks I’m going to demand from the MCU/SoM, which solution would you choose for this project?
The G120? The PIC32?
Something else?

Should I consider splitting the tasks across more than one CPU? (Using a G120 + a PIC32.)

I’ll be glad to hear your opinions and the reasoning behind them.

Thanks.

The G120 hardware has an LCD controller and 16 MB of RAM.
The G120 also has thousands, if not millions, of lines of code already compiled and built in for you to use: in-field update, SQLite, file system, USB host, networking, SSL… and a lot more.

You simply can’t compare the G120 to a PIC, or to any micro for that matter. The G120 is a complete solution, not a chip.

Now, would G120 be better or something else? That totally depends on your application needs.

Of course, the ultimate solution would be to use the G120 (the high-level solution) together with a simple micro for the time-critical tasks. Combining the two makes almost every task an easy one.


Unfortunately, for your purposes, the NETMF overhead might be catastrophic. While it provides high-level language and many useful and easy-to-use features, the code is interpreted, and it is NOT real-time.

For your purposes, you will need to determine how much high-level code there is that can be implemented easily in NETMF and how much must be implemented at a low level. I am not an expert, but I suspect that when you’re working in native code, all of your requirements could easily be fulfilled by either your $7 PIC32 or even a $3 Cortex-M0, if you’re willing and capable of doing the hardware and low-level software implementations.

If you want to avoid that, there’s RLP, which will give you access to the native code capabilities of the G120.


@ Gus -

I agree that the G120 is a complete solution and that the performance depends on the application, but I am showing you the application.

I’m not actually comparing the G120 vs. the PIC32 in a generic way… maybe the title of this topic should be amended…
Rather, I’m analyzing the G120’s performance for the specific application I’m presenting here.

I agree with godefroi that the .NET MF overhead may be catastrophic, but I think what Gus suggested is the best approach.

Gus said the ultimate solution is to use the G120 for the high-level part and a simple MCU for the critical part.
I was thinking along those lines before posting this thread, and I also agree it is the best solution.

I’ll make the G120 responsible for dealing with the interfaces and decoding some of the information.
.NET MF will certainly make this very simple, where a PIC32 might require a huge amount of work.
For the critical task, a modified PID system that controls an actuator and reads a sensor in real time, I’ll use a PIC32.
I could use an ARM Cortex-M0, but as I said, I’m more familiar with the PIC32, partly because of the programming hardware and the compiler.
Also, this part of the project is not cost-sensitive, so I can use the G120 together with a PIC32.

I saw that NXP announced an MCU with dual CPUs, a Cortex-M4 plus a Cortex-M0, in a single IC.
That might be the best option for this application, but of course the programming costs would be huge:
a new architecture, a new IC, and new compilers for me… So I’ll stay with .NET MF for the high-level part of the task and a PIC32 for the critical part.

Thanks.

I work for GHI and I am part of the G120 development team, so my answer will always be biased :slight_smile: Other readers may have input for you on this.

Well, reading your answer, one could interpret it as saying that the G120 is not needed, or could be replaced by another MCU :slight_smile:

But I want the G120, because I’m going to add even more features and I prefer .NET MF for programming some parts.
I could replace it with a simpler MCU, or with a SoM running, for example, WinCE and the Compact Framework.
But the G120 seems to be exactly the right “size”… in other words, it fits better.

Anything you CAN do in NETMF will certainly be EASIER in NETMF, especially if you are already a C#/VB.NET programmer.

Gus, when do we get G120v2 with integrated Cortex-M0 for running native code?

The LPC4300 :slight_smile:

@ Gus - Please consider making it pin-compatible with the current G120 module; otherwise it will be very difficult for me to change my design for each new version…

I personally think that splitting the tasks, communication in NETMF and calculation on a separate PIC, is an interesting idea, since the communication does not require low-level considerations and NETMF has all the libraries needed to simplify those tasks.

Regarding the performance, it is difficult to say without running tests. One approach would be:

  1. Implement the communication part, send the data via serial/USB to your PC, and see how many resources it uses on the G120.
  2. Knowing how much headroom is left, try adding the calculations to the G120.
  3. If that works, stop there; otherwise, use a PIC and communicate between the G120 and the PIC with the same code used for the PC comms…

Where I work we won’t use the PIC32; the PIC24, sure, but beyond that we jump to ARM or something similar.
Us firmware guys just had a knock-down, drag-out fight with the HW designers over PIC32 vs. ARM7.
We won :slight_smile:

The G120 will hurt your BOM cost, if that matters, but your NRE will be way, way less.
Unless you have some extremely tight timing requirements, I can’t see where the PIC would win at anything other than the almighty BOM.

Keith

I agree on some points, but adding a CPU + RAM + flash + crystals + an 8-layer PCB will not end up far below the cost of a G120, unless we are talking about a simple micro with no external memories.

There are also the software libraries and the speed of developing the application, debugging it, and maintaining it in the future. Unless you are making high volumes, the G120 is actually the lower-cost option. We believe 10,000 units is the magic number. Above that, using a module may not make sense for every application, but at that level GHI will drop the raw components right onto your PCB; no module.

By the way, it took six full-time developers (at USA pay rates, not India’s) about six months to move from 4.1 to 4.2. Our customers got that for free :slight_smile: We will be working on 4.3 soon and adding more features. Customers will get all of that for free as well.

I agree, Gus, but customers seem to care only about BOM cost. We try to add $0.30 and they freak out.
Of course, they seem to freak out about NRE too :stuck_out_tongue:
We do our own design and layout, so we’re familiar with it.

I’d LOVE to use the G120; I have a couple of applications in progress that would be done so much faster and better with it.
But did I mention BOM costs?

They all seem to think they will sell 1,000,000 units, not the 100 they actually sell.

shaking my head
BOM costs…

You are so right. They all care about the BOM and want to know the price for 10,000,000 units :slight_smile: Then I love asking: so when the next NETMF version is out, who is doing the porting? Or who is paying $15,000 for the USB library, which is just one of the many we have built in for free? … Then I hear a few seconds of pause, followed by: let’s use the G120 :slight_smile:


@ Gus - Keith-0

Fully agree with both of you!

Do you want to know another reason that made me choose the G120?

First I was using the STM32, and besides a problem with the amount of RAM, my broker announced provisioning delays of about 5 months, because of the silly games the big players are running around component trading and lobbying! Worse, the MOQ for this chip is about 5000 units (which would mean a dead stock of 4500 units if I only sold 500 on a first order!), unless you go through catalog distributors, but then the price is not the same!

For quantities of about 100 to 500, Gus said it would be a couple of weeks, or maybe even in stock! And no dead stock for me!

Don’t you think lead times and stock availability are also good reasons? Who pays for dead stock when you have it? I’d rather it were nobody!

I think these constraints may be really demanding for a G120 running NETMF. I am facing the same problem with my EMX. Developing on GHI NETMF devices is really easy, but in my case it suffers from a lack of performance due to a bad combination of NETMF and my software architecture (a central hash table that stores the data, which adds huge overhead when updating values).

I am using the 4 COM ports (1 NMEA, 1 IMU, 1 XBee, 1 PC), 11 digital inputs, 7 analog inputs, a 1-Wire network with 40 DS2401s and 6 DS2408s, CAN devices… I compute data from these inputs and compare it against user thresholds. If a comparison is true, the board logs to the SD card and activates some outputs: 6 relays and 13 LEDs connected directly, plus 50 LEDs on the 1-Wire network.

For a hash table of 200 items (meaning 200 values to retrieve from inputs or to compute) and about 200 comparisons, my code runs in 500 ms (2 Hz)… which is slow. Thinking small is not always possible. RLP might help here, but it would be difficult to manage the hash table from native code.

On the other side, coding on GHI devices is really, really easy thanks to the Premium library, and that saves time… and, in the end… money.

@ leforban

Thinking small just means you have to optimize at a very high level.

Regarding your design, I could propose things like:

  1. Use only one UART and add RS-485 conversion for all of your modules, to get a bus with the mainboard as master and your serial modules as slaves. That way you manage only one UART thread for all communication between the mainboard and the modules (have a look at the MAX348x chips for this). I communicate between modules this way, and it is much simpler and more efficient, since it is the mainboard’s role to request values from the modules and manage the request/response process (if I understand your project correctly).

  2. The digital inputs can be handled by an external I2C chip such as the PCF8575, which provides an INT# pin you can connect to the mainboard. Then, instead of many InterruptInputs on the mainboard, only one is needed; when it fires, it means some input changed and you refresh the status…

Doing that, you considerably reduce the impact of the threads and can let your mainboard concentrate on the calculations.
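The single-UART, RS-485 master/slave polling idea in point 1 could look something like this. The frame layout, start byte, and XOR check below are purely illustrative, not a real protocol:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Illustrative poll frame: [0x7E][addr][cmd][addr ^ cmd] */
static size_t build_poll(uint8_t *out, uint8_t addr, uint8_t cmd)
{
    out[0] = 0x7E;        /* start-of-frame marker           */
    out[1] = addr;        /* slave address on the bus        */
    out[2] = cmd;         /* e.g. "send your latest reading" */
    out[3] = addr ^ cmd;  /* cheap integrity check           */
    return 4;             /* bytes to transmit               */
}

/* Validate a received frame before trusting its contents. */
static bool check_frame(const uint8_t *f, size_t n)
{
    return n == 4
        && f[0] == 0x7E
        && f[3] == (uint8_t)(f[1] ^ f[2]);
}
```

The master enables the RS-485 driver, sends the 4-byte poll, releases the bus, and waits (with a timeout) for the addressed slave’s reply; only one module talks at a time, so one UART thread covers the whole bus.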

Regarding the hash table, I can’t help, as that one is your problem; but RLP on the G120 is an issue that will be resolved in the next release of the G120 firmware…

Another thing that could help in your case would be a distributed architecture. The Gadgeteer structure can really help you there! Distributed architectures are much more scalable and improve the decoupling of your project’s functions!

Dealing with threads was horrific in my case, because the central data structure needs to be shared by all the threads. Even when lock conflicts were avoided, the threads spent most of their time waiting for the lock to be released. Actually, the problem is not dealing with the UARTs: the timing results show that the problem is accessing data in the hash table. (I even tried moving from the hash table to an array of floats, but didn’t get the results I expected.)
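One common way to cut that kind of contention is to hold the lock only long enough to copy a snapshot of the shared data, then run all 200 comparisons on the copy with no lock held. A sketch in C with a POSIX mutex; a .NET MF version would wrap the same copy-then-compare pattern in a lock block. Sizes and names are illustrative:

```c
#include <pthread.h>
#include <string.h>

#define N_ITEMS 200

/* Shared store: writer threads update values under the lock. */
typedef struct {
    pthread_mutex_t lock;
    float values[N_ITEMS];
} store_t;

/* Writer side: one quick update per new sample. */
void store_set(store_t *s, int idx, float v)
{
    pthread_mutex_lock(&s->lock);
    s->values[idx] = v;
    pthread_mutex_unlock(&s->lock);
}

/* Reader side: copy everything out in one short critical
 * section, then do the slow threshold comparisons lock-free. */
int count_over_threshold(store_t *s, const float *thresholds)
{
    float snap[N_ITEMS];

    pthread_mutex_lock(&s->lock);
    memcpy(snap, s->values, sizeof snap);  /* microseconds, not ms */
    pthread_mutex_unlock(&s->lock);

    int hits = 0;
    for (int i = 0; i < N_ITEMS; i++)      /* no lock held here */
        if (snap[i] > thresholds[i])
            hits++;
    return hits;
}
```

The writers then only ever wait for the duration of a memcpy, instead of for a full comparison pass; whether that helps enough on an EMX under NETMF would have to be measured.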

@ leforban -

Is your hash table defined as static or volatile? Do you use the singleton pattern to access it?

The hash table is a data member of a class, and I am using a static instance of that class as the central data structure.

I access a value in the hash table using HT[key]. Should I use something else?