If you have just a loop that monitors (polls) an input pin and then makes an output pin match what was read, what is the rough delay? 5 ms? 1 ms? 10 us? 1 us? 250 ns? 20 ns? I'm just wondering, roughly, how fast the reaction can be between the input pin going high/low and the output pin following it.
I'll set it up and try it myself when I get a signal generator source.
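Here's roughly what I have in mind, just a minimal polling-loop sketch (the GPIO_Pin0/GPIO_Pin1 choices are placeholders for whatever pins I end up wiring):

```csharp
// Minimal NETMF polling loop: copy the input level straight to the output.
using Microsoft.SPOT.Hardware;

public class Program
{
    public static void Main()
    {
        InputPort input = new InputPort(Cpu.Pin.GPIO_Pin0, false, Port.ResistorMode.PullUp);
        OutputPort output = new OutputPort(Cpu.Pin.GPIO_Pin1, false);

        while (true)
        {
            // Mirror the input on every pass. Each pass runs as interpreted IL,
            // so the loop time is far longer than the raw clock rate suggests,
            // and the GC can pause it at any point.
            output.Write(input.Read());
        }
    }
}
```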
It's usually pretty close, but the problem is you can't guarantee how close unless you move into RLP. The GC will kick in at some point and add a chunk of time. If you want consistency here, the only way to get it is to do the work outside NETMF, i.e. in RLP.
THANKS for reminding me about the GC; I never even thought about that… Is it possible to disable it for a few "moments" to avoid the issue?
Well, I assume it's faster than the blink of an eye, but I really need to know whether it's milliseconds, microseconds, nanoseconds, picoseconds, etc.
I’m about to try…I’ll be on the lookout for the garbage truck/trick.
It'll never be as short as the MHz rating of the processor would imply if you stay in managed code. That's often a criticism of NETMF, particularly from people used to working in C on an ATMEL or PIC chip, and it's why RLP is such a great addition.
As to how fast it is, that again depends on your processor speed. If the same code runs on a 72 MHz USBizi and a 200 MHz ChipworkX device, and there is no other significant difference, the reaction speed will still differ between them. The only way to know for sure is to measure it against your actual requirement.
Wouldn't the garbage collector up in NETMF land also temporarily stop the RLP routines (since only one thing can truly occur at once) and slow the RLP side down? I guess I need to read up on the interactions and hierarchy between NETMF and RLP. There's always an adventure around the corner.
Also, it seems like the GC could be a severe problem that might kick in unexpectedly at any time (if you're dealing with millisecond timings and events). Can the automatic GC be stopped momentarily for such critical times?
NETMF is not real time, so your design should be made with this in mind, GC or otherwise. If the GC would cause a "severe problem" then NETMF may not be the right tool for the job. Keep in mind that tasking in RLP is very much real time, but it is best suited to specific uses.
This is an embedded and enclosed system, so you have control over what the GC is doing. Before you run the critical task, force the GC to run so everything is cleaned up, then run your critical code. NETMF can be almost real time if you plan it right.
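Something along these lines, just a sketch (the loop body is a placeholder for your own critical work):

```csharp
// Force a collection right before the timing-critical section so the GC
// has nothing left to do while it runs.
using Microsoft.SPOT;

public static class CriticalTask
{
    public static void Run()
    {
        Debug.GC(true);   // force a full collection now, on our schedule

        for (int i = 0; i < 1000; i++)
        {
            // Timing-critical work goes here. Avoid 'new' and string
            // concatenation inside this loop so no garbage is created
            // that could trigger another collection mid-run.
        }
    }
}
```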
I've seen several questions similar to this, or about how fast you can toggle a pin high/low; I always wind up wondering what such tests are meant to prove. The question posed by the OP would give an idea of average latency, but without knowing what the end goal is there is no way to extrapolate how that might affect some performance measure of the end product.
The microcontrollers used on the FEZ boards have a wide variety of hardware resources to accomplish tasks that you would need to 'bit bang' on lesser processors. My suggestion would be to first define what you are trying to accomplish and the performance measures used to judge your product's effectiveness, and then find out how you might accomplish that task on a FEZ board.
Toggling a pin on the parallel port of a 3 GHz PC will not give you 3 GHz on the pin. But everyone thinks of this as a raw microcontroller instead of thinking of the whole system.
An 8-bit micro can toggle a pin MUCH faster than a 3 GHz 64-bit PC. A 555 timer could do the job even better.
A good test would be: how much time does it take to decode a JPEG image, or how much time is needed to write a 1 MB file, etc.
I will set up a timer output, obviously a nice steady, precise 1 kHz tick-tock… then monitor (poll) that on another input pin for those tick-tocks. On certain tick-tocks (say, for grins, 22, 75, 123, 167, 221, 234) I want to toggle another pin. How tight can/will this timing be relative to the tick-tock? Will it be within 1 us? 100 us? 2 ms? 50 ms? 3 seconds? Just a rough estimate, or an idea of the smallest reasonable uncertainty.
RLP could probably do it in sub-microsecond time, but what about standard C#?
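This is roughly what I'd try first in plain C#, just a rough sketch with placeholder pins and the example counts from above:

```csharp
// Poll the 1 kHz tick-tock on an input pin, count rising edges, and toggle
// another pin on the chosen counts. Pin numbers here are placeholders.
using Microsoft.SPOT.Hardware;

public class TickTockCounter
{
    public static void Run()
    {
        InputPort clockIn = new InputPort(Cpu.Pin.GPIO_Pin2, false, Port.ResistorMode.Disabled);
        OutputPort toggleOut = new OutputPort(Cpu.Pin.GPIO_Pin3, false);

        int[] triggers = { 22, 75, 123, 167, 221, 234 };
        int nextTrigger = 0;
        int count = 0;
        bool outState = false;
        bool lastLevel = clockIn.Read();

        while (nextTrigger < triggers.Length)
        {
            bool level = clockIn.Read();
            if (level && !lastLevel)                 // rising edge = one tick-tock
            {
                count++;
                if (count == triggers[nextTrigger])
                {
                    outState = !outState;            // toggle on the chosen tick
                    toggleOut.Write(outState);
                    nextTrigger++;
                }
            }
            lastLevel = level;
        }
    }
}
```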
If I needed to do something like this, I would use a hardware counter register (look at the Register class): set the counter to generate an interrupt at the first count you are interested in. When the interrupt fires, set your output pin, then set the counter to the next count it should interrupt at, and so on.
It sounds like you're doing some kind of "animation"? If millisecond resolution is good enough, then a simple Timer object should work. Keep an array of time deltas; when the timer fires, do your stuff, then set the next fire time to the next value in your array.
This doesn't use an external clock source, of course, but the internal clock.
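Roughly like this, a sketch only (the pin and delta values are made up):

```csharp
// Delta-array Timer approach: one-shot Timer rearmed after each event.
using System.Threading;
using Microsoft.SPOT.Hardware;

public class TimerSequence
{
    static readonly int[] _deltasMs = { 22, 53, 48, 44, 54, 13 };   // ms between events
    static int _index;
    static OutputPort _pin;
    static bool _state;
    static Timer _timer;

    public static void Start()
    {
        _pin = new OutputPort(Cpu.Pin.GPIO_Pin4, false);
        // Fire once after the first delta; no periodic repeat.
        _timer = new Timer(OnTick, null, _deltasMs[0], Timeout.Infinite);
    }

    static void OnTick(object state)
    {
        _state = !_state;
        _pin.Write(_state);                          // "do your stuff"

        _index++;
        if (_index < _deltasMs.Length)
            _timer.Change(_deltasMs[_index], Timeout.Infinite);   // arm the next delta
    }
}
```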
You can also use the hardware real-time clock's alarm function. It might be able to support shorter intervals.
What will make a big difference here is what happens between the clock "ticks". If you have a lot of processing to do, you may overrun the next checkpoint. So I don't think the problem is how quickly NETMF can respond to an external stimulus, but rather how quickly your program can get through its own work between ticks.