Analog read rate on Panda

The fastest I can manage to read an analog pin is ~10 kHz. I’m hoping for about 10x that rate, but I’ll take any speed improvements you can suggest.

int[] ai_vals = new int[100];
AnalogIn ai = new AnalogIn(AnalogIn.Pin.Ain0);
for (int k = 0; k < 100; k++)
{
    ai_vals[k] = ai.Read();   // store each sample for later processing
}

In a Release build, I am seeing about 16.7K reads per second doing just an “int r = ai.Read()” in a loop on my Panda here. Assigning to an array element knocks it down to 14.3K. Debug mode knocks it down further, to 12.5K. So try a Release build for a few extra K. You may need to offload the reads to a PIC. I’m not sure how much a native function would help.
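For reference, here is a minimal sketch of the kind of timing loop behind those numbers. It assumes NETMF’s Utility.GetMachineTime() for the timestamp and the AnalogIn setup from the earlier post; the loop count is arbitrary:

using System;
using Microsoft.SPOT;
using Microsoft.SPOT.Hardware;
using GHIElectronics.NETMF.Hardware;

public class ReadRateBenchmark
{
    public static void Main()
    {
        AnalogIn ai = new AnalogIn(AnalogIn.Pin.Ain0);
        const int N = 10000;                        // arbitrary sample count

        TimeSpan start = Utility.GetMachineTime();
        for (int k = 0; k < N; k++)
        {
            int r = ai.Read();                      // bare read, value discarded
        }
        TimeSpan elapsed = Utility.GetMachineTime() - start;

        // reads per second = samples / elapsed time
        long readsPerSec = (long)N * TimeSpan.TicksPerSecond / elapsed.Ticks;
        Debug.Print("Reads/sec: " + readsPerSec.ToString());
    }
}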

Not sure what you mean by Release build…

As for the array: I need multiple reads’ worth of values. A 16.7 kHz read rate does me no good if I throw away the previous value on each loop iteration. I can’t figure out how to do this without an array, other than shoving the values into SRAM temporarily.
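One small thing that may help at the margin (an assumption on my part, not something measured here): allocate the buffer once, outside the capture path, so no allocation or garbage-collection work lands inside the tight loop:

// Allocate the sample buffer once and reuse it across captures,
// so no allocation or GC pause lands inside the sampling loop.
static readonly int[] ai_vals = new int[100];

static void CaptureBlock(AnalogIn ai)
{
    for (int k = 0; k < ai_vals.Length; k++)
    {
        ai_vals[k] = ai.Read();
    }
    // copy or process ai_vals here before starting the next capture
}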

@ WhyItSmoking.
I was not saying to toss your array. I was adding some context and experiments of my own to confirm what you are seeing, and it seems to pan out. As for Release: change your build configuration to Release in the tool menu (Debug is the default). The Release build removes some overhead. That will be about the best you can do without writing a native library, I would think.

So… any documentation on using a “native library”?

Not knowing what you were talking about, I assumed you meant register-level access (which gave me a chance to try that out, too). I finally got it running, and it made no difference to the read rate.

Fundamentally, I don’t get it. The ADC itself looks like it can read one pin at over 400 kHz (4.5 MHz ADC clock, 11 clock pulses per 10-bit conversion, so roughly 409 kHz). I realize there’s more going on than just the ADC, but can it really only get data out at 1/25th the ADC’s speed?

I just tried setting the ADC resolution lower, to take fewer clock cycles, and using a faster clock. Neither made any difference. It feels like the ADC is much faster than it can be accessed…

Writing

b00000001001011110000000000000001 (0x012F0001) or
b00000001001000010000001000000001 (0x01210201)

to 0xE0034000 (AD0CR) and reading results from 0xE0034010 (AD0DR0), both via GHIElectronics.NETMF.Hardware.LowLevel, made no difference.
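For anyone else trying this, here is roughly what that register poking looks like as code. It is only a sketch, assuming the Register class in GHIElectronics.NETMF.Hardware.LowLevel takes a fixed address and exposes Read/Write, and that the 10-bit result sits in bits 6..15 of AD0DR0 as on other LPC23xx parts:

using GHIElectronics.NETMF.Hardware.LowLevel;

// AD0CR (ADC control) and AD0DR0 (channel 0 data) on the USBizi's LPC23xx core.
Register ad0cr = new Register(0xE0034000);
Register ad0dr0 = new Register(0xE0034010);

ad0cr.Write(0x012F0001);                 // first value above: channel 0, burst mode

uint raw = ad0dr0.Read();
bool done = (raw & 0x80000000) != 0;     // DONE flag in bit 31
int sample = (int)((raw >> 6) & 0x3FF);  // 10-bit conversion result in bits 6..15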

What you are seeing is what we expect to see from a managed loop.

Usually, applications that require data acquisition use a dedicated ADC chip with a small micro that does the sampling; the data is then forwarded to the FEZ for high-level processing.
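A sketch of what the FEZ side of that could look like, assuming the helper micro (or a standalone SPI ADC) buffers a block of samples and hands it over SPI. The chip-select pin, clock rate, and framing below are placeholders to adapt to your wiring:

using Microsoft.SPOT.Hardware;

// Placeholder wiring: adjust chip select, clock polarity/edge, and rate to the device.
SPI.Configuration config = new SPI.Configuration(
    Cpu.Pin.GPIO_Pin1,   // chip select (placeholder pin)
    false,               // chip select is active-low
    0, 0,                // no setup/hold delay on chip select
    false,               // clock idles low
    true,                // sample on the rising edge
    1000,                // 1 MHz SPI clock
    SPI.SPI_module.SPI1);

SPI spi = new SPI(config);

// Clock out dummy words; the helper device shifts its buffered samples back.
ushort[] samples = new ushort[100];
spi.WriteRead(new ushort[samples.Length], samples);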

Frustrating. I’ll take it on faith that there are really good reasons why I need another micro and another ADC and that I’m just not knowledgeable enough to appreciate everything managed code does for me. I suspect I’ll eventually drink the “managed code” water but for now it just seems to be getting in the way.

I feel like a teenager asking dad “But WHY?” and not really having been around long enough to “get” the answer.

Why is there a math coprocessor? We can do floating point right on the processor, right? :slight_smile:

You can read analog inputs right on USBizi, really fast (you have done that already). Then you can write the data to SD cards, send it over the network, and do a million other things with minimal coding and minimal effort, thanks to NETMF.

Now, if you want to do 4 Msamples/sec on some little micro, you probably can, but then the micro is too busy to do anything else. And you are doing everything right in interrupt handlers, possibly in assembly, not even C/C++!

In short, NETMF is not the answer to every question. It covers most of an embedded developer’s needs, but when you have tight timing or specialized needs, you need a custom NETMF port, or to not use NETMF altogether.

…still not sure? Give GHI a call and talk to one of the engineers.

fair enough. thanks.