Maximum effort ADC rate?


Pulling out all the tricks possible, including native code, can anyone estimate the maximum 10-bit sample rate I can achieve on 6 channels of a Spider?

Similarly, what might one maximally be able to achieve over 6 channels using the purely managed environment?

Put another way, my aim is to sample one or more channels at approximately 44 kHz. How many channels am I likely to be able to service at this rate?

Thank you to all.

Sampling is the easy part. What are you going to do with all the sampled data?

Welcome to the community.

Thank you Gus.

Clearly this is an audio application. I can’t go into the secret sauce in the middle. :frowning:

Ideally I would like to fill circular buffers at a fixed data rate. Then, have some upper-level code grab a millisecond at a time and “do stuff”. When the stuff is complete on as many channels as I can afford (this is only a prototype, so requirements are squishy), I add my computations together and send the result to a DAC.
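In plain C, the scheme I have in mind looks roughly like this. The names and sizes are just placeholders, and a real version would need one ring per channel:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical single-channel ring buffer for 10-bit samples. */
#define RING_SIZE 1024          /* power of two, so wrap is a cheap mask */
#define BLOCK_SAMPLES 44        /* roughly one millisecond at 44.1 kHz */

typedef struct {
    uint16_t data[RING_SIZE];
    volatile size_t head;       /* written by the sampling ISR */
    volatile size_t tail;       /* read by the processing loop */
} ring_t;

/* Called from the ADC interrupt: store one sample, drop it if full. */
static void ring_put(ring_t *r, uint16_t sample)
{
    size_t next = (r->head + 1) & (RING_SIZE - 1);
    if (next != r->tail) {      /* leave one slot free to mark "full" */
        r->data[r->head] = sample;
        r->head = next;
    }
}

/* Called from the main loop: pull up to one millisecond of samples.
 * Returns the number of samples actually copied. */
static size_t ring_get_block(ring_t *r, uint16_t *out, size_t max)
{
    size_t n = 0;
    while (n < max && r->tail != r->head) {
        out[n++] = r->data[r->tail];
        r->tail = (r->tail + 1) & (RING_SIZE - 1);
    }
    return n;
}
```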

From what I see, I have access to only 10-bit samples. I totally understand this is too crude for quality audio; this will be just a proof of concept.

Thank you again.

You’ll see a lot of jitter on a 44 kHz interrupt, even without doing any processing. I would not use a NETMF board for the audio processing itself; there are DSPs for that purpose. The Spider could be perfect for all the other higher-level stuff, like controlling the DSP parameters.

For a proof of concept, you might consider first writing a VST plugin that does the audio processing you’d like before moving it to hardware.

[quote]Similarly, what might one maximally be able to achieve over 6 channels using the purely managed environment?[/quote]

I can guarantee you that you will not be able to sample even a single 44 kHz channel in a managed environment. You could maybe read and save the data into a managed array, but the moment you start processing, you’re way over your budget. Based on the timing tests I performed and posted about a month ago, a single comparison or multiplication operation will take 10+ microseconds, as will an assignment from one variable to another (x = y), while you have only about 23 microseconds between samples for everything you need to do.

RLP is the way to go, especially if you want to do processing.
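As an illustration of the kind of integer-only processing you would push down into native code, here is a small C sketch. The function name and fixed-point format are my own invention, and this is not the actual RLP entry-point signature (see the GHI RLP documentation for that); it only shows the flavor of the work:

```c
#include <stdint.h>
#include <stddef.h>

/* Apply a fixed-point gain to a block of 10-bit samples using only
 * integer math (no floating point on the ARM core).
 * gain_q8 is an 8.8 fixed-point factor, e.g. 0x0180 = 1.5x. */
static void apply_gain_q8(uint16_t *samples, size_t count, uint16_t gain_q8)
{
    for (size_t i = 0; i < count; i++) {
        uint32_t v = ((uint32_t)samples[i] * gain_q8) >> 8;
        samples[i] = (v > 1023) ? 1023 : (uint16_t)v;  /* clamp to 10 bits */
    }
}
```

A loop like this over a millisecond block is trivial in native code, but at 10+ µs per managed operation it would be far over budget in C#.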

Thank you MoonDragon.

In RLP, do I lose access to the provided drivers?

Thank you

MoonDragon, the timing you quoted really worries me. I’ll search for your posting but if it is convenient (and easy) for you, could you post it in this thread? It might help others who read this thread later.

Holy cow. MoonDragon’s thread is here:

This kind of overhead is lethal.

I’m sorry I missed your previous posts.

For your first question, I’m not really sure of the answer. It would probably be better to wait for someone more knowledgeable to answer.

For the timing part, I wrote a somewhat more detailed post on my blog, along with my follow-up example of RLP processing. You may find it interesting:

Thank you MoonDragon - btw my principal area of expertise is also imaging related.

[quote]Holy cow. MoonDragon’s thread is here:

This kind of overhead is lethal.[/quote]

That’s why, even on dedicated DSPs, almost all audio processing is written in assembly. Heck, even on my desktop, audio apps that generate sound using VSTs and the like need a very low-latency specialized audio driver and additional external hardware, and that’s a 6-core 4 GHz beast :slight_smile:

Like most higher-level languages and platforms, NETMF just isn’t well suited to real-time signal processing. I’ve tried it myself, attempting to generate a waveform (not using PWM) as samples fed to an external DAC.

I have yet to use RLP, but moving your audio sampling and processing there while keeping the rest of the application in NETMF may work well for you. Keep in mind that if you decide to go stereo, you need to double your sampling.