I have a FEZ Spider Starter Kit, which I am using to evaluate whether the .NET Micro Framework could work for my project. I would like to detect circles of known radius at relatively high speed, 20 to 30 fps, in images streamed from a camera. A resolution of 320x240 px would be the minimum, since precision of detection (besides speed) is critical. Actually, I need to detect only one circle.
Having experimented with the camera module, I could achieve only about 3 fps at 320x240 px and about 10 fps at 160x120 px using StartStreamBitmaps/BitmapStreamed. With TakePicture/PictureCaptured the speed is even lower, e.g. only 1 fps at 320x240 px. As I understand it, adding Hough transform processing would take at least another 200 ms, even implemented in RLP. I built the project in Release mode, with no debugger attached. Data transfer over a serial port is also slow, so using serial cameras would not help either. It seems image processing on the EMX module simply cannot be done. I have read about that on this forum, but wanted to prove it to myself.
So at the moment I see only one way to solve the problem: use an external microcontroller that processes video from a camera connected directly to it and streams only the coordinates of the detected circle(s) to the main module, e.g. via UART. The CMUcam project might seem like a possible solution:
But it detects objects based on color, and I am afraid black objects would not be recognized properly, or other black objects in the background would prevent detecting the desired circle precisely.
I would very much appreciate it if you could point me in the right direction, e.g. what chip/board could make this work. I could program it myself; C/C++ is not a problem. Or maybe you know someone who could implement it, or whom I could talk to?
Image processing is a very specific function. There are often better uC choices with direct camera inputs that do DMA transfers, rather than higher-level buffering into general RAM constructs. You may find people with experience in this on this forum, but don’t be surprised if you don’t get too many suggestions, sorry.
Thanks for the quick response! Sadly, I could not find any commercial or near-commercial solution on the market. It seems nobody is doing high-speed image processing with NETMF. Okay, I will keep searching.
Thanks a lot for the suggestion! That cam looks very promising.
By the way, I saw that Oberon microsystems works with Mountaineer boards based on STM32F407 microcontroller. Have you tried to capture images from the camera module using that board? It is much faster than the Spider board based on the EMX module (168 MHz vs. 72 MHz).
I would very much appreciate it if you could check that. If it is capable of capturing images at 20–30 fps, I would buy one board for evaluation purposes, and if everything works well, I might be interested in cooperating with your company on designing the final product.
No, we haven’t tried that yet. With a serial camera module that should be possible in principle. If the camera yields one byte per pixel, this means 75 KB per image. On a Mountaineer board, one such image may or may not fit into the microcontroller’s RAM, depending on how much additional memory your algorithm needs. If not, you would have to process the image incrementally.
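To make the RAM argument concrete, here is a quick footprint check. The 192 KB SRAM figure is my assumption for the STM32F407 (the value in ST's datasheet); the helper name is just for illustration:

```python
def image_bytes(width, height, bytes_per_pixel=1):
    """Raw size of one frame in bytes."""
    return width * height * bytes_per_pixel

qvga = image_bytes(320, 240)   # one 8-bit QVGA frame
print(qvga, qvga / 1024)       # 76800 bytes = 75.0 KB

# Rough headroom check against an assumed 192 KB of on-chip SRAM
sram = 192 * 1024
print(sram - qvga)             # bytes left for the algorithm and runtime
```

With 16 bits per pixel the frame doubles to 150 KB, which is why incremental (e.g. line-by-line) processing becomes attractive.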
The other hurdle is that NETMF uses a densely coded intermediate format (.NET CIL code), which is interpreted at run-time. This is roughly two orders of magnitude slower than native code. For many applications, this doesn’t matter at all, as the processing-intensive parts (e.g. network stacks) are part of the platform itself, written in C. But for compute-bound processing in an application, this approach is definitely not optimal. It is possible to use the NETMF interop mechanism to implement the algorithm itself in C, but this is more involved and requires relatively expensive tools (Keil MDK-Standard for Mountaineer boards).
@s4o, you may be better off looking at a system like the Hydra, which has a lot more memory. But I think you’re unlikely to get the frame rate you’re talking about without a proper camera-and-processor pair, which is not what general systems like the Spider/Mountaineer/Hydra are.
Let’s talk hypothetically about this for a while. If you get 1 fps, will that work for your application? What about if you manage 2 fps? 5 fps? How sensitive is your application to frame rate? What if you are getting 5 fps and that drops back to 2 fps, is that an issue?
What do you need to do with the data once you have established it? Will you have user interaction? Will it be connected to any other devices/sensors besides the camera?
It seems that existing NETMF solutions cannot handle 30 fps from a camera at QVGA (320x240) resolution. Assuming 8 bits per pixel, the transfer rate for the image stream would be 17.578125 Mbps at 30 fps and 11.71875 Mbps at 20 fps. The latter is lower than the theoretical 12 Mbps supported by the Spider board, so 20 fps would be the maximum. As far as I remember, the camera module included in the Spider Starter Kit delivers images in a 16-bits-per-pixel format, so the theoretical maximum frame rate at QVGA resolution would be no higher than about 10 fps, not taking into account conversion from the raw data to data wrapped in the Bitmap class.
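The figures above can be reproduced in a few lines. Note the post's numbers use binary megabits (2**20 bits per Mbit); the helper name is mine:

```python
def qvga_mbps(fps, bits_per_pixel=8):
    """Raw transfer rate for a 320x240 stream, in binary megabits per second."""
    return 320 * 240 * bits_per_pixel * fps / 2**20

print(qvga_mbps(30))   # 17.578125 -> exceeds the 12 Mbps Full Speed bus
print(qvga_mbps(20))   # 11.71875  -> just fits under 12 Mbps

# With the kit camera's 16 bpp format, the bus itself caps the frame rate:
max_fps = 12 * 2**20 / (320 * 240 * 16)
print(max_fps)         # ~10.24 fps, matching the "about 10 fps" estimate
```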
Is it possible to raise the transfer rate for USB devices? The Spider board is compatible with USB 2.0, but the data rate is limited to 12 Mbps because of the low microprocessor speed (72 MHz). So what about the Cortex-M4 with its 168 MHz? Would it be possible to have a higher USB data transfer rate on boards built on its basis?
If the camera delivered a bitmap stream at 30 fps and the rate dropped to 20 fps, that would not be an issue. Lower frame rates would not be desirable.
As soon as a bitmap arrives in the main module, it is processed (using the Hough transform), and the coordinates of the detected circle, along with some additional data, are buffered and streamed to the outside world via USB, Ethernet, or WiFi. No user interaction will be required. Other modules will be connected to the board as well, but they will not be as greedy.
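For streaming "coordinates plus some additional data" over UART/USB, a small fixed-size binary packet is usually enough. A minimal sketch; the field layout here is my assumption, not anything the boards mandate:

```python
import struct

# Hypothetical packet: little-endian frame counter, circle center, radius (8 bytes)
PACKET = struct.Struct('<HHHH')

def pack_circle(frame_id, cx, cy, radius):
    """Serialize one detection result for transmission."""
    return PACKET.pack(frame_id & 0xFFFF, cx, cy, radius)

def unpack_circle(payload):
    """Decode a packet back into (frame_id, cx, cy, radius)."""
    return PACKET.unpack(payload)

msg = pack_circle(1, 160, 120, 42)
print(len(msg), unpack_circle(msg))   # 8 (1, 160, 120, 42)
```

At 8 bytes per frame, even 30 fps is only 240 bytes/s, so the link back to the main module is trivially cheap compared to streaming raw images.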
NETMF isn’t really designed for this type of number crunching. I think you will have to look at something like an STM32F407, with a camera connected directly to the STM’s camera interface, which is an 8-bit interface designed for the job. You will have to find a decently documented camera though, which apparently isn’t as easy as it sounds. The code will have to be developed on bare metal, in C or C++, without the aid of NETMF.
Neither the EMX nor the Cerberus-based boards from GHI support USB 2.0 High Speed; they only support USB 2.0 Full Speed, which is 12 Mb/s. The STM32F4 does support USB 2.0 High Speed, but it requires an external USB 2.0 transceiver, which the Cerberus boards do not have, and there are currently no drivers implemented that support an external transceiver. I might be wrong, though.
This year the FIRST Robotics Competition game was to shoot basketballs. Many teams had great success with dead-on camera aiming. My team used LabVIEW with the National Instruments Vision Assistant to track and aim our robot. We got it working on an NI cRIO (400 MHz PowerPC chip). We used an Axis IP camera with MJPEG or MPEG-4 encoding over a hard-wired LAN connection. We could do 30 frames per second of rectangle detection and distance estimation. However, there was no time left for other robot tasks. We got it working on the driver-control laptop, but intermittent lag caused reliability issues. Some other teams used a BeagleBone with some stripped-down code. They also had TI support.

Look into OpenCV. It’s free and very capable. Do you need to continuously process a video stream? If so, you need some serious hardware. Or can you take a snapshot, process it, take action, take another snapshot, take action, etc.? OpenCV supports USB cams well. There is also a .NET version that you can use with Visual Studio and C#.

This fall our team is looking at using OpenCV and x86 hardware: Atom and VIA. VIA has some rugged embedded systems, although they are pricey. Vision does like to eat hardware. I don’t think any of GHI’s hardware can handle anything but very simple vision processing.
@s4o: If one module does 2 frames per second and you need 20 frames per second, then get 10 modules and distribute the task among the processors. Also, GHI has Runtime Loadable Procedures (RLP), which may give you 10 frames per second; you’ll never know until you try!
The main issue is not processor speed. Of course, I would use RLP for the Hough transform; there is simply no other way of doing it. The problem is the low USB transfer rate. GMod(Errol) already mentioned the other drawbacks.
Unfortunately, NETMF is distinctly unsuitable for this kind of image processing. Your best bet is to grab yourself an STM32 (I think you need at least the 100-pin package to get the DCMI peripheral), grab a cheap camera unit, and break out your C skills. I don’t know for sure, but I would imagine there would be enough processing power available (especially with the DSP instructions and hardware FPU on the STM32F4) to do the Hough transform on top of managing the camera.
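For reference, the single-radius circle Hough transform discussed in this thread is small enough to prototype in a few lines before porting it to C. This is an illustrative sketch on a synthetic edge image, not tuned embedded code; the function name and parameters are mine:

```python
import math

def hough_circle_center(edge_points, radius, width, height, n_angles=90):
    """Each edge pixel votes for all centers lying at `radius` from it."""
    acc = [[0] * width for _ in range(height)]
    for x, y in edge_points:
        for i in range(n_angles):
            t = 2 * math.pi * i / n_angles
            cx = int(round(x - radius * math.cos(t)))
            cy = int(round(y - radius * math.sin(t)))
            if 0 <= cx < width and 0 <= cy < height:
                acc[cy][cx] += 1
    # The accumulator peak is the most-voted candidate center
    _, best = max((acc[y][x], (x, y)) for y in range(height) for x in range(width))
    return best

# Synthetic test: edge pixels of a circle centered at (60, 40), radius 20
pts = [(60 + round(20 * math.cos(a)), 40 + round(20 * math.sin(a)))
       for a in (2 * math.pi * k / 120 for k in range(120))]
print(hough_circle_center(pts, 20, 160, 120))   # close to (60, 40)
```

Because the radius is known, the accumulator is only 2-D (center x, center y), which is what makes a port to RLP or bare-metal C on a Cortex-M4 plausible; a full unknown-radius Hough would need a 3-D accumulator and far more memory.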
You’d need an STM32F407 to get the DCMI peripheral, such as the one found on the STM32F4 Discovery board. I have been contemplating a vision project using a Toshiba TCM8230MD 640x480 camera. One of these days…