I got curious whether the SPI output looks the same when different boards are used, so I ran an experiment. I used the same code (except on Cerb, where I had to change [em]Clock_Edge[/em] to the opposite value; this issue has already been reported) and the same firmware that shipped with SDK 2016 R1 Pre 1 to send 4 bytes over the SPI port, and captured the signals (SCLK, MOSI and CS) with a data logger. I used every board I had, so only the G30 is missing from this comparison.
There are small pauses/delays (look at the SCLK line) between the bytes sent when the G120, G400 and Hydra mainboards are used, compared to all the other boards. I believe this is called "SCLK setup time", and it is not 0 on these three boards.
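To put the cost of such inter-byte pauses in perspective, here is a back-of-the-envelope sketch. The 2 µs gap and 1 MHz clock below are hypothetical figures for illustration only, not measurements from any of the boards:

```python
# Effective SPI throughput when a fixed pause separates each byte.
# All figures are illustrative; substitute your own measured gap.

def effective_throughput(clock_hz, gap_s):
    """Return (bytes_per_second, overhead_fraction) for 8-bit transfers
    separated by a fixed inter-byte gap."""
    byte_time = 8 / clock_hz          # time to shift out 8 bits
    total_time = byte_time + gap_s    # plus the pause before the next byte
    return 1 / total_time, gap_s / total_time

# Example: 1 MHz clock, hypothetical 2 us gap between bytes.
rate, overhead = effective_throughput(1_000_000, 2e-6)
print(round(rate))        # ~100000 bytes/s instead of the ideal 125000
print(round(overhead, 2)) # ~0.2, i.e. 20% of the bus time is lost to gaps
```

Even a small per-byte pause therefore eats a sizeable fraction of the bus at higher clock rates, which is why the gap matters more than it first appears.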
It takes more than 10 ms for the MOSI line to go high after a transmission ends when the G120 mainboard is used, which is considerably longer than on the other boards. A further test showed that adding a pull-up resistor eliminates this issue.
When the SCLK line is inactive/low (SCLK setup time) during packet transmission, the MOSI line is high on the G400 and Hydra mainboards. The other mainboards do not behave this way. A pull-down resistor on the MOSI line has no effect on this behavior.
The sequence of line assertions on the G120, G400 and Hydra mainboards differs from the rest: CS -> MOSI -> SCLK vs MOSI -> CS -> SCLK.
The CS line goes high just before the last SCLK transition from high to low when the G80 mainboard is used (look at the zoomed segment). The other mainboards do not behave this way.
The SCLK line is asserted before data transmission starts when the G400 and Hydra mainboards are used. The other mainboards do not behave this way.
The CS-assertion-to-first-SCLK-edge and last-SCLK-edge-to-CS-deassertion timings differed across all boards and were not exactly zero, even though I specified both [em]ChipSelect_SetupTime[/em] and [em]ChipSelect_HoldTime[/em] as zero.
I did some further tests on pin voltages and states (look at the summary table). There were quite a few inconsistencies. The most serious looks to be the floating MOSI pin on the G120 during periods when no data is transmitted over SPI. I was also a bit surprised to see 2V6 and 2V8 voltages. I must point out that those readings are averaged, which is why you see unusual values on the SCLK line in the last row.
I guess the biggest possible impact of all these inconsistencies would be during a transition from one mainboard to another: something that worked correctly before could, in theory, no longer function as expected.
I too had hoped to move an SPI device from platform to platform, but found that not to be possible.
I have a couple of interfaces that require near-optimal access to SPI devices, so I will need to investigate this further as well. It would be hard to accept your findings on issue #1 without understanding the cause, or without guidance on what variable delays to expect.
@ iamin - anything you see happening outside of when the chip select is asserted is undefined. This is normal. For example, MOSI becomes floating on some systems to allow multiple masters, which is why you needed a pull-up on the G120. This will not affect an SPI slave that uses a chip select, which all should. In cases like LED strips, a resistor will be needed.
Timing is also not critical on SPI devices, e.g. the time from CS going low to the first clock. The added time does not affect the end result.
What you are guaranteed is where MOSI is when the clock transitions, and eight clocks per transfer. If you see variation between devices, we will happily investigate.
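That guarantee (MOSI stable at the sampling edge, eight clocks per byte) is really all a decoder relies on; the gaps between bytes drop out entirely. A minimal sketch of such a decode, assuming mode-0-style sampling of MOSI on each SCLK rising edge and using made-up trace data, not a capture from any of the boards:

```python
def decode_spi_mode0(sclk, mosi):
    """Decode MOSI bits sampled on each SCLK rising edge (MSB first).
    sclk/mosi are equal-length lists of 0/1 logic-analyzer samples."""
    bits = [m for prev, cur, m in zip(sclk, sclk[1:], mosi[1:])
            if prev == 0 and cur == 1]          # rising edges only
    # Group every 8 sampled bits into one byte, MSB first.
    return [int("".join(map(str, bits[i:i + 8])), 2)
            for i in range(0, len(bits) - 7, 8)]

# Build a toy trace for the byte 0xA5: two samples per bit, clock 0 then 1.
bits = [1, 0, 1, 0, 0, 1, 0, 1]                 # 0xA5, MSB first
sclk = [0, 1] * 8
mosi = [b for b in bits for _ in (0, 1)]
print(hex(decode_spi_mode0(sclk, mosi)[0]))     # 0xa5
```

However long the idle stretches between bytes are, the decoded data is the same, which is the sense in which the inter-byte pauses are "harmless" for ordinary chip-selected slaves.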
@ iamin - We are not able to reproduce the behavior you mention in #5. Using the latest pre-release SDK with a Display N18 connected to socket 8 on a FEZ Reaper, with an SPI clock of 12 MHz, chip select was deasserted 0.5 us after the last clock finished. If you have some more information about your setup, we can always try again.
It looks like your Cerb test is set up differently than the others. I can barely make out the arrows, but it looks like data is read on the rising edge for the Cerb and on the falling edge for all the others. Was that an accident, or another issue?
I agree with you, Gus: for normal SPI operations the timings are correct… but the delay between bytes slows down the SPI transaction, so if you want to use SPI for very fast transfers in a real-time context, this wasted time is a handicap and could cause strange behavior when porting apps from one platform to another.
It’s also surprising that the smaller platforms (G80, Cerb) give better results than the powerful ones (G400, G120). I have spent years writing WEC drivers for many SoCs, including Atmel SAM9 and NXP LPC parts, so I’m sure that SPI could be faster and more efficient than it currently is (for example, on the SAM9X35: DMAC enabled, FIFO enabled, DLYBCT=0, …).
Finally, some people (like me) wanted to use SPI MOSI alone to elegantly generate the accurate timings needed to drive WS281x addressable RGB LEDs (e.g. Adafruit NeoPixel strips, matrices and rings)… which is not possible with a delay between transferred bytes…
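For anyone unfamiliar with that trick: the usual approach is to over-sample, running SPI at roughly 2.4 MHz and expanding each WS281x data bit into three SPI bits (100 for a 0, 110 for a 1), so the MOSI waveform approximates the required pulse widths. A sketch of that expansion; the 3-bits-per-bit scheme and the ~2.4 MHz figure are the common rule of thumb for WS2812-class parts, not anything platform-specific:

```python
def ws281x_encode(data):
    """Expand each payload bit into 3 SPI bits: 0 -> 100, 1 -> 110.
    At ~2.4 MHz each SPI bit lasts ~417 ns, so '110' yields a ~833 ns
    high pulse (a WS281x '1') and '100' a ~417 ns pulse (a '0').
    Returns the expanded stream packed back into SPI bytes."""
    stream = []
    for byte in data:
        for i in range(7, -1, -1):                  # MSB first
            stream += [1, 1, 0] if (byte >> i) & 1 else [1, 0, 0]
    # Each payload byte becomes 24 expanded bits, i.e. 3 SPI bytes.
    return bytes(int("".join(map(str, stream[i:i + 8])), 2)
                 for i in range(0, len(stream), 8))

print(ws281x_encode(b"\xff").hex())   # db6db6
print(ws281x_encode(b"\x00").hex())   # 924924
```

This is exactly why the inter-byte gap is fatal here: the LED decodes pulse widths continuously, so any pause between the SPI bytes corrupts or latches the frame mid-stream.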
So, now, what is the next step?
Should GHI optimize its SPI driver to give the best results on all platforms?
Should GHI add driver IOCTLs, or enhance its SPI configuration class, to allow programmers to fine-tune the controller?
Should we write our own complete and efficient driver with RLP? (I don’t know whether that is possible (DMAC, interrupts, …).)
@ Olivier.ov - Which boards are you having trouble driving WS28xx LEDs with? The G120, G30, G80, etc. all work fine.
The only difference between the boards when driving WS28xx’s is that the leading edge has to be changed between the STM boards and the G120s.
The Cerbs and G80 run at 168-180 MHz vs 120 MHz for the G120, hence they are faster.
Just to add a voice to the SPI timing comparison: for a commercial project we need 20 PID temperature controllers, and SPI looks like the best way to handle the control and feedback values. To achieve those optimal SPI timings, we would be OK with a set of conditions, such as no serial-port receive handlers or no networking. However, we would like to keep the custom USB client operational, or restrict its use while we are using SPI.
Anyway, these should be exciting times ahead; we know this can be done with an FPGA or an FPSLIC, but there is currently a window to design in a G400-based solution.
So I noticed this timing problem a while ago, but I had thought that simply bit-banging with RLP would solve it. Someone on this forum had done I2S audio with NETMF and gotten it to work very well: the audio part was in RLP and the other functions ran in the CLR. Basically, the interrupts ran the RLP code in real time, while NETMF ran in whatever CPU time remained. When I was working on the WIZnet driver (see my code contributions), I looked at the native code that controls SPI on the Cerb family, so I knew it was bound to produce some degree of jitter, which I was willing to live with.
Since my project requires controlling stepper motors over a long period of time (a Cartesian robot), I looked at things like dual-port SRAMs to “buffer” the SPI data from the Cerb to the dSPINs. I decided that RLP would be the best solution going forward for that real-time control. Further, it lets me do things like have 4 data pins on the SPI.
Does anyone have a comparison of what the SPI output would look like if driven from RLP instead of from within NETMF?