16-Bit SPI Mode


How do you handle data sent over the SPI bus when the settings specify “DataBitLength = 16”?

Do you enable the MCU’s 16-bit transfer mode? If so, why is there no method that takes ushort arrays as parameters?


Correct, why not?! Thanks for pointing it out.

But you wouldn’t cheat and convert the ushorts back to bytes under the hood, would you? :wink:
I mean it would have to be a real 16-bit transfer all along.
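For what it’s worth, here is what that “cheat” would look like: a minimal C sketch of the byte-level logic only (the real API is C#, and the function name is purely illustrative, not an actual TinyCLR method):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical "cheat": split each 16-bit word into two bytes,
 * MSB first, so the data can be pushed through an 8-bit transfer.
 * Illustrative only -- not an actual TinyCLR API. */
static void pack_words_msb_first(const uint16_t *words, size_t count,
                                 uint8_t *out)
{
    for (size_t i = 0; i < count; i++) {
        out[2 * i]     = (uint8_t)(words[i] >> 8);   /* high byte */
        out[2 * i + 1] = (uint8_t)(words[i] & 0xFF); /* low byte  */
    }
}
```

On the wire this produces the same bytes as a true MSB-first 16-bit transfer, which is exactly why it would be hard to spot without checking the peripheral’s frame-size register.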

You can’t cheat. The point of 16-bit is sending 16 bits… that is, if we support it.

We either remove the 16-bit option or we fully support it.

Why do you care about 16-bit?

Because of performance, mainly for displays.

For example, in our NETMF 4.4 drivers, using 16-bit transfer mode on the ST7735 (128x160) increased the framerate from 36 to 51 fps (with full-screen flushes). Overall, in the drivers that support 16-bit mode, we saw roughly a 35–40% increase in framerate.

Of course, we may not see such an increase here because of the managed-to-native overhead, but I still think it should be more than noticeable.

That said, this was not a criticism but rather a question, since the MCU has this capability but TinyCLR does not expose it.

And I mentioned cheating because another OS does exactly that… :wink: Please don’t take it personally.

Out of curiosity: since DataBitLength is an integer, how do you handle other values like 7, 18 or 37 bits? I understand those are strange values, and I have not seen them used yet.

OK, so what you need is fast transfer, regardless of 8 vs. 16 bits. Here it is: https://github.com/ghi-electronics/TinyCLR-Libraries/issues/491

Just like on the underlying processor, these values are not supported. And if it did support them, we would have to decide how to expose them.

It is very important to keep a nice and clean API that works for everyone. We do not want edge cases and confusion. As above, we focus on the need, not the feature… you need fast display support, not 16-bit. Our job is to find a way to make it fast… it probably already is, but we will check again!

Thanks for your reply. I understand your point about needs vs. features.

I didn’t want to argue or insist on having this feature. It’s just that 16-bit transfer is faster by nature (and supported by the MCU), yet there were no methods to handle it, despite the DataBitLength parameter. That was all I wanted to point out.

I may comment on the issue, however, to give more examples or just chat :wink:

Agree 100%. We are working on it.


I’ve put a comment on the commit for PR #529 on GitHub, but I think the forum is more suitable for discussion, hence my post here.

The PR is about SPI being 8-bit only.

There are several things that are still unclear to me, and I would like to know more details about your implementation.

First: to me, endianness makes no sense for a single byte. So how is this property handled under the hood?

You say that you do 16-bit internally. OK, but support for a 16-bit data length is not only on the MCU side. So how does it work if the device does not support 16-bit frames? You can’t know in advance whether the device will accept them.

Also, and this is more or less related to the point above, what happens if I send an array with an odd number of bytes?

Last point: in issue #491, you say you’re doing 16/32-bit auto internally. What is this 32-bit mode? Is it supported by the MCU?

We talked about supporting 16-bit for speed, and internally we have done even better than that: data is pushed at the maximum possible speed. Doing 16-bit just for displays is silly at this point.

As for byte ordering, this is something that can be handled at the application level.
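Since byte ordering is left to the application, the fix-up amounts to swapping each adjacent byte pair before the write. A minimal sketch of that logic in C (the actual TinyCLR API is C#; the function name is illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* Application-level byte-order fix-up: swap each adjacent pair of
 * bytes in place before handing the buffer to an 8-bit SPI write.
 * Assumes the buffer holds 16-bit words, so its length is even.
 * Illustrative helper, not a TinyCLR API. */
static void swap_byte_pairs(uint8_t *buf, size_t len)
{
    for (size_t i = 0; i + 1 < len; i += 2) {
        uint8_t tmp = buf[i];
        buf[i]      = buf[i + 1];
        buf[i + 1]  = tmp;
    }
}
```

Doing the swap in place avoids allocating a second buffer, which matters on a memory-constrained MCU.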

If something else is needed, please provide the part number of the device you are trying to support, along with example code. We will be more than happy to take a look.

About endianness: this is at the bit level, not the byte level.

Thanks for your reply. It’s not what I expected but there are answers anyway.

16-bit was not only about transfer speed.
Many sensors return 16-bit data (even 24 or 32 bits, sometimes), so reading them directly as Int16 contributes to the overall speed: you don’t need to do conversions in managed code since the value is already decoded by the firmware.
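Without 16-bit reads, that decoding falls to the application. A minimal C sketch of the conversion involved, assuming a sensor whose registers are big-endian (the helper name is illustrative, not from any actual driver):

```c
#include <stdint.h>

/* Combine two bytes read from a big-endian 16-bit sensor register
 * into a signed 16-bit value (e.g. a temperature reading).
 * Illustrative helper, not part of any actual sensor driver. */
static int16_t decode_be16(uint8_t hi, uint8_t lo)
{
    return (int16_t)(((uint16_t)hi << 8) | (uint16_t)lo);
}
```

The cast through uint16_t matters: shifting and OR-ing first, then reinterpreting as int16_t, preserves the sign of negative readings.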

In this regard, bit-level endianness is useless to me.

This subject is closed for me now. The current behaviour does not fully satisfy me, but it is what it is; I’ll live with it.
The important thing is that I now know what to do and how.

I know. We can revisit this in the future, but right now there are much more critical things to tackle. This is a “nice to have” vs. a “show stopper” issue.

Just checked: when flushing the display, bytes go out back to back with no idle time.

As soon as I get a board, I will form my own opinion with the logic analyzer :wink:

In the meantime, I take your word for it.

Oh please! We want you to test that and everything else. Get your analyzer ready. By the way, your boards will not ship today, as we are busy releasing preview 5.


I will not test… I will torture :smiley:

And that is why you are one of my favorite insiders. Love the honesty :wink: