RLPLite for Cerb - implementing a simple DMX transmitter


Hi people,

As I said in another thread, I’m trying to make a simple DMX transmitter with a Cerberus board and an RS485 transceiver, as required by the DMX standard.

As you can see in the frame example, it is not advisable to do this in managed code (someone has done it with a Netduino Plus and a logic AND chip, but it seems to work only on that board because of specific timing assumptions).

Ok, then let’s do it in RLPLite.
I managed to compile the RLPLite demo for Hydra and get it working on Cerberus. But now, in my abysmal ignorance of the microcontroller world, I ask myself: what now? And that’s where all hell breaks loose: I’m literally lost in the datasheets, code samples and documentation.

My questions :

  • When I enter RLPLite, all managed code stops; does that mean all interrupts are ignored, or just postponed? As the timing of the DMX frame is of the essence, the code should not be interrupted while sending the frame, but interrupts should resume normally afterwards.

  • How do I implement a break signal on a USART serial port? Do I have to revert the serial pins to GPIO output mode for this? If so, when I switch back to serial mode (to send the actual data), will it be instantaneous (because we can’t lose time there)?

  • What’s the correct way to handle the timing on the STM32F4? I don’t understand the sample, which uses an interrupt; I think it is too slow for the µs resolution we need here.

Thank you very much in advance for any pointers or help on this!



Have you searched code share for RS485?


@Gus - OK, found it:

For the moment, I have absolutely no clue how to port this from the FEZ Cobra and full RLP to STM32F4 boards and RLPLite :frowning:


I would not “port” it, as it will be very different, but the existing code is good for learning how it was done.


Made some progress: it seems RLPLite is overkill and a simple transmitter can be register-based, by switching the baudrate on the fly. The following code works on my Cerberus. The baudrate is lowered to 101851 to simulate the break + MAB within 108 µs with a sent null byte (the MAB lies in the stop bits).

There is still one bug: the channels mapped on the DMX fixture land at double their index in the byte array. For example, channel 1 is Data[2] and channel 2 is Data[4]; the odd indices are ignored, and I don’t know why. Does someone have a clue? Is it a timing issue?

        private GTI.Serial _sp;
        public byte[] Data;

        // USART2 registers (APB1)
        const UInt32 PERIPH_BASE = 0x40000000;
        const UInt32 USART2_BASE = (PERIPH_BASE + 0x00004400);
        const UInt32 USART_BRR_OFFSET = 0x00000008;
        const UInt32 SYSTEM_APB1_CLOCK_HZ = 42000000;
        static Register USART2_BRR = new Register(USART2_BASE + USART_BRR_OFFSET);

        static private byte[] nullbyte = new byte[1] { (byte)0 };

        public DmxTransmitter(int socketNumber)
        {
            // This finds the Socket instance from the user-specified socket number.
            // It generates user-friendly error messages if the socket is invalid.
            // If there is more than one socket on this module, then instead of "null" for the last parameter,
            // put text that identifies the socket to the user (e.g. "S" if there is a socket type S).
            Socket socket = Socket.GetSocket(socketNumber, true, this, null);

            _sp = new GTI.Serial(socket, 250000, GTI.Serial.SerialParity.None, GTI.Serial.SerialStopBits.Two, 8, GTI.Serial.HardwareFlowControl.NotRequired, this);
            Data = new byte[513];
        }

        private static UInt16 Brr(int baudrate)
        {
            return (UInt16)((SYSTEM_APB1_CLOCK_HZ + (baudrate >> 1)) / baudrate);
        }

        private void Transmit()
        {
            // Break + MAB: at 101851 baud, one null byte keeps TX low
            // ~88 us (start + 8 zero bits) then high ~20 us (stop bits)
            USART2_BRR.Write(Brr(101851));
            _sp.Write(nullbyte, 0, 1);

            // Back to the 250 kbaud DMX data rate, then send the frame
            USART2_BRR.Write(Brr(250000));
            _sp.Write(Data, 0, Data.Length);
        }


Something is definitely wrong with the timing: when I replace the (DMA-enabled?) Write(byte[] buffer, int, int) with a managed loop of WriteByte calls, the “odd-ignored” bug disappears. Also, timing the buffer write shows it actually takes half the time it should. Another problem is that the MSB of my bytes is not correctly transmitted, but I can’t prove it for the moment without an oscilloscope :expressionless:


I was right: due to a bug in the USART initialization on Cerberus, the two stop bits were never applied to buffered writes. Refer to:
It worked with a managed loop around the writes because, thanks to the delay between byte writes, the high state of the TX line is seen as two stop bits by the DMX receiver…

The workaround is to force the stop bits in the register after serial.Open():

USART2_CR2.Write((UInt16)(2 << 12));

The whole code is working now and performance is good: transmitting a whole DMX universe (512 channels) takes only 24 ms, and my 48-channel fixture can be refreshed in less than 3 ms! That’s really nice.
I would like to submit a Codeshare snippet about this, but Codeshare submission is apparently disabled for me :frowning:


Is it possible to read DMX channel data in managed code?


Any idea why the above code works with socket 6 (U) but NOT with socket 2 (UK), even though I specified no hardware flow control?


Answering myself: because I am a moron and forgot to update the register definitions to use the other USART.