Code slower in thread

Does anybody have an idea why code running in a thread is slower than code in the main loop?

I tested on a Cerbuino with the latest firmware and NETMF 4.3.

2.4 vs 2.75 seconds

Paste this into an empty project…

    var led = new OutputPort(Generic.GetPin('B', 2), false);

    //while (true)
    //{
    //    long d1 = DateTime.Now.Ticks;
    //    for (int i = 0; i < 100000; i++)
    //    {
    //        led.Write(true);
    //        led.Write(false);
    //    }
    //    long d2 = DateTime.Now.Ticks;
    //    Debug.Print((d2 - d1).ToString()); // 2.4 seconds
    //}

    Thread t = new Thread(() =>
    {
        while (true)
        {
            long d1 = DateTime.Now.Ticks;
            for (int i = 0; i < 100000; i++)
            {
                led.Write(true);
                led.Write(false);
            }
            long d2 = DateTime.Now.Ticks;
            Debug.Print((d2 - d1).ToString()); // 2.75 seconds
        }
    });

    //t.Priority = ThreadPriority.Highest;
    t.Start();

@ Alex111

What is happening in the main thread?

This is the main thread…

The main thread just starts the thread.

It makes no difference whether I have a Sleep at the end of the main thread or not…

How are you converting ticks to seconds?

There is some context switching between threads, so this might be the cause.

Even if you do not create a thread, Main runs on a thread itself.

Could be that my conversion is wrong. I just divided the ticks; I thought a tick was one nanosecond…
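For reference: a DateTime tick is a 100 ns unit (10,000,000 ticks per second), not a nanosecond. A minimal conversion sketch, assuming the usual TimeSpan tick constants are available in NETMF's mscorlib (otherwise just divide by 10,000,000; the Sleep is only a stand-in for the LED loop):

    long d1 = DateTime.Now.Ticks;
    Thread.Sleep(2400);                          // stand-in for the 100000-iteration LED loop
    long elapsedTicks = DateTime.Now.Ticks - d1;

    // 10,000,000 ticks per second, 10,000 ticks per millisecond
    double seconds = elapsedTicks / (double)TimeSpan.TicksPerSecond;
    double millis = elapsedTicks / (double)TimeSpan.TicksPerMillisecond;
    Debug.Print(seconds.ToString() + " s (" + millis.ToString() + " ms)");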

But nevertheless, this code is slower

    public static void Main()
    {
        var led = new OutputPort(Generic.GetPin('B', 2), false);

        Thread t = new Thread(() =>
        {
            while (true)
            {
                long d1 = DateTime.Now.Ticks;
                for (int i = 0; i < 100000; i++)
                {
                    led.Write(true);
                    led.Write(false);
                }
                long d2 = DateTime.Now.Ticks;
                Debug.Print((d2 - d1).ToString());
            }
        });

        //t.Priority = ThreadPriority.Highest;
        t.Start();
        //Thread.Sleep(500000);
    }

than this one:

    public static void Main()
    {
        var led = new OutputPort(Generic.GetPin('B', 2), false);

        while (true)
        {
            long d1 = DateTime.Now.Ticks;
            for (int i = 0; i < 100000; i++)
            {
                led.Write(true);
                led.Write(false);
            }
            long d2 = DateTime.Now.Ticks;
            Debug.Print((d2 - d1).ToString());
        }
    }

I always want to understand things. I would have expected it to make no difference…

This is strange. I don’t understand it either.

Yes, there are two threads, but there should be no thread switching; the main thread sleeps forever…

I only have the Cerbuino board. Maybe somebody can test whether it is hardware related?

Yes, but the main thread is never woken up in this case. There is no context switching, so the other thread should get 100% of the CPU. I’m surprised the difference is so big.
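To make that concrete, here is a minimal sketch of the setup being discussed: the worker runs the timing loop from above, and Main parks itself with Thread.Sleep(Timeout.Infinite) instead of the commented-out Sleep(500000), so it should never be scheduled again:

    public static void Main()
    {
        var led = new OutputPort(Generic.GetPin('B', 2), false);

        Thread t = new Thread(() =>
        {
            while (true)
            {
                long d1 = DateTime.Now.Ticks;
                for (int i = 0; i < 100000; i++)
                {
                    led.Write(true);
                    led.Write(false);
                }
                Debug.Print((DateTime.Now.Ticks - d1).ToString());
            }
        });
        t.Start();

        // Block the main thread forever so it never competes for interpreter time.
        Thread.Sleep(Timeout.Infinite);
    }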

@ Simon from Vilnius - [quote]Yes, there are two threads, but there should be no thread switching; the main thread sleeps forever…[/quote]

Each thread gets a 20 ms time slice (I think; that may not be the exact number). So every 20 ms of interpreter time (not CPU time) a thread is stopped, which probably entails a lot of data shuffling. How and what the scheduler does to manage its thread queue is unknown to me, but the time needed to check the execution state of the idle main thread may actually be significant… just a guess on my part.

One way to check would be to add more threads that just spin toggling the LED, and see how much the timing changes as the thread count grows; see the sketch below.
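A rough sketch of that experiment, assuming a shared helper for the timing loop (the thread count is just a placeholder to vary between runs):

    const int WorkerCount = 3; // placeholder: try 1, 2, 4, ... and compare the printed times

    static void TimedToggleLoop(OutputPort led, string name)
    {
        while (true)
        {
            long d1 = DateTime.Now.Ticks;
            for (int i = 0; i < 100000; i++)
            {
                led.Write(true);
                led.Write(false);
            }
            Debug.Print(name + ": " + (DateTime.Now.Ticks - d1).ToString());
        }
    }

    public static void Main()
    {
        var led = new OutputPort(Generic.GetPin('B', 2), false);

        // Start several identical workers; each one prints its own loop time.
        for (int n = 0; n < WorkerCount; n++)
        {
            string name = "thread " + n.ToString();
            new Thread(() => TimedToggleLoop(led, name)).Start();
        }

        Thread.Sleep(Timeout.Infinite);
    }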

Thread priority makes no difference… I don’t know if it is even implemented in the runtime.

Maybe the difference is just because the VS debugger is attached while the code is running? Then the CLR needs to track the state of two threads in the slower case and the state of only one thread in the faster case.
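One way to test that hypothesis would be to run the deployed code with no debugger attached. Since Debug.Print output is not visible then, the sketch below (purely illustrative, with led declared as in the earlier snippets) signals the measured time on the LED itself by holding it on for as long as the loop took:

    Thread t = new Thread(() =>
    {
        while (true)
        {
            long d1 = DateTime.Now.Ticks;
            for (int i = 0; i < 100000; i++)
            {
                led.Write(true);
                led.Write(false);
            }
            int elapsedMs = (int)((DateTime.Now.Ticks - d1) / TimeSpan.TicksPerMillisecond);

            // No debugger attached, so show the result on the LED instead of Debug.Print:
            // ~2400 ms vs ~2750 ms of on-time is measurable with a stopwatch.
            led.Write(true);
            Thread.Sleep(elapsedMs);
            led.Write(false);
            Thread.Sleep(1000); // 1 s gap before the next measurement
        }
    });
    t.Start();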

I have tested it on a Cobra II. First, it was much slower (yes, it was running at 120 MHz). Second, there was no significant time difference.

Results:
Inside a separate thread:
7686.69 ms
7677.82 ms
7677.84 ms
7678.41 ms
7677.75 ms
7677.97 ms

Inside the main thread:
7683.14 ms
7682.70 ms
7682.63 ms
7682.60 ms
7682.66 ms
7682.42 ms

I would have expected this result…

Maybe the low-level implementation is different on the two platforms? Maybe someone at GHI could explain what happens behind the scenes…

I do not understand it and only Microsoft can explain this, or someone with a lot of free time :slight_smile:

You are right; it is not that important to solve this.

But I’m very interested in understanding the API. And if the API does something unexpected, then it is either a bug or “by design”…
Just wanted to find out what it is…

If I had such deep knowledge of the microcontroller details, I would have had a look…

Unfortunately, I have never coded against anything bigger than 8-bit microcontrollers at the bare-metal level…