How to Load a Bitmap into Fez Touch at Runtime

I recently purchased this FEZ Touch screen:
http://www.ghielectronics.com/catalog/product/262

I understand it doesn't support native .NET Micro Framework images, but instead provides a tool to convert images to an array of pixels. What I'm trying to do is convert a bitmap to the FEZ Touch image format in real time, on the device. For example, I want to show a QR code on the screen, but the image could vary depending on the text I'm encoding. I've been thinking of making a web request to have Google generate the image, for example:

The response of the web request will be a bitmap; at that point I want to convert it to the FEZ Touch image format so I can show the QR code on the LCD screen. When I tried doing the above steps, I got an UnsupportedException because I was trying to use the Bitmap type that comes with the core .NET Micro Framework. It appears you simply can't use any of the .NET graphics libraries.

So at this point I'm wondering if someone can help me identify some options for converting the array of bytes I get back from Google (see the URL above) into a FEZ Touch compatible image at run time.

Thanks, Paul

Can you convert the image on the server and then send the Panda a simple byte array to display?

Implementing a converter on the Panda can be done in C# as well, or faster in RLP, but if the server can do it, why load up the little device?
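
In C# the device-side converter is really just a repacking loop: if the server sends raw 24-bit RGB bytes, each 3-byte pixel gets squeezed into the 2-byte 5-6-5 value the display wants. A rough sketch (assuming the buffer holds R, G, B per pixel, top-left to bottom-right):

// Sketch: repack a raw 24-bit RGB buffer (3 bytes per pixel, R then G then B)
// into the 16-bit 5-6-5 values the display uses. Plain byte math only,
// no NETMF graphics classes needed.
static byte[] RepackRgb24To16(byte[] rgb, int width, int height)
{
    var result = new byte[width * height * 2];
    var outIndex = 0;

    for (var i = 0; i < width * height * 3; i += 3)
    {
        int r = rgb[i];
        int g = rgb[i + 1];
        int b = rgb[i + 2];

        // 5 bits red (low), 6 bits green, 5 bits blue (high).
        var v = (ushort)((r >> 3) | ((g & 0xFC) << 3) | ((b & 0xF8) << 8));

        result[outIndex++] = (byte)(v >> 8);
        result[outIndex++] = (byte)v;
    }

    return result;
}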

@ Gus, I thought about doing a server-based conversion and just returning a byte array as you suggested. I guess I was hoping there would be a way to do it on the device, so I don’t have a server dependency. I’m trying to get this prototype I’m working on to be totally self-contained, meaning no external dependencies (other than Google, etc.).

I’ll probably try doing the server-based thing next (using a .NET HTTP handler, or something like that). I guess I should use the source code from the FEZ Touch image converter project as a starting point - correct?

Also, can you provide a little more detail on how one would do the real-time conversion on the device with C#? And what exactly is RLP?

Thanks, Paul

Yes that is a good place to start.

RLP stands for Runtime Loadable Procedures. See Support - GHI Electronics.

@ Gus, I have implemented a server-side HTTP handler that gets an image from Google, then translates it to an array of bytes. The array of bytes is returned as the payload. Here is an example:
http://macrowebservices.com/ImageConverter/ImageConverter.ashx?url=http://google.com

I used Method 2 from this blog post:

I originally tried the source from the image converter tool you provided, but that was returning all values of 256 or something like that.

Check out the attached image. It looks like the problem is that the SIGNATURE in the Image class isn’t matching the first 4 bytes of the byte array I am returning.

Any ideas what the problem is?

Where are you getting that signature from???

The signature of a PNG file is 8 bytes: 0x89504E470D0A1A0A. You are saving the stream in memory as a PNG file, which is a compressed structure, so its byte count isn’t simply width x height x bytes per pixel…
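
A quick way to see what you are actually returning is to compare the first bytes of the payload against that PNG signature; a small sketch:

// Sanity check: every PNG file starts with these 8 bytes, so if the payload
// begins with 0x89 0x50 0x4E 0x47 0x0D 0x0A 0x1A 0x0A the Panda is being
// handed a raw PNG, not a converted image.
static readonly byte[] PngSignature = { 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A };

static bool LooksLikePng(byte[] payload)
{
    if (payload.Length < PngSignature.Length)
        return false;

    for (var i = 0; i < PngSignature.Length; i++)
        if (payload[i] != PngSignature[i])
            return false;

    return true;
}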

I think Gus is telling you that you need to save it as an array of color values on the server (not just turn the bytes of the image file into an array). The size of this would be 3 * width * height (3 values per pixel for red/green/blue, each 0-255). You can decode your own file format on the Panda, but I don’t think it has enough memory or capability to work with bitmap structures directly.
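
Something along these lines on the server would produce that raw color array (a rough sketch, 3 bytes per pixel, row by row; GetPixel is slow but fine for small images like an 80x80 QR code):

// Server-side sketch: dump a Bitmap as raw R, G, B bytes, row by row,
// 3 bytes per pixel, so the Panda never has to understand PNG at all.
static byte[] ToRawRgb(System.Drawing.Bitmap bmp)
{
    var result = new byte[bmp.Width * bmp.Height * 3];
    var index = 0;

    for (var y = 0; y < bmp.Height; y++)
    {
        for (var x = 0; x < bmp.Width; x++)
        {
            var c = bmp.GetPixel(x, y);
            result[index++] = c.R;
            result[index++] = c.G;
            result[index++] = c.B;
        }
    }

    return result;
}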

You have to follow our image converter code… I don’t see where the image bytes are getting converted in the code you provided.
The ImageConverter first inserts its own “signature” for verification, and then the image bytes are converted into a format that the Panda can easily display.
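
On the Panda side, verifying that header is just a matter of comparing the first 4 bytes and then reading the width and height that follow; a minimal sketch (assuming the little-endian layout and 4-byte signature the converter writes):

// Device-side sketch: check the converter's 4-byte signature and pull out
// the 16-bit width and height that follow it, using plain byte math.
const uint Signature = 0x354A82B8; // must match the constant the converter writes

static bool TryReadHeader(byte[] data, out int width, out int height)
{
    width = 0;
    height = 0;

    if (data.Length < 8)
        return false;

    var sig = (uint)data[0] | ((uint)data[1] << 8) | ((uint)data[2] << 16) | ((uint)data[3] << 24);
    if (sig != Signature)
        return false;

    width = data[4] | (data[5] << 8);
    height = data[6] | (data[7] << 8);
    return true;
}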

Thanks for the help guys. I built a little console application that seems to be working correctly… Next step is to move it into the web server. Here is the console app:


using System;
using System.Drawing;
using System.Net;

namespace ImageConvertTest
{
    internal class Program
    {
        private static void Main()
        {
            // Get the image from a remote web server and convert it to a fez touch format.

            var imageUrl = "https://chart.googleapis.com/chart?chs=80x80&cht=qr&chl=Hello%20world&choe=UTF-8";
            var webRequest = WebRequest.Create(imageUrl);
            var webResponse = webRequest.GetResponse();
            var image = Image.FromStream(webResponse.GetResponseStream());
            var bitmap = new Bitmap(image);
            var fezTouchByteArray = ImageConverter.ConvertImage(bitmap);

            // Print out the pixel values to the console.

            foreach (var pixel in fezTouchByteArray)
                Console.Write(pixel);

            Console.ReadKey();
        }
    }
    public static class ImageConverter
    {
        // 4-byte signature that marks the start of a converted FEZ Touch image.
        private const uint SIGNATURE = 0x354A82B8;
        public static byte[] ConvertImage(Bitmap img)
        {
            if (!BitConverter.IsLittleEndian)
                throw new Exception("Unexpected endianness.");

            // signature is 4 bytes. Width 2 bytes. Height 2 bytes. Each pixel is 2 bytes.
            var result = new byte[4 + 2 + 2 + img.Width*img.Height*2];
            var index = 0;

            Array.Copy(BitConverter.GetBytes(SIGNATURE), 0, result, index, 4);
            index += 4;

            Array.Copy(BitConverter.GetBytes(img.Width), 0, result, index, 2);
            index += 2;

            Array.Copy(BitConverter.GetBytes(img.Height), 0, result, index, 2);
            index += 2;

            for (var y = 0; y < img.Height; y++)
            {
                for (var x = 0; x < img.Width; x++)
                {
                    var v = Color16FromRGB(img.GetPixel(x, y));

                    result[index] = (byte) (v >> 8);
                    result[index + 1] = (byte) (v);
                    index += 2;
                }
            }

            return result;
        }

        // Pack a 24-bit color into the 16-bit 5-6-5 format the display uses:
        // 5 bits red (low bits), 6 bits green, 5 bits blue (high bits).
        private static ushort Color16FromRGB(Color c)
        {
            return (ushort) ((c.R >> 3) | ((c.G & 0xFC) << 3) | ((c.B & 0xF8) << 8));
        }
    }
}
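
Moving it into the web server should mostly be a matter of wrapping ConvertImage in an HTTP handler; here is a rough sketch of what I have in mind (the handler name and the "url" query-string parameter are just placeholders, and it assumes the ImageConverter class above is compiled into the web project under the same namespace):

using System.Drawing;
using System.Net;
using System.Web;

// Rough sketch of the .ashx handler: fetch the image the "url" query-string
// parameter points at, run it through the same ConvertImage method as the
// console app, and return the converted bytes as the response body.
public class ImageConverterHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var imageUrl = context.Request.QueryString["url"];

        var webRequest = WebRequest.Create(imageUrl);
        using (var webResponse = webRequest.GetResponse())
        using (var stream = webResponse.GetResponseStream())
        using (var image = Image.FromStream(stream))
        using (var bitmap = new Bitmap(image))
        {
            var fezTouchBytes = ImageConvertTest.ImageConverter.ConvertImage(bitmap);

            context.Response.ContentType = "application/octet-stream";
            context.Response.BinaryWrite(fezTouchBytes);
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }
}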

Uploaded the code to the web server, but I got an OutOfMemoryException on the FEZ Panda II. Any other ideas/approaches?

I haven’t played with the FEZ Touch, but can you send the image to the screen as a stream instead of storing it in memory first? An 80x80 bitmap converted to 16 bpp is 80 x 80 x 2 = 12,800 bytes (about 12.8 KB) in memory, which is probably too much (hence the OutOfMemory exception).

You may have to read the image in smaller “chunks” to display to the screen.

The display is 320x240 with 2 bytes per pixel, that is 320 x 240 x 2 = 153,600 bytes… Obviously you can’t fit the full image in RAM. What you need to do is get the image from your server directly to the display, not into RAM. You will be getting the image in chunks, of course:

  1. get first chunk (maybe 2KB)
  2. draw on display
  3. get next chunk
  4. draw on display
  5. back to step 3

Thanks for the advice. I have been trying to implement chunking by drawing the image a few lines at a time, but I’m having a hard time figuring this one out. Does anyone have a more specific code sample I can look at for guidance? Basically I need to iterate through the response stream and draw the pixels out on the LCD.

Look at the Pyxis 2 code for downloading files from the web; it will show you how to get files in chunks. You’ll need to know the height and width of the image. Pull down bytes = number of rows to render in a chunk * number of pixels in a row * 2. Call drawimagefast (or whatever the name is) with your x/y values. Increment your y by the number of rows. Lather, rinse, repeat.
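
Putting that together, the loop on the Panda ends up looking roughly like this. It is only a sketch: lcd stands for the FEZ Touch driver instance, DrawImageFast is a stand-in for whatever raw-draw method the driver actually exposes (as noted above), the image height is assumed to be a multiple of the chunk size, and it needs a reference to System.Http plus "using System.Net;".

// Sketch of the chunked download-and-draw loop on the Panda. 'lcd' and
// DrawImageFast are placeholders for the real FEZ Touch driver object and
// its raw-draw method; imageWidth/imageHeight come from the image header.
void DrawImageFromServer(string url, int x, int y, int imageWidth, int imageHeight)
{
    const int rowsPerChunk = 4;
    var bytesPerChunk = rowsPerChunk * imageWidth * 2;   // 2 bytes per pixel
    var buffer = new byte[bytesPerChunk];

    var request = (HttpWebRequest)WebRequest.Create(url);
    var response = request.GetResponse();
    var stream = response.GetResponseStream();
    try
    {
        for (var row = 0; row < imageHeight; row += rowsPerChunk)
        {
            // A single Read may return fewer bytes than requested, so keep
            // reading until this chunk's buffer is full.
            var filled = 0;
            while (filled < bytesPerChunk)
            {
                var read = stream.Read(buffer, filled, bytesPerChunk - filled);
                if (read <= 0)
                    return; // connection closed early
                filled += read;
            }

            // Placeholder driver call: push these rows straight to the LCD.
            lcd.DrawImageFast(x, y + row, imageWidth, rowsPerChunk, buffer);
        }
    }
    finally
    {
        stream.Close();
        response.Close();
    }
}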