I have a requirement to load 1bpp monochrome bitmap images into memory from a network data stream, resize them, and then parse the pixel data. I found from experimentation (and the docs) that 1bpp bitmaps cannot be loaded without causing unhandled exceptions in Microsoft.SPOT.Graphics.dll; only 16bpp or 24bpp images work.
At odds with this is the ability to use GHI.Utilities to convert Bitmap objects to 1bpp.
Is there a way to do this that does not require converting the source images to 16 or 24bpp before loading? That would add a lot of data transmission overhead.
Or is there an alternative library that can resize a 1bpp image stream without using the SPOT.Graphics library?
@ Superpanda - I think you are going to have to process the data stream yourself: build a boolean array from the image bits, then resize it to the required dimensions for analysis. You can google the format of a BMP file.
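To make that concrete, here is a language-neutral sketch in Python of the idea (not NetMF code; function names are hypothetical). It parses the fixed-offset fields of a BMP header, unpacks the 1bpp rows (MSB first, rows padded to 4-byte boundaries, stored bottom-up when the height is positive) into a boolean array, then does a nearest-neighbour resize:

```python
import struct

def load_1bpp_bmp(data):
    """Parse a 1bpp BMP byte stream into a 2D list of booleans (True = palette index 1)."""
    assert data[:2] == b"BM", "not a BMP stream"
    pix_off = struct.unpack_from("<I", data, 10)[0]        # offset of the pixel array
    width, height = struct.unpack_from("<ii", data, 18)
    bpp = struct.unpack_from("<H", data, 28)[0]
    assert bpp == 1, "expected a 1bpp image"
    stride = ((width + 31) // 32) * 4                      # rows padded to 4-byte boundaries
    rows = []
    for y in range(abs(height)):
        row_start = pix_off + y * stride
        rows.append([(data[row_start + (x >> 3)] >> (7 - (x & 7))) & 1 == 1
                     for x in range(width)])
    if height > 0:                                         # positive height = bottom-up storage
        rows.reverse()
    return rows

def resize_nearest(bits, new_w, new_h):
    """Nearest-neighbour resize of a 2D boolean array."""
    old_h, old_w = len(bits), len(bits[0])
    return [[bits[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
            for y in range(new_h)]
```

The same index arithmetic ports directly to C# on NetMF; only the byte-stream access changes.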
When calling Bitmaps.Convert, how is the bitmap data organized in the byte array? I was expecting a raw format of row after row of data, but it's not that. It appears each byte represents 1 column and 8 rows? Are there any more in-depth docs on the Bitmap object besides MSDN?
I haven't been playing with NetMF for a while (I might have need of it again soon, though), but the latest GHI Premium libraries support raw access to the bitmap bytes without needing to know the header formats, etc. I used this in the smoothline RLP that I put on codeshare. It is a bit like using LockBits in normal .NET.
I think there is a bug in the GHI.Utilities.Bitmaps.Convert function. I started testing images whose stride is not equal to their width, and with these images I see a drift of one pixel per scan line in the resulting array.
Bitmap img = new Bitmap(Resources.GetBytes(Resources.BinaryResources.DRIFT), Bitmap.BitmapImageType.Gif);
byte[] imgBytes = new byte[Bitmaps.RequiredBufferSize(img, Bitmaps.BitsPerPixel.BPP8_RGB)];
GHI.Utilities.Bitmaps.Convert(img, Bitmaps.BitsPerPixel.BPP8_RGB, imgBytes);
Can anyone verify this as an issue, or is this the expected behavior? I call it a bug only because it ultimately results in data loss. I have attached an image that is 49 pixels wide. I was expecting the returned 8bpp byte array to have a black pixel in every 49th element. The first scan line is correct, but starting with the second scan line an extra black pixel is added, causing the bitmap data to drift.
Also attached is the same image at 48 pixels wide, and with it I do not see any drift.
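For what it's worth, I can't verify GHI's internals, but a drift of exactly one pixel per scan line is the classic signature of a stride/width mismatch: the buffer's rows are one byte longer than the reader assumes, so each row starts one element early. A minimal Python model of the symptom (hypothetical names, not GHI code):

```python
def unpack_8bpp(buf, width, height, stride):
    """Read an 8bpp buffer row by row using an explicit stride."""
    return [[buf[y * stride + x] for x in range(width)]
            for y in range(height)]

# Simulate a 49-pixel-wide image whose real stride is 50 bytes,
# with a black (0xFF) pixel at column 0 of every row.
width, height, stride = 49, 3, 50
buf = bytearray(stride * height)
for y in range(height):
    buf[y * stride] = 0xFF

good = unpack_8bpp(buf, width, height, stride)  # honours the real stride
bad  = unpack_8bpp(buf, width, height, width)   # assumes stride == width

# good: the pixel stays in column 0 on every row;
# bad:  the pixel drifts one column to the right per row.
```

That 48-pixel-wide images show no drift fits the same theory, since 48 is likely already aligned and needs no padding byte.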