Does anyone know what the limits are on numbers of files/folders on an SD card?
I am storing a large number of small files on an SD card, and many of the files and folders seem to disappear. The card works fine in Windows, but not when plugged into my G120.
I have tested this with NETMF 4.2 and 4.3, and with 32GB and 64GB cards, with no luck.
[quote]// All these lengths include the NULL termination at the end of the string
#define FS_MAX_PATH_LENGTH (260 - 2) // To maintain compatibility with the desktop, the max "relative" path we can allow. 2 is the MAX_DRIVE, i.e. "C:", "D:" ... etc
#define FS_MAX_FILENAME_LENGTH 256
#define FS_MAX_DIRECTORY_LENGTH (FS_MAX_PATH_LENGTH - 12) // As required by desktop, the longest directory path is MAX_PATH - 12 (size of an 8.3 file name)[/quote]
That would indicate that the maximum directory path length is (260 - 2) - 12 = 246 characters.
My longest path is only 40 characters, including the file name, extension, and the null terminator.
I will try and dig into the open source code to see what more I can find.
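For reference, the limits above work out as follows. This is just a quick sketch mirroring the quoted #defines (names copied from that header, sketched in Python for convenience):

```python
# Mirror of the NETMF path-length macros quoted above.
# All lengths include the NULL terminator.
FS_MAX_PATH_LENGTH = 260 - 2                         # 258; 2 reserved for the drive prefix, e.g. "C:"
FS_MAX_FILENAME_LENGTH = 256
FS_MAX_DIRECTORY_LENGTH = FS_MAX_PATH_LENGTH - 12    # 12 = length of an 8.3 file name

print(FS_MAX_DIRECTORY_LENGTH)  # 246
```

So a 40-character path is nowhere near any of these limits.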
There is no limit on file and folder count. SD cards are picky when it comes to power, so maybe check your power source. Also try a different card and lower the SD clock.
I’ve tried SD clock speeds from 7MHz to 30MHz.
I’ll check the power on Monday, but it should be OK as it’s hooked up with some pretty fat traces and has a 10uF and a 0.1uF cap so close that I can barely get a probe in.
I have well over a million files on the card and am only struggling to read the larger directories. The smaller directories work fine.
Could it be a bug in the NETMF core?
No limits? There are always limits.
You are right, let me amend the answer: there are no limits besides those of the file system itself, which is FAT.
@ Mike - thanks. I have read conflicting information on these limits. The conclusion I am coming to is that some FAT32 implementations support more than others. Windows 7 certainly doesn’t seem to have any issues, but it would not surprise me if NETMF had an implementation closer to the original FAT32 spec. I guess the solution will be to organise my files differently: either by breaking up the overpopulated folders into more, less populated ones, and/or by combining files into larger units.
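One common way to break up an overpopulated folder is to bucket files into subfolders by a hash of the file name, so each folder stays well under whatever per-directory limit the implementation has. A rough sketch of the idea (in Python for brevity; the paths and bucket count are hypothetical, not from my actual code):

```python
import hashlib
import os

def bucketed_path(root, filename, buckets=256):
    """Spread files across `buckets` subfolders by hashing the file name.

    The hash makes the mapping deterministic, so the same name always
    lands in the same subfolder without any lookup table.
    """
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    bucket = int(digest[:4], 16) % buckets
    return os.path.join(root, "%02x" % bucket, filename)

# e.g. "/sd/data/sample_0001.bin" becomes "/sd/data/<2-hex-digit bucket>/sample_0001.bin"
print(bucketed_path("/sd/data", "sample_0001.bin"))
```

With 256 buckets, a million files averages out to roughly 4,000 entries per folder, which should be a much safer load for any FAT implementation.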
… or reassessing why a million files makes “sense”
Believe me, if I didn’t need them I wouldn’t be doing it.
The simple storage structure I’m currently using is very fast and easy to manage. More complex structures will be slower and require a lot more coding. Plus it takes an age to convert the data into a different format. It does look increasingly unavoidable, however.
I’ll need to do more tests when I’m back in the office, but I think it’s falling over at around one or two thousand folders within a folder, nowhere near the 65535 limit. I’m using a long filename for my root folder, and I am wondering if this is exacerbating the problem.
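Long filenames do eat into the per-directory limit faster than 8.3 names. As I understand the FAT spec (worth double-checking), each directory entry is 32 bytes, a FAT32 directory tops out at 65,536 entries, and a VFAT long name consumes one 8.3 entry plus one extra entry per 13 characters of the name. A rough sketch of that arithmetic:

```python
import math

DIR_ENTRY_LIMIT = 65536          # max 32-byte entries in one FAT32 directory

def entries_per_name(name):
    """Directory entries consumed by one item with a VFAT long name.

    Crude heuristic: names of 12 chars or fewer are treated as plain 8.3
    names (1 entry); longer names cost 1 short entry plus one LFN entry
    per 13 characters.
    """
    if len(name) <= 12:
        return 1
    return 1 + math.ceil(len(name) / 13)

# With 40-character names, each item costs 5 entries,
# so a directory can hold at most 65536 // 5 = 13107 of them.
print(DIR_ENTRY_LIMIT // entries_per_name("a" * 40))  # 13107
```

That still doesn’t explain failures at one or two thousand entries, but it does mean long names shrink the effective limit by several times.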