I’m currently working on a project that collects data from a serial port and sends it to a server on the internet. That all works fine, but sometimes the internet connection is unavailable for a while (anywhere from a minute up to several days).
When the connection is down, I need to store the serial data and send it as soon as the connection is restored. I take a GPS position every minute, so the amount of data collected can add up quickly.
I tested this using SQLite on EMX, but quickly ran into issues with the database file, mainly when power to the module is interrupted. After a reboot, I often get messages like “invalid file format”.
I create JSON objects from the serial data I receive and could easily store them in text files, but I can’t figure out a simple way to remove them from the file after I’ve transferred them to the webserver.
I’m also struggling with the different tasks my program has to perform. Right now, I have one thread collecting the data and another thread sending it to the server. I wonder what the best way is to approach all this when implementing the third task of storing the data while the connection is unavailable.
I ran into performance problems when writing log files onto SD.
Also, if power goes off while writing, the file or even the whole directory can get corrupted.
To avoid that you could add a ‘UPS’ or battery backup and flush the file system on power down.
To remove your JSON objects from the file, you could overwrite them with zeros, or remember the last ‘already sent’ position in a separate file.
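The ‘remember the position’ approach can be sketched like this. This is only a Python illustration of the logic (the device itself runs .NET Micro Framework); the file names and helper functions are made up, and records are assumed to be one JSON object per line:

```python
import os

LOG = "gps_log.jsonl"   # hypothetical log file, one JSON object per line
POS = "gps_log.pos"     # hypothetical sidecar file holding the sent offset

def read_sent_offset():
    """Return the byte offset of the first unsent record (0 if none sent)."""
    try:
        with open(POS) as f:
            return int(f.read().strip() or 0)
    except (FileNotFoundError, ValueError):
        return 0

def unsent_lines():
    """Yield (end_offset, line) for every record not yet marked as sent."""
    with open(LOG, "rb") as f:
        f.seek(read_sent_offset())
        while True:
            line = f.readline()
            if not line:
                break
            yield f.tell(), line

def mark_sent(offset):
    """Persist the new 'already sent' offset; fsync so it survives power loss."""
    with open(POS, "w") as f:
        f.write(str(offset))
        f.flush()
        os.fsync(f.fileno())
```

After each successful upload you call `mark_sent` with the offset of the record you just sent; nothing in the log file itself ever needs rewriting, and a torn write can at worst lose the tiny sidecar file, not the data.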
I don’t think it’s possible to shorten a file from the beginning.
Using smaller files that are each sent (and then deleted) in one go would also work.
There’s no reason you can’t have a log file that holds your data and upload it when the connection is back up. Something like:
Lock file object so new data doesn’t get added
Copy current logged data file to temporary file
open connection to server
loop through each data point
push data to the server - if it fails, write the data to a retry queue file
remove temporary file
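The steps above can be sketched as follows. This is a Python illustration of the logic only; `send_to_server` and the file names are hypothetical, and a real implementation would also hold a lock while snapshotting so the collector thread can’t write mid-copy:

```python
import os
import shutil

def upload_log(log_path, retry_path, send_to_server):
    """Copy the log aside, push each record, queue failures for retry."""
    tmp_path = log_path + ".uploading"
    # Steps 1-2: snapshot the log so new data can keep appending to it.
    # (A real implementation locks the file around these two calls.)
    shutil.copyfile(log_path, tmp_path)
    os.truncate(log_path, 0)          # records now live only in the snapshot
    # Steps 3-5: push each record; failed ones go to the retry queue file.
    with open(tmp_path) as src, open(retry_path, "a") as retry:
        for record in src:
            try:
                send_to_server(record)
            except IOError:
                retry.write(record)
        retry.flush()
        os.fsync(retry.fileno())      # make the retry queue durable first
    # Step 6: only now is it safe to drop the snapshot.
    os.remove(tmp_path)
```

The ordering matters: the snapshot is only deleted after every record has either been sent or durably written to the retry file, so a crash anywhere in between leaves nothing silently lost.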
Then you know each iteration has either pushed the data up or left something you can retry later. Making sure nothing writes to your file at the start is important, and the only scenario you really have to worry about is a failure (power loss etc.) in the middle of that process. If you had sequential IDs on the records, for example, you could simply track how far through the file you had got.
This whole scheme doesn’t protect you from power disruption, though, which is what I expect the “corruption” you saw actually is. I’d invest in some power-detection circuitry and a “battery” backup. I’ve been thinking about using a supercap as the “power supply”, detecting when source power is lost before the cap drops too low, and going into a “flush and shutdown” mode. The probable cause of the corruption is half-written file entries that stop you parsing the file correctly next time round, and that will be no different with plain text files. I always use a protection block like the following when I log data to SD:
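The protection block itself didn’t survive in this thread; on the actual .NET Micro Framework device it would be a `using` block around a `FileStream` with a `Flush()` before dispose. As a rough Python sketch of the same idea (open, write, flush, and close on every log call, so the handle is always released and only the single in-flight write is at risk):

```python
import os

def append_record(path, record):
    """Append one record per call: open, write, flush, close every time,
    so a power cut can only ever tear the write that is in flight."""
    with open(path, "a") as f:     # 'with' guarantees the handle is closed
        f.write(record + "\n")
        f.flush()                  # push the runtime's buffer to the OS
        os.fsync(f.fileno())       # ask the OS to push it to the card
```

Opening and closing on every write costs some throughput, but it means there is never a long-lived dirty buffer sitting between your data and the card.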
And just to be clear, that block of code only minimises the risk of data loss/corruption during logging. If the card isn’t properly unmounted you can still lose data (I use a button press to safely unmount it, and then you can remove it with little chance of issue).
@Brett, I see what you mean, but I won’t be taking the SD card out anyway, so I have no need to unmount it.
I’m just using the SD card to store the data, because I would run out of RAM very quickly if the internet were down for a couple of days and all the GPS positions had to be kept in memory.
Not to mention the data that would be lost if power failed.
How about unmounting the SD after using it? Would mounting and unmounting it all the time take up extra resources? I’m asking because unmounting seems to force the data to be flushed to the card, right?
Unmounting can require you to physically reinsert the card, so don’t do that. For me, unmounting was for the scenario where I needed to take the card out and read it elsewhere, which doesn’t really apply in your case. Re-mounting is also an expensive (time-wise) process, so not something you want with high data rates.
The construct I showed earlier is sufficient to protect against a “normal” power-loss situation, as long as it doesn’t happen while the write is going on (a very small chance really), plus it’s reliable in that the handle gets disposed properly and doesn’t leak (thanks to the forum for sharing that). That’s all I’d do to protect against that scenario.
I have a Panda2-based product that continuously logs NMEA-0183 data to a microSD card (GPS, depth, etc.) at rates of up to 20 Hz. Power can be turned off at any time, and I do not have supercaps or any kind of power backup implemented. This product has been shipping to customers worldwide since 2011, and not one customer has yet complained about file corruption. I perform a FileStream.Flush() after every write, and I create a new log file after every power cycle, appended with an increasing index number in the filename.
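The new-file-per-power-cycle naming can be sketched like this. It’s a Python illustration only; the prefix, extension, and zero-padded index format are assumptions, not the product’s actual scheme:

```python
import os
import re

def next_log_name(directory, prefix="log", ext=".txt"):
    """Pick the next unused index so each boot writes to a fresh file."""
    pattern = re.compile(re.escape(prefix) + r"(\d+)" + re.escape(ext))
    used = [int(m.group(1)) for name in os.listdir(directory)
            if (m := pattern.fullmatch(name))]
    next_index = max(used, default=-1) + 1
    return os.path.join(directory, "%s%03d%s" % (prefix, next_index, ext))
```

Because every power cycle starts a fresh file, a torn write at power-off can only ever damage the tail of one file; every earlier file is already complete and closed.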