Thanks for sharing your code along with this detailed description of the issue.
I see that the GPS class preserves a different file for each day. Can you confirm whether the data loss is only about the current logfile or whether it also affects files from previous days?
As a first step, as you mentioned, you could try adding an f.flush() followed by an os.fsync() call, as the Python docs suggest.
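To illustrate, a minimal sketch of that combination (function and parameter names are placeholders, not from your code):

```python
import json
import os

def dump_with_sync(path, records):
    """Write records as JSON and force them onto the storage device.

    `path` and `records` are illustrative names, not from your GPS class.
    """
    with open(path, "w") as f:
        json.dump(records, f)
        f.flush()              # push Python's internal buffer to the OS
        os.fsync(f.fileno())   # ask the OS to commit the data to the device
```

f.flush() alone only hands the data to the operating system; os.fsync() is what asks the kernel to actually write it to the device, which is the part that matters on a sudden power loss.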
That said, since your GPS.dump() opens the file with a with statement, which closes it as soon as the single write finishes, I would expect all the buffers to have been flushed already by the time the block exits.
Do you have any metrics on how long GPS.dump() takes to complete on a production device?
As far as I can tell, your loop rewrites the file from scratch on every iteration, is that correct?
That not only may shorten the lifespan of your storage media, it also increases the chance that the file ends up corrupted if there is a sudden power loss mid-write.
Ideally you should append only the new records to the file instead of rewriting its whole content.
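Since a plain JSON document cannot be appended to directly, one common pattern is JSON Lines: one JSON object per line, opened in append mode. A sketch, with illustrative names:

```python
import json

def append_record(path, record):
    """Append one record as a single JSON line (JSON Lines format).

    Only the new record is written, so the existing content is never
    rewritten; `path` and `record` are placeholder names.
    """
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

A power loss can then at worst truncate the last line, which a reader can detect and skip, instead of corrupting the whole file.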
An easier approach, which would also confirm our suspicions, is to always write the new data into a secondary file and then swap it with the primary one. For example, you could dump your data into a file named
gps_log_%Y%m%d_next.json, and once that is flushed, rename it so that the
_next file becomes the proper log for the day.
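A sketch of that write-then-swap pattern using os.replace(), which performs the rename atomically on POSIX systems (names here are placeholders, not your API):

```python
import json
import os

def dump_atomically(path, records):
    """Write the full dump to a temporary '_next' file, sync it,
    then atomically swap it in as the primary log file."""
    tmp_path = path + ".next"
    with open(tmp_path, "w") as f:
        json.dump(records, f)
        f.flush()
        os.fsync(f.fileno())
    # Atomic on POSIX: any reader sees either the old file or the
    # new one in full, never a partially written file.
    os.replace(tmp_path, path)
```

With this scheme a crash during the write leaves the previous day's-worth of data untouched, since the primary file is only ever replaced after the new content has been synced.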
Hope that helps.