This is indeed a puzzle. Let me “write through” this problem to see if there’s a gap somewhere that could cause this.
A few things:
The sync engine doesn’t care about “old” vs “new”. It simply compares lastModified timestamps, and if they’re different it assumes the file changed (locally or remotely). The reason is that I assume clocks drift, timezones differ, etc., so those timestamps are not a reliable way to order changes.
A conflicting copy is created when:
- Both the lastModified timestamp of the file locally and remotely have changed since the last sync cycle
- AND the content is actually different (which will certainly be the case if you updated frontmatter).
To check what has changed between sync cycles, a snapshot is kept (as a big map, stored in IndexedDB) recording, for each file, the local and remote lastModified timestamps (tracked separately, just in case the remote doesn’t reliably update last modified dates) as they were when the last sync cycle completed.
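As a rough sketch (with made-up names, not the actual implementation), the per-file change detection against the snapshot looks something like this:

```typescript
// Illustrative sketch only: SnapshotEntry, Change, and detectChange are
// hypothetical names, not the real API of the sync engine.

type SnapshotEntry = { localMtime: number; remoteMtime: number };

type Change = "none" | "local" | "remote" | "both";

function detectChange(
  snapshot: SnapshotEntry | undefined,
  localMtime: number,
  remoteMtime: number,
): Change {
  // A file that isn't in the snapshot yet is treated as changed on both sides.
  if (!snapshot) return "both";
  const localChanged = localMtime !== snapshot.localMtime;
  const remoteChanged = remoteMtime !== snapshot.remoteMtime;
  if (localChanged && remoteChanged) return "both"; // conflict candidate
  if (localChanged) return "local"; // upload
  if (remoteChanged) return "remote"; // download
  return "none";
}
```

Only the "both" case goes on to the content comparison; a conflicting copy is created only when the contents actually differ.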
As of the 2.0 release, the sync engine runs in the service worker and uses mutexes to ensure only one sync cycle happens at a time.
What is a bit unspecified and OS-specific is how long a service worker is kept alive when you close tabs, switch apps, etc. From what I have found experimentally, on iOS service workers are kept running in the background for up to a minute or so, even when you switch apps. However, we’re always at the mercy of the browser vendor: they can shoot this process down at any given time, both on desktop and on mobile. I just noticed you mentioned you’re using Chrome on iOS, which makes things even more complicated, because that isn’t actual Chrome: it’s a skinned Safari with probably quasi-random Apple-imposed restrictions, so behavior may differ slightly again.
Now what makes your case interesting is that you likely had a batch of files to sync, which could take a bit of time.
The flow is this:
1. Fetch a list of all local files (with timestamps)
2. Fetch a list of all remote files (with timestamps)
3. Fetch the snapshot from storage
4. Compare local and remote timestamps with what is in the snapshot
5. For each “mismatch”, perform the sensible operation (upload, download, conflict resolution), updating the snapshot as we go
6. Save the snapshot back to storage
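The cycle above could be sketched roughly like this, with in-memory maps standing in for the real file stores and IndexedDB. All names are illustrative, and deletions and error handling are left out:

```typescript
// Hypothetical sketch of the sync cycle, not the actual implementation.

type FileEntry = { mtime: number; content: string };
type SnapEntry = { localMtime: number; remoteMtime: number };

function syncCycle(
  local: Map<string, FileEntry>,
  remote: Map<string, FileEntry>,
  storage: { snapshot: Map<string, SnapEntry> },
): void {
  // Steps 1-3: list both sides and load the snapshot.
  const snapshot = new Map(storage.snapshot);
  const paths = new Set([...local.keys(), ...remote.keys()]);

  // Steps 4-5: compare against the snapshot and act on each mismatch.
  for (const path of paths) {
    const l = local.get(path);
    const r = remote.get(path);
    const s = snapshot.get(path);
    const localChanged = !s || !l || l.mtime !== s.localMtime;
    const remoteChanged = !s || !r || r.mtime !== s.remoteMtime;

    if (l && localChanged && (!r || !remoteChanged)) {
      remote.set(path, { ...l }); // upload
    } else if (r && remoteChanged && (!l || !localChanged)) {
      local.set(path, { ...r }); // download
    } else if (l && r && localChanged && remoteChanged && l.content !== r.content) {
      local.set(`${path}.conflicted`, { ...r }); // conflicting copy
    }
    const l2 = local.get(path);
    const r2 = remote.get(path);
    if (l2 && r2) {
      snapshot.set(path, { localMtime: l2.mtime, remoteMtime: r2.mtime });
    }
  }

  // Step 6: persist the snapshot. If the worker is killed before this line,
  // everything synced above is invisible to the next cycle.
  storage.snapshot = snapshot;
}
```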
What could hypothetically happen is that the sync process is killed during step 5: a bunch of files have synced, but since we never make it to step 6, this progress is never persisted in the snapshot.
In this scenario, when the service worker is booted again and a new sync cycle kicks in, it would go through steps 1 to 3. Then, when it compares the local (presumably already updated) files with the snapshot, it will conclude: oh, there were changes here, the timestamps don’t match the snapshot! Then it looks at the remote list, and there too, changes were made.
It would indeed go into conflict resolution mode in this case. However, since the content it finds locally and remotely would be the same, it would still not result in conflicting copies…
So, even in this scenario this shouldn’t result in many conflicting copies.
Still, relying on comparing actual content is a bit risky, especially if the remote has already changed again since the last sync. So if you have a lot of churn on a bunch of notes and this mid-sync kill happens often, you can occasionally run into problems.
What I can do to make this less likely is to save the snapshot more aggressively, for instance after each individual file sync rather than after a full cycle. This comes at some cost, but maybe not a lot.
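A minimal sketch of that idea, assuming a persist callback that wraps the IndexedDB write (all names here are hypothetical):

```typescript
// Hypothetical sketch: persist the snapshot after every file instead of once
// per cycle, so a killed worker loses at most the file currently in flight.

type SnapEntry = { localMtime: number; remoteMtime: number };

async function syncWithIncrementalSnapshot(
  paths: string[],
  syncOne: (path: string) => Promise<SnapEntry>, // does the actual up/download
  snapshot: Map<string, SnapEntry>,
  persist: (snapshot: Map<string, SnapEntry>) => Promise<void>, // e.g. IndexedDB put
): Promise<void> {
  for (const path of paths) {
    const entry = await syncOne(path);
    snapshot.set(path, entry);
    // One extra write per file; the next cycle only re-examines whatever
    // was in flight when the worker got killed.
    await persist(snapshot);
  }
}
```

Writing only the changed entry under its own IndexedDB key, rather than rewriting the whole map each time, would reduce that per-file cost further, at the price of some extra bookkeeping.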
