This is still happening to me. I’m currently only using Obsidian on a single Mac. As @WhiteNoise suggested in the last post before that thread was locked, I reduced my recovery snapshot interval to 2 min. That helps in the sense that there may be more recovery points, but it does not address the underlying problem.
I don’t have 100% repro steps for this nor do I think I will be able to produce them. I’m more than willing to put in some time and do any sort of testing but I can’t sit here and bang the keys like a monkey until this happens. All I know is it is happening and it’s very unnerving for a “second brain” app to be vaporizing information like this.
In the screenshot below, from a few minutes ago, you can see the history of snapshots as I was working in this note, every 2-3 minutes. Then, all of a sudden, poof: there’s a 0-byte version saved.
I use very few plugins and they are all widely-used ones like Templater and Advanced Tables. Also, this data loss is not happening while I’m editing the note, it’s happening during either quit or launch of the app - because in this case I was only working on this particular note over the course of a few hours. I had recently quit Obsidian and was just launching it to resume work when I saw the note was blank.
Where is the logic in whatever routine sees a 2.14 KB file suddenly become 0 bytes and decides “yeah, that must be right, let’s keep the 0-byte version,” without any notice, log, or user confirmation? It makes no sense to me. Even if this somehow IS due to a filesystem bug in macOS, there needs to be some warning or log produced; otherwise this is simply impossible to track down.
Honestly I am stuck. I’m a huge fan of Obsidian but I am at a loss of how to proceed. What do you suggest I do?
Hmm. @WhiteNoise your post made me think. I do have a script that backs up my vault every so often. It’s a simple bash script that just tar.gz’s the whole vault to a timestamped archive elsewhere on the disk. Should be totally non-destructive.
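For context, the script boils down to something like this (a rough sketch; the paths here are placeholders, not my actual setup):

```shell
# backup_vault <vault_dir> <dest_dir>: tar.gz the vault into a
# timestamped archive under dest_dir.
backup_vault() {
  local vault="$1" dest="$2"
  local stamp
  stamp="$(date +%Y%m%d-%H%M%S)"
  mkdir -p "$dest"
  # -C keeps absolute paths out of the archive; tar only reads the
  # files and takes no locks, so it should be non-destructive.
  tar -czf "$dest/vault-$stamp.tar.gz" -C "$(dirname "$vault")" "$(basename "$vault")"
}

# e.g.: backup_vault "$HOME/Documents/Vault" "$HOME/Backups"
```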
But the last time it ran was at 1:51pm today, which I note is right before that file got nuked.
Maybe there’s some sort of file locking while tar is archiving everything that causes a race condition with Obsidian saving the file, or something…
As you can see from the timestamps in the logs below, my backup script fired at 1:51:31pm, but inside the archive itself, the nuked file has a timestamp of 1:48pm, so the zeroing out happened minutes before the backup took place.
Sorry this keeps happening; I know it’s frustrating.
@WhiteNoise’s suggestion to binary-search your plugins is a good one.
I know you’ve eliminated your backup script to your satisfaction, but it might be worth running without it as well.
Running lsof | grep Path/To/vault and logging the results periodically with a little bash script might be worth trying too. I know you think there are no other processes using your vault files, but it would be good to verify.
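A minimal sketch of that logger (the vault and log paths are placeholders):

```shell
# snapshot_open_files <vault_dir> <log_file>: append a timestamped
# lsof listing of processes holding files under the vault open.
snapshot_open_files() {
  local vault="$1" log="$2"
  {
    date
    # +D recurses into the directory. lsof exits nonzero when no
    # process has a matching file open, which is not an error here.
    lsof +D "$vault" 2>/dev/null || true
  } >> "$log"
}

# Run it in a loop, e.g.:
# while true; do snapshot_open_files "$HOME/Vault" "$HOME/vault-lsof.log"; sleep 60; done
```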
Maybe you are not sharing your vault via Dropbox or another sync service, but what you are describing is not an error case in general. I share my vault among multiple devices, and if I zeroed out a file on my laptop, I’d expect that to be reflected on my iPad too without a nag screen.
It’s overwhelmingly likely this is something specific to your setup.
It’s likely small comfort, but I’d be shocked if this were an issue with Obsidian; this is kind of an outlier report, even rarer than the panics you get. If all else fails, you could try running dtruss against Obsidian to verify filesystem activity and satisfy yourself that it isn’t overwriting files.
Thanks @pmbauer for trying to help. Really appreciate it. The dude abides!
I agree that in general, a file that’s being synced among a group of machines going to zero bytes isn’t necessarily an error condition, provided there was intent behind it. I will continue the hunt. I am storing this vault on iCloud Drive, but I believe that’s a fairly common setup. Good suggestion on lsof; I will see if that turns up anything.
I also see that 0.12.3 just got released with a bump to Electron 12. So, will re-test all around on the off chance this was some weird Electron bug.
@pmbauer Thanks. True, I’d considered that as well. But, I don’t use that “optimize” feature:
Not to say there might not be a bug on Apple’s end (I’ve had my fair share of iCloud problems over the years). When I check right now, all of my files say “Downloaded” though.
I also have a bash script that periodically checks the iCloud database (directly via SQLite) for “stuck” items (you can see these with brctl status also) and reports those errors to me. So I try to stay on top of iCloud shenanigans.
Bigger picture I’m thinking as my vault gets larger and more important, that my best move will be switching to Obsidian Sync and/or a git repo combined with a periodic 1-way sync up to iCloud to be able to view on mobile.
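The one-way sync leg could be as simple as an rsync mirror; a rough sketch (paths are placeholders, not my actual setup):

```shell
# one_way_push <src_vault> <dest_dir>: mirror the vault into a
# destination folder (e.g. one inside iCloud Drive) for read-only
# viewing on mobile. --delete makes the copy an exact mirror, so
# deletions propagate one way; nothing ever flows back to the source.
one_way_push() {
  rsync -a --delete "$1"/ "$2"/
}

# e.g. from cron/launchd:
# one_way_push "$HOME/Vault" "$HOME/Library/Mobile Documents/com~apple~CloudDocs/VaultMirror"
```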
Your experience is a little frightening to me, because I use a similar backup script: it copies the entire vault to a different location and then archives it.
@WhiteNoise The file becomes empty; it is not being deleted. I confirmed this by looking at the file-creation (inode birth) timestamp with stat, see below. Also, I am sure that when this has happened, I’ve just been navigating via the file pane on the left or through search, not clicking into a wikilink.
$ stat -Lf 'modified: %Sm%n created: %SB%n' "Intune.md"
modified: May 11 21:39:46 2021
created: May 10 12:46:16 2021 <===
Thanks. I’m keeping a close eye on this. I recently updated to 11.4 and have not had the data loss in a little while. Still storing my main vault on iCloud. I have my recovery snapshot interval set at 2 min. I’m also doing hourly backups to a separate gzipped folder, and I have a bash daemon that checks for 0-byte files every 10 minutes and pops up an alert if any are found.
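The zero-byte check is basically just a find over the vault. Roughly this (paths are placeholders, and the osascript notification is macOS-only):

```shell
# find_empty_notes <vault_dir>: list any zero-byte markdown files.
find_empty_notes() {
  find "$1" -type f -name '*.md' -size 0
}

# check_vault <vault_dir>: pop a notification if any are found.
# osascript only exists on macOS; swap in any notifier you like.
check_vault() {
  local empty
  empty="$(find_empty_notes "$1")"
  if [ -n "$empty" ]; then
    osascript -e "display notification \"$empty\" with title \"Zero-byte note found\""
  fi
}

# Run every 10 minutes via launchd/cron, or simply:
# while true; do check_vault "$HOME/Vault"; sleep 600; done
```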
Good to know that so far it seems to be isolated to iCloud. Anyone else who is experiencing this please post your setup.
But, there are some nice things about having the vault on iCloud, especially if you already pay for additional storage there as I do. I’m interested in getting to the bottom of this bug.
My plan now is to use Hammerspoon’s hs.pathwatcher to set up a realtime alert when an outside process modifies a file in the vault. If that doesn’t work, I was also looking at forking Sinter and modifying it to track filesystem changes to the vault in realtime using Apple’s Endpoint Security framework.