If I create a lot of small notes, will Obsidian keep working?

I am thinking of using Zettelkasten notes to do focused work: one note for each thing I am working on. But I think that could cause problems if there are a lot of these small notes. Have the performance (execution speed) and memory requirements of Obsidian been tested with a large number of notes in a vault? I won’t have all of them open at once, but I was wondering whether caching can cause problems when there are a lot of these notes.

4 Likes

I believe Obsidian has been stress-tested with a 10,000-note vault and works okay, minus some lag when the file explorer is open. As long as you keep the file explorer closed, it runs fine.

This issue is known, and I have no doubt it will be solved eventually.

1 Like

It works just fine, especially if you keep the vault folder free of anything other than Zettel notes.

I was also worried about this, so I created a 10,000-note vault by duplicating my existing notes. The interface remained responsive, and search was not instant but acceptable. Then again, it was not a proper test, because most of the files were identical.

I stress-tested on a computer with a mechanical hard drive and a computer with a solid-state drive.

Test file: IMF v3 - Advanced Starter Kit

I duplicated the note files in this vault to test Obsidian’s ability to manage a large number of files. The largest file in the vault is 9.16 KB and the smallest is 309 bytes.
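
For anyone who wants to reproduce this, here is a minimal sketch of the duplication step, assuming Node.js with its built-in fs and path modules; `vaultDir` and `copies` are placeholder names for this illustration. Run it against a throwaway copy of a vault, not your real one.

```ts
// A minimal sketch of padding out a vault for a stress test by
// duplicating its existing notes. "vaultDir" and "copies" are
// placeholders; point vaultDir at a throwaway test vault.
import * as fs from "fs";
import * as path from "path";

const vaultDir = "/path/to/test-vault"; // hypothetical test vault
const copies = 10;                      // duplicates to make per note

for (const name of fs.readdirSync(vaultDir)) {
  if (!name.endsWith(".md")) continue;  // only duplicate Markdown notes
  const src = path.join(vaultDir, name);
  const base = path.basename(name, ".md");
  for (let i = 1; i <= copies; i++) {
    // identical content under a new filename, e.g. "Note-copy3.md"
    fs.copyFileSync(src, path.join(vaultDir, `${base}-copy${i}.md`));
  }
}
```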

On the computer with a mechanical hard drive, the delay becomes noticeable once the vault reaches about 7,000 notes. It still works properly, but the experience is not good.

On the computer with a solid-state drive, there is a noticeable freeze when the number of notes reaches about 12,000.

12,000 is a big number, but not big enough for a second brain.

If I store all my notes in one vault, I may not reach 12,000 notes for two or three years. But if I plan to use Obsidian for a lifetime, I have to consider its capacity now. I don’t want to edit and manage my notes with perceptible delay forever once the count exceeds 12,000 in five or ten years.

I would like to know whether Obsidian’s performance can still be improved.

9 Likes

I know this is an old post, but I’m a new user, so everything is new to me. :slight_smile:

I’ve been using a personal note-taking system for the last 23 years that uses nothing but vi and a single file. Notes can be free-form text, but the log structure is well defined, to the point that it represents an unambiguous grammar with features like hierarchical categories, links to external documents, key-value pairs, etc. Given that the line limit in vi is a billion lines, I’ll never hit the end. I’ve been thinking about writing a web front-end to provide certain features, and Obsidian captures some of them, so I’m considering using Obsidian instead. Although Obsidian doesn’t currently have every single feature I’d like, it does have a lot of potential, and given that it embraces my credo that the only universal interface is text (meaning I can perform analysis externally if necessary), it’s very attractive to me. It would be straightforward for me to write a script to break my notes into individual files for a vault. However, I too have wondered about performance.
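
For illustration, a minimal sketch of such a split script. The caveat: this is not my actual log grammar, which is far more elaborate; it assumes each note begins with a line like `## Title`, and both paths are placeholders.

```ts
// A minimal sketch of splitting one big notes file into individual
// Markdown files for a vault. NOT my actual grammar; it assumes each
// note starts with a "## Title" line and runs until the next one.
import * as fs from "fs";
import * as path from "path";

const source = fs.readFileSync("/path/to/notes.txt", "utf8"); // placeholder
const outDir = "/path/to/vault";                              // placeholder

// Split immediately before each "## Title" line.
const chunks = source.split(/^(?=## )/m);
for (const chunk of chunks) {
  if (!chunk.startsWith("## ")) continue; // skip preamble before the first note
  const title = chunk.split("\n", 1)[0].slice(3).trim();
  const safe = title.replace(/[\\/:*?"<>|]/g, "-"); // filename-safe title
  fs.writeFileSync(path.join(outDir, `${safe}.md`), chunk);
}
```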

While I really like the fact that Obsidian works with text files, it does go against another philosophy of mine, which is that a file system is not a DBMS. So I’ve had to wonder about two things:

  • What happens when you have thousands of files/notes, you change the name of a note, and Obsidian has to look for links that need to be updated? In a phrase, it must beat the hell out of the file descriptor table, because it needs to open, check, potentially rewrite, and close every note in the vault (a naive sketch of that scan follows this list). Of course, the vault directory (or directories) takes a beating as well from access/modification-time updates. Write caching is your friend here.
  • What happens if, during one of these updates, Obsidian crashes, your computer crashes, or you suffer a power outage while your desktop machine has no UPS? You’d be left with broken links because the process never finished. Playing devil’s advocate: a transactional DBMS could roll back the whole in-flight mess.
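
To make the first point concrete, here is a naive sketch of that vault-wide scan, assuming Node.js and `[[wikilink]]` syntax. This is emphatically not Obsidian’s actual implementation (see the developer’s note about an index further down); the function name and pattern are my own.

```ts
// A naive sketch of the vault-wide link scan described above: open,
// check, potentially rewrite, and close every note. Not Obsidian's
// actual implementation; names and link syntax are assumptions.
import * as fs from "fs";
import * as path from "path";

function updateLinks(vaultDir: string, oldName: string, newName: string): void {
  const escaped = oldName.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); // regex-escape
  // Match "[[oldName" when followed by "]", "|", or "#".
  const linkPattern = new RegExp(`\\[\\[${escaped}(?=[\\]|#])`, "g");
  for (const file of fs.readdirSync(vaultDir)) {
    if (!file.endsWith(".md")) continue;
    const fullPath = path.join(vaultDir, file);
    const text = fs.readFileSync(fullPath, "utf8");            // open + read
    const updated = text.replace(linkPattern, `[[${newName}`); // rewrite targets
    if (updated !== text) fs.writeFileSync(fullPath, updated); // write only if changed
  }
}
```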

I don’t know how Obsidian “does its thing” internally. It might mitigate this problem by using the following process for a rename:

  1. Copy the note to a new note with the new name, leaving the old note intact.
  2. Scan the whole vault for links to be updated.
  3. When the scan/update is complete, remove the old note.

By following this protocol, in the event of a crash you’d wind up with two copies of the note (new and old), with some notes pointing to the new one and some to the old, but no broken links. You’d certainly know if the system crashed right when you did a rename, and you could check for this situation. The other thing that could happen is that a note rewrite was in flight, and that note is now corrupted. And of course the worst-case scenario is that the file system itself is corrupted, but that’s out of Obsidian’s hands.
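
In code, that hypothesized protocol might look like the sketch below. This is purely speculative on my part, nothing here is confirmed about Obsidian’s internals, and it reuses the naive updateLinks() scan from my earlier sketch.

```ts
// A speculative sketch of the copy-first rename protocol above.
import * as fs from "fs";
import * as path from "path";

// The naive vault-wide scan from the earlier sketch.
declare function updateLinks(vaultDir: string, oldName: string, newName: string): void;

function safeRename(vaultDir: string, oldName: string, newName: string): void {
  const oldPath = path.join(vaultDir, `${oldName}.md`);
  const newPath = path.join(vaultDir, `${newName}.md`);
  fs.copyFileSync(oldPath, newPath);       // 1. copy; the old note stays intact
  updateLinks(vaultDir, oldName, newName); // 2. repoint links from old to new
  fs.unlinkSync(oldPath);                  // 3. remove the old note only after the scan
  // A crash before step 3 leaves duplicate notes, but never broken links.
}
```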

Given that there are practical limits to the number of files that can be handled by Obsidian (developer’s pun there: file handles…), I’m thinking that maybe I should take a middle-of-the-road approach between my one huge file and a boatload of little files: use Obsidian with a reasonable number (a couple of thousand?) of medium-to-large files. Just random thoughts here.

Most text-based database systems, such as source-code depots, have the same issue. The industry standard is source control or, for non-programmers, daily or frequent backups.

Just a note: we maintain a cache/index, so we don’t have to open and search all the files for every operation; otherwise everything would be terribly slow.
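
Roughly the idea (a simplified illustration, not our actual data structures): parse each note’s links once into a reverse map, so answering “who links to X?” never touches the disk, and a rename only rewrites the files that actually link to the renamed note.

```ts
// A simplified illustration of a link index; not Obsidian's actual
// data structures.
type LinkIndex = {
  outgoing: Map<string, Set<string>>; // note -> notes it links to
  incoming: Map<string, Set<string>>; // note -> notes linking to it
};

// Parse a note's [[wikilinks]] once and record them both ways.
function indexNote(index: LinkIndex, note: string, body: string): void {
  const targets = new Set<string>();
  for (const m of body.matchAll(/\[\[([^\]|#]+)/g)) {
    targets.add(m[1].trim());
  }
  index.outgoing.set(note, targets);
  for (const t of targets) {
    if (!index.incoming.has(t)) index.incoming.set(t, new Set());
    index.incoming.get(t)!.add(note);
  }
}

// A rename now rewrites only the notes the index says link to oldName,
// instead of opening every file in the vault.
function filesToRewrite(index: LinkIndex, oldName: string): Set<string> {
  return index.incoming.get(oldName) ?? new Set();
}
```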

4 Likes