Obsidian with very large vaults / Performance results

I’ve wondered how current “tools for thought” behave when dealing with large, highly interconnected data sets. To that end, I came up with a method and developed a Python script to investigate this systematically.
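The methodology article covers the details, but the basic idea of generating a large, highly interconnected test vault can be sketched roughly as follows. This is a minimal illustration, not the actual script: the file naming, link count, and layout are assumptions for demonstration only.

```python
import random
from pathlib import Path

def generate_vault(root: str, n_files: int = 1000,
                   links_per_file: int = 10, seed: int = 42) -> None:
    """Create n_files Markdown notes, each wiki-linking to random other notes.

    Illustrative sketch only -- names and parameters are hypothetical,
    not taken from the actual benchmark script.
    """
    rng = random.Random(seed)  # fixed seed for reproducible test data
    vault = Path(root)
    vault.mkdir(parents=True, exist_ok=True)
    names = [f"note-{i:05d}" for i in range(n_files)]
    for name in names:
        # Pick random link targets so the vault is densely interconnected.
        targets = rng.sample(names, links_per_file)
        body = f"# {name}\n\n" + "\n".join(f"- [[{t}]]" for t in targets) + "\n"
        (vault / f"{name}.md").write_text(body, encoding="utf-8")

generate_vault("test-vault", n_files=100)
```

Scaling the same idea up to 100,000 files is straightforward; only the generation time and disk usage grow.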

If this interests you, you can read more about it in the article “TfT Performance: Methodology.”

You can find a look behind the scenes in “TfT Performance: Machine Room.”

The results for Obsidian can be found in “TfT Performance: Obsidian.” As icing on the cake, there is also an article on an exceptionally large data set (100,000 highly linked files): “Interlude: Obsidian vs. 100,000.”

On my website, you can also find results for Roam Research and Logseq; results for RemNote are currently in preparation.

I hope you find this interesting.

If you have any questions or comments, feel free to contact me.