With the recent advancements in LLMs, chat-able documents are now a feasible reality. The primary purpose of bi-directional links is to simplify note-taking and searching, but chat-able documents make this effortless: no more creating links, no more browsing notes, just ask, and the LLM can give you an answer with citations. Obsidian needs to understand the urgency and react promptly, or users will abandon the product soon. Most users have already accumulated an enormous amount of content in their tools, so reacting swiftly would encourage them to stay loyal.
There are already some open-source solutions to this, e.g. LlamaIndex, and services built on top of it such as Meru.
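To give a sense of how little glue code this takes, here is a minimal sketch using LlamaIndex to index a folder of markdown notes and ask questions against it. The imports follow the current `llama_index.core` layout (older releases used plain `llama_index`), the vault path is a placeholder, and an OpenAI API key is assumed in the environment:

```python
# Minimal sketch: natural-language Q&A over a folder of markdown notes.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load every note in the vault folder (the path is a placeholder).
documents = SimpleDirectoryReader("path/to/vault", recursive=True).load_data()

# Build a vector index over the notes and wrap it in a query engine.
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Ask a question; the response is synthesized from the retrieved notes
# and carries source nodes that can serve as citations.
print(query_engine.query("What have I written about bi-directional links?"))
```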
Please make it available ASAP, even as a paid feature, or Obsidian will be buried among the relics of the past.
The links to the projects I mentioned were deleted by someone; otherwise they would be showcase enough of what this feature should look like. Meru’s website is a bit confusing right now, so try chatpdf.com instead, it’s the same idea.
I chose this title to get the developers’ attention, because this is not just a single feature but a new paradigm Obsidian could adopt.
Obsidian Sync simply transfers data from one local device to another. Obsidian Publish is for sharing, not for managing your workflow.
Bi-directional links are a part of most workflows. Turning them into an online AI component is nothing like Obsidian Publish.
I’m all for going against tradition, but locality and security are Obsidian’s thing. Many 3rd-party plugins have online elements. However, your implication is that not including this feature would be the downfall of Obsidian… which doesn’t make sense.
Also, I’m a computer science major and I have studied AI. In order to make a chat-able, vault-wide system, the model has to index a lot (if not most) of your vault.
For example, ChatGPT: “The model is able to reference up to approximately 3000 words (or 4000 tokens) from the current conversation; any information beyond that is not stored.”
I don’t know about you, but my vault is way over 100k characters. Mind you, every time a prompt is called, the AI scans the vault again. At roughly 4 characters per token, that’s about 25k tokens per prompt, several times more than that context window.
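To make that arithmetic concrete (the figures are the rough estimates quoted above, not measurements):

```python
# Back-of-the-envelope math for naively stuffing a whole vault into a prompt.
vault_chars = 100_000     # "way over 100k characters"
chars_per_token = 4       # rough average for English text
context_tokens = 4_000    # the ChatGPT context window quoted above

vault_tokens = vault_chars // chars_per_token
print(vault_tokens)                     # 25000 tokens per full-vault prompt
print(vault_tokens / context_tokens)    # ~6.25x bigger than the context window
```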
That does not counter my argument.
They’re describing a retrieval-based question answering system, which is extremely cost-effective compared to traditional prompting. However, it can still get very expensive with large vaults. Even if a vectorized index is used, it’s still going to consume a lot of resources for bigger vaults.
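For anyone unfamiliar with the approach: a retrieval-based system embeds each note chunk once, then at question time retrieves only the top-k most similar chunks and sends those, rather than the whole vault, to the LLM. Here’s a minimal sketch; `embed` and `complete` are hypothetical stand-ins for whatever embedding model and LLM you’d plug in:

```python
# Sketch of retrieval-based QA over a vault: only the k most relevant
# chunks reach the LLM, instead of the entire vault on every prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a real embedding model."""
    raise NotImplementedError

def complete(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call."""
    raise NotImplementedError

def answer(question: str, chunks: list[str], k: int = 3) -> str:
    # In practice the chunk vectors are computed once and cached in a
    # vector index; that one-time embedding pass is where a big vault
    # still costs real resources.
    chunk_vecs = [embed(c) for c in chunks]
    q = embed(question)

    # Rank chunks by cosine similarity to the question.
    sims = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in chunk_vecs]
    top = sorted(range(len(chunks)), key=lambda i: sims[i], reverse=True)[:k]

    # Only the k best chunks enter the prompt, so per-query cost scales
    # with k, not with vault size.
    context = "\n\n".join(chunks[i] for i in top)
    return complete(f"Answer using only these notes:\n{context}\n\nQ: {question}")
```

Per-query cost is bounded this way, but the initial embedding pass and the vector storage still grow with the vault, which is exactly the expense I’m pointing at.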
I never argued that it would change my workflow. However, it does go against traditional Obsidian values.
Yes, I removed your links, because you are a new user with no previous activity, and I wanted to prevent the risk that your account was just link-farming. (I see that’s not the case now, but as a new user, you can participate in the community for a while before coming in with a bunch of external links. The point you were making does not require those links.)