Chat-able Documents Have Already KILLED Bi-Directional Links

Use case or problem

With recent advancements in LLMs, chat-able documents are now a feasible reality. The primary purpose of bi-directional links is to simplify note-taking and searching, but chat-able documents make this effortless: no more creating links, no more browsing notes. Just ask, and the LLM can give you an answer with citations. It’s imperative that Obsidian understands the urgency and reacts promptly, or users will soon abandon the product. Since most users have already accumulated an enormous amount of content in their tools, reacting swiftly will encourage them to remain loyal.

Proposed solution

There are already some open-source solutions to this: e.g. LlamaIndex

And services built on top of it such as Meru

Please make this available ASAP, even as a paid feature, or Obsidian will be left behind as a remnant of the past.

Could you describe the feature you want more specifically (and change the title to match)? I think you have something more specific in mind than “add AI,” but it’s not coming through.

1 Like

Seems the opposite of local to me

I highly doubt that will happen. There are also many concerns with large vaults: indexing and searching through a huge vault can rack up costs pretty quickly. The more tokens, the higher the price.
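A back-of-the-envelope sketch of why token counts drive cost. API pricing is typically quoted per 1k tokens, so cost scales linearly with how much text each query sends; the rate and query volume below are hypothetical, chosen purely for illustration, not a real price list:

```python
def prompt_cost_usd(tokens, usd_per_1k_tokens):
    # Cost scales linearly with the number of tokens sent per call.
    return tokens / 1000 * usd_per_1k_tokens

# Hypothetical figures for illustration only (not real prices):
# a 25k-token query at $0.002 per 1k tokens, 100 queries per day.
per_query = prompt_cost_usd(25_000, 0.002)
per_month = per_query * 100 * 30
print(per_query, per_month)  # 0.05 150.0
```

Even at a cheap hypothetical rate, a heavily used large vault adds up month over month.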

In addition, most Obsidian users enjoy the locality and offline aspects of Obsidian

You can use Smart Connections if you want this functionality. But I doubt it’s as urgent as you think it is.

“Overall, only 1 in 10 (9%) Americans believe computer scientists’ ability to develop AI would do more good than harm to society.” Source

We have to keep in mind that a lot of people aren’t that AI hungry. I’m a tech dork myself but…man…a lot of people don’t want AI in EVERYTHING

2 Likes

The links to the projects I mentioned were deleted by someone; otherwise they would have been enough of a showcase of what this feature should look like. While Meru’s website is a bit confusing right now, try chatpdf.com; it’s the same idea.

I used this title to get the developers’ attention, as this is not just a single feature but a new paradigm Obsidian could adopt.

It has nothing to do with locality. It’s the same kind of service concept as Obsidian Publish.

Obsidian Sync simply transfers data from one local vault to another. Obsidian Publish is for sharing, not managing your workflow.

Bi-directional links are part of most workflows. Turning them into an online AI component isn’t anything like Obsidian Publish.

I’m all for going against tradition, but locality and security are Obsidian’s thing. Many third-party plugins have online elements. However, your implication is that not including this feature would be Obsidian’s downfall, which doesn’t make sense.

If you did more research, you would find that it doesn’t need to prompt the whole vault to generate an answer. Source

“A summarization query requires GPT to iterate through many if not most documents in order to synthesize an answer.” - your source

Adding an optional service will NOT change your traditional workflow. You can choose not to use it.

“To answer a query, the vector store index embeds the query, fetches the top-k text chunks by embedding similarity, and runs the LLM over these chunks in order to synthesize the answer.” — my source.
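The retrieval flow in that quote (embed the query, fetch the top-k chunks by similarity, send only those chunks to the LLM) can be sketched in plain Python. The bag-of-words “embedding” and the helper names here are stand-ins for a real embedding model, purely to illustrate why the whole vault never reaches the LLM:

```python
import math
import re

def embed(text):
    # Stand-in for a learned embedding model: a bag-of-words count vector.
    vec = {}
    for word in re.findall(r"[a-z]+", text.lower()):
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_chunks(query, chunks, k=2):
    # Embed the query, rank stored chunks by similarity, keep the top k.
    # Only these k chunks (not the whole vault) would go to the LLM.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Bi-directional links connect notes in both directions.",
    "Vector indexes fetch only the most similar chunks.",
    "Obsidian stores notes as local Markdown files.",
]
print(top_k_chunks("How does a vector index fetch chunks?", chunks, k=1))
# -> ['Vector indexes fetch only the most similar chunks.']
```

A real system (e.g. LlamaIndex, mentioned earlier in the thread) replaces the bag-of-words vectors with learned embeddings and an approximate nearest-neighbor index, but the shape of the pipeline is the same.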

There are already community plugins that offer some AI functionality.

We won’t add any AI features ourselves anytime soon.
If we were to add AI features into Obsidian we would want them to work entirely offline.

Here is a longer answer by kepano:

1 Like

Also, I’m a computer science major and I have studied AI. To make a chat-able vault-wide system, the model has to index a lot (if not most) of your vault.

For example, ChatGPT: “The model is able to reference up to approximately 3000 words (or 4000 tokens) from the current conversation; any information beyond that is not stored.”

I don’t know about you, but my vault is way over 100k characters. Mind you, every time a prompt is called, the AI scans the vault again. At roughly 4 characters per token, that’s about 25k tokens per prompt.
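The chars-to-tokens rule of thumb above (about 4 characters per English token) is easy to sanity-check. The function name here is hypothetical, just illustrating the arithmetic:

```python
def estimate_tokens(char_count, chars_per_token=4):
    # Rough rule of thumb: roughly 4 characters per English token.
    # Real tokenizers vary by language and content.
    return char_count // chars_per_token

# A 100k-character vault works out to roughly 25k tokens, well beyond
# the ~4k-token context window quoted above.
print(estimate_tokens(100_000))  # 25000
```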

That does not counter my argument,

They’re describing a retrieval-based question-answering system, which is extremely cost-effective compared to traditional prompting. However, it can still get very expensive with large vaults. Even with a vectorized index, it’s still going to consume a lot of resources for bigger vaults.

I never argued it would change my workflow. However, it does go against traditional Obsidian values.

1 Like

Moving this to #knowledge-management as there is no clear feature request.

1 Like

I’m going to close this thread. @zhy3213, if you had bothered searching before posting, you would have found several other posts about this topic, which I recently pulled together in one thread.

4 Likes

Yes, I removed your links, because you are a new user with no previous activity, and I wanted to prevent the risk that your account was just link-farming. (I see that’s not the case now, but as a new user, you can participate in the community for a while before coming in with a bunch of external links. The point you were making does not require those links.)

1 Like