Text Embeddings

Vector text embeddings would allow for semantic search and many other powerful features. I think it would be fun to open up a discussion about what powerful note-taking workflows could be unlocked in Obsidian by using text embeddings.

  • Imagine if we had a graph view that visualized notes by embedding distance
  • Imagine a Chrome extension that surfaced the closest notes to any web page you are on, by embedding vector distance
  • Imagine if, when sharing an item to Obsidian, it suggested potential destination notes based on an embedding search
  • Imagine adding semantic search, so you could describe what you want to find, and the meaning behind your words would be captured even if the exact words used in the note were different.

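To make the semantic-search idea concrete, here is a minimal sketch of how notes could be ranked against a query by cosine similarity of their embedding vectors. The note titles, the 3-dimensional toy vectors, and the `semantic_search` helper are all illustrative assumptions; a real setup would get its vectors from an embedding model (which typically produces hundreds or thousands of dimensions per text).

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, note_vecs, top_k=3):
    """Rank notes by embedding similarity to the query vector."""
    scored = [(title, cosine_similarity(query_vec, vec))
              for title, vec in note_vecs.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

# Toy 3-dimensional embeddings; a real model would produce far more dims.
notes = {
    "Gardening tips": [0.9, 0.1, 0.0],
    "Meeting notes":  [0.0, 0.8, 0.2],
    "Plant care":     [0.8, 0.2, 0.1],
}
# Hypothetical embedding of a query like "how do I keep plants alive".
query = [0.85, 0.15, 0.05]

print(semantic_search(query, notes, top_k=2))
```

The same distance function would drive the other ideas above: the graph view would lay out notes so that small cosine distances become small screen distances, and the share/clip suggestions would just be this search run against the shared text.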
What are other ideas for how recent advances in Large Language Models might unlock new workflows in PKM and second-brain development / organization?


Had the same idea, let’s talk! I’m exploring this idea right now and want to create a prototype.

Sure; I’m down to chat: Calendly - Zach Doty

Interested in building this out, throw me in the loop! Discord djmango#8778

Love this idea! Semantic embedding has also been proposed at Conceptarium - Paul Bricman