Neural Composer: Local Graph RAG made easy (LightRAG integration)

TL;DR

I built Neural Composer because standard vector search plugins weren’t connecting the dots in my vault. It integrates LightRAG (a faster, more efficient “flavor” of Graph RAG) directly into Obsidian to give you answers based on relationships, not just keywords. It automatically manages a local LightRAG server for you (starts and stops with Obsidian), supports PDF, DOCX, and many other formats, and offers a hybrid search mode. Free and open source.

The “Why”

Hi everyone! :waving_hand:

I’ve been using Obsidian for years to manage engineering projects, research, and a personal journal. Like many of you, I hit a wall: my vault grew huge, and while finding specific notes was easy, synthesizing concepts across different notes was hard.

I tried the Smart Composer plugin, which uses RAG (Retrieval-Augmented Generation), and I used it a lot. It is great, but it relies on simple vector search. If I asked, “How does the methodology in Paper A contradict the results in Project B?”, the AI often failed because the text chunks weren’t mathematically similar, even though they were logically connected.

Recently I read about LightRAG and Knowledge Graphs and realized that was the missing piece. I wanted a graph that could “traverse” my notes to find those hidden connections.

What I Built

I’m an engineer, so I decided to build a solution (with the heavy assistance of my AI co-pilot). I forked the UI of Smart Composer (credits to glowingjade for the amazing base!) and completely re-engineered the backend.

“Neural Composer” is the result. It’s a client for running a local LightRAG server that:

  1. Builds a Knowledge Graph of your vault (Entities + Relationships).
  2. Manages the Server: I didn’t want to open a terminal every time I wanted to chat with my notes. The plugin auto-starts the LightRAG server when Obsidian opens and kills it when you close the app.
  3. Hybrid Retrieval: It combines local file reading (for precision) with global graph queries (for synthesis).
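
For the curious, this is roughly what happens under the hood when you ask a question: the plugin POSTs to the local server’s query endpoint. A minimal Python sketch, assuming LightRAG’s default port (9621), its documented /query endpoint, and a JSON body with query and mode fields (all of which may vary by LightRAG version):

import requests

# Ask the local LightRAG server a question. "hybrid" mode combines
# entity-level (local) retrieval with graph-level (global) retrieval.
resp = requests.post(
    "http://localhost:9621/query",  # assumed default LightRAG port
    json={
        "query": "How does the methodology in Paper A contradict the results in Project B?",
        "mode": "hybrid",  # other documented modes: "local", "global", "naive"
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json().get("response"))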

Who is this for?

I built this for my own needs, but I think it fits well if you are:

  • A Researcher: Trying to synthesize literature reviews from a lot of PDFs.
  • A DM/Writer: Needing to track complex lore and character relationships without manual wikis.
  • Privacy Conscious: You can run this 100% locally with Ollama if you have the hardware (I use it with an RTX 2070 and it flies).

How to try it

It requires a bit of setup (installing Python and the LightRAG library), but I tried to make the rest as “plug-and-play” as possible.

  1. pip install "lightrag-hku[api]"
  2. Install Neural Composer (manual install from GitHub Releases for now; you can also use BRAT).
  3. Point the plugin to your lightrag-server executable (lightrag-server.exe on Windows).
  4. Right-click your notes folder → “:brain: Ingest into Graph”.
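
If the plugin can’t find the server, a quick sanity check helps: start lightrag-server manually in a terminal, then probe it. A minimal sketch, assuming the default port 9621 and a /health endpoint (both are assumptions; check your install’s console output for the actual address):

import requests

# Probe the locally running LightRAG server before wiring up the plugin.
try:
    r = requests.get("http://localhost:9621/health", timeout=5)
    print("LightRAG server is up:", r.status_code)
except requests.exceptions.ConnectionError:
    print("Nothing on port 9621 - check your lightrag-server path and port.")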

It’s open source. I’m not selling anything, just sharing a tool that solved a big headache for me.

:backhand_index_pointing_right: Repository & Download: https://github.com/oscampo/obsidian-neural-composer

I’d love to hear if this helps your workflow or if you find any bugs (it’s a v1.0, so be gentle!).

Happy connecting! Oscar.


I tried Ollama today with BGE-M3 and found that no matter how small I made the chunk size, and even after attempting to patch LightRAG’s utils.py, I always hit the 60-second worker timeout bottleneck.

Is it an architectural limitation that LightRAG cannot reliably embed long-form content using Ollama models?

These env settings are always dropped:

EMBEDDING_TIMEOUT=600
EMBEDDING_FUNC_MAX_ASYNC=1
EMBEDDING_BATCH_NUM=1

Since I’d really like to run this locally on a CPU-only laptop, do you see a way forward on this one?

Hi @Sunnaq445!
Thanks for testing the limits on a CPU-only setup! That is a very challenging environment for RAG.

1. Why your settings are “dropped”
Neural Composer currently manages the .env file authoritatively. Every time the plugin loads or you change a setting in the UI, it regenerates the .env file to ensure consistency, which overwrites any manual edits you made externally.

The Workaround:
Use the plugin’s built-in editor:

  1. Go to Settings > Neural Composer.

  2. Scroll down to the orange “:gear: Review .env & Restart” button.

  3. Add your custom lines (EMBEDDING_TIMEOUT=600, etc.) inside this modal window.

  4. Click “Save & Restart”.
    Note: This will apply them for the current session. However, be aware that changing a dropdown in the UI later might reset the file again (we are working on a “Custom Env Variables” feature to make this permanent).
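
For context, a fuller embedding section in that modal might look like the sketch below. The last three lines are your values; the EMBEDDING_BINDING* names follow LightRAG’s sample .env file and may differ across versions:

# Embedding via Ollama (names per LightRAG's sample .env; verify for your version)
EMBEDDING_BINDING=ollama
EMBEDDING_BINDING_HOST=http://localhost:11434
EMBEDDING_MODEL=bge-m3
EMBEDDING_DIM=1024
# Your overrides
EMBEDDING_TIMEOUT=600
EMBEDDING_FUNC_MAX_ASYNC=1
EMBEDDING_BATCH_NUM=1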

2. The Bottleneck (Architecture)
BGE-M3 is a very heavy model for a CPU-only laptop. It generates 1024-dimensional embeddings and handles multiple languages. It is likely timing out because the CPU simply cannot crunch the math fast enough within the default HTTP timeout window.

My Recommendation:
Switch to nomic-embed-text in Ollama.

  • It is significantly lighter and faster on CPU.

  • It is optimized for RAG.

  • It usually stays well under the 60s timeout even on modest hardware.

Try changing your Embedding Model in the settings to nomic-embed-text and let us know if that stabilizes the pipeline!
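
If you don’t already have the model, pulling it is a single command:

ollama pull nomic-embed-text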


Hi fellow knowledge workers!

I’m excited to share a major update to Neural Composer, a plugin that brings LightRAG (Graph-Augmented Generation) to Obsidian.

Unlike traditional plugins that rely solely on vector similarity, Neural Composer builds a Knowledge Graph from your notes, allowing you to ask questions that require understanding relationships and global context.

What’s new in v1.1.x?

We focused heavily on Usability and Sovereignty:

  • Native Graph Manager: Visualize your brain in 2D (Sigma.js) or 3D (WebGL) without leaving Obsidian. It’s not just a viewer; it’s a manager. You can Merge duplicate entities and Edit descriptions generated by the AI to curate your knowledge base.
  • Local & Private: Full support for local LLMs (Ollama), local Embeddings, and now Local Reranking. You can run the entire pipeline offline.
  • Custom Ontology: Teach the graph your specific domain language (e.g., “Experiment”, “Character”, “Theorem”) instead of generic categories.
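
To make the Custom Ontology item concrete: under the hood, LightRAG’s extraction can be steered with custom entity types. A minimal Python sketch, assuming a LightRAG version whose constructor accepts addon_params with entity_types and language keys (the plugin exposes this through its settings UI, so you normally won’t write this yourself):

from lightrag import LightRAG

# Illustration only: replace LightRAG's generic default entity categories
# with your own domain vocabulary. LLM/embedding configuration is omitted
# here; on some versions it falls back to OpenAI defaults.
rag = LightRAG(
    working_dir="./my_vault_graph",
    addon_params={
        "entity_types": ["Experiment", "Character", "Theorem"],
        "language": "English",
    },
)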

Latest Hotfix (v1.1.5):
We just pushed a fix for users using Google Gemini, updating the default embedding models to match Google’s latest API changes (deprecation of text-embedding-004).

How to get it:
Currently available via GitHub Releases (manual install) or via BRAT (repo: oscampo/obsidian-neural-composer). We are in the review queue for the Community Plugins list!

Repository & Documentation: https://github.com/oscampo/obsidian-neural-composer

Happy connecting!