A friend and I built a plugin that leverages AI inside Obsidian.
We focused our efforts on three areas:
- Writing content
- Rewriting content
- Connecting your notes
The current landing page explains it better:
The technical infrastructure is a mix of GPT-3 and some ML goodies. Ping me if you want to know more.
Fantastic work - I can’t wait to test it out!!
Eager to test it out asap.
I am looking forward to trying it.
There are two plugins in Obsidian for using AI:
How do the “link vault” and “ask questions of my vault” features handle my privacy?
I cannot find any privacy statement about this. Could you let me know where my vault data goes?
When you install Ava, your notes are turned into numerical vectors (embeddings), which are written to a database (pinecone.io, or supabase.com in some newer versions).
When you use Links, similar notes are found in the database and their paths are retrieved (your notes themselves are not stored in the cloud).
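The lookup step described above can be sketched roughly like this. This is a minimal illustration of embedding similarity search, not Ava's actual code; the note paths and the toy 3-dimensional "embeddings" are made up stand-ins for real model output:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_similar(query_embedding, index, top_k=3):
    # `index` maps note paths to stored embeddings; only the paths of the
    # closest notes are returned, never the note contents.
    scored = [(path, cosine_similarity(query_embedding, emb))
              for path, emb in index.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [path for path, _ in scored[:top_k]]

# Toy embeddings; a real system would use a model with hundreds of dimensions.
index = {
    "Notes/gardening.md": [0.9, 0.1, 0.0],
    "Notes/compost.md":   [0.8, 0.2, 0.1],
    "Notes/taxes.md":     [0.0, 0.1, 0.9],
}
print(find_similar([0.85, 0.15, 0.05], index, top_k=2))
```

The key point for privacy is visible in the sketch: the database only needs the vectors and the paths, so the similarity lookup never has to ship note text anywhere.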
Ava uses different-ai/embedbase on GitHub (“The open source database for ChatGPT”) to achieve this; everything is open source.
Recently, we moved our back end to Supabase because it aligns with both Embedbase's and Obsidian's open-source philosophy.
Thanks for this response. I’m still not clear on one thing. If I use Ava to rewrite a note or search for similar notes, will my content be absorbed into the “collective knowledge” of ChatGPT? I have some proprietary/confidential/IP-type content in my vault that I wouldn’t want shared with the world (indirectly via AI). Is there a risk of this if I use Ava?
By default, Ava uses OpenAI.
Everything that you send to OpenAI is tracked, recorded, and probably read by humans (see “Learning from human preferences”).
In Ava you can ignore some folders, meaning “Link” will not use those folders/files. However, “Rewrite”, “Write”, etc. will still send the data you pass to the command to OpenAI for processing. (Ava itself does not store any of this data.)
Ignore confidential folders in Ava's settings, and don't run Ava commands over those files.
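The effect of an ignore list like this can be sketched as simple path-prefix filtering. The folder names and helper functions below are hypothetical illustrations, not Ava's actual settings API:

```python
def is_ignored(path, ignored_folders):
    # A note is skipped if it lives under any ignored folder.
    return any(path == folder or path.startswith(folder + "/")
               for folder in ignored_folders)

def indexable_notes(paths, ignored_folders):
    # Only non-ignored notes would be embedded and used by "Link".
    return [p for p in paths if not is_ignored(p, ignored_folders)]

ignored = ["Confidential", "Clients/ACME"]
notes = [
    "Daily/2023-04-01.md",
    "Confidential/ip.md",
    "Clients/ACME/contract.md",
    "Clients/Other/notes.md",
]
print(indexable_notes(notes, ignored))
```

Note the `folder + "/"` check: matching on the bare prefix alone would also (wrongly) exclude a sibling folder such as `ConfidentialX`.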
Local-first Ava (Link and Ask) for technical users
If you're technical, you can run a local-first Ava (for Link and Ask) like this:
A harder but more fun route is to use a ChatGPT-class model (Vicuna); see the Embedbase documentation.
Then set localhost:8000 in Ava's settings.
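Pointing Ava at localhost:8000 amounts to swapping the API base URL for a self-hosted instance. As a rough sketch, a search request against such an endpoint might be built like this; the URL path and payload shape here are assumptions for illustration, so check the Embedbase documentation for the real API:

```python
import json

def build_search_request(base_url, dataset, query, top_k=5):
    # Hypothetical request shape for a self-hosted semantic-search endpoint;
    # the actual Embedbase API may differ.
    url = f"{base_url.rstrip('/')}/v1/{dataset}/search"
    payload = {"query": query, "top_k": top_k}
    return url, json.dumps(payload)

url, body = build_search_request("http://localhost:8000", "obsidian", "compost tips")
print(url)
# http://localhost:8000/v1/obsidian/search
```

With a local endpoint, both the embedding lookup and the query text stay on your machine instead of going to a third-party service.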
Thanks very much for this reply. The ignore functionality provides some reassurance. And I suppose a separate vault is always an option for even more protection, if someone felt the trade-offs were worth it.
The process for using local embeddings, whether llama or sentence-transformers, is super clear.
But can you confirm that Ava will use them rather than OpenAI's embeddings model? Are there any more steps needed in the plugin configuration? To be honest, for me this is the killer feature.
Sorry … I didn’t see the last sentence about setting localhost in Ava's settings. All clear now.