FYI:
Fully offline, in-line with obsidian philosophy.
Note the install note for the Intel macOS install.
We’ll need something to monitor the vault and add files via ‘ingest’
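As a starting point, here's a rough sketch of what that vault monitoring could look like, using only the Python standard library: poll the vault for new or changed Markdown files and hand them to an ingest step. The vault path and the `ingest_file` function are placeholders I made up, not anything the local GPT tool actually provides.

```python
import time
from pathlib import Path

def ingest_file(path: Path) -> None:
    # Placeholder: call the local GPT tool's real ingest routine here.
    print(f"ingesting {path}")

def scan(vault: Path, seen: dict) -> list:
    """Return Markdown files whose mtime changed since the last scan."""
    changed = []
    for md in vault.rglob("*.md"):
        mtime = md.stat().st_mtime
        if seen.get(md) != mtime:
            seen[md] = mtime
            changed.append(md)
    return changed

def watch(vault: Path, interval: float = 5.0) -> None:
    """Loop forever, ingesting anything new or modified in the vault."""
    seen: dict = {}
    while True:
        for path in scan(vault, seen):
            ingest_file(path)
        time.sleep(interval)
```

A proper version would probably use filesystem events (e.g. the watchdog library) instead of polling, but polling is simple and good enough for a vault that changes a few times a minute.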
I’ve been testing this with online providers and found that they’re nothing like the full ChatGPT:
https://app.libraria.dev
I tried it on some books in pdf format. Perhaps the formatting of the data in a book isn’t good enough?
I don’t have enough computing power to really test this self install GPT on my own vault, but I hear that it’s basically unusable without a powerful GPU.
And if we rent a more powerful computer, or use services like the one above, we lose privacy and some security.
So, this is a reason to upgrade to a faster computer.
However, current GPUs are not really designed for AI, and while AI-specialist hardware is just barely available now, who’s to say how long it will take to become widely available commercially?
Interesting, because I thought even phones are coming with AI specialist chips built in these days.
On the upside, I noticed that chain of reasoning and split personality prompts give better responses now.
As an aside, I notice that ChatGPT, as a chat interface with the bigger GPT behind it, echos the structure of a human, with the ego at the front and the unconscious behind it all.
I’m testing it on my MacBook Pro (Intel) with 3000 notes and it’s taking hours. I don’t know how many hours it’s going to take, but I’m probably going to have to run it in the cloud.
OK, so since posting this I’ve found that this system is probably built on LangChain, just like those various PDF-upload services are.
LangChain can also use ChatGPT to process large files. However, what I’m not clear about is just how much data gets out by using a ChatGPT API key this way.
I wondered if it might be possible to use remote CPU power, yet keep the files secure and local, a bit like distcc distributed compilation on Gentoo.
edit: Another idea that is maybe slightly more secure than putting everything in the cloud could be to rent a runpod.io and then mount a network drive. Technically the cloud provider still has easy access, but maybe it’s better than nothing.
The Meld Encrypt plugin might be handy here?
I also observed the slowness of running privateGPT on my MacBook Pro (Intel). I tested with the default single text file that comes with the installation, and it took around 15 min to give an answer for a query.
Perhaps Khoj can be a tool to look at: GitHub - khoj-ai/khoj: An AI personal assistant for your digital brain. There is also an Obsidian plugin together with it.
Searching can be done completely offline, and it is fairly fast for me. QA with local files now relies on OpenAI. According to the project page, the tool would first search for related notes/files and then send the most relevant ones to OpenAI for giving the final answer. I am uncertain how many relevant notes/files would be sent to OpenAI, though.
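To make that search-then-send pattern concrete, here's a minimal sketch of the idea: rank local notes against the query and only pass the top few to the remote model, so most of the vault never leaves the machine. The scoring below is naive word overlap purely for illustration; it's an assumption, not whatever embedding search Khoj actually uses.

```python
def score(query: str, text: str) -> int:
    """Naive relevance score: count words the note shares with the query."""
    q = set(query.lower().split())
    return sum(1 for w in text.lower().split() if w in q)

def top_k_notes(query: str, notes: dict, k: int = 3) -> list:
    """Return the names of the k notes most relevant to the query."""
    ranked = sorted(notes, key=lambda n: score(query, notes[n]), reverse=True)
    return ranked[:k]

def build_prompt(query: str, notes: dict, k: int = 3) -> str:
    """Build the prompt that would be sent to the remote model.

    Only the top-k note bodies are included, which is the whole point:
    the rest of the vault stays local.
    """
    context = "\n---\n".join(notes[n] for n in top_k_notes(query, notes, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Even in this toy form, you can see the privacy question the thread raises: how big k is, and how large each note is, determines exactly how much of the vault gets sent out.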
On the other hand, on the Khoj Discord channel, there is a thread discussing the usage of local LLMs together with Khoj:
That Discord link seems to be dead?
Khoj looks great, but I haven’t yet been able to show to myself that for sure it’s searching my notes with chat mode. Any way to test?
That’s the direct link to a message. You can join their server here Khoj (link taken from their homepage https://khoj.dev/)
Oh, I did not realize that. It is a link to a thread, “Support local LLMs”, within their Discord server. One probably needs to log in to their server first; then the link would most likely work.
Have you succeeded in setting up OpenAI keys and such by following the instructions on their project page?
I managed to get it working.
The issue was that many of my notes are not in a human-readable format; they’re designed to be read only by myself. That was blocking my queries.
When I focused on stuff with longer prose, I got some unique answers back.
This limits the usefulness of it though because longer prose is not really what I want my vault for. I only want it to remember the kind of things a computer remembers well.
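Given that terse index-style notes produced poor answers, one workaround could be to filter out keyword-only notes before ingesting and only feed the tool notes that look like prose. Here's a rough heuristic sketch; the average-words-per-line threshold is my own assumption to tune, not anything these tools define.

```python
def looks_like_prose(text: str, min_avg_words: float = 6.0) -> bool:
    """Treat a note as prose if its non-empty lines average enough words.

    Keyword/index-style notes tend to have one or two words per line,
    while sentences push the average up.
    """
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if not lines:
        return False
    avg = sum(len(ln.split()) for ln in lines) / len(lines)
    return avg >= min_avg_words
```

Running this over the vault first would at least tell you how much of it the chat mode can realistically make use of.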
Hey Andrew
What do you mean, not human readable? I mean, how did you make your notes non-human-readable? And how can I develop a way to make my own notes self-understandable?
In my Zettelkasten, I basically just prioritize the index part: keywords.
I try to avoid using sentences. They’re not very useful to me. Well, they weren’t until now.