Just added a feature to easily self-host file organizer.
Self-hosting will stay free forever, but if you’re lazy you can use the cloud hosted version today!
Hi folks,
This is not ready for prime time yet but just wanted to share my excitement about an upcoming feature.
I’ve been building a plugin called File Organizer 2000 for people who hate organization but want a neatly organized vault.
I shipped it a month ago and until now it’s been using OpenAI.
Right now I’m migrating it to use open-source models.
If you want to support this in any way, we’d appreciate you signing up here:
Thank you
This is pretty cool. I am looking for a tool which has an Inbox and a place to put organized files.
Right now I do it by hand and with Unix scripts. The idea is that if I put a file “Dumb stuff.md” into the inbox folder, it would create “Content/D/u/Dumb Stuff” and put the file into it. Also, if I have “Food;Dessert.md”, then it creates “Content/F/o/Food/” and puts the file in there, since I treat the semicolon as a special character. Any idea how this could be done super easily?
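Roughly, the script version of what I do by hand looks like this (just a sketch in Node/TypeScript rather than my actual shell scripts; folder names are made up):

```ts
// Sketch of my manual bucketing (hypothetical, not the plugin):
//   "Dumb stuff.md"   -> Content/D/u/Dumb stuff/
//   "Food;Dessert.md" -> Content/F/o/Food/   (semicolon = special character, folder is the part before it)
import * as fs from "fs";
import * as path from "path";

const INBOX = "Inbox";
const CONTENT = "Content";

for (const file of fs.readdirSync(INBOX)) {
  if (!file.endsWith(".md")) continue;
  const base = path.basename(file, ".md");
  // The topic/folder name is everything before the first semicolon, if there is one.
  const topic = base.split(";")[0].trim();
  const first = topic[0].toUpperCase();
  const second = (topic[1] ?? "_").toLowerCase();
  const target = path.join(CONTENT, first, second, topic);
  fs.mkdirSync(target, { recursive: true });
  fs.renameSync(path.join(INBOX, file), path.join(target, file));
}
```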
hey should be pretty easy.
one of the things that we’re working on is “custom workflows”
where you’d be able to describe common workflows just like you did right now and have the plugin automatically make them work for you.
But I’d say it’s a month out as we need to ship some other things first.
If you know how to code and want to help, I have a pretty good idea of how to make it happen.
Here’s a mock of how you’d explain that:
Under the hood we’re using some AI goodies that would make this possible.
Hi folks,
I created a small plugin that uses the latest GPT Vision model to automatically annotate files and move them to a specific folder.
Pretty barebones atm.
Note: atm requires your own OpenAI API key
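Under the hood it’s basically one vision call per image. A rough sketch with the openai Node SDK (the model name and prompt here are placeholders, not necessarily what the plugin uses):

```ts
// Illustrative sketch of the annotation step (placeholder model/prompt).
import OpenAI from "openai";
import * as fs from "fs";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function describeImage(imagePath: string): Promise<string> {
  // Assumes a PNG; adjust the data-URL MIME type for other formats.
  const b64 = fs.readFileSync(imagePath).toString("base64");
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini", // any vision-capable model works here
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Describe this image so it can be named and filed." },
          { type: "image_url", image_url: { url: `data:image/png;base64,${b64}` } },
        ],
      },
    ],
  });
  return res.choices[0].message.content ?? "";
}
```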
A few queries on this?
Now supports audio also
hi @puneetjindal sorry for the late reply.
There’s a right-now answer and a soon answer:
The right-now answer: the plugin sees a small subset of your files, and on top of that it’s completely under your control. The way it works is that the plugin only looks in a single folder (of your choice).
The soon answer:
I’m moving to open-source private models that you can control! This will be text and audio models only in the beginning, but I hope to extend it to photos and videos in the future.
Hey folks,
I finally got the File Organizer 2000 plugin running with LLaVA + Llama 3
it’s now possible to run it fully locally!
if you like this, consider giving us a star on GitHub
nice. what was your prompt for that action? it’s unclear what is Obsidian + plugin functionality and what is the LLM
it’s a series of different prompts and different LLMs:
llava: one prompt for extracting text from the image
llama3: one prompt to create tags, suggest a folder, and suggest a name
and the plugin runs all of this in sequence
it’s the File Organizer 2000 plugin, available on the store
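roughly, the sequence looks like this (a simplified sketch using the Vercel AI SDK + ollama-ai-provider; the prompts are placeholders, not the exact ones in the plugin):

```ts
// Simplified sketch of the llava -> llama3 sequence (placeholder prompts).
import { generateText } from "ai";
import { ollama } from "ollama-ai-provider";
import * as fs from "fs";

async function processImage(imagePath: string): Promise<string> {
  // Step 1: llava extracts text from / describes the image.
  const { text: extracted } = await generateText({
    model: ollama("llava"),
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Extract all text from this image and describe it." },
          { type: "image", image: fs.readFileSync(imagePath) },
        ],
      },
    ],
  });

  // Step 2: llama3 turns that into tags, a suggested folder, and a suggested name.
  const { text: metadata } = await generateText({
    model: ollama("llama3"),
    prompt: `Suggest tags, a folder, and a human-readable file name for this note:\n\n${extracted}`,
  });

  return metadata;
}
```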
hi folks
just excited to share a new feature I built into File Organizer 2000.
fo2k can now “understand” what file you’re looking at and format it based on a custom prompt!
this is v0 so don’t expect too much, but in the video below you’ll be able to see how the AI assistant understands that it’s looking at a workout file and decides to apply a special workout prompt to it.
how it works:
and then, when you’re viewing a file, you can execute the “AI Format” command, which uses AI to determine what type of document you’re looking at and then automatically formats it.
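in plugin terms it boils down to something like this (a rough sketch, not the actual fo2k source; classifyDocument and formatWithPrompt stand in for the AI calls):

```ts
// Rough sketch of an "AI Format"-style command (illustrative, not the real fo2k code).
import { Notice, Plugin } from "obsidian";

// Placeholder AI helpers; in this sketch they just return canned values.
async function classifyDocument(content: string): Promise<string> {
  // ...ask a model "what kind of document is this?"...
  return "workout";
}

async function formatWithPrompt(content: string, documentType: string): Promise<string> {
  // ...ask a model to rewrite `content` using the custom prompt for `documentType`...
  return content;
}

export default class AiFormatSketch extends Plugin {
  async onload() {
    this.addCommand({
      id: "ai-format",
      name: "AI Format",
      callback: async () => {
        const file = this.app.workspace.getActiveFile();
        if (!file) return;
        const content = await this.app.vault.read(file);
        const documentType = await classifyDocument(content); // e.g. "workout"
        const formatted = await formatWithPrompt(content, documentType);
        await this.app.vault.modify(file, formatted);
        new Notice(`AI Format: treated file as "${documentType}"`);
      },
    });
  }
}
```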
to learn more go to https://fileorganizer2000.com
what do you folks think?
Small demo of using FileOrganizer 2000 + Apple Shortcut to easily get my handwritten notes into Obsidian.
For the people curious about how it works:
The FileOrganizer plugin is designed to automatically organize your files in Obsidian. Here’s a simplified overview of its workflow:
Folder Monitoring: The plugin watches a specific folder in your Obsidian vault, as defined in the settings. Any new or renamed files in this folder trigger the plugin’s processing workflow.
File Detection: When a new or renamed file is detected, the plugin identifies the file type. For example, it can distinguish between markdown, audio, and image files.
File Transformation: Depending on the file type, the plugin processes the file using AI:
For example, if you add an image file to the watched folder, the plugin will generate a description of the image, create a markdown file with that description and a link to the image, give the markdown file a human-readable name, and move it to the appropriate folder in your Obsidian vault.
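In Obsidian-plugin terms, the watch-and-dispatch part looks roughly like this (a simplified sketch, not the actual plugin source; the AI calls themselves are omitted):

```ts
// Simplified sketch of the watch -> detect -> transform loop (illustrative only).
import { Plugin, TAbstractFile, TFile } from "obsidian";

const WATCHED_FOLDER = "Inbox"; // configurable in the real plugin's settings

export default class OrganizerSketch extends Plugin {
  async onload() {
    // 1. Folder monitoring: react to files created in (or renamed into) the watched folder.
    this.registerEvent(this.app.vault.on("create", (f) => this.maybeProcess(f)));
    this.registerEvent(this.app.vault.on("rename", (f) => this.maybeProcess(f)));
  }

  async maybeProcess(f: TAbstractFile) {
    if (!(f instanceof TFile) || !f.path.startsWith(WATCHED_FOLDER + "/")) return;

    // 2. File detection: branch on the file type.
    if (["png", "jpg", "jpeg"].includes(f.extension)) {
      // 3. Transformation (image): describe the image, create a markdown note that links
      //    to it, give the note a human-readable name, and move it to the right folder.
    } else if (["mp3", "m4a", "wav"].includes(f.extension)) {
      // Audio: transcribe first, then treat the transcript like a markdown note.
    } else if (f.extension === "md") {
      // Markdown: suggest tags, a folder, and a better name.
    }
  }
}
```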
finally available without an account
tested w/ codegemma works like a charm
would love to hear your feedback
/Volumes/µ/fs/resilio/Learning & Labs/AI/file-organizer-2000/app on master [!]
1 % npm run build
> [email protected] build
> npm run db:migrate && next build
> [email protected] db:migrate
> dotenv -c -- drizzle-kit migrate
drizzle-kit: v0.22.7
drizzle-orm: v0.25.3
Please install latest version of drizzle-orm
now what? installing drizzle-orm doesn’t seem to fix anything, and then it walks me down an audit and patching procedure not documented in your repo.
edit:
here’s my .env:
MODEL_FOLDERS=llama3
MODEL_RELATIONSHIPS=llama3
MODEL_TAGGING=llama3
MODEL_NAME=llama3
MODEL_TEXT=llama3
MODEL_VISION=llava-llama3
Both models are usable and I use them all the time.
Where have all the Ollama settings gone?
How do you set up the plugin with Ollama?
hey @benjaminshafii !
I just tried everything to get the connection with Ollama running, without much success. Ollama itself is running fine with a custom URL set in my environment variables. OLLAMA_ORIGINS is set, as seen here: File Organizer 2000 - Cybersader Wiki
I had to re-configure the Next.js port to :3666 instead of :3000 because it was in use by another app I was working on.
I tried to specify a custom model using ollama as well:
import { anthropic } from "@ai-sdk/anthropic";
import { openai } from "@ai-sdk/openai";
import { ollama } from "ollama-ai-provider";

const models = {
  "gpt-4o": openai("gpt-4o"),
  "gpt-4o-2024-08-06": openai("gpt-4o-2024-08-06"),
  "gpt-4o-mini": openai("gpt-4o-mini"),
  "claude-3-5-sonnet-20240620": anthropic("claude-3-5-sonnet-20240620"),
  "llama3.2:3b-instruct-q8_0": ollama("llama3.2:3b-instruct-q8_0"),
};

export const getModel = (name: string) => {
  if (!models[name]) {
    console.log(`Model ${name} not found`);
    console.log(`Defaulting to gpt-4o-2024-08-06`);
    return models["gpt-4o-2024-08-06"];
  }
  console.log(`Using model ${name}`);
  return models[name];
};
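If I understand the wiring right (this is a guess, not code from your repo), the MODEL_* env values end up being passed into getModel and then into generateText, something like:

```ts
// My guess at how the env vars feed into getModel (not the repo's actual code).
import { generateText } from "ai";

const textModel = getModel(process.env.MODEL_TEXT ?? "gpt-4o-2024-08-06");

const { text } = await generateText({
  model: textModel,
  prompt: "Suggest tags and a folder for this note: ...",
});
console.log(text);
```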
the environment file .env looks as follows:
# Uncomment lines below for a fully local setup
# MODEL_FOLDERS=llama3
# MODEL_RELATIONSHIPS=llama3
# MODEL_TAGGING=llama3
# MODEL_NAME=llama3
# MODEL_TEXT=llama3
# MODEL_VISION=llava-llama3
MODEL_FOLDERS=llama3.2:3b-instruct-q8_0
MODEL_RELATIONSHIPS=llama3.2:3b-instruct-q8_0
MODEL_TAGGING=llama3.2:3b-instruct-q8_0
MODEL_NAME=llama3.2:3b-instruct-q8_0
MODEL_TEXT=llama3.2:3b-instruct-q8_0
MODEL_VISION=llava-llama3:latest
OLLAMA_API_URL=http://192.168.0.222:11434/
### The fastest way to get started is just to add your OpenAI API key to the .env file.
OPENAI_API_KEY=
PORT=3666
USE_OLLAMA=true
to no avail
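One thing I couldn’t verify (a guess based on the ollama-ai-provider README, not on your repo): the plain ollama(...) helper seems to default to localhost, so a remote OLLAMA_API_URL may need createOllama with an explicit baseURL, roughly:

```ts
// Guess: point the provider at the remote Ollama host instead of the default localhost.
import { createOllama } from "ollama-ai-provider";

const ollama = createOllama({
  // assuming the provider expects the /api prefix on its base URL
  baseURL: (process.env.OLLAMA_API_URL ?? "http://localhost:11434/") + "api",
});

const models = {
  "llama3.2:3b-instruct-q8_0": ollama("llama3.2:3b-instruct-q8_0"),
  "llava-llama3:latest": ollama("llava-llama3:latest"),
};
```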
all options that are visible in this official screenshot don’t exist on my end
and it keeps begging for an openai key
please help and thanks for your efforts in advance!