Getting rid of QuickAdd with Templater
Despite the impression this title might give, I’ve been a big fan of QuickAdd. It’s been one of my essential plugins for almost two years. I think it’s very well made, keeps improving all the time, and was a big help to me when I began this Obsidian journey. But as I kept accumulating more and more QuickAdd actions, it became difficult to keep track of what they did, and it bugged me that many of these actions used an inconsistent mix of Templater and QuickAdd logic.
Then one day it occurred to me: everything I did in QuickAdd, I could do just as well in Templater. (It’s been a habit of mine to periodically browse through my installed plugins and hunt for redundancy.)
I then proceeded to replace all of my QuickAdd actions with templates, and after a day or two I had the immense satisfaction of uninstalling QuickAdd with (almost) no regrets.
Here is a quick summary of the main principles I applied. This will cover templates, captures, macros, and a replacement for the AI Assistant feature. Maybe it will help somebody else do the same, or maybe you will spot something I didn’t and make my templates more efficient.
Templates
This is the easy part, of course. You just replace `{{VALUE:QuickAdd placeholders}}` with `<% Templater variables %>` and prompt for values if necessary, like in this small template for dream reports:
```javascript
<%*
// add a "type" frontmatter field, then file the note away
const tf = tp.file.find_tfile(tp.file.title);
await app.vault.modify(tf, "---\ntype: rêve\n---\n" + tp.file.content);
await tp.file.move("/notes/rêves/" + tp.file.title);
-%>
```
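The `app.vault.modify` call above just prepends a frontmatter block to the existing content. As a plain string operation (the helper name is mine, not part of the template), that step amounts to:

```javascript
// Hypothetical helper: prepend a YAML frontmatter block to note content.
function prependFrontmatter(content, fields) {
  const yaml = Object.entries(fields)
    .map(([key, value]) => `${key}: ${value}`)
    .join("\n");
  return `---\n${yaml}\n---\n${content}`;
}

// e.g. prependFrontmatter("I was flying.", { type: "rêve" })
// → "---\ntype: rêve\n---\nI was flying."
```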
Pretty straightforward, and a lot of documentation is already available on the subject. My QuickAdd templates already had some Templater logic in them, so it was just a matter of harmonization.
Captures
The logic for captures is to define a target note, prompt for some content, and then append this content to that note. Here’s one way to do it:
```javascript
<%*
// target note
const targetPath = "listes/citations.md";
const file = app.vault.getAbstractFileByPath(targetPath);
let data = await app.vault.read(file);
// captured content
const citation = await tp.system.prompt("Citation");
const auteur = await tp.system.prompt("Auteur");
// adding this to that
data += `\n\n>[!QUOTE] ${auteur}\n${citation}`;
await app.vault.modify(file, data);
new Notice("Citation ajoutée.");
%>
```
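One detail worth noting: if the prompted citation contains a line break, only its first line ends up inside the callout. A small helper (my own addition, not part of the original template) can prefix every line so the callout stays intact:

```javascript
// Hypothetical helper: format a (possibly multi-line) quote as an
// Obsidian callout block, prefixing each continuation line with "> "
// so the callout doesn't break on line breaks.
function formatCitation(auteur, citation) {
  const body = citation
    .split("\n")
    .map((line) => `> ${line}`)
    .join("\n");
  return `\n\n>[!QUOTE] ${auteur}\n${body}`;
}
```

You would then replace the template literal in the capture above with `data += formatCitation(auteur, citation);`.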
Macros
I had a bunch of QuickAdd macros that were easily replaced, like transclude the currently active note in today’s daily note, or pick a project to associate a note to and update frontmatter values accordingly. The only difficulty I came across was when I needed to change the behavior of a template depending on the context where it was called.
For this, I didn’t quite find a way to achieve exactly the same result as with QuickAdd – hence the “almost” in my introduction. However I did find a solution I’m happy with.
Passing values
In some cases, my QuickAdd actions would pass values from the source note and use them in the capture. For instance, I had a capture action for new tasks that took an argument, and on my project dashboards, an “add new task” button that linked to a URI through which the project’s name was passed.
I don’t know how to pass values when invoking a template. It would be really cool if we could, for example, call a template and define a title at the same time, with something like `obsidian://advanced-uri?vault=MyVault&commandid=templater-obsidian:create-_system/templates/template.md&title="This is a title for my new note!"`. But I didn’t find a way to do this in the documentation or anywhere else.
The closest I could get was a Templater user function that retrieves frontmatter values from the previously active note – the note that was active when the template was called. Like so:
```javascript
module.exports = async (tp) => {
  const dv = app.plugins.plugins["dataview"].api;
  // frontmatter of the note that was active when the template was called
  const activeFile = tp.config.active_file;
  const meta = app.metadataCache.getFileCache(activeFile);
  let project = meta?.frontmatter?.project || null;
  const type = meta?.frontmatter?.type || null;
  // list the active project dashboards with Dataview
  const active_projects = dv.pages('"travail" or "vraie vie"')
    .where(p => p.type === "dashboard" && p.status !== "terminé");
  const choices = active_projects.values.map(p => p.project).filter(Boolean);
  const uniqueChoices = [...new Set(choices)];
  // fall back to a suggester if we weren't on a dashboard
  if (!project || type !== "dashboard") {
    project = await tp.system.suggester(uniqueChoices, uniqueChoices, true, "Projet");
  }
  // camel-case the project name so it can be used as a tag
  const capitalizeNames = (name) =>
    name.split(' ').map((n, index) => index === 0 ? n : n.replace(n[0], n[0].toUpperCase())).join('');
  const tagName = capitalizeNames(project);
  const dashboard = active_projects.where(p => p.project === project).first();
  const category = dashboard?.category || "travail"; // default to "travail"
  return {
    project,
    tagName,
    category
  };
};
```
The function looks for frontmatter keys in the source note, and if it doesn’t find them, falls back to a suggester for the user to pick a value.
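The `capitalizeNames` step is just a camel-casing of the (French, multi-word) project name into a tag-friendly form, and it’s easy to test in isolation:

```javascript
// Camel-case a multi-word project name into a tag-friendly form
// (same logic as capitalizeNames in the user function above).
const capitalizeNames = (name) =>
  name
    .split(" ")
    .map((n, index) => (index === 0 ? n : n.replace(n[0], n[0].toUpperCase())))
    .join("");

// e.g. capitalizeNames("mon grand projet") → "monGrandProjet"
```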
We call it from a template and then use the returned values, like in the two examples below.
New note for a specific project
```javascript
---
<%*
// here we call the user function
const projectData = await tp.user.getProjectData(tp);
// name our new note
const titre = await tp.system.prompt("Titre", null, true);
// determine its future file path
const baseFolder = projectData.category === "vraie vie" ? "vraie vie" : "travail";
const targetPath = `/${baseFolder}/${projectData.project}/${titre}`;
// move the note
await tp.file.move(targetPath);
-%>
// and here is how we access the values returned by the user function:
project: <% projectData.project %>
tagName: <% projectData.tagName %>
---
```
New task for a specific project
Using the same user function, we can retrieve the tag of a project and use it to create a new task with the Tasks plugin API:
```javascript
<%*
// this will be for our daily note
const year = tp.date.now("YYYY");
const date = tp.date.now("YYYY-MM-DD");
const dailyNotePath = `journal/${year}/${date}.md`;
let file = app.vault.getAbstractFileByPath(dailyNotePath);
if (!file) {
  // create today's note from the "daily" template
  await tp.file.create_new(tp.file.find_tfile("daily"), date, false, `journal/${year}`);
  file = app.vault.getAbstractFileByPath(dailyNotePath);
}
let data = await app.vault.read(file);
// were we on a dashboard when the template was invoked? if so, tagName won't be null:
const dashboard = await tp.user.getProjectData(tp);
const taskTag = dashboard.tagName || "";
// capture the task with the Tasks plugin modal and insert the tag
const tasksApi = app.plugins.plugins['obsidian-tasks-plugin'].apiV1;
let taskLine = await tasksApi.createTaskLineModal();
if (taskTag) {
  taskLine = taskLine.replace("#task", `#task #${taskTag}`);
}
// append to our daily note under a specific header
const header = "## Journal";
if (data.includes(header)) {
  data += `\n\n${taskLine}`;
} else {
  data += `\n\n${header}\n\n${taskLine}`;
}
// write the result
await app.vault.modify(file, data);
new Notice("Tâche ajoutée à la note du jour.");
%>
```
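The header handling at the end is simple enough to factor out and test on its own (the helper name is mine): append the task line, creating the header first if the note doesn’t contain it yet.

```javascript
// Hypothetical helper: append a line to note content, adding the
// header first if it's missing (same logic as the template above).
function appendUnderHeader(data, header, line) {
  if (data.includes(header)) {
    return data + `\n\n${line}`;
  }
  return data + `\n\n${header}\n\n${line}`;
}
```

Note that, like the original, this appends at the end of the note rather than directly under the header – which is fine as long as “## Journal” is the last section of the daily note.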
Books and media notes
I’ve been very well served by QuickAdd’s creator’s Movies & Series Script, which retrieves your media’s metadata and passes it to QuickAdd as values to be used in a template. So I just translated it into Templater logic (I hope they won’t mind).
```javascript
module.exports = async (tp) => {
  const API_URL = "https://www.omdbapi.com/";
  // API keys are stored in a JSON file
  let settings = await app.vault.adapter.read("_system/data.json");
  settings = JSON.parse(settings);
  const API_KEY = settings.omdb_api_key;
  // prompt for title
  const query = await tp.system.prompt("Titre ou n° IMDB: ");
  let selectedShow;
  if (/^tt\d+$/.test(query)) {
    selectedShow = await getByImdbId(query, API_URL, API_KEY);
  } else {
    const results = await getByQuery(query, API_URL, API_KEY);
    if (!results.length) return;
    const choice = await tp.system.suggester(
      results.map(formatTitleForSuggestion),
      results
    );
    selectedShow = await getByImdbId(choice.imdbID, API_URL, API_KEY);
  }
  if (!selectedShow) return;
  // store the retrieved movie data
  tp.user.movieData = {
    Title: selectedShow.Title || "",
    Runtime: selectedShow.Runtime || "",
    Year: selectedShow.Year || "",
    // etc...
  };
  return tp.user.movieData;
};

// then the functions for the API requests are pretty much the same as in the original script...
```
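For reference, `formatTitleForSuggestion` can be as simple as this – my own sketch, loosely modeled on the original script, which labels each OMDb search result for the suggester:

```javascript
// Hypothetical sketch of formatTitleForSuggestion: label an OMDb
// search result for the suggester, e.g. "🎬 Ratatouille (2007)".
const formatTitleForSuggestion = (item) =>
  `${item.Type === "movie" ? "🎬" : "📺"} ${item.Title} (${item.Year})`;
```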
Then I made my movies and series templates (here in a simplified version):
`````
---
<%*
// call the user function
const movieData = await tp.user.OMDbImport(tp);
// move the note to the right place
await tp.file.move(`vraie vie/films/${movieData.Title} (${movieData.Year})`);
-%>
// and here we go
title: <% movieData.Title %>
director: <% movieData.Director %>
// ...a bunch of other values...
---
# <% movieData.Title %> (<% movieData.Year %>)
// fancy table using the Columns plugin
````col
```col-md
flexGrow=1
textAlign=end
===
![](<% movieData.Poster %>)
[IMDB](https://www.imdb.com/title/<% movieData.imdbID %>/)
```
```col-md
flexGrow=6
===
<% movieData.Plot %>
<% movieData.Runtime %>
Avec <% movieData.Actors %>.
**Genres :** <% movieData.Genre %>
**Date de sortie :** <% movieData.Released %>
```
````
`````
Then the same for books, using this script by forum user JamesKF. Adapting the script into a Templater function and setting up your own template is pretty straightforward from here.
At this point I’ve replaced pretty much all of my QuickAdd actions with templates. But there is still one useful feature I might miss. So, last but not least…
AI Assistant
I’ve tried a variety of AI plugins, but in the end I always came back to QuickAdd’s built-in AI assistant for its simplicity and versatility. The only feature I missed was the ability to stream the model’s response directly into my note. I think Text Generator does this, and probably some other plugins by now.
But why not build a Templater user function that would do exactly this: make a request to OpenAI’s API and stream the response directly in the note? That would be even better than the built-in QuickAdd feature.
So, here it is: a Templater user function that calls the OpenAI API. It takes the following arguments: a multiline input (the user prompt), a system prompt (with a fallback value), a model, a maximum token count, a temperature, and whether to stream the response or not. It works here for OpenAI models, but I suppose it could easily be adapted for other LLM providers or local solutions.
```javascript
module.exports = async (tp, input, systemPrompt = "", model = "gpt-4o-mini", maxTokens = 300, temperature = 0.7, stream = false) => {
  // load API key
  const settings = await app.vault.adapter.read("_system/data.json");
  const parsedSettings = JSON.parse(settings);
  const API_KEY = parsedSettings.openai_api_key;
  // set a default system prompt for when none is provided
  if (!systemPrompt) {
    systemPrompt = `You are a helpful assistant in the context of an Obsidian Vault. Respond clearly and concisely. Your response will be directly appended to a note in Obsidian, so you can use markdown formatting in your response, use tables, or leverage Obsidian features such as search or Dataview queries.`;
  }
  const apiUrl = "https://api.openai.com/v1/chat/completions";
  const headers = {
    "Content-Type": "application/json",
    "Authorization": `Bearer ${API_KEY}`,
  };
  const body = JSON.stringify({
    model: model,
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: input }
    ],
    temperature: temperature,
    max_tokens: maxTokens,
    stream: stream
  });
  try {
    new Notice("Requête envoyée à OpenAI...");
    console.log("🔵 Sending request to OpenAI:", input);
    const response = await fetch(apiUrl, { method: "POST", headers, body });
    if (!stream) {
      // regular response handling
      const jsonResponse = await response.json();
      if (response.ok && jsonResponse.choices) {
        const output = jsonResponse.choices[0].message.content.trim();
        console.log("🟢 OpenAI Response:", output);
        await tp.file.cursor_append(output);
        return output;
      } else {
        console.warn("⚠️ OpenAI response error:", jsonResponse);
        return "⚠️ API error from OpenAI";
      }
    } else {
      // streaming response handling
      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      let streamedResponse = "";
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        const chunk = decoder.decode(value, { stream: true }).trim();
        const lines = chunk.split("\n");
        for (const line of lines) {
          if (line.startsWith("data: ")) {
            const jsonString = line.substring(6).trim();
            if (jsonString === "[DONE]") return streamedResponse.trim(); // ensure the function returns the full response
            try {
              const json = JSON.parse(jsonString);
              if (json.choices && json.choices[0].delta.content) {
                const chunkText = json.choices[0].delta.content;
                streamedResponse += chunkText;
                // append each chunk directly into the note
                await tp.file.cursor_append(chunkText);
              }
            } catch (error) {
              console.error("JSON parsing error:", error, "Raw chunk:", jsonString);
            }
          }
        }
      }
      return streamedResponse.trim();
    }
  } catch (error) {
    console.error("❌ OpenAI API Error:", error);
    return "⚠️ Unable to contact OpenAI";
  }
};
```
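The streaming branch is the only tricky part: OpenAI sends Server-Sent Events, i.e. lines of the form `data: {json}`, ending with `data: [DONE]`. The extraction logic can be isolated and tested as a pure function (the helper name is mine):

```javascript
// Hypothetical helper: extract the text deltas from one SSE chunk of
// an OpenAI streaming response (same parsing as the function above).
function extractDeltas(chunk) {
  const texts = [];
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const jsonString = line.substring(6).trim();
    if (jsonString === "[DONE]") break;
    try {
      const json = JSON.parse(jsonString);
      const content = json.choices?.[0]?.delta?.content;
      if (content) texts.push(content);
    } catch (error) {
      // ignore malformed lines
    }
  }
  return texts.join("");
}
```

One caveat the try/catch papers over in both versions: a JSON event can be split across two network chunks, in which case its content is silently dropped. A more robust version would buffer incomplete lines between reads.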
This function can then be used in a variety of templates. For example here is a simple template that inserts a callout with the question and the LLM’s answer:
```javascript
<%*
// prompt for a multiline question (null default, throw on cancel, multiline)
const input = await tp.system.prompt("Question", null, true, true);
await tp.file.cursor_append(`>[!QUESTION] ${input}\n> `);
await tp.user.callOpenAI(
  tp,
  input,
  "Respond clearly and concisely.",
  "gpt-4o-mini",
  300,
  0.7,
  true
);
-%>
```
Or here is a part of another template where I use the same function to define an icon for a newly created project:
```javascript
<%*
let bannerIcon = '📂'; // fallback emoji
new Notice("Envoi d'une requête à OpenAI pour l'icône...");
// the project name is the user input; the instruction is the system prompt
bannerIcon = await tp.user.callOpenAI(
  tp,
  projectName,
  "Please respond with a single emoji to best illustrate the project name you were given. It must be a single emoji, without any other character. (The project name is in French.)",
  "gpt-4o-mini",
  1,
  0.5,
  false
);
-%>
```
This is in fact very similar to the (then expensive) inline AI feature I saw in Notion when I tried it two years ago, before I decided to go with Obsidian. You can call the function directly while you’re typing if you’re using the native Slash commands plugin. Or you can set up templates that pass more information to the model, such as the current selection, or the current note, or a bunch of other notes, or all notes with a given tag… I think there’s a lot to explore before needing to install more plugins.
Maybe everybody’s already doing this and I’m just reinventing lukewarm water… But if you’re in the same situation I was in before I made this, you will probably find it useful!