I see that “mobile quick capture” is on the roadmap, but I was wondering if someone has already tried getting there with a Tasker (or similar) automation?
The idea: speech-to-text, ideally run through an LLM for pre-processing (similar to the Web Clipper prompt, though that step could also happen later on the desktop), then saved into a dedicated folder in the vault. Done.
Ideally this would work without even having to unlock the phone; I’d love to trigger it via, say, the “Assist” button or a long-press on my headset.
The building blocks all seem to be there, so I assume it’d be achievable with some sort of quick prototyping framework. Before I dig in further: has someone perhaps already done this?
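For what it’s worth, the “add to vault” half of the pipeline is simple enough to sketch. The snippet below is only a rough illustration, not a tested Tasker setup: it assumes the vault lives at `~/vault` with an `inbox` folder, and that something like Termux (with the termux-tasker plugin) passes the speech-to-text result in as an argument. The LLM pre-processing is left as a deliberate no-op stub.

```python
#!/usr/bin/env python3
"""Minimal capture sketch: write a transcribed note into an Obsidian vault.

Assumptions (adjust to taste): vault at ~/vault, captures go to an
"inbox" folder, and the transcript arrives as a CLI argument, e.g.
invoked from Tasker via Termux's "Run Shell" action.
"""
import sys
import datetime
import pathlib

# Assumed vault location -- not from the original post, adjust as needed.
VAULT_INBOX = pathlib.Path.home() / "vault" / "inbox"


def preprocess(text: str) -> str:
    """Placeholder for LLM pre-processing (cleanup, titling, tagging).

    Left as a plain no-op here; this could call any LLM API, or be
    skipped entirely and done later on the desktop.
    """
    return text.strip()


def capture(text: str) -> pathlib.Path:
    """Write the note as a timestamped Markdown file in the inbox folder."""
    VAULT_INBOX.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
    note = VAULT_INBOX / f"Capture {stamp}.md"
    note.write_text(preprocess(text) + "\n", encoding="utf-8")
    return note


if __name__ == "__main__":
    # Take the transcript from the command line, falling back to stdin.
    capture(" ".join(sys.argv[1:]) or sys.stdin.read())
```

Obsidian (desktop or mobile) picks the new file up on its next vault scan, so no plugin is strictly required for this part; the hard bits remain the lock-screen trigger and the speech-to-text hand-off.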
It seems the website is dead: the docs are broken, the download link too, and the last activity by the sole author on GitHub was in early October ’24 …