I hope there are measures in place that block these plugins from being accepted and deter their applicants from submitting “material” they know nothing about and have never tested thoroughly.
The rise of “vibe-coded”, AI-assisted, or fully AI-generated development is a disruptive force, and your concern is completely valid. A flood of low-quality, untested, and potentially insecure plugins would overwhelm the review team and erode user trust.
However, there is an equally important consideration. An outright ban on AI-generated code would be a form of gatekeeping that punishes the very people who stand to benefit most from these new tools: passionate users who have great ideas but lack the formal training to implement them.
I don’t think the solution is a simple “block” or “allow.”
I mean, what about those who don’t have coding experience but want to turn their ideas into a plugin? Those who actually built them, tested them, and showcased them on the forum to get beta testers before applying for review?
On the flip side of that coin, however, is the equally critical issue of trust. The Obsidian plugin ecosystem thrives because users trust that the tools are generally safe, stable, and made by people who are, at some level, accountable for their work.
What happens to that trust when the ecosystem is flooded with plugins that were ‘vibe-checked’ but never truly engineered? A single, widely-used plugin with a subtle, AI-generated security flaw could do immense damage. While we want to open the door to new ideas, how do we do that without inadvertently weakening the foundation of trust that makes the entire community possible?
Maybe add labels to community plugins to let users know which ones are AI-assisted. When a user submits a plugin for review, there could be an option to explicitly state whether AI was used.
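As a rough illustration of the labeling idea, here is a TypeScript sketch of what an opt-in disclosure field could look like. Note that `aiDisclosure` is NOT part of Obsidian’s real manifest schema; it is a hypothetical field invented for this example, and the badge logic is just one way the catalog UI could surface it.

```typescript
// Hypothetical sketch only: "aiDisclosure" does not exist in
// Obsidian's actual manifest.json schema. It illustrates how an
// opt-in AI-usage declaration at submission time might be modeled.
interface PluginManifest {
  id: string;
  name: string;
  version: string;
  minAppVersion: string;
  description: string;
  author: string;
  // Hypothetical disclosure field, declared by the author at submission.
  aiDisclosure?: "none" | "assisted" | "generated";
}

const example: PluginManifest = {
  id: "sample-plugin",
  name: "Sample Plugin",
  version: "1.0.0",
  minAppVersion: "1.4.0",
  description: "Demo of a hypothetical AI-disclosure field.",
  author: "Example Author",
  aiDisclosure: "assisted",
};

// The community catalog could then render a badge from the declaration.
function badgeFor(manifest: PluginManifest): string {
  switch (manifest.aiDisclosure) {
    case "generated":
      return "AI-generated";
    case "assisted":
      return "AI-assisted";
    default:
      return "";
  }
}

console.log(badgeFor(example)); // prints "AI-assisted"
```

The point of making the field optional and self-declared is that it adds transparency without requiring reviewers to detect AI usage themselves, which is likely impossible in practice.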
Couldn’t agree more. Tagging AI-generated plugins is the minimum the Obsidian dev team should do to protect users from installing low-grade code projects into Obsidian.
Plugins are very valuable for personalizing our user experience and improving our workflow with Obsidian, but AI-produced plugins need stricter rules.
My suggestion:

- a peer review to pass
- a label, if they pass the peer review
Peer review
Ideally, authors should post their GitHub links in the plugins section of our forum here, explaining what their plugin does and also what it doesn’t do, so we can understand whether a plugin respects users’ privacy. Finally, it should be possible to set up a poll to give users a way to vote on these AI-generated plugins.
I don’t want to install any of these AI generated plugins myself, but that’s just me.
All plugins go through the same process. Vibe-coded or not.
I also don’t see how vibe-coded plugins pose more of a security threat than non-vibe-coded ones.
With all due respect.
I didn’t make up the term ‘vibe code’. Actually, I don’t much care who coined it, what it actually means, etc.
So we should create a new title and go by that: The Idiot’s Vibe Code, for those who have no clue what’s going on. I don’t think I need to enumerate the common AI issues: duplicate functions, iffy Obsidian API usage, etc.
Security issues: index corruption, stale states; you all know better than I do.
I’m gonna go ahead and remove my uploads from the thread in question and have fun with what the person submitted.
In lak’ech
I still don’t understand what is specific about vibe-coded plugins that makes them unsafe. There are plenty of human-coded plugins that are low quality and maybe even unsafe.
I can’t find it right now, but I think there was another thread complaining that plugins are reviewed only once, at the beginning. That’s a valid point.
The crux of the matter is that a fraction of the userbase would like to have strong guarantees that all plugins are safe, nice to use, performant, etc. I get it. However, doing so would require a fundamental shift in the way plugins are accepted (and are allowed to remain) in the directory (=> strong gate-keeping). We are not sure we want to go in that direction.
I don’t think peer review is the answer because: 1) very few developers actually want to do that, and do it for free; 2) we can’t trust their results, because they are not trained to review plugins the way the team members are.