Calculations and Scripts Execution in Your Notes - Interactivity Plugin Release

Hi there!
I created the Interactivity plugin because I really missed the math and scripting functionality in my notes and thought others might benefit from it too!

Sometimes you need to compute numbers or access data while writing your notes. It’s handy to do this without leaving the Obsidian workspace, using your favorite tools like Python, Perl, Node.js, or others. For example, if you need to quickly calculate a project’s budget while taking notes, you can type the numbers and hit Enter in your Obsidian note to execute the code in the desired REPL:

## Mike's rate is $120. Thus, it will cost us:
@120*8*21*12+8000
249920
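The arithmetic checks out; as a plain-Python sanity check (the breakdown into rate, hours, working days, and months is my reading of the example):

```python
# Rate * hours/day * working days/month * months + fixed costs,
# as in the budget line above (the breakdown is my interpretation).
rate, hours, days, months, fixed = 120, 8, 21, 12, 8000
print(rate * hours * days * months + fixed)  # 249920
```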

This plugin allows you to run shell commands and scripts directly within your notes, providing their output right alongside your written content, making your note-taking process more dynamic and interactive. By default, it supports running JavaScript, but you can also configure it to run any other shell commands.

Python Modules Collection:
My favorite daily tool is Python. This plugin includes several essential modules that enhance productivity while working in Obsidian.

  • chat.py: integrates ChatGPT directly into your notes.
  • tables.py: imports Excel and CSV tables into your notes.
  • plots.py: embeds matplotlib plots directly into your notes for quick visual data representation.
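
Inside a note, these modules are invoked with the trigger character (@ by default), along the lines of the following. This is illustrative only; the exact function names and signatures are documented in the repo (excel_table and plot also appear later in this thread):

```
@excel_table('path/to/table.xlsx')
@plot(x, y)
```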

Here’s a demo of how they work with the plugin:
[demo GIF]

Check out the GitHub repository for detailed instructions, examples, and source code.

I look forward to your feedback and suggestions. Feel free to open issues on GitHub or discuss on this thread.


I suggest a more descriptive name — “Interactivity” conveys nothing about the plugin. I haven’t read the description in detail, but perhaps something like “Calculations and Scripts” (from the title of your announcement post) would work.


Thank you for this suggestion. The plugin is now called ‘Interactivity: Calculations and Scripts’ for better clarity.


It’s an excellent plugin, but I have a few questions. It seems that plot.py, chat.py, and table.py aren’t functioning properly for me. I’ve noticed there are data.json, main.js, manifest.json, and styles.css files in the interactivity folder. I’ve downloaded your py_modules and py_manager.py files and added them to that folder, but it’s still not working. Could you provide some details?

Here is the result in Obsidian:
@x=[100,200,300,400]
@y=[1,2,3,4]
@print(x)
[100, 200, 300, 400]
@plot(x,y)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'plot' is not defined. Did you mean: 'float'?

I’m glad that you liked the plugin! About the issue, please check that you have configured the following setting: Shell CLI Arguments

With this setting, you should place the python modules in the py_modules directory, so the path would look like this: ./your_vault/.obsidian/plugins/interactivity/py_modules/*.py.

Thanks for the reply, but I still don’t understand what to do.
What exactly should I enter in the Shell CLI Arguments?
-iq
C:\Obsidian\.obsidian\plugins\interactivity\py_modules/tables.py
C:\Obsidian\.obsidian\plugins\interactivity\py_modules/plots.py
C:\Obsidian\.obsidian\plugins\interactivity\py_modules/chats.py

Is this correct?

No worries, let me explain how it works.
I assume that you have set up Python and have all the necessary Python files in the py_modules directory and py_manager.py in the plugin’s root directory. Next, all you need to do is configure the Shell CLI Arguments setting with the value specified on the GitHub page:

-iq
##plugin##py_manager.py

The plugin will automatically replace ##plugin## with the actual path to its directory. After this, everything should work.
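
A trivial sketch of the substitution being described (the directory path below is only illustrative):

```python
# The plugin swaps ##plugin## for its own directory before launching
# the shell; the path here is an illustrative example, not a real one.
args = "-iq\n##plugin##py_manager.py"
plugin_dir = "/path/to/vault/.obsidian/plugins/interactivity/"
print(args.replace("##plugin##", plugin_dir))
```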


Thank you so much for your kind response. The problem is resolved and everything is working well.


Agreed. I just accidentally installed it, thinking it might interact with my tables and things written in my notes out of the box.

Hi, just started using this plugin for making notes on math and it’s working great. But, is there a way to make it so that the “@2+2” is deleted once I press enter, and it just replaces the line with “4”? If not, could this feature be added in the future? Thanks.

A very underrated plugin, unknown to many forum users.

The plugin’s Python script manager reads the user-added custom Python scripts, which are triggered via the plugin’s configured trigger character, and the results are printed straight into the markdown editor, with no Execute or Run buttons like I’ve seen in other plugins.

Bit tricky to set up for the first-time user, I admit.
But if they persist, it pays dividends.

Here I’m leveraging AnyTXT Searcher’s index of 140 GB worth of PDFs, mobis, etc. via the program’s API (here I got too many results, so the script decides that no context will be given):
[screen recording]

The text expanding at the start is done with Typing Transformer (see below*), so I need to add search_term only.
At the end, I show the trigger characters used for AI chat and my custom functions.

The highlights are from the Dynamic Highlights plugin, which highlights English texts via a regex I picked up here on the forum (I am bilingual and bicephalic :slight_smile: ).

* Note:
I pressed Đ, which was expanded into the output I want:

'Đ|' -> 'Đ `add_regex|`'

On my Linux Mint, where a .pyenv environment was needed to install google-generativeai, I had a bit of a hard time figuring out how to get access to my Python and to get the scripts/functions to work.
I am enclosing a sample setup for the plugin’s data.json (for Linux).
You add your Python executable’s path in shellExec, and then you seem to need to add the absolute paths in shellParams and enviromentVariables, otherwise it may not work:

  "shellExec": "/home/<user>/.pyenv/versions/<versionnumber>/bin/python3",
  "shellParams": "-iq\n/path/to/Obsidian/vault/.config/plugins/interactivity/py_manager.py",
  "executeOnLoad": "",
  "notice": false,
  "decorateMultiline": false,
  "linesToSuppress": 1,
  "separatedShells": true,
  "prependOutput": "",
  "enviromentVariables": "PYTHONIOENCODING=utf8\nPYTHONPATH=/path/to/Obsidian/vault/.config/plugins/interactivity:/path/to/Obsidian/vault/.config/plugins/interactivity/py_modules",

I don’t use a setting for ‘executeOnLoad’, as I like to have everything set up script by script or through the tweaking of the py_manager.
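
As a sanity check for the PYTHONPATH idea above, here is a small self-contained sketch (throwaway temporary directory, made-up module name) showing that a directory listed in PYTHONPATH becomes importable by the spawned interpreter, which is how py_manager.py finds the modules in py_modules:

```python
import os
import subprocess
import sys
import tempfile

# Create a throwaway "py_modules"-style directory with one module in it.
with tempfile.TemporaryDirectory() as d:
    mod_dir = os.path.join(d, "py_modules")
    os.makedirs(mod_dir)
    with open(os.path.join(mod_dir, "hello.py"), "w") as f:
        f.write("GREETING = 'hi'\n")
    # Spawn Python the way the plugin does, with PYTHONPATH pointing at it.
    env = dict(os.environ, PYTHONPATH=mod_dir, PYTHONIOENCODING="utf8")
    out = subprocess.run(
        [sys.executable, "-c", "import hello; print(hello.GREETING)"],
        env=env, capture_output=True, text=True,
    )
    print(out.stdout.strip())  # hi
```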

Hoi hoi!
I’m trying to use Python and am testing it out on a simple display-Excel command:

@excel_table(path)

I just see “Interactivity is busy”.

I double-checked everything and I’m not sure what’s up :frowning: Please help.

I had the same problems and managed to solve them; see the next post.

I had the same problems and spent quite some time getting it to work. It seems this is a very powerful plugin, but the documentation is a bit scarce for beginners. The documentation on GitHub also has a few errors; e.g. it suggests we type ‘which python3’ where Windows users need ‘where python3’ instead.

Download the whole plugin folder from GitHub and replace the contents of YOURVAULT\.obsidian\plugins\interactivity with it. Chat now works.
I also tried importing an Excel file. It’s very picky: you need to escape the backslashes, e.g. @excel_table('E:\\Mijn Documenten\\test tabel.xlsx'). This first gives an error:
[ERROR] ImportError: Missing optional dependency 'openpyxl'. Use pip or conda to install openpyxl.
You can install that from a command window with pip install openpyxl.
Now it actually imports the table, but it looks nothing like the beautiful demo.

It took me quite a bit of fooling around to get it to work, but I love it now.
One big request: a way to add context to the chat (for example, the note’s content). More generally, how can I pass the content of the note to the Python script? Or better yet, the note’s filename, so the script can choose whether to include it.
A second, less important, request: a way to switch LLMs for the chat. Right now I just define different chat functions, and that’s OK, but it means lots of manual shuffling.

Script:

import re
import os
import platform
import openai
import logging
from typing import List, Optional, Tuple

# Example usage (Gemini also has section support with regex, of course)
# @groq_chat_with_files 'Filename' `start[\s\S]*?end` Tell me about this section
# @gemini_chat_with_files 'Filename' Tell me about this file

# Path configuration - MODIFY THESE TO YOUR PATHS
WINDOWS_BASE_PATH = r"C:\path\to\your\obsidian\vault"
LINUX_BASE_PATH = "/path/to/your/obsidian/vault"
MACOS_BASE_PATH = "/Users/yourusername/path/to/your/obsidian/vault"

# Determine base path based on operating system
if platform.system() == "Linux":
    BASE_PATH = LINUX_BASE_PATH
elif platform.system() == "Darwin":  # macOS
    BASE_PATH = MACOS_BASE_PATH
else:  # Windows or other
    BASE_PATH = WINDOWS_BASE_PATH

# Add the default system instruction
DEFAULT_SYSTEM_INSTRUCTION = """Use markdown but do NOT apply markdown code blocks."""

# Initialize global variables
__chat_messages = []

# API Keys - REPLACE WITH YOUR OWN KEYS
GROQ_API_KEY = "your_groq_api_key_here"
GEMINI_API_KEY = "your_gemini_api_key_here"

def create_groq_client():
    """Create and return a configured OpenAI client for Groq."""
    return openai.OpenAI(
        api_key=GROQ_API_KEY,
        base_url="https://api.groq.com/openai/v1"
    )

def create_gemini_client(model_name: str):
    """Create and return a configured Gemini client."""
    try:
        import google.generativeai as genai
        
        genai.configure(api_key=GEMINI_API_KEY)
        
        return genai.GenerativeModel(
            model_name=model_name,
            generation_config=genai.GenerationConfig(
                temperature=0.7,
                max_output_tokens=64000,
            ),
            safety_settings=[
                {
                    "category": "HARM_CATEGORY_HARASSMENT",
                    "threshold": "BLOCK_NONE",
                },
                {
                    "category": "HARM_CATEGORY_HATE_SPEECH",
                    "threshold": "BLOCK_NONE",
                },
                {
                    "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
                    "threshold": "BLOCK_NONE",
                },
                {
                    "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                    "threshold": "BLOCK_NONE",
                },
            ]
        )
    except ImportError:
        raise ImportError("Google Generative AI package not found. Please install it using: pip install google-generativeai")

def normalize_regex_pattern(pattern: str) -> str:
    r"""Normalize a regex pattern so both [\s\S]*? and [^]* styles work."""
    if not pattern:
        return pattern
    # JavaScript-style [^]* is not a valid Python regex; rewrite it.
    if '[^]*' in pattern:
        pattern = pattern.replace('[^]*', r'[\s\S]*?')
    elif '.*?' in pattern and r'[\s\S]*?' not in pattern:
        # Rewrite .*? so it also matches across newlines.
        pattern = pattern.replace('.*?', r'[\s\S]*?')
    return pattern

def parse_input(input_str: str) -> Tuple[str, Optional[str], Optional[str]]:
    """Parse input string to extract filename, pattern, and question."""
    try:
        # Handle both raw string format and regular format
        input_str = input_str.strip()
        
        # Clean up raw string artifacts from JSON mapping
        if input_str.startswith('r"""'):
            input_str = input_str.replace('r"""', '', 1).strip()
            if input_str.endswith('"""'):
                input_str = input_str[:-3].strip()
        
        # Now handle the actual quoted filename
        quoted_filename_match = re.match(r"^'([^']+)'\s*(.*)$", input_str)
        if not quoted_filename_match:
            raise ValueError("Filename must be wrapped in single quotes")
            
        filename = quoted_filename_match.group(1).strip()
        remaining = quoted_filename_match.group(2)

        # Process pattern and question
        if '`' in remaining:
            pattern_parts = remaining.split('`')
            pattern = pattern_parts[1]
            pattern = normalize_regex_pattern(pattern)
            question = '`'.join(pattern_parts[2:]).strip()
        else:
            pattern = None
            question = remaining.strip()

        return filename, pattern, question
        
    except Exception as e:
        print(f"Input parsing error: {str(e)}")
        raise

def find_file(base_path: str, filename: str) -> Optional[str]:
    """Find file recursively in the base directory and return its full path."""
    filename_with_ext = f"{filename}.md"
    for root, _, files in os.walk(base_path):
        if filename_with_ext in files:
            return os.path.join(root, filename_with_ext)
    return None

def read_file_content(filename: str, pattern: Optional[str] = None) -> str:
    """Read file content with optional regex pattern filtering."""
    try:
        filepath = find_file(BASE_PATH, filename)
        
        if not filepath:
            raise FileNotFoundError(f"File '{filename}.md' not found in your vault directory or its subdirectories.")
            
        with open(filepath, 'r', encoding='utf-8') as f:
            content = f.read()
            
        if pattern:
            try:
                match = re.search(pattern, content, re.DOTALL)
                if match:
                    return match.group(0)
                else:
                    raise ValueError(f"Pattern '{pattern}' not found in file content.")
            except re.error as e:
                raise ValueError(f"Invalid regex pattern: {str(e)}")
        
        return content
        
    except Exception as e:
        raise Exception(f"Error reading file: {str(e)}")

def groq_chat_with_files(input_str: str, system: Optional[str] = DEFAULT_SYSTEM_INSTRUCTION, 
                        save_context: bool = True, model: str = 'llama-3.3-70b-versatile') -> None:
    """Chat with file content using Groq."""
    global __chat_messages
    
    try:
        filename, pattern, question = parse_input(input_str)
        content = read_file_content(filename, pattern)
        
        client = create_groq_client()
            
        msg = []
        if system:
            msg.append({"role": "system", "content": system})
            
        if save_context:
            msg += __chat_messages
            
        context_msg = f"Content from file '{filename}':\n\n{content}\n\n"
        if question:
            context_msg += f"Question: {question}"
        
        msg.append({"role": "user", "content": context_msg})
        
        try:
            completion = client.chat.completions.create(model=model, messages=msg)
            response = completion.choices[0].message.content
            
            if save_context:
                __chat_messages += [
                    {"role": "user", "content": context_msg},
                    {"role": "assistant", "content": response}
                ]
                
            print(response + '\n')
            
        except Exception as e:
            if save_context and len(__chat_messages) > 2:
                del __chat_messages[-2:]
                return groq_chat_with_files(input_str, system, save_context, model)
            else:
                print(f"Chat API error: {str(e)}")
                
    except Exception as e:
        print(f"Error: {str(e)}")

def gemini_chat_with_files(input_str: str, system: Optional[str] = DEFAULT_SYSTEM_INSTRUCTION, 
                          save_context: bool = True, model: str = "gemini-2.0-flash-thinking-exp") -> None:
    """Chat with file content using Google's Generative AI (Gemini)."""
    global __chat_messages
    
    try:
        filename, pattern, question = parse_input(input_str)
        content = read_file_content(filename, pattern)
        
        # Create Gemini model instance
        gemini_model = create_gemini_client(model)
        
        # Prepare the prompt
        prompt_parts = []
        if system:
            prompt_parts.append(system + "\n\n")
            
        if save_context and __chat_messages:
            # Convert previous messages to a format Gemini can understand
            history = "\n\n".join([f"{'User' if msg['role'] == 'user' else 'Assistant'}: {msg['content']}" 
                                 for msg in __chat_messages])
            prompt_parts.append(history + "\n\n")
        
        context_msg = f"Content from file '{filename}':\n\n{content}\n\n"
        if question:
            context_msg += f"Question: {question}"
        
        prompt_parts.append(context_msg)
        
        try:
            # Generate response using Gemini
            response = gemini_model.generate_content("".join(prompt_parts))
            
            if save_context:
                __chat_messages += [
                    {"role": "user", "content": context_msg},
                    {"role": "assistant", "content": response.text}
                ]
                
            print(response.text + '\n')
            
        except Exception as e:
            if save_context and len(__chat_messages) > 2:
                del __chat_messages[-2:]
                return gemini_chat_with_files(input_str, system, save_context, model)
            else:
                print(f"Chat API error: {str(e)}")
                
    except Exception as e:
        print(f"Error: {str(e)}")

def clean_chat() -> None:
    """Clean chat history."""
    global __chat_messages
    __chat_messages = []
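
To make the input format concrete, here is a self-contained sketch of the quoted-filename parsing step that parse_input performs (the note title and question are made up):

```python
import re

# 'Filename' must sit in single quotes; an optional `regex` and a
# free-form question follow. This mirrors the parsing in parse_input above.
s = "'Research Notes' `## Review[\\s\\S]*?##` What are the key insights?"
m = re.match(r"^'([^']+)'\s*(.*)$", s)
filename, remaining = m.group(1), m.group(2)
pattern, question = None, remaining.strip()
if '`' in remaining:
    parts = remaining.split('`')
    pattern = parts[1]
    question = '`'.join(parts[2:]).strip()
print(filename)  # Research Notes
print(pattern)   # ## Review[\s\S]*?##
print(question)  # What are the key insights?
```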

The above script is untested, as I had to remove multiple lines relating to my own use case, languages, etc.
But it should work, and if it doesn’t, take it to an AI to customize it.

Guide is written by AI too.

Tip: The currently configured Groq model is weak; pick a better one for when Gemini is overloaded.

Tip 2: Create a Templater script that you can call while in a file, so it populates the current file’s basename and you don’t have to type it in each time:

<%*
const currentFile = app.workspace.getActiveFile();
tR += "@gemini_chat_with_files " + `'${currentFile.basename}'` + " \`start[\\s\\S]*?end\`" + " add_question_here";
_%>

Or just this, if you don’t want to use regex to match sections from large documents:

<%*
const currentFile = app.workspace.getActiveFile();
tR += "@gemini_chat_with_files " + `'${currentFile.basename}'` + " add_question_here";
_%>
  • The filename expects single quotes around it, as you can see.
  • You call the script’s Gemini function with @gemini_chat_with_files, so per the dev’s guide you need to add @gemini_chat_with_files -> gemini_chat_with_files(r\"\"\"##param##\"\"\") to the plugin’s config, and likewise for the groq_chat_with_files function or any other you cook up in other scripts.
    It is these function names in the Python scripts that you call.


Obsidian AI Interactivity Script Guide

This script allows you to use Groq and Gemini AI models to analyze and interact with your Obsidian notes directly through the Interactivity Plugin. You can query entire files or specific sections within them using regular expressions.

Setup Instructions

  1. Install the Interactivity plugin in Obsidian
  2. Copy the chat_with_gemini_and_groq.py file to your scripts folder for the Interactivity plugin
  3. Update the following in the script:
    • Set your Obsidian vault path for Windows, macOS, and Linux
    • Add your Groq API key (get one from groq.com)
    • Add your Gemini API key (get one from Google AI Studio)
  4. Install required Python packages:
    pip install openai google-generativeai
    

Using the Script

The script provides two main functions that you can use within Obsidian’s Interactivity plugin:

1. Using Groq

Basic syntax:

@groq_chat_with_files 'Filename' Your question here

With section selection using regex:

@groq_chat_with_files 'Filename' `regex_pattern` Your question here
  • Regex example: start[\s\S]*?end

2. Using Gemini

Basic syntax:

@gemini_chat_with_files 'Filename' Your question here

With section selection using regex:

@gemini_chat_with_files 'Filename' `regex_pattern` Your question here
  • Regex pattern for sections again: start[\s\S]*?end, e.g. The boss said[\s\S]*?not happy about it\.

3. Clear Chat History

To clear the conversation context:

@clean_chat

Examples

Example 1: Ask about an entire file

@groq_chat_with_files 'Project Ideas' Summarize the main project ideas in this file

Example 2: Ask about a specific section

@gemini_chat_with_files 'Research Notes' `## Literature Review[\s\S]*?##` What are the key insights from the literature review?

Regular Expression Tips

  • Use [\s\S]*? to match any content (including newlines) between two markers
  • For sections in Obsidian, you can match between headers with: ## Header1[\s\S]*?##
  • If you want to match from a header to the end of the file: ## Last Section[\s\S]*
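
A quick self-contained check of the header-to-header pattern (the sample note content is made up):

```python
import re

# [\s\S] matches any character including newlines, so no re.DOTALL is
# needed; the non-greedy *? stops at the next '##' header.
note = "## Intro\nhello\n## Literature Review\nkey insight\n## Methods\ndetails\n"
match = re.search(r"## Literature Review[\s\S]*?##", note)
print(match.group(0))
```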

Customizing AI Models

Changing Groq Models

You can change the Groq model by modifying the default parameter in the groq_chat_with_files function:

def groq_chat_with_files(input_str: str, system: Optional[str] = DEFAULT_SYSTEM_INSTRUCTION, 
                        save_context: bool = True, model: str = 'llama-3.3-70b-versatile') -> None:

Available Groq models include:

  • llama-3.3-70b-versatile
  • llama-3.1-8b-instant
  • gemma-2-27b-it
  • mixtral-8x7b-32768

Changing Gemini Models

You can change the Gemini model by modifying the default parameter in the gemini_chat_with_files function:

def gemini_chat_with_files(input_str: str, system: Optional[str] = DEFAULT_SYSTEM_INSTRUCTION, 
                          save_context: bool = True, model: str = "gemini-2.0-flash-thinking-exp") -> None:

Available Gemini models include:

  • gemini-2.0-flash-thinking-exp
  • gemini-2.0-flash-thinking-exp-01-21
  • gemini-2.0-pro-latest
  • gemini-1.5-pro
  • gemini-1.5-flash

Customizing System Instructions

You can modify the DEFAULT_SYSTEM_INSTRUCTION variable to change how the AI responds:
E.g. a different prompt would be:

DEFAULT_SYSTEM_INSTRUCTION = """Use markdown but do NOT apply markdown code blocks. Do NOT use emojis."""

Troubleshooting

  1. File not found error: Make sure your base path is correctly set for your operating system (Windows, macOS, or Linux) and that the file exists within your vault. On macOS, remember that the system name is “Darwin”, not “MacOS” in Python’s platform detection.

  2. API errors: Verify your API keys are correct and have sufficient credits/quota.

  3. Module not found errors: Install the required packages with pip:

    pip install openai google-generativeai
    
  4. Regex pattern not found: Test your regex patterns separately to make sure they match the content you’re looking for.

Staying Updated

AI models are frequently updated. If a model becomes deprecated:

  1. Check the provider’s documentation for the latest model names

  2. Update the default model parameter in the relevant function
