I tried it with Interactivity today. It works (VSCode Copilot's Claude Sonnet handled the Python code changes on the first try), but because I had chunking enabled, the output came out as overlapping sheets of text, and I didn't go into further tries. If you have a large context window and unlimited output tokens on a paid API, or if you use a local LLM model (which Interactivity supports), you can get streaming working fine. No problem on the plugin's side.
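For anyone hitting the same overlap: one common cause is a streaming handler that receives *cumulative* snapshots of the response so far but appends each one as if it were a delta. A minimal sketch of that failure mode and the fix; the snapshot behavior here is an assumption for illustration, not Interactivity's actual API:

```python
# Hypothetical: chunked streaming where each callback delivers the
# full response so far (a cumulative snapshot), not just new text.
snapshots = ["Hello", "Hello, wor", "Hello, world!"]

# Naive handling: joining every snapshot repeats earlier text,
# producing the "overlapping sheets" effect.
naive = "".join(snapshots)  # "HelloHello, worHello, world!"

# Fix: emit only the part that is new since the last snapshot.
printed = ""
for snap in snapshots:
    delta = snap[len(printed):]  # new suffix only
    print(delta, end="", flush=True)
    printed = snap
print()  # final output: "Hello, world!"
```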