Exploring the possibilities of Obsidian Copilot with LM Studio.
I’m using the llama-3.2-3b-instruct model. After a few successful prompts in ‘vault QA’ mode, I get a `(no status code or body)` error.
The LM Studio console reports: `The number of tokens to keep from the initial prompt is greater than the context length`.
With my limited understanding, my guess is that the conversation history keeps growing with each prompt until it exceeds the model’s context window, and that I need to clear some kind of cache or start with a clean context. But how do I do that? Or is something else causing this behavior?
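For reference, here is a minimal request I can use to check whether the server itself still responds outside of Copilot. This is just a sketch, assuming LM Studio’s OpenAI-compatible server is running at its default address (`http://localhost:1234/v1`); if a small one-off prompt like this works while Copilot fails, that would point to the accumulated conversation context rather than the server:

```python
# Sanity check: send one small chat request directly to LM Studio's
# OpenAI-compatible endpoint (assumed default: http://localhost:1234/v1),
# bypassing the Copilot plugin and its accumulated history.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "llama-3.2-3b-instruct",  # model name as loaded in LM Studio
        "messages": [{"role": "user", "content": "Say hello in one word."}],
        "max_tokens": 16,  # keep the request tiny so context length isn't a factor
    },
    timeout=60,
)
print(resp.status_code)
print(resp.json())
```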