logancyang/obsidian-copilot

πŸ” Copilot for Obsidian


Copilot for Obsidian is a free and open-source LLM interface right inside Obsidian. It has a minimalistic design and is straightforward to use.

  • 💬 ChatGPT UI in Obsidian.
  • 🛠️ Prompt AI with your writing using Copilot commands to get quick results.
  • 🚀 Turbocharge your Second Brain with AI.
  • 🧠 Talk to your past notes for insights.

My goal is to make this AI assistant local-first and privacy-focused. It has a local vector store and can work with local models for chat and QA completely offline! More features are under construction. Stay tuned!

[Screenshot: Copilot chat UI]

If you enjoy Copilot for Obsidian, please consider sponsoring this project, or donate by clicking the button below. It will help me keep this project going to build toward a privacy-focused AI experience. Thank you!

Buy Me A Coffee

SPECIAL THANKS TO OUR TOP SPONSORS: @pedramamini, @Arlorean, @dashinja, @azagore, @MTGMAD, @gpythomas, @emaynard, @scmarinelli, @borthwick

Changelog

Announcement 🚨

We are migrating off of PouchDB for better Obsidian Sync and mobile support (starting from v2.6.3). Your existing custom prompts MUST be dumped to markdown using the command "Copilot: Dump custom prompts to markdown files". After running it, you can Add/Edit/Apply/Delete your custom prompts as usual.

Please make sure you run it, or you will lose all your old prompts when PouchDB is removed!

v2.6.0: MOBILE SUPPORT is here! 🎉🎉🎉

  • Huge thanks to our awesome @gianluca-venturini for his incredible work on mobile support! Now you can use Copilot on your phone and tablet! 🎉🎉🎉
  • Complete overhaul of how models work in Copilot settings. Now you can add any model to your model picker given its name, model provider, API key and base URL! No more waiting for me to add new models!
  • Say goodbye to CORS errors for both chat and embedding models! The new model table in settings lets you turn on "CORS" for individual chat models if you see CORS issues with them. And embedding models are immune to CORS errors by default!
    • Caveat: this is powered by the Obsidian API's requestUrl, which does not support streaming of LLM responses. So streaming is disabled whenever you have CORS on in Copilot settings. Please upvote this feature request to let Obsidian know you need streaming!
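To give a sense of what a custom model entry amounts to, here is a minimal sketch that turns one into an OpenAI-compatible chat request. The `ModelEntry` shape and `buildChatRequest` helper are hypothetical illustrations, not the plugin's internal types:

```typescript
// Hypothetical shape of a user-defined model entry (for illustration only;
// the plugin's internal settings types may differ).
interface ModelEntry {
  name: string;      // e.g. "mistral-large-latest"
  provider: string;  // e.g. "openrouter"
  apiKey: string;
  baseUrl: string;   // e.g. "https://openrouter.ai/api/v1"
}

// Build the pieces of an OpenAI-compatible /chat/completions request
// from a model entry and a message list.
function buildChatRequest(
  model: ModelEntry,
  messages: { role: string; content: string }[]
): { url: string; headers: Record<string, string>; body: string } {
  return {
    url: `${model.baseUrl.replace(/\/$/, "")}/chat/completions`,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${model.apiKey}`,
    },
    body: JSON.stringify({ model: model.name, messages }),
  };
}
```

Because so many providers (OpenRouter, LM Studio, Ollama, etc.) expose this same OpenAI-compatible shape, a single name + provider + API key + base URL entry is enough to address any of them.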

🎉 HIGHLY ANTICIPATED v2.5.0: Vault QA (BETA) mode (with local embedding support)! Claude 3! 🎉🎉🎉

The highly anticipated biggest update of all is here!

The brand new Vault QA (BETA) mode allows you to chat with your whole vault, powered by a local vector store. Ask questions and get answers with cited sources!

What's more, with Ollama local embeddings and local chat models, this mode works completely offline! This is a huge step toward truly private and local AI assistance inside Obsidian!

(Huge shoutout to @AntoineDao for working with me on Vault QA mode!)

Model Providers

OpenAI, Anthropic, Azure OpenAI, Google Gemini, OpenRouter, GROQ, 3rd Party Models with OpenAI-Compatible API, LM Studio and Ollama are supported model providers.

OpenRouter.ai hosts some of the best open-source models at the moment, such as MistralAI's new models. Check out their website for all the good stuff they have!

LM Studio and Ollama are the two best choices for running local models on your own machine. Please check out the super simple setup guide here. Don't forget to flex your creativity in custom prompts using local models!

πŸ› οΈ Features

  • Chat with ChatGPT right inside Obsidian in the Copilot Chat window.
  • No repetitive login. Use your own API key (stored locally).
  • No monthly fee. Pay only for what you use.
  • Model selection of OpenAI, Azure, Google, Claude 3, OpenRouter and local models powered by LM Studio and Ollama.
  • No need to buy ChatGPT Plus to use GPT-4.
  • No usage cap for GPT-4 like ChatGPT Plus.
  • One-click copying any message as markdown.
  • One-click saving the entire conversation as a note.
  • Use a super long note as context, and start a discussion around it by switching to "Long Note QA" in the Mode Selection menu.
  • Chat with your whole vault by selecting "Vault QA" mode. Ask questions and get cited responses!
  • All QA modes are powered by retrieval augmentation with a local vector store. No sending your data to a cloud-based vector search service!
  • Easy commands to simplify, emojify, summarize, translate, change tone, fix grammar, rewrite into a tweet/thread, count tokens and more.
  • Set your own parameters like LLM temperature, max tokens and conversation context based on your needs (please be mindful of the API cost).
  • User custom prompts! You can add, apply, edit and delete your custom prompts, persisted in your local Obsidian environment! Be creative with your own prompt templates; the sky is the limit!
  • Local model support for offline chat using LM Studio and Ollama.
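To illustrate the retrieval step behind the QA modes, here is a minimal cosine-similarity search over locally stored embeddings. This is a simplified stand-in for illustration, not the plugin's actual vector store:

```typescript
// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A chunk of a note plus its embedding, as a local store might hold it.
interface Chunk { text: string; embedding: number[]; }

// Return the k chunks most similar to the query embedding; these become
// the cited context handed to the chat model.
function retrieve(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}
```

Because both the embeddings and this search run locally, no note content has to leave your machine for retrieval.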

🎬 Demos

🤗 New to Copilot? Quick Guide for Beginners:

  • Chat with ChatGPT, copy messages to note, save entire conversation as a note
  • QA around your past note
  • Fix grammar and spelling, Summarize, Simplify, Emojify, Remove URLs
  • Generate glossary, table of contents
  • Translate to a language of your choosing
  • You can find all Copilot commands in your command palette

To use Copilot, you need an API key from one of the LLM providers such as OpenAI, Azure OpenAI, Gemini or OpenRouter (free!). You can also use it offline with LM Studio or Ollama!

Once you put your valid API key in the Copilot settings, don't forget to click Save and Reload. If you are a new user and have trouble setting it up, please open an issue and describe it in detail.

💬 User Custom Prompt: Create as Many Copilot Commands as You Like!

You can add, apply, edit and delete your own custom Copilot commands, all persisted in your local Obsidian environment! Check out this demo video below!

🧠 Advanced Custom Prompt! Unleash your creativity and fully leverage the long context window!

This video shows how Advanced Custom Prompt works. This form of templating enables many more possibilities with long context window models. If you have your own creative use cases, don't hesitate to share them in the discussions or in the YouTube comment section!

🔧 Copilot Settings

The settings page lets you set your own temperature, max tokens and conversation context based on your needs.

New models will be added as I get access.

You can also use your own system prompt, choose between different embedding providers such as OpenAI, CohereAI (their trial API is free and quite stable!) and Huggingface Inference API (free but sometimes times out).

βš™οΈ Installation

Copilot for Obsidian is now available in the Obsidian Community Plugins store!

  • Open Community Plugins settings page, click on the Browse button.
  • Search for "Copilot" in the search bar and find the plugin with this exact name.
  • Click on the Install button.
  • Once the installation is complete, enable the Copilot plugin by toggling on its switch in the Community Plugins settings page.

Now you can see the chat icon in your left-side ribbon; clicking on it will open the chat panel on the right! Don't forget to check out the Copilot commands available in the command palette!

⛓️ Manual Installation

  • Go to the latest release
  • Download main.js, manifest.json and styles.css, and put them under .obsidian/plugins/obsidian-copilot/ in your vault
  • Open your Obsidian settings > Community plugins, and turn on Copilot.

🔔 Note

  • The chat history is not saved by default. Please use "Save as Note" to save it. The note will have a title Chat-Year_Month_Day-Hour_Minute_Second; you can change its name as needed.
  • "New Chat" clears all previous chat history. Again, please use "Save as Note" if you would like to save the chat.
  • "Use Long Note as Context" creates a local vector index for the active long note so that you can chat with a note longer than the model's context window! To start the QA, please switch from "Chat" to "QA" in the Mode Selection dropdown.
  • You can set a very long context in the setting "Conversation turns in context" if needed.
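As an illustration of how a "conversation turns in context" limit typically works, the sketch below keeps only the most recent N user/assistant exchanges when building the prompt. This is a hypothetical simplification, not the plugin's exact logic:

```typescript
// A chat message, as sent to the model.
interface Message { role: "user" | "assistant"; content: string; }

// Keep only the last `turns` exchanges as context. A turn is one user
// message plus the assistant reply that follows it, so each turn
// contributes two messages.
function trimToRecentTurns(history: Message[], turns: number): Message[] {
  if (turns <= 0) return [];
  return history.slice(-2 * turns);
}
```

A larger turn count gives the model more memory of the conversation but also sends more tokens with every request, which is why the setting trades cost against continuity.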

📣 Again, please always be mindful of the API cost if you use GPT-4 with a long context!

🤔 FAQ (please read before submitting an issue)

"You do not have access to this model"
  • You need to have access to the model APIs to use them. Usually they require an API key and a payment method.
  • A common misunderstanding I see is that some think they have access to the GPT-4 API when they get a ChatGPT Plus subscription. That's not always true (depending on when you signed up). You need access to the GPT-4 API to use the model in this plugin. Please check whether you have a payment method set up on your OpenAI account, then check in the OpenAI playground whether you can use that particular model: https://platform.openai.com/playground?mode=chat. Again, API access and ChatGPT Plus are two different things! You can use the API without a ChatGPT Plus subscription.
  • Reference issue: #3 (comment)
It's not using my note as context
  • Please don't forget to switch to "QA" in the Mode Selection dropdown in order to start the QA. Copilot does not have your note as context in "Chat" mode.
  • In fact, you don't have to click the button on the right before starting the QA. Switching to QA mode in the dropdown directly is enough for Copilot to read the note as context. The button on the right is only for when you'd like to manually rebuild the index for the active note, like, when you'd like to switch context to another note, or you think the current index is corrupted because you switched the embedding provider, etc.
  • Reference issue: #51
"insufficient_quota"
  • It might be because you haven't set up payment for your OpenAI account, or you exceeded your max monthly limit. OpenAI has a cap on how much you can use their API, usually $120 for individual users.
  • Reference issue: #11
"context_length_exceeded"
  • Please refer to the model provider's documentation for the context window size. Note: if you set a big max token limit in your Copilot settings, you could get this error. Max tokens refers to completion tokens, not input tokens. So a bigger max output token limit means a smaller input token limit!
  • The prompts behind the scenes for Copilot commands can also take up tokens, so please limit your message length and max tokens to avoid this error. (For QA with Unlimited Context, use the "QA" mode in the dropdown! Requires Copilot v2.1.0.)
  • Reference issue: #1 (comment)
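The arithmetic behind this error can be sketched as follows. This is illustrative only; real token counts come from the model's tokenizer:

```typescript
// Completion (max) tokens count against the model's context window, so a
// larger max_tokens setting leaves less room for the input. Illustrative
// only: real token counts come from the model's tokenizer.
function inputTokenBudget(contextWindow: number, maxTokens: number): number {
  return contextWindow - maxTokens;
}

// e.g. a 4096-token model with max_tokens = 3000 leaves only 1096 tokens
// for the prompt, the behind-the-scenes command prompts and note context
// combined; exceed that and you get context_length_exceeded.
```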
Azure issue
  • It's a bit tricky to get all Azure credentials right on the first try. My suggestion is to test with curl in your terminal first, make sure you get a response back, and then set the correct params in Copilot settings. Example:
    curl "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=VERSION" \
      -H "Content-Type: application/json" \
      -H "api-key: YOUR_API_KEY" \
      -d '{
        "prompt": "Once upon a time",
        "max_tokens": 5
      }'

  • Reference issue: #98

When opening an issue, please include relevant console logs. You can go to Copilot's settings and turn on "Debug mode" at the bottom for more console messages!

πŸ“ Planned features (feedback welcome)

  • New modes
    • Chat mode (originally Conversation mode): You can now provide multiple notes at once as context in conversations, for LLMs with an extended context window.
    • QA mode: You can index any folder and perform question and answer sessions using a local search index and Retrieval-Augmented Generation (RAG) system.
  • Support embedded PDFs as context
  • Interact with a powerful AI agent that knows your vault and can search, filter and use your notes as context. Explore, brainstorm and research like never before!

πŸ™ Thank You

Did you know that even the timer on Alexa needs internet access? In this era of corporate-dominated internet, I still believe there's room for powerful tech that's focused on privacy. A great local AI agent in Obsidian is the ultimate form of this plugin. If you share my vision, please consider sponsoring this project or buying me coffees!

Buy Me A Coffee

Please also help spread the word by sharing about the Copilot for Obsidian Plugin on Twitter, Reddit, or any other social media platform you use.

You can find me on Twitter/X @logancyang.