New Plug: OpenAI / LLM AI integration

Hello!

This is something I’ve been working on for a few days and have already shared it on the Discord (thanks for the feedback there!). I think the plug is now at a point where it’s usable enough for others to try out:

(Please back up your space! This does insert and replace text, so it’s better to be safe than risk your data.)

It’s still pretty early, but here are a few things it can do:

  • Create and insert a summary of either selected text or the whole note
  • Generate tags for a note and set them in the frontmatter
  • Generate images using Dall-E and save them locally
  • Send the selected text (or the whole note) as a user prompt and insert the response into the open note
  • Have an interactive user/assistant chat inside a note, so you can keep your chat history in SB
  • Change the OpenAI base URL to point at local LLMs (I’ve only briefly tested this with Ollama so far)

The readme lists the commands available through the command palette. There are no slash commands or keyboard shortcuts yet, but I’m open to suggestions for them.

Here’s an example of the chat feature:
(animated demo: streaming-chat-example)

On a side note: I’m not really a JavaScript dev, so I’d be more than happy if anyone wants to help make it better. Feel free to suggest changes, features, etc. too.

4 Likes

This is very cool! Thanks for building this!

Amazing!
Thanks for doing this.

How difficult/easy would it be to adjust this Plug to use Gemini?

Would this work with Mistral’s API?

The OpenAI API seems to be more or less a standard for other LLMs to implement, so just check whether they’re OpenAI-compatible.

Llamafile implements it, for instance: https://github.com/Mozilla-Ocho/llamafile (Distribute and run LLMs with a single file).
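
I haven’t tried it myself, but since llamafile’s built-in server listens on http://localhost:8080 and exposes OpenAI-compatible /v1 endpoints, I’d expect a SETTINGS config along these lines to work (the model name is more or less a placeholder here, since llamafile serves whichever model file you launched):

ai:
  defaultTextModel: local-model
  openAIBaseUrl: http://localhost:8080/v1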

1 Like

I just checked and it is. I must be setting up something wrong in SETTINGS or SECRETS.

It doesn’t look like Gemini has an OpenAI-compatible API, so it wouldn’t be supported right now.

Maybe you could try using the proxy server from liteLLM (https://github.com/BerriAI/litellm), which can call 100+ LLM APIs (Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, SageMaker, HuggingFace, Replicate, and more) using the OpenAI format; see the Quick Start in the liteLLM docs.
I haven’t used it, but it says it supports Gemini.
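
If that works, the plug would just see liteLLM as another OpenAI-compatible endpoint. As a rough, untested sketch: after starting the proxy with a Gemini model, SETTINGS would point at it like this (port 4000 is liteLLM’s default, I believe, and the model name depends on how you start the proxy):

ai:
  defaultTextModel: gemini-pro
  openAIBaseUrl: http://localhost:4000/v1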

Overall I’d rather support one type of API, but I do plan on adding support for configuring multiple models and switching between them, so I’ll do some planning around how that could extend to different APIs too, if there’s demand for it.

1 Like

Mistral’s API does look like it should be compatible. I don’t have an account to test with yet, but what error(s) are you getting?

In SECRETS, you should only need

OPENAI_API_KEY: "mistral api key here"

And in SETTINGS, it should have:

ai:
  defaultTextModel: gpt-4-0125-preview
  openAIBaseUrl: https://api.mistral.ai/v1

You’ll have to change defaultTextModel to one of Mistral’s models too.

And once you change either of those files, you’ll probably need to run the “System: Reload” command and/or do a hard refresh so it picks up the new settings.

I tried Mistral using these settings:

ai:
  defaultTextModel: mistral-medium
  openAIBaseUrl: https://api.mistral.ai/v1

and Ollama with these:

ai:
  defaultTextModel: phi
  openAIBaseUrl: http://127.0.0.1:11434

For Mistral, does the SECRETS page have to look like this:

OPENAI_API_KEY: "aaaaaaaaaaaaaaaaaaaaaaaa"

When trying the chat command I get “page does not match the required format for a chat”.

Other commands show a “Contacting LLM, please wait” message, but nothing happens.

Thanks for trying it out! I signed up for a mistral.ai account and was able to get it working, using the same settings as you:

SETTINGS:

ai:
  defaultTextModel: mistral-medium
  openAIBaseUrl: https://api.mistral.ai/v1

SECRETS:

OPENAI_API_KEY: "xxxxx"

Right now, the only thing required in SECRETS is that OPENAI_API_KEY, even for Ollama.

One thing I noticed is that running the “System: Reload” command doesn’t seem to be enough for settings changes to take effect. It seems like I need to run “System: Reload” and then also do a hard refresh of the page I’m on. Can you give that a try?

For Ollama, can you try these settings (with the /v1 on the end of the URL)?

ai:
  defaultTextModel: phi
  openAIBaseUrl: http://localhost:11434/v1

I should make this easier, but that command expects the page to look like this:

**user**: What's the population of new york?

To get it to work on an existing page, basically just add **user**: <your query> to the bottom of the page. It’ll ignore anything above the first **user**: it finds, though.
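
For reference, replies get written back under an **assistant**: marker, so a page mid-conversation ends up looking something like this (the answer text is just an example):

**user**: What's the population of new york?

**assistant**: New York City has a population of roughly 8.3 million people.

**user**: How does that compare to Los Angeles?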

Can you check whether the JavaScript console has any errors in it? If there’s an error, it gets logged to the console, but currently no notification is shown on the page.

Thanks a lot! I got mistral-medium working, but I still had no success with Ollama running on my machine.

1 Like

Are you using Chrome by any chance? I was testing with Firefox originally and Ollama worked, but on Chrome I ran into CORS errors.

I pushed a few changes (release notes), but try this for Ollama (note the new requireAuth: false):

ai:
  defaultTextModel: phi
  openAIBaseUrl: http://localhost:11434/v1
  requireAuth: false

I added this example to the readme too.

Two other big changes:

  • You can use the “chat on page” command on any page now, and trigger it with Cmd+Shift+Enter or Ctrl+Shift+Enter
  • SETTINGS and SECRETS now reload automatically when changed, so no more messing about with reloading

Thanks a lot! Ollama still does not respond for me, but it might have to do with my machine. Anyway, Mistral works great and it is relatively cheap, so I am covered :blush:

I just love this thing. The fact that OpenAI returns content in markdown just makes this perfect for SilverBullet.

3 Likes

I haven’t updated this thread in a while, but wanted to share a few highlights from recent releases!

The latest release is 0.0.11. Some of the big changes since I originally posted this thread:

Thanks to everyone who’s provided feedback so far!

2 Likes

Let me bump this one for visibility. @justyns is doing an amazing job progressing this project. There’s so much opportunity in this one that I wish I had more time to play with it.

Note there’s a whole website (published using silverbullet-pub) as well:

https://ai.silverbullet.md

1 Like

If anyone is interested in recent additions, I’ve also been adding quite a bit related to vector embeddings and search:

It’s in the main branch, but I haven’t made a release yet since I’ll be out of town for a few days and want to make sure nothing’s going to blow up. If anyone is interested in testing out these changes, please feel free!

One of the features I’m pretty excited about is enabling RAG in the chat mode, so I can ask about things without having to actually link directly to a note now.
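
If anyone wants to try the embeddings support from the main branch, the SETTINGS additions look roughly like this (treat the option names as provisional until the release; double-check them against the docs):

ai:
  indexEmbeddings: true
  chat:
    searchEmbeddings: true
  textEmbeddingModels:
    - name: local-embeddings
      provider: ollama
      modelName: nomic-embed-text
      baseUrl: http://localhost:11434
      requireAuth: false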

3 Likes

Version 0.4.0 of the AI plug is now released. There are quite a few changes; you can see the changelog on the fancy new docs website here:

https://ai.silverbullet.md/Changelog/#040-2024-09-16

A few important changes:

  • Support for space-config
  • Support for post-processors on templated prompts
  • Several new targets for the insertAt option, like replacing a line, a selection, or a list (there’s a sketch below the list)
  • New prompts in the AICore library to demo some of the new features
  • https://ai.silverbullet.md is now built using a combination of silverbullet-pub, mkdocs, and mkdocs-material, and looks much better than before, imo
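
To give a concrete idea of how the templated prompt options fit together, here’s the rough shape of a prompt template (the exact frontmatter fields are documented on the site; the insertAt value and post-processor name below are just illustrative):

---
tags: template, aiPrompt
aiprompt:
  description: "Summarize the current note"
  insertAt: replace-selection
  postProcessors:
    - myPostProcessor
---

Summarize the selected text in one short paragraph.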

As always, please let me know if you run into any issues or have any ideas!

4 Likes