New Plug: OpenAI / LLM AI integration

There is a problem with index.md on ai.silverbullet.md:

Macro Syntax Error
Line 39 in Markdown file: unexpected char ‘"’ at 1416
{{ include_file("Commands) }}

Thanks!

Mistral configuration (with SB running locally in Firefox under Windows 11):

  • the useProxy parameter is required and must be set to false
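For reference, a Mistral entry might look roughly like this, following the textModels shape used elsewhere in this thread. The display name, model id, base URL, and secret name are my placeholders, not a confirmed working config; adjust them to your setup:

```lua
-- Hypothetical sketch: names, model id, and secret name are placeholders.
-- Only useProxy = false is confirmed by the note above.
textModels = {
  {
    name = "mistral-small",             -- placeholder display name
    modelName = "mistral-small-latest", -- placeholder model id
    provider = "openai",                -- Mistral exposes an OpenAI-compatible API
    baseUrl = "https://api.mistral.ai/v1",
    secretName = "MISTRAL_API_KEY",     -- placeholder secret name
    requireAuth = true,
    useProxy = false                    -- required, per the note above
  }
}
```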

Thanks!

For openrouter:

    textModels = {
      {
        requireAuth = true,
        useProxy = false,
        name = "gemma",
        modelName = "google/gemma-3-27b-it:free",
        provider = "openai",
        baseUrl = "https://openrouter.ai/api/v1",
        secretName = "OPENROUTER_API_KEY"
      }
    }

Bug @justyns

spacelua.interpolate(${JSON.stringify(M)}, ${JSON.stringify(ye)})
=> 
spacelua.interpolate("---\ntags: meta/template/aiPrompt\naiprompt:\n  description: Generate company profile\n  slashCommand: aiCompany\n---\n/ai\n", {"page":{"name":"work/AICompany","size":117,"contentType":"text/markdown","created":"2025-12-20T18:23:43.022","lastModified":"2025-12-20T19:04:04.740","perm":"rw","ref":"work/AICompany","tag":"page","tags":[]},"currentLineNumber":7,"lineStartPos":113,"lineEndPos":113,"currentPageText":"---\ntags: meta/template/aiPrompt\naiprompt:\n  description: Generate company profile\n  slashCommand: aiCompany\n---\n\n"})}

ye must be converted to a Lua object:
spacelua.interpolate(js.tolua(${JSON.stringify(M)}), js.tolua(${JSON.stringify(ye)}))
What do you think?

@malys and @bau - what version of SilverBullet are you running? I’m thinking I should have defaulted useProxy to false for now, but useProxy=true will only work (when auth is involved) on the latest SilverBullet edge after PR #1721.

You’re right, I had to add something similar to convert things back and forth from Lua. I might be able to just rewrite that command in Lua; I’ll take a look.

@justyns
I just tested the Edge version without the useProxy parameter and everything works fine.
Thanks!


@justyns I use the stable SB version.


@justyns Do you plan to migrate your plug fully to space-lua?

I was considering it, but for now my goal is to keep the core of the plug in TypeScript and only rewrite parts of it in space-lua.

Another big release! 0.6.0 has a lot of stuff I’ve wanted to add for a long time.

The main addition is an assistant chat panel with tool calling, which means the LLM can actually interact with your space instead of just generating text. Along with that, we can also define additional custom tools using space-lua.
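To make the custom-tools idea concrete, here’s a rough sketch of what a space-lua tool definition could look like. Everything in it (the ai.defineTool name, the parameter-schema shape, the handler signature) is my guess at a plausible API, not the plug’s documented one, so check the plug docs for the real registration call:

```lua
-- Hypothetical sketch only: the registration function name and the
-- schema/handler shape are assumptions, not the plug's documented API.
ai.defineTool {
  name = "word_count",
  description = "Count the words in a piece of text",
  parameters = {
    text = { type = "string", description = "Text to count words in" }
  },
  handler = function(args)
    local count = 0
    for _ in string.gmatch(args.text, "%S+") do
      count = count + 1
    end
    return { words = count }
  end
}
```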

Because the chat lives in its own panel, it isn’t automatically saved to your notes, so quick one-off queries don’t clutter your space.

There’s also a very basic approval/diff system, so tools that make changes to your pages require approval first.

There are also some other tools like navigating to specific pages:

The assistant panel also includes a button to export the current chat to a regular note, which you can then continue with the normal ai: chat-on-page we’ve always had. The Chat on Page feature also gets tool support.

The “AI: Connectivity Test” command now also tests the LLM’s ability to return structured responses and call tools.

There’s also a new provider-based config option, so you can define your provider once and the list of available models will be fetched dynamically.

Links with more info:

Let me know if you have any feedback or come up with any interesting use cases! I plan on focusing on ironing out these new features for a while, but they should be pretty usable as-is.


And to show some example tools, I added a few I’ve been using to my shared library:

  • Tool to search your space using the awesome silversearch plug by @MrMugame
  • Tool to search a SearXNG instance
  • Tool to fetch a web page and convert it to markdown using turndown

I even used the assistant panel to update the repo file for me.



FYI, I updated, and after a reload SB was heavily impacted and very slow.
I had to uninstall the plug.

I updated last night and did not have that experience, fwiw.


@malys did you have embeddings generation enabled by any chance? That’s the only thing I can think of offhand that could cause slowness.

Big lag with the Markmap and Marp Slides plugs; it takes about 1 s to open the lateral view.

0.6.3 is released.

I didn’t include 0.6.2 in this thread, so release notes for both copied below. I’m hoping a few of the changes will help with performance overall, but I’ll keep looking into what else could cause lag.

0.6.3 (2025-01-10)

  • Embeddings now include page title and section headers. Requires re-indexing to take effect.
  • Benchmark command now shows progress
  • Reuse SB’s theming where possible so that the UI is more consistent
  • Add path-based permissions for agents (allowedReadPaths, allowedWritePaths) to restrict tool access, wiki-link context, and current page context to specific folders
  • Add an explicit “AI: Refresh Config” command
  • Improve a potential performance issue where we still did unnecessary inits and datastore reads
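As a sketch of the path-based agent permissions mentioned above: only the allowedReadPaths and allowedWritePaths option names come from the release notes; where exactly an agent definition lives in the config is my assumption, so treat this as illustrative:

```lua
-- Hypothetical placement of an agent definition; only
-- allowedReadPaths/allowedWritePaths are from the release notes.
config.set {
  ai = {
    agents = {
      journal = {
        allowedReadPaths = { "Journal/", "Reference/" },  -- folders the agent may read
        allowedWritePaths = { "Journal/" }                -- folders the agent may modify
      }
    }
  }
}
```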

0.6.2 (2025-01-05)

  • Improvements to the default system prompt to use fewer tokens
  • Generate /llms.txt and /llms-full.txt
  • Agents now inherit the base system prompt by default, but can be toggled off with inheritBasePrompt
  • Fix potential performance issue where page:index events caused unnecessary async work when embeddings are disabled
  • Fix potential performance issue where the config was re-read even when there were no changes
  • Parallelize model discovery and cache Ollama model info to avoid redundant API calls
  • Add new Reindex All Embeddings command
  • Fix Embeddings Search virtualpage
  • Fix error on chat panel when no text model selected
  • Add RAG status indicator in chat panel header, show embeddings context like tool calls, move them to their own messages
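A sketch of toggling base-prompt inheritance off for an agent, per the 0.6.2 notes above. Only the inheritBasePrompt option name comes from the notes; the surrounding agent-definition shape is my assumption:

```lua
-- Hypothetical agent definition; only inheritBasePrompt is from the
-- 0.6.2 release notes, the rest of the shape is assumed.
config.set {
  ai = {
    agents = {
      summarizer = {
        systemPrompt = "Summarize pages tersely.",
        inheritBasePrompt = false  -- do not prepend the base system prompt
      }
    }
  }
}
```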

Changelog:

  • Image generation supports gpt-image models now
  • Support provider-based config for image and embedding models: define a single provider and use it for text, images, and embeddings
  • Add new defaultEmbeddingModel and defaultImageModel config options
  • Support batch embedding generation for openai and ollama
  • Show pricing info for remote APIs in the model picker
  • Add new “AI: Reset Selected Models” command
The main change is finishing the provider-based config, so you can configure text/images/embeddings all at once like this:

    config.set {
      ai = {
        providers = {
          openai = {
            apiKey = "your-openai-key-here"
          }
        },
        defaultTextModel = "openai:gpt-4o",
        defaultEmbeddingModel = "openai:text-embedding-3-small",
        indexEmbeddings = true,
        defaultImageModel = "openai:dall-e-3"
      }
    }