New Plug: OpenAI / LLM AI integration

Hey there,

I tried running an LLM on my server, using Ollama with the gemma3:1b model. It ran fine in the terminal during tests, so the server seems capable of running it.

Unfortunately I cannot make it work with this plugin. I always get the following error when reloading the site (on the SETTINGS page it even tells me that it could not save the file, plus “Error running command: e.provider is undefined”):

```
An exception was thrown as a result of invoking function reloadConfigEvent error: e.provider is undefined silverbullet-ai.plug.js:1:728
    cr https://my.domain.com/_plug/silverbullet-ai.plug.js:1
    cr https://my.domain.com/_plug/silverbullet-ai.plug.js:1
    (Async: EventListener.handleEvent)
    cr https://my.domain.com/_plug/silverbullet-ai.plug.js:1
    <anonym> https://my.domain.com/_plug/silverbullet-ai.plug.js:285
Error dispatching event config:loaded to silverbullet-ai.reloadConfigEvent: e.provider is undefined event.ts:72:28
    dispatchEvent event.ts:72
    dispatchEvent event.ts:77
    loadConfig client.ts:279
    reloadConfigAndCommands editor.ts:65
    syscall system.ts:134
    syscall plug.ts:46
    onMessage worker_sandbox.ts:72
    onmessage worker_sandbox.ts:59
    (Async: EventHandlerNonNull)
    init worker_sandbox.ts:45
    init worker_sandbox.ts:44
    invoke worker_sandbox.ts:107
    invoke plug.ts:108
    dispatchEvent event.ts:67
    dispatchEvent event.ts:77
    dispatchAppEvent client.ts:649
    init client.ts:249
    <anonym> boot.ts:24
    C2 async.ts:98
    <anonym> boot.ts:7
    <anonym> boot.ts:52
Uncaught (in promise) Error: e.provider is undefined
    onMessage worker_sandbox.ts:93
    onmessage worker_sandbox.ts:59
    init worker_sandbox.ts:45
    init worker_sandbox.ts:44
    invoke worker_sandbox.ts:107
    invoke plug.ts:108
    dispatchEvent event.ts:67
    dispatchEvent event.ts:77
    loadConfig client.ts:279
    init client.ts:239
    async* boot.ts:24
    C2 async.ts:98
    <anonymous> boot.ts:7
    <anonymous> boot.ts:52
```

My SECRETS page looks like this:

```yaml
OPENAI_API_KEY: "none needed"
```

And the AI part of my SETTINGS looks like this:

```yaml
ai:
  provider: openai
  defaultTextModel: "gemma3:1b"
  openAIBaseUrl: "http://localhost:11434"
  requireAuth: false
  textModels:
    - name: "ollama-gemma3:1b"
      provider: openai
      modelName: "gemma3:1b"
      baseUrl: "http://localhost:11434"
      requireAuth: false
```

I also already tried adding “/v1” to the URLs, which did not help. Is this just not possible, or am I doing something wrong here?

Edit: I also tried “ollama” as the provider.

Hey @tagirijus ,

For your config, can you try this? It’s complaining about the ‘ai.provider’ field because it isn’t accepted at that level.

```yaml
ai:
  textModels:
  - name: ollama-gemma3-1b
    modelName: gemma3:1b
    provider: ollama
    baseUrl: http://localhost:11434/v1
    requireAuth: false
```

How/where are you running silverbullet and ollama?

Most of the API calls are made from the client side, so localhost refers to wherever your client is, not to the machine the SB server runs on. You can inspect the network requests in your browser to verify this.
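A quick way to check where “localhost” resolves is to run the same request from both machines (a sketch; the port assumes Ollama’s default 11434):

```shell
# Run this once on the machine your BROWSER runs on, and once on the SB server.
# /api/tags is Ollama's model-listing endpoint. If it only answers on the
# server, then "localhost" in your plugin config points at the wrong machine.
curl -s http://localhost:11434/api/tags || echo "Ollama not reachable from this machine"
```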

I mention it because if silverbullet and ollama are running on the same server, but your client is somewhere else - you’ll probably run into another issue. The two ways I’ve gotten ollama working are:

  1. silverbullet on sb.lan.mydomain.net + ollama on ollama.lan.mydomain.net and using https://ollama.lan.mydomain.net/v1
  2. silverbullet on sb.lan.mydomain.net + ollama on local machine using http://localhost:11434

2 is useful when I’m on my macbook, for example, because I can run ollama locally and it is more powerful than the VM I normally run it on.

Thanks for your reply. So this means I could use locally running LLMs as well! That sounds really great; thanks for explaining it.

I tried different configurations, with my self-hosted online LLMs and also with my locally running LLMs:

```yaml
ai:
  textModels:
    - name: "ollama online"
      provider: ollama
      modelName: "gemma3:1b"
      baseUrl: "https://ollama.mydomain.org"
      requireAuth: false
    - name: "ollama local 4b"
      provider: ollama
      modelName: "gemma3:4b"
      baseUrl: "http://127.0.0.1:11434"
      requireAuth: false
```

I also tried with “/v1” at the end of the URLs; that did not work either. But I no longer get the errors in the log (not sure if that was a caching problem). Instead, I get this “connectivity test” result:


:satellite: AI Connectivity Test

Status Overview

:speech_balloon: Text Model Configuration

Model Details

| Setting | Value |
| --- | --- |
| Name | ollama online |
| Description | gemma3:1b on ollama |
| Provider | ollama |
| Model Name | gemma3:1b |
| Authentication | Not Required |
| Secret Name | Not provided |
| API Endpoint | https://ollama.mydomain.org |

:counterclockwise_arrows_button: Starting connectivity tests…

:electric_plug: Provider Setup

:white_check_mark: Provider successfully configured

:clipboard: Model Availability

:cross_mark: Failed to fetch available models: TypeError: NetworkError when attempting to fetch resource.

:electric_plug: API Connectivity

:satellite_antenna: Non-Streaming Test

:cross_mark: Failed to connect to API: TypeError: NetworkError when attempting to fetch resource.

Troubleshooting Tips:

  • Check your API key if needed
  • Ensure the API endpoint is accessible
  • Check if you have exceeded API rate limits
  • Verify you are not using https on silverbullet and connecting to regular http for the api endpoint

And I am also getting these console outputs:

```
aiSettings unchanged 
Object { openAIBaseUrl: "https://api.openai.com/v1", dallEBaseUrl: "https://api.openai.com/v1", requireAuth: true, secretName: "OPENAI_API_KEY", provider: "OpenAI", chat: {…}, promptInstructions: {…}, imageModels: [], embeddingModels: [], textModels: (2) […], … }
silverbullet-ai.plug.js:61:255
configureSelectedModel called with: 
Object { name: "ollama online", provider: "ollama", modelName: "gemma3:1b", baseUrl: "https://ollama.mydomain.org", requireAuth: false, description: "gemma3:1b on ollama" }
silverbullet-ai.plug.js:61:255
Now navigating to 
Object { page: "🛰️ AI Connectivity Test", selection: {…}, scrollTop: 0 }
client.ts:477:14
Changing cursor position to 
Object { head: 97, anchor: 97 }
client.ts:375:14
RangeError: Selection points outside of document
    Zf state.mjs:5
    al state.mjs:5
    create state.mjs:5
    lm state.mjs:5
    update state.mjs:5
    dispatch view.mjs:5
    navigateWithinPage client.ts:376
    initNavigator client.ts:482
    r navigator.ts:125
    Q2 async.ts:98
    r navigator.ts:106
    navigate navigator.ts:78
    navigate client.ts:1042
    navigate editor.ts:66
    syscall system.ts:134
    syscall plug.ts:46
    onMessage worker_sandbox.ts:72
    onmessage worker_sandbox.ts:59
    init worker_sandbox.ts:45
    init worker_sandbox.ts:44
    invoke worker_sandbox.ts:107
    invoke plug.ts:108
    dispatchEvent event.ts:67
    dispatchEvent event.ts:77
    loadConfig client.ts:277
    init client.ts:239
    async* boot.ts:24
    Q2 async.ts:98
    <anonymous> boot.ts:7
    <anonymous> boot.ts:52
async.ts:99:12
aiSettings unchanged 
Object { openAIBaseUrl: "https://api.openai.com/v1", dallEBaseUrl: "https://api.openai.com/v1", requireAuth: true, secretName: "OPENAI_API_KEY", provider: "OpenAI", chat: {…}, promptInstructions: {…}, imageModels: [], embeddingModels: [], textModels: (2) […], … }
silverbullet-ai.plug.js:61:255
configureSelectedModel called with: 
Object { name: "ollama online", provider: "ollama", modelName: "gemma3:1b", baseUrl: "https://ollama.mydomain.org", requireAuth: false, description: "gemma3:1b on ollama" }
silverbullet-ai.plug.js:61:255
configureSelectedModel called with: 
Object { name: "ollama online", provider: "ollama", modelName: "gemma3:1b", baseUrl: "https://ollama.mydomain.org", requireAuth: false, description: "gemma3:1b on ollama" }
silverbullet-ai.plug.js:61:255
Error fetching Ollama models: TypeError: NetworkError when attempting to fetch resource. silverbullet-ai.plug.js:14:7795
    listModels https://d.mydomain.org/_plug/silverbullet-ai.plug.js:14
Error calling OpenAI chat endpoint: TypeError: NetworkError when attempting to fetch resource. silverbullet-ai.plug.js:14:6151
    nonStreamingChat https://d.mydomain.org/_plug/silverbullet-ai.plug.js:14
Object { path: {…}, tag: {…} }
graphview.plug.js:1:9428
Saving page 🛰️ AI Connectivity Test client.ts:673:20
```

I replaced my actual domain with mydomain.org in all the pasted snippets here.

Thanks for your help, by the way. I really appreciate it! (=

Edit: I also just tried switching between “http” and “https”. That did not help.

Edit 2: I also tried “gemma-3-4b-it” as the modelName for the local LLMs, which I run with LM Studio (that is what the model is called there). That did not help either.


It might be a CORS issue. Depending on your browser, the networking tab might say something about it, or you might see failed OPTIONS requests.

If you aren’t already, can you try setting OLLAMA_ORIGINS? This is what I usually run ollama with for testing:

```shell
OLLAMA_ORIGINS="*" OLLAMA_HOST=0.0.0.0:11434 ollama serve
```

It’s kind of hidden in the ollama docs here.

Thanks for your help. Unfortunately I cannot see any CORS issues. I also set the environment variables you mentioned on the Docker container I run ollama in; that did not help either, unfortunately. And I again tested different URLs and provider strings in the settings.
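For reference, this is roughly how the variables end up on the container in a compose setup (a sketch; the service name and port mapping are assumptions, adjust to your setup):

```yaml
services:
  ollama:
    image: ollama/ollama
    environment:
      - OLLAMA_ORIGINS=*
      - OLLAMA_HOST=0.0.0.0:11434
    ports:
      - "11434:11434"
```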

I enjoy the AI plug; it is incredible how easily we can call GPT from within a note and have it seamlessly add to it.

I feel we can use it to bridge space-lua to regular users eventually. I imagine having the space-lua API docs in the prompt, so you can just ask it to create buttons/scripts that insert space-lua templates anywhere. I think that is more important than summarize/rewrite, because I want to do the writing; I don’t always want to do Lua.

Like, if you just do /ai make me a button to go to yesterday’s journal entry and make it show up on every journal entry, it should tell you the steps and write the button’s space-lua code right there.

There is certainly special care needed for the prompt, but I hope you can imagine the workflow! (Just moving from the space-script creator in the AI plug docs to a space-lua creator.)


Yes, I’ve been thinking about this too. I think if you feed an LLM some of the SB specific Lua documentation (perhaps even all of it, including the API docs), it may actually produce very accurate space lua scripts for you.

It might make sense to do a finetuned model / cached prompt, given the amount of code and docs… SB-GPT, anyone?

SB docs are already markdown, so it shouldn’t be hard to feed the right context into a prompt. I think Zef mentioned https://llmstxt.org/ before. Something like that shouldn’t be too hard to export.

I had tested something like what @nguyeho7 mentioned here before (for space script):

Basically just a prompt that links to the space script docs. It worked OK, but I didn’t end up using it much because the UX was a little weird after the initial response. Like, how do you tell it to iterate on the script without ending up with tons of duplicate space-script/lua blocks on the page?

My ideal UX would be to let the LLM edit the existing page with a diff view to approve/reject the edit. And/or a floating/side chat that can use tools to create new notes and modify existing ones. I originally avoided that because it would make things harder to use on mobile, and I still like using the note itself as the chat interface.

The famous Andy Matuschak did this for Obsidian, “vibe-coding” plugins to replicate Potluck features with astonishing success.

There’s probably something about these pocket universe APIs that makes it work even better.

I love Ink & Switch! I tried to do it for SB, but it definitely needs some prompt tuning and user-experience testing to make it usable out of the box (in my case, it often generated Lua code that did not work, which was a huge pain). Also, as justyns mentioned, there is no way to edit a page with the LLM, so it just adds more duplicate Lua scripts. Ideally it would “just work” on the first shot, in which case it is seamless. Maybe each space-lua generation call could create a draft page to iterate on before pasting the final result?

I definitely think SB being able to self-modify when prompted with natural language is cooler than summarizing or translating notes, but it is a hard problem.
