Hey there,
I tried running an LLM on my server, using Ollama with the gemma3:1b model. It ran fine in the terminal during tests, so the server seems capable of running it.
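Roughly what I did for the terminal test (a sketch; the second command assumes Ollama's default port 11434 and its OpenAI-compatible endpoint):

```shell
# Interactive chat with the model in the terminal (this worked):
ollama run gemma3:1b

# Sanity check of the OpenAI-compatible HTTP endpoint,
# which is what a plugin configured with an openAIBaseUrl would talk to:
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gemma3:1b", "messages": [{"role": "user", "content": "hi"}]}'
```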
Unfortunately I cannot get it to work with this plugin. I always get the error below when reloading the site (on the SETTINGS page it even tells me that it could not save the file, plus "Error running command: e.provider is undefined"):
```
An exception was thrown as a result of invoking function reloadConfigEvent error: e.provider is undefined silverbullet-ai.plug.js:1:728
cr https://my.domain.com/_plug/silverbullet-ai.plug.js:1
cr https://my.domain.com/_plug/silverbullet-ai.plug.js:1
(Async: EventListener.handleEvent)
cr https://my.domain.com/_plug/silverbullet-ai.plug.js:1
<anonymous> https://my.domain.com/_plug/silverbullet-ai.plug.js:285
Error dispatching event config:loaded to silverbullet-ai.reloadConfigEvent: e.provider is undefined event.ts:72:28
dispatchEvent event.ts:72
dispatchEvent event.ts:77
loadConfig client.ts:279
reloadConfigAndCommands editor.ts:65
syscall system.ts:134
syscall plug.ts:46
onMessage worker_sandbox.ts:72
onmessage worker_sandbox.ts:59
(Async: EventHandlerNonNull)
init worker_sandbox.ts:45
init worker_sandbox.ts:44
invoke worker_sandbox.ts:107
invoke plug.ts:108
dispatchEvent event.ts:67
dispatchEvent event.ts:77
dispatchAppEvent client.ts:649
init client.ts:249
<anonymous> boot.ts:24
C2 async.ts:98
<anonymous> boot.ts:7
<anonymous> boot.ts:52
Uncaught (in promise) Error: e.provider is undefined
onMessage worker_sandbox.ts:93
onmessage worker_sandbox.ts:59
init worker_sandbox.ts:45
init worker_sandbox.ts:44
invoke worker_sandbox.ts:107
invoke plug.ts:108
dispatchEvent event.ts:67
dispatchEvent event.ts:77
loadConfig client.ts:279
init client.ts:239
async* boot.ts:24
C2 async.ts:98
<anonymous> boot.ts:7
<anonymous> boot.ts:52
```
My SECRETS page looks like this:
```yaml
OPENAI_API_KEY: "none needed"
```
And the AI section of my SETTINGS looks like this:
```yaml
ai:
  provider: openai
  defaultTextModel: "gemma3:1b"
  openAIBaseUrl: "http://localhost:11434"
  requireAuth: false
  textModels:
    - name: "ollama-gemma3:1b"
      provider: openai
      modelName: "gemma3:1b"
      baseUrl: "http://localhost:11434"
      requireAuth: false
```
I also already tried appending "/v1" to the URLs; that did not help. Is this simply not possible, or am I doing something wrong here?
Edit: I also tried "ollama" as the provider.