I’m no Docker expert, but how about using an env file and pulling a list of packages into the Dockerfile as a variable? Wouldn’t that layer be cached if nothing changes?
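As a hedged sketch of that idea (untested; the `EXTRA_PACKAGES` name is purely illustrative, not an actual SilverBullet setting — note that `$` must be written as `$$` inside `dockerfile_inline` so compose doesn’t interpolate it):

```yaml
# The package list comes from the environment (or an env file) as a build arg,
# so the RUN layer stays cached until EXTRA_PACKAGES changes.
services:
  silverbullet:
    build:
      context: .
      args:
        EXTRA_PACKAGES: ${EXTRA_PACKAGES:-pandoc}
      dockerfile_inline: |
        FROM ghcr.io/silverbulletmd/silverbullet:v2
        ARG EXTRA_PACKAGES
        RUN apt-get update && apt-get install -y $$EXTRA_PACKAGES
```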
At the moment I just use this in my docker compose file; however, it reinstalls the packages on every rebuild:
I think pandoc is such a small dependency (its size seems negligible to us Deep Learning engineers, where the bulk of an image can easily be 10 GB). I would suggest simply adding it when building the image.
Also, when we used it for the first time with a fresh container, the download always introduced a delay, which is not ideal and may also confuse less tech-savvy users, who will blame SB. Either way, I don’t see any benefit to running apt install on demand.
Projecting from my own experience with self-hosting: the size of the SB image (1.17 GB right now) plus texlive and pandoc would normally have no impact on the end user. But suppose there is such a use case I’m not aware of; it is also possible to add a CI job that publishes a slim variant of the image under an additional tag, built without texlive. That way those users are catered to, and most people default to the image that provides the best user experience.
After all, space/RAM left available is space/RAM wasted.
It seems the Docker image build times have increased from 3–5 min to 8 min since adding pdflatex and pandoc, which I don’t like. I haven’t checked the image size increase, but it’s probably also substantial. So I’m still looking at how to take this out again and install such packages more on the fly — keep the possible solutions coming.
The “Docker” way would basically be to fork the image and create a new immutable image. Using docker compose this can be done fairly easily:
```yaml
services:
  silverbullet:
    build:
      context: .
      dockerfile_inline: |
        FROM ghcr.io/silverbulletmd/silverbullet:v2
        RUN apt-get update && \
            apt-get install -y pandoc
```
Frankly, I haven’t tested that, but something like it should work. This also allows copying static binaries the user may have built themselves into the image.
Yeah, but we could implement a check so this only runs inside the container, rather than on a direct installation. I think this is only reasonable to support inside our own image.
Also, if you were to install from different package managers (I mean install from npm, pip, or apt), we could make the configuration not a single list but a table of lists, like the following:
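A hedged sketch of that shape (the key names are purely illustrative, not an actual SilverBullet setting):

```yaml
# Hypothetical config: one package list per package manager
packages:
  apt: [pandoc, texlive]
  pip: [mkdocs]
  npm: [prettier]
```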
Actually this is a bit more complicated than I considered. My first thought: oh right, we can only do this in a container because we don’t have root rights outside of it (and it’s questionable if you’d want to just have SB install “random” software on your host machine). But then I realized that even in the container SB doesn’t run as root…
Perhaps the @MrMugame strategy is still the more viable one…
Alternatively, we could use a “user land” package manager, like Nix, but honestly I think this adds a lot of complexity and scope to SilverBullet that I’m not really excited to add.
Ok, I just ported the v2 Docker image to be based on Alpine and removed pandoc. This shrinks the Docker image from 1.7 GB (!!) to around 300 MB.
To add pandoc or other custom packages, do what @MrMugame suggests:
In your compose.yml file:
```yaml
build:
  context: .
  dockerfile_inline: |
    FROM ghcr.io/silverbulletmd/silverbullet:v2
    RUN apk add pandoc
```
This is what I have on my instance now; it seems to work well.
This looks cool. But it also means this installation approach is confined to tech-savvy people. Then again, that doesn’t seem too demanding for the target audience of SB.
Yes, but I think we have to accept that SB is already targeted at tech-savvy people. And it’s likely this audience that may want to install additional packages inside a container.
Please correct me if I’m wrong, I’m not an expert in this area.
From what I understand, the denoland/deno:alpine container uses the glibc library. After some testing, it seems this can cause conflicts when running binaries installed via apk, since Alpine packages are typically compiled against musl.
A simple example you can try: `apk add ripgrep && rg -h`
This kind of issue likely affects any binary that depends on musl-linked libraries.
As far as I can tell, the only workaround would be to manually compile these packages against glibc, but I haven’t tested it yet, and frankly, it sounds like a painful and suboptimal solution, especially in a docker-compose setup.
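One quick way to see which libc a binary actually links against (a generic diagnostic, not specific to the SB image) is `ldd`:

```shell
# Print the shared libraries a binary resolves to.
# On glibc systems this lists libc.so.6; a musl-linked binary instead
# references /lib/ld-musl-<arch>.so.1 and may fail to start under
# a glibc-only loader.
ldd /bin/sh
```

Running this inside the container on a freshly `apk add`-ed binary should make the musl/glibc mismatch visible immediately.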
If there’s no clean fix for this, I would suggest switching to the denoland/deno:ubuntu image instead. It’s only about 90 MB larger. While I appreciate lightweight containers, storage is cheap, and I’d much rather have something that just works.
Interesting. I assumed the Deno Alpine image only added glibc to the image, rather than removing musl, so all Alpine software would just work (pandoc and git seem fine). In that case I may just go with a plain Ubuntu image; it doesn’t even need to include Deno, since I’m now adding the single-binary builds to the Docker image anyway.
I think Ubuntu is so much more popular than Debian that its apt quirks have become the norm now. So on the pragmatic side I expect fewer issues for users (and fewer issue reports for you) if you go with Ubuntu. I would’ve said the same even if the 7% difference were the other way.
At least this has been my experience on desktop which sadly made me switch back after testing Debian for a few weeks.
In the perfect world, Deno would build with musl when packaging for Alpine, but alas I can’t be bothered (nor trusted) to make this container myself…
I’m good with either. I myself run more Debian instances than Ubuntu, but I can understand why people would prefer a more up-to-date, feature-rich Ubuntu.
I do have to agree with Marek. People will be more familiar with Ubuntu.
I switched the base image to ubuntu:noble (the current LTS).
I added an additional `SB_APT_PACKAGES` environment variable. You can set it to a list of additional Ubuntu packages you’d like installed at runtime, e.g. `SB_APT_PACKAGES="pandoc ripgrep"`. If set, the Docker image will run `apt update && apt install -y $SB_APT_PACKAGES` when booted. By default this happens asynchronously (so the server is already running while the installation happens in the background), although you can override this behavior by setting `SB_APT_SYNC`.
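For example, in a compose.yml (assuming the variable semantics described above; the exact value expected for `SB_APT_SYNC` isn’t specified here — setting it at all is what toggles synchronous installation):

```yaml
services:
  silverbullet:
    image: ghcr.io/silverbulletmd/silverbullet:v2
    environment:
      SB_APT_PACKAGES: "pandoc ripgrep"
      # SB_APT_SYNC: "true"  # uncomment to block startup until install finishes
```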
You can still use the custom base image method, but the environment variable is probably the more user-friendly option.