We had an AI Hackweek at the company last week, which resulted in some thinking and reflection (in a SilverBullet note, obviously) which I will copy and paste here now:
## The “SilverBullet Way”
This is always evolving, but here is how I tend to approach problems in the context of SilverBullet and its positioning as a note-taking app (platform) for people with a hacker’s mindset:
- Build a solid foundation that enables end users to expand on it and experiment creatively (examples: templates and queries, but AI could add a few more)
- Let users go crazy
- Codify what works into Libraries (and, if necessary, plugs) and make it reusable that way
This is the way.
So from that perspective, let’s have a look at LLMs and where they could fit in.
## Core concepts in using LLMs
- Models: this is a rapidly evolving field, both for locally hosted and cloud-based models, so SB should be ready to expand here quickly. Luckily there are “standards” at the API level (for chat, embeddings), so this is fairly easy to stay up to date with, and silverbullet-ai seems to have a nice “provider” model already.
- For different use cases you want to select different models, sometimes for performance (quality, speed) and sometimes for privacy reasons. For instance, I would not send my entire space to OpenAI for generating embeddings (I’d use a local LLM for this), but when I use chat I want the latest and greatest, so perhaps gpt-4o or Claude. This may be different for other people (see the configuration sketch after this list).
- Prompts: Figuring out the right prompts is not always trivial and subject to experimentation; on top of that, prompts are model-specific. What works for one model won’t work for another. This means that ideally, prompts should not be hardcoded.
- Use cases: These are still to be determined, and ideally new use cases can be discovered, experimented with, and built simply by building on the foundation.
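To make the model-selection point concrete, here is a minimal sketch of what per-use-case configuration could look like, e.g. in a SETTINGS page. To be clear, this is made up for illustration: the `ai` key, the provider names, and the per-interaction defaults are assumptions, not existing silverbullet-ai syntax.

```
# Hypothetical configuration sketch, not actual silverbullet-ai syntax
ai:
  providers:
    openai:
      apiKey: "..."                     # cloud: best quality, used for chat
    ollama:
      url: "http://localhost:11434"     # local: private, fine for embeddings
  defaults:
    chat: openai/gpt-4o                 # latest and greatest for conversations
    embeddings: ollama/nomic-embed-text # never ship the whole space to the cloud
    transform: ollama/phi3              # cheap local model for in-place rewrites
```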
With this in the back of our minds, let’s see if we can classify some of the interactions we may have with an LLM.
## Types of interactions
### Chat
I see different use cases for chat.
Some that come to mind:
- Learning about a topic via a chat conversation. For example, I used it at some point to learn more about end-to-end encryption and how to implement it. I can talk to a top-performing model like gpt-4o about this and learn a lot, then rework my chat into a page as a note for future reference, which makes SilverBullet a useful environment to do this in. This use case is already well supported.
- GPTs: ChatGPT has this concept of GPTs. Effectively, this is a way to start a new chat with a pre-prompted model. You start the chat, but that model has already been given instructions that are not visible to you, and it may tap some additional resources (which are vector searchable). In the context of SB these may be references to pages in your space. While a simple concept, it’s actually very powerful. OpenAI has a GPT store; we could imagine building similar things with a Library of such prompts.
- Chat with my space. This would somehow cleverly pull pages in from my entire space based on the conversation. Not sure how this would work, but I think there can be something here.
To make this more concrete, let me throw in some concept code to show how we may specify this:
For example, a “Practice Dutch” GPT (we should really come up with different branding for this):
In `Library/AI/Practice Dutch`:
```
---
tags: template
hooks.ai.chat:
  model: gpt-4o
  name: Practice Dutch
  kickOff: |  # Need a better name for this
    **user:** |^|
---
You are a helpful assistant to help the user practice their Dutch skills. Therefore, only interact with the person in Dutch, never give an answer in any other language.
When the user struggles with a word, be helpful and translate the word to Dutch for them.
Below is a list of Dutch words that can be used to practice:
![[Library/AI/Practice Dutch/Words]]
```
(the point of the transclusion example is to emphasize that all SB features, like templates, can be leveraged for generating the prompt)
You could then fire off this GPT with the `AI: Chat` command, for instance allowing me to select “Practice Dutch”, which would then kick off a chat page along the lines of:
```
---
ai.chat: "[[Library/AI/Practice Dutch]]"
---
**user**:
```
### Page-based interactions
Some use cases that apply to a single page.
#### Transform
In-place transformation (of an entire page, or a selection):
- “Rewrite this less aggressively”
- “Expand these bullet points into prose”
- “Translate to German”
How these could be specified, in conceptual code:
In `Library/AI/Rewrite Less Aggressively`:
```
---
tags: template
hooks.ai.transform:
  model: phi3
  name: Rewrite less aggressively
---
Propose an alternative phrasing of the following text, keep it roughly the same length but phrased less aggressively:
|^|
```
(where `|^|` would be the placeholder for the page content or selection to run the prompt against)
You’d select some text, run `AI: Transform`, pick “Rewrite less aggressively” and the text would be replaced in place.
#### Extract/query/question
This would be for use cases where you don’t need any operation to happen on the text, but you’d like some sort of feedback or a question answered about it. “Query” would be a good name for this, but since we already have queries, that may be confusing. From a UX perspective, perhaps a side panel (RHS) could be used to render the result.
Some example use cases:
- Summarize
- Determine tone
- Give me writing feedback
In `Library/AI/Summarize`:
```
---
tags: template
hooks.ai.query:
  model: phi3
  name: TL;DR
---
Summarize the following into a list of bullet points:
|^|
```
You’d select some text, run `AI: Query`, pick “TL;DR” and would get the result in a side panel.
#### Action
This one is more open-ended, perhaps harder to generalize, and perhaps dangerous, but worth exploring.
Use cases I’m thinking of are:
- Rename the page based on a prompt
- Add frontmatter based on a prompt
These are currently hardcoded into the plug, but perhaps there’s a way to generalize them. The big question would be: what actions would we (and should we) expose? All commands?
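Purely speculatively, a generalized version could follow the same template pattern as the examples above, exposing a small whitelist of safe actions rather than arbitrary commands. Note that the `hooks.ai.action` hook and the `action: setFrontmatter` value are made up for illustration:

```
---
tags: template
hooks.ai.action:
  model: phi3
  name: Suggest frontmatter
  action: setFrontmatter  # hypothetical: one of a small whitelist of safe actions
---
Suggest appropriate frontmatter attributes (as YAML key-value pairs) for the following page:
|^|
```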
Honestly, this area I haven’t really figured out yet.
### Space-based interactions
This is another area where I don’t have concrete suggestions; I just sense there are use cases here. Perhaps just the chat use case of “talking to your space.” The building blocks here would be vectorization of your entire space (which is already supported), pulling in relevant content based on a chat or prompt, and then… doing stuff with it. Again, I think chat is the most obvious UX to do this in.
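To sketch how “talking to your space” might be specified with the same chat hook as above (everything beyond the existing template mechanism is hypothetical here, in particular the `augment` key and the idea of injecting the top vector-search hits into each turn):

```
---
tags: template
hooks.ai.chat:
  model: gpt-4o
  name: Chat with my space
  augment: vectorSearch  # hypothetical: inject the top-k most similar pages into each turn
---
You are an assistant that answers questions about the user’s notes. Base your answers only on the pages provided with each message, and link to them using [[page name]] syntax.
```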
Anyway, some food for thought.
And I expect that @justyns will have implemented all this by the morning.