LLM use in community and SilverBullet core contributions

Hi all,

Unfortunately I’ve had to spend more of my time than I would like dealing with the results of the “AI revolution” lately. While I’m sure there are people who benefit from LLMs, there are bad side effects as well:

  • In this community I’ve seen a (manageable, but still annoying) amount of LLM-generated posts that sounded good but contained misleading or simply wrong information. I’ve had to ban one user for this reason. There’s also been a flood of LLM-generated spam, but that’s easier to pick out.
  • In PRs I’ve had to spend a lot of time reviewing LLM-generated code that contained many obvious and less obvious bugs. This one is probably a bit of a mixed bag, although I still think it adds up to a net negative for what I can handle. I’ve had “drive-by” contributions of dozens of PRs containing many thousands of lines of code, which I simply had to close because I could not possibly review them. The ones I did review often had subtle but very serious bugs in them.

While I’m sure that many of these contributions are well intentioned, for me they are a net negative, and therefore I’ve written an LLM policy for the project:

Note that this policy only applies to the community and the SB main code base (both of which I feel responsible for maintaining at a high standard). Of course, if you want to use LLMs for your own purposes, you are free to do so. I do urge you to also understand the risks, even when, for example, generating Lua code with an LLM: SilverBullet offers you “sharp knives”, and the assumption is that you understand what code you run, even in the privacy of your own space.

Let me know if there are any questions.


This is a really reasonable stance, and I think you did a great job of highlighting the practical reasons behind the decision (I agree with the moral ones too; I just find that people who use AI regularly even today are likely to be aware of those and not convinced by them).
