Writing code is fun! Documentation and change logs not so much...
I love writing code and creating new things. Creating documentation, though, is a whole other thing. That is why I am trying my best to help myself get better at it.
Developers want to do but one thing: Develop.
The thing is, once you are in the "zone" and typing out your functions - or classes, no judgement here - you are not thinking: "Oh man, I cannot wait to stop what I am doing to explain in excruciating detail all the decisions I have just made to develop this amazing function that fixes bug #1231. People need to know that there was an array that was unordered and missing the leaf pointer for the..."
Let us all be honest: we don't want to write it. We will maybe - MAYBE - put a comment in the code that explains that weird one-liner we wrote because it felt better than creating a massive nested for loop with if statements. We just want to add the code we want, but here is the problem: we need the change log, and we need the documentation. Without them, nobody else will understand what happened there fast enough for it to be worth their time. Not even you, a year later!
Now here is the thing: there is this "new" thing called LLMs (Large Language Models) out there. I think you might have heard of them. It is a new little thing based on NLP (Natural Language Processing) and Neural Networks that can use massive amounts of data... Wait, who am I kidding? You have all been using ChatGPT and others.
Generative AI models are really great at predicting the next thing you would potentially say, just like the auto-complete models we have used in the past - only much more sophisticated, and larger by orders of magnitude. Some people say they are good enough to replace programmers and to "vibe" your way through code; I am here to say that this is not true.
While you COULD make a PoC (Proof of Concept) with some great tools that pipe through multiple "agents" and produce code convincing enough to at least get something out the door, we should not rely on those for our code bases just yet.
While there are many other great uses for LLMs, I believe using them to cover for our shortcomings is the best approach. Here is how I am actively trying to help myself write better documentation - or at least get a reminder when I have forgotten to do so.
Here is how I started my tests:
def main [
    --model (-m): string = "gemma3:12b"
    --ollama-endpoint (-e): string = "http://localhost:11434/"
    --update-changelog
] {
    # Gather the latest commit's diff and the current changelog contents.
    let $changes = git diff HEAD~ HEAD
    let $changelog_file = open ./CHANGELOG.md

    let $system_prompt = "You are a CHANGELOG verifier for our project. You will receive the CHANGELOG.md contents alongside the latest changes for the project in git diff format. Once you receive both, notify the user of potential changes they might have missed in the changelog, and flag whether its contents are acceptable by setting 'needs_update' to true when the user did not record important changes in the changelog file. Any change that could alter the way the app works must be shown in the CHANGELOG.md"
    let $prompt = $"CHANGELOG:\n($changelog_file)\n\nCHANGES: \n($changes)"

    # The format field is a JSON schema: it forces the model to reply with a
    # machine-readable verdict instead of free-form prose.
    let $request_object = {
        model: $model,
        prompt: $prompt,
        stream: false,
        system: $system_prompt,
        format: {
            type: "object",
            properties: {
                needs_update: {
                    type: "boolean",
                    description: "Whether or not the changes are properly described in the changelog file. Set to true when it needs updating."
                },
                suggestions: {
                    type: "string",
                    description: "Suggestions for what could be changed in the changelog file"
                }
            },
            required: ["needs_update", "suggestions"]
        },
    }
    let $response = $request_object | http post $"($ollama_endpoint)api/generate" --content-type "application/json" | get response

    # Without --update-changelog, only report; never touch the file.
    if (not $update_changelog) {
        return $response
    }

    let $response_object = $response | from json
    if (not $response_object.needs_update) {
        return $"No need to update, but here are the suggestions:\n($response_object.suggestions)"
    } else {
        print $"We need to update, here are the suggestions: ($response_object.suggestions)"
    }

    # Second pass: ask the model to draft the new changelog entry itself.
    let $version = open ./version.json | echo $"($in.major).($in.minor).($in.patch)"
    let $system_prompt = $"You are a CHANGELOG modifier for our project. You will receive a suggestion for how to update a CHANGELOG.md alongside the latest changes in the repository and the current CHANGELOG contents. Please return the changes you deem necessary for the changelog to be updated with the proper format. The current version of the project is ($version)"
    let $prompt = $"($prompt)\n\nSUGGESTIONS:\n($response_object.suggestions)"
    let $request_object = $request_object | update prompt $prompt | update system $system_prompt | update format {
        type: "object",
        properties: {
            changes: {
                type: "string",
                description: "The version changes after the ## version header."
            }
        }
    }
    let $response = $request_object | http post $"($ollama_endpoint)api/generate" --content-type "application/json" | get response | from json

    # Slot the new entry right below the "# CHANGELOG" header, adding the
    # version header ourselves if the model left it out.
    let $changelog_file = open ./CHANGELOG.md | lines
    if ($response.changes =~ $"## ($version)") {
        return ($changelog_file | insert 1 $response.changes)
    } else {
        return ($changelog_file | insert 1 $response.changes | insert 1 $"## ($version)")
    }
}
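The last step of the script - slotting the generated entry in right after the `# CHANGELOG` header - boils down to a list insert at index 1, with a fallback that adds the version header when the model forgot it. Here is the same logic as a rough Python sketch, just to make it easy to follow; the `insert_section` name is mine, not part of the script:

```python
def insert_section(changelog_lines, version, changes):
    """Insert a new version section right below the '# CHANGELOG' header."""
    lines = list(changelog_lines)
    if f"## {version}" in changes:
        # The model already included the version header in its output.
        lines.insert(1, changes)
    else:
        # Add the header ourselves, then the body right under it.
        lines.insert(1, f"## {version}")
        lines.insert(2, changes)
    return lines

updated = insert_section(
    ["# CHANGELOG", "## 0.2.17", "- Older entry"],
    "0.2.18",
    "- **Added:** SplitTeamModal for team splitting.",
)
# The header stays on top, and the new section lands right below it.
```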
The main idea here is: I want this script to test whether I should update my CHANGELOG.md file or not, and if so, generate what the new content should be. But here is the thing: not all models are made the same, and not all of my agent machines in GitLab can run Ollama. Given those constraints, I had to test many models on my Ollama server, which would be used by all my pipelines.
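The reason testing across models is even feasible is the structured-output schema: every model is forced to answer with the same machine-readable object. For anyone outside Nushell, here is roughly what the script sends to Ollama's `/api/generate` endpoint, written as plain Python; the prompt strings are abridged placeholders, not the real prompts:

```python
import json

# The same payload the Nushell script builds; "format" carries a JSON schema
# that constrains the model's reply (Ollama's structured-output support).
payload = {
    "model": "gemma3:12b",
    "stream": False,
    "system": "You are a CHANGELOG verifier for our project. ...",  # abridged
    "prompt": "CHANGELOG:\n...\n\nCHANGES:\n...",  # abridged
    "format": {
        "type": "object",
        "properties": {
            "needs_update": {"type": "boolean"},
            "suggestions": {"type": "string"},
        },
        "required": ["needs_update", "suggestions"],
    },
}

body = json.dumps(payload)  # what actually goes over the wire

# The "response" field then decodes straight into a dict, no prose to scrape:
verdict = json.loads('{"needs_update": true, "suggestions": "Mention the new modal."}')
```

Because every candidate model has to fill the same two fields, swapping models in and out of the pipeline only changes the quality of the suggestions, not the plumbing around them.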
I have found that many of the models I tried were either not great at judging whether the changelog file looked good enough, or did not create good enough descriptions for the new version. But so far this works well enough with gemma3:12b that I will keep using it in my local environments.
Here is how it looks when testing it with my Discord bot for Leetify scores and team management:
DiscoRec on main [?⇕]
nu ⌁ : nu ./changelog_changes.nu --update-changelog
We need to update, here are the suggestions: The new code introduces a SplitTeamModal, a Posthog configuration, and modifies the teams.py file. These changes, especially the modal which introduces a new user interaction flow, should be documented in the CHANGELOG.md. Here's a suggested addition to the CHANGELOG:
## 0.2.18 - YYYY-MM-DD
- **Added:** Implemented a `SplitTeamModal` allowing users to select players for team splitting via a Discord modal.
- **Added:** Initialized PostHog integration for analytics and debugging. Requires setting `POSTHOG_API_KEY` and `POSTHOG_HOST` environment variables.
- **Refactored:** Updated `teams.py` to remove placeholder logic, which has no effect on the functionality of the bot.
╭────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ 0 │ # CHANGELOG │
│ 1 │ ## 0.2.24 │
│ 2 │ ## 0.2.18 - $(date +%Y-%m-%d) │
│ │ │
│ │ - **Added:** Implemented a `SplitTeamModal` allowing users to select players for team splitting via a Discord modal. │
│ │ - **Added:** Initialized PostHog integration for analytics and debugging. Requires setting `POSTHOG_API_KEY` and `POSTHOG_HOST` environment variables. │
│ │ - **Refactored:** Updated `teams.py` to remove placeholder logic, which has no effect on the functionality of the bot. │
│ 3 │ │
│ 4 │ ## 0.2.17 │
│ 5 │ │
│ 6 │ - Leetify API endpoint and host has changed, this update creates two new environment variables to fix this: │
│ 7 │ - LEETIFY_API_ENDPOINT: The endpoint with the protocol and ending without the trailing slash. │
│ 8 │ - LEETIFY_PLAYER_SCORE_API_PATH: The path of the leetify API to call for the profile information without the prefix and suffix slashes. │
│ 9 │ │
│ 10 │ ## 0.1.6 │
│ 11 │ │
│ 12 │ - Changed the \_\_repr\_\_ of the Player class for using discord mentions and the player id instead of the unknown overall score. │
╰────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Yep, I know... Not perfect yet, but it already gives me most of the pieces I need. The most important one: it won't do anything if the LLM believes the changelog is already good enough. That was my pet peeve: I do not want something rewriting my words; I want something that helps me write more, and write better. And if I do want automatic changes written for me, renovate-style, in my pipelines, I can just add the --update-changelog flag to let it do its thing.
I am an advocate of the idea that our words reflect our identity; therefore, our work should contain our true thoughts. If it gets processed so much that we no longer recognize ourselves in our own texts, we have successfully been replaced and removed from the equation.
Call me old-fashioned, but I believe there is value in honing your skills and doing things yourself, rather than getting something out faster without understanding what you have got. Ask YOURSELF: if you do not understand it, is it truly yours?
Next up: I will be writing pre-commit hooks and/or PR pipelines to automatically check for and create those changes. Will I post it? I guess you will have to subscribe to find out.