My Blog

Starlight ✨

April 25, 2025

Once again I’m updating the design on this blog. This time I’m calling it Starlight, and including an animated starfield background. Indigo, teal, and purple make up the new color scheme, and even the “light” theme has a contrast-heavy, somewhat-dark design.

I also went back to a vector profile image, now updated to match my current hairstyle. The watercolor was fun, but it doesn’t fit the vibes now.

At some point I’ll actually fix the search too.

LLMs

March 22, 2025

Let’s explore the state of LLMs in early 2025. Everyone knows about ChatGPT, and while GPT-4o is great, I’m far more interested in local LLMs I can run myself with my own hardware, software, and data.

I’ve tried a wide range of models, primarily on Apple Silicon, and overall I’m really impressed with the state of things. Meta’s Llama really pushed things forward with its initial launch a while back, and set a precedent for its competitors. Llama 3 is great, and generally my preferred model for general chat and prose. Google’s Gemma 3 is incredibly good, and feels competitive with larger reasoning models while being much faster. Qwen2.5 is generally useful as a local alternative to GitHub Copilot, and works well when integrated into VS Code with Continue.

Giving a PDF, text, or image file as additional input can be really powerful. With AI tools being used for so much automation these days, things like job applications are much more doable with proper use of LLMs. I’m certainly not advocating for lying about skills, mass applying, or any other broad automation, but if your resume and cover letter are going to be reviewed by an AI before a human, you should at least get Llama’s take on your resume first!

Llama

I find Llama especially good at creative writing tasks, particularly when I want to do brainstorming or roleplay conversations. Setting a system prompt for a particular context is very effective for defining a “personality” and really shaping its responses.

I’ve used it to flesh out worldbuilding details simply by asking questions and letting it respond in character. It’s fantastic for generating variations on themes, offering alternative ideas when I get stuck, or even just bouncing ideas off of.
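The system-prompt trick is easy to script against a local server. Here’s a minimal sketch assuming Ollama is serving a Llama model at its default localhost:11434 endpoint; the model tag and the character text are just examples, not anything from my actual setup:

```python
# Sketch: defining a "personality" via a system message on Ollama's /api/chat.
# Assumes a local Ollama server with a Llama model already pulled.
import json
import urllib.request

def build_chat_payload(system: str, user: str, model: str = "llama3.1:8b") -> dict:
    """Assemble an Ollama /api/chat request with a system message."""
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }

def chat(payload: dict) -> str:
    """Send the payload to a locally running Ollama instance and return the reply."""
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Example (requires a running Ollama server):
# payload = build_chat_payload(
#     system="You are Mira, a sardonic innkeeper in a port town. Stay in character.",
#     user="What rumors have you heard lately?",
# )
# print(chat(payload))
```

Every follow-up question goes in as another user message, and the system message keeps the character consistent across the whole conversation.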

My favorite Llama models:

  • Llama 3.1 8B Instruct

    • Great at writing prose, but struggles with complex topics and questions. Often misses details of the prompt.
  • Llama 3.2 3B

    • Remarkably fast, runs acceptably on basically any hardware. Good enough for general chat and prose, but really struggles with information accuracy and question understanding.

Gemma

For more functional and instructional use cases, Gemma 3 is remarkable. I’ve found it better at prompt understanding than DeepSeek R1 distills, despite not being a reasoning model, and it’s far faster than its competitors that have similar understanding properties.

It refuses a lot of requests, triggers warnings very easily, and adds disclaimers and support information to a wide range of content. By default it’s not very useful for creative tasks, since nearly any fictional subject can read as too risky to it. Specifying a relevant system message alleviates this somewhat, but I often find Llama more useful for creative prose.

  • Gemma 3 4B Instruct
    • Handles complex questions incredibly well compared to Llama and even the DeepSeek R1 distills, at a dramatically improved token rate.
    • Really likes adding lots of Markdown formatting to responses, while Llama typically responds in plain text.
    • The image input handling is very good: it discerns a lot of detail from images, understanding subjects, lighting, composition, and style, and can discuss the image with all of that context.

Qwen

Qwen’s models are really nice. The 1.5B param Coder model is super lightweight, making it quite usable for local code autocomplete. The 3B param model is good enough to make reasonably accurate larger code changes with the right prompts and context.

I highly recommend trying Continue for integrating local LLMs into VS Code. My MacBook Air is more than capable of replacing GitHub Copilot with Qwen2.5 on LM Studio/Ollama, with obvious advantages in cost and flexibility. It’s not perfect (GitHub’s integration is much more polished than Continue’s), but the option of a local, offline code assistant is very compelling. I’ve found it useful enough to be worth the setup time.
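To sketch that VS Code setup: something like the following in ~/.continue/config.json points chat at the Instruct model and autocomplete at the Coder model via Ollama. Treat the field names as an assumption based on Continue’s config.json schema at the time of writing, and the model tags as examples; check Continue’s docs for your version.

```json
{
  "models": [
    {
      "title": "Qwen2.5 7B",
      "provider": "ollama",
      "model": "qwen2.5:7b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen2.5 Coder 1.5B",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```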

I like using the general Instruct model for technical questions, though I could see myself fully moving over to Gemma 3 now that it’s out. Qwen strikes a nice balance between technical and creative, making it a good enough general-use model, but with Llama and Gemma both available too, I’ll likely not use it as much.

  • Qwen2.5 Coder 1.5B

    • Incredibly fast model that’s helpful for code autocomplete, but struggles with more complex code and questions.
  • Qwen2.5 Coder 3B Instruct

    • Slightly slower than the 1.5B param model, but writes fairly usable code, especially when given enough context.
  • Qwen2.5 7B Instruct

    • Solid middle ground between Llama and Gemma. It’s fast enough, has good technical knowledge, and can answer questions well. Writes more literally than Llama’s more casual responses, while being far less formal than Gemma.

If you have the hardware for it (and you very likely do!), you should definitely give local LLMs a try. Even on a fairly low-end PC, Llama 3.2 is quite useful and can run on a tiny amount of RAM. There are a huge number of models to try out, with many specialized use cases.

NTFS on Mac

February 24, 2025

Using NTFS on macOS is still a pain. It at least mounts volumes as read-only by default, which is better than nothing, but if you want to write to the disk it’s not as straightforward.

Tuxera and Paragon both have commercial solutions, but I’d rather just use something janky and unsupported, because I’m me. The only reason I even want to write to an NTFS volume on macOS is because my TV only supports playback from FAT32 and NTFS filesystems, and FAT32’s 4 GB limit is not ideal for modern media resolutions, even with newer codecs like AV1.

Let’s just use random Homebrew packages and accept the potential for data loss.

brew install --cask macfuse mounty

brew tap gromgit/homebrew-fuse
brew install gromgit/fuse/ntfs-3g-mac

Simple enough: my TV can play back media written with this setup, and that’s all I really wanted. macFUSE lets us use the ntfs-3g FUSE driver (which is not intended for macOS, and is very unsupported), and the Mounty app wraps the driver in a nice GUI that can auto-remount NTFS volumes read/write. It’s not a terrible UX overall once it’s working.


April update: After a macOS reinstall, I decided to see if the commercial options were notably different/better. I’m using the latest Tuxera NTFS for Mac now, and it’s… fine. The initial setup was quite similar, still requiring manually enabling user kernel extension management from Recovery, and the UX once configured is very similar to what Mounty does for free. I’m sure there are advantages to the commercially supported, actually maintained NTFS driver compared to a very unsupported Linux FUSE port, but so far I’m not seeing a significant difference in usability under normal conditions.

fish is Also Great

February 22, 2025

fish is the first shell I’ve used that feels like it’s actually designed around how I want to use a shell. As I’ve gotten more into Python, I’ve loved the convenience and power of a simple scripting language, and I’ve grown increasingly annoyed at how limiting and unintuitive POSIX shells are. It got to the point where I was actually considering writing my own Python-compatible shell, where Python syntax could be intermixed with ordinary commands: basically a Python script where subprocess.run() would be called most of the time.

I’ve used bash heavily since I was a kid, and switched to zsh after macOS changed defaults for whatever reason (maybe just for consistency between my platforms), but have always felt both were lacking in various ways.

Then I finally tried fish. I no longer feel like I have any desire to reinvent the shell, because fish did it perfectly.

For the basic user config, the default behavior is intuitive and nice, but easily adjusted. Here’s an example of some of my user config in ~/.config/fish/config.fish:

# alias is great for running flatpak CLI apps:
alias wine="/usr/bin/flatpak run --command=wine --file-forwarding net.lutris.Lutris"

# nvm works well enough without the global init:
set -gx NODE_VERSION 'lts/*'
alias node="$HOME/.nvm/nvm-exec node"
alias npm="$HOME/.nvm/nvm-exec npm"
alias npx="$HOME/.nvm/nvm-exec npx"
# You can get full functionality of nvm if needed by following their docs, but this is simpler and doesn't slow down the shell the same way, as long as you only need one node version at a time.

# abbr is like alias but better since it expands to the full command interactively and in history:
abbr -a yt yt-dlp
abbr -a artisan php artisan
abbr -a su sudo su -
abbr -a serve python -m http.server

Adding more complex functions interactively is nice: just run funced -s function_name and you’re dropped into an editor, where the syntax is simple and powerful:

function unzip_all_ja
  # Extract each zip in the current directory (decoding Shift-JIS filenames),
  # then delete the archive if extraction succeeded
  for z in *.zip
    unzip -O Windows-31J $z -d (string replace '.zip' '' $z) && rm $z
  end
end

The executable search path can be extended via the PATH variable as usual, but fish also persists universal variables in ~/.config/fish/fish_variables, which is far more powerful and intuitive. Adding a directory of executables is as simple as fish_add_path <dir>, which automatically updates the path variable universally.

For adjusting much of the global configuration, including the prompt style, color schemes, etc., there is a fish_config function that launches a web interface for interactively configuring the shell. Every built-in fish_* function can be edited with funced, allowing easy customization of default behavior like prompts, terminal titles, command-not-found handling, and more.

The scripting language for functions and logic is similar to bash, but simplified and much more powerful and intuitive. The shell wraps man by default to add all of the fish documentation, and man fish-doc and man fish-language are excellent references for learning it.

If you want to switch, man fish-tutorial and man fish-for-bash-users are great introductions and cover everything a typical user would need to know. I highly recommend trying it out!

Python is Great

February 21, 2025

Python is my favorite language at this point. It’s so incredibly powerful and easy to work with. The multiprocessing module is simple but so, so useful. I adore how handy argparse is for making intuitive CLI tools, and SQLite being built in has completely converted me away from traditional hosted databases for anything short of enterprise-scale deployments.
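As a taste of that out-of-the-box power, here’s a minimal sketch pairing argparse with the built-in sqlite3 module. The little notes CLI itself is hypothetical, just to show the shape:

```python
# Tiny argparse + sqlite3 CLI sketch: store a note, print its row id.
import argparse
import sqlite3

def make_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Tiny note-taking CLI")
    parser.add_argument("note", help="text to store")
    parser.add_argument("--db", default="notes.db", help="SQLite database file")
    return parser

def add_note(db_path: str, text: str) -> int:
    """Insert a note, creating the table on first use; returns the new row id."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"
        )
        cur = conn.execute("INSERT INTO notes (body) VALUES (?)", (text,))
        return cur.lastrowid

def main(argv=None) -> None:
    args = make_parser().parse_args(argv)
    print(add_note(args.db, args.note))

# e.g. main(["water the plants"]) inserts a row into notes.db
```

No ORM, no database server, no dependencies: argparse gives you --help and validation for free, and sqlite3 gives you durable storage in one file.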

Everything I write for personal tooling is in Python now, and I couldn’t be happier with it. It’s a virtually perfect language, with the exact right functionality included out of the box, and PyPI modules available for anything more obscure.

I love building little helpful tools for various sysadminy things that I do, and some of my more useful tools are publicly available. One is ren, a simple batch file rename utility that makes quickly doing bulk naming operations with Python formatting syntax easy. Another is imgfind, a tool for finding and working with images. It works similarly to GNU findutils, searching image metadata, and also includes features for modifying images, including a simple way to recompress/optimize images in bulk.

My home repository, which is becoming increasingly less relevant to an actual home directory, includes a bunch of tiny Python-based utilities. dedup.py makes it easy to deduplicate files by linking/deleting, with some simple filters. relink.py makes bulk-modifying symlinks easy, with simple path string substitutions and recursive operation. relink-relative.py modifies symlinks in bulk to make them relative instead of absolute, which is useful for portability. ytdl-db.py works as a standalone utility or a yt-dlp postprocess hook to record metadata about downloaded videos and playlists to a SQLite database; long-term I plan on integrating it into mytube to make automatic import/discovery of archived video metadata easy.
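To show how little code a tool like that needs, here’s a hedged sketch in the spirit of dedup.py (not the actual script): hash file contents, then hardlink later duplicates to the first-seen copy.

```python
# Content-based dedup sketch: identical files become hardlinks to one copy.
import hashlib
import os
from pathlib import Path

def file_digest(path: Path, chunk: int = 1 << 20) -> str:
    """SHA-256 of a file, read in chunks so large files don't blow up memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def dedup(root: Path) -> int:
    """Hardlink duplicate files under root to their first-seen copy.

    Returns the number of files replaced with links.
    """
    seen: dict[str, Path] = {}
    linked = 0
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        digest = file_digest(path)
        original = seen.setdefault(digest, path)
        if original is not path and not path.samefile(original):
            path.unlink()
            os.link(original, path)
            linked += 1
    return linked
```

The real thing wants filters, a dry-run mode, and symlink options, but the core logic really is just a dictionary keyed on content hashes.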

All of these are relatively simple scripts, but they show the power in the simplicity of Python as a language and platform. It’s an excellent alternative to bash scripts when you need a bit more power and flexibility, while still being fairly portable and lightweight. I used to use PHP or Node.js for this kind of thing, since those were the languages I knew best, and had moved to Go for some things that needed parallel processing. Python has all of the advantages of each of those for utility scripts, while being far more likely to be available on any given system.

While the majority of Python I write is for command-line tools, the GTK and Qt integrations are excellent and make powerful GUI apps quick and simple to build. They’re also a great example of where SQLite shines for persisting settings, UI state, etc. in a cross-platform, easily extensible way.

For web apps, I still greatly prefer PHP, since having a template engine built-in is just so convenient, but I don’t see myself ever trying to use PHP or Node.js for anything else now. Except this blog for now I guess, unless I find a really nice Python SSG blogging platform.