My Blog

Astro 💫

Back in 2020, I migrated this blog from Jekyll to Gatsby. Gatsby was much more flexible and feature-rich than Jekyll, but I always had concerns about its long-term maintainability, as it carried a lot of seemingly unnecessary complexity. Gatsby worked by transforming content sources (in my case, a collection of Markdown files) into a GraphQL API, then providing a React app framework to generate pages from the GraphQL data.

Since then, Gatsby has effectively been abandoned, and many more competitors have arrived. I decided to try Astro, and the migration from Gatsby to Astro was incredibly easy. My build times are dramatically improved, and Astro is much easier to work with. I’m loving the simplicity, and the JSX-like syntax made porting my React components over effortless. Apart from some changes to browser preloading, and removal of the (already broken) Algolia search component, the end result is nearly identical to what I had with Gatsby but with far less complexity. I feel like Astro has struck the perfect balance between features and complexity, with the ability to easily extend it for more advanced use cases.

I’ll probably end up moving platforms again in a few years. It seems like that’s just what I do with this site.

Switch 2 🎮

The Nintendo Switch 2 is neat and kinda weird. I mostly really like it. I still haven’t gotten my invite to order one from Nintendo, but I knew someone who got several preorders, so I bought one from him at MSRP.

First impressions were a bit weird. I wanted to just quickly migrate my save games from my Switch 1, but apparently it requires that both consoles be plugged into first-party power adapters to start the transfer. Luckily, it only checks once when starting the process on each device, so I was able to use the Switch 2 power adapter for both devices, temporarily plugging each in only during the initial check. My other USB-C PD options were not accepted by the migration tool even when they powered and charged both devices perfectly fine. This was especially silly because the transfer only took about 5 minutes!

Once set up, it’s a Switch, but better. In docked mode, every Switch game just runs way better, and the Switch controllers generally work as expected. It’s annoying that Switch 1 controllers can’t power it on, because that’s definitely an intentional feature restriction to get people to buy a new Pro Controller, but BOTW has never looked better at 1440p 60 on my OLED TV.

In handheld, the display is noticeably larger, brighter, and more saturated than the original Switch LCD. VRR and 120 Hz support are very nice, and make a big difference in many games that struggle to keep a consistent V-Sync frame rate. I’m hopeful that they figure out VRR support on the dock despite the DisplayPort to HDMI conversion limitations, because it’s incredibly helpful for some games.

Switch 1 games that haven’t had proper Switch 2 patches yet look pretty bad in handheld play. The 720p game resolution is resampled to the 1080p display with only basic filtering, so every pixel is mismatched. UIs look especially blurry, but anything with pixel-sized detail looks quite bad. Any high-contrast edge gets blurred across pixels, and it looks noticeably worse than on the Switch 1’s native 720p display. For games with 1080p handheld support, though, the Switch 2 is a huge upgrade from the original, even if it still falls behind the overall display quality of the Switch OLED model.

I would love to see better software features for either running the 1080p docked mode in some games while handheld, or a broader range of Switch 2 compatibility patches to run at native resolution on handheld, at least for the UI elements if nothing else. Almost every game should have > 720p UI available for docked mode on Switch 1, so it shouldn’t typically require any new assets.

The console is quite heavy and uncomfortable to hold for long periods. The Joy-Con shape is similar to the original Switch, but with the larger size and added weight (it’s even a bit bigger than the Steam Deck!), it would’ve been really nice to have something more ergonomic for the controllers. Coming from a Steam Deck, it’s far less enjoyable to hold.

The 256 GB internal storage is a great improvement from 32 GB, but I have enough digital games to still need a microSD Express card. Luckily, Nintendo’s partnered Samsung 256 GB card seems to be in stock at MSRP everywhere, so they clearly planned for this. I do wish it still supported SDXC for Switch 1 games, similar to how PS4 games can be played on PS5. SDXC is much cheaper (less than half the price) for now.

I named my original Switch “Squiddy”, as I mainly got back into modern Nintendo consoles for the Splatoon series. To follow up on that, my Switch 2 is named “Ikatsu” (squid + strong). I considered に (“ni”/2) instead of つ since it’s a Switch 2, but I like the sound of いかつ more. I also considered くろうみ since it’s a black sea creature. Naming electronics is silly but fun.


Update: I got my purchase invite on July 23, well after I’d already gotten used to having my Switch 2 🙃

Starlight ✨

Once again I’m updating the design on this blog. This time I’m calling it Starlight, and including an animated starfield background. Indigo, teal, and purple make up the new color scheme, and even the “light” theme has a contrast-heavy, somewhat-dark design.

Also went back to a vector profile image, now updated to match my current hairstyle. The watercolor was fun, but doesn’t fit the vibes now.

At some point I’ll actually fix the search too.

LLMs

Let’s explore the state of LLMs in early 2025. Everyone knows about ChatGPT, and while GPT-4o is great, I’m far more interested in local LLMs I can run myself with my own hardware, software, and data.

I’ve tried a wide range of models, primarily on Apple Silicon, and overall I’m really impressed with the state of things. Meta’s Llama really pushed things forward with its initial launch a while back, and set the precedent for its competitors. Llama 3 is great, and generally my preferred model for general chat and prose. Google’s Gemma 3 is incredibly good, and feels competitive with larger reasoning models while being much faster. Qwen2.5 is generally useful as a local alternative to GitHub Copilot, and works well when integrated into VS Code with Continue.

Giving a PDF, text, or image file as additional input can be really powerful. With AI tools being used for so much automation these days, things like job applications are much more doable with proper use of LLMs. I’m certainly not advocating for lying about skills, mass applying, or any other broad automation, but if your resume and cover letter are going to be reviewed by an AI before a human, you should at least get Llama’s take on your resume first!
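
For text-based files, this can be as simple as a shell one-liner. Here’s a minimal sketch assuming Ollama is installed and running, using poppler’s pdftotext to extract the PDF text (resume.pdf is a hypothetical file name):

# Sketch: ask a local Llama for feedback on a resume PDF
# (assumes Ollama is running; pdftotext comes from poppler)
brew install poppler
ollama run llama3.1:8b "Review this resume and point out its weaknesses: $(pdftotext resume.pdf -)"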

Llama

I find Llama especially good at creative writing tasks, particularly when I want to do brainstorming or roleplay conversations. Setting a system prompt for a particular context is very effective for defining a “personality” and really shaping its responses.

I’ve used it to flesh out worldbuilding details simply by asking questions and letting it respond in character. It’s fantastic for generating variations on themes, offering alternative ideas when I get stuck, or even just bouncing ideas off of.
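
To make a persona stick across sessions, the system prompt can be baked into the model itself. Here’s a minimal sketch using Ollama’s Modelfile format; the model name and persona are placeholders:

# Sketch: bake a persona into an Ollama model via a Modelfile
# (the "worldbuilder" name and system prompt are placeholders)
cat > worldbuilder.Modelfile <<'EOF'
FROM llama3.1:8b
SYSTEM "You are a wandering cartographer in a drowned coastal kingdom. Stay in character, and offer a few variations whenever asked for ideas."
PARAMETER temperature 0.9
EOF
ollama create worldbuilder -f worldbuilder.Modelfile
ollama run worldbuilder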

My favorite Llama models:

  • Llama 3.1 8B Instruct
    • Great at writing prose, but struggles with complex topics and questions. Often misses details of the prompt.
  • Llama 3.2 3B
    • Remarkably fast; runs acceptably on basically any hardware. Good enough for general chat and prose, but really struggles with information accuracy and question understanding.

Gemma

For more functional and instructional use cases, Gemma 3 is remarkable. I’ve found it better at prompt understanding than the DeepSeek R1 distills, despite not being a reasoning model, and it’s far faster than competitors with similar comprehension.

There’s a lot it will refuse to do, though: it triggers very easily on a wide range of content, adding warnings, disclaimers, and support information. By default, it’s not very useful for creative tasks, as nearly any fictional subject can come across as too risky for it. Specifying a relevant system message somewhat alleviates this, but I often find Llama more useful for creative prose.

  • Gemma 3 4B Instruct
    • Handles complex questions incredibly well compared to Llama and even DeepSeek R1 distills, at a dramatically improved token rate.
    • Really likes adding lots of Markdown formatting to responses, while Llama typically responds in plain text.
    • The image input handling is very good, able to discern lots of detail from images. It understands subjects, lighting, composition, and styles, and can discuss the image with all of that context (see the sketch after this list).
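
As a sketch of what that image input looks like in practice, here’s a request against Ollama’s local API (photo.jpg is a placeholder; 11434 is Ollama’s default port):

# Sketch: send an image to Gemma 3 via Ollama's generate endpoint
# (macOS base64 uses -i for the input file; photo.jpg is a placeholder)
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3:4b",
  "prompt": "Describe the subject, lighting, and composition of this photo.",
  "images": ["'"$(base64 -i photo.jpg)"'"],
  "stream": false
}'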

Qwen

Qwen’s models are really nice. The 1.5B param Coder model is super lightweight, making it quite usable for local code autocomplete. The 3B param model is good enough to make reasonably accurate larger code changes with the right prompts and context.

I highly recommend trying Continue for integrating local LLMs into VS Code. My MacBook Air is more than capable of replacing GitHub Copilot with Qwen2.5 on LM Studio/Ollama, with obvious advantages in cost and flexibility. It’s not perfect (GitHub’s integration is much better than Continue’s), but the option to have a local, offline code assistant is very compelling. I’ve found it useful enough to be worth the setup time.
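
For reference, the setup amounts to pointing Continue at local models. Here’s a sketch of the config.json format Continue used when I set this up (newer releases also support a YAML config), with Ollama serving both models:

# Sketch: point Continue's chat and autocomplete at local Qwen models
cat > ~/.continue/config.json <<'EOF'
{
  "models": [
    { "title": "Qwen2.5 Coder 3B", "provider": "ollama", "model": "qwen2.5-coder:3b" }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen2.5 Coder 1.5B",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
EOF

With that in place, chat requests go to the 3B model while inline completions use the lighter 1.5B one.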

I like using the general Instruct model for technical questions, though I could see myself fully moving over to Gemma 3 now that it’s out. Qwen2.5 strikes a nice balance between technical and creative, making it a good enough general-use model, but with Llama and Gemma both available too, I’ll likely not use it as much.

  • Qwen2.5 Coder 1.5B
    • Incredibly fast model that’s helpful for code autocomplete, but struggles with more complex code and questions.
  • Qwen2.5 Coder 3B Instruct
    • Slightly slower than the 1.5B param model, but writes fairly usable code, especially when given enough context.
  • Qwen2.5 7B Instruct
    • Solid middle ground between Llama and Gemma. It’s fast enough, has good technical knowledge, and can answer questions well. Writes more literally than Llama’s more casual responses, while being far less formal than Gemma.

If you have the hardware for it (and you very likely do!), you should definitely give local LLMs a try. Even on a fairly low-end PC, Llama 3.2 is quite useful and can run on a tiny amount of RAM. There are a huge number of models to try out, with many specialized use cases.
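
If you want the quickest path, the whole setup is a couple of commands. A minimal sketch assuming Homebrew and Ollama (LM Studio works just as well):

# Minimal local-LLM quickstart via Homebrew and Ollama
brew install ollama
brew services start ollama    # or just run `ollama serve` in a terminal
ollama run llama3.2:3b "Suggest three names for a squid-themed handheld."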

NTFS on Mac

Using NTFS on macOS is still a pain. macOS at least mounts NTFS volumes read-only by default, which is better than nothing, but writing to the disk isn’t as straightforward.

Tuxera and Paragon both have commercial solutions, but I’d rather just use something janky and unsupported, because I’m me. The only reason I even want to write to an NTFS volume on macOS is because my TV only supports playback from FAT32 and NTFS filesystems, and FAT32’s 4 GB limit is not ideal for modern media resolutions, even with newer codecs like AV1.

Let’s just use random Homebrew packages and accept the potential for data loss.

# macFUSE supplies the FUSE layer; Mounty is the GUI wrapper
brew install --cask macfuse mounty

# ntfs-3g for macOS lives in a third-party tap
brew tap gromgit/homebrew-fuse
brew install gromgit/fuse/ntfs-3g-mac

Simple enough: my TV can play back media written with this setup, and that’s all I really wanted. macFUSE lets us use the ntfs-3g FUSE driver (which is not intended for macOS, and is very unsupported), and the Mounty app wraps the driver in a nice GUI that can auto-remount NTFS volumes read/write. It’s not a terrible UX overall once it’s working.
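
Under the hood, Mounty is essentially just remounting the volume through ntfs-3g. The manual equivalent looks something like this (a sketch: disk4s1 is a placeholder for your partition’s identifier, and the ntfs-3g install path may vary):

# Manual equivalent of what Mounty automates (disk4s1 is a placeholder)
diskutil list                        # find the NTFS partition identifier
sudo diskutil unmount /dev/disk4s1   # drop the default read-only mount
sudo mkdir -p /Volumes/NTFS
sudo "$(brew --prefix)/sbin/ntfs-3g" /dev/disk4s1 /Volumes/NTFS -olocal -oallow_other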


April update: After a macOS reinstall, I decided to see whether the commercial options were notably different or better. I’m using the latest Tuxera NTFS for Mac now, and it’s… fine. The initial setup was quite similar, still requiring me to manually enable user kernel extension management from Recovery, and the UX once configured is very similar to what Mounty does for free. I’m sure there are advantages to a commercially supported, actually maintained NTFS driver over a very unsupported Linux FUSE port, but so far I’m not seeing a significant difference in usability under normal conditions.