My Blog

Switch 2 🎮

June 23, 2025

The Nintendo Switch 2 is neat and kinda weird. I mostly really like it. I still haven’t gotten my invite to order one from Nintendo, but I knew someone who got several preorders so I bought from him at MSRP.

First impressions were a bit weird. I wanted to just quickly migrate my save games from my Switch 1, but apparently it requires that both consoles be plugged into first-party power adapters to start the transfer. Luckily, it only checks once when starting the process on each device, so I was able to use the Switch 2 power adapter for both devices, temporarily plugging each in only during the initial check. My other USB-C PD options were not accepted by the migration tool even when they powered and charged both devices perfectly fine. This was especially silly because the transfer only took about 5 minutes!

Once set up, it’s a Switch, but better. In docked mode, every Switch game just runs way better, and the Switch controllers generally work as expected. It’s annoying that Switch 1 controllers can’t power it on, because that’s definitely an intentional feature restriction to get people to buy a new Pro Controller, but BOTW has never looked better at 1440p 60 on my OLED TV.

In handheld, the display is noticeably larger, brighter, and more saturated than the original Switch LCD. VRR and 120 Hz support are very nice, and make a big difference in many games that struggle to keep a consistent V-Sync frame-rate. I’m hopeful that they figure out VRR support on the dock despite the DisplayPort to HDMI conversion limitations, because it’s incredibly helpful for some games.

Switch 1 games that haven’t had proper Switch 2 patches yet look pretty bad in handheld play. The 720p game resolution is resampled to the 1080p display with only basic filtering, so pixels never align. UIs look especially blurry, and anything with pixel-sized detail suffers. Any high-contrast edge gets smeared across pixels, and it looks noticeably worse than on the Switch 1’s native 720p display. For games with 1080p support in handheld, the Switch 2 is a huge upgrade over the original, though it still falls behind the overall display quality of the Switch OLED model.

I would love to see better software features for either running the 1080p docked mode in some games while handheld, or a broader range of Switch 2 compatibility patches to run at native resolution on handheld, at least for the UI elements if nothing else. Almost every game should have > 720p UI available for docked mode on Switch 1, so it shouldn’t typically require any new assets.

The console is quite heavy and uncomfortable to hold for long periods. The Joy-Con shape is similar to the original Switch, but with the larger size and added weight (it’s even a bit bigger than the Steam Deck!), it would’ve been really nice to have something more ergonomic for the controllers. Coming from a Steam Deck, it’s far less enjoyable to hold.

The 256 GB internal storage is a great improvement over 32 GB, but I have enough digital games that I still need a microSD Express card. Luckily, Nintendo’s partnered Samsung 256 GB card is typically in stock at MSRP everywhere, so they clearly planned for this. I do wish it still supported SDXC for Switch 1 games, similar to how PS4 games can be played on PS5. SDXC is much cheaper (less than half the price) for now.

I named my original Switch “Squiddy”, as I mainly got back into modern Nintendo consoles for the Splatoon series. To follow up on that, my Switch 2 is named “Ikatsu” (squid + strong). I considered に (“ni”/2) instead of つ since it’s a Switch 2, but I like the sound of いかつ more. I also considered くろうみ since it’s a black sea creature. Naming electronics is silly but fun.

Starlight ✨

April 25, 2025

Once again I’m updating the design on this blog. This time I’m calling it Starlight, and including an animated starfield background. Indigo, teal, and purple make up the new color scheme, and even the “light” theme has a contrast-heavy, somewhat-dark design.

Also went back to a vector profile image, now updated to match my current hairstyle. The watercolor was fun, but doesn’t fit the vibes now.

At some point I’ll actually fix the search too.

LLMs

March 22, 2025

Let’s explore the state of LLMs in early 2025. Everyone knows about ChatGPT, and while GPT-4o is great, I’m far more interested in local LLMs I can run myself with my own hardware, software, and data.

I’ve tried a wide range of models, primarily on Apple Silicon, and overall I’m really impressed with the state of things. Meta’s Llama really pushed things forward with its initial launch a while back, and set the precedent for its competitors. Llama 3 is great, and generally my preferred model for general chat and prose. Google’s Gemma 3 is incredibly good, and feels competitive with larger reasoning models while being much faster. Qwen2.5 is generally useful as a local alternative to GitHub Copilot, and works well when integrated into VS Code with Continue.

Giving a PDF, text, or image file as additional input can be really powerful. With AI tools being used for so much automation these days, things like job applications are much more doable with proper use of LLMs. I’m certainly not advocating for lying about skills, mass applying, or any other broad automation, but if your resume and cover letter are going to be reviewed by an AI before a human, you should at least get Llama’s take on your resume first!

Llama

I find Llama especially good at creative writing tasks, particularly when I want to do brainstorming or roleplay conversations. Setting a system prompt for a particular context is very effective for defining a “personality” and really shaping its responses.

I’ve used it to flesh out worldbuilding details simply by asking questions and letting it respond in character. It’s fantastic for generating variations on themes, offering alternative ideas when I get stuck, or even just bouncing ideas off of.
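As one way to set that persona up (a sketch assuming Ollama as the local runner; the base model tag, the persona text, and the prompt are all made up for illustration), a Modelfile can bake a system prompt into a reusable named model:

```shell
# Sketch: bake a persona into a named model with an Ollama Modelfile.
# The base model tag and persona text are illustrative, not from this post.
cat > Worldbuilder.modelfile <<'EOF'
FROM llama3.1:8b
SYSTEM """You are a cartographer in a fantasy port city.
Stay in character and volunteer small worldbuilding details."""
EOF

ollama create worldbuilder -f Worldbuilder.modelfile
ollama run worldbuilder "What do ships usually bring in from the south?"
```

Once created, the persona persists, so every `ollama run worldbuilder` conversation starts in character without re-pasting the system prompt.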

My favorite Llama models:

  • Llama 3.1 8B Instruct

    • Great at writing prose, but struggles with complex topics and questions. Often misses details of the prompt.
  • Llama 3.2 3B

    • Remarkably fast, runs acceptably on basically any hardware. Good enough for general chat and prose, but really struggles with information accuracy and question understanding.

Gemma

For more functional and instructional use cases, Gemma 3 is remarkable. I’ve found it better at prompt understanding than DeepSeek R1 distills, despite not being a reasoning model, and it’s far faster than its competitors that have similar understanding properties.

There’s a lot it will refuse to do: it triggers warnings very easily, and adds disclaimers and support information to a wide range of content. By default, it’s not very useful for creative tasks, since nearly any fictional subject can read as too risky for it. Specifying a relevant system message somewhat alleviates this, but I often find Llama more useful for creative prose.

  • Gemma 3 4B Instruct
    • Handles complex questions incredibly well compared to Llama and even DeepSeek R1 distills, at a dramatically-improved token rate.
    • Really likes adding lots of Markdown formatting to responses, while Llama typically responds in plain text.
    • The image input handling is very good, able to discern lots of detail from images. It understands subjects, lighting, composition, and styles with the ability to discuss the image with all of that context.

Qwen

Qwen’s models are really nice. The 1.5B param Coder model is super lightweight, making it quite usable for local code autocomplete. The 3B param model is good enough to give reasonably-accurate larger code changes with the right prompts and context.

I highly recommend trying Continue for integrating local LLMs into VS Code. My MacBook Air is more than capable of replacing GitHub Copilot with Qwen2.5 via LM Studio or Ollama, with obvious advantages in cost and flexibility. It’s not perfect (GitHub’s integration is much more polished than Continue’s), but the option of a local, offline code assistant is very compelling. I’ve found it useful enough to be worth the setup time.

I like using the general Instruct model for technical questions, though I could see myself fully moving over to Gemma 3 now that it’s out. Qwen strikes a nice balance between technical and creative, making it a good enough general-use model, but with Llama and Gemma both available too, I’ll likely not reach for it as often.

  • Qwen2.5 Coder 1.5B

    • Incredibly fast model that’s helpful for code autocomplete, but struggles with more complex code and questions.
  • Qwen2.5 Coder 3B Instruct

    • Slightly slower than the 1.5B param model, but writes fairly usable code, especially when given enough context.
  • Qwen2.5 7B Instruct

    • Solid middle ground between Llama and Gemma. It’s fast enough, has good technical knowledge, and can answer questions well. Writes more literally than Llama’s more casual responses, while being far less formal than Gemma.

If you have the hardware for it (and you very likely do!), you should definitely give local LLMs a try. Even on a fairly low-end PC, Llama 3.2 is quite useful and can run on a tiny amount of RAM. There are a huge number of models to try out, with many specialized use cases.
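As a minimal starting point (assuming Ollama as the runner; the model tag is just one example of a small model):

```shell
# Pull a small model and chat with it; requires Ollama (https://ollama.com).
ollama pull llama3.2:3b
ollama run llama3.2:3b "In one sentence, what is a local LLM good for?"
```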

NTFS on Mac

February 24, 2025

Using NTFS on macOS is still a pain. It at least mounts volumes as read-only by default, which is better than nothing, but if you want to write to the disk it’s not as straightforward.

Tuxera and Paragon both have commercial solutions, but I’d rather just use something janky and unsupported, because I’m me. The only reason I even want to write to an NTFS volume on macOS is because my TV only supports playback from FAT32 and NTFS filesystems, and FAT32’s 4 GB limit is not ideal for modern media resolutions, even with newer codecs like AV1.

Let’s just use random Homebrew packages and accept the potential for data loss.

brew install --cask macfuse mounty

brew tap gromgit/homebrew-fuse
brew install gromgit/fuse/ntfs-3g-mac

Simple enough: my TV can play back media written with this setup, and that’s all I really wanted. macFUSE lets us use the ntfs-3g FUSE driver (which is not intended for macOS, and is very unsupported). The Mounty app wraps the driver in a nice GUI that can auto-remount NTFS volumes read/write, and it’s not a terrible UX overall once it’s working.
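For the curious, this is roughly what Mounty automates (a sketch: the disk identifier and mount point below are placeholders, so check `diskutil list` for your own):

```shell
# Identify the NTFS partition (e.g. disk2s1) before touching anything:
diskutil list

# Unmount the default read-only mount, then remount read/write via ntfs-3g.
# /dev/disk2s1 and /Volumes/NTFS are placeholders for your disk and mount point.
sudo diskutil unmount /dev/disk2s1
sudo mkdir -p /Volumes/NTFS
sudo ntfs-3g /dev/disk2s1 /Volumes/NTFS -olocal -oallow_other
```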


April update: After a macOS reinstall, I decided to see if the commercial options were notably different/better. I’m using the latest Tuxera NTFS for Mac now, and it’s… fine. The initial setup was quite similar, still requiring manually enabling user kernel extension management from Recovery, and the UX once configured is very similar to what Mounty does for free. I’m sure there are advantages to the commercially-supported, actually-maintained NTFS driver compared to a very-unsupported Linux FUSE port, but so far I’m not seeing a significant difference in usability under normal conditions.

fish is Also Great

February 22, 2025

fish is the first shell I’ve used that feels like it’s actually designed around how I want to use a shell. As I’ve gotten more into Python, I’ve loved the convenience and power of a simplified scripting language, and I’ve grown increasingly annoyed at how limiting and unintuitive POSIX shells are. It got to the point where I was actually considering writing my own Python-compatible shell, where Python syntax could be intermixed with generic commands: basically a Python script where subprocess.run() would be called most of the time.

I’ve used bash heavily since I was a kid, and switched to zsh after macOS changed defaults for whatever reason (maybe just for consistency between my platforms), but have always felt both were lacking in various ways.

Then I finally tried fish. I no longer feel like I have any desire to reinvent the shell, because fish did it perfectly.

For the basic user config, the default behavior is intuitive and nice, but easily adjusted. Here’s an example of some of my user config in ~/.config/fish/config.fish:

# alias is great for running flatpak CLI apps:
alias wine="/usr/bin/flatpak run --command=wine --file-forwarding net.lutris.Lutris"

# nvm works well enough without the global init:
set -gx NODE_VERSION 'lts/*'
alias node="$HOME/.nvm/nvm-exec node"
alias npm="$HOME/.nvm/nvm-exec npm"
alias npx="$HOME/.nvm/nvm-exec npx"
# You can get full functionality of nvm if needed by following their docs, but this is simpler and doesn't slow down the shell the same way, as long as you only need one node version at a time.

# abbr is like alias but better since it expands to the full command interactively and in history:
abbr -a yt yt-dlp
abbr -a artisan php artisan
abbr -a su sudo su -
abbr -a serve python -m http.server

Adding more complex functions interactively is nice: just run funced -s function_name and you’re dropped into an editor, where the syntax is simple and powerful:

function unzip_all_ja
  for z in *.zip
    unzip -O Windows-31J $z -d (string replace '.zip' '' $z) && rm $z
  end
end

Extending the executable search path can be done by modifying the PATH environment variable, but fish also keeps universal variables in ~/.config/fish/fish_variables, which is far more powerful and intuitive. Adding a directory of executables is as simple as fish_add_path <dir>, which automatically updates the variable everywhere.
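For example (the directory here is just an illustration):

```fish
# Prepend a directory to the universal $fish_user_paths variable;
# this persists across sessions without any config-file editing.
fish_add_path ~/.local/bin

# Verify it was recorded:
echo $fish_user_paths
```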

For adjusting much of the global configuration including the prompt style, color schemes, etc. there is a fish_config function which will launch a web interface for interactively configuring the shell. Every built-in fish_* command can be edited with funced, allowing easy customization of default behavior like prompts, terminal titles, command-not-found handling, and more.

The scripting language for functions and logic is similar to bash, but simplified and much more powerful and intuitive. The shell wraps man by default to add all of the fish documentation, and man fish-doc and man fish-language are excellent references for learning it.
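A small taste of the language (contrived, but valid fish):

```fish
# Command substitution uses parentheses, and variables are lists
# that don't word-split behind your back.
set names (string split ',' 'alpha,beta,gamma')
for n in $names
    if string match -q 'a*' $n
        echo "$n starts with a"
    end
end
```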

If you want to switch, man fish-tutorial and man fish-for-bash-users are great introductions and cover everything a typical user would need to know. I highly recommend trying it out!