My Blog

Niche Phones

I tried two very different phones recently. Those who know me, or follow this blog for some reason, know that I’ve gone through far too many phones.

Nokia 2780

The Nokia 2780 is a KaiOS 3-based flip phone that tries to offer just enough ‘smart’ functionality without being a smartphone.

KaiOS 3 is fine as a user OS. It has everything you actually need, and very little that you don’t. It’s full of minor bugs, but you get used to them and they’re not typically too problematic. I did have issues with SMS delivery occasionally, sometimes having to wait days for new messages to show up.

The modern T9 typing is excellent, and I found that I quickly reached a usable typing speed. The predictive text and customizable dictionary are great, and I honestly kind of miss it now that I’m on a touch keyboard again.

Since KaiOS 2 devices sold better, and the two versions aren’t cross-compatible, most community-made apps don’t work on the 2780, leaving you with a handful of essentials while missing some others. For example, there’s a usable PDF reader app, but no ePub reader.

KaiOS 3 as a developer platform is terrible. The docs are very incomplete, the SDK relies on specific very old Ubuntu and Firefox releases, and it feels abandoned. If you want to build apps, you’re on your own.

The stereotype of Nokia’s amazing build quality is no longer deserved. Nokia is just a brand name at this point; HMD Global has licensed it since Microsoft sold off the phone business in 2016. The 2780 is light, cheap-feeling plastic: the hinge squeaks and the shell creaks as you use it, especially noticeably while on an actual phone call. The buttons started missing presses within weeks, and were constantly registering double presses within a few months.

Still, I really like this phone. The battery easily lasts a week. Physical buttons are great. As a “phone that is just a phone,” it works well enough. I used it as my primary phone for about two years and it was solid.

Bigme Hibreak Pro

The Bigme Hibreak Pro is an Android-based e-ink phone, with a color or black-and-white option. I got the black-and-white version in white plastic and I kinda love it. It’s the first decent e-ink phone I’ve found that actually works properly in North America; 4G/5G support is solid.

The default out-of-box experience is bad unless you just want to read books and send text messages. Android Auto is semi-broken out of the box and needs manual fixing because of how the OS handles variable per-app DPI and text rendering. That per-app rendering control is genuinely useful on-device, since most apps aren’t readable on the black-and-white e-ink display without heavy adjustments. The Android Auto and Google Maps apps need manual corrections to work on the in-car display, but because it’s an app-wide setting that overrides the normal Android system display handling, those corrections leave them mostly unusable on the phone itself. Not great, but you can make it work.

MediaTek’s DuraSpeed aggressively kills background apps; you basically have to disable it if you want to do something as simple as have music playing while also writing a text message.

$ adb shell settings put global setting.duraspeed.enabled 0

Once fixed, it’s genuinely nice. E-ink reduces visual noise a lot, and battery life is excellent. It feels like what I always imagined e-ink phones could be.

Still niche, still rough, but finally not a novelty. I’d honestly recommend it to anyone. You won’t be scrolling on TikTok on it, but that’s a good thing. This will be my primary phone going forward and I’m very happy with it!

Astro 💫

Back in 2020, I migrated this blog from Jekyll to Gatsby. Gatsby was much more flexible and feature-rich than Jekyll, but I always had concerns about its long-term maintainability, as it carried a lot of seemingly unnecessary complexity. Gatsby worked by transforming content sources (in my case a collection of Markdown files) into a GraphQL API, then providing a React app framework to generate pages from the GraphQL data.

Since then, Gatsby has effectively been abandoned, and many more competitors have arrived. I decided to try Astro, and the migration from Gatsby to Astro was incredibly easy. My build times are dramatically improved, and Astro is much easier to work with. I’m loving the simplicity, and the JSX-like syntax made porting my React components over effortless. Apart from some changes to browser preloading, and removal of the (already broken) Algolia search component, the end result is nearly identical to what I had with Gatsby but with far less complexity. I feel like Astro has struck the perfect balance between features and complexity, with the ability to easily extend it for more advanced use cases.

I’ll probably end up moving platforms again in a few years. It seems like that’s just what I do with this site.

Switch 2 🎮

The Nintendo Switch 2 is neat and kinda weird. I mostly really like it. I still haven’t gotten my invite to order one from Nintendo, but I knew someone who got several preorders so I bought from him at MSRP.

First impressions were a bit weird. I wanted to just quickly migrate my save games from my Switch 1, but apparently it requires that both consoles be plugged into first-party power adapters to start the transfer. Luckily, it only checks once when starting the process on each device, so I was able to use the Switch 2 power adapter for both devices, temporarily plugging each in only during the initial check. My other USB-C PD options were not accepted by the migration tool even when they powered and charged both devices perfectly fine. This was especially silly because the transfer only took about 5 minutes!

Once set up, it’s a Switch, but better. In docked mode, every Switch game just runs way better, and the Switch controllers generally work as expected. It’s annoying that Switch 1 controllers can’t power it on, because that’s definitely an intentional feature restriction to get people to buy a new Pro Controller, but BOTW has never looked better at 1440p 60 on my OLED TV.

In handheld, the display is noticeably larger, brighter, and more saturated than the original Switch LCD. VRR and 120 Hz support are very nice, and make a big difference in many games that struggle to keep a consistent V-Sync frame-rate. I’m hopeful that they figure out VRR support on the dock despite the DisplayPort to HDMI conversion limitations, because it’s incredibly helpful for some games.

Switch 1 games that haven’t had proper Switch 2 patches yet look pretty bad in handheld play. The 720p game resolution is resampled to the 1080p display with only basic filtering, and since 1.5× is not an integer scale, every source pixel lands misaligned with the display grid. UIs look especially blurry, but anything with pixel-sized detail suffers. Any high-contrast edge gets smeared across pixels, and it looks noticeably worse than on the Switch 1’s native 720p display. For games with 1080p handheld support, though, the Switch 2 is a huge upgrade from the original, even if it still falls behind the overall display quality of the Switch OLED model.

I would love to see better software features for either running the 1080p docked mode in some games while handheld, or a broader range of Switch 2 compatibility patches to run at native resolution on handheld, at least for the UI elements if nothing else. Almost every game should have > 720p UI available for docked mode on Switch 1, so it shouldn’t typically require any new assets.

The console is quite heavy and uncomfortable to hold for long periods. The Joy-Con shape is similar to the original Switch’s, but with the larger size and added weight (the screen is even a bit bigger than the Steam Deck’s!), it would’ve been really nice to have more ergonomic controllers. Coming from a Steam Deck, it’s far less enjoyable to hold.

The 256 GB internal storage is a great improvement over 32 GB, but I have enough digital games to still need a microSD Express card. Luckily, Nintendo’s partnered Samsung 256 GB card seems to be typically in stock at MSRP everywhere, so they clearly planned for this. I do wish it still supported SDXC for Switch 1 games, similar to how PS4 games can be played on PS5; SDXC cards are much cheaper (less than half the price) for now.

I named my original Switch “Squiddy”, as I mainly got back into modern Nintendo consoles for the Splatoon series. To follow up on that, my Switch 2 is named “Ikatsu” (squid + strong). I considered に (“ni”/2) instead of つ since it’s a Switch 2, but I like the sound of いかつ more. I also considered くろうみ since it’s a black sea creature. Naming electronics is silly but fun.


Update: I got my purchase invite on July 23, well after I’d gotten used to having my Switch 2 🙃

Starlight ✨

Once again I’m updating the design on this blog. This time I’m calling it Starlight, and including an animated starfield background. Indigo, teal, and purple make up the new color scheme, and even the “light” theme has a contrast-heavy, somewhat-dark design.

Also went back to a vector profile image, now updated to match my current hairstyle. The watercolor was fun, but doesn’t fit the vibes now.

At some point I’ll actually fix the search too.

LLMs

Let’s explore the state of LLMs in early 2025. Everyone knows about ChatGPT, and while GPT-4o is great, I’m far more interested in local LLMs I can run myself with my own hardware, software, and data.

I’ve tried a wide range of models, primarily on Apple Silicon, and overall I’m really impressed with the state of things. Meta’s Llama really pushed things forward with its initial launch a while back, and set the precedent for its competitors. Llama 3 is great, and generally my preferred model for general chat and prose. Google’s Gemma 3 is incredibly good, and feels competitive with larger reasoning models while being much faster. Qwen2.5 is generally useful as a local alternative to GitHub Copilot, and works well when integrated into VS Code with Continue.

Giving a PDF, text, or image file as additional input can be really powerful. With AI tools being used for so much automation these days, things like job applications are much more doable with proper use of LLMs. I’m certainly not advocating for lying about skills, mass applying, or any other broad automation, but if your resume and cover letter are going to be reviewed by an AI before a human, you should at least get Llama’s take on your resume first!

Llama

I find Llama especially good at creative writing tasks, particularly for brainstorming or roleplay conversations. Setting a system prompt for a particular context is very effective for defining a “personality” and really shaping its responses.

I’ve used it to flesh out worldbuilding details simply by asking questions and letting it respond in character. It’s fantastic for generating variations on themes, offering alternative ideas when I get stuck, or even just bouncing ideas off of.
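As a rough sketch of how that works in practice, here’s a chat request with a system prompt sent to a local OpenAI-compatible server (LM Studio and Ollama both expose one; the port, endpoint path, model name, and the persona text are all assumptions for illustration):

```python
import json
import urllib.request

def build_request(system_prompt: str, user_message: str,
                  model: str = "llama3.1:8b",
                  url: str = "http://localhost:11434/v1/chat/completions"):
    """Build (but don't send) a chat completion request with a 'personality'."""
    payload = {
        "model": model,
        "messages": [
            # The system message defines the persona for every turn after it.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.9,  # a higher temperature suits creative writing
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request(
    "You are Mira, keeper of a lighthouse at the edge of a fantasy world. "
    "Answer in character, and invent consistent local detail when asked.",
    "What do the ships trade here?",
)
# urllib.request.urlopen(req) would send it, if a server is running locally.
```

Because the system message rides along with every turn, follow-up questions stay in character without re-explaining the setting each time.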

My favorite Llama models:

  • Llama 3.1 8B Instruct

    • Great at writing prose, but struggles with complex topics and questions. Often misses details of the prompt.
  • Llama 3.2 3B

    • Remarkably fast, runs acceptably on basically any hardware. Good enough for general chat and prose, but really struggles with information accuracy and question understanding.

Gemma

For more functional and instructional use cases, Gemma 3 is remarkable. I’ve found it better at prompt understanding than DeepSeek R1 distills, despite not being a reasoning model, and it’s far faster than its competitors that have similar understanding properties.

There’s a lot it will refuse to do, though: it triggers warnings very easily, and adds disclaimers and support information to a wide range of content. By default, it’s not very useful for creative tasks, as nearly any fictional subject can look too risky to it. Specifying a relevant system message somewhat alleviates this, but I often find Llama more useful for creative prose.

  • Gemma 3 4B Instruct
    • Handles complex questions incredibly well compared to Llama and even DeepSeek R1 distills, at a dramatically-improved token rate.
    • Really likes adding lots of Markdown formatting to responses, while Llama typically responds in plain text.
    • The image input handling is very good, able to discern lots of detail from images. It understands subjects, lighting, composition, and styles with the ability to discuss the image with all of that context.

Qwen

Qwen’s models are really nice. The 1.5B param Coder model is super lightweight, making it quite usable for local code autocomplete. The 3B param model is good enough to give reasonably-accurate larger code changes with the right prompts and context.

I highly recommend trying Continue for integrating local LLMs into VS Code. My MacBook Air is more than capable of replacing GitHub Copilot with Qwen2.5 on LM Studio/Ollama, with obvious advantages in cost and flexibility. It’s not perfect (GitHub’s integration is much better than Continue’s), but the option of a local, offline code assistant is very compelling. I’ve found it useful enough to be worth the setup time.
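Under the hood, autocomplete with the Coder models works via fill-in-the-middle (FIM) prompting: the editor sends the text before and after the cursor, and the model generates the span between them. A minimal sketch of the prompt assembly (the special token names below are my understanding of Qwen2.5-Coder’s FIM format; treat them as an assumption, and note that tools like Continue do this wiring for you):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt for a code completion model.

    The model is expected to generate the code that belongs between
    `prefix` (text before the cursor) and `suffix` (text after it).
    """
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

prompt = build_fim_prompt(
    prefix="def fahrenheit_to_celsius(f):\n    return ",
    suffix="\n\nprint(fahrenheit_to_celsius(212))\n",
)
# The raw prompt goes to the model's plain completion (not chat) endpoint,
# and the model's output is spliced in at the cursor position.
```

This is also why the tiny 1.5B model is viable for autocomplete: the task is a short, tightly-constrained completion rather than open-ended generation.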

I like using the general Instruct model for technical questions, though I could see myself fully moving over to Gemma 3 now that it’s out. It’s a nice balance between technical and creative, making it a good enough general use model, but with Llama and Gemma both available too, I’ll likely not use it as much.

  • Qwen2.5 Coder 1.5B

    • Incredibly fast model that’s helpful for code autocomplete, but struggles with more complex code and questions.
  • Qwen2.5 Coder 3B Instruct

    • Slightly slower than the 1.5B param model, but writes fairly usable code, especially when given enough context.
  • Qwen2.5 7B Instruct

    • Solid middle ground between Llama and Gemma. It’s fast enough, has good technical knowledge, and can answer questions well. Writes more literally than Llama’s more casual responses, while being far less formal than Gemma.

If you have the hardware for it (and you very likely do!), you should definitely give local LLMs a try. Even on a fairly low-end PC, Llama 3.2 is quite useful and can run on a tiny amount of RAM. There are a huge number of models to try out, with many specialized use cases.