Phpizza Blog

Embracing Darkness

February 24, 2021

When I first did the redesign of this blog last year, I debated whether I should include a toggleable dark theme. There are a lot of pros and cons to offering a dark theme, and complexities to how it should be implemented, and it was stuff I just didn’t feel like getting into at the time. Since then though, Tailwind 2.0 has been released with native dark mode support, and my React knowledge has improved to the point where I’m very comfortable using the new functional components.

Initially, my plan was to just match the user agent’s configuration for the theme, since that is easy to implement natively in CSS. There are downsides to that though: many people set their OS to a dark theme but prefer reading long-form content with a light background, which is much easier to read in most environments. Instead, this implementation uses localStorage to persist your theme selection, defaulting to matching your global configuration. This is fairly simple to do under normal circumstances, but with React and server-side rendering, it gets a bit more complex. The end result is simple but good enough, with a dropdown menu in the navbar that allows you to select between “Auto”, “Light”, and “Dark”. Your selection, along with your global setting, controls toggling a dark class on the html node, which applies the overridden styles.
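
The core of it is only a few lines. Here’s a minimal sketch of the approach (the theme localStorage key and its value names are my own for illustration, not necessarily what this site uses):

// Apply the persisted theme, falling back to the OS preference when set to "Auto".
// This should run before first paint (e.g. inlined in the document head)
// to avoid a flash of the wrong theme.
function applyTheme() {
  const stored = localStorage.getItem('theme') // 'light' | 'dark' | null for "Auto"
  const prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches
  const dark = stored ? stored === 'dark' : prefersDark
  document.documentElement.classList.toggle('dark', dark)
}

applyTheme()

// Re-apply when the OS preference changes, so "Auto" tracks it live.
window.matchMedia('(prefers-color-scheme: dark)').addEventListener('change', applyTheme)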

This was implemented using Tailwind 2’s excellent dark mode feature, which adds a dark variant to the utility classes, making it easy to add inline class definitions for how a certain component should alter its appearance in dark mode. Along with this upgrade to Tailwind 2, the color palette was changed to work a bit differently. Previously, Tailwind offered a single “gray” selection with a slightly cool temperature. Now, there are five total “gray” options, including a true-neutral gray. I’m using both the cool gray and the neutral gray in this redesign, with the cool gray being used for components, while the neutral grays are used in the page body. I also implemented a custom “teal gray” palette for use in the dark theme, which is hue-matched with the main “teal” palette from Tailwind.
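
Roughly, the relevant Tailwind configuration looks something like this (the teal-gray values here are placeholders rather than my actual palette):

// tailwind.config.js
const colors = require('tailwindcss/colors')

module.exports = {
  darkMode: 'class', // toggled via the `dark` class on <html>
  theme: {
    extend: {
      colors: {
        gray: colors.coolGray, // components (navbar, etc.)
        neutral: colors.trueGray, // page body
        // Custom palette hue-matched to Tailwind's teal (placeholder values)
        'teal-gray': {
          100: '#e8f0ef',
          500: '#57706d',
          900: '#152220',
        },
      },
    },
  },
}

Individual elements then opt in with utilities like dark:bg-teal-gray-900 alongside their light-mode classes.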

I’m fairly happy with how the dark mode’s colors came out, but I don’t love the new light mode colors. The contrast between components in the navbar is not as clear, and the new gray is not as saturated on the navbar either. Additionally, losing the slightly saturated gray from the header backgrounds and other content areas is not as nice looking, and I plan on reintroducing something like it in a future update. I will likely be adjusting the colors well beyond a minor correction, possibly adding several new colors to various components to liven up the site more.

There was basically no reason for me to do any of this, as I didn’t really learn anything new, and I actually prefer the old design, but it felt like a necessary first step toward a more thorough refresh that I’ll inevitably do at some point over the next year or so.

Maybe I’ll even actually post something.

Mice are Bad

October 03, 2020

I’ve owned a large number of high-end computer mice, and I hate all of them. I have never found a mouse that actually just works long enough to not need replacing twice a year.

High end mice I’ve owned:

  • Razer Mamba (original): Battery expanded after a year or so, to the point where the mouse wouldn’t sit flat. Could probably just replace the battery but it’s not in great shape at this point.
  • Razer Mamba (2012): Battery expanded much more quickly than the original. The Synapse 2.0 software was also so bad that whenever it was installed, the mouse DPI would switch randomly every minute or so without reason, and the software-bound buttons had several-second delays. Without the drivers installed, it actually worked great other than the battery bloat. The scroll wheel is much louder than the original Mamba.
  • Logitech G700s: Battery life was so terrible it wouldn’t even make it through a day of light use without being plugged in, completely negating the value of a wireless mouse. Didn’t bother using it more than a few days because of that issue.
  • Logitech G502: Sensor stopped tracking accurately after a few months, completely unusable. Was perfect until it just didn’t work.
  • Razer Deathadder Stealth: Generally fine, but had major build quality and software issues. I ended up using it without drivers on my Mac setup for a while and it was good enough I guess.
  • Logitech G602: The middle mouse button stopped working within a few months, usable but frustrating.

I’ve also owned a few mid-range mice that were fine but nothing great. I’ve had a particularly hard time finding a mouse that is both reliable and comfortable for my hand. The Razer Mamba/Deathadder form factor is about perfect, but Razer’s software is unusably bad so I typically end up using knockoffs. The Havit Pro Gaming mouse ($15 Chinese Deathadder clone) was basically perfect. It didn’t last more than a year, but it was so cheap and so good that I bought several of them. Sadly, they discontinued it and replaced it with a new model that’s far cheaper and just doesn’t actually work at all.

At this point I’m using a PICTEK gaming mouse that seems to be a clone of the Havit clone, so I’m several steps away from an actual Deathadder, but it’s not a bad mouse. This one was $19, and so far it’s lasted longer than any $50-120 mouse I’ve owned from Logitech or Razer. I don’t really like this mouse, but I have yet to find anything good enough to actually replace it with.

If you have a big mouse that has lasted you more than a year, please let me know what it is so I can buy a few dozen of them. Maybe one day I can stop buying terrible mice.

Update!

2020-10-15

I’ve purchased a Glorious Model D, and I mostly love it. My only complaint so far is that some of the RGB LED modes don’t have a brightness control, so I can have a rainbow, but only a bright rainbow 🙃

The form factor is perfect, the driverless operation is exactly what I want from a mouse, and the tracking is fantastic, with no perceivable latency. I might have finally found a good mouse.

Gatsby, Netlify, and the Modern Web

July 30, 2020

For a long time, since I first switched from Ghost to Jekyll, this blog has been as minimal as possible in setup, and has had a noticeable lack of JavaScript. I’ve also always self-hosted it, usually on a VPS from Linode or DigitalOcean, even long before the Jekyll switch, going back as far as when it ran on WordPress while I was still in high school.

It’s 2020 though, and the Web is different now. What started as a much-needed full redesign turned into a complete re-platform, including a new CMS, a rewrite of the entire HTML/CSS stack, and a move to hosting everything on a CDN. I debated for a while whether I should stick with Jekyll and just rebuild the HTML and move to Netlify for the TTFB advantage, but after messing with Gatsby for a while I decided it was worth actually switching.

I’ve been wanting to move from Ruby to something Node-based for as long as I’ve had npm dependencies (at least since the Tailwind CSS redesign, and I think even before that), and Gatsby has a lot to like about it. I’ve been loosely following its growth along with Netlify as a combined platform since CSS Tricks started regularly blogging about it. One of the biggest things that appealed to me is that despite using React for all of the userland code, the resulting pages are super fast and work without JavaScript enabled, thanks to SSR and some heavy optimizations on the framework end.

While developing, the site hot-reloads everything and uses loads of heavy JS, but the final production builds are very lean. I do still plan on finding some way to optimize the production JS that’s shipped, as it’s painfully slow to reach TTI on low-end devices and slow networks. That’s not really a big issue, since “interactive” just means the JS isn’t ready yet and everything but the search works without it, but it still bugs me 🙃.

Migrating the content over from Jekyll was fairly painless. By default, the Gatsby blog starter template expects posts to be an index.md file in a directory with that post’s slug as the name, so I just created directories for everything from Jekyll’s _posts directory, and moved the .md files to the newly-created directories. One nice advantage of this new structure is that related content like images for a post can be stored in the directory alongside the actual post content, making it really easy to know what is associated with what, so I moved over the images I used to embed separately too. Gatsby also does some nice optimizations to images with the <picture> element when you serve them locally, so it’ll send the best-optimized image for the browser without me having to manually create a bunch of versions in the repository.
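
The whole migration was basically just a rename loop; something like this Node script covers it (the paths are assumptions, and this isn’t the exact script I ran):

// migrate.js — move Jekyll posts into Gatsby-style directories
const fs = require('fs')
const path = require('path')

const src = '_posts' // Jekyll posts
const dest = 'content/blog' // Gatsby blog starter content directory

for (const file of fs.readdirSync(src)) {
  if (!file.endsWith('.md')) continue
  // Jekyll names posts YYYY-MM-DD-slug.md; the slug becomes the directory name.
  const slug = file.replace(/^\d{4}-\d{2}-\d{2}-/, '').replace(/\.md$/, '')
  const dir = path.join(dest, slug)
  fs.mkdirSync(dir, { recursive: true })
  fs.renameSync(path.join(src, file), path.join(dir, 'index.md'))
}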

As for the design of the site, my old Tailwind CSS-based design had felt really lacking ever since the previews of Tailwind UI started showing up, so I knew I’d have to incorporate some of those elements in a full redesign once it was out. I’m using their top bar with some slight modification, as well as the excellent new Tailwind Typography plugin to style all of the prose content across the site.

There’s a teeny bit of custom non-Tailwind CSS to handle things like the desktop Safari overscroll and to style the search widget, but other than that this is built entirely using Tailwind’s utility classes. I have to say, I love it. Utility classes are so the way to go; using Bootstrap feels like I’m still living in 2010 now. I can definitely see how styling large components in a clean way can get messy with a utility-first framework, but if you’re using JS components so you’re not unnecessarily duplicating markup in your source, it works so nicely I can never go back.

Since I’ve moved to a supported platform, I’ve also switched to hosting this blog on Netlify! They have a super-fast global CDN and a very generous free tier, and they handle all of the builds on their end, so publishing new content is as simple as pushing to GitHub. They handle recompiling the site, pushing the new content to Algolia, generating the optimized images, and pushing the production-ready files to the edge nodes. It builds in 1-2 minutes, and they handle cache invalidation seamlessly so it all just works.

Another thing that’s neat about using Netlify is that I can use the Netlify CMS! This blog’s content is a Git repo full of Markdown files, and Netlify can connect to my GitHub account via the API and give me a nice editor at the /admin/ URL to write posts on, which is where I’m writing this! It uses GitHub’s OAuth to authenticate me (and anyone else I invite I guess, though guest posts through pull requests would be fun!), then just lets me blog as if it was a typical CMS!

This iteration of my blog is also the first time I’ve had a search since leaving WordPress. (At least I don’t think my Ghost blog had one, but I honestly don’t remember much other than hating it.) This site’s search uses Algolia, which is an excellent SaaS option that focuses on speed and simplicity. Gatsby has a first-party plugin that handles indexing the content automatically, and Algolia has an official React component that’s easy to implement and customize, so 90% of the time implementing it was just tweaking the styling until I had it to a point where I was happy with it.
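
For anyone curious, the indexing side boils down to a gatsby-plugin-algolia entry in gatsby-config.js along these lines (the index name and field selection are assumptions, not my exact setup):

// In the gatsby-config.js plugins array
{
  resolve: 'gatsby-plugin-algolia',
  options: {
    appId: process.env.GATSBY_ALGOLIA_APP_ID,
    apiKey: process.env.ALGOLIA_ADMIN_KEY, // write key; only used at build time
    queries: [
      {
        query: `{
          allMarkdownRemark {
            nodes {
              excerpt
              fields { slug }
              frontmatter { title }
            }
          }
        }`,
        transformer: ({ data }) =>
          data.allMarkdownRemark.nodes.map((node) => ({
            objectID: node.fields.slug,
            title: node.frontmatter.title,
            excerpt: node.excerpt,
          })),
        indexName: 'posts',
      },
    ],
  },
},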

The last thing I wanted to get really figured out before I launched the redesign was the About page. I’ve always had a hard time just talking about myself, and I really wanted to write something more long-form. I didn’t achieve that goal, but I wrote enough to more or less replace what I had there on my old blog, so I decided to just launch as-is and update it later. I plan on slowly adding content to it over time, with the hope that it will eventually be a good way to learn a bit more about me than just that I build websites 😜.

One thing from the old blog that’s still missing is the dark theme. This is something I’ve been debating implementing here, either using the prefers-color-scheme media query like before, or with a proper toggle UI. Honestly though, I’m not sure it’s actually all that useful. In most cases, reading white text on a dark background is much harder than reading black text on a white background. There is an accessibility benefit for those who are sensitive to how the screen’s contrast interacts with their environment, even if that’s just fully-sighted people in a very dark room. I’m sure I’ll implement it eventually…

I’m sure I’ll make more little changes over time, but building this has been super fun, and a great reminder of just how much I prefer Vue over React 😁

Too Many Drives

April 14, 2020

Following up on my minor server upgrade last year, I’ve made a few new changes to fix some issues with my general home setup.

One of the worst things about my server has always been the case. When I built it, I bought the cheapest MicroATX case I could find on NewEgg that still had space for a few 3.5” drives. That case only had three drive bays, and I’ve been running six drives in it, with them basically just stuck anywhere they’d fit. The system drive, a 2.5” SATA SSD, literally rested on the CPU cooler because it was the only place I could fit it.

It was definitely due for an upgrade. I’ve been eyeing the Fractal Design Node 804 for a while, and finally went for it. It’s a fantastic case, with great airflow, loads of drive bays, excellent removable drive cages, and plenty of space to work in. Along with the new case, I moved over my third 12 TB HDD from my main PC, as a dedicated YouTube mirror drive, because I’ve reached a point where that’s a thing I need for some reason.

With the latest changes, my disks now look like this:

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 111.8G  0 disk
├─sda1   8:1    0   1.9M  0 part
└─sda2   8:2    0 111.8G  0 part /
sdb      8:16   0  10.9T  0 disk
sdc      8:32   0  10.9T  0 disk /storage
sdd      8:48   0   7.3T  0 disk
sde      8:64   0   7.3T  0 disk /archive
sdf      8:80   0  10.9T  0 disk
└─sdf1   8:81   0  10.9T  0 part /youtube
sdg      8:96   0   2.7T  0 disk
└─sdg1   8:97   0   2.7T  0 part /scratch

I’m still using btrfs for the RAID1 /archive and RAID0 /storage volumes. I added the /scratch volume with a dedicated disk used as a cache drive, and for heavy random IO that’s too big for the SSD (temporary transcoding, compiling Android ROMs, etc.).
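
For reference, multi-device btrfs volumes like these are created by picking data and metadata profiles at mkfs time. Something like this would do it (illustrative, using the device names from the lsblk output above, and obviously destructive to the disks):

mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc   # /storage
mkfs.btrfs -d raid1 -m raid1 /dev/sdd /dev/sde   # /archive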

The newly-added 12 TB drive mounted at /youtube is a good sign of my /r/datahoarder tendencies, as I’m already using 65% of its capacity. 3.4 TiB of that is just my Nerd³ archive, which I think has every video publicly listed on every official Nerd³ channel, and the complete UnofficialNerdCubedLive collection.

I still eventually plan to move to a many-disk setup with at least 8×12 TB drives in either RAID10 or a parity setup of some kind, likely on ZFS if I can justify the RAM requirement. I’ll probably also move off of Arch Linux as my host OS, maybe even going for something like ESXi. Arch is great, but having to reboot as often as I do for updates when I have a bunch of VMs, containers, and other services running is somewhat annoying. I’ll probably just end up using Debian or something and running everything in Docker containers.

I also really want to upgrade the network card. I’d love at least a 2.5 Gbps connection on it, as my main PC’s new motherboard has dual Ethernet with a 2.5 Gbps port. I could even just use a direct connection with a crossover adapter and skip buying a switch for now, since the only device that’d actually sustain > 1 Gbps is my main PC when I’m doing large file transfers. That’ll probably be my next upgrade.

WireGuard

January 23, 2020

I’ve always had various ways of connecting to my local network externally, from unencrypted VNC connections directly to my PC in the early days, to RDP, SSH tunnels, and eventually proper VPN setups.

Most recently, I was using SoftEther on my file server as my way into my local network, which is nice because it bundles support for most of the modern protocols out of the box, but is a bit of a pain to keep running correctly on Arch, and seems overkill for what I was doing. Not to mention having like 10 ports open for an occasional single VPN connection felt really silly.

Now though, I’m using WireGuard, and it is a much better experience. With native support in the Linux kernel, it “just works” out of the box on the client and server ends (both just peers in WireGuard’s implementation), and is really clean to set up and maintain. It only uses a single UDP port for incoming traffic, and works much better with my slightly weird networking setup.

Like basically everything on Linux, the Arch Wiki article on WireGuard is fantastic and has just about everything you could need to set it up.

Server configuration

My “server” configuration at /etc/wireguard/wg0.conf:

[Interface]
Address = 10.200.200.1/24
ListenPort = 51820
PrivateKey = [key here]

PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o enp1s0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o enp1s0 -j MASQUERADE

[Peer]
PublicKey = [key here]
PresharedKey = [psk here]
AllowedIPs = 10.200.200.2/32

[Peer]
PublicKey = [key here]
PresharedKey = [psk here]
AllowedIPs = 10.200.200.3/32

The Interface address should be a new subnet that is only used for assigning addresses to the WireGuard peers themselves. The PostUp and PostDown settings are used to update iptables to forward IP traffic from the peers through your primary network interface. Replace enp1s0 with whatever your interface name is (you can use ip link to list them).

If your server is behind a firewall, you’ll need to add a NAT rule allowing UDP traffic to it on the ListenPort you defined.

You can generate a private key with wg genkey, and generate a pre-shared key to give the clients with wg genpsk.
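
For example (the file names here are arbitrary; the umask just keeps the keys private):

umask 077
wg genkey | tee privatekey | wg pubkey > publickey
wg genpsk > presharedkey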

The Peer sections here are the “clients” in the network. You’ll want to generate a PSK to add here and to the peer when configuring it, then let the peer generate its own key pair to add to the server’s config. Each peer should either have a unique address or use a /24 subnet to allow it to be dynamically assigned an IP when it connects.

Once configured, you can use wg-quick up wg0 directly to start your server, or manage it as a systemd service:

systemctl start wg-quick@wg0.service
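
To have the tunnel come up automatically at boot, enable the unit as well:

systemctl enable wg-quick@wg0.service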

Client configuration

When adding a client peer, it works best to let the client generate its own key pair and just add its public key as a new peer. A client’s config should look something like this:

[Interface]
Address = 10.200.200.2/24
PrivateKey = [auto-generated private key here]
DNS = 1.1.1.1

[Peer]
PublicKey = [server public key here]
PresharedKey = [psk here]
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = wireguard.example.net:51820

The Address should be compatible with the AllowedIPs setting for that peer in the server’s configuration, and the Endpoint should be the hostname and port of your server. The DNS can be set to any DNS server that’s accessible once you’re connected. If you’re not forwarding traffic on the server end, this will need to be a DNS server in the WireGuard subnet if you want name resolution to work.