WireGuard

I’ve always had various ways of connecting to my local network externally, from unencrypted VNC connections directly to my PC in the early days, to RDP, SSH tunnels, and eventually proper VPN setups.

Most recently, I was using SoftEther on my file server as my way into my local network, which is nice because it bundles support for most of the modern protocols out of the box, but is a bit of a pain to keep running correctly on Arch, and seems overkill for what I was doing. Not to mention having like 10 ports open for an occasional single VPN connection felt really silly.

Now though, I’m using WireGuard, and it is a much better experience. With native support in the Linux kernel, it “just works” out of the box on the client and server ends (both just peers in WireGuard’s implementation), and is really clean to set up and maintain. It only uses a single UDP port for incoming traffic, and works much better with my slightly weird networking setup.

Like basically everything on Linux, the Arch Wiki article on WireGuard is fantastic and covers just about everything you could need to set it up.

Server configuration

My “server” configuration at /etc/wireguard/wg0.conf:

[Interface]
Address = 10.200.200.1/24
ListenPort = 51820
PrivateKey = [key here]

PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o enp1s0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o enp1s0 -j MASQUERADE

[Peer]
PublicKey = [key here]
PresharedKey = [psk here]
AllowedIPs = 10.200.200.2/32

[Peer]
PublicKey = [key here]
PresharedKey = [psk here]
AllowedIPs = 10.200.200.3/32

The Interface address should be a new subnet that is only used for assigning addresses to the WireGuard peers themselves. The PostUp and PostDown settings are used to update iptables to forward IP traffic from the peers through your primary network interface. Replace enp1s0 with whatever your interface name is (you can use ip link to list them).

If your server is behind a NAT router or firewall, you’ll need to add a port-forwarding rule allowing UDP traffic to reach it on the ListenPort you defined.

You can generate a private key with wg genkey, and generate a pre-shared key to give the clients with wg genpsk.
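As a sketch of that key-generation workflow (the filenames here are just examples, not anything WireGuard requires):

```shell
# Keep the generated key files readable only by the current user
umask 077
# Generate the server's key pair; the public key goes in each client's [Peer] section
wg genkey | tee server.key | wg pubkey > server.pub
# Generate one pre-shared key per peer, shared between that peer and the server
wg genpsk > peer1.psk
```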

The Peer sections here are the “clients” in the network. You’ll want to generate a PSK to add both here and to the peer when configuring it, then let the peer generate its own key pair and add its public key to the server’s config. Each peer needs its own unique address in AllowedIPs (a /32 here); WireGuard has no dynamic address assignment, and its cryptokey routing uses these ranges to decide which peer each packet belongs to, so they can’t overlap between peers.

Once configured, you can use wg-quick up wg0 directly to start your server, or manage it as a systemd service:

systemctl start wg-quick@wg0.service

To also have it start at boot, use systemctl enable --now wg-quick@wg0.service instead.

Client configuration

When adding a client peer, it works best to let the client generate its own key pair and just add its public key as a new peer. A client’s config should look something like this:

[Interface]
Address = 10.200.200.2/24
PrivateKey = [auto-generated private key here]
DNS = 1.1.1.1

[Peer]
PublicKey = [server public key here]
PresharedKey = [psk here]
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = wireguard.example.net:51820

The Address should be compatible with the AllowedIPs setting for that peer in the server’s configuration, and the Endpoint should be the hostname and port of your server. DNS can be set to any DNS server that’s reachable once you’re connected. If you’re not forwarding traffic on the server end, this will need to be a DNS server inside the WireGuard subnet if you want name resolution to work.
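Once the client config is in place, bringing the tunnel up and verifying it looks something like this (interface name and addresses assume the example configs above):

```shell
# Bring the tunnel up on the client
wg-quick up wg0
# Show the latest handshake time and transfer counters for each peer
wg show
# Verify you can reach the server's WireGuard address through the tunnel
ping -c 3 10.200.200.1
```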

Home Server Updates

The home server I re-set up last January has been working wonderfully. I’ve continued to run Arch Linux as the host OS, with a few VMs running various things from Minecraft servers to Pi-Hole, and kept btrfs as the filesystem for the storage drives. As I continue downloading and archiving far too much crap (my YouTube archive alone is now several terabytes), I needed to expand my capacity this year, swapping two of the 8 TB drives from the RAID10 array for 12 TB ones in a RAID0 setup.

My current btrfs setup now has two filesystems. The /archive filesystem runs two 8 TB WD Red drives under btrfs RAID1, used for backups and other long-term storage where resiliency is important. The /storage filesystem runs two 12 TB Seagate drives under btrfs RAID0, with metadata on RAID1, and is manually backed up to cold storage periodically. This has worked out really well, giving lots of room to grow and excellent performance. The disks never struggle to max out the 1 Gbps network connection on the server (which is just a cheap AMD A8 desktop), and I’ve had excellent reliability so far.
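For reference, creating filesystems like these is a one-liner each with mkfs.btrfs; the device names below are placeholders for your actual drives, and mkfs.btrfs will destroy whatever is on them:

```shell
# /archive: mirror both data and metadata across the two 8 TB drives
mkfs.btrfs -L archive -d raid1 -m raid1 /dev/sdb /dev/sdc
# /storage: stripe data across the 12 TB drives for capacity, but keep metadata mirrored
mkfs.btrfs -L storage -d raid0 -m raid1 /dev/sdd /dev/sde
# Check how space is allocated across the data/metadata profiles
btrfs filesystem usage /storage
```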

I did have one issue earlier this year, when multiple power outages combined with a bad RAM stick caused problems with the old 8 TB array, which partially prompted the upgrade. The filesystem was readable with btrfs’s recovery tools, but refused to mount, so I duplicated it all to the 12 TB drives, then created a new filesystem and moved data around until I reached my current setup. I also bought a third 12 TB drive, which I now use for my Downloads folder on my desktop PC since I’m really bad at cleaning up old downloads; having the extra space to keep things temporarily was useful during the migration too.

Long-term, I plan on completely replacing this server. It works great for my current use-cases, but once I buy a house in the next year or so, I’d like to set up a server rack with a rack-mounted 10 Gbps switch and a proper server, likely running a new EPYC CPU instead of an old A8. I’d love to have a setup where I can run a large number of VMs, all with dedicated storage devices and plenty of RAM, so I’ll likely be looking at building something with a large number of SSDs, and maybe having my bulk storage on a separate server with basically just HDDs, like a Storinator or a Backblaze pod. I don’t really have a reason to have that kind of storage, but I’d love to be able to dedicate huge amounts of storage and bandwidth to things like Archive Team warriors and other long-term archiving efforts, both personal and public.

I’ve already started building my own archiving tools, like my comic archiver, and plan on continuing to expand those efforts to cover everything I might miss in the future if it were to disappear from the Internet. Even just the things I’ve wget -r’d have grown into quite a large collection, so continuing to grow my storage makes sense to me. I don’t see myself ever truly needing petabyte-scale storage, but I love the idea of having a petabyte just because it’s awesome, so I may even go that far as drive capacities continue to increase.

A New Blog Style

I’ve rebuilt my blog’s visual style again, this time on Tailwind CSS with some fun new tooling. I was able to re-use most of my HTML from the old templates, since there’s not much to the theme, only swapping Bootstrap utility classes like mb-sm-3 for their Tailwind equivalents like sm:mb-3. The new CSS itself was fine, but the tooling around it was a bit more interesting.

This was my first time using PurgeCSS. I’m obsessive about keeping this blog loading as fast as possible, so making the new stylesheet tiny was part of my intent with this update. Tailwind generates a big stylesheet by default, breaking 500 KB even when minified; using PurgeCSS I was able to get this down to about 9 KB minified. It works really well with Jekyll, since Jekyll generates final HTML for every page on the site, making PurgeCSS’s selector matching quite painless. If you want to see the full (fairly simple) setup, check out this project’s webpack.mix.js, which uses Laravel Mix with PostCSS.
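The same idea works from the command line too. A rough sketch with the PurgeCSS CLI, assuming Jekyll’s generated output lives in _site and with the CSS paths adjusted to your own project:

```shell
# Scan the generated HTML for selectors actually in use and strip everything else
npx purgecss --css css/tailwind.css --content '_site/**/*.html' --output dist/css/
```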

I also set up Puppeteer with a simple script to generate a PDF of my Resume, since I haven’t had an up-to-date PDF for it in years. I’ve used Puppeteer in the past when building the StructHub print export feature and found it to be an excellent way to generate complex PDFs, with some annoying limitations since you’re still working in the context of a web page. Particularly for StructHub, a table of contents would have been amazing, but wasn’t really possible to generate in a reasonable way since page numbers in the final print view aren’t known until after the PDF is generated. We could have tried to render things out in JS with page-sized containers, but that could have led to all sorts of hard-to-fix bugs with weird content sizing inconsistencies. This new PDF is much simpler than that.

For a long time on this site, I refused to include any client-side JavaScript. Recently though, I set up a locally-hosted copy of instant.page, which is a JavaScript module that prefetches pages when there is an interaction with a link to them (either via hover or touchstart). I never intend to add JS to this site that has any effect on usability, or any user tracking, but this tiny module works great to make the website feel even faster than if it was a fully traditional, lightweight HTML site.

I also have no analytics on this site (and don’t even really keep server logging), so I have no idea how many people, if any, read it, but it’s still fun to work on as a side thing to experiment with new frontend technologies, new server configurations, and new design concepts. I’ll likely continue to redesign it every year or two as I get bored with whatever it is at the time.

Maybe I’ll even leave Jekyll. But probably not.

Inconsistency in the Windows experience

I still don’t like Windows 10. All of my original complaints are still issues, but there’s so much more to the Windows experience that is just bad.

One thing I’m frequently frustrated by is application management. On Linux, if you use it correctly, every application (graphical and CLI), every library, and even your OS itself is installed and updated consistently via the package manager. macOS isn’t quite as seamless, but applications are only really installed in three ways: the App Store, a .dmg image, and a package installer. You can use brew too if you want, which isn’t perfect, but works well enough.

On Windows, applications come with infinite varieties of installers, and install all over the place. Microsoft tried to standardize this with MSI packages and the Program Files directory, but both have limitations that many developers work around by simply not using them. Chrome, for example, installs per-user to an unprotected directory under %LOCALAPPDATA%, using a completely custom installer. Microsoft’s own installers are incredibly inconsistent as well, with Office, Visual Studio, VS Code, and Teams all using completely different installers (none of them MSIs), though they at least all install the bulk of their executables in Program Files.

Microsoft’s own first-party apps are also a great example of how inconsistent Microsoft is with visual styles. Even without installing anything extra, Windows 10 ships with WordPad, Calculator, and Paint, all of which look like they’re running on completely different operating systems built by completely different companies. It gets even worse with applications like Visual Studio and Office, where they have very custom UIs that don’t seem to use any of the native Windows UI.

The command-line experience on Windows is a mess too. Many things still rely on cmd, and that’s where many users’ knowledge of the Windows command-line will be, but with PowerShell being the default now it makes sense to want to transition. This shouldn’t be too difficult, except that PowerShell is almost completely incompatible with cmd. It’s also just bad 🙃

PowerShell is excessively verbose. Things like:

MKLINK C:\src C:\dest

become:

New-Item -ItemType SymbolicLink -Path C:\src -Value C:\dest

…for some reason. You can always use cmd.exe /c ... from PS, but it’s incredibly stupid that that’s necessary. It’s also incredibly ridiculous that Windows by default doesn’t even let you run scripts in PowerShell until you change the execution policy (e.g. Set-ExecutionPolicy -Scope CurrentUser RemoteSigned). I get that running scripts can potentially be insecure, but so is running any executable, and those don’t require any system configuration changes to enable (yet). PowerShell is decent for Windows system administration, but for development purposes, where everyone’s been using bash for decades already, it just feels so terribly implemented.

PC Settings is another great place to look if you want to see just how unfinished and pieced-together Windows 10 is. It’s gotten a lot better over the years, but about 2/3 of the advanced options in PC Settings still just open Control Panel windows for things, and many of the things that have been migrated over are missing much of the functionality of the original control panel items. Control Panel was really, really bad, but at least it had everything you needed somewhere in there.

Windows 10’s Dark Mode is a welcome new feature, but it needs so much more work to feel complete that it’s only usable as a neat re-skin rather than something to actually make your whole OS dark. Linux has had fantastic theme support (including light/dark options) through GTK and Qt forever, and Apple went all-out on their dark theme in Mojave, completely updating all of their first-party apps and making a fully redesigned dark appearance of AppKit available to third-party apps without their devs needing to do much at all to implement it.

Overall, Windows 10 has definitely improved over the years, but it still has a long way to go to be an OS I’d consider good.

Building the Web, Properly

It’s 2018. The web has more potential to be incredible now than ever before, with all major browsers supporting a huge number of powerful modern technologies, including HTTP/2, SVG, and CSS Flexible Boxes and Grids. With all of this consistency in support and optimization in engines and network stacks, our websites should be faster and more robust than ever.

But they’re not.

We’ve reached a point in the industry where most new web developers learn just enough code to start building, but are never exposed to the greater needs surrounding development. Things like accessibility, compatibility, and performance are afterthoughts, rather than core concepts within a website’s architecture. Even simple things like UI state changes due to user input are being replaced with buggy JavaScript that only occasionally approximates what the browser could do natively if only we’d let it.

Let’s take the new web interface for YouTube as an example. It’s fairly clean, UI-wise, but it’s built on Polymer, which relies on polyfills in every browser except Chrome and Safari, resulting in very poor performance throughout the site, even on incredibly high-end PCs. An even more painful example is Google Play Music, which is also built on Polymer, but uses such an incredibly large number of custom element instances that it absolutely tanks even the fast native Chrome implementation when doing things as simple as scrolling through a playlist.

And then you have sites like the Lego Shop. It was rebuilt in React a couple of years ago, right before their big winter holiday sale, and the rewrite completely broke a huge portion of the major features. To this day, there are still core features, like adding to the shopping cart and wishlist, that take a really long time to complete with no visual indication of activity, and often don’t complete at all. Session data gets very broken as you navigate the site, with AJAX calls updating quantities and other user-specific information on huge delays, sometimes taking tens of seconds to show that your cart isn’t empty, even on the cart page itself! There’s no real reason for this, since even with React and a fully-static frontend, most of this data could be safely persisted between page loads, and even prefetched for critical data used widely throughout the experience.

The new Bitbucket UI is also very poorly engineered in many ways. Despite being a single-page application, it sometimes reloads the entire navigation UI through a several-second AJAX call, even when just clicking a tab within the existing navigation. It also takes an incredibly long time to load very common information, like file listings and settings pages, the latter of which should be fairly easy to generate on the server side. I really have no idea why Bitbucket is so slow.

Not everyone is doing a terrible job though. GitHub is probably my favorite example of a website built just about perfectly. It’s incredibly fast, using a perfect hybrid of AJAX and full page loads with minimal assets on each page, resulting in navigation that’s basically instant and a UI that stays visible and interactive throughout. Their site is designed so well that even the URLs are intuitive and hackable. Unlike Bitbucket, which seemingly uses random names for many parts of the application that don’t correspond with the UI, GitHub structures their entire site in a very obvious-feeling way that really makes me wonder why other people overcomplicate things so much. GitHub even goes as far as using things like the <details> element for dropdown menus, which is more semantic than CSS checkbox hacks and doesn’t require any JavaScript to function. I really admire that level of care in engineering.

Personally, I’m trying to keep everything I build as standards-compliant, cross-browser friendly, and performant as I can. I try to find a good balance between cacheable static pages and completely dynamic experiences, and still try to avoid JavaScript where possible to keep things instantly usable and more resilient. Where I do need complexity, I usually stick with Vue.js, so I can avoid adding overhead to pages and site components that don’t need a lot of custom behavior. Minifying assets, setting good TTLs, and serving things over HTTP/2 are all easy wins for performance, but the easiest thing you can do is just remove code that doesn’t need to be there. If you have several megabytes of JavaScript on your site, you’re almost certainly doing something very, very wrong.
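A quick way to sanity-check a couple of those easy wins on any page (the URL here is just a placeholder):

```shell
# Check protocol, compression, and caching headers for a page in one request
curl -sI --http2 -H 'Accept-Encoding: gzip, br' https://example.com/ \
  | grep -iE '^(HTTP|content-encoding|cache-control|content-length)'
```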

Stop wasting so much bandwidth :P