Phpizza Blog

Games on Linux

July 21, 2021

It’s been a while since I’ve tried to play games on Linux. I ended up back on Windows as my only gaming PC a few years ago, mostly because of Oculus’s lack of Linux support for VR. With Windows 11 doing some… interesting things, I think it’s time to try again.

Steam’s built-in Proton compatibility layer can be enabled for all non-native games under Settings > Steam Play > Advanced, and does its best to run whatever you throw at it. It mostly just works.

My usual Linux PC

I only have one dedicated Linux PC that’s not a server right now: a relatively low-end HP EliteDesk with an Intel Core i5-6500T and HD 530 graphics. Certainly not suitable for modern 3D games, but perfectly usable for something like Terraria, Minecraft, or Celeste.

Native games via Steam

  • Celeste has a native Linux build that works perfectly, apart from the Steam overlay which doesn’t render any text/images.
  • Psychonauts mostly works. Switching to fullscreen broke HDMI audio, but fixed mouse capture issues in windowed mode.
  • Stardew Valley works great natively. Drops to 40 FPS at 4K, but this PC isn’t really meant for that anyway. Modding will probably be a bit more complicated on Linux than Windows but definitely still possible.

Playing via Proton (Steam’s custom Wine)

  • A Hat in Time took quite a while to process shaders and install dependencies on first launch. Fully saturated 3 CPU threads for maybe 20 minutes processing shaders, which is skippable but probably ideal to let run. I also have 7 GiB of Workshop mods installed, which probably doesn’t help stability or performance, but after a while it did launch the game. Since this is running integrated graphics, it gave me a warning, but running it at 720p low settings gets a somewhat usable 30-60 FPS (very inconsistent, lots of stutters). No major input mapping/delay issues like Psychonauts had, even in windowed mode.
  • Sonic Mania runs perfectly at 4K fullscreen/windowed, solid 60 FPS. Turning on screen filters drops the framerate a bit at 4K. No noticeable input lag or glitchiness.
  • Psychonauts under Proton has much better input handling, but still breaks mouse input when switching apps/resolutions. It also has a persistent “important but non-fatal problem” error text rendered all the time, with no additional detail. Other than that, it seems to run perfectly, with fewer issues and better performance than the native build.

An actual gaming PC

I also have a proper gaming PC running Windows 10, though it’s mostly used for YouTube these days.

3D via Proton, but with an actual GPU this time

  • A Hat in Time - This runs very well on my gaming PC via Proton. There are some occasional hitches when it loads new assets for the first time, presumably because it’s translating shaders to Vulkan or something, but once you’ve been on a map for a bit it runs fairly smoothly. It’s not quite the Windows experience, but it’s very close. Easily gets >60 FPS at 4K on this PC.
  • Skyrim - Works perfectly via Proton. Loads fast, runs on maxed out settings, mods work (with some effort), it’s great.

Non-Steam games

  • GOG doesn’t have an official client, and the most popular third-party client doesn’t recognize Proton as a usable Wine install yet. Since GOG doesn’t use DRM though, most games work well under Proton when added as “non-Steam” games in the Steam UI, and should work when launched manually under Proton via the command-line. Not a perfect experience, but very usable.
  • has native Linux support for its client, but the client limits downloads to games with Linux builds available. I tried a few of those and they work great. Windows builds can still be downloaded from the Itch site in a browser, and adding them to Steam or running them manually should work for most simpler engines. Ren’Py and GameMaker-based games (both of which offer native Linux builds as a target for devs) work basically perfectly under Proton.


VR

For me to ever fully leave Windows behind, I’ll need decent VR performance. Luckily Valve officially supports the Index on Linux, so I’ll be trying that out with a variety of games.

  • SteamVR itself works okay. It detected my Index hardware, which was left connected the same way it’s been on Windows, and walked through the initial room setup without any issues. Once in the headset, a few problems started:
    1. The base stations, which I had configured to turn off automatically on the Windows setup, did not automatically turn on and had to be manually power cycled.
    2. The SteamVR UI dropped lots of frames and sometimes didn’t properly match the headset motion. This could be due to running at 120 Hz, and it may have worked better at 90 Hz, but it’s an Index, and I’m not going to run a $1000 headset below its supported specs.
    3. The audio was unusable on the latest Nvidia driver. It “worked” in that it detected the device over DisplayPort, and played back audio to it, but it sounded like it was coming over a phone line that was being repeatedly used to short a high-voltage AC circuit. Adjusting settings a bit completely broke playback on the device until a reboot, with no improvements.
  • Beat Saber via Proton was finicky to get launched: it involved clicking “OK” in a dialog box only visible in the SteamVR overlay, which didn’t auto-open (I had to manually enter the SteamVR menu while waiting for the game to load). Once it launched, it seemed to work basically perfectly, which was very surprising. Sadly, because the audio didn’t work at the OS level, I was unable to actually play it.

And that ended my attempt at VR on Linux. If the weird UI lag and the audio driver problem were fixed, I could see myself actually moving to Linux for VR games, which is honestly not something I thought would be this close to usable. I’m particularly impressed that the connection between Beat Saber running under Wine/Proton and the SteamVR runtime worked perfectly out of the box, which clearly shows that this is something Valve has been focused on.

GitHub Copilot

July 08, 2021

Microsoft’s acquisition of GitHub was going to change software development in many ways. This wasn’t one I expected.

Since the introduction of Visual Studio Code, it’s been clear that Microsoft is actually serious about leveraging the open source community to further its developer tooling. While many were opposed to the acquisition of GitHub (myself included), I feel like overall it’s been a good thing for them.

GitHub was struggling financially, despite having a massive userbase and a solid feature set, and Microsoft definitely solved that. The introduction of free private repositories for both individuals and organizations is great, as long as you trust Microsoft with access to your source code, and would never have happened before the acquisition.

GitHub Codespaces is another thing Microsoft introduced that I doubt anyone else could’ve brought. It’s fantastic: you get instant access to a hosted Azure VM with your repository on a Linux installation that you can install basically whatever you want on. I’ve used it with Node apps (including this blog!) with no trouble at all, and even PHP apps with a MySQL server work great. I figured Codespaces was going to be the new flagship feature with all of the focus, because it’s already so great. But apparently Microsoft had something more ambitious in mind.

Enter Copilot

GitHub describes Copilot as “Your AI pair programmer”, which sounds both incredible and unrealistic. It’s currently available as a preview to a limited group of people, and works as an autocomplete extension for VS Code. I’ve only been using it for a day so far, but it already shows a lot of promise.

Over the years, many developers have wondered if there was a point where “AI” could replace them, or at least dramatically reduce the need for their skill set. While this definitely doesn’t remove the need for a developer, it’s genuinely impressive how well it can seemingly understand and complete a codebase.

In my day job, I primarily work in Laravel, Vue, and Magento. I also do a decent amount of work in C#, but most of that isn’t .NET Core, so I can’t do it in VS Code. Copilot was trained on most, if not all, of the public repositories on GitHub, so it can understand many languages and frameworks, with varying degrees of accuracy.


In Magento, I’ve had Copilot auto-generate entire framework-specific blocks of code from only a comment in an existing function, with enough accuracy that they were usable as-is, once I injected the necessary dependencies (something Magento is notoriously messy with, and something that’s easy to miss as a human developer too). As an example, I needed to set a cookie from a controller. In Magento, this is done with the CookieManager.

In the controller’s main function, I wrote this comment:

// Set customer cookie value

And Copilot gave me this:

  $this->_objectManager->get(\Magento\Framework\Stdlib\CookieManagerInterface::class)
      ->set('customer_id', $tokenModel->getCustomerId());

This is completely valid code that works in the context. It isn’t the recommended way of instantiating the CookieManager class, with dependency injection in the class constructor being preferred, but people often ignore that recommendation, which is presumably why it knows about using the Object Manager directly in the first place.

The $tokenModel is a model instance that’s available in this context, and it does, in fact, return the relevant customer ID with the ->getCustomerId() method.

It could easily have just recommended PHP’s native setcookie() or some other popular framework, but it understood that I was in a Magento project and used framework-specific code from Magento. Even using ^Enter to show 10 different recommendations returned 10 distinct Magento-specific solutions, each using the framework and the local variables in a different way, solving slightly different problems. All of them were accurate to the initial “Set customer cookie value” comment prompt.


In a Vue app, I had an issue where a critical part of a user flow relied on a popup triggered by a call in an async callback. My goal was to just redirect to the destination if the popup was blocked, since in this case that’s a much better experience than nothing happening. I started by writing a new function:

function isPopupBlocked(

Which Copilot autocompleted to:

function isPopupBlocked(popupWindow) {
  return popupWindow.document.body === null;
}

This works for some, but not all, browsers. Some browsers throw an exception on the document access instead, so I wrapped the call in a try/catch block:

try {
  const w =;
} catch (e) {
}


I started typing an if statement after the call, and it gave me this:

const w =;
if (w.document.body === null) {
  window.location = url;
}

Then I went to catch and it auto-filled the same thing, which was exactly what I wanted:

try {
  const w =;
  if (w.document.body === null) {
    window.location = url;
  }
} catch (e) {
  window.location = url;
}

All of this is fairly simple, and all things I could quickly do myself, but it saved a lot of repetition and time usually spent looking for other implementations and documentation.

I hope that when Copilot eventually leaves preview it isn’t prohibitively expensive, as it could easily become something that requires a volume license for corporations or that kind of thing. I already feel like it’s a helpful tool that I’d love to use on all of my projects going forward, and I’m excited to see what comes next.

I don't understand Windows

June 26, 2021

I don’t understand Windows anymore. I don’t understand what Microsoft is doing with it at all, and I’m not sure they do either. Since Windows 7, the operating system has changed in dramatic ways every few years, often rolling back changes and releasing features users vocally hate. They’ve been slowly losing the battle against Chrome OS, and even macOS and Linux, particularly in the consumer space.

I doubt Microsoft will ever truly lose their hold on the enterprise workstation market, but many companies have moved to Linux-based operating systems for the majority of their servers, thanks to the lower up-front costs and generally better stability and server software support. Google’s incredible success with Chromebooks has dramatically changed the landscape for entry-level notebooks, a space where Windows used to be the only real option, and many users in need of higher-end systems have ended up with Macs recently, in part due to the inconsistent experience of Windows 10 and general PC quality issues. Consumer PCs have suffered for a long time from preinstalled bloatware, poorly matched hardware, terrible trackpads, and bad battery life, and some users (myself included) have lost interest in traditional PC notebooks.

I’ve recently gone as far as considering moving my last main Windows PC over to Linux, as the last thing really holding me to Windows has, for a long time, been gaming. I don’t really play many games, but when I do, I want them to run well, particularly in VR. Valve officially supports the Index on Linux, and many games work well enough under Proton now that I may permanently leave Windows behind at some point. I’ve loved my experience with Linux as a desktop OS in recent years, including running Arch Linux as my only operating system on several PCs, and my new Apple Silicon MacBook has solidified the decision to not bother with non-Apple notebooks going forward.

All of this is a bit weird though. My first real use of Windows was in the XP days, though I did use 95 and 98 for a while before that on my parents’ PCs. XP has an incredible reputation, for good reason. It was user-friendly, attractive, fast, reliable, and had great forwards and backwards compatibility. Many users continued to run XP long after its EOL, and I still keep an XP VM and an airgapped XP netbook around, mostly for nostalgic reasons.

My next computer and first laptop, a Sony VAIO VGN-NR120E (I still remember that model number off the top of my head for some reason), shipped with Windows Vista. While Vista was generally regarded as a terrible operating system, I actually loved it. It was beautiful, still the best looking software I’ve ever used. It wasn’t exactly fast, but it wasn’t generally unstable for me. My device had solid drivers, and the vast majority of the software I needed worked completely fine with no compatibility issues. The only place I really had trouble was using LAN features like file sharing with PCs still running XP, but that wasn’t something I did often. Vista seemed like a necessary step to bring Windows into a newer generation, despite its apparent issues.

Windows 7 may be loved even more than XP. While I did like 7, it honestly didn’t feel all that different from Vista for me. It was slightly uglier, with a less opinionated look and unnecessarily large UI elements for the limited display sizes of the day, but did a lot of things generally better. UAC was changed to be less intrusive by default, Start was updated with a built-in search (still the best search Windows has ever natively had), and the UI had a bit more simplicity to it, with less clutter. I never got into the combined taskbar buttons, and still use them separated with labels and a small taskbar size, all the way from Windows 7 to 10. Hopefully that’s still an option on 11, assuming I ever actually use it.

Windows 8 was a weird one. I actually really liked it overall, despite the obviously terrible UI choices. It was very fast, very stable, and brought a lot of really nice new features to the desktop experience. Sadly, all of that was overshadowed by Microsoft clearly overestimating the adoption of Windows 8 tablets. The full-screen “Metro” UI was terrible, even on tablets, relying on unintuitive gestures and completely removing many key UI features, including of course the infamous Start button. Microsoft quickly followed it up with Windows 8.1, which brought back the Start button along with some general UI improvements, and eventually the stupidly-named Windows 8.1 Update 1 with further refinements.

Windows 10 has a Start button. People like that, so the general public considers it better than Windows 8. It has a lot of other good things too. It launched with a much-improved (though I personally still hate it) Start menu, a new feature-rich Task Manager, improvements to Explorer, Microsoft Edge (the old one that’s gone now but was generally pretty good), more window snapping options, a redesigned PC Settings that didn’t have 90% of what was in Control Panel, DirectX 12, and a bunch of other neat things.

It was supposedly “the last Windows”. With a general pattern of two major updates per year, Ubuntu-style, Microsoft has been steadily adding features and changing things up in Windows 10 since its release in 2015. This has included excellent things like the Xbox Game Overlay, GPU information in Task Manager, the Windows Subsystem for Linux, new Hyper-V features, and the fantastic Chromium-based Microsoft Edge.

Sadly, both the RTM version and these periodic feature updates have also brought a lot of features no one wanted. Start search including Bing results, which are hard-coded to launch in Edge instead of the user’s default browser, large amounts of forced telemetry and online-only features, the weird news/weather widget that everyone immediately disabled upon launch in 21H1, the constant redesigns of PC Settings, none of which have been complete or stable… the list goes on.

As an aside, I find it incredibly silly that Canonical has never broken their Ubuntu release cycle, while Microsoft has screwed theirs up so many times that they’ve stopped naming releases after months and switched to a generic “20H2”-style build number. They even once rolled back a release and didn’t push it out again until the next year.

What’s weird about all of this is that none of it seems to address what Windows users most vocally request and complain about. Microsoft’s own PowerToys includes many of the things users have asked for, including PowerToys Run, which is exactly what Start search should be (and what it used to be before Windows 10), which shows just how inconsistent Microsoft’s internal direction is. My personal issues with Windows lie primarily with the aggressive use of user data and the forced-online features that have no reason to be online (like Start search). I also find it incredibly ugly, with easily the least consistent visual style of any operating system Microsoft has released (if you disregard the full-screen vs desktop disconnect in 8.x).

Windows 11 doesn’t seem to address any of these common issues. It is much prettier at a first glance, but time will tell if there’s any actual consistency to the redesign, or if we’ll just have yet another visual style applied to a handful of first-party applications. The centered taskbar is just weird, and honestly the whole UI feels like a shameless (and poorly-implemented) ripoff of macOS Big Sur. I don’t even like Big Sur.

There are so many incredible teams at Microsoft building such incredible things, and it’s just sad to see Windows struggle. I really hope Windows 11 is good, but the forced requirements of a Microsoft account and Secure Boot feel like too much lock-in for me, despite the fact that I currently use both on all my Windows PCs. I love so much of what Microsoft has been doing lately, and I’m going into the next generation of Windows with some hope, but given my current opinion of Windows 10, I really don’t know if I’ll like it.

I wrote this whole thing on battery power on my MacBook Air in VS Code, while playing music and occasionally referencing some things in Safari, and I’m still at 100% battery. Apple Silicon is amazing.

It's purple now.

June 22, 2021

The blog is purple now. With a bit of yellow. I’ve probably spent too much time on the Playdate landing page lately or something. Or maybe I’m channeling some Sailor Moon vibes. I don’t know. It’s purple now.

I’ll add more colors later, I like colors.

Embracing Darkness

February 24, 2021

When I first did the redesign of this blog last year, I debated whether I should include a toggleable dark theme. There are a lot of pros and cons to offering a dark theme, and complexities in how it should be implemented, and it was stuff I just didn’t feel like getting into at the time. Since then, though, Tailwind 2.0 has been released with native dark mode support, and my React knowledge has improved to the point where I’m very comfortable using the new functional components.

Initially, my plan was to just match the user agent’s configuration for the theme, as that’s easy to implement natively in CSS. There are downsides to that, though: many people set their OS to a dark theme but prefer reading long-form content on a light background, which is much easier to read in most environments. Instead, this implementation uses localStorage to persist your theme selection, defaulting to matching your global configuration. This is fairly simple under normal circumstances, but with React and server-side rendering it gets a bit more complex. The end result is simple but good enough: a dropdown menu in the navbar lets you select between “Auto”, “Light”, and “Dark”. Your selection, along with your global setting, controls toggling a dark class on the html node, which applies the overridden styles.
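The core of that logic boils down to something like this. This is just a sketch of the behavior described above; the applyTheme name and the theme storage key are mine, not the site’s actual code:

```javascript
// Sketch of the theme toggle: persist the dropdown selection, then decide
// whether the dark class belongs on the html node.
function applyTheme(selection) {
  // selection is 'auto', 'light', or 'dark', persisted across visits
  localStorage.setItem('theme', selection);
  const prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
  const dark = selection === 'dark' || (selection === 'auto' && prefersDark);
  document.documentElement.classList.toggle('dark', dark);
}
```

On load, the saved selection is read back with localStorage.getItem('theme'), defaulting to 'auto', and applied the same way.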

This was implemented using Tailwind 2’s excellent dark mode feature, which adds a dark variant to the utility classes, making it easy to declare inline how a component should alter its appearance in dark mode. Along with this upgrade to Tailwind 2, the color palette changed a bit. Previously, Tailwind offered a single “gray” with a slightly cool temperature; now there are five total “gray” options, including a true-neutral gray. I’m using both the cool gray and the neutral gray in this redesign, with the cool gray for components and the neutral grays in the page body. I also implemented a custom “teal gray” palette for use in the dark theme, which is hue-matched with Tailwind’s main “teal” palette.
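In config terms, that setup looks roughly like the following. This is a sketch under Tailwind 2, and the tealGray values are placeholders, not the site’s real palette:

```javascript
// tailwind.config.js — rough sketch of the Tailwind 2 setup described above.
const colors = require('tailwindcss/colors');

module.exports = {
  darkMode: 'class', // the dark class toggled on the html node
  theme: {
    extend: {
      colors: {
        // cool gray for components, true-neutral gray for the page body
        coolGray: colors.coolGray,
        trueGray: colors.trueGray,
        // placeholder values, hue-matched to Tailwind's teal palette
        tealGray: {
          700: '#2b4a4a',
          800: '#1f3d3d',
        },
      },
    },
  },
};
```

With darkMode set to 'class', the dark: variants only apply when the dark class is present, which is what lets the dropdown override the OS preference.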

I’m fairly happy with how the dark mode’s colors came out, but I don’t love the new light mode colors. The contrast between components in the navbar isn’t as clear, and the new gray is less saturated there too. Losing the slightly saturated gray from the header backgrounds and other content areas also isn’t as nice looking, and I plan on reintroducing something like it in a future update. I’ll likely adjust the colors more than just a minor correction, possibly adding several new colors to various components to liven up the site.

There was basically no reason for me to do any of this, as I didn’t really learn anything new, and I actually prefer the old design, but it felt like a necessary first step to a more thorough refresh, that I’ll inevitably do at some point over the next year or so.

Maybe I’ll even actually post something.