Phpizza Blog

Applying SOLID Principles to Modern PHP

August 31, 2021

The SOLID principles are one of several sets of software design principles that apply readily to most modern software development. I work primarily in back-end PHP development, where object-oriented programming is still the norm and SOLID is of particular relevance.

In the modern PHP world, most new development is done in one of a few major frameworks. My framework of choice is Laravel, but each imposes some level of implied and explicit structure on your application, which can determine a lot about how you plan the specifics of new feature implementations.

The SOLID principles include:

  • Single responsibility — Each class should have one responsibility
  • Open/closed — Objects should be open to extension, but closed for modification (private attributes, immutability, etc.)
  • Liskov substitution — An extended object should behave the same as the object it extends for the behavior the original object exposes
  • Interface segregation — Functionality should be split into small logical objects
  • Dependency inversion — Rely on abstractions and interfaces when building common implementations, but don’t couple abstractions to an implementation

I personally find that these principles feel somewhat obvious once you’ve worked in OOP for a while, and that the SOLID acronym does little to help remember or teach them (unlike, for example, DRY). However, they’re still important to learn and truly understand, particularly when designing an application in its early stages, to avoid running into major problems when trying to scale it later.

Let’s take a look at how each one affects how I architect a PHP application.

Single responsibility

One of the most common approaches to structuring a modern web application is the MVC pattern. Many variations of this pattern exist, but the basic concept is that you break an application into at least three major areas:

  • Models, which represent the data objects (e.g. a record in a database table)
  • Views, which represent the renderable representations of data
  • Controllers, which connect the models and views for a given request

Within this simple architecture, there are a lot of things left open. Choices like where to put business logic, how to handle authorization, and many other common realities of an application are not predetermined.

To keep classes responsible for only their relevant behaviors, it is necessary to supplement MVC with additional areas. Common additions include Middleware and View-Models, which offer greater control over the different layers of the application.

In particular when building in a modern framework, splitting code into smaller logical areas within each MVC area is important to keeping things clean. As an example, putting everything related to a user login process in a single controller class may make sense when it only includes one or two routes to handle showing a login page and processing the POST action, but can quickly become too complex when you start including validation, two-factor flows, password resets, etc.

Instead of putting everything involved in a user login in a single controller class, the behavior could be split into several classes:

  • A controller for each action (showing the form, handling the POST, etc.)
  • A data transfer object for representing the form data, which performs validation
  • A view model that ties the DTO values to the view

Each of those classes has one clearly defined role within the action the user is performing, making it easy to find and change that behavior without affecting anything unexpected.
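As a rough sketch of that split (all class names here are illustrative, not taken from any particular framework), the DTO and view-model pieces might look like:

```php
<?php

// Hypothetical DTO: holds the submitted login form data and validates it.
final class LoginData
{
    private string $email;
    private string $password;

    public function __construct(string $email, string $password)
    {
        if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
            throw new InvalidArgumentException('Invalid email address.');
        }

        $this->email = $email;
        $this->password = $password;
    }

    public function getEmail(): string
    {
        return $this->email;
    }

    public function getPassword(): string
    {
        return $this->password;
    }
}

// Hypothetical view model: exposes only what the login view needs to render.
final class LoginViewModel
{
    private LoginData $data;

    public function __construct(LoginData $data)
    {
        $this->data = $data;
    }

    public function emailForDisplay(): string
    {
        return htmlspecialchars($this->data->getEmail(), ENT_QUOTES);
    }
}
```

A controller for the POST action would build the LoginData from the request, so a change to the validation rules only ever touches one class.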

Open/closed

When designing a class, it’s important to pay attention to which attributes are handled by the class internally, and which should be accessible to both outside accessors and child classes that extend the class. Failure to correctly enforce these conventions can have huge impacts on the quality of a codebase, and cause classes and behaviors to become entangled in very unpredictable and unmaintainable ways.

Various conventions and language constructs exist to both imply and enforce the open and closed nature of properties and methods on a class, including a very useful readonly keyword coming to class properties in PHP 8.1 later this year. I typically find that I make most class properties protected, and most methods public.

In cases where a class needs to perform some operation as an internal effect of a public method call, it often makes sense to put that behavior in a protected function within the class, especially if it is used in multiple places within the class.

When you do need to share a property value outside a class, most applications currently do so indirectly through getter functions, rather than exposing the property publicly. PHP 8.1’s addition of the readonly keyword will change this in many cases, as it will be safe to set a value once within a class constructor and make it accessible publicly without allowing external behavior to mutate the property.
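A sketch of the current getter convention, with the PHP 8.1 readonly equivalent shown in a comment (Invoice is a made-up example class):

```php
<?php

// Today's convention: keep the property protected and expose it via a getter.
class Invoice
{
    protected string $number;

    public function __construct(string $number)
    {
        $this->number = $number;
    }

    public function getNumber(): string
    {
        return $this->number;
    }
}

// With PHP 8.1's readonly properties, the getter becomes unnecessary:
//
//     class Invoice
//     {
//         public readonly string $number;
//
//         public function __construct(string $number)
//         {
//             $this->number = $number;
//         }
//     }
//
// $invoice->number is then publicly readable, but attempting to reassign
// it after construction throws an Error.
```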

Using protected instead of private when declaring properties and methods is particularly helpful down the road when you need to extend a class to modify or wrap some portion of its original behavior. It is possible to override private class members, but doing so often requires completely reimplementing a large amount of the original class’s behavior, which can be error-prone and introduce many side effects, as well as making it more difficult to maintain.

Liskov substitution

When extending classes, it’s important to keep in mind that the class may be used in a variety of ways, especially when building a library that’s used in multiple projects. To avoid breaking implementations, keeping the public behavior consistent is critical.

A common point of concern is method signatures. Given a method that expects a certain list of parameters, extending that class should not change the order, type, or structure of those parameters. When it is necessary to supplement the parameters, ensure that calls using parameters designed for the original implementation still behave as they do in the parent class.

Here is a simple example of extending a class while preserving the initial behavior:

class ObjectA
{
    public function updateCount(int $count): void
    {
        // ...
    }
}

class NotifiableObjectA extends ObjectA
{
    public function updateCount(int $count, bool $notify = false): void
    {
        // ...
    }
}

The default value on $notify is required so that existing callers, which pass only $count, continue to behave exactly as they did against the parent class.

One of the best ways to avoid accidentally changing method signatures is to use strict types for arguments and return values whenever possible, since PHP will raise an error when an overriding method’s signature is incompatible with the parent’s.

Without defined types, we could easily break this behavior without the language enforcing anything:

class ObjectB
{
    public function updateCount($count)
    {
        // ...
    }
}

class NotifiableObjectB extends ObjectB
{
    public function updateCount($notify, $count)
    {
        // ...
    }
}

PHP 8 introduced named arguments, which mean that the name of each parameter in your method definitions must also stay unchanged in new implementations. When targeting PHP 8, it’s best to avoid renaming parameters on any public method at all, as doing so can easily break functionality anywhere in an application.
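A small sketch of why that matters (Mailer and its parameter names are hypothetical):

```php
<?php

class Mailer
{
    public function send(string $recipient, string $subject = ''): string
    {
        return "To: {$recipient} / Subject: {$subject}";
    }
}

// With PHP 8 named arguments, callers are coupled to the parameter names:
$mailer = new Mailer();
echo $mailer->send(recipient: 'user@example.com', subject: 'Hello'), "\n";

// If a child class (or a later refactor) renames $recipient to $to, the
// call above throws an Error at runtime, even though the positional
// signature never changed -- parameter names are now part of the contract.
```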

There are many more complex things to be aware of to safely extend a class, but they are often more implementation-specific, and a result of poor design of the initial class. In those cases, it’s often necessary to find where the class is being used and manually verify that the behavior is not changed unexpectedly.

Interface segregation

Segregation of code goes along well with both the Single Responsibility and Dependency Inversion principles. In particular, it’s important to keep classes and interfaces small, splitting common behavior into separate interfaces.

This is particularly helpful to avoid requiring an implementer to add definitions for behavior that is not supported or is unrelated to an implementation of an interface.

For example, let’s define an interface of common features of a YouTube channel:

interface Channel
    public function getId(): string;
    public function getName(): string;

    public function listVideos(): array;
    public function listPlaylists(): array;
    public function listPlaylistItems(string $playlistId): array;

This works fine for YouTube, but later on if it’s necessary to support another video platform that doesn’t include playlists for example, the expected behavior of the playlist-related methods becomes a lot more complicated.

You could simply return an empty array when listPlaylists is called, but what about listPlaylistItems? Do you throw a generic exception? Do you introduce a new exception type and hope other implementations also throw the same exception? All of these options have negative side effects and can break the application and result in widely varied implementations, which is exactly what we’re trying to avoid by using an interface!

Instead, it’s best to define interfaces in a way that is more flexible for future implementations:

interface Channel
    public function getId(): string;
    public function getName(): string;

    public function listVideos(): array;

interface ChannelWithPlaylists extends Channel
    public function listPlaylists(): array;
    public function listPlaylistItems(string $playlistId): array;

If you architect the application this way from the beginning, code that’s aware of the playlist functionality and needs to work with playlists can easily check if it’s supported by the platform via $channel instanceof ChannelWithPlaylists.
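As a sketch of that capability check in use (BasicChannel and describePlaylists are illustrative names, and the interfaces are trimmed down to the relevant methods):

```php
<?php

interface Channel
{
    public function listVideos(): array;
}

interface ChannelWithPlaylists extends Channel
{
    public function listPlaylists(): array;
}

// A platform without playlist support only implements the base interface.
class BasicChannel implements Channel
{
    public function listVideos(): array
    {
        return ['video-1', 'video-2'];
    }
}

// Consumers check for the capability rather than guessing at behavior.
function describePlaylists(Channel $channel): string
{
    if ($channel instanceof ChannelWithPlaylists) {
        return count($channel->listPlaylists()) . ' playlists';
    }

    return 'playlists not supported';
}

echo describePlaylists(new BasicChannel()), "\n"; // playlists not supported
```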

This principle can be hard to implement effectively since you never really know what’s going to happen to your application in the long-term, but doing what you can to get this right early on can be very impactful.

Dependency inversion

Correctly structuring the layers of an application is important to avoid making things too tightly coupled and dangerous to change. For example, it is common to have a Controller that connects a DTO to a View-Model.

A View should not be accessing an HTTP request directly, for many reasons including security, validation, and extensibility, so additional layers to facilitate that flow are necessary to ensure all aspects of the HTTP request can be handled in an intuitive and maintainable way.

It is also important to use interfaces and abstract classes whenever there is a chance that an object will need to be implemented differently in another context. Doing this before you have a need for multiple implementations is important, as it’s harder to restructure an existing application than it is to design with abstractions to begin with.

Having multiple implementations is particularly common in unit testing, as testing directly against an implementation of a class may require additional things like a database, web server, or external API access. For example, if a typical implementation of a class requires accessing a third-party API, it can be undesirable for many reasons (rate limits, credential sharing, performance) to trigger those same API calls during automated testing.

Instead, it’s best to make a mock implementation of the interface, matching the behavior of the external API with hard-coded or procedurally-generated responses, and handling data in a similar way to the actual API. Changing which implementation is used in the testing context can depend on how you’re running tests and which framework you use, but if the choice is between mocking API responses and not testing API integrations at all, it’s definitely best to test them.
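As a sketch of that pattern (the WeatherClient interface and both implementations are made up for illustration):

```php
<?php

// Interface for a hypothetical third-party weather API.
interface WeatherClient
{
    public function currentTemperature(string $city): float;
}

// The production implementation would make a real HTTP call (omitted here).
// In the test context, a fake with hard-coded responses stands in for it:
class FakeWeatherClient implements WeatherClient
{
    /** @var array<string, float> */
    private array $temperatures;

    public function __construct(array $temperatures)
    {
        $this->temperatures = $temperatures;
    }

    public function currentTemperature(string $city): float
    {
        return $this->temperatures[$city] ?? 0.0;
    }
}

// The code under test depends only on the abstraction, never on the
// concrete API client, so swapping implementations requires no changes.
function isFreezing(WeatherClient $client, string $city): bool
{
    return $client->currentTemperature($city) <= 0.0;
}

$client = new FakeWeatherClient(['Oslo' => -4.0, 'Dubai' => 31.5]);
var_dump(isFreezing($client, 'Oslo'));  // bool(true)
var_dump(isFreezing($client, 'Dubai')); // bool(false)
```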

Games on Linux

July 21, 2021

It’s been a while since I’ve tried to play games on Linux; I ended up back on Windows for my only gaming PC a few years ago, largely because of Oculus’s lack of Linux support for VR. With Windows 11 doing some… interesting things, I think it’s time I try again.

Steam’s built-in Proton compatibility layer can be enabled for all non-native games in Settings > Steam Play > Advanced, and basically just does its best to run whatever you want. It mostly just works.

My usual Linux PC

I only have one dedicated Linux PC that’s not a server right now: a relatively low-end HP EliteDesk with an Intel Core i5-6500T and HD 530 graphics. It’s not suitable for modern 3D games, for sure, but perfectly usable for something like Terraria, Minecraft, or Celeste.

Native games via Steam

  • Celeste has a native Linux build that works perfectly, apart from the Steam overlay which doesn’t render any text/images.
  • Psychonauts mostly works. Switching to fullscreen broke HDMI audio, but fixed mouse capture issues in windowed mode.
  • Stardew Valley works great natively. Drops to 40 FPS at 4K, but this PC isn’t really meant for that anyway. Modding will probably be a bit more complicated on Linux than Windows but definitely still possible.

Playing via Proton (Steam’s custom Wine)

  • A Hat in Time took quite a while to process shaders and install dependencies on first launch. Fully saturated 3 CPU threads for maybe 20 minutes processing shaders, which is skippable but probably ideal to let run. I also have 7 GiB of Workshop mods installed, which probably doesn’t help stability or performance, but after a while it did launch the game. Since this is running integrated graphics, it gave me a warning, but running it at 720p low settings gets a somewhat usable 30-60 FPS (very inconsistent, lots of stutters). No major input mapping/delay issues like Psychonauts had, even in windowed mode.
  • Sonic Mania runs perfectly at 4K fullscreen/windowed, solid 60 FPS. Turning on screen filters drops the framerate a bit at 4K. No noticeable input lag or glitchiness.
  • Psychonauts under Proton has much better input handling, but still breaks mouse input when switching apps/resolutions. It also has a persistent “important but non-fatal problem” error text rendered all the time, with no additional detail. Other than that, it seems to run perfectly, with fewer issues and better performance than the native build.

An actual gaming PC

I also have a proper gaming PC running Windows 10. It’s actually mostly used for YouTube.

3D via Proton, but with an actual GPU this time

  • A Hat in Time - This runs very well on my gaming PC via Proton. There are some occasional hitches when it loads in new assets for the first time, presumably because it’s transpiling things to Vulkan or something, but once you’ve been on a map for a bit, it runs fairly smooth. It’s not quite the Windows experience, but it’s very close. Easily gets >60 FPS at 4K on this PC.
  • Skyrim - Works perfectly via Proton. Loads fast, runs on maxed out settings, mods work (with some effort), it’s great.

Non-Steam games

  • GOG doesn’t have an official client, and the most popular third-party client doesn’t recognize Proton as a usable Wine install yet. Since GOG doesn’t use DRM though, most games work well under Proton when added as “non-Steam” games in the Steam UI, and should work when launched manually under Proton via the command-line. Not a perfect experience, but very usable.
 has native Linux support for its client, but the client limits downloads to games with Linux builds available. I tried a few of those and they work great. Windows builds can still be downloaded by accessing the Itch site in a browser, and adding them to Steam or running them manually should work for most simpler engines. Ren’Py and GameMaker-based games (both of which offer native Linux support as a build target to the devs) work basically perfectly under Proton.


For me to ever fully leave Windows behind, I’ll have to have decent VR performance. Luckily Valve has official support for the Index on Linux, so I’ll be trying that out with a variety of games.

  • SteamVR itself works okay. It detected my Index hardware, which was left connected the same way it’s been on Windows, and walked through the initial room setup without any issues. Once in the headset, a few problems started:
    1. The base stations, which I had configured to turn off automatically on the Windows setup, did not automatically turn on and had to be manually power cycled.
    2. The SteamVR UI dropped lots of frames and seemed to not properly match the headset motion sometimes. This could be due to running at 120 Hz, and may have worked better at 90 Hz, but it’s an Index and I’m not using a $1000 headset at lower than its supported specs.
    3. The audio was unusable on the latest Nvidia driver. It “worked” in that it detected the device over DisplayPort, and played back audio to it, but it sounded like it was coming over a phone line that was being repeatedly used to short a high-voltage AC circuit. Adjusting settings a bit completely broke playback on the device until a reboot, with no improvements.
  • Beat Saber via Proton was finicky to get to launch, as it involved clicking “OK” in a dialog box that’s shown in a window that is only visible in the SteamVR overlay, and didn’t auto-open (I had to manually enter the SteamVR menu while waiting for the game to load). Once it launched, it seemed to work basically perfectly, which was very surprising. Sadly, because of the audio not working at the OS level, I was unable to actually play it.

And that ended my attempt at VR on Linux. If the weird UI lag and the audio driver problem were fixed, I could see myself actually moving to Linux for VR games, which is honestly not something I thought would be this close to usable. I’m particularly impressed that the connection between Beat Saber running under Wine/Proton and the SteamVR runtime worked perfectly out of the box, which clearly shows that this is something Valve has been focused on.

GitHub Copilot

July 08, 2021

Microsoft’s acquisition of GitHub was going to change software development in many ways. This wasn’t one I expected.

Since the introduction of Visual Studio Code, it’s been clear that Microsoft is actually serious about leveraging the Open Source community in furthering its development software. While many were opposed to the acquisition of GitHub (myself included), I feel like overall it’s been a good thing for them.

GitHub was struggling financially, despite having a massive userbase and a solid feature set, and Microsoft definitely solved that. The introduction of free private repositories for both individuals and organizations is great, as long as you trust Microsoft with access to your source code, and would never have happened before the acquisition.

GitHub Codespaces is another thing Microsoft introduced that I doubt anyone else could’ve brought. It’s fantastic: you get instant access to a hosted Azure VM running Linux with your repository, and you can install basically whatever you want on it. I’ve used it with Node apps (including this blog!) with no trouble at all, and even PHP apps with a MySQL server work great. I figured Codespaces was going to be the new flagship feature with all of the focus, because it’s already so great. But apparently Microsoft had something more ambitious in mind.

Enter Copilot

GitHub describes Copilot as “Your AI pair programmer”, which sounds both incredible and unrealistic. It’s currently available as a preview to a limited group of people, and works as an autocomplete extension for VS Code. I’ve only been using it for a day so far, but it already shows a lot of promise.

Over the years, many developers have wondered if there was a point where “AI” could replace them, or at least dramatically reduce the need for their skill set. While this definitely doesn’t remove the need for a developer, it’s genuinely impressive how well it can seemingly understand and complete a codebase.

In my day job, I primarily work in Laravel, Vue, and Magento. I also do a decent amount of work in C#, but most of that isn’t .NET Core, so I can’t do it in VS Code. Copilot was trained on most, if not all, of the public repositories on GitHub, so it understands many languages and frameworks, with varying degrees of accuracy.


In Magento, I’ve had Copilot auto-generate entire framework-specific blocks of code from only a comment in an existing function, with enough accuracy that they were usable as-is, once I injected the necessary dependencies (something Magento is notoriously messy with, and something that’s easy to miss as a human developer too). As an example, I needed to set a cookie from a controller. In Magento, this is done with the CookieManager.

In the controller’s main function, I wrote this comment:

// Set customer cookie value

And Copilot gave me this:

  ->set('customer_id', $tokenModel->getCustomerId());

This is completely valid code that works in the context. It isn’t the recommended way of instantiating the CookieManager class, with dependency injection in the class constructor being preferred, but people often ignore that recommendation, which is presumably why it knows about using the Object Manager directly in the first place.

The $tokenModel is a model instance that’s available in this context, and it does, in fact, return the relevant customer ID with the ->getCustomerId() method.

It could easily have just recommended PHP’s native setcookie() or some other popular framework, but it understood that I was in a Magento project and used framework-specific code from Magento. Even using ^Enter to show 10 different recommendations for that code returned 10 distinct Magento-specific solutions, each with a different way of using the framework and the local variables, solving slightly different problems in slightly different ways. All of them were accurate according to the initial “Set customer cookie value” comment prompt.


In a Vue app, I had an issue where a critical part of a user flow relied on a popup triggered by window.open in an async callback. My goal was to just redirect to the destination if the popup was blocked, since in this case that is a much better experience than nothing happening. I started by writing a new function:

function isPopupBlocked(

Which Copilot autocompleted to:

function isPopupBlocked(popupWindow) {
  return popupWindow.document.body === null;
}

This works for some, but not all, browsers. Some throw an exception instead, so I wrapped the call in a try/catch block:

try {
  const w = window.open(url);
} catch (e) {


I started typing an if statement after the call, and it gave me this:

const w = window.open(url);
if (w.document.body === null) {
  window.location = url;
}

Then I moved to the catch block, and it auto-filled the same redirect, which was exactly what I wanted:

try {
  const w = window.open(url);
  if (w.document.body === null) {
    window.location = url;
  }
} catch (e) {
  window.location = url;
}

All of this is fairly simple, and all things I could quickly do myself, but it saved a lot of repetition and time usually spent looking for other implementations and documentation.

I hope that when Copilot eventually leaves preview it isn’t prohibitively expensive, as it could easily end up requiring a volume license for corporations or that kind of thing. I already feel like it’s a helpful tool that I’d love to use on all of my projects going forward, and I’m excited to see what comes next.

I don't understand Windows

June 26, 2021

I don’t understand Windows anymore. I don’t understand what Microsoft is doing with it at all, and I’m not sure they do either. Since Windows 7, the operating system has changed in dramatic ways every few years, often rolling back changes and releasing features users vocally hate. They’ve been slowly losing the battle against Chrome OS, and even macOS and Linux, particularly in the consumer space.

I doubt Microsoft will ever truly lose their hold on the enterprise workstation market, but many companies have moved to running Linux-based operating systems for the majority of their servers, thanks to the lower up-front costs and generally better stability and server software support. Google’s incredible success with Chromebooks has dramatically changed the landscape for entry-level notebooks, a space where Windows was the only real option in the past, and many users in need of higher-end systems have ended up with Macs recently, in part due to the inconsistent experiences with Windows 10 and general PC quality issues. Consumer PCs have long suffered from preinstalled bloatware, poorly matched hardware, terrible trackpads, and bad battery life, and some users (myself included) have lost interest in a traditional PC notebook.

I’ve recently gone as far as considering moving my last main Windows PC over to Linux, as the last thing really holding me to Windows has, for a long time, been gaming. I don’t really play many games, but when I do, I want them to run well, particularly in VR. Valve has official support for the Index on Linux, and enough games now work well under Proton that I may permanently leave Windows behind at some point. I’ve loved my experience with Linux as a desktop OS in recent years, including running Arch Linux as my only operating system on several PCs, and my new Apple Silicon MacBook has solidified the decision to not bother with non-Apple notebooks going forward.

All of this is a bit weird though. My first real use of Windows was in the XP days, though I did use 95 and 98 for a while before that on my parents’ PCs. XP has an incredible reputation, for good reason. It was user-friendly, attractive, fast, reliable, and had great forwards and backwards compatibility. Many users continued to run XP long after its EOL, and I still keep an XP VM and an airgapped XP netbook around, mostly for nostalgic reasons.

My next computer and first laptop, a Sony VAIO VGN-NR120E (I still remember that model number off the top of my head for some reason), shipped with Windows Vista. While Vista was generally regarded as a terrible operating system, I actually loved it. It was beautiful, still the best looking software I’ve ever used. It wasn’t exactly fast, but it wasn’t generally unstable for me. My device had solid drivers, and the vast majority of the software I needed worked completely fine with no compatibility issues. The only place I really had trouble was using LAN features like file sharing with PCs still running XP, but that wasn’t something I did often. Vista seemed like a necessary step to bring Windows into a newer generation, despite its apparent issues.

Windows 7 may be loved even more than XP. While I did like 7, it honestly didn’t feel all that different from Vista for me. It was slightly uglier, with a less opinionated look and unnecessarily large UI elements for the limited display sizes of the day, but did a lot of things generally better. UAC was changed to be less intrusive by default, Start was updated with a built-in search (still the best search Windows has ever natively had), and the UI had a bit more simplicity to it, with less clutter. I never got into the combined taskbar buttons, and still use them separated with labels and a small taskbar size, all the way from Windows 7 to 10. Hopefully that’s still an option on 11, assuming I ever actually use it.

Windows 8 was a weird one. I actually really liked it overall, despite the obviously terrible UI choices. It was very fast, very stable, and brought a lot of really nice new features to the desktop experience. Sadly all of that was overshadowed by Microsoft clearly overestimating the adoption of Windows 8 tablets. The full-screen “Metro” UI was terrible, even on tablets, relying on unintuitive gestures, and the complete removal of many key UI features, including of course the infamous removal of the Start button. Microsoft quickly followed it up with Windows 8.1, which included some general improvements to the UI, and eventually the stupidly-named Windows 8.1 Update 1, which finally brought back the Start button.

Windows 10 has a Start button. People like that, so the general public considers it better than Windows 8. It has a lot of other good things too. It launched with a much-improved (though I personally still hate it) Start menu, a new feature-rich Task Manager, improvements to Explorer, Microsoft Edge (the old one that’s gone now but was generally pretty good), more window snapping options, a redesigned PC Settings that didn’t have 90% of what was in Control Panel, DirectX 12, and a bunch of other neat things.

It was supposedly “the last Windows”. With a general pattern of having two major updates per year, Ubuntu-style, Microsoft has been steadily adding features and changing things up on Windows 10 since its release in 2015. This has included excellent things like the Xbox Game Overlay, GPU information in Task Manager, the Windows Subsystem for Linux, new Hyper-V features, and the fantastic Chromium-based Microsoft Edge.

Sadly, both the RTM version and these periodic feature updates have also brought a lot of features no one wanted. Start search including Bing results, which are hard-coded to launch in Edge instead of the user’s default browser, large amounts of forced telemetry and online-only features, the weird news/weather widget that everyone immediately disabled upon launch in 21H1, the constant redesigns of PC Settings, none of which have been complete or stable… the list goes on.

As an aside, I find it incredibly silly that Canonical has never broken their Ubuntu release cycle, while Microsoft has screwed theirs up so many times now that they’ve stopped naming releases after months and switched to a generic “20H2”-style build number. They even once rolled back a release and didn’t push it out again until the next year.

What’s weird with all of this is that none of it seems to address what Windows users most vocally request and complain about. Microsoft’s own PowerToys includes many of the things users have asked for, including PowerToys Run, which is exactly what Start search should be (and what it used to be before Windows 10), which shows just how inconsistent Microsoft’s internal direction is. My personal issues with Windows lie primarily with the aggressive use of user data and the forced-online features that have no reason to be online (like Start search). I also find it incredibly ugly, with easily the least-consistent visual style of any operating system Microsoft has released (if you disregard the full-screen vs. desktop disconnect in 8.x).

Windows 11 doesn’t seem to address any of these common issues. It is much prettier at a first glance, but time will tell if there’s any actual consistency to the redesign, or if we’ll just have yet another visual style applied to a handful of first-party applications. The centered taskbar is just weird, and honestly the whole UI feels like a shameless (and poorly-implemented) ripoff of macOS Big Sur. I don’t even like Big Sur.

There are so many incredible teams at Microsoft building such incredible things, and it’s just sad to see Windows struggle. I really hope Windows 11 is good, but the forced requirements of using a Microsoft account and Secure Boot feel like too much lock-in for me, despite the fact that I currently use both of those things on all my Windows PCs. I love so much of what Microsoft has been doing lately, and I’m going into the next generation of Windows with some hope, but with my current opinion of Windows 10, I really don’t know if I’ll like it.

I wrote this whole thing on battery power on my MacBook Air in VS Code, while playing music and occasionally referencing some things in Safari, and I’m still at 100% battery. Apple Silicon is amazing.

It's purple now.

June 22, 2021

The blog is purple now. With a bit of yellow. I’ve probably spent too much time on the Playdate landing page lately or something. Or maybe I’m channeling some Sailor Moon vibes. I don’t know. It’s purple now.

I’ll add more colors later, I like colors.