MySQL Backups

MySQL is pretty nice for a free, open-source RDBMS. Before doing any kind of management work, you should have a .my.cnf file in your home directory: put your username and password for localhost there, then remove read permissions for everyone but yourself. This file supplies default options to the MySQL command-line tools, making them much easier to work with and saving you from repeatedly typing your password or accidentally letting it end up in a shell history file.
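
As a concrete sketch (myuser and mypassword are placeholders, not real credentials), the file can be created and locked down like this:

```shell
# Create ~/.my.cnf with client credentials (placeholder values) and make it
# readable only by the owner.
cat > ~/.my.cnf <<'EOF'
[client]
user=myuser
password=mypassword
EOF
chmod 600 ~/.my.cnf
```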


Flexible backups with mysqldump

The mysqldump tool is fairly straightforward, but you’re likely not using it to its full potential.

To make working with your export easier (making structure adjustments and such), it’s often useful to export the schema and data separately. A --no-data dump will include table schemas and view declarations, while a --no-create-info backup will only include table data.

mysqldump --no-data dbname | gzip > dbname-schema.sql.gz
mysqldump --no-create-info dbname | gzip > dbname.sql.gz

If you plan on changing your column ordering or adding columns in your schema file, the inserts in your data backup will no longer import correctly. To work around this, use the --complete-insert flag, which includes column names in each INSERT, ensuring rows restore properly as long as all the backed-up columns still exist in the new table.

mysqldump --complete-insert --no-create-info dbname | gzip > dbname.sql.gz

Restoring views to a new server can fail if the view’s DEFINER is not a user on the new system. If you’re going to be importing the database on a different system with potentially different users, you can use grep to filter out the DEFINER rows, ensuring views import without errors.

mysqldump --no-data dbname | grep -v "50013 DEFINER" | gzip > dbname-schema.sql.gz
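
To see what the filter is matching, here’s a sample DEFINER comment line (illustrative, not actual output from your database) run through the same grep:

```shell
# The DEFINER clause mysqldump emits for views lives in a /*!50013 ... */
# comment; grep -v drops just those lines and passes everything else through.
printf '%s\n' \
  'CREATE TABLE `t` (`id` int);' \
  '/*!50013 DEFINER=`olduser`@`localhost` SQL SECURITY DEFINER */' \
  | grep -v "50013 DEFINER"
# -> CREATE TABLE `t` (`id` int);
```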

I pretty much always pipe mysqldump through gzip since there’s no good reason to keep an uncompressed export sitting around. Restoring gzipped backups is very simple to do in-place with the help of zcat.

zcat dbname-schema.sql.gz | mysql dbname
zcat dbname.sql.gz | mysql dbname
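
The compress-and-restore round trip is easy to sanity-check with throwaway data and no database at all (the paths here are temporary files, not real backups):

```shell
# Round-trip a fake dump through gzip and zcat and confirm nothing changed.
tmp=$(mktemp -d)
printf 'INSERT INTO t VALUES (1);\n' > "$tmp/dump.sql"
gzip -c "$tmp/dump.sql" > "$tmp/dump.sql.gz"
zcat "$tmp/dump.sql.gz" > "$tmp/restored.sql"
diff "$tmp/dump.sql" "$tmp/restored.sql" && echo 'round-trip OK'
```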

If you’ve got any stored procedures or functions in your database, they will only be backed up if you use the --routines flag (triggers are included by default, and scheduled events need --events). I prefer to keep these in a separate file.

mysqldump --routines --no-data --no-create-info dbname | gzip > dbname-routines.sql.gz

If you’re using BLOB columns with unfiltered data, you may run into issues restoring backups of that data. To work around this, you can store those in hexadecimal which, while larger by default, shouldn’t take up too much more space once gzipped. Enable hex encoding with the --hex-blob flag.

mysqldump --hex-blob --no-create-info dbname | gzip > dbname.sql.gz
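
If you’re curious what hex encoding does to binary data, here’s the general idea using standard tools (--hex-blob itself writes 0x... literals inside the INSERT statements):

```shell
# Hex-encode a string containing a NUL byte; every byte becomes two hex digits.
printf 'binary\0data' | od -An -tx1 | tr -d ' \n'
# -> 62696e6172790064617461
```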

If your database consists entirely of InnoDB tables, you have the option of using a transactional backup, which ensures data integrity without requiring table-level locks! This can be enabled with the --single-transaction flag, but keep in mind any non-InnoDB tables will not be locked, and may be backed up inconsistently.

mysqldump --single-transaction --no-create-info dbname | gzip > dbname.sql.gz

Multi-threaded backups with mydumper

If all you’re looking for is a crazy-fast backup of a large database’s structure and data, you should be using the amazing mydumper tool, which uses a multi-threaded approach, backing up each table in a separate thread.

It’s fairly straightforward to use. Let’s do a gzip-compressed single-database backup.

mydumper -c -B dbname -d export_dir

Restoring a mydumper backup is just as simple:

myloader -d export_dir

Mydumper also has powerful options like regex table name matching and row-count file splitting. Read the man pages for details on everything mydumper and mysqldump can do.

Expand a live ext4 filesystem

So maybe you let your OS installer configure your partitions for you because you’re lazy like me. And maybe you realized it created a 16 GB swap partition on your tiny SSD. And maybe you wanted / to use that 16 GB.

Luckily, Linux is awesome.

root@box:~# lsblk
sda      8:0    0 111.8G  0 disk
├─sda1   8:1    0   512M  0 part /boot/efi
├─sda2   8:2    0  97.3G  0 part /
└─sda3   8:3    0  15.9G  0 part [SWAP]

/dev/sda2 is the partition we want to extend. We’ll use gdisk.

First, list the partitions to be sure we’re starting in the right place. Make a note of the start sector for the partition you want to extend, as we’ll be deleting it and creating a new one in its place.

root@box:~# gdisk /dev/sda
Command (? for help): p

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1050623   512.0 MiB   EF00  EFI System Partition
   2         1050624       190093083    97.3 GiB   8300  Linux filesystem
   3       190093084       234441614    15.9 GiB   8200  Linux swap

Next we’ll delete the swap and root partitions.

Command (? for help): d
Partition number (1-3): 3
Command (? for help): d
Partition number (1-2): 2

Create a new partition. Use the start sector from old partition as the first sector on the new one, leaving the default values for the last sector and partition type.

Command (? for help): n
Partition number (2-128, default 2): 2
First sector (34-234441614, default = 1050624) or {+-}size{KMGTP}: 1050624
Last sector (1050624-234441614, default = 234441614) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):

Print the new partition table to make sure everything looks right.

Command (? for help): p

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1050623   512.0 MiB   EF00  EFI System Partition
   2         1050624       234441614   111.3 GiB   8300  Linux filesystem
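
As a quick sanity check on the reported size, you can compute it yourself from the sector range (sectors here are 512 bytes):

```shell
# (end - start + 1) sectors * 512 bytes, truncated to whole GiB.
sectors=$(( 234441614 - 1050624 + 1 ))
echo $(( sectors * 512 / 1024 / 1024 / 1024 ))
# -> 111
```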

If it’s all good, write the new partition table to disk!

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y

Your new partition table is written, but the system won’t recognize it everywhere yet. We’ll run partprobe to fix this.

root@box:~# partprobe

Now your new partition table should be live.

root@box:~# lsblk
sda      8:0    0 111.8G  0 disk
├─sda1   8:1    0   512M  0 part /boot/efi
└─sda2   8:2    0 111.3G  0 part /

Finally, we’ll resize the filesystem to fill the new partition.

resize2fs /dev/sda2

Arch on UEFI

UEFI boot is weird, mostly because of backwards compatibility. Here’s a simple guide to setting up UEFI on GPT, assuming you already know how to do a typical Arch install.


Gdisk supports GPT, so we’ll use that to partition our system disk.

gdisk /dev/sda

Press o to create a new GPT partition table, overwriting any existing contents.

Then, use n to create new partitions:

  • Root partition (8300), full disk size minus 1.5 GB
  • EFI partition (EF00), 1024 MB

Type p to output your new partition layout, and w to write the changes.

Finally, refresh the disks.

partprobe

Create filesystems for the new partitions:

mkfs.ext4 /dev/sda1
mkfs.fat -F32 /dev/sda2

Mount the new filesystems:

mount /dev/sda1 /mnt
mkdir /mnt/boot
mount /dev/sda2 /mnt/boot


Next, mount efivarfs if it isn’t already mounted:

mount -t efivarfs efivarfs /sys/firmware/efi/efivars

We’ll use systemd-boot, since it’s included with systemd.

bootctl install

We’ll need to write a few new config files in /boot/loader:

vim /boot/loader/loader.conf

default arch
timeout 3
editor  0

vim /boot/loader/entries/arch.conf

title   Arch Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=/dev/sda1 rw

Finishing Up

exit # ^D is always superior :P
umount -R /mnt


Over the last few years, I’ve worked on an open source project management system called Phproject. It’s on GitHub under a GPL license, and you should use it.

Apparently I did a good enough job of it that someone tried to sell it. A Slovakian team going by the name KreaHive rebranded it as “Workflow” and posted it to CodeCanyon. It was removed from the site quite quickly, it seems, as I never saw it active, but I found the whole thing quite funny. After a bit more research, I found that they have a public revision history that perfectly matches my latest updates on RevCTRL, and they even made it onto a business software review site, Capterra.

Workflow on Code Canyon

Then it got even better. I found a supposedly “nulled” version of Workflow on a site called Guest Post. Sadly the download links just redirect you back to the CodeCanyon site; I would’ve loved to see what code changes were made in a nulled version of a stolen version of my open-source project. At least they linked to a neat marketing graphic for it.


So I just discovered xinput. It’s neat.

Say you want to reduce the sensitivity on your mouse because it’s one of the stupid Mamba 2012s that are only usable with the buggy Windows software.

xinput list # lists input devices - note the ID of your device; we'll use '8'
xinput list-props 8 # shows the device's adjustable properties
xinput set-prop 8 "Device Accel Constant Deceleration" 3 # higher = slower pointer

Arch Web Server

When a LAMP server just isn’t enough, you may as well go all-out with nginx, HHVM, and MariaDB on Arch Linux.

Start by installing and enabling the services:

sudo pacman -S nginx hhvm mariadb
sudo systemctl enable nginx
sudo systemctl enable hhvm
sudo systemctl enable mysqld

Run the initial setup for MariaDB:

mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql

Create a new file /etc/nginx/hhvm.conf:

location ~ \.php$ {
  fastcgi_pass   127.0.0.1:9000; # HHVM's FastCGI listener; adjust if yours uses a different port
  fastcgi_index  index.php;
  fastcgi_param  SCRIPT_FILENAME $request_filename;
  include        fastcgi.conf;
}

Add a line to any nginx servers that need PHP:

include hhvm.conf;

After your MariaDB and nginx are configured, start your services:

sudo systemctl start nginx
sudo systemctl start hhvm
sudo systemctl start mysqld

Now you should be good to go!

LAMP Server Setup

Sometimes you have a fancy VPS when you really just want a web server. This guide goes through the process of setting up a basic LAMP server with WordPress on Ubuntu.

LAMP Setup

To start out, you’ll need to install some packages. This step will ask you to create a database password; pick something secure that you can remember.

sudo apt-get install apache2 php5 php5-gd php5-mysql mysql-server-5.6 wget

After installing your packages, enable the rewrite module for pretty URLs in WordPress and other apps that use it.

sudo a2enmod rewrite

You should then edit your Apache configuration file. You’ll need to change the AllowOverride None line in the <Directory /var/www> block from None to All in order for .htaccess files to work properly.

sudo nano /etc/apache2/apache2.conf

Finally, restart Apache to apply your new configuration.

sudo service apache2 restart

WordPress Setup

Installing WordPress follows a pretty typical process that should work for most web apps and custom sites. We’ll start by creating a database. Run the command below, entering your database password chosen during the package installation.

mysql -uroot -p

Once in the MySQL command line, we’ll create a new database and user. Replace “Passw0rd” with a secure password. You’ll use this username (“wordpress”) and password to install WordPress later.

CREATE DATABASE `wordpress`;
CREATE USER 'wordpress'@'localhost' IDENTIFIED BY 'Passw0rd';
GRANT ALL ON `wordpress`.* TO 'wordpress'@'localhost';
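
If you need inspiration for the password to use in place of “Passw0rd”, one common approach (assuming the openssl CLI is available) is:

```shell
# Generate 16 random bytes, base64-encoded: a 24-character password.
openssl rand -base64 16
```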

After creating a database, we’ll download and unzip WordPress.

cd /var/www
sudo wget https://wordpress.org/latest.zip
sudo unzip latest.zip
sudo chown -R www-data: wordpress
sudo chmod -R 755 wordpress

We need to tell Apache to run our WordPress code, so next we’ll create a new VirtualHost.

sudo nano /etc/apache2/sites-available/wordpress.conf

Enter the following text in the new file, replacing example.com with your domain (you should point the domain to the server’s IP address as well):

<VirtualHost *:80>
  ServerName example.com
  DocumentRoot /var/www/wordpress
</VirtualHost>

If you want additional domains or subdomains to point to this website as well, you can add a ServerAlias line after ServerName, like this:

ServerAlias www.example.com

Now that your VirtualHost is created, enable it and reload Apache.

sudo a2ensite wordpress
sudo service apache2 reload

At this point, if your domain is set up correctly, you should be able to browse to your website and install WordPress. You’ll need the database username and password you set up earlier, and the database hostname should be localhost.

Once installed, you should go into WordPress’s admin panel and change the Permalink style (Settings > Permalinks) to something other than the default to take advantage of Apache’s URL rewrites.

Arch Setup

I’ve always loved the concept of Arch Linux, with its nothing-by-default setup and intentional lack of user-friendly tools, but I’ve run into installation issues that’ve prevented me from really using it enough to get familiar with it.

This time around I got it working perfectly, so I decided I’d write a little guide on what I did. This is mostly a reference for me, but it should prove useful to anyone trying Arch for the first time. This guide assumes familiarity with Linux basics like sudo, fstab, and gparted, and you should probably read Arch’s Beginner’s Guide.

Getting Started

Start by booting the live CD.

Partition setup

Note: this guide only covers the setup for an MBR partition table, as booting from GPT requires more system-specific setup.

If you prefer a more graphical tool you can use the Gparted live CD to configure your partitions, then skip to the “Install base system” step. Note that the current version of the Gparted live CD won’t boot properly on VirtualBox without EFI enabled.

Locate the disk you want to install your system to with lsblk, then start cfdisk with that device.

cfdisk /dev/sda

In cfdisk, create a new partition table, then the partitions you would like, writing the changes when you’re finished.

Next, create a filesystem, substituting your new system partition and repeating for each partition you need to format.

mkfs.ext4 /dev/sda1

If a swap partition was created, activate it:

mkswap /dev/sda2
swapon /dev/sda2

Install base system

Mount partition

Start the installation by mounting your system partition, and any other non-swap partitions you created in /mnt.

mkdir -p /mnt
mount /dev/sda1 /mnt

Set up base packages and fstab

Move your preferred mirror to the top of the list.

vim /etc/pacman.d/mirrorlist

Install base packages and generate new fstab:

pacstrap -i /mnt base
genfstab -U -p /mnt >> /mnt/etc/fstab

Chroot into the new installation:

arch-chroot /mnt

Configure language

sed -i "s/#en_US.UTF-8/en_US.UTF-8/g" /etc/locale.gen
locale-gen
echo LANG=en_US.UTF-8 > /etc/locale.conf
. /etc/locale.conf
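
The sed line just uncomments the en_US.UTF-8 entry in locale.gen; you can see the effect on a scratch copy without touching /etc:

```shell
# Apply the same substitution to a temporary file and show the result.
tmp=$(mktemp)
echo '#en_US.UTF-8 UTF-8' > "$tmp"
sed -i "s/#en_US.UTF-8/en_US.UTF-8/g" "$tmp"
cat "$tmp"
# -> en_US.UTF-8 UTF-8
```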

Configure timezone

ln -s /usr/share/zoneinfo/America/Denver /etc/localtime
hwclock --systohc --utc

Configure network

Run ip link to list all network interfaces and enable DHCP on the one you want to use:

ip link
systemctl enable dhcpcd@eth0

Configure wireless (optional)

pacman -S wireless_tools wpa_supplicant wpa_actiond dialog
systemctl enable net-auto-wireless

Configure package manager

Open /etc/pacman.conf and check that the [core], [extra], and [community] lines are uncommented. If you’re on a 64-bit system (you should be), optionally uncomment the [multilib] lines for 32-bit compatibility.

After updating your pacman config, refresh the repository list:

pacman -Sy

Create a user

passwd # Set root password
useradd -m -g users -G wheel,storage,power -s /bin/bash alan # Create 'alan'
passwd alan # Set password for alan

Configure sudo

pacman -S sudo # Install sudo

Uncomment the %wheel line to allow your new user to use sudo:

EDITOR=nano visudo

Install bootloader

pacman -S grub-bios
grub-install --target=i386-pc --recheck /dev/sda
cp /usr/share/locale/en\@quot/LC_MESSAGES/grub.mo /boot/grub/locale/en.mo
grub-mkconfig -o /boot/grub/grub.cfg

Finish installation

exit # leave the chroot
umount /mnt
reboot

Desktop setup

# Xorg
pacman -S xorg-server xorg-xinit xorg-server-utils \
  xorg-twm xorg-xclock xterm

# Mesa (3D acceleration)
pacman -S mesa

# Drivers (only one needed)
pacman -S xf86-video-vesa # Vesa (general, works almost always)
pacman -S nvidia lib32-nvidia-utils # Nvidia

Desktop environment

Xfce4 + lightdm

pacman -S xfce4 xfce4-goodies lightdm lightdm-gtk-greeter
systemctl enable lightdm # Enable lightdm


Gnome

pacman -S gnome # Install Gnome
systemctl enable gdm # Enable gdm

KDE Plasma 5

pacman -S plasma kde-applications
systemctl enable sddm

After installing your preferred DE, reboot, and your system should be ready to go! If you decide to switch DEs, make sure you disable the display manager before uninstalling it; otherwise you’ll have to manually remove the display-manager.service symlink from /etc/systemd/system.

If you’re running Arch in VirtualBox, you’ll want to install the guest additions with pacman -S virtualbox-guest-utils.

For a basic overview of pacman and the Arch User Repository, see the Arch wiki and this gist.

Surface Pro 4

So I bought a Surface Pro 4 a little while ago. I change between loving it and hating it almost daily.

It has pretty impressive hardware, apart from the “cheap” $899-$1199 models only having 4 GB of RAM and a small, slow 128 GB SSD. The 128 GB Samsung NVMe SSD is painfully slow by modern standards, usually managing sequential writes of only 80-100 MB/s, in line with a 7200 RPM hard disk. On the plus side, the Core m3 is an impressive little CPU. Despite the low clock speed and two cores, it runs the OS and most software beautifully, even at the display’s high native resolution. An extra annoyance is that the keyboard cover (which is amazing, by the way) is an extra $130, and fairly essential to getting the most out of the Surface Pro 4. The on-screen keyboard isn’t bad, but it’s hardly usable for real work, especially in my field, where I need quick access to arrow keys (the Windows 10 keyboard only has left and right, no up and down) and special symbols. The keyboard is worth buying, but it really feels like it should just be included with a tablet this expensive, especially when Microsoft is branding it as a laptop replacement.

The capacitive touch screen is absolutely perfect, and the included upgraded Surface Pen is a breeze to use. My only complaint with the pen is that the initial pressure required to register a touch is higher than feels natural. When drawing, this can be a real issue, since very light strokes often aren’t registered, while a Wacom tablet picks up the same light strokes perfectly. I’ve only drawn a bit on it so far, but drawing was actually my original reason for buying it. I’ve been particularly impressed by the pen vs. finger detection, which lets you rest your hand on the screen while writing or drawing without registering accidental touches, while still letting fingers work the UI. The Core m3 stays fairly responsive even when drawing at very high resolutions in Photoshop, and runs OneNote’s pressure-sensitive drawing with no noticeable pen delay, making sketching and handwriting feel fantastic.

Sadly, Windows 10 is still not really stable. Everyone keeps telling me they haven’t had any issues with it. Maybe I’m just incredibly unlucky, but I’ve got five devices running Windows 10, and every one of them has some serious usability issue with the OS. Most noticeably on the Surface Pro 4, the lock screen sometimes gets stuck and won’t respond to touch or keyboard input, and, more annoyingly, the device fails to wake from sleep about one in ten times the power button is pressed.

Overall, I’m not sure whether I’d recommend the Surface Pro 4 to anyone. I absolutely love it a lot of the time, but the quality issues I’ve experienced feel like something you’d have on a $200 tablet, not a $900 one (plus the $130 for the keyboard cover). I look forward to seeing what Microsoft does with the future Surface line. There’s a long way to go before it’s perfect (or even worth the price, probably), but so far this thing is awesome.

Update (Dec 4):

The latest Windows Updates included a new display driver that has somehow prevented my Surface Pen from working, including crashing OneNote on startup even when the pen is off. Installing updates for Windows Defender somehow fixed this, not sure why. I’m really considering an iPad Pro, which isn’t really what I want at all.

Social Network Performance

I’ve been working on a somewhat unique social network lately, and I wanted to see how it matched up with the big ones. Here’s a simple breakdown of the HTTP requests on each site:


Facebook

  • 308 requests, 6.1 MB
  • 36 CSS files, 232 KB
  • 85 JS files, 1.7 MB
  • 161 images, 507 KB
  • 0 webfonts
  • 2 IFRAMEs


Twitter

  • 65 requests, 2.7 MB
  • 6 CSS files, 127 KB
  • 4 JS files, 440 KB
  • 45 images, 2.0 MB
  • 1 webfont, 23.9 KB (Rosetta Icons)
  • 1 IFRAME


Google+

  • 224 requests, 4.8 MB
  • 13 CSS files, 79.9 KB
  • 23 JS files, 1.4 MB
  • 84 images, 1.7 MB
  • 6 webfonts, 42.6 KB (Various weights of Roboto)
  • 13 IFRAMEs

All of these were loaded in Chrome 45 on a desktop PC, with uBlock and Privacy Badger enabled (because really, everyone should have both installed). Twitter is definitely the smallest, with a very fast perceived loading speed thanks to the small number of CSS files. All of the networks delay loading images until the rest of the interface is there. I was surprised to see Google+ using webfonts for their content, but since they’re the same fonts used throughout many Google sites, chances are you already have Roboto cached from a previous pageload.
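
For a rough feel of the numbers above, here’s the average transfer size per request for each site (treating MB as 1000 KB, so these are ballpark figures):

```shell
# Average KB per request: total transfer / request count.
awk 'BEGIN {
  printf "facebook %.1f\n", 6100 / 308
  printf "twitter  %.1f\n", 2700 / 65
  printf "google+  %.1f\n", 4800 / 224
}'
# -> facebook 19.8
#    twitter  41.5
#    google+  21.4
```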

For comparison, here’s my current test site’s requests:

  • 20 requests, 779 KB
  • 1 CSS file, 21.5 KB
  • 1 JS file, 45.9 KB
  • 14 images, 577 KB
  • 3 webfonts, 132 KB (higher than I would like)

Apart from the webfonts, this loads very, very quickly. The webfont slowness is what prompted me to look into what other networks were doing, and it was good to see that other sites weren’t requiring nearly as much data for their webfonts. In WebKit browsers on a slow connection, webfonts this big can keep the content from showing for a good 5-6 seconds, which is definitely not usable. The biggest offender is, surprisingly, my custom icon font, which I need to optimize before production; I’ll likely remove many of the icons from the set, since I don’t use most of them.

The only conclusion I can draw from this is that Twitter is the only company who knows what they’re doing. Google+ has good cached load times, which is probably fine since most of their assets are cached from other Google pages anyway, but Facebook needing 1.7 MB of JavaScript is just scary.