Social Network Performance

I’ve been working on a somewhat unusual social network lately, and I wanted to see how it stacks up against the big ones. Here’s a simple breakdown of the HTTP requests on each site:

Facebook

  • 308 requests, 6.1 MB
  • 36 CSS files, 232 KB
  • 85 JS files, 1.7 MB
  • 161 images, 507 KB
  • 0 webfonts
  • 2 IFRAMEs

Twitter

  • 65 requests, 2.7 MB
  • 6 CSS files, 127 KB
  • 4 JS files, 440 KB
  • 45 images, 2.0 MB
  • 1 webfont, 23.9 KB (Rosetta Icons)
  • 1 IFRAME

Google+

  • 224 requests, 4.8 MB
  • 13 CSS files, 79.9 KB
  • 23 JS files, 1.4 MB
  • 84 images, 1.7 MB
  • 6 webfonts, 42.6 KB (Various weights of Roboto)
  • 13 IFRAMEs

All of these were loaded in Chrome 45 on a desktop PC, with uBlock and Privacy Badger enabled (because everyone really should have both installed). Twitter is definitely the smallest, with a very fast perceived loading speed thanks to the small number of CSS files. All of the networks delay loading images until the rest of the interface is in place. I was surprised to see Google+ using webfonts for its content, but since they’re the same fonts used across many Google sites, chances are you already have Roboto cached from a previous pageload.

For comparison, here’s my current test site’s requests:

  • 20 requests, 779 KB
  • 1 CSS file, 21.5 KB
  • 1 JS file, 45.9 KB
  • 14 images, 577 KB
  • 3 webfonts, 132 KB (higher than I would like)

Apart from the webfonts, this loads very, very quickly. The webfont slowness is what prompted me to look into what the other networks were doing, and it was good to see that they weren’t requiring nearly as much data for their webfonts. In WebKit browsers on a slow connection, webfonts this large can keep the content from rendering for a good 5-6 seconds, which is definitely not usable. The largest of the three fonts is, surprisingly, my custom icon font, which I definitely need to optimize before production; I’ll likely strip out the many icons I’m not actually using.
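
Subsetting is the obvious way to do that. As a rough sketch (assuming the icons live in the Unicode Private Use Area and that fontTools’ pyftsubset is available; the code points below are placeholders), something like this keeps only the glyphs actually in use and converts the result to WOFF:

pip install fonttools
pyftsubset icons.ttf --unicodes="U+E600-E60F" --flavor=woff --output-file=icons.subset.woff

Subsetting plus WOFF compression usually cuts an icon font down to a small fraction of its original size.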

The only real conclusion I can draw from this is that Twitter is the only one of these companies that knows what it’s doing. Google+ is heavier, but since most of its assets are shared with other Google pages and likely already cached, that’s probably fine; Facebook needing 1.7 MB of JavaScript is just scary.

Pidgin with Google Apps

I love using Pidgin with our chat server at work. It’s a really nice, clean IM client (apart from account management, which is a mess). Setting it up with Google Apps is slightly trickier than usual, though.

  1. If you use 2-Factor Authentication (never a bad idea), you’ll need to generate an app password before you can use the account with Pidgin or other generic mail/chat clients.
  2. Pick Other for the app name, and name it “Pidgin” or whatever else you want. Copy the password it generates for you.

In Pidgin’s Buddy List window, go to Accounts > Manage Accounts (Ctrl+A), click Add…, choose XMPP as the protocol, and enter the following details in the Basic tab:

  • Username: the prefix of your email address (e.g. “alan” from “alan@phpizza.com”)
  • Domain: the domain name of your email address (e.g. “phpizza.com”)
  • Password: your account password, or the app password generated in step 1 if you use 2-factor authentication.

In the Advanced tab, set “Connect server” to talk.google.com.

That’s it! You should be able to use Pidgin with your Google Apps account now.

MySQL SSL on Ubuntu 12.04

Ubuntu 12.04’s included libssl is incompatible with the default MySQL version it provides. This isn’t a guide to fixing that, but a warning not to try. Just use 14.04 LTS.

If you did set up SSL on a 12.04 server, you’ll likely run into issues exporting your database. If you get errors like these, just disable SSL.

root@db:/# innobackupex --user=root --password=MyPass /home/backup/rep-transfer
150821 13:54:35  innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'root'  (using password: YES).
innobackupex: Error: Failed to connect to MySQL server: DBI connect(';mysql_read_default_group=xtrabackup','root',...) failed: SSL connection error: error:00000001:lib(0):func(0):reason(1) at /usr/bin/innobackupex line 2949

root@db:/# mydumper --database=db_name --user=root --password=MyPass --host=localhost
** (mydumper:28499): CRITICAL **: Error connecting to database: SSL connection error: error:00000001:lib(0):func(0):reason(1)

Commenting out ssl-ca, ssl-cert, ssl-key and ssl-cipher lines in your /etc/mysql/my.cnf file and restarting the service with sudo service mysql restart should allow you to export again.
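
For reference, the relevant block of /etc/mysql/my.cnf ends up looking something like this (the paths and cipher here are only examples; keep whatever values your existing config already has):

[mysqld]
# Disabled until this server is on 14.04 with a compatible libssl
# ssl-ca     = /etc/mysql/ca-cert.pem
# ssl-cert   = /etc/mysql/server-cert.pem
# ssl-key    = /etc/mysql/server-key.pem
# ssl-cipher = DHE-RSA-AES256-SHA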

Windows 10

I really like Windows 10. I’m just not going to use it for a while. I knew the July release date was far too early for a truly stable, solid OS, but I had high hopes anyway. I upgraded the day it was released, after running the Developer, Tech, and Insider previews on my secondary PC, and at first everything went smoothly. Well, everything except the video drivers, the mouse driver, default program settings, and 80% of my games; all of those failed miserably.

Nvidia has put out a few driver updates since the initial OS release, but I still can’t get more than 30 minutes into GTA before the drivers take down the OS and I have to force a reboot. It’s a bit annoying to spend $650 on a GTX 980 and not be able to use the card. My Razer drivers worked about 70% of the time, but the initial startup time (already horrible; if you can, buy a non-Synapse 2.0 Razer mouse, because the new software is garbage) was much worse than on 8.1.

Luckily, Microsoft left in a simple, 3-5 minute process to downgrade back to Windows 8.1. The PC Settings app has an option under Recovery, in the updates section, to revert to your previous version, and at least for me it worked wonderfully. Unlike the several-hour upgrade from 8.1 to 10, the downgrade took under 5 minutes, and everything has worked perfectly since.

I’ll likely upgrade to Windows 10 again at some point, but for now I’ll stick with an OS that doesn’t die on me several times a day.

Bridged Networking on Ubuntu 14.04

It’s often necessary to set up a bridged network on VM hosts, but the documentation for Ubuntu has gotten a bit dated. After much trial and error, here’s what worked on my datacenter-hosted VM server:

First, run sudo apt-get install bridge-utils, if the package is not already installed.

Next, update your /etc/network/interfaces file to include a br0 adapter, moving any IP configuration off of eth0. This is the complete configuration I’m running:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address 192.99.15.40
    netmask 255.255.255.0
    network 192.99.15.0
    broadcast 192.99.15.255
    gateway 192.99.15.254
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

You can also safely add an iface eth0 inet6 section for IPv6 networking, without it interfering with the bridged adapter configuration.
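
For example, a static IPv6 stanza would look something like this (the addresses are placeholders from the documentation range; substitute whatever your provider assigned):

iface eth0 inet6 static
    address 2001:db8::40
    netmask 64
    gateway 2001:db8::1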

Finally, restart your networking services, but not in the usual way. Since your primary interface is now br0, run sudo ifdown eth0 && sudo ifup br0. Assuming your configuration is correct, this shouldn’t interrupt any open SSH connections. Once br0 is up, you can bind IPs to it from within VMs by pointing each VM at the br0 device as a bridged adapter. Static IP assignments inside VMs should work fine as long as the IP is associated with your host machine.
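
To double-check that the bridge came up correctly, brctl (from bridge-utils) and ip will confirm that eth0 joined br0 and that br0 holds the static address:

brctl show
ip addr show br0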

This setup should work on any Debian-based OS, and may work on other Linux distributions as well. I’ll likely replace my Ubuntu host with a Debian 8 setup soon, and I’ll update this post when I do.

Partially sourced from the KVM/Networking article on Ubuntu’s community site.