
Home Environment

Hardware

HP Proliant Gen8 Microserver
Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz
12GB ECC RAM
1x256GB Samsung 840 PRO SSD for operating system volumes
2x1TB HDD for data storage
2x1Gb Ethernet Ports

Operating System

This rig is acting as a virtualisation host, running Arch Linux as dom0 under Xen 4.5.0.

The plan was to build a home server powerful enough to run a decent number of virtual machines on top of the hypervisor, with different virtual environments to suit different purposes (media server, home theater, dev environment, network / VPN gateway).

Maintaining them as independent virtual machines gives you the benefits you’d expect: one physical server provides as many virtual servers as you need. Overprovisioning lets you allocate more CPU and memory than you actually have, and scale each guest up and down dynamically to match the workload running at that particular time. Transcoding? Give the majority of CPU time and memory to the media server, for instance.
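As a sketch of what that dynamic reallocation looks like in practice with xl (the guest names here are assumptions):

```shell
# shrink the idle dev guest and grow the media server before a transcode
xl mem-set dev-server 1024        # balloon the dev guest down to 1GB
xl mem-set media-server 6144      # give the media server 6GB
# bias the credit scheduler toward the media server (default weight is 256)
xl sched-credit -d media-server -w 512
```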

Storage

Tying in nicely with the choice to go down the hypervisor route is the use of LVM (Logical Volume Manager). This is essentially virtualisation of your physical storage: it lets you pool your physical disks, which don’t have to be identical in any sense, into volume groups, and carve logical volumes out of them. These volumes can sit on a single physical disk or be spread across many, and can be moved, resized, etc. at will.
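The provisioning steps look roughly like this (device and volume names are assumptions; these commands need root and will destroy data on the target disks):

```shell
pvcreate /dev/sdb /dev/sdc               # mark the disks as LVM physical volumes
vgcreate vg_data /dev/sdb /dev/sdc       # pool them into a single volume group
lvcreate -L 20G -n guest1-root vg_data   # carve out a logical volume for a guest
lvextend -L +10G vg_data/guest1-root     # grow it later, at will
pvmove /dev/sdb                          # or migrate its extents off one disk entirely
```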

Physically I’ve got 3 drives in this server: a 256GB Samsung 840 Pro SSD and 2x1TB standard SATA HDDs.

The SSD was designated for the operating systems’ root partitions, to keep them as performant as possible. The HDDs are carved up between 1TB of data storage and 1TB for snapshots to back up the virtual machines, plus non-performance-critical generic storage (swap space, local storage).

Operating System(s)


Dom0 Hypervisor
Arch Linux running Xen 4.5.0
Running as lean as possible, with 512MB of allocated RAM and a 40GB root partition.
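Capping dom0’s memory is done on the Xen command line rather than inside the OS; with GRUB2 that’s a sketch along these lines (the exact file and variable can vary by distro):

```
# /etc/default/grub -- cap and pin dom0's memory so guests can't balloon into it
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=512M,max:512M"
```

followed by regenerating the config with grub-mkconfig -o /boot/grub/grub.cfg.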

Guest #1 Media Center (HVM)
Windows 8.1 running as a media center
* PCI passthrough of HD6850 video card for 3d acceleration
* USB passthrough of wireless keyboard
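For reference, an xl guest config for this kind of setup might look like the sketch below; the PCI device IDs, volume path, bridge name and USB vendor:product id are all assumptions, and the video card must already be bound to xen-pciback:

```
# /etc/xen/media-center.cfg
builder = "hvm"
name    = "media-center"
memory  = 4096
vcpus   = 2
disk    = [ "phy:/dev/vg_data/media-center,hda,w" ]
vif     = [ "bridge=xenbr0" ]
# the HD6850 and its HDMI audio function, as listed by lspci
pci     = [ "01:00.0", "01:00.1" ]
# the wireless keyboard's USB receiver, by vendor:product id
usbdevice = [ "host:046d:c52b" ]
```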

Guest #2 Media Server (HVM)
Ubuntu server 15.04 running as a media server
* SickRage for triggering downloads of new episodes of TV shows
* CouchPotato for doing the same thing with movies
* Sabnzbd for downloading from newsgroups
* Transmission for downloading from torrent sites
* Plex Media Server for indexing media and making it accessible outside of the LAN (transcoding when needed)

Guest #3 (Still to build) Dev Server (HVM)

Arch server running as a development environment / host for docker containers
* Home automation server will run here (servicing voice commands)

Guest #4 (Still to build) VPN/Network Management Server (HVM)
Arch server running OpenVPN to allow connections in to home LAN.
* Network monitoring (nethogs, iftop, tcpdump etc.)
* Quality of service: cap BitTorrent and newsgroup traffic at a fixed share of the bandwidth, so each client doesn’t have to be manually configured with its own limit, and prioritise interactive traffic like Skype/SSH/OpenVPN
* Default gateway / firewall: all LAN traffic will enter via eth1 and leave via eth0 to the router.
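The gateway role boils down to something like the following sketch (interface names and rates are assumptions, and the tc rules are deliberately simplistic):

```shell
# turn the guest into a NAT router for the LAN
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# simple HTB shaping on the uplink: interactive traffic gets priority,
# everything else (torrents, newsgroups) falls into the capped default class
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 18mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 12mbit ceil 18mbit prio 0
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 6mbit ceil 18mbit prio 1
# steer SSH and OpenVPN into the priority class
tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 22 0xffff flowid 1:10
tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 1194 0xffff flowid 1:10
```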

Services

SickRage
CouchPotato
Sabnzbd
Transmission

Creating an Intelligent Home Part 1

So my favourite project of 2014 has definitely been a series of APIs that run on my home network.

This has allowed me to issue voice commands through an Android app, using Android’s voice recognition APIs, and have a system on my network action those commands (like “Download the latest Game of Thrones”) so the result automagically appears on my home media centre in the living room (running XBMC). This could be drastically extended, though, and that’s the purpose of this series of blog posts: planning how I’m going to create a smarter home, one that lets me monitor what’s going on inside (webcams, sensors), control the temperature, set up events to happen when I get home, and so on.

I want to move away from a set of disparate “hacks” and turn it into a standard piece of software that can run on commodity hardware, and has a nice user interface. Ideally it will run on a wide range of hardware, from a Raspberry Pi at the bottom end to a PC, MicroServer or blade.

Technical parts ahead!


VMware vSphere client download URL

Each and every time I want to manage my ESXi server from a different machine (laptop, work laptop, home PC) I have to find and download the Windows vSphere Client. You should be able to browse to the ESXi host and download it from there, but does it work? Does it hell.

So here (for my own convenience and that of anyone else that’s reading) are the download URLs.

Disclaimer: shamelessly borrowed from Chris Hall’s blog

vSphere v4.1

– VMware vSphere Client v4.1 : VMware-viclient-all-4.1.0-258902.exe
– VMware vSphere Client v4.1 Update 1 : VMware-viclient-all-4.1.0-345043.exe
– VMware vSphere Client v4.1 Update 2 : VMware-viclient-all-4.1.0-491557.exe
– VMware vSphere Client v4.1 Update 3 : VMware-viclient-all-4.1.0-799345.exe

vSphere v5.0

– VMware vSphere Client v5.0 : VMware-viclient-all-5.0.0-455964.exe
– VMware vSphere Client v5.0 Update 1 : VMware-viclient-all-5.0.0-623373.exe

vSphere v5.1

– VMware vSphere Client v5.1 : VMware-viclient-all-5.1.0-786111.exe
– VMware vSphere Client 5.1.0a : VMware-viclient-all-5.1.0-860230.exe
– VMware vSphere Client 5.1.0b : VMware-viclient-all-5.1.0-941893.exe
– VMware vSphere Client 5.1 Update 1 : VMware-viclient-all-5.1.0-1064113.exe

A quick and easy way to deploy from git with post-receive hooks

I’ve recently moved my main website over to a custom framework and wanted deployments to the repository’s release branch to be seamless. I could use Jenkins, but that would probably be overkill.

To accomplish this, I wrote a quick post-receive script that git calls each time commits are pushed to my repository. If the branch name matches ‘release’, the commit is extracted into a release folder under the webroot. The webroot directory (in reality a symbolic link) is then updated to point to this new release.

Using symbolic links has the advantage of not leaving your website in an inconsistent state while files are deleted and re-extracted; switching the link is effectively an atomic operation.
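A toy demonstration of the switch, in a scratch directory:

```shell
# set up two fake release directories and point the 'webroot' link at the first
rm -rf /tmp/demo
mkdir -p /tmp/demo/releases/v1 /tmp/demo/releases/v2
ln -s /tmp/demo/releases/v1 /tmp/demo/www

# switch to v2: -n treats the existing link as a plain file, so it is
# replaced rather than followed into the old release directory
ln -nsf /tmp/demo/releases/v2 /tmp/demo/www

readlink /tmp/demo/www    # /tmp/demo/releases/v2
```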

Here’s the script. Put it in your git repository under the hooks subdirectory, in a file named ‘post-receive’, and make it executable.
The old revision, new revision and branch name are handily passed to this script :)

Edit: One thing to keep an eye out for: if you are running PHP5-FPM, when the symbolic link is switched to the new location, FPM seems to carry on serving your scripts from the old location (probably PHP’s realpath cache still resolving the old symlink target). I would love to figure this out properly, but a workaround is to call /etc/init.d/php5-fpm restart as the final step in the post-receive script (not ideal!!)

#!/bin/bash
#
# A post-receive hook that takes any updates pushed to the 'release' branch
# and creates a release directory for the new version under the webroot.
# The live site is then symlinked to this new release directory.
#
# git feeds post-receive one "oldrev newrev refname" line per pushed ref
# on standard input (unlike the update hook, which gets arguments).

# this is the root of the website (a symlink to a release directory)
webroot=/var/www/danielbyrne.net/www

while read oldrev newrev refname
do
    # strip "refs/heads/" to get the plain branch name
    branch=${refname#refs/heads/}

    if [ "$branch" = "release" ]
    then
        # create a release directory to extract files into
        target=/var/www/danielbyrne.net/releases/$newrev
        mkdir -p "$target"

        echo "Making target directory: $target"

        # create an archive of the revision that was just pushed
        /usr/bin/git archive "$newrev" --format zip --output "$target/deploy.zip"

        echo "unzipping archive..."

        # extract the archive
        unzip -o -q "$target/deploy.zip" -d "$target"

        echo "removing deployment archive"

        # remove the archive file
        rm "$target/deploy.zip"

        echo "switching symbolic link to $target"

        # now switch the live site to point to the new release
        # (-n replaces the existing symlink instead of following it)
        ln -nsf "$target" "$webroot"

        echo "done"
    fi
done

MySQL import / export fun

Exporting Data

If you run the command:

SELECT * FROM TestTable INTO OUTFILE 'outfile.txt'

and receive the rather cryptic error message ‘PERMISSION DENIED’..

Keep in mind, you need to specify an explicit path to export the data to, e.g. ‘/tmp/outfile.txt’

SELECT ... INTO OUTFILE also requires the FILE privilege to be granted to the currently executing user.

Importing Data
Similarly, when importing data using LOAD DATA INFILE (even with an explicit path) you get the error message

ERROR 29 (HY000): File '/tmp/infile.txt' not found (Errcode: 13)

MySQL server may be denied access to most of the filesystem depending on your OS setup (including /tmp), so try using

LOAD DATA LOCAL INFILE '/tmp/infile.txt' ...

which makes the MySQL client read the file and send its contents to the server, so the read happens with the filesystem permissions of the user running the client.
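Putting the two halves together, a hedged sketch (user, database and table names are made up; both commands need a running server):

```shell
# the export path is written by the *server*, so grant FILE and use a
# directory mysqld can write to
mysql -u root -p -e "GRANT FILE ON *.* TO 'appuser'@'localhost'; FLUSH PRIVILEGES;"
mysql -u appuser -p mydb -e "SELECT * FROM TestTable INTO OUTFILE '/tmp/outfile.txt'"

# the import with LOCAL is read by the *client*, sidestepping server-side
# filesystem restrictions (local_infile must be enabled on both ends)
mysql -u appuser -p --local-infile=1 mydb \
    -e "LOAD DATA LOCAL INFILE '/tmp/infile.txt' INTO TABLE TestTable"
```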

#mysqlfun

Migrated to nginx, php-fpm and APC.

My VPS is a little underpowered, and checking the amount of free memory I was, shall we say, a little surprised at just how much Apache thought it needed for the number of visitors this domain brings in.

The combination of nginx and php-fpm is astoundingly lightweight on memory usage… with nginx and a separate process manager for PHP (php-fpm) instead of mod_php, I now have a few hundred MB to play with. Not just that, but the requests per second my server can now handle is through the roof.

Using a phpinfo page as a test, I’m now managing to serve 3,000 requests per second at a maximum of 3ms per page.
Even the beastly WordPress is coming in at 1,628 requests per second with a helping hand from the APC byte-code cache. Breezy.
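For anyone wanting to reproduce a rough version of the benchmark, ApacheBench is the obvious tool (the URL and request counts here are made up):

```shell
# 10,000 requests, 100 concurrent, against a phpinfo() test page
ab -n 10000 -c 100 http://localhost/info.php
```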

Adding a custom resource type to the Zend Framework autoloader

I’m working on a new project using the latest Zend Framework. I’ve got a modular application and decided that I wanted to hold my module-specific logic classes in their own ‘logic’ subdirectory.

Setting up class to folder mappings is done inside ‘Zend/Application/Module/Autoloader.php’ in the function ‘initDefaultResourceTypes’. In here you’ve got your standard mappings like ‘Form’ => ‘forms’, ‘Model’ => ‘models’ etc.

You can either override this class with your own implementation, or add your own resource types by putting the following code in your application or module bootstrap:

class MyApp_Bootstrap extends Zend_Application_Module_Bootstrap
{
 
    /**
     * Add some custom resource types to the resource loader
     *
     * @return void
     */
    protected function _initLoaderResource()
    {
        $this->getResourceLoader()->addResourceTypes(array(
            'logic' => array(
                'namespace' => 'Logic',
                'path'      => 'logic'
            )
        ));
    }
 
}

This will make the autoloader look inside the logic subdirectory for any classes in the ‘MyApp_Logic’ namespace.

Eclipse proxy authentication problems

So… I spent a good half hour today trying to figure out why Eclipse wouldn’t connect to the Android SDK update site.

Proxy host/port… check.
Proxy authentication… check.
The site exists… check.

No wait, last time I was in there I was sure I’d clicked ‘save’… so why isn’t it saving my username and password? I’ve had this problem before when I first started at Orange; it took me a good day or two to get my environment fully set up.

After filling it in again, going around the same circle for a little while, and monkeying around with the secure storage section, I came across this:

Adding this line to your eclipse.ini will solve the issue:
-Dorg.eclipse.ecf.provider.filetransfer.excludeContributors=org.eclipse.ecf.provider.filetransfer.httpclient

In a few words, this flag tells Eclipse to stop using the ECF httpclient-based file transfer provider and fall back to the JRE’s built-in transport, which picks up the system proxy settings (the ones your browsers, e.g. Internet Explorer or Firefox, use). You’ll still need to enter your proxy settings again, one final time…

With thanks to ‘Comments after the EOF’:
http://cateof.wordpress.com/2010/01/15/eclipse-galileo-proxy-problem-workaround-solution/