Using NuGet at Mailcloud

So what is NuGet, anyway?

Intro

When working with shared functionality across multiple .Net projects and team members, historically your options were limited to something like:

  1. Copy a dll containing the common functionality into your solution
  2. Register the dll into the GAC on whichever machine it needs to run
  3. Reference the project itself within your solution

There are several problems with these, such as:

  • ensuring all environments have the correct version of the dll as well as any dependencies already installed
  • tight dependencies between projects, potentially breaking several when the shared project is updated
  • trust – is this something you’re willing to install into a GAC if it’s from a 3rd party?
  • so many more, much more painful, bad bad things

So how can you get around this pain?

I’m glad you asked.

Treasure! Rubies, Gems, oh my.

ruby

The ruby language has had this problem solved for many, many years – since around 2004, in fact.

Using the gem command you could install a ruby package from a central location into your project, along with all dependencies, e.g.:

gem install rails --include-dependencies

This one would pull down rails as well as packages that rails itself depended on.

You could search for gems, update your project’s gems, remove old versions, and remove the gem from your project entirely; all with minimal friction. No more scouring the internets for information on what to download, where to get it from, how to install it, and then find out you need to repeat this for a dozen other dependent packages!

You use a .gemspec file to define the contents and meta data for your gem before pushing the gem to shared repository.

Pe(a)rls of Wisdom

cpan

Even ruby gems were born from a frustration that the ruby ecosystem wasn’t supported as well as Perl’s; Perl had CPAN (the Comprehensive Perl Archive Network) for over a DECADE before ruby gems appeared – it’s been up since 1995!

Nubular / Nu

Nubular

If Perl had CPAN since 1995, and ruby had gems since 2004, where is the .Net solution?

I’d spent many a project forgetting where I downloaded PostSharp from or RhinoMocks, and having to repeat the steps of discovery before I could even start development; leaving the IDE in order to browse online, download, unzip, copy, paste, before referencing within the IDE, finding there were missing dependencies, rinse, repeat.

Around mid-2010 Dru Sellers and gang (including Rob Reynolds aka @ferventcoder) built the fantastic “nu[bular]” project; this was itself a ruby gem, and could only be installed using ruby gems; i.e., to use nu you needed to install ruby and rubygems.


Side note: Rob was no stranger to the concept of .Net gems, and has since created the incredible Chocolatey – an apt-get-style package manager for installing applications, instead of just referencing packages within your code projects – which I’ve previously waxed non-lyrical about.

Once installed you were able to pull down and install .Net packages into your projects (again, these were actually just ruby gems). At the time of writing it still exists as a ruby gem and you can see the humble beginnings and subsequent death (or rather, fading away) of the project over on its homepage (this google group).

I used this when I first heard about it and found it to be extremely promising; the idea that you can centralise the package management for the .Net ecosystem was an extremely attractive proposition; unfortunately at the time I was working at a company where introducing new and exciting (especially open source) things was generally considered Scary™. However it still had some way to go.

NuPack

NuPack

In October 2010 Nu became Nu v2, at which point it became NuPack; The Epic Trinity of Microsoft awesomeness – namely Scott Guthrie, Scott Hanselman, and Phil Haack – together with Dave Ebbo, David Fowler, and the Nubular team took a mere matter of months to create the first fully open-sourced project central to an MS product (i.e., Visual Studio), and it was accepted into the ASP.Net open source gallery in October 2010.

It’s referred to as NuPack in the ASP.Net MVC 3 Beta release notes from Oct 6 2010, but it underwent a name change due to a conflict with an existing product: NUPACK, from Caltech.

NuGet! (finally)

nuget

There was a vote, and if you look through the issues listed against the project on CodePlex you can see some of the other suggestions.

(Notice how none of the names available in the original vote are “NuGet”..)

Finally we have NuGet! The associated CodePlex work item actually originally proposed “Nugget”, but that was changed to NuGet.

Okay already, so what IS NuGet?!

Essentially the same as a gem; an archive with associated metadata in a manifest file (.nuspec for nuget, .gemspec for gems). It’s blindingly simple in concept, but takes a crapload of effort and smarts to get everything working smoothly around that simplicity.
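The archive-plus-manifest idea is easy to demonstrate with everyday tools. Here’s a toy sketch of the concept – a payload bundled with a plain-text manifest using tar – not the real .nupkg or .gem formats (those are zip-based with XML/ruby metadata), just the shape of the idea:

```shell
#!/bin/sh
# Toy "package": a payload file plus a metadata manifest, bundled into one archive.
# Illustration only - real nupkg/gem formats differ.
mkdir -p pkg/lib
echo "compiled code goes here" > pkg/lib/MyLib.dll
cat > pkg/MyLib.manifest <<'EOF'
id: MyLib
version: 1.0.0
authors: Mailcloud
EOF

# Bundle payload + manifest into a single archive - that's the "package".
tar -czf MyLib.1.0.0.pkg.tar.gz -C pkg .

# A consumer can read the metadata without unpacking everything:
tar -xzOf MyLib.1.0.0.pkg.tar.gz ./MyLib.manifest
```

Everything a package manager does – dependency resolution, feeds, versioning – is built around that simple archive+manifest core.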

All of the details for creating a package are on the NuGet website.

Using NuGet at Mailcloud

We decided to use MyGet initially to kick off our own private nuget feed (though most likely we’ll migrate shortly to a self-hosted solution; I mean, look at how easy it is! Install-Package NuGet.Server, deploy, profit!)

The only slight complexity was allowing the private feed’s authentication to be saved with the package restore information; I’ll get on to this shortly as there are a couple of options.

Creating a package

Once you’ve created a project that you’d like to share across other projects, it’s simply a matter of opening a prompt in the directory where your csproj file lives and running:

nuget spec

to create the nuspec file ready for you to configure, which looks like this:

<?xml version="1.0"?>
<package >
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <title>$title$</title>
    <authors>$author$</authors>
    <owners>$author$</owners>
    <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
    <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
    <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>$description$</description>
    <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
    <copyright>Copyright 2014</copyright>
    <tags>Tag1 Tag2</tags>
  </metadata>
</package>

Fill in the blanks and then run:

nuget pack YourProject.csproj

to end up with a .nupkg file in your working directory.

.nupkg

As previously mentioned, this is just an archive. As such you can open it yourself in 7Zip or similar and find something like this:

.nupkg guts

Your compiled dll can be found in the lib dir.

Pushing to your package feed

If you’re using MyGet then you can upload your nupkg via the MyGet website directly into your feed.

If you like the command line, and I do like my command line, then you can use the nuget push command to do this for you:

nuget push MyPackage.1.0.0.nupkg <your api key> -Source https://www.myget.org/F/<your feed name>/api/v2/package

Once this has completed your package will be available at your feed, ready for referencing within your own projects.

Referencing your packages

If you’re using a feed that requires authentication then there are a couple of options.

Edit your NuGet sources (Options -> Package Manager -> Package Sources) and add in your main feed URL, e.g.

http://www.myget.org/F/<your feed name>/

If you do this against a private feed then an attempt to install a package pops up a windows auth prompt:

myget.auth

This will certainly work locally, but you may have problems when using a build server such as TeamCity or Visual Studio Online, due to the non-interactive authentication.

One solution to this is to actually include your password (in plain text – eep!) in your nuget.config file. To do this, right click your solution and select “Enable Package Restore”.

package restore

This will create a .nuget folder in your solution containing the nuget executable, a config file and a targets file. Initially the config file will be pretty bare. If you edit it and add in something similar to the following then your package restore will use the supplied credentials for the defined feeds:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <solution>
    <add key="disableSourceControlIntegration" value="true" />
  </solution>
  <packageSources>
    <clear />
    <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
    <add key="Microsoft and .NET" value="https://www.nuget.org/api/v2/curated-feeds/microsoftdotnet/" />
    <add key="MyFeed" value="https://www.myget.org/F/<feed name>/" />
  </packageSources>
  <disabledPackageSources />
  <packageSourceCredentials>
    <MyFeed>
      <add key="Username" value="myusername" />
      <add key="ClearTextPassword" value="mypassword" />
    </MyFeed>
  </packageSourceCredentials>
</configuration>

So we resupply the package sources (clearing them first, else you get duplicates), then add a packageSourceCredentials section with an element matching the name you gave your packageSource in the section above it.

Alternative Approach

Don’t like plain text passwords? Prefer auth tokens? Course ya do. Who doesn’t? In that case, another option is to use the secondary feed URL MyGet provides instead of the primary one, which contains your auth token (which can be rescinded at any time) and looks like:

https://www.myget.org/F/<your feed name>/auth/<auth token>/

Notice the extra “auth/blah-blah-blah” at the end of this version.

Summary

NuGet as a package manager solution is pretty slick. And the fact that it’s open sourced and can easily be self-hosted internally means it’s an obvious solution for managing those shared libraries within your project, personal or corporate.

Extra References

http://weblogs.asp.net/bsimser/archive/2010/10/06/unicorns-triple-rainbows-package-management-and-lasers.aspx

http://devlicio.us/blogs/rob_reynolds/archive/2010/09/21/the-evolution-of-package-management-for-net.aspx

Setting up an Ubuntu development VM: Scripted

Having seen this blog post about setting up a development Linux VM in a recent Morning Brew, I had to have a shot at doing it all in a script instead, since it looked like an awful lot of hard work to do it manually.

The post I read covers downloading and installing VirtualBox (which could be scripted also, using the amazing Chocolatey) and then installing Ubuntu, logging in to the VM, downloading and installing Chrome, SublimeText2, MongoDB, Robomongo, NodeJs, npm, nodemon, and mocha.

Since all of this can be handled via apt-get and a few other cunning configs, here’s my attempt using Vagrant. Firstly, vagrant init a directory, then paste the following into the Vagrantfile:

Vagrantfile

[bash]
Vagrant.configure(2) do |config|

config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"

end
[/bash]

Setup script

Now create a new file in the same dir as the Vagrantfile (since this directory is automatically configured as a shared folder, saving you ONE ENTIRE LINE OF CONFIGURATION), calling it something like set_me_up.sh. I apologise for the constant abuse of > /dev/null – I just liked having a clear screen sometimes:
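For anyone unfamiliar with that idiom: `> /dev/null` redirects stdout to the null device, so only stderr (i.e., actual errors) reaches the screen. A quick illustration:

```shell
#!/bin/sh
# stdout is discarded; stderr still gets through to the terminal.
echo "you will not see this" > /dev/null
echo "you WILL see this, because it went to stderr" >&2

# Proof that stdout really was swallowed:
quiet=$(echo "hidden" > /dev/null)
echo "captured stdout was: '$quiet'"
```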

[bash]#!/bin/sh

clear
echo "******************************************************************************"
echo "Don’t go anywhere – I’m going to need your input shortly.."
read -p "[Enter to continue]"

### Set up dependencies
# Configure sources & repos
echo "** Updating apt-get"
sudo apt-get update -y > /dev/null

echo "** Installing prerequisites"
sudo apt-get install libexpat1-dev libicu-dev git build-essential curl software-properties-common python-software-properties -y > /dev/null

### deal with interactive stuff first
## needs someone to hit "enter"
echo "** Adding a new repo ref – hit Enter"
sudo add-apt-repository ppa:webupd8team/sublime-text-2

echo "** Creating a new user; enter some details"
## needs someone to enter user details
sudo adduser developer

echo "******************************************************************************"
echo "OK! All done, now it’s the unattended stuff. Go make coffee. Bring me one too."
read -p "[Enter to continue]"

### Now the unattended stuff can kick off
# For mongo db – http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
echo "** More prerequisites for mongo and chrome"
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 > /dev/null
sudo sh -c 'echo "deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen" | sudo tee /etc/apt/sources.list.d/mongodb.list' > /dev/null
# For chrome – http://ubuntuforums.org/showthread.php?t=1351541
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

echo "** Updating apt-get again"
sudo apt-get update -y > /dev/null

## Go, go, gadget installations!
# chrome
echo "** Installing Chrome"
sudo apt-get install google-chrome-stable -y > /dev/null

# sublime-text
echo "** Installing sublimetext"
sudo apt-get install sublime-text -y > /dev/null

# mongo-db
echo "** Installing mongodb"
sudo apt-get install mongodb-10gen -y > /dev/null

# desktop!
echo "** Installing ubuntu-desktop"
sudo apt-get install ubuntu-desktop -y > /dev/null

# node – the right(?) way!
# http://www.joyent.com/blog/installing-node-and-npm
# https://gist.github.com/isaacs/579814

echo "** Installing node"
echo 'export "PATH=$HOME/local/bin:$PATH"' >> ~/.bashrc
. ~/.bashrc
mkdir ~/local
mkdir ~/node-latest-install
cd ~/node-latest-install
curl http://nodejs.org/dist/node-latest.tar.gz | tar xz --strip-components=1
./configure --prefix=~/local
make install

# other node goodies
sudo npm install nodemon > /dev/null
sudo npm install mocha > /dev/null

## shutdown message (need to start from VBox now we have a desktop env)
echo "******************************************************************************"
echo "**** All good – now quitting. Run *vagrant halt* then restart from VBox to go to desktop ****"
read -p "[Enter to shutdown]"
sudo shutdown 0
[/bash]

The gist is here, should you want to fork and edit it.

You can now open a prompt in that directory and run
[bash]
vagrant up && vagrant ssh
[/bash]
which will provision your VM and ssh into it. Once connected, just execute the script by running:
[bash]
. /vagrant/set_me_up.sh
[/bash]

(/vagrant is the shared directory created for you by default)

Nitty Gritty

Let’s break this up a bit. First up, I decided to group together all of the apt-get configuration so I didn’t need to keep calling apt-get update after each reconfiguration:

[bash]
# Configure sources & repos
echo "** Updating apt-get"
sudo apt-get update -y > /dev/null

echo "** Installing prerequisites"
sudo apt-get install libexpat1-dev libicu-dev git build-essential curl software-properties-common python-software-properties -y > /dev/null

### deal with interactive stuff first
## needs someone to hit "enter"
echo "** Adding a new repo ref – hit Enter"
sudo add-apt-repository ppa:webupd8team/sublime-text-2
[/bash]

Then I decided to set up a new user, since you will be left with either the vagrant user or a guest user once this script has completed, and the vagrant one doesn’t have a desktop/home nicely configured for it. So let’s create our own right now:

[bash]
echo "** Creating a new user; enter some details"
## needs someone to enter user details
sudo adduser developer

echo "******************************************************************************"
echo "OK! All done, now it’s the unattended stuff. Go make coffee. Bring me one too."
read -p "[Enter to continue]"
[/bash]

Ok, now the interactive stuff is done, let’s get down to the installation guts:

[bash]
### Now the unattended stuff can kick off
# For mongo db – http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
echo "** More prerequisites for mongo and chrome"
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 > /dev/null
sudo sh -c 'echo "deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen" | sudo tee /etc/apt/sources.list.d/mongodb.list' > /dev/null
# For chrome – http://ubuntuforums.org/showthread.php?t=1351541
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

echo "** Updating apt-get again"
sudo apt-get update -y > /dev/null
[/bash]

Notice the URLs in there referencing where I found out the details for each section.

The only reason these config sections are not at the top with the others is that they can take a WHILE, and I don’t want the user to have to wait too long before creating a user and being told they can go away. Now we’re all configured, let’s get installing!

[bash]
## Go, go, gadget installations!
# chrome
echo "** Installing Chrome"
sudo apt-get install google-chrome-stable -y > /dev/null

# sublime-text
echo "** Installing sublimetext"
sudo apt-get install sublime-text -y > /dev/null

# mongo-db
echo "** Installing mongodb"
sudo apt-get install mongodb-10gen -y > /dev/null

# desktop!
echo "** Installing ubuntu-desktop"
sudo apt-get install ubuntu-desktop -y > /dev/null
[/bash]

Pretty easy so far, right? ‘Course it is. Now let’s install nodejs on linux the – apparently – correct way. Well, it works better than compiling from source or apt-getting it.

[bash]
# node – the right(?) way!
# http://www.joyent.com/blog/installing-node-and-npm
# https://gist.github.com/isaacs/579814

echo "** Installing node"
echo 'export "PATH=$HOME/local/bin:$PATH"' >> ~/.bashrc
. ~/.bashrc
mkdir ~/local
mkdir ~/node-latest-install
cd ~/node-latest-install
curl http://nodejs.org/dist/node-latest.tar.gz | tar xz --strip-components=1
./configure --prefix=~/local
make install
[/bash]
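The `--strip-components=1` in that curl pipe drops the archive’s top-level directory on extraction, so the source lands directly in the current dir rather than nested under something like node-vX.Y.Z/. A self-contained demo with dummy files (not the real node tarball):

```shell
#!/bin/sh
# Build a tarball with a top-level directory, like node-latest.tar.gz has.
mkdir -p build/node-v0.10.0
echo "configure script" > build/node-v0.10.0/configure
tar -czf node-demo.tar.gz -C build node-v0.10.0

# Extract normally: the file ends up nested under node-v0.10.0/.
mkdir nested && tar -xzf node-demo.tar.gz -C nested

# Extract with --strip-components=1: the top-level dir is removed.
mkdir flat && tar -xzf node-demo.tar.gz -C flat --strip-components=1

ls nested/node-v0.10.0/configure flat/configure
```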

Now let’s finish up with a couple of nodey lovelies:
[bash]
# other node goodies
sudo npm install nodemon > /dev/null
sudo npm install mocha > /dev/null
[/bash]

All done! Then it’s just a case of vagrant halting the VM and restarting from Virtualbox (or edit the Vagrantfile to include a line about booting to GUI); you’ll be booted into an Ubuntu desktop login. Use the newly created user to log in and BEHOLD THE AWE.

Enough EPICNESS, now the FAIL…

Robomongo Fail 🙁

The original post also installs Robomongo for mongodb administration, but I just couldn’t get that running from a script. Booo! Here’s the script that should have worked; please have a crack and try to sort it out! Qt5 fails to install for me, which then causes everything else to bomb out.

[bash]
# robomongo
INSTALL_DIR=$HOME/opt
TEMP_DIR=$HOME/tmp

# doesn’t work
sudo apt-get install -y git qt5-default qt5-qmake scons cmake

# Get the source code from Git. Perform a shallow clone to reduce download time.
mkdir -p $TEMP_DIR
cd $TEMP_DIR
sudo git clone --depth 1 https://github.com/paralect/robomongo.git

# Compile the source.
sudo mkdir -p robomongo/target
cd robomongo/target
sudo cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$INSTALL_DIR
make
make install

# As of the time of this writing, the Robomongo makefile doesn’t actually
# install into the specified install prefix, so we have to install it manually.
mkdir -p $INSTALL_DIR
mv install $INSTALL_DIR/robomongo
mkdir -p $HOME/bin
ln -s $INSTALL_DIR/robomongo/bin/robomongo.sh $HOME/bin/robomongo

# Clean up.
rm -rf $TEMP_DIR/robomongo
[/bash]

Not only is there the gist, but the whole shebang is over on github too.

ENJOOOYYYYY!

Chef for Developers: part 4 – WordPress, Backups, & Restoring

I’m continuing with my plan to create a series of articles for learning Chef from a developer perspective.

Part #1 gave an intro to Chef, Chef Solo, Vagrant, and Virtualbox. I also created my first Ubuntu VM running Apache and serving up the default website.

Part #2 got into creating a cookbook of my own, and evolved it whilst introducing PHP into the mix.

Part #3 wired in MySql and refactored things a bit.

WordPress Restore – Attempt #1: Hack It Together

Now that we’ve got a generic LAMP VM it’s time to evolve it a bit. In this post I’ll cover adding wordpress to your VM via Chef, scripting a backup of your current wordpress site, and finally creating a carbon copy of that backup on your new wordpress VM.

I’m still focussing on using Chef Solo with Vagrant and VirtualBox for the time being; I’m learning to walk before running!

Kicking off

Create a new directory for working in and create a cookbooks subdirectory; you don’t need to prep the directory with a vagrant init as I’ll add in a couple of clever lines at the top of my new Vagrantfile to initialise it straight from a vagrant up.

Installing WordPress

As in the previous articles, just pull down the wordpress recipe from the opscode repo into your cookbooks directory:

[bash]cd cookbooks
git clone https://github.com/opscode-cookbooks/wordpress.git
[/bash]

Looking at the top of the WordPress default.rb file you can see which other cookbooks it depends on:

[bash]include_recipe "apache2"
include_recipe "mysql::server"
include_recipe "mysql::ruby"
include_recipe "php"
include_recipe "php::module_mysql"
include_recipe "apache2::mod_php5"
[/bash]

From the last post we know that MySql also depends on OpenSSL, and MySql::Ruby depends on build-essential. Go and get both of those in your cookbooks directory, as well as the others mentioned above:

[bash]git clone https://github.com/opscode-cookbooks/apache2.git
git clone https://github.com/opscode-cookbooks/mysql.git
git clone https://github.com/opscode-cookbooks/openssl.git
git clone https://github.com/opscode-cookbooks/build-essential.git
git clone https://github.com/opscode-cookbooks/php.git
[/bash]

Replace the default Vagrantfile with the one below to reference the wordpress cookbook, and configure the database, username, and password for wordpress to use; I’m basing this one on the Vagrantfile from my last post but have removed everything to do with the “mysite” cookbook:

Vagrantfile

[ruby]Vagrant.configure("2") do |config|
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
config.vm.network :forwarded_port, guest: 80, host: 8080

config.vm.provision :shell, :inline => "apt-get clean; apt-get update"

config.vm.provision :chef_solo do |chef|

chef.json = {
"mysql" => {
"server_root_password" => "myrootpwd",
"server_repl_password" => "myrootpwd",
"server_debian_password" => "myrootpwd"
},
"wordpress" => {
"db" => {
"database" => "wordpress",
"user" => "wordpress",
"password" => "mywppassword"
}
}
}

chef.cookbooks_path = ["cookbooks"]
chef.add_recipe "wordpress"
end
end
[/ruby]

The lines

[ruby] config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
[/ruby]

mean you can skip the vagrant init stage as we’re defining the same information here instead.

You don’t need to reference the dependent recipes directly, since the WordPress one references them already.

You also don’t need to disable the default site since the wordpress recipe does this anyway. As such, remove this from the json area:

[ruby] "apache" => {
"default_site_enabled" => false
},
[/ruby]

Note: An issue I’ve found with the current release of the WordPress cookbook

I had to comment out the last line of execution, which just displays a message to you saying:

[ruby]Navigate to 'http://#{server_fqdn}/wp-admin/install.php' to complete wordpress installation.
[/ruby]

For some reason the method “message” on “log” appears to be invalid. You don’t need it though, so if you get the same problem you can just comment it out yourself for now.

To do this, head to line 116 in cookbooks/wordpress/recipes/default.rb and add a # at the start, e.g.:

[ruby]log "wordpress_install_message" do
action :nothing
# message "Navigate to ‘http://#{server_fqdn}/wp-admin/install.php’ to complete wordpress installation"
end
[/ruby]

Give that a

[bash]vagrant up
[/bash]

Then browse to localhost:8080/wp-admin/install.php and you should see:

wordpress initial screen 8080

From here you could quite happily set up your wordpress site on a local VM, but I’m going to move on to the next phase in my cunning plan.

Restore a WordPress Backup

I’ve previously blogged about backing up a wordpress blog, the output of which was a gzipped tar of the entire wordpress directory and the wordpress database tables. I’m now going to restore it to this VM so that I have a functioning copy of my backed up blog.

I’d suggest you head over and read the backup post I link to above, or you can just use the resulting script:

backup_blog.sh

[bash]#!/bin/bash

# Set the date format, filename and the directories where your backup files will be placed and which directory will be archived.
NOW=$(date +"%Y-%m-%d-%H%M")
FILE="rposbowordpressrestoredemo.$NOW.tar"
BACKUP_DIR="/home/<user>/_backup"
WWW_DIR="/var/www"

# MySQL database credentials
DB_USER="root"
DB_PASS="myrootpwd"
DB_NAME="wordpress"
DB_FILE="rposbowordpressrestoredemo.$NOW.sql"

# dump the wordpress dbs
mysql -u$DB_USER -p$DB_PASS --skip-column-names -e "select table_name from information_schema.TABLES where TABLE_NAME like 'wp_%';" | xargs mysqldump --add-drop-table -u$DB_USER -p$DB_PASS $DB_NAME > $BACKUP_DIR/$DB_FILE

# archive the website files
tar -cvf $BACKUP_DIR/$FILE $WWW_DIR

# append the db backup to the archive
tar --append --file=$BACKUP_DIR/$FILE $BACKUP_DIR/$DB_FILE

# remove the db backup
rm $BACKUP_DIR/$DB_FILE

# compress the archive
gzip -9 $BACKUP_DIR/$FILE
[/bash]

That results in a gzipped tarball of the entire wordpress directory and the wordpress database dumped to a sql file, all saved in the directory specified at the top – BACKUP_DIR="/home/<user>/_backup"
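The archive-append-compress dance is worth understanding: tar can’t append to a compressed archive, which is why the script only gzips as the very last step. Here’s a miniature version of the same flow with dummy files (all names invented for the demo):

```shell
#!/bin/sh
# Miniature version of the backup flow: archive the site files, append the
# db dump, then compress. (tar --append only works on *uncompressed*
# archives, hence gzip must come last.)
NOW=$(date +"%Y-%m-%d-%H%M")
FILE="demo.$NOW.tar"
mkdir -p www && echo "<?php // site ?>" > www/index.php
echo "DROP TABLE IF EXISTS wp_posts;" > demo.sql

tar -cf "$FILE" www                    # archive the "website"
tar --append --file="$FILE" demo.sql   # append the "db dump"
rm demo.sql                            # tidy up the loose dump
gzip -9 "$FILE"                        # compress as the final step

# Both the site files and the dump are in the finished archive:
tar -tzf "$FILE.gz"
```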

First Restore Attempt – HACK-O-RAMA!

For the initial attempt I’m just going to brute-force it, to validate the actual importing and restoring of the backup. The steps are:

  1. copy an archive of the backup over to the VM (or in my case I’ll just set up a shared directory)
  2. uncompress the archive into a temp dir
  3. copy the wordpress files into a website directory
  4. import the mysql dump
  5. update some site specific items in mysql to enable local browsing

You can skip that last one if you want to just add some HOSTS entries redirecting calls for the actual backed-up wordpress site over to your VM.
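Those steps can be dry-run end to end with dummy files before pointing them at a real backup (every path here is invented for the sketch; the mysql import is stubbed out):

```shell
#!/bin/sh
# Dry run of the restore steps with a fake backup archive.
# 1. "copy" the archive over (here we just fabricate one)
mkdir -p source/var/www/wordpress
echo "<?php // wp ?>" > source/var/www/wordpress/index.php
echo "-- sql dump --" > source/backup.sql
tar -czf site-backup.tar.gz -C source .

# 2. uncompress the archive into a temp dir
mkdir -p tmp-restore
tar -xzf site-backup.tar.gz -C tmp-restore

# 3. copy the wordpress files into the website directory
mkdir -p site-root
cp -Rf tmp-restore/var/www/wordpress/* site-root/

# 4. the db dump is now sitting there ready to pipe into mysql
#    (mysql -u<user> -p<pass> wordpress < tmp-restore/backup.sql - skipped here)
ls site-root/index.php tmp-restore/backup.sql
```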

Prerequisite

Create a backup of a wordpress site using the script above (or similar) and download the archive to your host machine.

I’ve actually done this using another little vagrant box with a base wordpress install for you to create a quick blog to play around with backing up and restoring – repo is over on github.

For restoring

Since this is the HACK-O-RAMA version, just create a bash script in that same directory called restore_backup.sh into which you’ll be pasting the chunks of code from below to execute the restore.

We can then call this script from the Vagrantfile directly. Haaacckkyyyy…

Exposing the archive to the VM

I’m saving the wordpress archive in a directory called “blog_backup” which is a subdirectory of the project dir on the host machine; I’ll share that directory with the VM using this line somewhere in the Vagrantfile:

[ruby]config.vm.synced_folder "blog_backup/", "/var/blog_backup/"
[/ruby]

if you’re using Vagrant v1 then the syntax would be:

[ruby]config.vm.share_folder "blog", "/var/blog_backup/", "blog_backup/"
[/ruby]

Uncompress the archive into the VM

This can be done using the commands below, pasted into that restore_backup.sh

[bash]# pull in the backup to a temp dir
mkdir /tmp/restore

# untar and expand it
cd /tmp/restore
tar -zxvf /var/blog_backup/<yoursite>.*.tar.gz
[/bash]

Copy the wordpress files over

[bash]# copy the website files to the wordpress site root
sudo cp -Rf /tmp/restore/var/www/wordpress/* /var/www/wordpress/
[/bash]

Import the MySQL dump

[bash]# import the db
mysql -uroot -p<dbpassword> wordpress < /tmp/restore/home/<user>/_backup/<yoursite>.*.sql
[/bash]

Update some site-specific settings to enable browsing

Running these db updates will allow you to browse both the wordpress blog locally and also the admin pages:

[bash]# set the default site to localhost for testage
mysql -uroot -p<dbpassword> wordpress -e "UPDATE wp_options SET option_value='http://localhost:8080' WHERE wp_options.option_name='siteurl'"
mysql -uroot -p<dbpassword> wordpress -e "UPDATE wp_options SET option_value='http://localhost:8080' WHERE wp_options.option_name='home'"
[/bash]

Note: Pretty Permalinks

If you’re using pretty permalinks – i.e., robinosborne.co.uk/2013/07/02/chef-for-developers/ instead of http://robinosborne.co.uk/?p=1418 – then you’ll need to both install the apache2::mod_rewrite recipe and configure your .htaccess to allow mod_rewrite to do its thing. Create the .htaccess below to enable rewrites and save it in the same dir as your restore script.

.htaccess

[bash]<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
[/bash]

restore_backup.sh

[bash]# copy over the .htaccess to support mod_rewrite for pretty permalinks
sudo cp /var/blog_backup/.htaccess /var/www/wordpress/
sudo chmod 644 /var/www/wordpress/.htaccess
[/bash]
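The chmod 644 there matters: 6 (read/write) for the owner, 4 (read-only) for group and world, so Apache can read the .htaccess but nothing else can modify it. A quick check of what those octal digits produce:

```shell
#!/bin/sh
# 644 = owner read/write, group read-only, others read-only.
touch demo.htaccess
chmod 644 demo.htaccess
ls -l demo.htaccess   # permission string shows -rw-r--r--
```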

Also add this to your Vagrantfile:

[ruby]chef.add_recipe "apache2::mod_rewrite"
[/ruby]

The final set up and scripts

Bringing this all together we now have a backed up wordpress blog, restored and running as a local VM:

wordpress restore 1

The files needed to achieve this feat are:

Backup script

To be saved on your blog host, executed on demand, and the resulting archive file manually downloaded (probably SCPed). I have mine saved in a shared directory – /var/vagrant/blog_backup.sh:

blog_backup.sh

[bash]#!/bin/bash

# Set the date format, filename and the directories where your backup files will be placed and which directory will be archived.
NOW=$(date +"%Y-%m-%d-%H%M")
FILE="rposbowordpressrestoredemo.$NOW.tar"
BACKUP_DIR="/home/vagrant"
WWW_DIR="/var/www"

# MySQL database credentials
DB_USER="root"
DB_PASS="myrootpwd"
DB_NAME="wordpress"
DB_FILE="rposbowordpressrestoredemo.$NOW.sql"

# dump the wordpress dbs
mysql -u$DB_USER -p$DB_PASS --skip-column-names -e "select table_name from information_schema.TABLES where TABLE_NAME like 'wp_%';" | xargs mysqldump --add-drop-table -u$DB_USER -p$DB_PASS $DB_NAME > $BACKUP_DIR/$DB_FILE

# archive the website files
tar -cvf $BACKUP_DIR/$FILE $WWW_DIR

# append the db backup to the archive
tar --append --file=$BACKUP_DIR/$FILE $BACKUP_DIR/$DB_FILE

# remove the db backup
rm $BACKUP_DIR/$DB_FILE

# compress the archive
gzip -9 $BACKUP_DIR/$FILE
[/bash]

Restore script

To be saved in a directory on the host to be shared with the VM, along with your blog archive.

restore_backup.sh

[bash]# pull in the backup, untar and expand it, copy the website files, import the db
mkdir /tmp/restore
cd /tmp/restore
tar -zxvf /var/blog_backup/rposbowordpressrestoredemo.*.tar.gz
sudo cp -Rf /tmp/restore/var/www/wordpress/* /var/www/wordpress/
mysql -uroot -pmyrootpwd wordpress < /tmp/restore/home/vagrant/_backup/rposbowordpressrestoredemo.*.sql

# create the .htaccess to support mod_rewrite for pretty permalinks
sudo cp /var/blog_backup/.htaccess /var/www/wordpress/
sudo chmod 644 /var/www/wordpress/.htaccess

# set the default site to localhost for testage
mysql -uroot -pmyrootpwd wordpress -e "UPDATE wp_options SET option_value='http://localhost:8080' WHERE wp_options.option_name='siteurl'"
mysql -uroot -pmyrootpwd wordpress -e "UPDATE wp_options SET option_value='http://localhost:8080' WHERE wp_options.option_name='home'"
[/bash]

.htaccess

[bash]<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
[/bash]

Vagrantfile

[ruby]Vagrant.configure("2") do |config|
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
config.vm.network :forwarded_port, guest: 80, host: 8080
config.vm.synced_folder "blog_backup/", "/var/blog_backup/"

config.vm.provision :shell, :inline => "apt-get clean; apt-get update"

config.vm.provision :chef_solo do |chef|

chef.json = {
"mysql" => {
"server_root_password" => "myrootpwd",
"server_repl_password" => "myrootpwd",
"server_debian_password" => "myrootpwd"
},
"wordpress" => {
"db" => {
"database" => "wordpress",
"user" => "wordpress",
"password" => "mywppassword"
}
}
}

chef.cookbooks_path = ["cookbooks"]
chef.add_recipe "wordpress"
chef.add_recipe "apache2::mod_rewrite"
end

# hacky first attempt at restoring the blog from a script on a share
config.vm.provision :shell, :path => "blog_backup/restore_backup.sh"
end
[/ruby]

myrootpwd

The password used to set up the mysql instance; it needs to be consistent in your Vagrantfile and your restore_backup.sh script

mywppassword

If you can’t remember your current wordpress user’s password, look in the /wp-config.php file in the backed-up archive.

Go get it

I’ve created a fully working setup for your perusal over on github. This repo, combined with the base wordpress install one will give you a couple of fully functional VMs to play with.

If you pull down the restore repo you’ll just need to run setup_cookbooks.sh to pull down the prerequisite cookbooks, then edit the wordpress default recipe to comment out that damned message line.

Once that’s all done, just run

[bash]vagrant up[/bash]

and watch everything tick over until you get your prompt back. At this point you can open a browser and hit http://localhost:8080/ to see:

restored blog from github

Next up

I’ll be trying to move all of this hacky cleverness into a Chef recipe or two. Stay tuned.

Chef For Developers part 3

I’m continuing with my plan to create a series of articles for learning Chef from a developer perspective.

Part #1 gave an intro to Chef, Chef Solo, Vagrant, and VirtualBox. I also created my first Ubuntu VM running Apache and serving up the default website.

Part #2 got into creating a cookbook of my own, and evolved it whilst introducing PHP into the mix.

In this article I’ll get MySQL installed and integrated with PHP, and tidy up my own recipe.

Adding a database into the mix

1. Getting MySQL

Download the mysql cookbook from the Opscode github repo into your “cookbooks” subdirectory:

mysql

[bash]git clone https://github.com/opscode-cookbooks/mysql.git
[/bash]

Since this will be a server install instead of a client one you’ll also need to get OpenSSL:

openssl

[bash]git clone https://github.com/opscode-cookbooks/openssl.git
[/bash]

Now use Chef Solo to configure it by including the recipe reference and the mysql password in the Vagrantfile I’ve been using in the previous articles:

Vagrantfile

[ruby highlight="14-17,26"]Vagrant.configure("2") do |config|
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
config.vm.network :forwarded_port, guest: 80, host: 8080

config.vm.provision :shell, :inline => "apt-get clean; apt-get update"

config.vm.provision :chef_solo do |chef|

chef.json = {
"apache" => {
"default_site_enabled" => false
},
"mysql" => {
"server_root_password" => "blahblah",
"server_repl_password" => "blahblah",
"server_debian_password" => "blahblah"
},
"mysite" => {
"name" => "My AWESOME site",
"web_root" => "/var/www/mysite"
}
}

chef.cookbooks_path = ["cookbooks","site-cookbooks"]
chef.add_recipe "mysql::server"
chef.add_recipe "mysite"
end
end
[/ruby]

No need to explicitly reference OpenSSL; it’s in the “cookbooks” directory, and since the mysql::server recipe references it, it gets pulled in automatically.

If you run that now you’ll be able to ssh in and fool around with mysql using the user root and password as specified in the chef.json block.

[bash]vagrant ssh
[/bash]

and then

[bash]mysql -u root -p
[/bash]

and enter your password (“blahblah” in my case) to get into your mysql instance.

MySQL not doing very much

Now let’s make it do something. Using the mysql::ruby recipe it’s possible to orchestrate a lot of mysql functionality; this also relies on the build-essential cookbook, so download that into your “cookbooks” directory:

Build essential

[bash]git clone https://github.com/opscode-cookbooks/build-essential.git
[/bash]

To get some useful database abstraction methods we need the database cookbook:

Database

[bash]git clone https://github.com/opscode-cookbooks/database.git
[/bash]

The database cookbook gives a nice way of monkeying around with an RDBMS, making it possible to do funky things like:

[ruby]mysql_connection = {:host => "localhost", :username => 'root',
:password => node['mysql']['server_root_password']}

mysql_database "#{node.mysite.database}" do
connection mysql_connection
action :create
end
[/ruby]

to create a database.

Add the following to the top of the mysite/recipes/default.rb file:

[ruby]include_recipe "mysql::ruby"

mysql_connection = {:host => "localhost", :username => 'root',
:password => node['mysql']['server_root_password']}

mysql_database node['mysite']['database'] do
connection mysql_connection
action :create
end

mysql_database_user "root" do
connection mysql_connection
password node['mysql']['server_root_password']
database_name node['mysite']['database']
host 'localhost'
privileges [:select,:update,:insert, :delete]
action [:create, :grant]
end

mysql_conn_args = "--user=root --password=#{node['mysql']['server_root_password']}"

execute 'insert-dummy-data' do
command %Q{mysql #{mysql_conn_args} #{node['mysite']['database']} <<EOF
CREATE TABLE transformers (name VARCHAR(32) PRIMARY KEY, type VARCHAR(32));
INSERT INTO transformers (name, type) VALUES ('Hardhead','Headmaster');
INSERT INTO transformers (name, type) VALUES ('Chromedome','Headmaster');
INSERT INTO transformers (name, type) VALUES ('Brainstorm','Headmaster');
INSERT INTO transformers (name, type) VALUES ('Highbrow','Headmaster');
INSERT INTO transformers (name, type) VALUES ('Cerebros','Headmaster');
INSERT INTO transformers (name, type) VALUES ('Fortress Maximus','Headmaster');
INSERT INTO transformers (name, type) VALUES ('Chase','Throttlebot');
INSERT INTO transformers (name, type) VALUES ('Freeway','Throttlebot');
INSERT INTO transformers (name, type) VALUES ('Rollbar','Throttlebot');
INSERT INTO transformers (name, type) VALUES ('Searchlight','Throttlebot');
INSERT INTO transformers (name, type) VALUES ('Wideload','Throttlebot');
EOF}
not_if "echo 'SELECT count(name) FROM transformers' | mysql #{mysql_conn_args} --skip-column-names #{node['mysite']['database']} | grep '^11$'"
end
[/ruby]
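Incidentally, the not_if guard only suppresses the insert when the count query's output matches the grep pattern exactly; the `^` and `$` anchors are what rule out substring matches. A tiny standalone illustration:

```bash
printf '110\n11\n1\n' | grep -c '^11$'   # prints 1: only the exact line "11" matches
printf '110\n11\n1\n' | grep -c '11'     # prints 2: "110" and "11" both contain "11"
```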

and add in the new database variable in Vagrantfile:

[ruby highlight="22"]Vagrant.configure("2") do |config|
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
config.vm.network :forwarded_port, guest: 80, host: 8080

config.vm.provision :shell, :inline => "apt-get clean; apt-get update"

config.vm.provision :chef_solo do |chef|

chef.json = {
"apache" => {
"default_site_enabled" => false
},
"mysql" => {
"server_root_password" => "blahblah",
"server_repl_password" => "blahblah",
"server_debian_password" => "blahblah"
},
"mysite" => {
"name" => "My AWESOME site",
"web_root" => "/var/www/mysite",
"database" => "great_cartoons"
}
}

chef.cookbooks_path = ["cookbooks","site-cookbooks"]
chef.add_recipe "mysql::server"
chef.add_recipe "mysite"
end
end
[/ruby]

Now we need a page to display that data, but we need to pass in the mysql password as a parameter. That means we need to use a template; create the file templates/default/robotsindisguise.php.erb with this content:

[php]<?php
$con = mysqli_connect("localhost","root", "<%= @pwd %>");
if (mysqli_connect_errno($con))
{
die('Could not connect: ' . mysqli_connect_error());
}

$sql = "SELECT * FROM great_cartoons.transformers";
$result = mysqli_query($con, $sql);

?>
<table>
<tr>
<th>Transformer Name</th>
<th>Type</th>
</tr>
<?php
while($row = mysqli_fetch_array($result, MYSQLI_ASSOC))
{
?>
<tr>
<td><?php echo $row['name']?></td>
<td><?php echo $row['type']?></td>
</tr>
<?php
}//end while
?>
</table>
<?php
mysqli_free_result($result);
mysqli_close($con);
?>
[/php]

That line at the top might look odd:

[php]$con = mysqli_connect("localhost","root", "<%= @pwd %>");
[/php]

But bear in mind that it’s an ERB (Embedded RuBy) file, so it gets processed by the ruby parser to generate the resulting file; the PHP processor only kicks in once the file is requested from a browser.

As such, if you kick off a vagrant up now and (eventually) vagrant ssh in, open /var/www/mysite/robotsindisguise.php in nano/vi and you’ll see the line

[php]$con = mysqli_connect("localhost","root", "<%= @pwd %>");
[/php]

has become

[php]$con = mysqli_connect("localhost","root", "blahblah");
[/php]
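As a rough analogy only (ERB is ruby, not sed), the substitution the template pass performs on that placeholder is conceptually just a search-and-replace:

```bash
# Hypothetical stand-in for the ERB render step
TEMPLATE='$con = mysqli_connect("localhost","root", "<%= @pwd %>");'
PASSWORD="blahblah"
echo "$TEMPLATE" | sed "s/<%= @pwd %>/$PASSWORD/"
# prints: $con = mysqli_connect("localhost","root", "blahblah");
```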

browsing to http://localhost:8080/robotsindisguise.php should give something like this:

Autobots: COMBINE!

2. Tidy it up a bit

Right now we’ve got data access stuff in the default.rb recipe, so let’s move that lot out; I’ve created the file /recipes/data.rb with these contents:

data.rb
[ruby]include_recipe "mysql::ruby"

mysql_connection = {:host => "localhost", :username => 'root',
:password => node['mysql']['server_root_password']}

mysql_database node['mysite']['database'] do
connection mysql_connection
action :create
end

mysql_database_user "root" do
connection mysql_connection
password node['mysql']['server_root_password']
database_name node['mysite']['database']
host 'localhost'
privileges [:select,:update,:insert, :delete]
action [:create, :grant]
end

mysql_conn_args = "--user=root --password=#{node['mysql']['server_root_password']}"

execute 'insert-dummy-data' do
command %Q{mysql #{mysql_conn_args} #{node['mysite']['database']} <<EOF
CREATE TABLE transformers (name VARCHAR(32) PRIMARY KEY, type VARCHAR(32));
INSERT INTO transformers (name, type) VALUES ('Hardhead','Headmaster');
INSERT INTO transformers (name, type) VALUES ('Chromedome','Headmaster');
INSERT INTO transformers (name, type) VALUES ('Brainstorm','Headmaster');
INSERT INTO transformers (name, type) VALUES ('Highbrow','Headmaster');
INSERT INTO transformers (name, type) VALUES ('Cerebros','Headmaster');
INSERT INTO transformers (name, type) VALUES ('Fortress Maximus','Headmaster');
INSERT INTO transformers (name, type) VALUES ('Chase','Throttlebot');
INSERT INTO transformers (name, type) VALUES ('Freeway','Throttlebot');
INSERT INTO transformers (name, type) VALUES ('Rollbar','Throttlebot');
INSERT INTO transformers (name, type) VALUES ('Searchlight','Throttlebot');
INSERT INTO transformers (name, type) VALUES ('Wideload','Throttlebot');
EOF}
not_if "echo 'SELECT count(name) FROM transformers' | mysql #{mysql_conn_args} --skip-column-names #{node['mysite']['database']} | grep '^11$'"
end
[/ruby]

I’ve moved the php recipe references into recipes/webfiles.rb:

webfiles.rb
[ruby]include_recipe "php"
include_recipe "php::module_mysql"

# -- Setup the website
# create the webroot
directory "#{node.mysite.web_root}" do
mode 0755
end

# copy in an index.html from mysite/files/default/index.html
cookbook_file "#{node.mysite.web_root}/index.html" do
source "index.html"
mode 0755
end

# copy in my usual favicon, just for the helluvit..
cookbook_file "#{node.mysite.web_root}/favicon.ico" do
source "favicon.ico"
mode 0755
end

# copy in the mysql demo php file
template "#{node.mysite.web_root}/robotsindisguise.php" do
source "robotsindisguise.php.erb"
variables ({
:pwd => node.mysql.server_root_password
})
mode 0755
end

# use a template to create a phpinfo page (just creating the file and passing in one variable)
template "#{node.mysite.web_root}/phpinfo.php" do
source "testpage.php.erb"
mode 0755
variables ({
:title => node.mysite.name
})
end
[/ruby]

So /recipes/default.rb now looks like this:

default.rb
[ruby]include_recipe "apache2"
include_recipe "apache2::mod_php5"

# call "web_app" from the apache recipe definition to set up a new website
web_app "mysite" do
# where the website will live
docroot "#{node.mysite.web_root}"

# apache virtualhost definition
template "mysite.conf.erb"
end

include_recipe "mysite::webfiles"
include_recipe "mysite::data"
[/ruby]

Summary

Over the past three articles we’ve automated the creation of a virtual environment via a series of code files, flat files, and template files, and a main script to pull it all together. The result is a full LAMP stack virtual machine. We also created a new website and pushed that on to the VM also.

All files used in this post can be found in the associated github repo.

Any comments or questions would be greatly appreciated, as would pull requests for improving my lame ruby and php skillz! (and lame css and html..)

Chef For Developers part 2

I’m continuing with my plan to create a series of articles for learning Chef from a developer perspective.

Part #1 gave an intro to Chef, Chef Solo, Vagrant, and Virtualbox. I also created my first Ubuntu VM running Apache and serving up the default website.

In this article I’ll get on to creating a cookbook of my own, and evolve it whilst introducing PHP into the mix.

Creating and evolving your own cookbook

1. Cook your own book

Downloaded configuration cookbooks live in the cookbooks subdirectory; this should be left alone as you can exclude it from version control knowing that the cookbooks are remotely hosted and can be downloaded as needed.

For your own ones you need to create a new directory; the convention for this has become to use site-cookbooks, but you can use whatever name you like as far as I can tell. You just need to add a reference to that directory in the Vagrantfile:

[ruby]chef.cookbooks_path = ["cookbooks", "site-cookbooks", "blahblahblah"][/ruby]

Within that new subdirectory you need to have, at a minimum, a recipes subdirectory with a default.rb ruby file which defines what your recipe does. Other key subdirectories are files (exactly that: files to be referenced/copied/whatever) and templates (ruby ERB templates which can be referenced to create a new file).

To create this default structure (for a cookbook called mysite) just use the one-liner:

[bash]mkdir -p site-cookbooks/mysite/{recipes,{templates,files}/default}[/bash]
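If you're curious what that brace expansion actually expands to, a quick check in a scratch directory shows the tree it creates (brace expansion is a bash feature; it won't work under plain sh):

```bash
WORK=$(mktemp -d)
cd "$WORK"
mkdir -p site-cookbooks/mysite/{recipes,{templates,files}/default}
find site-cookbooks -type d | sort
# site-cookbooks
# site-cookbooks/mysite
# site-cookbooks/mysite/files
# site-cookbooks/mysite/files/default
# site-cookbooks/mysite/recipes
# site-cookbooks/mysite/templates
# site-cookbooks/mysite/templates/default
```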

You’ll need to create two new files to spin up our new website; a favicon and a flat index html file. Create something simple and put them in the files/default/ directory (or use my ones).

Now in order for them to be referenced there needs to be a default.rb in recipes:

[ruby]# -- Setup the website
# create the webroot
directory "#{node.mysite.web_root}" do
mode 0755
end

# copy in an index.html from mysite/files/default/index.html
cookbook_file "#{node.mysite.web_root}/index.html" do
source "index.html"
mode 0755
end

# copy in my usual favicon, just for the helluvit..
cookbook_file "#{node.mysite.web_root}/favicon.ico" do
source "favicon.ico"
mode 0755
end[/ruby]

This will create a directory for the website (the location of which needs to be defined in the chef.json section of the Vagrantfile), copy the specified files from files/default/ over, and set the permissions on them all so that the web process can access them.

You can also use the syntax:

[ruby]directory node['mysite']['web_root'] do[/ruby]

in place of

[ruby]directory "#{node.mysite.web_root}" do[/ruby]

So how will Apache know about this site? Better configure it with a conf file from a template; create a new file in templates/default/ called mysite.conf.erb:

[ruby]<VirtualHost *:80>
DocumentRoot <%= @params[:docroot] %>
</VirtualHost>[/ruby]

And then reference it from the default.rb recipe file (add to the end of the one we just created, above):

[ruby]web_app "mysite" do
# where the website will live
docroot "#{node.mysite.web_root}"

# apache virtualhost definition
template "mysite.conf.erb"
end[/ruby]

That just calls the web_app method that exists within the Apache cookbook to create a new site called “mysite”, set the docroot to the same directory as we just created, and configure the virtual host to reference it, as configured in the ERB template.

The Vagrantfile now needs to become:

[ruby]Vagrant.configure("2") do |config|
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
config.vm.network :forwarded_port, guest: 80, host: 8080

config.vm.provision :chef_solo do |chef|

chef.json = {
"apache" => {
"default_site_enabled" => false
},
"mysite" => {
"name" => "My AWESOME site",
"web_root" => "/var/www/mysite"
}
}

chef.cookbooks_path = ["cookbooks","site-cookbooks"]
chef.add_recipe "apache2"
chef.add_recipe "mysite"
end
end[/ruby]

Pro tip: be careful with quotes around the value for default_site_enabled; in ruby the string “false” is truthy, so it behaves like true, whereas the bare boolean false behaves as expected.

Make sure you’ve destroyed your existing vagrant vm and bring this new one up, a decent one-liner is:

[bash]vagrant destroy --force && vagrant up[/bash]

You should see a load of references to your new cookbook in the output and hopefully once it’s finished you’ll be able to browse to http://localhost:8080 and see something as GORGEOUS as:

Salmonpink is underrated

2. Skipping the M in LAMP, Straight to the P: PHP

Referencing PHP

Configure your code to bring in PHP; a new recipe needs to be referenced as a module of Apache:

[ruby]chef.add_recipe "apache2::mod_php5"[/ruby]

It’s probably worth mentioning that

[ruby]add_recipe "apache"[/ruby]

actually means

[ruby]add_recipe "apache::default"[/ruby]

As such, mod_php5 is a recipe file itself, much like default.rb is; you can find it in the Apache cookbook under cookbooks/apache2/recipes/mod_php5.rb, and all it does is call the appropriate package manager to install the necessary libraries.

You may find that you receive the following error after adding in that recipe reference:

[bash]apt-get -q -y install libapache2-mod-php5=5.3.10-1ubuntu3.3 returned 100, expected 0[/bash]

To get around this you need to add in some simple apt-get housekeeping before any other provisioning:

[ruby]config.vm.provision :shell, :inline => "apt-get clean; apt-get update"[/ruby]

PHPInfo

Let’s make a basic phpinfo page to show that PHP is in there and running. To do this you could create a new file and just whack in a call to phpinfo(), but I’m going to create a new template so we can pass in a page title for it to use (create your own, or just use mine):

[html]<html>
<head>
<title><%= @title %></title>
.. snip..
</head>
<body>
<h1><%= @title %></h1>
<div class="description">
<?php
phpinfo( );
?>
</div>
.. snip ..
</body>
</html>[/html]

The default.rb recipe now needs a new section to create a file from the template:

[ruby]# use a template to create a phpinfo page (just creating the file and passing in one variable)
template "#{node.mysite.web_root}/phpinfo.php" do
source "testpage.php.erb"
mode 0755
variables ({
:title => node.mysite.name
})
end[/ruby]

Destroy, rebuild, and browse to http://localhost:8080/phpinfo.php:

A spanking new phpinfo page - wowzers!

Notice the heading and the title of the tab are set to the values passed in from the Vagrantfile.

3. Refactor the Cookbook

We can actually put the add_recipe calls inside of other recipes using include_recipe, so that the dependencies are explicit; no need to worry about forgetting to include apache in the Vagrantfile if you’re including it in your recipe itself.

Let’s make default.rb responsible for the web app itself, and make a new recipe for creating the web files; create a new webfiles.rb in recipes/mysite and move the file related stuff in there:

webfiles.rb
[ruby]# -- Setup the website
# create the webroot
directory "#{node.mysite.web_root}" do
mode 0755
end

# copy in an index.html from mysite/files/default/index.html
cookbook_file "#{node.mysite.web_root}/index.html" do
source "index.html"
mode 0755
end

# copy in my usual favicon, just for the helluvit..
cookbook_file "#{node.mysite.web_root}/favicon.ico" do
source "favicon.ico"
mode 0755
end

# use a template to create a phpinfo page (just creating the file and passing in one variable)
template "#{node.mysite.web_root}/phpinfo.php" do
source "testpage.php.erb"
mode 0755
variables ({
:title => node.mysite.name
})
end[/ruby]

default.rb now looks like

[ruby]include_recipe "apache2"
include_recipe "apache2::mod_php5"

# call "web_app" from the apache recipe definition to set up a new website
web_app "mysite" do
# where the website will live
docroot "#{node.mysite.web_root}"

# apache virtualhost definition
template "mysite.conf.erb"
end

include_recipe "mysite::webfiles"[/ruby]

And Vagrantfile now looks like:

[ruby]Vagrant.configure("2") do |config|
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
config.vm.network :forwarded_port, guest: 80, host: 8080

config.vm.provision :shell, :inline => "apt-get clean; apt-get update"

config.vm.provision :chef_solo do |chef|

chef.json = {
"apache" => {
"default_site_enabled" => false
},
"mysite" => {
"name" => "My AWESOME site",
"web_root" => "/var/www/mysite"
}
}

chef.cookbooks_path = ["cookbooks","site-cookbooks"]
chef.add_recipe "mysite"
end
end[/ruby]

The add_recipes are now include_recipes moved to default.rb, the file related stuff is in webfiles.rb and there’s an include_recipe to reference this new file:

[ruby]include_recipe "mysite::webfiles"[/ruby]

Why the refactoring is important!

Well, refactoring is a nice, cathartic, thing to do anyway. But there’s also a specific reason for doing it here: once we move from using Chef Solo to Grown Up Chef (aka Hosted Chef) the Vagrantfile won’t be used anymore.

As such, moving the logic out of the Vagrantfile (e.g., add_recipe calls) and into our own cookbook (e.g. include_recipe calls) will allow us to use our same recipe in both Chef Solo and also Hosted Chef.

Next up

We’ll be getting stuck in to MySQL integration and evolving a slightly more dynamic recipe.

All files used in this post can be found in the associated github repo.

Chef For Developers

Chef, Vagrant, VirtualBox

In this upcoming series of articles I’ll be trying to demonstrate (and learn for myself) how to effectively configure the creation of an environment. I’ve decided to look into Chef as my environment configuration tool of choice, just because it managed to settle in my brain quicker than Puppet did.

I’m planning on starting really slowly and simply using Chef Solo so I don’t need to learn about the concepts of hosted Chef servers and Chef client nodes to begin with. I’ll be using virtual machines instead of metal, so will be using VirtualBox for the VM-ing and Vagrant for the VM orchestration.

Sounds like Ops to me..

The numerous other articles I’ve read about using Chef all seem to assume a fundamental Linux SysOps background, which melted my little brain somewhat; hence why I’m starting my own series and doing it from a developer perspective.

LINUX?!

Don’t worry if you’re not familiar with Linux; although I’ll start with a Linux VM I’ll eventually move on to applying the same process to Windows, and the commands used in Linux will be srsly basic. Srsly.
Lolz.

Part 1 – I ♥ LAMP

These first few articles will cover:

Chef

Chef

“Chef is an automation platform that transforms infrastructure into code”. You are ultimately able to describe what your infrastructure looks like in ruby code and manage your entire server estate via a central repository; adding, removing, and updating features, applications, and configuration from the command line with an extensive Chef toolbelt.

Yes, there are knives. And cookbooks and recipes. Even a food critic!

Here’s the important bit: The difference between Chef Solo and one of the Hosted Chef options

Chef Solo

  1. You only have a single Chef client which uses a local json file to understand what it is comprised of.
  2. Cookbooks are either saved locally to the client or referenced via URL to a tar archive.
  3. There is no concept of different environments.

Hosted Chef

  1. You have a master Chef server to which all Chef client nodes connect to understand what they are comprised of.
  2. Cookbooks are uploaded to the Chef server using the Knife command line tool.
  3. There is the concept of different environments (dev, test, prod).

I’ll eventually get on to this in more detail as I’ll be investigating Chef over the next few posts in this series; for now, please just be aware that in this scenario Chef Solo is being used to demonstrate the benefit of environment configuration and is not being recommended as a production solution. Although in some cases it might be.

VirtualBox

virtualbox

“VirtualBox is a cross-platform virtualization application”. You can easily configure a virtual machine in terms of RAM, HDD size and type, network interface type and number, and CPU; you can even configure shared folders between host and client. Then you can point the virtual master drive at an ISO on the host computer and install an OS as if you were sitting at a physical machine.

This has so many uses, including things like setting up a development VM for installing loads of dev tools if you want to keep your own computer clean, or setting up a presentation machine containing just powerpoint, your slides, and Visual Studio for demos.

Vagrant

vagrant up

Vagrant is an open source development environment virtualisation technology written in Ruby. Essentially you use Vagrant to script against VirtualBox, VMWare, AWS or many others; you can even write your own provider for it to hook into!

The code for Vagrant is open source and can be found on github

Getting started

Downloads

For this first post you don’t even need to download the Chef client, so we’ll leave that for now.

Go and download Vagrant and VirtualBox and install them.

Your First Scripted Environment

1. Get a base OS image

To do this, download a Vagrant “box” (an actual base OS, of which there are many) from the specified URL, assign a friendly name (e.g. “precise32”) to it, and create a base “Vagrantfile” using Vagrant’s “init” method; from the command line, run:

[bash]vagrant init precise32 http://files.vagrantup.com/precise32.box[/bash]

vagrant init

A Vagrantfile is a little bit of ruby to define the configuration of your Vagrant box; the autogenerated one is HUGE, but it’s pretty much all tutorial-esque comments. Ignoring the comments gives you something like this:

[ruby]Vagrant::Config.run do |config|
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
end
[/ruby]

Yours might also look like this depending on whether you’re defaulting to Vagrant v2 or v1:

[ruby]Vagrant.configure("2") do |config|
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
end
[/ruby]

This is worth bearing in mind as the syntax for various operations differ slightly between versions.

2. Create and start your basic VM

From the command line:

Create and start up the basic vm

[bash]vagrant up[/bash]

vagrant up

If you have Virtualbox running you’ll see the new VM pop up and the preview window will show it booting.

vagrant up in virtualbox

SSH into it

[bash]vagrant ssh[/bash]

vagrant ssh

Stop it

[bash]vagrant halt[/bash]

vagrant halt

Remove all trace of it

[bash]vagrant destroy[/bash]

vagrant destroy

And that’s your first basic, scripted, virtual machine using Vagrant! Now let’s add some more useful functionality to it:

3. Download Apache Cookbook

Create a subdirectory “cookbooks” in the same place as your Vagrantfile, then head over to the opscode github repo and download the Apache2 cookbook into the “cookbooks” directory.

OpsCode cookbooks repo for Apache

Apache

[bash]git clone https://github.com/opscode-cookbooks/apache2.git[/bash]

Gitting it

4. Set up Apache using Chef Solo

Now it starts to get interesting.

Update your Vagrantfile to include port forwarding so that browsing to localhost:8080 redirects to your VM’s port 80:

[ruby]Vagrant.configure("2") do |config|
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
config.vm.network :forwarded_port, guest: 80, host: 8080
end[/ruby]

Now add in the Chef provisioning to include Apache in the build:

[ruby]Vagrant.configure("2") do |config|
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
config.vm.network :forwarded_port, guest: 80, host: 8080

config.vm.provision :chef_solo do |chef|
chef.cookbooks_path = ["cookbooks"]
chef.add_recipe "apache2"
end
end[/ruby]

Kick it off:

[bash]vagrant up[/bash]

Vagrant with Apache - starting boot

..tick tock..

Vagrant with Apache - finishing boot

So we now have a fresh new Ubuntu VM with Apache installed, configured, and running on port 80, with our own port 8080 forwarded to the VM’s port 80; let’s check it out!

Browsing the wonderful Apache site

Huh? Where’s the lovely default site you normally get with Apache? Apache is definitely running – check the footer of that screen.

What’s happening is that on Ubuntu the default site doesn’t get enabled so we have to do that ourselves. This is also a great intro to passing data into the chef provisioner.

Add in this little chunk of JSON to the Vagrantfile:

[ruby]chef.json = {
"apache" => {
"default_site_enabled" => true
}
}[/ruby]

So it should now look like this:

[ruby]Vagrant.configure("2") do |config|
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
config.vm.network :forwarded_port, guest: 80, host: 8080

config.vm.provision :chef_solo do |chef|

chef.json = {
"apache" => {
"default_site_enabled" => true
}
}

chef.cookbooks_path = ["cookbooks"]
chef.add_recipe "apache2"
end
end[/ruby]

The chef.json section passes the specified variable values into the specified recipe file. If you dig into default.rb in /cookbooks/apache2/recipes you’ll see this block towards the end:

[ruby]apache_site "default" do
enable node['apache']['default_site_enabled']
end[/ruby]

Essentially this says “for the site default, set its status equal to the value defined by default_site_enabled in the apache node config section”. For Ubuntu this defaults to false (other OSs default it to true), and we’ve just set ours to true.

Let’s try that again:

[bash]vagrant reload[/bash]

(reload is the equivalent of vagrant halt && vagrant up)

Notice that this time, towards the end of the run, we get the message

[bash]INFO: execute[a2ensite default] ran successfully[/bash]

instead of on the previous one:

[bash]INFO: execute[a2dissite default] ran successfully[/bash]

  • a2ensite = enable site
  • a2dissite = disable site

So what does this look like?

Browsing the wonderful Apache site.. take 2

BOOM!

Next up

Let’s dig into the concept of Chef recipes and creating our own ones.