configuration management

Berkshelf - the missing piece

If you've been following my past few posts, you've seen I was investigating how best to integrate the plethora of Chef testing tools that have been coming out - foodcritic, chefspec, test-kitchen, minitest - and, although not testing tools per se, Berkshelf and Vagrant are the other pieces of the puzzle. But how do they all fit together? What is the directory structure for keeping your Berksfile - at the top of the repo? Inside a cookbook directory? How many Vagrantfiles am I going to create here?

If, like myself, you weren't along at this year's ChefConf 2013, you may also have missed out on a major conceptual shift that has happened: a move away from the all-inclusive chef-repo design pattern implied by the Opscode chef-repo - https://github.com/opscode/chef-repo - which, when used with all the community cookbooks out there, creates a mess of forked, modified and sub-moduled cookbooks and recipes.

The conceptual shift, and now the recommended way of working, is to treat each cookbook as a separate piece of software and give it its own git repo, keeping them separate from your chef-repo. This, combined with a distinction between library and application cookbooks, all bundled together via Berkshelf, enables a much cleaner and more modular way of working. Once you accept this move, it's much easier to fit all the testing pieces together, as they all live within each separate cookbook/repo.
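To make that concrete, the Berksfile lives at the root of each cookbook repo rather than at the top of a chef-repo. A minimal one might look something like this (the cookbook name and git URL here are made-up examples, and the first line may differ depending on your Berkshelf version):

site :opscode

# pull this cookbook's own dependencies from its metadata.rb
metadata

# internal library/application cookbooks come straight from their own git repos
cookbook "my_library_cookbook", git: "git@github.com:example/my_library_cookbook.git"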

This Comment Thread was what really drew it together for me; then, to fully clarify this way of working, watch Jamie Winsor's ChefConf talk, which is the original starting point:

some fav Velocity talks

Velocity was on last week, and I was following along enviously on twitter - I'm making a promise to myself that I'll be along in person next year!

Quite a few of the talks are now appearing online - here are a couple I've watched so far:

Puppet stages and APT


At work, our old code deployment strategy was basically a wrapper script doing an svn checkout and some symlinking. With our move to Puppet for config management, we also moved to using apt packaging for our code deployment, tying the two together with a resource similar to:

class foo-export {
  package { 'foo-export': ensure => latest }
}

So that whenever we deploy a new version of a package to our apt-repo, it can then be installed with a:

puppet agent --test
(and with an initial dry-run using --noop)

(I should mention I manage our Puppet runs via our own distributed scripts, rather than having the nodes set up to check in every 30 mins - when I'm doing so much work on our Puppet setup and config, I'd rather not have machines check in automatically in case the config is in a broken state.)

Inevitably I would run the above Puppet command and it would not find any new packages, because, d'uh, of course I still need to run an apt-get update first.

I've been using Puppet stages for a while now, in order to group package installations in a broader sense rather than manually spelling out every dependency with a require => stanza, so it was a simple addition to add in a pre stage, and have the nodes run apt-get update before any runs.
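For comparison, without stages every package resource has to spell out that ordering itself, something like this (foo-export is just the example from above):

class foo-export {
  package { 'foo-export':
    ensure  => latest,
    require => Exec['aptHupdate'],   # repeated on every package resource
  }
}

With a pre stage, that require line goes away and the ordering is declared once, as follows.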

In order to use stages, you need to first define them in your site.pp. By default every defined class runs under Stage[main], so you just need to add the new stages and define the running order. (full Puppet stage documentation is here)

At the top of my site.pp file, I added a pre and post stage, then defined the execution order via:

stage { [pre, post]: }
Stage[pre] -> Stage[main] -> Stage[post]

Then I created a class called apt-hupdate (sorry, I use stupid naming conventions!) in
modules/apt-hupdate/manifests/init.pp

which contained:
class apt-hupdate {
  exec { "aptHupdate":
    command => "/usr/bin/apt-get update",
  }
}

And finally, include that in your site.pp with:

class { apt-hupdate: stage => pre }

Now every time you do a Puppet run, apt-get update will be the first task run.

building a DEB package from a perl script

I have a speedtest Perl script I wrote - nothing complicated: it takes a file and uploads it to a remote FTP or SFTP server while timing the transfer, then gives you a measure of the MB-per-second bandwidth between the two sites.
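The core of it is really just timing the transfer and dividing the file size by the elapsed seconds - something along these lines (a stripped-down FTP-only sketch, not the actual script; host and credentials are placeholders):

#!/usr/bin/perl
use strict;
use warnings;
use Net::FTP;
use Time::HiRes qw(gettimeofday tv_interval);

# placeholder connection details
my ($host, $user, $pass, $file) = ('ftp.example.com', 'user', 'secret', '25MBFLAC.file');
my $size_mb = (-s $file) / (1024 * 1024);

my $ftp = Net::FTP->new($host) or die "Cannot connect to $host: $@";
$ftp->login($user, $pass) or die "Login failed: " . $ftp->message;
$ftp->binary;

my $start = [gettimeofday];
$ftp->put($file) or die "Upload failed: " . $ftp->message;
my $elapsed = tv_interval($start);
$ftp->quit;

printf "Uploaded %.1f MB in %.1f seconds = %.2f MB/s\n", $size_mb, $elapsed, $size_mb / $elapsed;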

I want it available on a selection of machines so it can run from wherever, so I thought I'd package it up as a .deb file and stick it in our local repo. Nothing complicated in that, and there are a number of online tutorials about building your own debs. The main drawback with most I found was that they assume you are actually building from source rather than just distributing a script, although I also found a relevant Ask Ubuntu thread which is pretty simple and to the point.

However, even using these tutorials it still took me a few hours to figure out. There are just a couple of non-obvious points, so I figure writing out my own steps is worth recording.

So first, grab your required packages:

apt-get install dh-make dpkg-dev debhelper devscripts fakeroot lintian

You will need to build from a directory with the name of your package in the form packagename-version, so for mine I created /tmp/speedtest-1.0, then copied in my script 'speedtest' and its data file 25MBFLAC.file (which I could have created with dd on the box rather than copying it over, but downloading the file is actually quicker in this situation).

The first step is to run:

dh_make -s --indep --createorig -e thor@valhalla.com
(-s means create a single binary .deb, i.e. no source version; --indep means architecture-independent; and --createorig creates the .orig source tarball from the current directory)

This creates a top-level 'debian' directory containing all the necessary config files.
The main one you need to edit is debian/control - you probably only need to fill in "Section", "Homepage" and "Description".

Mine looks like:

Source: speedtest
Section: web
Priority: extra
Maintainer: Thorsten Sideboard <thor@valhalla.com>
Build-Depends: debhelper (>= 7.0.50~)
Standards-Version: 3.8.4
Homepage: http://github.com/sideboard/speedtest.git

Package: speedtest
Architecture: all
Depends: ${misc:Depends}
Description: Test Upload Speeds

One of the things which baffled me for a while, and which was answered in the Ask Ubuntu link above, was how to specify where something is installed - it goes in a file, debian/install, which isn't created for you. The format of the file is "filename location/to/be/installed" (without the initial slash).

So in my case, I ran:
echo "speedtest usr/local/Scriptz/" > debian/install
echo "25MBFLAC.file usr/local/Scriptz/" >> debian/install

At this point, you should then be able to run:
debuild -us -uc

and you should have a .deb file built. But...

First I ran into:

dpkg-source: error: can't build with source format '3.0 (quilt)': no orig.tar file found

As the above-mentioned Ask Ubuntu post says, you can fix this with:

echo "1.0" > debian/source/format

Then, re-running debuild -us -uc, I ran into:

dpkg-source: error: cannot represent change to speedtemp-1.0/25MBFLAC.file: binary file contents changed

This error is due to leftover build cruft from my last run - if you check the directory one step up from where you are, you'll see debuild has already built some files for you, typically a tar.gz, a .dsc and a .build file. Delete all of them, then re-run debuild -us -uc - now it should build properly!
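In my case that meant something like this (the exact filenames debuild leaves behind may vary):

cd /tmp/speedtest-1.0
rm -f ../speedtest_1.0*    # the leftover .orig.tar.gz, .dsc and .build files from the previous run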

ah!

dh_usrlocal: debian/speedtemp/usr/local/Scriptz/speedtest is not a directory

This one also caught me out for a while - it turns out this is caused by my specifying "/usr/local/Scriptz" as my install location:

Most third-party software installs itself in the /usr/local directory hierarchy. On Debian this is reserved for private use by the system administrator, so packages must not use directories such as /usr/local/bin but should instead use system directories such as /usr/bin, obeying the Filesystem Hierarchy Standard (FHS).

(from here)

So, yeah, I changed my debian/install file to be "speedtest usr/bin"

And finally, running debuild -us -uc completes properly, outputting /tmp/speedtest_1.0-1_all.deb, which can then be installed via:
dpkg -i /tmp/speedtest_1.0-1_all.deb

One last note - there are four useful scripts to also know about: preinst, postinst, prerm and postrm. These should be in the debian/ directory and are pretty self-explanatory - pre- and post-install and remove scripts. If they exist, they will be run exactly as they are named. For example, I wanted my 25MBFLAC.file still to be installed under /usr/local/Scriptz, so I listed it in the debian/install file as "25MBFLAC.file tmp" and then in my postinst file I added:

#!/bin/sh
mv /tmp/25MBFLAC.file /usr/local/Scriptz/

CPAN Diff script


I put together a quick Perl script for comparing installed CPAN modules between two hosts. Find it here.

Quite easy to use:
Usage: ./CompareHostCpanModules.pl login@host1 login@host2

The script ssh's into both hosts (so it's easier if you have your ssh keys set up) and grabs a list of installed CPAN modules and versions, then outputs the differences. It returns two lists: one of modules installed on both hosts but with different versions, and another of modules missing from the second host.
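The real script is at the link above, but the rough idea is something like this simplified sketch:

#!/usr/bin/perl
use strict;
use warnings;

die "Usage: $0 login\@host1 login\@host2\n" unless @ARGV == 2;
my ($host1, $host2) = @ARGV;

# one-liner run on each remote host to list installed modules and their versions
my $remote_cmd = q{perl -MExtUtils::Installed -e '}
               . q{my $i = ExtUtils::Installed->new; }
               . q{print "$_ ", ($i->version($_) || 0), "\n" for $i->modules'};

sub modules_on {
    my ($target) = @_;
    # list form of open so the local shell doesn't mangle the quoting
    open my $ssh, '-|', 'ssh', $target, $remote_cmd
        or die "Cannot ssh to $target: $!";
    my %mods;
    while (<$ssh>) {
        my ($name, $ver) = split;
        $mods{$name} = $ver if defined $name;
    }
    close $ssh;
    return %mods;
}

my %a = modules_on($host1);
my %b = modules_on($host2);

print "Modules with differing versions:\n";
for my $m (sort keys %a) {
    next unless exists $b{$m};
    print "  $m: $host1=$a{$m} $host2=$b{$m}\n" if $a{$m} ne $b{$m};
}

print "\nModules missing from $host2:\n";
for my $m (sort keys %a) {
    print "  $m\n" unless exists $b{$m};
}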

Gen Xen

I've been working pretty extensively with Xen and Puppet in my new job, really loving it! I've been creating a whole load of Xen hosts, most of which are cloned from an initial image I built using xen-tools. I've just finished a script, over on my github page, which basically automates what was previously a manual process.

Basically, it copies your existing disk.img and swap.img, generates a new Xen .cfg file based on some interactive input (desired hostname, IP, memory and number of vCPUs) plus a random Xen MAC address, then mounts the disk.img file and changes the appropriate system files - /etc/hostname, /etc/hosts and /etc/network/interfaces.

All quite simple and straightforward, but quite nice to have automated.
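For context, the manual process it replaces looked roughly like this (hostnames and paths are just examples):

# copy the base images built with xen-tools
mkdir /var/virt-machines/newhost
cp /var/virt-machines/xen-squeeze-base/disk.img /var/virt-machines/xen-squeeze-base/swap.img /var/virt-machines/newhost/

# mount the new disk image and fix up its identity
mount -o loop /var/virt-machines/newhost/disk.img /mnt
echo "newhost" > /mnt/etc/hostname
vi /mnt/etc/hosts /mnt/etc/network/interfaces    # set hostname and static IP
umount /mnt

# write a Xen config (hostname, IP, memory, vcpus, MAC) and boot it
vi /etc/xen/newhost.cfg
xm create -c /etc/xen/newhost.cfg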

GenXen

Here's the README:

GenXen #
#############################

A script for automating Xen VM deployment.

It requires that you have a base disk.img and swap.img already created.
I created mine with:
xen-create-image --pygrub --size=50Gb --swap=9Gb --vcpus=2 --memory=6Gb --dist=squeeze --dhcp --passwd --dir=/var/virt-machines --hostname=xen-squeeze-base

Fill in some of the variables at the top of GenXen.pl before running, then simply:
./GenXen.pl

The interactive part will ask for hostname, memory size, vCPUs and IP address, then generate a unique Xen MAC address and write these all to a Xen config file which will be saved in /etc/xen/

It'll copy your disk.img and swap.img to the destination dir, mount the disk.img and create appropriate files for:
/etc/hostname
/etc/hosts
/etc/network/interfaces

After that you should be good to launch with:

xm create -c /etc/xen/whatever-your-hostname-is.cfg

Vagrant and Chef setup

I've been reading through ThoughtWorks' latest 'Technology Radar', which led me to look up Vagrant, one of the tools they list as worth exploring.

Vagrant is a framework for building and deploying Virtual Machine environments, using Oracle VirtualBox for the actual VMs and utilizing Chef for configuration management.

Watching through this intro video:

http://vimeo.com/9976342

I was quite intrigued, as it's very similar to what I was looking to achieve earlier when I was experimenting with installing Xen and configuring it with Puppet.

So here's what I experienced during the setup of Vagrant on my MacBook. I decided to start with a simple Chef install to familiarise myself with Chef itself and its requirements - CouchDB, RabbitMQ and Solr - mostly by following these instructions:

-CHEF INSTALL-

sudo gem install chef
sudo gem install ohai

Chef uses CouchDB as its datastore, so we need to install it using the instructions here

brew install couchdb

The instructions I list above also contain steps to create a CouchDB user and set it up as a daemon. They didn't work for me, and after 30 minutes of troubleshooting I gave up and went with the simpler option of running it under my own user - in production this will be running on a Linux server rather than my MacBook, so it seemed fair enough:

cp /usr/local/Cellar/couchdb/1.1.0/Library/LaunchDaemons/org.apache.couchdb.plist ~/Library/LaunchAgents/

launchctl load -w ~/Library/LaunchAgents/org.apache.couchdb.plist

Check it's running okay by going to
http://127.0.0.1:5984/

which should provide something akin to :
{"couchdb":"Welcome","version":"1.1.0"}

- INSTALL RABBITMQ -

brew install rabbitmq
/usr/local/sbin/rabbitmq-server -detached

sudo rabbitmqctl add_vhost /chef
sudo rabbitmqctl add_user chef testing
sudo rabbitmqctl set_permissions -p /chef chef ".*" ".*" ".*"

Ok, Gettin' back to my mission, break out the whipped cream and the cherries, then I go through all the fly positions - oh, wrong mission!

Ok..

brew install gecode
brew install solr

sudo gem install chef-server chef-server-api chef-solr
sudo gem install chef-server-webui
sudo chef-solr-installer

Set up a conf file:
sudo mkdir /etc/chef
sudo vi /etc/chef/server.rb
- paste in the example from:

http://wiki.opscode.com/display/chef/Manual+Chef+Server+Configuration - making the appropriate changes for your FQDN

At this point, the above instructions ask you to start the indexer. However, the instructions haven't been updated to reflect changes in Chef 0.10.2, in which chef-solr-indexer has been replaced with chef-expander.

So, instead of running:
sudo chef-solr-indexer

you instead need to run:
sudo chef-expander -n1 -d

Next I tried
sudo chef-solr

which ran into
“`configure_chef': uninitialized constant Chef::Application::SocketError (NameError)”

I had to create an /etc/chef/solr.rb file and simply add this line to it:

require 'socket'

Startup now worked.
If you want to daemonize it, use:

sudo chef-solr -d

Next start Chef Server with:
sudo chef-server -N -e production -d

and finally:
sudo chef-server-webui -p 4040 -e production

Now you should be up and running - you need to configure the command-line client, knife, following the instructions here, under the section 'Configure the Command Line Client'.

mkdir -p ~/.chef
sudo cp /etc/chef/validation.pem /etc/chef/webui.pem ~/.chef
sudo chown -R $USER ~/.chef

knife configure -i

(follow the instructions at the link - you only need to change the location of the two pem files you copied above)

Ok, so hopefully you're at the same place as me, with this all working at least as far as being able to log into CouchDB and verify that Chef and knife are both working.

- VAGRANT SETUP -

Now, onward with the original task of Vagrant setup…
Have a read over the getting started guide:

Install VirtualBox - download from http://www.virtualbox.org/wiki/Downloads

Run the installer, which should all work quite easily. Next..

gem install vagrant

mkdir vagrant_guide
cd vagrant_guide/
vagrant init

This creates the base Vagrantfile, which the documentation compares to a Makefile - basically a reference file for the project to work with.

Set up our first VM:
vagrant box add lucid32 http://files.vagrantup.com/lucid32.box

This is downloaded and saved in ~/.vagrant.d/boxes/

Edit the Vagrantfile which was created and change the "box" entry to be "lucid32", the name of the box we just added.
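At this point the relevant part of the Vagrantfile is just (we'll add the Chef provisioning to this block later on):

Vagrant::Config.run do |config|
  config.vm.box = "lucid32"
end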

Bring it online with:
vagrant up

then ssh into it with
vagrant ssh

Ace, that worked quite easily. After a little digging around, I logged out and tore the machine down again with
vagrant destroy

- TYING IT ALL TOGETHER -
Now we need to connect our Vagrant install with our Chef server

First, clone the Chef repository with:
git clone git://github.com/opscode/chef-repo.git

Add this dir to your ~/.chef/knife.rb file, i.e.:
cookbook_path ["/Users/thorstensideboard/chef-repo/cookbooks"]
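For reference, the whole knife.rb ends up being roughly what knife configure -i generates plus that one extra line - something like this, with your own names and paths:

log_level                :info
log_location             STDOUT
node_name                "thorstensideboard"
client_key               "/Users/thorstensideboard/.chef/thorstensideboard.pem"
validation_client_name   "chef-validator"
validation_key           "/Users/thorstensideboard/.chef/validation.pem"
chef_server_url          "http://SBD-IODA.local:4000"
cookbook_path            ["/Users/thorstensideboard/chef-repo/cookbooks"]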

Download the Vagrant cookbook they use in their examples -

wget http://files.vagrantup.com/getting_started/cookbooks.tar.gz
tar xzvf cookbooks.tar.gz
mv cookbooks/* chef-repo/cookbooks/

Add it to our Chef server using Knife:
knife cookbook upload -a
(knife uses the cookbook_path we set up above)

If you browse to your localhost at
http://sbd-ioda.local:4040/cookbooks/
you should see the three new cookbooks which have been added.

Now to edit Vagrantfile and add your Chef details:

Vagrant::Config.run do |config|

  config.vm.box = "lucid32"

  config.vm.provision :chef_client do |chef|
    chef.chef_server_url = "http://SBD-IODA.local:4000"
    chef.validation_key_path = "/Users/thorsten/.chef/validation.pem"
    chef.add_recipe("vagrant_main")
    chef.add_recipe("apt")
    chef.add_recipe("apache2")
  end
end

I tried to load this up with
vagrant up
however I received:

“[default] [Fri, 05 Aug 2011 09:27:07 -0700] INFO: *** Chef 0.10.2 ***
: stdout
[default] [Fri, 05 Aug 2011 09:27:07 -0700] INFO: Client key /etc/chef/client.pem is not present - registering
: stdout
[default] [Fri, 05 Aug 2011 09:27:28 -0700] FATAL: Stacktrace dumped to /srv/chef/file_store/chef-stacktrace.out
: stdout
[default] [Fri, 05 Aug 2011 09:27:28 -0700] FATAL: SocketError: Error connecting to http://SBD-IODA.local:4000/clients - getaddrinfo: Name or service not known”

I figured this was a networking issue, and yeah, within the VM it has no idea of my MacBook's local hostname, which I fixed by editing the VM's /etc/hosts file and manually adding it.

Upon issuing a
vagrant reload - boom! You can see the Vagrant host following the recipes and loading up a bunch of things, including apache2.

However, at this point you can still only access its webserver from within the VM, so in order to access it from our own desktop browser, we can add the following line to the Vagrantfile:
config.vm.forward_port("web", 80, 8080)

After another reload, you should now be able to connect to localhost:8080 and access your new VM's apache host.

Using this setup in any sort of dev environment will still need a good deal more work, but for the moment this should be enough to get you up and running and able to explore both Vagrant and Chef.