If you've been following my past few posts, you've seen I was investigating how best to integrate the plethora of Chef testing tools that have been coming out (foodcritic, chefspec, test-kitchen, minitest) and, although not testing tools per se, Berkshelf and Vagrant are the other pieces of the puzzle… but how do they all fit together? What is the directory structure for keeping your Berksfile: at the top of the repo? Inside a cookbook directory? How many Vagrantfiles am I going to create here?
If, like myself, you weren't at this year's ChefConf 2013, you may also have missed out on a major conceptual shift that has happened. The old way is the all-inclusive chef-repo design pattern, as implied by the Opscode chef-repo (https://github.com/opscode/chef-repo), which, when used with all the community cookbooks out there, creates a mess of forked, modified and sub-moduled cookbooks and recipes.
The conceptual shift, and the now recommended way, is to treat each cookbook as a separate piece of software and give it its own git repo, keeping them separate from your chef-repo. This, combined with a distinction between library and application cookbooks, all bundled together via Berkshelf, enables a much cleaner and more modular way of working. Once you accept this move, it's much easier to fit all the testing pieces together, as they all live within each separate cookbook/repo.
This comment thread was what really drew it together for me, and to fully clarify this way of working, watch Jamie Winsor's ChefConf talk, which is the original starting point.
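To make that concrete, each application cookbook carries its own Berksfile which pulls in the library cookbooks it depends on. A minimal sketch; the cookbook names and git URL here are purely illustrative:

# Berksfile at the root of an application cookbook's own repo
site :opscode          # resolve community cookbooks from the Opscode community site

metadata               # pull in the dependencies declared in this cookbook's metadata.rb

# an internal library cookbook living in its own git repo
cookbook "our_base", git: "git@github.com:ourorg/our_base-cookbook.git"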
At work, our old code deployment strategy was basically a wrapper script doing an svn checkout and some symlinking. With our move to Puppet for config management, we also moved to using apt packaging for our code deployment, tying them together with a line similar to the following.
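Something like this sketch, where 'our-app' is just a placeholder for one of our real package names:

package { 'our-app':
  ensure => latest,   # always move to whatever version is newest in our apt repo
}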
So whenever we deploy a new version of a package to our apt repo, it can then be installed with a:
puppet agent --test (and with an initial dry-run using --noop)
(I should mention I manage our Puppet runs via our own distributed scripts, rather than having the nodes set up to check in every 30 minutes. While I'm doing so much work on our Puppet setup and config, I'd rather not have machines check in automatically in case the config is in a broken state.)
Inevitably I would run the above Puppet command and it would not find any new packages because, d'uh, of course I still needed to run an apt-get update first.
I've been using Puppet stages for a while now, in order to group package installations in a broader sense rather than manually spelling out every dependency with a require => stanza, so it was a simple addition to add a pre stage and have the nodes run apt-get update before anything else.
In order to use stages, you need to first define them in your site.pp. By default every defined class runs under Stage[main], so you just need to add the new stages and define the running order. (full Puppet stage documentation is here)
At the top of my site.pp file, I added a pre and post stage and defined the execution order.
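It looks roughly like this; the apt_update class name and the node block are illustrative rather than my exact manifest:

stage { 'pre': }
stage { 'post': }
Stage['pre'] -> Stage['main'] -> Stage['post']

# run apt-get update in the pre stage, before any package resources
class apt_update {
  exec { 'apt-get update':
    path => '/usr/bin:/usr/sbin:/bin:/sbin',
  }
}

node default {
  class { 'apt_update':
    stage => 'pre',
  }
}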
I have a speedtest perl script I wrote. Nothing complicated: it takes a file and uploads it to a remote FTP or SFTP server, timing how long it takes, then gives you a measure of the MB-per-second bandwidth between the two sites.
I want it available on a selection of machines so it can run from wherever, so I thought I'd package it up as a .deb file and stick it in our local repo. Nothing complicated in that, and there are a number of online tutorials about building your own debs. The main drawback with most of the ones I found is that they assume you are actually building from source rather than just distributing a script, although I also found a relevant Ubuntu thread which is pretty simple and to the point.
However, even using these tutorials it still took me a few hours to figure out. There are just a couple of non-obvious points, so I figure my own steps are worth recording.
You will need to build from a directory named after your script in the form packagename-version, so for mine I created /tmp/speedtest-1.0, then copied in my script 'speedtest' and its data file 25MBFLAC.file (which I could have created with dd on the box rather than copying over, but downloading the file is actually quicker in this situation).
The first step is to run:
dh_make -s --indep --createorig -e thor@valhalla.com (-s means create a single binary .deb, i.e. no separate source package; --indep means architecture-independent; and --createorig creates the original source archive from the current directory)
This creates a top-level 'debian' directory containing all the necessary config files. The main one you need to edit is debian/control; you probably only need to fill in "Section", "Homepage" and "Description".
Mine looks like:
Source: speedtest
Section: web
Priority: extra
Maintainer: Thorsten Sideboard <thor@valhalla.com>
Build-Depends: debhelper (>= 7.0.50~)
Standards-Version: 3.8.4
Homepage: http://github.com/sideboard/speedtest.git

Package: speedtest
Architecture: all
Depends: ${misc:Depends}
Description: Test Upload Speeds
One of the things which baffled me for a while, and which was answered in the askubuntu link above, was how to specify where something is installed: it goes in a file, debian/install, which isn't created for you. The format of the file is 'filename location/to/be/installed' (without the initial slash on the destination).
So in my case, I ran:
echo "speedtest usr/local/Scriptz/" > debian/install
echo "25MBFLAC.file usr/local/Scriptz/" >> debian/install
At this point, you should then be able to run: debuild -us -uc
and you should have a .deb file built. But…
First I ran into:
dpkg-source: error: can't build with source format '3.0 (quilt)': no orig.tar file found
The above-mentioned askubuntu post covers this one too; in my case the error was due to leftover build cruft from my last run. If you check the directory one step up from where you are, you'll see debuild has already built some files for you, typically a .tar.gz, a .dsc and a .build file. Delete them all, then re-run debuild -us -uc and now it should build properly!
Ah, but then:
dh_usrlocal: debian/speedtemp/usr/local/Scriptz/speedtest is not a directory
This one also caught me out for a while. It turns out it's caused by my specifying "/usr/local/Scriptz" as the install location:
Most third-party software installs itself in the /usr/local directory hierarchy. On Debian this is reserved for private use by the system administrator, so packages must not use directories such as /usr/local/bin but should instead use system directories such as /usr/bin, obeying the Filesystem Hierarchy Standard (FHS).
So, yeah, I changed my debian/install file to be "speedtest usr/bin"
And finally, running debuild -us -uc completes properly, outputting /tmp/speedtest_1.0-1_all.deb, which can then be installed via dpkg -i /tmp/speedtest_1.0-1_all.deb
One last note: there are four other useful scripts to know about, preinst, postinst, prerm and postrm. These live in the debian/ directory and are pretty self-explanatory: pre- and post-install and remove scripts which, if they exist, are run exactly as they are named. For example, I wanted my 25MBFLAC.file to still end up under /usr/local/Scriptz, so I listed it in the debian/install file as "25MBFLAC.file tmp" and then moved it into place in my postinst file.
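My postinst was along these lines; the commands here are a sketch of the idea rather than a copy of the file (dh_make's postinst.ex template provides the skeleton and the #DEBHELPER# token):

#!/bin/sh
set -e

case "$1" in
    configure)
        # move the data file out of /tmp and into its real home
        mkdir -p /usr/local/Scriptz
        mv /tmp/25MBFLAC.file /usr/local/Scriptz/
        ;;
esac

#DEBHELPER#

exit 0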
I put together a quick perl script for comparing installed CPAN modules between two hosts. Find it here.
Quite easy to use:
Usage: ./CompareHostCpanModules.pl login@host1 login@host2
The script ssh's into both hosts (so it's easier if you have your ssh keys set up) and grabs a list of installed CPAN modules and versions, then outputs the differences. It returns two lists: one of modules installed on both hosts but at different versions, and one of modules missing from the second host.
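For reference, the sort of command it runs over ssh to gather that list looks roughly like this one-liner using the core ExtUtils::Installed module (a sketch, not necessarily the script's exact invocation):

perl -MExtUtils::Installed -e 'my $inst = ExtUtils::Installed->new; printf("%s %s\n", $_, $inst->version($_) || "unknown") for $inst->modules;'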
I've been working pretty extensively with Xen and Puppet in my new job, really loving it! I've been creating a whole load of Xen hosts, most of which are cloned from an initial image I built using Xen-tools. I've just finished a script which is over on my github page, which basically automates what was previously a manual process.
Basically, it copies your existing disk.img and swap.img, generates a new xen.cfg file based on some interactive input (desired hostname, IP, memory and number of vCPUs) plus a random Xen MAC address, then mounts the disk.img file and changes the appropriate system files: /etc/hostname, hosts, and network/interfaces.
All quite simple and straight forward, but quite nice to have automated.
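For the random Xen MAC address, the Xen-reserved OUI is 00:16:3e, so a one-liner along these lines does the job (a sketch rather than the script's exact code):

printf '00:16:3e:%02x:%02x:%02x\n' $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))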
It requires that you have a base disk.img and swap.img already created. I created mine with:
xen-create-image -pygrub -size=50Gb -swap=9Gb -vcpus=2 -memory 6Gb -dist=squeeze -dhcp -passwd -dir=/var/virt-machines -hostname=xen-squeeze-base
Fill in some of the variables at the top of GenXen.pl before running, then simply: ./GenXen.pl
The interactive part will ask for hostname, memory size, vCPUs and IP address, then generate a unique Xen MAC address, and write these all to a Xen config file which will be saved in /etc/xen/
It'll copy your disk.img and swap.img to the destination dir, mount the disk.img and create appropriate files for /etc/hostname, /etc/hosts and /etc/network/interfaces.
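The generated config is a standard Xen domU cfg file; roughly this shape, with all names, paths and values here being purely illustrative:

# /etc/xen/web01.cfg
name       = 'web01'
memory     = '2048'
vcpus      = '2'
bootloader = '/usr/bin/pygrub'
vif        = [ 'ip=10.0.0.21,mac=00:16:3e:a1:b2:c3' ]
disk       = [
               'file:/var/virt-machines/web01/disk.img,xvda2,w',
               'file:/var/virt-machines/web01/swap.img,xvda1,w',
             ]
root       = '/dev/xvda2 ro'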
I've been reading through ThoughtWorks' latest Technology Radar, which led me to look up Vagrant, one of the tools they list as worth exploring.
Vagrant is a framework for building and deploying Virtual Machine environments, using Oracle VirtualBox for the actual VMs and utilizing Chef for configuration management.
So here's what I experienced during the setup of Vagrant on my MacBook. I decided to start with a simple Chef install to familiarise myself with Chef itself and its own requirements (CouchDB, RabbitMQ and Solr), mostly by following these instructions.
-CHEF INSTALL-
sudo gem install chef
sudo gem install ohai
Chef uses CouchDB as its datastore, so we need to install it using the instructions here
brew install couchdb
The instructions I list above also contain steps to install a couchdb user and set it up as a daemon. They didn't work for me, and after 30 minutes of troubleshooting I gave up and went with the simpler option of running it under my own user. In production this will be running on a Linux server rather than my MacBook, so it seemed fair enough.
At this point, the above instructions ask you to start the indexer; however, they haven't been updated to reflect changes in Chef 0.10.2, in which chef-solr-indexer has been replaced with chef-expander.
So, instead of running: sudo chef-solr-indexer
you instead need to run: sudo chef-expander -n1 -d
Next I tried: sudo chef-solr
which ran into “`configure_chef': uninitialized constant Chef::Application::SocketError (NameError)”
I had to create a /etc/chef/solr.rb file and simply add this to the file:
require 'socket'
Startup now worked. If you want to daemonize it, use:
sudo chef-solr -d
Next start Chef Server with: sudo chef-server -N -e production -d
and finally: sudo chef-server-webui -p 4040 -e production
Now you should be up and running. Next you need to configure the command-line client, Knife, following the instructions here, under the section 'Configure the Command Line Client'.
(follow the instructions at the link - you only need to change the location of the two pem files you copied above)
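Your knife.rb ends up looking roughly like this; the node name and paths are examples only and depend on where you put your own pem files:

# ~/.chef/knife.rb
log_level               :info
node_name               'thor'
client_key              '/Users/thor/.chef/thor.pem'
validation_client_name  'chef-validator'
validation_key          '/Users/thor/.chef/validation.pem'
chef_server_url         'http://localhost:4000'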
Ok, so hopefully you're at the same place as me with this all working at least as far as being able to log into CouchDB, and verifying that Chef/Knife are both working.
- VAGRANT SETUP -
Now, onward with the original task of Vagrant setup… Have a read over the getting started guide.
I tried to load this up with vagrant up, however I received:
[default] [Fri, 05 Aug 2011 09:27:07 -0700] INFO: *** Chef 0.10.2 *** : stdout
[default] [Fri, 05 Aug 2011 09:27:07 -0700] INFO: Client key /etc/chef/client.pem is not present - registering : stdout
[default] [Fri, 05 Aug 2011 09:27:28 -0700] FATAL: Stacktrace dumped to /srv/chef/file_store/chef-stacktrace.out : stdout
[default] [Fri, 05 Aug 2011 09:27:28 -0700] FATAL: SocketError: Error connecting to http://SBD-IODA.local:4000/clients - getaddrinfo: Name or service not known
I figured this was a networking issue, and yeah, within the VM it has no idea of my MacBook's local hostname, which I fixed by editing its /etc/hosts file and manually adding it.
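Inside the VM's /etc/hosts, that just means a line along these lines (with VirtualBox's default NAT networking the host is usually reachable from the guest at 10.0.2.2, though the exact address depends on your setup):

10.0.2.2    SBD-IODA.local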
Upon issuing a vagrant reload, boom! You can see the Vagrant host following the recipes and loading up a bunch of things, including apache2.
However at this point, you can still only access its webserver from within the VM, so in order to access it from our own desktop browser, we can add the following line to the Vagrantfile:
config.vm.forward_port("web", 80, 8080)
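For context, that line just sits inside the config block of the Vagrantfile; something like this (Vagrant 0.8-era syntax, and the box name here is only an example):

Vagrant::Config.run do |config|
  config.vm.box = "lucid32"                  # whichever base box you added
  config.vm.forward_port("web", 80, 8080)    # expose the VM's port 80 on localhost:8080
end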
After another reload, you should now be able to connect to localhost:8080 and access your new VM's apache host.
Using this setup in any sort of dev environment will still need a good deal more work, but for the moment, this should be enough to get you up and running and able to explore both Vagrant and Chef.