On Life and Love

Deployment Automation with Fabric: Bee’s Knees

One immensely valuable thing I learned at Skookum was the value of automated deployments. I worked with a gent who took the time to work up Capistrano scripts for each staging and production environment of the whale of a project I worked with him on.

I appreciated it during development, but I didn’t fully appreciate it until we were deploying single tweaks out to production on Amazon EC2 in rapid cycles. I haven’t worked with EC2 since then (second half of 2009), but let me tell you, deployments by hand were for the birds.

With his scripts though: run the script, enter your SSH or git password(s) a few times, and you have an automated deployment that runs for each person on the team, despite all our separate setups (Mac, Linux, cygwin, etc.).

It sounds trivial and obvious, but how many deployments did I do by hand, or try (poorly) to document for someone else, or forget how to do before that really sunk in?

The bit.ly API was giving me some problems–it was choking on shrinking some URLs without explanation–so I tossed the ghost character sheet (still unattractive, yes–design work is next) on top of Django with a custom-written URL shortener (nothing to it, really).
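The shortener itself isn’t in this post, but for the curious, the usual trick is to base-62 encode an auto-incrementing row id. A minimal sketch (the function names are mine, not from the actual app):

```python
import string

# 62 characters: 0-9, a-z, A-Z
ALPHABET = string.digits + string.ascii_letters

def encode(n):
    """Turn a database row id into a short base-62 slug."""
    if n == 0:
        return ALPHABET[0]
    chars = []
    while n:
        n, rem = divmod(n, 62)
        chars.append(ALPHABET[rem])
    return ''.join(reversed(chars))

def decode(slug):
    """Turn a slug back into the row id for lookup."""
    n = 0
    for ch in slug:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

Wire `decode()` into a Django view that looks up the row and redirects, and that’s the whole shortener.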

Deploying Django on Dreamhost using Passenger? Never ever again do I want to do that by hand. Not even for updates.


Post-platelet donation

I’m not kidding. I’d rather go to give platelets only to find out that it’s a 2-hour procedure (instead of the 10 minutes I was expecting), then have the first inbound needle slip and “irrigate” saline into my arm (rather than into the vein) and be stabbed in the back of my hand (how gross is that?!) instead, and then have the outbound needle hit the vein wall (despite my warnings of exactly that always happening!) so that they could spin the needle around until the flow was steady again.

Friday night was the Django deployment. Saturday was the platelet thing. At least I got to watch some of Robert De Niro as the creepiest mofo I’ve seen on screen recently.

Anyway, when I realized that my Django app was best separated out into two locations in my account and that I’d be faced with symlinking and directory hopping, I started scripting using Fabric. I’ve learned the hard way over the years that I have a lousy memory for the idiosyncrasies of server environments. Let my code document that for me.

After my basic test() and commit() methods, I started with:

def initial_deploy():

I’m serious. As I struggled with messageless 500 errors, getting Paste working to see the errors, figuring out what settings files needed to be moved and which linked, and permissions… All of it went into my fabfile.py.

Why not Capistrano? Because I have Python up and running comfortably, and not Ruby. Simple as that.

It meant that for getting this initial deployment script to work, I had to redo the whole setup a few times–blitz the “public” directory so that the script could relink it, etc. Totally worth the time, because I now have a script that not only documents the deployment process (helpful for other projects later), but in the event of a server problem, I can redeploy to a similar setup trivially.
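One thing that would have saved some of that blitzing (an aside from me, not part of the original script): `ln -sfn` replaces an existing symlink in place, so the relink step can be re-run without deleting anything first. Paths here are the same placeholders as below.

```shell
# -f: replace an existing link; -n: treat an existing link to a
# directory as the link itself instead of descending into it.
ln -sfn /home/username/django/projects/django_project/public public
```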

The actual deploy script (for code updates) is pleasantly easy: pull from the repository, touch a file to clean up Passenger.

The two crucial functions:

from fabric.api import cd, prompt, run, settings

def initial_deploy(branch='subproject'):
  code_dir = '/home/username/django/projects'
  domain_dir = '/home/username/domain.com'
  project = 'django_project'
  repo = 'https://mercurial.server.com/project'
  with cd(code_dir):
    run("hg clone %s %s" % (repo, project))
  with cd(code_dir + '/' + project):
    run("hg update %s" % branch)
    run('ln -s remote_local_settings.py local_settings.py')
    with settings(warn_only=True):
      # This is going to fail on account of MySQL being unable to handle
      # indexes on TEXT fields when there aren't limits on the index size.
      # So we run syncdb, let it fail, and then create that table ourselves.
      result = run('python manage.py syncdb')
    if result.failed:
      prompt("Syncdb failed. You need to create the BlahBlah table manually and add the index with (some code). Continue?")
  with cd(domain_dir):
    run("ln -s %s/%s/public" % (code_dir, project))
    run('ln -s %s/%s/remote_passenger_wsgi.py passenger_wsgi.py' % (code_dir, project))

def deploy():
  code_dir = '/home/username/django/projects/django_project'
  domain_dir = '/home/username/domain.com'
  prepare_deploy()  # Runs the tests and does a commit + push
  with cd(code_dir):
    run("hg pull")
    run("hg update")
  with cd(domain_dir):
    run('touch tmp/restart.txt')

Not perfect code, certainly (I’m still new to Python!), but serviceable for a first pass with Fabric. Unfortunately, I haven’t gotten an SSH key working yet (this is running on the basic Windows command line, not cygwin), so that’s one password request, and Mercurial/Kiln asks for both a username and a password on each of those pushes and pulls, which happens twice in a deploy.
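One possible workaround for the repeated Mercurial prompts (an assumption on my part, not something I’ve actually set up): an `[auth]` section in `~/.hgrc` can store credentials for a host, so push/pull stops asking. The host and username below are placeholders matching the made-up repo URL above, and “kiln” is just a label.

```ini
# ~/.hgrc -- hypothetical [auth] entry
[auth]
kiln.prefix = mercurial.server.com
kiln.username = your_username
kiln.password = your_password
```

The trade-off is a plain-text password on disk; getting the SSH key working is the nicer fix.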

Still better than by hand!

I’m going through and making similar scripts for other in-progress projects that use SSH in staging or deployment.