Build server config is generally a very manual process, which makes versioning, collaborating and refactoring difficult.
Logsearch is the open source project I lead as part of my day job at City Index Ltd. Based on the Elasticsearch ELK stack and packaged as a BOSH release, it builds you a log processing cluster tailored to making sense of your IT environment and the apps that run on it.
I gave a talk showing how Logsearch can be used to analyse the logs of a Cloud Foundry cluster at the London PaaS Users Group last week that was well received.
Below is a screencast of that Logsearch for Cloud Foundry presentation (youtube)
The new breed of PaaS systems is converging on a common deployment model:
- A CLI tool uploads your code / executables to the PaaS.
- The PaaS launches a clean “LXC based” staging container and invokes a buildpack on your code / executable.
- The buildpack’s bin/detect checks whether it knows how to work with your code type (e.g. it looks for a Gemfile, or a .java class). If not, staging fails.
- The buildpack’s bin/compile builds your code, combining it with any runtime dependencies – e.g. a specific JVM or Mono runtime – and any related libraries – e.g. whatever is specified in your .nuget package or Gemfile. This process basically results in an “app” folder which contains all the binaries your app requires to run.
- The buildpack’s bin/release specifies the “startup command”, and any ENV vars that should be set.
- All of this output is then zipped up into your app.tgz and added to the PaaS blobstore. That done, the staging container is deleted.
- The PaaS then fires up as many runtime “LXC based” containers as you have specified, unzips your app.tgz into your $HOME folder, loads the ENV vars and runs the start command specified by bin/release.
- The PaaS monitors your start command – if it exits, the PaaS automatically gives you a fresh runtime container.
- The PaaS also monitors the host machines running the containers – if any of those fail, it restarts the affected runtime containers on a new host machine. This happens more frequently than you might think (typically nightly), because it’s also how the PaaS keeps the hosts’ operating systems updated.
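The detect/compile/release hooks above can be sketched as shell functions. This is a toy, hypothetical Ruby-ish buildpack – the file layout and YAML output follow the common buildpack convention, but the detection logic and commands are invented:

```shell
#!/usr/bin/env bash
# A toy buildpack, with the three hooks inlined as functions.
# A real buildpack ships them as separate executables:
# bin/detect, bin/compile and bin/release.

detect() {     # $1 = build dir; succeed only if we recognise the app type
  [ -f "$1/Gemfile" ] && echo "Ruby"
}

compile() {    # $1 = build dir, $2 = cache dir; assemble the "app" folder
  mkdir -p "$1/vendor/ruby" "$2"
  echo "-----> Vendoring runtime into $1/vendor/ruby"
}

release() {    # emit the startup command (and any default ENV vars)
  cat <<'YAML'
---
default_process_types:
  web: bundle exec rackup -p $PORT
YAML
}

# Walk an imaginary app through the lifecycle:
BUILD_DIR=$(mktemp -d)
touch "$BUILD_DIR/Gemfile"
detect "$BUILD_DIR" && compile "$BUILD_DIR" "$(mktemp -d)" && release
```

If bin/detect fails, the PaaS moves on to the next buildpack (or fails staging); otherwise the release YAML is what tells the runtime containers which command to start.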
Each PaaS (Heroku, Cloud Foundry, flynn.io) has custom components that orchestrate everything, but there is a healthy open source community creating buildpacks for the languages and runtimes near and dear to their hearts; these typically work (with minor modifications) on any of the PaaSes.
Yours truly has now written 4 buildpacks for Cloud Foundry:
- https://github.com/mrdavidlaing/stackato-buildpack-wordpress <- WordPress running on Facebook’s HipHop PHP runtime
- https://github.com/cloudfoundry-community/nginx-buildpack <- Static HTML sites running on Nginx
- https://github.com/cloudfoundry-community/.net-buildpack <- .NET apps running on Mono
- https://github.com/mrdavidlaing/java-buildpack-with-Procfile-container <- Extension of the CF Java buildpack
One of the major pain points in the process is debugging the staging and runtime containers because:
- You can’t SSH into them to poke around and explore
- In the event of a catastrophic failure, the container (and its logs) gets deleted before you can extract any of the files.
So, my holiday project was to try and build something to make debugging the deployment process easier.
The result is https://github.com/cloudfoundry-community/container-info-buildpack – a buildpack that exposes information about the staging and runtime containers via a web-app. See the README.md for details on how to use it.
This little experiment has been received with enthusiasm by the CF dev community; so I think I’ve identified a common pain point.
In its development I learnt about three useful things:
- pstree -a <- lists the running processes as a tree, with their command-line arguments.
- forego <- a speedy and memory efficient Go implementation of foreman
- openresty <- a collection of nginx modules that turn nginx into a simple (and very efficient) app server, with the ability to script logic using Lua.
I’m currently experimenting with being able to wrap this “info” buildpack around another buildpack, so you can
- Gather additional debugging info when deploying an app – say, a Mono app based on https://github.com/cloudfoundry-community/.net-buildpack
- Re-run the staging and runtime processes without having to redeploy your app.
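As a hedged sketch of what that wrapping might look like – the wrap_compile helper, the inner-buildpack argument and the .staging-info file are all invented for illustration:

```shell
# Delegate staging to an "inner" buildpack, then layer extra debug info on top.
wrap_compile() {
  local build_dir=$1 cache_dir=$2 inner_buildpack=$3
  "$inner_buildpack/bin/compile" "$build_dir" "$cache_dir"   # the real staging step
  # Record anything useful for later inspection alongside the staged app:
  env | sort > "$build_dir/.staging-info"
}
```

Re-running staging without a redeploy would then amount to invoking wrap_compile again against the app folder already in the cache.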
Developing like this enables:
- Consistent environments for development and production deployment
- A test site for every Pull Request
- Automation of the setup of a WordPress development environment
- And as a nice bonus serving WordPress via the HipHop-PHP compiler gives a 5x performance improvement
If you’re interested in finding out more, please join the mailing list
One of the things I love about Git is that your .gitignore file travels with the repo, so ignore rules remain consistent no matter which machine you are working on.
In the same vein, adding a .gitattributes file to your repo allows you to ensure consistent Git settings across machines. This enables the following subtle, but very useful features.
- Force line ending normalization inside your repo to LF
* text=auto causes Git to autodetect text files and normalise their line endings to LF when they are checked into your repository. This means that simple diff tools (I’m looking at you, GitHub) that consider every line to have changed when someone’s editor changes the endings won’t get confused.
Importantly, this doesn’t affect the line endings in your working copy. By default Git will convert these to your platform’s default when checking code out of your repo. You can override this using the core.eol setting or a per-path eol attribute.
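A minimal .gitattributes illustrating this (the patterns are examples, not prescriptions):

```
# Normalise all detected text files to LF in the repository
* text=auto

# Always keep shell scripts LF, even in the working copy
*.sh text eol=lf

# Never touch binaries
*.png binary
```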
- Language specific diffs
When Git shows you diff information it gives you some context as to where in the code the diff lives. A *.cs diff=csharp setting tells Git to be a little smarter about tailoring this for a specific language. Notice how in the example below Git is telling us the method name where the change occurred for the .cs file, compared to the default of the first non-comment line in the file.
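Opting into the built-in C# hunk-header pattern takes a single line in .gitattributes (Git ships similar built-in diff drivers for java, python, ruby and others):

```
*.cs diff=csharp
```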
- Normalize tabs vs spaces
A filter attribute instructs Git to run files through an external command when moving them to / from the repo. One use of this functionality would be to normalise tabs to spaces (or vice versa).
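A sketch of such a filter using the standard expand/unexpand utilities; the filter name “tabspace” is arbitrary, and the config is written to a local file here purely for illustration (normally you’d use `git config --global`):

```shell
# "clean" runs when content enters the repo; "smudge" when it is checked out.
git config --file demo.gitconfig filter.tabspace.clean  'expand -t 4'
git config --file demo.gitconfig filter.tabspace.smudge 'unexpand -t 4'

# .gitattributes would then contain, e.g.:
#   *.c filter=tabspace

# The clean step in action – leading tabs become 4 spaces:
printf '\tif (x) {\n' | expand -t 4
```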
- Encrypting sensitive information
It is convenient to store config files in your git repo, but for public repo’s you don’t really want to expose things like your production db credentials. Using Git filters you could pass these config files through an encryption/decryption step when checking in/out of the repository. On machines that have the encryption keys your config files will be placed in plaintext in your working copy; everywhere else they will remain encrypted.
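A hedged sketch of such an encrypting filter using OpenSSL symmetric encryption – the filter name, key path and file pattern are all invented, and the config is written to a local file for illustration:

```shell
# Encrypt on the way into the repo, decrypt on checkout.
git config --file demo.gitconfig filter.crypt.clean \
  'openssl enc -aes-256-cbc -pbkdf2 -pass file:.git/secret.key'
git config --file demo.gitconfig filter.crypt.smudge \
  'openssl enc -d -aes-256-cbc -pbkdf2 -pass file:.git/secret.key'

# .gitattributes would then contain, e.g.:
#   config/production.ini filter=crypt
```

One caveat: openssl’s random salt makes the clean output non-deterministic, which confuses Git’s change detection; purpose-built tools like git-crypt solve this with deterministic encryption.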
- Useful defaults
If you use the GitHub Git clients, they add useful default settings. Further, the github/gitignore and Danimoth/gitattributes projects contain some useful defaults.
The more I use Git, the more I realise what a powerful tool it is. And I haven’t even touched on how you can use Git hooks for advanced Git Ninja moves…
Having tripped myself up on multiple occasions setting this up, I’m recording these config steps here for future-me.
Scenario: You have a PHP site running on a remote [Ubuntu 12.04] server, and want to connect your local IDE [Netbeans] to the Xdebug running on that server over a SSH tunnel.
- apt-get install php5-xdebug
- vi /etc/php5/apache2/conf.d/xdebug.ini
```ini
zend_extension=/usr/lib/php5/20090626/xdebug.so
xdebug.remote_enable=On
xdebug.remote_host=127.0.0.1
xdebug.remote_port=9000
xdebug.remote_handler=dbgp
```
- restart apache2
- Create remote->local SSH tunnel ssh -R 9000:127.0.0.1:9000 firstname.lastname@example.org
- Launch Netbeans debugger
The key is that your Netbeans IDE acts as the server in this scenario, listening for incoming connections to port 9000 from the remote server’s XDebug. Thus the tunnel must be from the remote port to your local port, not the other way around.
Some helpful debugging techniques:
Start ssh with -vv for debugging output
netstat -an | grep 9000
should show something like:
```
tcp   0  0 127.0.0.1:9000   0.0.0.0:*        LISTEN
tcp   0  0 127.0.0.1:9000   127.0.0.1:59083  ESTABLISHED
tcp   0  0 127.0.0.1:59083  127.0.0.1:9000   ESTABLISHED
tcp6  0  0 ::1:9000         :::*             LISTEN
```
The AMEEConnect API gives access to a vast amount of climate-related data. It also exposes standardised methodologies for performing calculations based on that data.
As part of the London Green Hackathon I created the AMEE-in-Excel addin to tightly integrate this data and calculations into Excel.
So, if Excel is your preferred way to work with climate data, then this should be in your toolkit.
Hurrah! AMEE in Excel won the behaviour change prize:
We believe over 80% of the sustainability field currently use spreadsheets. As a process, this is broken, not scalable and inaccurate. AMEE in Excel Integrates spreadsheets with web-services, to create a behaviour change that could address this issue and bring more credibility to the market.
During June 2011 I presented a session at the SPA2011 conference in London, UK.
The code samples can be found at:
- F# – https://github.com/mrdavidlaing/functional-fsharp
Judging by the feedback I received, the session went very well. People seemed to like the hands-on format of the session; and just being left alone for a while to learn something at their own pace.
I feel uncomfortable when I see large switch statements. I appreciate how they break the Open Closed Principle. I have enough experience to know that they seem to attract extra conditions & additional logic during maintenance, and quickly become bug hotspots.
A refactoring I use frequently to deal with this is Replace Conditional with Polymorphism; but for simple switches, it’s always seemed like a rather large hammer.
Take the following simple example that performs slightly different processing logic based on the credit card type:
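The original sample was embedded as a gist and isn’t inline here; a minimal Python sketch of the same shape (the card types and fee rates are invented):

```python
def calculate_fee(card_type: str, amount: float) -> float:
    # The switch statement under discussion: each branch applies a
    # slightly different processing rule per card type.
    if card_type == "Visa":
        return amount * 0.02
    elif card_type == "MasterCard":
        return amount * 0.025
    elif card_type == "Amex":
        return amount * 0.03
    else:
        raise ValueError(f"Unknown card type: {card_type}")
```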
It’s highly likely that the number of credit card types will increase; and that the complexity of processing logic for each will also increase over time. The traditional application of the Replace Conditional with Polymorphism refactoring gives the following:
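Again the original C# gist isn’t inline; in Python, that class-per-card-type version might look like this (same invented rates):

```python
from abc import ABC, abstractmethod

class FeeCalculator(ABC):
    @abstractmethod
    def calculate(self, amount: float) -> float: ...

class VisaFeeCalculator(FeeCalculator):
    def calculate(self, amount): return amount * 0.02

class MasterCardFeeCalculator(FeeCalculator):
    def calculate(self, amount): return amount * 0.025

class AmexFeeCalculator(FeeCalculator):
    def calculate(self, amount): return amount * 0.03

# One near-empty class per card type, plus a lookup to replace the switch:
CALCULATORS = {
    "Visa": VisaFeeCalculator(),
    "MasterCard": MasterCardFeeCalculator(),
    "Amex": AmexFeeCalculator(),
}

def calculate_fee(card_type: str, amount: float) -> float:
    return CALCULATORS[card_type].calculate(amount)
```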
This explosion of classes containing almost zero logic has always bothered me as quite a lot of boilerplate overhead for a relatively small reduction in complexity.
Consider however, the functional approach to the same refactoring:
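Sketching the functional version in Python (the original sample was a gist): a dictionary of plain functions replaces the class hierarchy.

```python
# Each card type maps to a plain function; the function signature is the
# "interface", and the dict lookup is the dispatch.
FEE_RULES = {
    "Visa": lambda amount: amount * 0.02,
    "MasterCard": lambda amount: amount * 0.025,
    "Amex": lambda amount: amount * 0.03,
}

def calculate_fee(card_type: str, amount: float) -> float:
    return FEE_RULES[card_type](amount)
```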
Here we have obtained the same simplification of the switch statement, but avoided the explosion of simple classes. Whilst strictly speaking we are still violating the Open Closed Principle, we do have a collection of simple methods that are easy to comprehend and test. It’s worth noting that when our logic becomes very complex, converting to the OO Strategy pattern becomes a more compelling option. Consider the case when we include a collection of validation logic for each credit card:
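A hedged sketch of where this starts to creak – attach a list of validators per card type (the validation rules here are invented), and the single file begins to carry a lot of logic:

```python
CARD_RULES = {
    "Visa": {
        "fee": lambda amount: amount * 0.02,
        "validators": [
            lambda card: card["number"].startswith("4"),
            lambda card: len(card["number"]) == 16,
        ],
    },
    "Amex": {
        "fee": lambda amount: amount * 0.03,
        "validators": [
            lambda card: card["number"].startswith("3"),
            lambda card: len(card["number"]) == 15,
        ],
    },
}

def process(card: dict, amount: float) -> float:
    rules = CARD_RULES[card["type"]]
    # Every validator for this card type must pass before we charge the fee.
    if not all(validate(card) for validate in rules["validators"]):
        raise ValueError("Card failed validation")
    return rules["fee"](amount)
```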
In this case the whole file starts to feel too complex to me; and having the logic partitioned into separate strategy classes / files seems more maintainable to me.
To conclude then: languages that treat functions as first-class constructs give us the flexibility to use them in a “polymorphic” way, where our “interface” is the function signature.
And for some problems, like refactoring a simple switch statement, I feel this gives us a more elegant solution.