Build server configuration is generally a very manual process, which makes versioning, collaboration and refactoring difficult.
One of the things I love about Git is that your
.gitignore file travels with the repo, so ignore rules remain consistent no matter which machine you are working on.
In the same vein, adding a
.gitattributes file to your repo allows you to ensure consistent Git settings across machines. This enables the following subtle, but very useful features.
- Force line ending normalization inside your repo to LF
`* text=auto` causes Git to autodetect text files and normalise their line endings to LF when they are checked into your repository. This means that simple diff tools (I’m looking at you, GitHub) won’t get confused and consider every line to have changed just because someone’s editor changed the line endings.
Importantly, this doesn’t affect the line endings in your working copy. By default Git will convert these to your platform’s default when checking code out of your repo. You can override this using the `eol` attribute or the `core.eol` setting.
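As an illustration, a minimal .gitattributes using these attributes might look like this (the *.bat override is just an example):

```
# Auto-detect text files and normalise their line endings to LF in the repo
* text=auto

# Keep CRLF in the working copy for Windows batch files
*.bat eol=crlf
```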
- Language specific diffs
When Git shows you diff information it gives you some context as to where in the code the change lives. The `*.cs diff=csharp` setting tells Git to be a little smarter about tailoring this for a specific language. Notice how in the example below Git is telling us the method name where the change occurred for the .cs file, compared to the default of the first non-comment line in the file.
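The corresponding .gitattributes entry is a one-liner (Git ships built-in diff drivers for several languages, csharp among them):

```
*.cs diff=csharp
```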
- Normalize tabs vs spaces
The `filter` attribute instructs Git to run files through an external command when checking them in to / out of the repo. One use of this functionality would be to normalise tabs to spaces (or vice versa).
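A sketch of such a filter, assuming the GNU expand/unexpand utilities are available (the filter name tabspace is arbitrary). The clean command runs on check-in, smudge on check-out:

```
# .gitattributes
*.txt filter=tabspace

# .git/config (or set via `git config filter.tabspace.*`)
[filter "tabspace"]
	clean  = expand --tabs=4
	smudge = unexpand --first-only --tabs=4
```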
- Encrypting sensitive information
It is convenient to store config files in your git repo, but for public repos you don’t really want to expose things like your production db credentials. Using Git filters you could pass these config files through an encryption/decryption step when checking in/out of the repository. On machines that have the encryption keys your config files will be placed in plaintext in your working copy; everywhere else they will remain encrypted.
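A naive sketch using openssl (the key path is hypothetical; note that because the encryption is not deterministic, Git will see the file as modified on every pass – real setups tend to use a purpose-built tool like git-crypt):

```
# .gitattributes
secrets.config filter=encrypt

# .git/config – only on machines that hold the key
[filter "encrypt"]
	clean  = openssl enc -aes-256-cbc -pbkdf2 -pass file:/path/to/keyfile
	smudge = openssl enc -d -aes-256-cbc -pbkdf2 -pass file:/path/to/keyfile
```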
- Useful defaults
If you use the GitHub Git clients, they add useful default settings. Further, the github/gitignore and Danimoth/gitattributes projects contain some useful defaults.
The more I use Git, the more I realise what a powerful tool it is. And I haven’t even touched on how you can use Git hooks for advanced Git Ninja moves…
During June 2011 I presented a session at the SPA2011 conference in London, UK.
The code samples can be found at:
- F# – https://github.com/mrdavidlaing/functional-fsharp
Judging by the feedback I received, the session went very well. People seemed to like the hands-on format of the session, and just being left alone for a while to learn something at their own pace.
I feel uncomfortable when I see large switch statements. I appreciate how they break the Open Closed Principle. I have enough experience to know that they seem to attract extra conditions & additional logic during maintenance, and quickly become bug hotspots.
A refactoring I use frequently to deal with this is Replace Conditional with Polymorphism; but for simple switches, it’s always seemed like a rather large hammer.
Take the following simple example that performs slightly different processing logic based on the credit card type:
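The original C# sample isn’t preserved in this excerpt; as an illustrative stand-in, here is the shape of such a switch sketched in Python (card types and fee rates are made up):

```python
def processing_fee(card_type, amount_cents):
    """Fee varies by card type (rates in basis points, amounts in cents)."""
    if card_type == "VISA":
        return amount_cents * 250 // 10_000      # 2.5%
    elif card_type == "MASTERCARD":
        return amount_cents * 300 // 10_000      # 3.0%
    elif card_type == "AMEX":
        return amount_cents * 350 // 10_000      # 3.5%
    else:
        raise ValueError(f"Unknown card type: {card_type}")
```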
It’s highly likely that the number of credit card types will increase, and that the complexity of the processing logic for each will also grow over time. The traditional application of the Replace Conditional with Polymorphism refactoring gives the following:
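Again the original sample is missing; a Python stand-in for the class-per-card-type version might look like this (card types and rates are made up):

```python
from abc import ABC, abstractmethod

class FeeStrategy(ABC):
    @abstractmethod
    def fee(self, amount_cents): ...

class VisaFee(FeeStrategy):
    def fee(self, amount_cents):
        return amount_cents * 250 // 10_000

class MastercardFee(FeeStrategy):
    def fee(self, amount_cents):
        return amount_cents * 300 // 10_000

class AmexFee(FeeStrategy):
    def fee(self, amount_cents):
        return amount_cents * 350 // 10_000

# The switch is reduced to a lookup of strategy objects
STRATEGIES = {"VISA": VisaFee(), "MASTERCARD": MastercardFee(), "AMEX": AmexFee()}

def processing_fee(card_type, amount_cents):
    return STRATEGIES[card_type].fee(amount_cents)
```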
This explosion of classes containing almost zero logic has always bothered me as quite a lot of boilerplate overhead for a relatively small reduction in complexity.
Consider, however, the functional approach to the same refactoring:
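A Python sketch of the functional version (card types and rates are made up); each “strategy” is just a function, and a dict plays the role of dispatch:

```python
# Rates in basis points; amounts in cents
FEE_CALCULATORS = {
    "VISA":       lambda cents: cents * 250 // 10_000,
    "MASTERCARD": lambda cents: cents * 300 // 10_000,
    "AMEX":       lambda cents: cents * 350 // 10_000,
}

def processing_fee(card_type, amount_cents):
    return FEE_CALCULATORS[card_type](amount_cents)
```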
Here we have obtained the same simplification of the switch statement, but avoided the explosion of simple classes. Whilst strictly speaking we are still violating the Open Closed Principle, we do have a collection of simple methods that are easy to comprehend and test. It’s worth noting that when our logic becomes very complex, converting to the OO Strategy pattern becomes a more compelling option. Consider the case where we include a collection of validation logic for each credit card:
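For illustration, a Python sketch of per-card validation rules (the rules shown are simplified examples, not a real validation spec):

```python
# Each card type maps to a list of predicate functions
VALIDATORS = {
    "VISA": [
        lambda n: n.startswith("4"),
        lambda n: len(n) in (13, 16),
    ],
    "AMEX": [
        lambda n: n.startswith(("34", "37")),
        lambda n: len(n) == 15,
    ],
}

def is_valid(card_type, number):
    # A number is valid only if every predicate for its card type passes
    return all(check(number) for check in VALIDATORS[card_type])
```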
In this case the whole file starts to feel too complex, and having the logic partitioned into separate strategy classes / files seems more maintainable to me.
To conclude then: languages that treat functions as first-class constructs give us the flexibility to use them in a “polymorphic” way, where our “interface” is the function signature.
And for some problems, like refactoring a simple switch statement, I feel this gives us a more elegant solution.
This is the 2nd part of my series on everyday functional programming.
Suppose you have a collection of items and need to grab just the subset that matches certain criteria. Programming C# in an imperative style, you could use a for or foreach loop as follows:
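The original C# loop isn’t preserved in this excerpt; as a stand-in, here is the same imperative shape sketched in Python (the order data is made up):

```python
# A hypothetical list of order totals
orders = [120, 45, 300, 80, 210]

# Imperative style: build the result list by hand
large_orders = []
for order in orders:
    if order > 100:
        large_orders.append(order)
```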
Functional programming recognises this common scenario as a higher-order function known as a filter: you create a new list containing every element for which a predicate function evaluates to true.
In C#, filter is implemented as the LINQ Where(Func&lt;T, bool&gt;) extension method:
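Python’s built-in filter plays the same role; a sketch with made-up data:

```python
orders = [120, 45, 300, 80, 210]

# filter takes a predicate and a sequence, analogous to LINQ's Where
large_orders = list(filter(lambda order: order > 100, orders))
```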
Apart from the obvious reduction in the number of lines, notice how much clearer the intent of the filter is, and how many opportunities for error we have eliminated.
A related higher-order function is map, which builds a new list by applying a transformation function to each element. In C#, map is implemented by the LINQ Select(Func&lt;T, TResult&gt;) extension method:
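Python’s built-in map is the analogue; a sketch with made-up data (adding 20% VAT to each order, using integer arithmetic):

```python
orders = [120, 45, 300, 80, 210]

# map applies a function to every element, analogous to LINQ's Select
with_vat = list(map(lambda order: order * 120 // 100, orders))
```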
 – http://railspikes.com/2008/7/29/functional-loops-in-ruby-each-map-inject-select-and-for
 – http://msdn.microsoft.com/en-us/library/system.linq.enumerable.aspx
 – http://en.wikipedia.org/wiki/Filter_(higher-order_function)
 – http://en.wikipedia.org/wiki/Map_(higher-order_function)
This is the first of a series of blog posts where I will be exploring how functional programming techniques are useful in the daily life of a working “enterprise software” developer.
If you, like me, began programming in the 1990s, then you will probably have started in a procedural programming style with simple task-oriented scripts, and then progressed to an object-oriented style for its better fit with event-driven GUI applications. As the software craftsmanship movement has grown over the past few years, you will have honed your S.O.L.I.D. OO skills, and focused on making your code maintainable, testable and tested.
I won’t be trying to explain the concepts behind functional programming – others have done an excellent job of that already, so I’ll just link to them. Rather, I’ll be curating practical examples where a functional style can be applied to everyday programming problems.
- Higher order functions – simplifying loops
- Implementing the strategy pattern without an explosion of classes
- Side effect free functions – code that is easy to test & reuse
Occasionally one comes across an idea that is just brilliant.
How often have you been writing a bit of code, and got to a point where you think “gee, if the program ever gets into this state, then something is really wrong”. Throwing an exception seems appropriate, but what kind of exception to throw?
Enter the following sage advice from Steve Freeman and Nat Pryce in their great book “Growing Object-Oriented Software, Guided by Tests”:
A DefectException(). What a simple but brilliant idea!
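In the same spirit, a minimal sketch of the idea (the names here are mine, not from the book):

```python
class Defect(Exception):
    """Raised when the program reaches a state the programmer believed impossible."""

def dispatch(status):
    if status == "open":
        return "process"
    elif status == "closed":
        return "archive"
    else:
        # Every known status is handled above; reaching here is a programming error,
        # not a routine runtime condition
        raise Defect(f"unhandled status: {status!r}")
```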
I’ve been enjoying learning Ruby syntax via RubyKoans – little tests that teach you syntax and convention as you make them pass.
It’s a bit rough, so please fork and contribute back improvements.
His point seems to be that Fitnesse is a good tool for documenting specifications, and continuously automating their validation. When your Fitnesse tests become like “scripts” (which is how developers are trained to see the world), then Fitnesse is a pretty crummy test execution environment (just use a unit test runner!)
Interesting that alternate tools – http://www.concordion.org, http://wiki.github.com/aslakhellesoy/cucumber/, http://www.specflow.org/ have arisen that effectively try to limit the power of the “test description language” to prevent the acceptance tests becoming script like.
Interesting food for thought – I know that many of my Fitnesse tests exhibit this code smell.
The other day I was asked why pair programming doesn’t reduce productivity, and it’s taken me a few days to come up with this succinct answer –
because we’re building a system to release software changes rapidly over a long period of time, not typing more lines of code to reach some predefined goal post
The purpose of a software team is to deliver a working software solution that solves customer’s problems over a long period of time.
This would be easy if:
- Customers knew what they wanted
- Developers knew how to deliver the features
- The world remained the same between the time the software was designed and the day it was released.
The trouble is that none of these are true. We have to guess at the right solution, get some real world experience with it, and then optimise when we know more about the problem domain (aka, after we have delivered the feature for the first time).
The way to do this better is to reduce the length of the feedback cycle (think 5 days, not 6 months), and grow a system that can rapidly and repeatedly deliver changes to the software over the life (years) of that software.
Pair programming contributes directly to growing this system by:
- Facilitating communication about the architecture & design of the system, and ensuring everyone actually understands it
- Reducing brittleness & bottlenecks caused by one person “owning” a core module
- Improving consistency and adherence to common standards
- Catching bugs at the cheapest time to fix them
Unintuitively, it also tends to ensure that developers actually spend longer working on the software by:
- Reducing “wander time”. You are less likely to get sidetracked into email, facebook or some interesting blog article when pairing.
- Reducing “stuck time”. Two perspectives on a problem give you twice as many solutions to try out.
These articles go into more depth:
To conclude: pair programming would be unproductive if developers had the perfect solution in their heads, and programming was just the task of typing it into the computer to release a single perfect version of the software.
But in the real world we’re in the business of creating a system that can rapidly deliver changes to the software as it narrows in on, and adapts to, the best solution to the problem at hand. And at this, pair programming excels.