
AMEE in Excel

The AMEEConnect API gives access to a vast amount of climate-related data. It also exposes standardised methodologies for performing calculations based on that data.

As part of the London Green Hackathon I created the AMEE-in-Excel add-in to tightly integrate this data and these calculations into Excel.

So, if Excel is your preferred way to work with climate data, then this should be in your toolkit.

All code is open source and hosted at . Pull requests are welcome!

UPDATE
Hurrah! AMEE in Excel won the behaviour change prize:

We believe over 80% of the sustainability field currently use spreadsheets. As a process, this is broken, not scalable and inaccurate. AMEE in Excel integrates spreadsheets with web-services, to create a behaviour change that could address this issue and bring more credibility to the market.

So, if you want to collaborate on some Award Winning Software :), send in those pull requests!



Functions with side effects are just rude!

Today I fell into a trap when using a function that had a side effect – it unexpectedly changed an input parameter, causing a later statement to fail. Debugging took an age!

For example, consider the following function:

      string StringReplace(string haystack, string needle)

If this function is side-effect free, we can use it without fear like this:

        string menagerie = "cat,dog,bee,llama";
        string catFreeMenagerie = StringReplace(menagerie, "cat");
        string beeFreeMenagerie = StringReplace(menagerie, "bee");

        Assert.AreEqual(",dog,bee,llama", catFreeMenagerie);
        Assert.AreEqual("cat,dog,,llama", beeFreeMenagerie);

However, if StringReplace() had the side effect of also modifying the passed-in haystack, the second Assert would fail: the first call would have unexpectedly changed one of its arguments before the second call ran.
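To make the trap concrete: C# strings are immutable, so a string parameter can't itself be mutated, but a StringBuilder can be. Here is a minimal sketch (all names invented for illustration) of a "rude" variant and the surprise it causes:

    using System;
    using System.Text;

    static class RudeExample
    {
        // "Rude": returns the result AND mutates the caller's haystack.
        static string StringReplace(StringBuilder haystack, string needle)
        {
            haystack.Replace(needle, ""); // side effect on the argument!
            return haystack.ToString();
        }

        static void Main()
        {
            var menagerie = new StringBuilder("cat,dog,bee,llama");

            string catFree = StringReplace(menagerie, "cat"); // ",dog,bee,llama"
            string beeFree = StringReplace(menagerie, "bee"); // ",dog,,llama" - "cat" is gone too!

            // The second call saw the already-mutated haystack, so the
            // expected "cat,dog,,llama" never appears.
            Console.WriteLine(catFree);
            Console.WriteLine(beeFree);
        }
    }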

Evans has quite a bit to say about this in the DDD book, arguing that side-effect-free functions go a long way towards making a supple design.

Side-effect-free functions also make testing and refactoring easier, since there is less state to worry about.

Remember, a function that changes its parameters is rude, and should not be trusted!

PS: Eric the half a bee lyrics


Selenium gotcha – selenium.GetHtmlSource() returns processed HTML

Whilst writing some Selenium-based acceptance tests today, I bumped into a hair-pulling gotcha. Hopefully this post will spare you the same pain.

The test was to check whether some tracking-tag JavaScript was being inserted into the page correctly.

I assumed that I could get the page source as it was delivered to the browser by calling selenium.GetHtmlSource(), and then check it for the JavaScript string I was expecting.

Unfortunately, GetHtmlSource() is just a proxy for the browser’s DOM innerHTML property, and that returns the HTML after it has been preprocessed by the browser.

Turns out that preprocessing does a couple of funky things, including:

  • Changing line-endings (Firefox)
  • Changing capitalization (IE6)
  • Seemingly random removal / insertion of " and ' (IE6)

So, when I was expecting a string like this:


<!--
   var amPid = '206';
   var amPPid = '4803';
   if (document.location.protocol=='https:')
...[snip]...

IE6 was presenting me with:


<!--
   var amPid = '206'';
   var amPPid = '4803';
   if (document.location.protocol=='https:')
...[snip]...

A possible solution is to ignore case, whitespace and quotes when doing the comparison, with a helper method like this:

        /// <summary>
        /// Use this to compare strings to those returned from selenium.GetHtmlSource() for an Internet Explorer instance
        /// (IE6 seems to change case and the inclusion of quotes, especially for JavaScript).
        /// </summary>
        /// <param name="expected">The string expected to be contained within actual</param>
        /// <param name="actual">The string to search within</param>
        private static void AssertStringContainsIgnoreCaseWhiteSpaceAndQuotes(string expected, string actual)
        {
            string expectedClean = Regex.Replace(expected, @"\s", "").ToLower().Replace("\"", "").Replace("'", "");
            string actualClean = Regex.Replace(actual, @"\s", "").ToLower().Replace("\"", "").Replace("'", "");
            StringAssert.Contains(expectedClean, actualClean,
                                  string.Format("Expected string \n\n{0} \n\nis not contained within \n\n{1}", expected, actual));
        }
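Usage then looks something like this (assuming the fixture already holds a started ISelenium instance named selenium; the URL and expected string are illustrative):

    [Test]
    public void TrackingTagIsInsertedIntoThePage()
    {
        selenium.Open("/some-page");
        string expectedTrackingTag = "var amPid = '206';";

        AssertStringContainsIgnoreCaseWhiteSpaceAndQuotes(
            expectedTrackingTag, selenium.GetHtmlSource());
    }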

It was the line endings that really floored me, because they were automatically normalized/corrected by my test runner when displaying the error. Aaargh!


Announcing the TDD TestHelpers opensource project

Whenever I start working on a project, I invariably find myself writing a collection of TDD test helper methods. A quick survey of other TDDers reveals the same, and thus the birth of my latest opensource project, TestHelpers (http://code.google.com/p/testhelpers/).

The aim of the project is to centralise all those little test helper methods you end up creating into a useful assembly you can use to jumpstart your next project.  Things like:

  • Comparers
    • Generic object comparers
    • DataSet comparers
  • Test Data generators
    • Builder pattern
  • Automocking containers
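As a flavour of the builder-pattern item above, here is a minimal test-data builder sketch (the Customer type, its defaults and the fluent methods are invented for illustration – this is not necessarily the project's actual API):

    public class Customer
    {
        public string Name { get; set; }
        public string Email { get; set; }
    }

    // Fluent builder: sensible defaults, override only what the test cares about.
    public class CustomerBuilder
    {
        private string _name = "Default Name";
        private string _email = "default@example.com";

        public CustomerBuilder WithName(string name) { _name = name; return this; }
        public CustomerBuilder WithEmail(string email) { _email = email; return this; }

        public Customer Build()
        {
            return new Customer { Name = _name, Email = _email };
        }
    }

    // Usage: var customer = new CustomerBuilder().WithName("Alice").Build();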

For example, I’ve just added an “AssertValues” functor, which helps you check whether the values of two object instances are the same.

One area where I keep using asserts like this is integration tests, where I want to check that the objects I’m persisting to the database via my ORM actually end up there in non-mangled form. In this case, I new up entityA, persist it, reload it into entityB, and then need to check that all the values in entityB are the same as those in entityA.

A standard Assert.AreEqual will fail, because entityA and entityB are different instances. But my helper method AssertValues.AreEqual will pass, because it compares the (serialized) string values of entityA and entityB.
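A minimal sketch of the idea (assuming XML serialization is used to produce the comparable string values – not necessarily the project's exact implementation):

    using System.IO;
    using System.Xml.Serialization;
    using NUnit.Framework;

    public static class AssertValues
    {
        // Passes when two instances serialize to the same string,
        // even though they are different objects.
        public static void AreEqual<T>(T expected, T actual)
        {
            Assert.AreEqual(Serialize(expected), Serialize(actual));
        }

        private static string Serialize<T>(T instance)
        {
            var serializer = new XmlSerializer(typeof(T));
            using (var writer = new StringWriter())
            {
                serializer.Serialize(writer, instance);
                return writer.ToString();
            }
        }
    }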

Here is another, simpler example to illustrate the concept.

    [TestFixture]
    public class StandardObjectsTests
    {
        public class StringContainer
        {
            public string String1 { get; set; }
            public string String2 { get; set; }
        }

        [Test]
        public void ObjectsWithSameValue_ShouldBeEqual()
        {
            var stringContainer1 = new StringContainer {String1 = "Test String1", String2 = "Test String 2"};
            var stringContainer2 = new StringContainer {String1 = "Test String1", String2 = "Test String 2"};

            Assert.AreNotEqual(stringContainer1, stringContainer2);

            AssertValues.AreEqual(stringContainer1, stringContainer2);
        }
    }

I’m sure you have a bunch of similar helper methods lying about your projects.

How about contributing them to the TestHelper project?


ALT.NET, London, 13 Sept 2008

Intro

Debate over what ALT.NET is: should it have a set of guiding principles, like the Agile manifesto?

Continuous integration & deployment

There seemed to be 3 major areas where people encountered difficulties doing continuous integration & deployment:

  1. Configuration files
  2. DB schema migrations
  3. Data migrations

Best practice approaches discussed were:
Config files
  1. Make sure that your config files are small and contain only the config data that changes often (DB connection strings, file paths etc). Put all your “static” config data into separate files (DI container config etc).
  2. Consider templated config files, where specific values are injected during the deploy process (see the sketch just after this list).
  3. Keep all config in simple text files in source control.
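A minimal sketch of the templating idea (the file names and token are invented for illustration):

    using System.IO;

    class ConfigTemplater
    {
        static void Main()
        {
            // Web.config.template contains placeholder tokens such as @DB_CONNECTION@.
            string template = File.ReadAllText("Web.config.template");

            // The deploy process injects the environment-specific value.
            string output = template.Replace(
                "@DB_CONNECTION@",
                "Server=prod-db;Database=Shop;Integrated Security=SSPI");

            File.WriteAllText("Web.config", output);
        }
    }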
DB schema migrations
  1. Migration techniques borrowed from Ruby on Rails – generate change scripts by hand or with tools like SQL Compare, then apply them using a versioning tool like dbdeploy.
DB data migrations
  1. Take a backup before data migration.
  2. Ensure the app fails quickly if there is a problem, because if data has changed since deployment then you cannot roll back.
  3. Consider apps upgrading themselves and running smoke tests upon startup – and refusing to run if there is a problem (a sketch of the idea follows). This technique is used by established opensource projects such as WordPress, Drupal and Joomla.
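A minimal sketch of the fail-fast startup check (the changelog table and expected version are illustrative; dbdeploy maintains a similar changelog table):

    using System;
    using System.Data;

    public static class StartupChecks
    {
        private const int ExpectedSchemaVersion = 42; // illustrative value

        // Refuse to start against a database whose schema version doesn't match.
        public static void EnsureSchemaVersion(IDbConnection connection)
        {
            using (IDbCommand command = connection.CreateCommand())
            {
                command.CommandText = "SELECT MAX(change_number) FROM changelog";
                int actual = Convert.ToInt32(command.ExecuteScalar());
                if (actual != ExpectedSchemaVersion)
                {
                    throw new InvalidOperationException(string.Format(
                        "Schema version {0} found, {1} expected - refusing to start.",
                        actual, ExpectedSchemaVersion));
                }
            }
        }
    }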
Mentioned tools: TFS, Subversion, CC.NET, JetBrains TeamCity, dbdeploy, SQL Compare.
Acceptance testing
It seemed to me that the majority of pain experienced in this area results from the lack of a ubiquitous domain-specific language:
  • Build a DSL incrementally during short iterations.  Gives you opportunity to refine, fill in gaps, and train whole team to use same language.
  • Without a DSL, acceptance testing via the UI becomes brittle: you end up specifying your tests at too low a level (click button A, then check for a result in cell B), rather than translating from acceptance tests in a higher-level DSL to specific UI components (see the sketch after this list).
  • Consider prioritised tests – have a set of face-saving smoke tests that always work and ensure the major things are still working (is the company phone number correct? does the Submit Order button still work?). Acceptance tests can be thrown away once they have served their function of evolving the design and team understanding.
  • The acceptance testing trio: developers test for success, so automated tests cover only the happy path – you still need exploratory testing by someone with a testing mindset (what happens if you do weird stuff?), and that tester must have domain knowledge. The business defines what should happen – don’t force developers to make up business rules.
  • Ensure all layers of the stack (tests, manuals, code, unit tests) use the same DSL.
  • How do you get workable acceptance tests? See the Requirements Workshops book.
  • Short iterations – more focus, incremental specs, opportunities to discuss missing test examples.
  • The key is having a ubiquitous language encoded as a DSL (domain-specific language) – it develops over time and enables automated acceptance tests.
  • Sign off against acceptance tests (the Green Pepper tool captures & approves acceptance tests).
  • Talk: “The Yawning Crevasse of Doom” – Martin Fowler, on InfoQ.
  • Avoid describing these activities as “testing” – people avoid them because testing has low social status.
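A sketch of the contrast (all names hypothetical): pushing the button-level details behind a small DSL class so the test itself reads in domain language:

    using NUnit.Framework;
    using Selenium; // Selenium RC .NET client

    // The "DSL" layer: the one place that knows about locators and buttons.
    public class BasketDriver
    {
        private readonly ISelenium _selenium;
        public BasketDriver(ISelenium selenium) { _selenium = selenium; }

        public void Add(string sku, int quantity)
        {
            _selenium.Type("css=#sku", sku);
            _selenium.Type("css=#qty", quantity.ToString());
            _selenium.Click("css=#btnAddToBasket");
        }

        public int QuantityOf(string sku)
        {
            // Sketch only: real code would locate the row for the given sku.
            return int.Parse(_selenium.GetTable("basketTable.1.2"));
        }
    }

    [TestFixture]
    public class BasketAcceptanceTests
    {
        [Test]
        public void AddingAnItemShowsItInTheBasket()
        {
            var selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost/");
            selenium.Start();
            var basket = new BasketDriver(selenium);

            basket.Add("ISBN-12345", 2);                          // domain language...
            Assert.AreEqual(2, basket.QuantityOf("ISBN-12345"));  // ...not button clicks

            selenium.Stop();
        }
    }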
Mentioned tools:  White for Windows GUI testing
Domain driven design
  • Discussion around the differences between DDD, where we treat the domain concepts & actions as central; DB-centred design, where the data is central; and UI-centred design, where the screens are central.
  • Consensus was that the domain shouldn’t be tightly bound to the DB or the UI.
  • Ideas around passing DTO objects up to the view (UI, web services etc), and change messages back from the view indicating how the domain should be changed (rather than passing back the whole DTO, where you don’t know what has changed).
BDD
  • Defined as Dan North’s Given, When, Then (see the sketch after this list).
  • Is it any different from acceptance testing? Only that it is better branding: BDD doesn’t have the word “testing” in it, which stops people switching off at the word “test” when discussing specifications.
  • BDD is writing a failing acceptance test first, before writing code.
  • Unit testing ensures that the code is built right; acceptance testing / BDD ensures that the right code is built.
  • Toolset is still immature. FitNesse’s .NET & Java tooling is the most mature; many BDD tools (other than Ruby’s RSpec) have been started and abandoned (NBehave, NSpec etc).
  • BDD is not about testing, it’s about communicating and automating the DSL. Be wary of implementing BDD in developer tools (e.g. NUnit), which prevent other team members (business, customer, testers) from accessing them.
  • Refactoring can break FitNesse tests, because they aren’t part of the code base.
  • Executable specs (via acceptance tests) are the only way to ensure documentation / test suites are up to date & trustworthy.
  • Agile is about surfacing problems early (rather than hiding them until it’s too late to address them). So when writing acceptance tests up front is difficult, this is good: you are surfacing the communication problems early.
  • The real value is in building a shared understanding via acceptance criteria, rather than in building an automated regression test suite.
  • Requirements workshops can degenerate into long boring meetings. To mitigate this problem…
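To illustrate the Given/When/Then structure in plain C#/NUnit terms (the Account type is invented for illustration; real BDD tools layer their own vocabulary on top):

    using NUnit.Framework;

    public class Account
    {
        public decimal Balance { get; private set; }
        public Account(decimal balance) { Balance = balance; }

        public void TransferTo(Account destination, decimal amount)
        {
            Balance -= amount;
            destination.Balance += amount;
        }
    }

    [TestFixture]
    public class AccountTransferBehaviour
    {
        [Test]
        public void TransferMovesMoneyBetweenAccounts()
        {
            // Given two accounts with known balances
            var source = new Account(100m);
            var destination = new Account(10m);

            // When an amount is transferred
            source.TransferTo(destination, 25m);

            // Then the source is debited and the destination credited
            Assert.AreEqual(75m, source.Balance);
            Assert.AreEqual(35m, destination.Balance);
        }
    }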
Tools: Ruby RSpec, JBehave, Twist, Green Pepper.
Feedback
In the post-conference feedback, everyone was overwhelmingly positive and found the open spaces format very energising: fantastic sharing of real-world experiences, introductions to new approaches, nuggets of information, and great corridor conversations – a format that allows human interaction.
Next ALT.NET beers on 14th Oct.
Next ALT.NET group therapy in Jan 2009, with a larger venue.

When to make technical stories?

During initial sprint planning, stories correspond to user features, and typically follow a

As a [user type]
I can [some action]
So that [some benefit]

structure.

It’s important to keep the stories focused on features rather than on tasks, because we need the users / product owner to be able to decide which stories to add or remove. (A user cannot decide which tasks to add or remove, because the dependencies aren’t obvious.)

However, during development of a particular story, you will often come across an area of the code that needs to be refactored. A classic example is the removal of duplication: as the design has evolved, we discover additional areas of common functionality.

It can be tempting to work this refactoring into the current story, and if the refactoring is relatively small, this is a good idea.

However, in many cases the refactoring is too large to do without increasing the complexity of the story so much that it might not get finished in the current sprint.

This is the time to create a new “technical story”, which encompasses the refactoring (and perhaps any related work).

It’s important that this block of work becomes a story, to increase its visibility to the team and to the product owner. I’ve found that other team members always have useful input (hey, area Y needs that too), and the product owner gets to prioritise the refactoring along with other stories.

This also makes plain to all the stakeholders why technical debt is increasing – when too many of these technical stories have been neglected in favour of new features.


Adding a prompt option to Html.Select in ASP.NET MVC

Digging around the source code to the ASP.NET MVC Source Refresh, I discovered a useful extension to the Html.Select helper method.

Say you want a select dropdown with a NULL case, which instructs the user to make a selection.

E.g. you want your first option to be: ==Select==

To do so, call Html.Select with an htmlAttribute of prompt="==Select=="

Like:

<%= Html.Select("Quote.ClientId", ViewData.Clients, "Name", "Id", ViewData.Quote.ClientId, 0, false, new { prompt = "==Select==" }) %>


alt.vs.team.system

Some thoughts on alternatives to the Visual Studio Team System for building webapps:

  • Requirement gathering
    • I like Axure (axure.com) for HTML prototyping ideas, and Visio is okay for architecture diagrams
  • Code editing
    • VS Pro + Resharper is pretty good. One of the best “step-through debuggers” I’ve used
  • Source control
    • Subversion + TortoiseSVN. Anyone played with Git? Is there a good Git GUI tool?
  • Continuous Integration
    • Jetbrains TeamCity
  • Issue tracking / project website
    • I quite like Trac – wiki, milestones, good SVN integration. You have to maintain a Python app, though.
  • O/R mapper
    • NHibernate? DataSets? LINQtoSQL? What do you use?
  • TDD / BDD
    • ASP.NET MVC looks like it’s going in the right direction, but it’s still early days
    • nUnit / mbUnit / VS built-in? Is there anything that comes close to rSpec in the Rails world for elegance?
    • Mocking frameworks? Moq has a nice syntax (see the sketch at the end of this list) – most of the others seem overly complex (record/replay?!)
    • TestRunner? www.gallio.org looks like a good start, but still early days
    • Functional testing? Selenium? WatiN?
    • Continuous Integration server? Is there anything prettier than CruiseControl?
    • Load testing?
  • Logging
    • Log4Net
  • Monitoring
    • Current favourite is Webperform
  • Support email handing
    • SupportSuite from QualityUnit has a nice Gmail style interface + KB
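
As an aside on the Moq point above, a tiny sketch of its lambda-based syntax (the IPriceService interface is invented for illustration):

    using System;
    using Moq;

    public interface IPriceService
    {
        decimal GetPrice(string sku);
    }

    public class MoqSyntaxExample
    {
        public static void Main()
        {
            // Arrange: stub the dependency with a lambda-based setup.
            var priceService = new Mock<IPriceService>();
            priceService.Setup(s => s.GetPrice("SKU-1")).Returns(9.99m);

            // Act: the code under test would receive priceService.Object.
            decimal price = priceService.Object.GetPrice("SKU-1");
            Console.WriteLine(price); // 9.99

            // Assert-style verification - no record/replay ceremony.
            priceService.Verify(s => s.GetPrice("SKU-1"), Times.Once());
        }
    }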