
HOWTO – configure Netbeans PHP debugging for a remote server, over an SSH tunnel

Having tripped myself up on multiple occasions setting this up, I’m recording these config steps here for future-me.

Scenario:  You have a PHP site running on a remote [Ubuntu 12.04] server, and want to connect your local IDE [Netbeans] to the Xdebug running on that server over an SSH tunnel.

  1. apt-get install php5-xdebug
  2. vi /etc/php5/apache2/conf.d/xdebug.ini
    zend_extension=/usr/lib/php5/20090626/xdebug.so
    xdebug.remote_enable=On
    xdebug.remote_host=127.0.0.1
    xdebug.remote_port=9000
    xdebug.remote_handler=dbgp
    
  3. Restart Apache: service apache2 restart
  4. Create the remote->local SSH tunnel: ssh -R 9000:127.0.0.1:9000 yourname@yourserver.com
  5. Launch Netbeans debugger

The key is that your Netbeans IDE acts as the server in this scenario, listening for incoming connections on port 9000 from the remote server’s Xdebug.  Thus the tunnel must run from the remote port to your local port, not the other way around.
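
Once the tunnel is up and the Netbeans debugger is listening, you can trigger a debug session by hand by requesting any page with Xdebug’s session-start parameter (a hedged example – netbeans-xdebug is Netbeans’ default session ID, and index.php stands in for any page on your site):

  curl "http://yourserver.com/index.php?XDEBUG_SESSION_START=netbeans-xdebug"

If the chain is wired up correctly, Netbeans should hit your first breakpoint (or stop at the first line, if that option is enabled).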

Some helpful debugging techniques

Start ssh with -vv for debugging output

netstat -an | grep 9000

should show something like:

tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:9000 127.0.0.1:59083 ESTABLISHED
tcp 0 0 127.0.0.1:59083 127.0.0.1:9000 ESTABLISHED
tcp6 0 0 ::1:9000 :::* LISTEN

Googlewhack!

Hurrah! My first googlewhack, discovered by complete accident.


The Correlation between Schedule Pressure & Low Quality

Research suggests that:

  • 40% of all software errors are caused by pressure on developers to finish more quickly (Glass 2004)
  • Under extreme schedule pressure, code defects increase by 400% (Jones 2004)
  • Projects which aim to have the lowest number of defects also have the shortest schedules (Jones 2000)

This makes sense if you consider that good engineering practices are the first to leave the building under pressure to finish; most teams will revert to quick & dirty hacks to get things implemented, without complete testing and so on.

My personal opinion is that the only way to shorten development cycles is to reduce the feature set. It’s pleasing for me to see that the research seems to back this up.

When deciding which features to drop, I think it’s worth revisiting the business requirements that are driving a particular set of features. In many cases a simpler “design” could suffice; for example a fancy calendar widget could be replaced with a simple textbox; a little-used settings screen could be retired in favour of manually changing config files; or overly complex but little-used workflows could be put on the back burner.

I maintain that a lot of “features” can be dropped, without actually impairing the business functionality of the system.

Just remember, whatever you do, DON’T consider dropping testing or QA in an effort to meet your deadline; unless you want to guarantee that you will continue to miss all future deadlines until the project gets cancelled!


DDD7 – Nov 21, Microsoft campus, Reading UK

Wow.  DDD, the community conference for UK MS developers, hosted by Microsoft but completely driven by the community, continues to go from strength to strength.  This year, all 400 places were filled within 4 hours of the announcement on Twitter that registration was open.

I really enjoyed Mike Hadlow‘s talk on IoC injection, with specific reference to his open-source eCommerce application, SutekiShop.  Clearly an expert on the subject of ASP.NET MVC, Onion architecture, Repositories & Services, and binding it all together with IoC, he is also a gifted presenter.   If you’re looking for a reference implementation of an ASP.NET MVC application (or indeed just a loosely coupled, TDD-driven web application), I’d strongly advise you to check out Mike’s SVN repo.

Toby Henderson gave an interesting demo of how you can run .NET apps under Linux (Ubuntu) using Mono.  Worth bearing in mind when considering your hosting & deployment options.

Sebastien Lambla gave a highly entertaining (if opinionated) presentation of a series of WPF tips and tricks.  My favourite tip (which isn’t really WPF specific):

Tired of always checking whether your event delegates are null before calling them?  Just declare them with a standard empty delegate – then they are never null!

  public event EventHandler MyEvent = delegate { };
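
For context, here’s a minimal sketch of the pattern (the class and event names are my own illustration, not from the talk):

  using System;

  public class Downloader
  {
      // Initialising the event with an empty delegate means it is never null
      public event EventHandler Completed = delegate { };

      public void Finish()
      {
          // No null check needed – the empty delegate absorbs the call
          // even when nothing has subscribed
          Completed(this, EventArgs.Empty);
      }
  }

The trade-off is one extra (empty) delegate call per raise – a small price for never seeing a NullReferenceException from an unsubscribed event.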

Recommended book: WPF Unleashed by Adam Nathan

As always it was a great event – remember, if you want to be at DDD8 (2009), sign up early!

See www.developerday.co.uk for slides & videos from all sessions


Complexity Smells

I propose that one of the principal things that gets projects into trouble is too much complexity.  My theory is that there are a number of “complexity smells” which, if identified and addressed early on, can radically improve a project’s chances of success.

To explore this theory, I recently ran a workshop where we brainstormed complexity smells and possible preventative actions.

We then selected and ranked the suggestions to produce the following list:

(1) Over-engineered code/applications – 46% of votes
Do you really need all those features? Should you really be introducing this code abstraction or design pattern; or are you just speculating that it will be required in the future? Will a simpler solution work for now?
Prevention strategies
(a) Do you really need that functionality / code? No, really.
(b) Refactor mercilessly (at all levels: functionality, architecture, classes, methods, algorithms)
(c) Ensure your team has good coding standards

(2) Lack of TDD & Acceptance test automation – 26% of votes
Is your unit test coverage above 80%? Can you click one button and have all your acceptance tests run automatically? Have you considered that any change made after version 1 is released (i.e. in 90% of the lifecycle of the application!) is the equivalent of someone opening a novel they have never read, changing what happens to a couple of characters, and then, without re-reading the novel, hoping it all still makes sense?
Prevention strategies
(a) Set up automation in sprint 0
(b) Project manager or tech lead needs to drive automation
(c) Ensure the team has testing experience (or at least a resource to guide & educate them)
(d) Initially test automation just seems to slow down progress; be ready to explain how automated tests are the gift that keeps on giving

(3) Poor time / priority management – 20% of votes
Does your project have a clearly prioritised backlog? When new features are introduced, is it easy to see which features should be moved down the priority list? How frequent are your feedback loops – between deciding on a requirement, designing a feature, and getting feedback on whether the implemented feature fulfils the requirements?
Prevention strategies
(a) Break your project up into 3 month releases
(b) Appoint a strong product backlog owner.

(4) Ownership – 5% of votes
Who owns the project’s feature set? Who owns the code past release 1? Is there someone who can make decisions quickly and decisively?
Prevention strategies
(a) Customer should decide what is produced, and what acceptance tests validate that it works.
(b) Place emphasis on collaboration, knowledge sharing and transparent communication
(c) Ensure rapid feedback cycles are built in
(d) Keep the same team on the project through the whole product lifecycle

Some other complexity smells identified:


  • External dependencies – consider building “anti-corruption layers” between your application & 3rd parties. Prefer talking to humans rather than documents (re: 3rd party APIs)
  • Poor communication – co-locate the team; prefer face-to-face conversations to email; keep teams under 10 people
  • Standards – too many or too few; treat standards as guidelines rather than rules


If your project is exhibiting some of these smells, perhaps it’s time to have a complexity retrospective with your team, and nip them in the bud before they spiral out of control and kill your project.


The Complexity Retrospective

Many projects go awry due to excessive complexity, and it’s always worth evaluating whether your team is approaching things in the simplest way that can work, especially when the deadlines begin to loom.

I recently led a retrospective with my team focusing on complexity across all the areas of our project, using a handful of techniques from “Agile Retrospectives – Making Good Teams Great” (a must-have for every agile team).

As suggested, we structured the hour-long retrospective into 5 parts:

  1. Setting the stage
  2. Gather data
  3. Generate Insights
  4. Decide what to do
  5. Close / Action plan

The purpose of “Setting the stage” is to get everyone engaged and thinking about the same theme. To do this, I reminded people of the actions we had set ourselves at the last retrospective, and then asked each person to complete the sentence “If we were a military commando, and our mission was last retrospective’s actions, we would be _______”:

  • Awarded medals
  • Promoted
  • Ready for another mission
  • Court-martialled
  • Dead

I then told the team that in this retrospective we would be considering complexity, and whether we had too much, or just the right amount in each of the areas of our project.

To gather data, I drew a complexity radar, with each spoke representing a different area of complexity. I began by suggesting a couple of generic spoke names (Data model, Workflow), and then got the team to suggest the other areas. Using dot voting, everyone voted on where they felt we ranked on each spoke, with closer to the centre being just right, and further away being overly complex. Joining the clustered dots produced the following radar map:

To generate insights we used the five whys exercise. I asked people to break into groups of 2, preferably cross-discipline, and assigned each group 2 of the high-ranking spokes. They were then tasked with asking each other “why is <spoke area> complex?”, then “why is <answer>?”, and so on, until five whys had been asked. The answer to the fifth why was considered the root cause of the complexity, and was recorded on a card. As the root cause cards came in, I grouped them, and when everyone was done, read the root cause groups out.

To decide what to do, we constructed 2 more histograms, one considering the impact of each root cause, and the other the difficulty of addressing it. I then asked each person to vote on which root causes had the highest impact and which were the least difficult to address. This produced the following histograms.

Finally, we combined the impact & difficulty histograms into the following map.

My intention was that the final exercise would make it simple to choose the actions to take forward into the next sprint (basically, choose the low-hanging fruit: the easiest things to address which give the biggest risk reduction), but there wasn’t a clear winner shown on the graph.  Generating actions took a bit more discussion.

We found that this format was a fun and effective way to address the complexity problem.

Hopefully you’ll find running something similar with your own team helpful!


Fixed price bidding vs the Cone of Uncertainty

Software estimation’s “Cone of Uncertainty” suggests that before the requirements & user interface design are completed on a software project, estimates of the time to complete the project can be out by a combined factor of up to 16 (the actual can range from 0.25x to 4x of the estimate).


(Source: http://www.construx.com/Page.aspx?hid=1648)

So, before you have pinned down the exact requirements, the actual time to complete the project could be up to 4 times longer than your first estimate (or as little as a quarter of it). Only after you have pinned down the user interface should you expect your estimates to be within 25% of the actual time to complete the project (an error margin that can be managed). Note that this data assumes an experienced team, good estimators and NO additional requirements introduced late in the project (and good luck to you finding a project like that!)
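
To make that concrete (with hypothetical numbers): if you estimate 6 months at the initial concept stage, the cone says the actual could plausibly land anywhere between 1.5 and 24 months. Only once the UI design is done does the expected range tighten to roughly 4.5–7.5 months.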

Given this research, what are the chances of a fixed price bid coming in on time or on budget?  Virtually zero.

And what is the first thing to slip under schedule pressure?  Quality.

So, our theory implies that if you go with a fixed price bid for any significant piece of work before the user interface has been designed, then:

  1. The work delivered will be late (potentially up to 4 times the original estimate, or up to 16 times if the bid was anchored at the optimistic end of the cone)
  2. Work delivered closer to the delivery date will be of low quality.

Is this true?

What have your experiences been?


Story points = Complexity points / relative size

One of the ideas many people seem to struggle with in Agile projects is that of Story Points.

In an agile project, the time to implement a story (a feature) is deliberately estimated in a weird unit called story points, rather than in a number of hours or days.

The most important thing to remember is that story points do NOT equal units of time.  Initially you will naturally find yourself trying to convert story points to days, or estimating in days or hours, and then trying to convert that to story points.

RESIST this temptation!  There is a method behind the madness.

  • Research has shown that people are better at estimating relative sizes (A – C is twice as far as A – B, Basket X is about 1/3 the weight of Basket Y) than coming up with absolute estimates (A to B is 15km, Basket X is 7.5kg)
  • Days are a very subjective unit of measure.  Depending on other commitments, your ideal days are very different from mine.
  • Estimating relative size is much quicker; and you need less information to get started (you don’t actually have to know how long anything will actually take, just the relative comparisons between different stories)

With a new project it’s impossible to know how quickly features will be produced.  There are just too many variables – learning the domain & toolset, reaching agreement within the team, the stabilising of work patterns.

What you do instead is complete a couple of iterations, and then measure how many story points you delivered on average per iteration.  This average becomes your velocity, which you can use to derive an estimated completion range from the remaining story points.
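
A worked example (with hypothetical numbers): suppose the team delivers 18, 24 and 21 story points over its first three iterations. Velocity is then about 21 points per iteration, so a remaining backlog of 210 points suggests roughly 10 more iterations; using the observed spread (18–24 points per iteration) gives a completion range of about 9 to 12 iterations – a more honest answer than a single date.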

Note that with this technique your story points remain valid, as they are just measures of relative size/complexity.  The only time you really need to re-estimate story points is when you got the relative size of a story wrong – perhaps it turns out to be much easier to send emails than you thought, or much harder to draw graphs.


Draconian parking fines, by Camden Council

The quest to make our world a little greener is a worthy challenge, even if it seems to put one in opposition with everyone else in our modern society.

Case in point – I attended the Geek Kyoto conference in Camden a few weeks back, and rather than drive home on the Friday and then back into London on the Saturday (a round trip of 80 miles), I elected to spend the night in a London hotel. Finding parking overnight was, shall we say, trying, and wildly expensive.

Eventually I found a parking bay, for which you had to pay between 08:30 & 13:30, and paid the required £10 (!) to park for the first 2 hours. I carefully placed the parking receipt on my dashboard, confident of being a law-abiding citizen (observe the clear sign across the road).

When I returned to move the car after 2 hours (maximum parking 2 hours), I was shocked to see that I had been ticketed a whopping £120 for my citizenly efforts.  The bay I was in is signposted as a residents-only bay.  How ASBOish of me; how could I have missed the residents-only sign – it’s clearly displayed behind a tree!  (Look, just above the silver car parked behind me.)

Egad – I managed to pay £10 for the privilege of being fined £120.  So much for attempting to reduce my CO2 footprint by driving less!

I’m appealing; let’s see what Camden Council have to say…


