Going Agile

Tuesday, May 23, 2006

Picked a tool

In the end, cost and simplicity won. We're sticking with ExtremePlanner for now, but actually going to license it. It's a bit underpowered IMO, but is simple to use while we're still in the pilot stage. I pushed for Rally, and we may yet go with them, but for now we're doing cheap and simple - just backlog management and iteration/task assignment.

Friday, May 19, 2006

Retrospective

We just had our sprint retrospective (aka post-mortem) and some things didn't go so well. I forgot the basic tenet of running a retrospective (or any critique, for that matter): don't focus only on the negative.

There are always three questions to ask:
  • What went well/what works?
  • What went poorly/what doesn't work?
  • What (if anything) should we change?
It got off on a sour note when one developer started talking about his tasks. He had mistakenly not integrated them with the main source tree because the work story was not a clearly defined feature. Normally this would invalidate the code, and the feature could not be counted as part of the sprint. Some draconian teams might even make him delete the code and start over. Since we're still getting our bearings I let it slide and he demoed, but it put him on the defensive (he gets defensive easily). The irony is that his task was only to gather the requirements, so writing functional code went well beyond what was expected.

My feelings about the longer cycle length cast a pall over things as well. We agreed to shorten to a three-week cycle. That seems to be a better sweet spot once you consider that two to three days of each sprint are taken up by the demo and kickoff meetings.

Another thing I did wrong was to dictate some new processes. As the scribe for the team I've been feeling the pain of poor data tracking more than others, so I just told everyone what they'll be doing. It's the correct stuff, but it was incorrectly sold. Our new practices will be:
  • Each developer is responsible for knowing/discovering their requirements
  • Each developer is responsible for having at least one acceptance test written for each work story
  • Each developer will start entering their own data in our current tool (ExtremePlanner)
The really bad thing is that we all left the meeting on a down note.

Thursday, May 18, 2006

One more tool

We recently had a conference with Rally Software, makers of RallyDev. It is quite the slick and powerful product. An interesting thing that came out of the discussion with HUD is that we don't want a product that is too powerful. At this stage we have little use for complex metrics (coincidentally, that is VersionOne's big selling point). All we need is something simple and straightforward that integrates with Mercury Quality Center, which THOT has recently convinced them to spend a bazillion dollars on (still working on the rollout).

Monday, May 15, 2006

Tools, tools, tools

My kingdom for a horse! Agile is still early-stage, so the tools are still maturing, along with our understanding of how to use them. I've now gone through the learning curve with three tools, and here's what I have to say.

White Board
Sticky notes, white board, etc. - i.e. the classic XP way to manage a project. This was incredibly effective for the first two iterations; after that it started to become a drag on productivity.
Reason 1: Our white board was too small. As stories unfolded there simply wasn't enough space to hold all the data and still allow for scratch space. We would need a dedicated room with two walls of white board - minimum 6' x 8', possibly 6' x 12' each. That would allow a rollup, a priority list, and a scratch area to accommodate the story cards.
Reason 2: Historical data lives only in memory. There is no way to see where we were mid-sprint or to track velocity.

Extreme Planner
Initially I found this tool non-obvious, yet overly simplistic. I used it because the licensing terms allowed for long-term use (albeit with only two users). It also lacks support for complex, multi-team projects. Over time, however, I have found it to be an excellent tool. The interface becomes obvious once the project has begun, and drilldown and data entry are very straightforward.

Version One
Large and lumbering, Version One boasts all the features an enterprise-class product could wish for - multiple-team support, rich reporting, complex project dependencies. The problem is that it is damn near unusable. Rather than using drilldown navigation, every edit screen is a popup - and some don't even let you edit the data. Views that you would expect to guide you to task editing (iteration summary, home dashboard) lead to dead ends. The tiny tabular text is nigh unreadable. And it is currently a sealed product (no integrations), although they tell me an API is on the way. Please hire a graphic designer and an information architect.

Thursday, May 11, 2006

Perils of the One Month Sprint

Volatility behaves much like interest (think money): its rate is roughly constant over a given time period, but its effects compound. So while the rate of volatility doesn't change, the longer you let the time period run, the more likely volatility is to have a disastrous effect. This is, in fact, one of the principal problems with waterfall that iterative development seeks to overcome.
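To make the compounding analogy concrete, here's a small sketch (my own illustration - the daily change rate is a made-up number, not a measured one): if each working day carries an independent chance p that a given backlog assumption changes, the odds that the assumption survives an n-day sprint shrink as (1 - p)^n, just like compound interest in reverse.

```python
def survival_probability(p_daily_change: float, sprint_days: int) -> float:
    """Probability that a backlog assumption is still valid after sprint_days,
    assuming an independent p_daily_change chance of change each day."""
    return (1 - p_daily_change) ** sprint_days

# Hypothetical 2%-per-day change rate: the daily rate never changes,
# but the longer sprint gives it more time to compound.
for days in (10, 15, 20, 30):
    alive = survival_probability(0.02, days)
    print(f"{days:2d}-day sprint: {alive:.0%} of assumptions still valid")
```

At that (invented) rate, roughly three-quarters of your assumptions survive a three-week sprint but only about half survive a full month - which matches the "stale assumptions" feeling described below.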

In a one-month cycle I have found two things to be true:
  • Volatility has increased to the point that we are working on a non-optimized set of tasks (i.e. no longer working on most important items from the backlog).
A month is a long enough period that developers can fall into a smaller version of the "back-loading" problem facing larger projects.
The long and the short of it is that a month is long enough for us to get lazy and then try a rush job at the end, and long enough for our original assumptions to go stale.

Granted, these problems are nothing compared to what you would see on a six- to twelve-month project, but they are creeping in. And what of the perceived benefits - more test time for QA, better stability, more complete docs? None of it happened. To be fair, our one test resource went on vacation for two weeks, but I wasn't seeing it happen anyway.