Rediscovering the Obvious

…stumbling in the footsteps of greatness

Archive for October, 2009

When estimates don’t matter


This is an edited version of a post I wrote to kanbandev on 10/22/2009. The thread was started by somebody asking about work decomposition, how it relates to estimation, and why we have been saying that estimation doesn’t have value. Disclaimer: this is all "in my experience" and may not represent the views of anybody else.

Goals of estimation

  1. People who ask for estimates are generally looking for commitments on when something will be "done", where "done" seems to mean "earning or saving money".
  2. Teams and people estimating of their own volition are generally looking for the understanding of the work that comes from estimating and breaking down the work.
  3. Teams doing estimates also use them to expose inconsistencies of understanding (e.g. planning poker).
  4. Related to #2, teams additionally use the estimation process to find dependencies among work items, and as a more general planning tool.

All of these are good goals to have. However, estimation is just the mechanism we’re using to accomplish them: a reasonably effective mechanism in the right hands and a horrible one in the wrong hands.

Other mechanisms

  1. Instead of estimating, we commit to a generalized Service Level Agreement (SLA) for a given class of service and just slot items into those classes. Individual items may miss, but in aggregate the commitments are quite solid as a result (see the sketch after this list).
  2. Pairing and swarming are both great tools for exploring the work and sharing that understanding, especially when the customers are involved in the process.
  3. The work gets done so quickly with swarming that the inconsistencies are driven out VERY quickly, and nobody works alone long enough to propagate those misunderstandings. (But beware the cult of personality.)
  4. This is the expand/collapse pattern: the work breakdown tends to occur as soon as the team starts a bigger item, the dependencies are figured out as part of the expansion, and the loosely ordered items are pulled through very quickly. Each level of expand/collapse has its own internal ordering dependencies, but these generally don’t cross levels much.
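To make the SLA mechanism in #1 concrete, here is a minimal sketch of deriving a class-of-service commitment from historical cycle times. The class names, numbers, and 85th percentile are illustrative assumptions, not data from any team I’ve worked with:

```python
import numpy as np

# Historical cycle times in days per class of service -- illustrative numbers.
history = {
    "expedite": [1, 2, 2, 3, 3],
    "standard": [4, 6, 7, 9, 12, 14],
    "intangible": [10, 15, 22, 30],
}

def sla_days(cycle_times, percentile=85):
    # The commitment: "percentile% of items in this class finish within N days."
    return np.percentile(cycle_times, percentile)

for cls, times in history.items():
    print(f"{cls}: 85% done within {sla_days(times):.0f} days")
```

New items are simply slotted into a class; no per-item estimate is needed, and the aggregate commitment holds as long as the historical distribution does.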

This is a general and very effective pattern I’ve seen when looking at lean approaches to evolving the way we work. Instead of looking at a practice as a tool, it’s very useful to back out a level or two and look at the goals that practice was meant to accomplish, and further still to the principles and values backing it. When I do this, I often find other practices we could do (or already do) that accomplish the same goals.

Scrum is a very nicely balanced set of practices that works great in the proper context because the practices collectively cover most of the goals we need to cover as an agile development group. My criticism of Scrum is that when it doesn’t fit the context, the perception is that you should change the context to fit Scrum rather than the other way around. The danger in changing Scrum is in failing to understand the goals of the practices. Dropping estimation is a very bad idea if you don’t have another way of making your commitments, for example. Dropping sprints is a bad idea if you don’t have inspect/adapt, commitment, delivery, and the rest covered. Blindly changing is just plain stupid.

Written by erwilleke

October 22nd, 2009 at 6:55 am


Calculating Touch Time


This is a follow-on from the same conversation with Liz as the last post. We were talking about how Inkubook used branch by feature to keep our main line clean and to avoid additional complexities around the “Share code” bullet from her post:

Share code. If the teams check in before the code is finished, their scenarios will fail. If they check in examples which haven’t yet been coded, those examples will fail. This won’t be a problem if no one else is modifying the code base; however, if it’s a subset of a much larger team breaking the build can cause havoc, and the habit of keeping builds green is a good one. Try distributed version control, which will allow a team to check in on USB keys or a temporary space until the code works. (There are techniques for getting, say, Mercurial, to work alongside, say, Subversion – mostly by making each system ignore the other). You could also pass around patch files to keep the code in sync.

Inkubook used branch by feature almost exclusively. We did lose some of the benefits of Continuous Integration, but by recognizing the principles behind the mechanisms we managed to mitigate most of the losses. The two primary tools were keeping cycle time low and thoughtfully choosing features to avoid parallel work in the same area of the application. These practices avoided much of the integration pain we would otherwise have seen. We were lucky enough to disregard them early in our flow days, and two days of merge pain taught us the lesson.

This was the point at which I realized that our branches could be used as nearly perfect indicators of cycle time. We had exactly one branch per MMF (minimum marketable feature): we created it when we started working on something, and we deleted it when the work was pushed to production. That’s useful.

Even more interesting, though, is that we could use them as reasonable approximations of Touch Time! It has always been quite a challenge to find a non-invasive way to calculate touch time on a per-feature basis. A 24-hour period with a checkin is "touched"; the same period without one is "not touched". More fine-grained than that probably isn’t valuable, and good agile developers check in at least daily when they’re working on something. This would be more than sufficient for how I’d use touch time… exciting!
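As a minimal sketch of that calculation, assuming you can export one (branch, commit date) record per checkin from your version control system (all names and dates here are illustrative):

```python
from datetime import date

def touch_time_days(commit_dates):
    # Bucket checkins into 24-hour periods: a calendar day with at
    # least one checkin is "touched"; a day without one is not.
    return len(set(commit_dates))

def cycle_time_days(created, deleted):
    # Branch lifetime: created when work starts on the MMF,
    # deleted when it is pushed to production.
    return max((deleted - created).days, 1)

commits = [date(2009, 10, 5), date(2009, 10, 5), date(2009, 10, 7)]
tt = touch_time_days(commits)                               # 2 touched days
ct = cycle_time_days(date(2009, 10, 5), date(2009, 10, 9))  # 4 days
```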

About touch time

Others have written quite a bit about touch time and what it means in manufacturing, design, and queuing theory, so I’m not going to try to produce a comprehensive description of how it works. Instead, I’m going to talk about what I would and would not use it for. These are "would"s because I haven’t had the data before, so this is theory, not practice.

Before I start, a couple of definitions. They probably don’t follow standard usage, so be warned. I’m going to define efficiency "E" as the ratio of Touch Time to Cycle Time (TT/CT), where the time measurements are based on 8-hour weekdays, so two weeks of steady work is 100%, regardless of weekends, holidays, etc.
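Here is a sketch of E under that definition, leaning on NumPy’s business-day helpers so weekends drop out (holiday calendars are omitted for brevity, and the dates are made up):

```python
import numpy as np

def efficiency(touched_days, created, deleted):
    # E = TT / CT, counted in 8-hour weekdays only.
    cycle = max(np.busday_count(created, deleted), 1)
    touch = sum(1 for d in touched_days if np.is_busday(d))
    return touch / cycle

# Two weeks of steady weekday work: E = 10/10 = 100%.
E = efficiency(
    touched_days=["2009-10-05", "2009-10-06", "2009-10-07", "2009-10-08",
                  "2009-10-09", "2009-10-12", "2009-10-13", "2009-10-14",
                  "2009-10-15", "2009-10-16"],
    created="2009-10-05",
    deleted="2009-10-19",
)
```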

First, I believe it would make a great leading indicator for certain risks on a given feature. If you create a branch and it sits idle for a few days (E = 0), something is blocking people from working on that feature. It could be a known impediment, reported in the stand-up, but it could also mean that they’ve been pulled onto something else and this feature is at risk of (severely) missing its SLA from being resource-starved.

Next, if a feature’s touch time is falling (dE/dt is negative), then something has blocked the feature from being worked on effectively. Same reasons as above.

Finally, an aggregate of touch time across features would be useful as an indicator of churning and focus, possibly as an indirect measurement of the effectiveness of a swarming approach, and possibly as a way of tuning WIP limits around development down to the proper point.
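A dashboard sketch flagging the first two conditions might look like this; the thresholds, data shape, and branch names are assumptions for illustration, not a prescription:

```python
def flag_risks(branch_stats, idle_days=3):
    # branch_stats: branch name -> list of daily E values, most recent last.
    flags = {}
    for branch, history in branch_stats.items():
        recent = history[-idle_days:]
        if recent and all(e == 0 for e in recent):
            flags[branch] = "idle (E = 0): blocked or starved, SLA at risk"
        elif len(recent) >= 2 and recent[-1] < recent[0]:
            flags[branch] = "falling touch time (dE/dt < 0): investigate"
    return flags

print(flag_risks({"photo-upload": [0.8, 0.0, 0.0, 0.0],
                  "share-book": [0.9, 0.7, 0.4]}))
```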

Nicely, all of these uses could fit into a decent dashboard and be generated automatically with no changes in how developers work, other than ensuring that branches are trimmed back when features are released into production (or whatever serves as your company’s "end of the line" for cycle time until you can measure meaningfully all the way to production).

The key things that justify even looking into this are that there are no changes to existing behavior and that nothing points at individuals. Ideally, these measurements would only be used within the team to help find improvement opportunities, but that is a cultural aspect independent of the measurement.

Written by erwilleke

October 17th, 2009 at 9:30 am


Multi-pair stories: a response


This post is a response to Liz’s post on Mocks, outside-in, swarming features and guesswork, specifically the bit about swarming. I was telling her how swarming tended to work on the Inkubook.com team when I was there and she asked me to make my comments public… here they are.

How we swarmed an MMF:
1.a – Eric (as architect) and Jacob (as UI/graphics guy) pair on getting the rough shape of the UI doing what it ought to – shaped right, primary interactions at least identified.

1.b – Jeff (dev) and Byron (dev) code the last bits of feature n-1.

1.c – Cathy (test) and Ken (marketing director) provide real-time acceptance of the others’ work, both at the developers’ workstations and in the pre-deploy environment.

 

2.a – Eric & Jeff start working on the next layer of collaborators down for the primary UI story, pushing sequentially through the thick client, into the service layer, and when needed pulling in Matt (DB arch, shared across teams) to help with the database scripting bits

2.b – Jacob and Byron start fleshing out the UI and getting the primary interactions working better.

2.c – Cathy finishes validation of feature n-1 and pushes it to production with Ken’s approval, then starts talking to the devs about how she’s going to test things and provides early feedback on how things feel, letting us know where things aren’t right.

2.d – Ken works on the marketing around feature n-1 and provides early feedback to the team about how it looks and what he thinks.

 

3.a – With the first full pass complete, Eric, Byron, Jeff, and Jacob work in various combinations to broaden out the feature, generally starting at the UI layer and pushing back towards the DB through the service layer, although we would often pause to design the DB interaction and service layer for consistency with previous work and with foreknowledge of likely needs from the current MMF.

3.b – Cathy provides feedback on intermediate builds (multiple/day) while Ken gets demos of the UI at least daily, often nearly continuously during the early stages of complicated UI features.

 

4.a – When the feature’s nearly ready (based on Ken’s opinion of “market ready” and the team’s opinion of “production/quality ready”), Eric and Jacob spin off to start the next effort with Ken’s deep input.

4.b – Jeff and Byron finish up the feature with Cathy’s help, and the cycle starts anew.

 

5.! – Every week or two, James (director of IT) brings in a decent (not just pizza) lunch to celebrate how things are going.

Conversation

That’s generally how things flowed, but not always, and since I’ve been gone a few months I’m sure this is an idealized version of how things actually worked; but that’s my memory. The names and roles were accurate, but they weren’t strict by any means, and people filled the roles that were needed at a given time.

The important part from a flow perspective is that a pair gets into the feature a half-day or a day ahead of the rest of the team. Thus, instead of a single "point" (the UI) preventing parallel development, we create the skeleton of the feature first and then continually tie more work onto it. We rarely attempted to declare what "done" meant before starting a feature, instead starting with a vague goal and developing it in collaboration with Ken, who spoke for the broad customer base.

Please note that it took quite a long time to get to the point I described, and we tried a number of variations as a team, which I speak about here. I imagine things have changed since then, as the team was generally very good about adapting the process to fit what needed to be done.

Written by erwilleke

October 17th, 2009 at 7:23 am


How time flies…


I’m doing a bit of research around Inversion of Control and Dependency Injection today for work, and I’ve been continually struck by how long these concepts have been around relative to the length of my career. Martin Fowler’s early article on the topics was published in January 2004, which was AFTER his Patterns of Enterprise Application Architecture book was released. These were the materials hitting my desk about the time I exited my first professional engagement (three years on a very non-agile C++/COM/WTL desktop application) and started the broadening stage.

Around that time, we took the principles and practices to heart on a year-long .NET desktop application (that arguably did more than the three-year project we did previously), but the .NET platform was still pretty new (just into 1.1), as were all of the frameworks around DI, so we ended up doing things more like this as a result, if I remember correctly. We were also using nUnit and feeling the design benefits of having good tests.
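For anyone meeting these ideas fresh, the core of DI is small enough to show in a few lines. This is a hand-rolled constructor-injection sketch in Python rather than the .NET code we actually wrote, with made-up class names:

```python
class SmtpNotifier:
    def send(self, message):
        print(f"smtp: {message}")

class FakeNotifier:
    """Test double: records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, message):
        self.sent.append(message)

class OrderService:
    def __init__(self, notifier):
        # The dependency is handed in (injected) rather than constructed
        # here, so tests can substitute FakeNotifier without touching SMTP.
        self.notifier = notifier

    def place(self, order_id):
        self.notifier.send(f"order {order_id} placed")

OrderService(FakeNotifier()).place(42)  # no real e-mail involved
```

An IoC container automates this wiring; the design benefit (and the testability our nUnit tests exploited) comes from the injection itself, container or not.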

My point in all of this is just how not new these things are. And yet I’m still regularly teaching people the basics of these approaches, and finding people in our industry who are otherwise quite capable yet entirely ignorant of SOLID, DRY, DI, and many of our other basic assumptions.

I think that we, as a community [1], need to spend more time focusing on the basics of our craft. We have some prominent members who are quite effectively sharing their knowledge, yet many conversations I have at events like Agile 200x carry an almost palpable disdain for teaching those basic concepts.

Imagine a community culture where everybody teaches and mentors… I can… and I love what I see. Join me?

Written by erwilleke

October 15th, 2009 at 2:02 pm


The relationship between Agile and Lean – one man’s perspective


This is a copy of a post I wrote to leanagile on 6 Oct, 2009. I have copied it here because I feel that it explains my perspective well.

One of the things that has caused me to embrace both Agile and Lean is that they are not closed systems. Each of them (explicitly, I believe) recognizes the need for learning and self-redefinition. As such, I tend to assume that each has already absorbed the best of what the other has to offer. In many cases, the two were saying the same thing using different names. In other cases, one or both holds something as an underlying assumption (I’m thinking of the role of People) that doesn’t get spoken of much, which causes confusion. The primary discussion of value to me is in understanding how to more properly utilize the thinking patterns coming from both sources. In Lean, I find great value in the attempts to provide a scientific basis for why things work, giving me mental models that help me understand future situations. Others perceive this same aspect as a negative, dehumanizing, theorist perspective. I respect that view, even though I don’t agree with it.

I personally perceive that Scrum as a framework has been defined as a closed system. This turned me off of it for a long while, until I saw people like Tobias Mayer at work… now I question my understanding… thank you, Tobias.

I personally perceive that Kanban as a framework has been defined as an open system, but is at risk of being turned into a closed system. I do not wish to see this happen.

Regarding the CAS [1] aspects of this thread:
I believe it is an incredibly valuable systems-thinking tool and perspective to evaluate what happened in a situation. As such, I believe it is also a valuable tool to predict what may happen in future situations. This puts it in the same bucket as lean, agile, and many of the other tools at our disposal. I wouldn’t “Go CAS” any more than I would “Go Lean” or “Go Agile”, but I will happily add them to the set of lenses I use to understand and inspire in a given situation.

I hope this perspective helps, or at least inspires a bit of thought.

[1] The thread had a deep discussion of the role Complex Adaptive Systems theory could play in Lean transformations.

Written by erwilleke

October 6th, 2009 at 6:58 am


Kanban Team beats Scrum Team!


Yesterday, Karl Scotland and I were able to witness an amazing event with one of the teams here in the UK. A kanban team measurably demonstrated 5x the results of a Scrum team with comparable software experience and capabilities working towards the same goal. Not only that, the same results happened two efforts in a row! On top of all this, the kanban team was able to over-deliver continually during the effort, although we did not count this towards the final measurements.

 

Amazing evidence in favor of the way kanban teams work, right?

 

Too bad it was foosball… today we would play for money, but the Scrum team seems to have disappeared after two successive 10-2 games.

Written by erwilleke

October 2nd, 2009 at 3:58 am
