This is a follow-on from the same conversation with Liz as the last post. We were talking about how Inkubook used branch by feature to keep our main line clean and to avoid additional complexities around the “Share code” bullet from her post:
Share code. If the teams check in before the code is finished, their scenarios will fail. If they check in examples which haven’t yet been coded, those examples will fail. This won’t be a problem if no one else is modifying the code base; however, if it’s a subset of a much larger team breaking the build can cause havoc, and the habit of keeping builds green is a good one. Try distributed version control, which will allow a team to check in on USB keys or a temporary space until the code works. (There are techniques for getting, say, Mercurial, to work alongside, say, Subversion – mostly by making each system ignore the other). You could also pass around patch files to keep the code in sync.
Inkubook used branch by feature almost exclusively. We did lose some of the benefits of Continuous Integration, but by recognizing the principles over the mechanisms, we managed to mitigate most of the losses. Our two primary tools were keeping cycle time low and thoughtfully choosing features to avoid parallel work in the same area of the application. These two practices avoided much of the integration pain we would otherwise have seen. We were lucky enough to disregard them once, early in our flow days, and two days of merge pain taught us the lesson for good.
This was the point at which I realized that our branches could be used as nearly perfect indicators of cycle time. We had exactly one branch per MMF; we created it when we started working on a feature and deleted it when the feature was pushed to production. That's useful.
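Under that scheme, cycle time falls straight out of the branch lifetime. A minimal sketch, assuming you can pull creation and deletion timestamps for each branch from your version control history (the branch names and dates here are invented for illustration):

```python
from datetime import datetime

def cycle_time_days(created, deleted):
    """Cycle time for one MMF: from branch creation to branch deletion, in days."""
    return (deleted - created).days

# Hypothetical branch lifetimes, one branch per MMF.
branches = {
    "mmf-photo-upload": (datetime(2009, 3, 2), datetime(2009, 3, 13)),
    "mmf-gift-cards":   (datetime(2009, 3, 9), datetime(2009, 3, 16)),
}

for name, (created, deleted) in branches.items():
    print(name, cycle_time_days(created, deleted))  # prints days per feature
```

The one-branch-per-MMF convention is what makes this trivial: no tagging, no time tracking, just the branch's own lifespan.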
Even more interesting, though, is that we could use them as a reasonable approximation of Touch Time! It has always been quite a challenge to find a non-invasive way to calculate touch time on a per-feature basis. A 24-hour period with a checkin is "touched"; the same period without one is "not touched". Anything more fine-grained than that probably isn't valuable, and good agile developers check in at least daily when they're working on something. This would be more than sufficient for how I'd use touch time… exciting!
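The 24-hour-period rule reduces to counting distinct calendar days that contain at least one checkin on the feature branch. A minimal sketch (the commit timestamps are made up):

```python
from datetime import datetime

def touch_time_days(commit_times):
    """Touch time: the number of distinct 24-hour periods (here, calendar
    days) containing at least one checkin on the feature branch."""
    return len({t.date() for t in commit_times})

# Hypothetical commit timestamps pulled from a feature branch's log.
commits = [
    datetime(2009, 3, 2, 10, 15),
    datetime(2009, 3, 2, 16, 40),  # same day: still one "touched" period
    datetime(2009, 3, 3, 11, 5),
    datetime(2009, 3, 6, 9, 30),   # gap on the 4th and 5th: not touched
]
print(touch_time_days(commits))  # → 3
```

Multiple checkins in a day collapse to one touched period, which is exactly the coarseness argued for above.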
About touch time
Others have written quite a bit about touch time and what it means in manufacturing, design, and queuing theory. I'm not going to try to produce a comprehensive description of how it works. Instead, I'm going to talk about what I would use it for and what I would not. These are "would" because I haven't had the data before, so this is theory, not practice.
Before I start, I'm going to make a couple of definitions. They probably don't follow standard usage, so be warned. I'm going to define efficiency ("E") as the ratio of Touch Time to Cycle Time (TT/CT), where both are measured in 8-hour weekdays, so two weeks of steady work is 100%, regardless of weekends, holidays, etc.
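With that definition, E only needs a weekday count in the denominator. A sketch using only the standard library, assuming touch time is already a count of touched weekdays (holidays are ignored for simplicity):

```python
from datetime import date, timedelta

def weekdays_between(start, end):
    """Count weekdays in [start, end); weekends excluded, holidays ignored."""
    days = 0
    d = start
    while d < end:
        if d.weekday() < 5:  # Mon=0 .. Fri=4
            days += 1
        d += timedelta(days=1)
    return days

def efficiency(touched_weekdays, created, deleted):
    """E = Touch Time / Cycle Time, both in 8-hour weekdays."""
    return touched_weekdays / weekdays_between(created, deleted)

# Two weeks of steady work: 10 touched weekdays over a 14-calendar-day branch.
print(efficiency(10, date(2009, 3, 2), date(2009, 3, 16)))  # → 1.0
```

Because both numerator and denominator skip weekends, a branch worked every weekday comes out at 100% even though it spanned two weekends.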
First, I believe it would make a great leading indicator for certain risks on a given feature. If you create a branch and it sits idle for a few days (E = 0), something is blocking people from working on that feature. It could be a known impediment, reported in the stand-up, but it could also mean they've been pulled onto something else and the feature is at risk of (severely) missing its SLA through resource starvation.
Next, if a feature's touch time is falling (dE/dt is negative), then something has blocked the feature from being worked on effectively, for the same reasons as above.
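Both risk signals come out of the same touched-day data. A sketch, assuming you already have the list of days a branch was touched; the window sizes and the comparison of adjacent windows as a stand-in for dE/dt are arbitrary choices, not part of the original idea:

```python
from datetime import date, timedelta

def touched_in_window(touched_days, end, window):
    """Count touched days in the `window`-day span ending at `end` (exclusive)."""
    start = end - timedelta(days=window)
    return sum(1 for d in touched_days if start <= d < end)

def risk_flags(touched_days, today, window=5):
    """Flag a feature branch as idle (E = 0 recently) or falling (dE/dt < 0,
    approximated by comparing two adjacent windows)."""
    recent = touched_in_window(touched_days, today, window)
    prior = touched_in_window(touched_days, today - timedelta(days=window), window)
    return {"idle": recent == 0, "falling": recent < prior}

# Hypothetical: steady work a week and a half ago, nothing since.
touched = [date(2009, 3, 2), date(2009, 3, 3), date(2009, 3, 4)]
print(risk_flags(touched, date(2009, 3, 13)))  # → {'idle': True, 'falling': True}
```

An "idle" flag says someone should ask why in the next stand-up; a "falling" flag says the feature is drifting toward idle.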
Finally, an aggregate of touch time across features would be useful as an indicator of churning and focus, possibly as an indirect measurement of the effectiveness of a swarming approach, and possibly as a way of tuning down WIP limits around development to the proper point.
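The aggregate view is just per-feature E rolled up. A minimal sketch, with invented per-feature numbers; the interpretation (low mean E with many open branches suggests churn and too-high WIP) follows the reasoning above rather than any standard formula:

```python
def mean_efficiency(features):
    """Average E across open features. `features` maps a branch name to
    (touched weekdays, cycle-time weekdays). A low mean across many open
    branches suggests churn rather than focused, swarmed work."""
    ratios = [touch / cycle for touch, cycle in features.values()]
    return sum(ratios) / len(ratios)

# Hypothetical per-feature (touched weekdays, cycle-time weekdays) pairs.
features = {
    "mmf-photo-upload": (8, 10),
    "mmf-gift-cards":   (3, 12),
    "mmf-checkout":     (2, 10),
}
print(round(mean_efficiency(features), 2))  # → 0.42
```

Watching this number while lowering WIP limits gives a feedback loop for finding the point where work actually flows.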
Nicely, all of these uses could fit into a decent dashboard and be generated automatically, with no change in how developers work beyond ensuring that branches are trimmed back when features are released to production (or whatever serves as your company's "end of the line" for cycle time, until you can measure meaningfully all the way to production).
The key things that justify even looking into this are that it requires no changes to existing behavior, and that nothing in it points at individuals. Ideally, these measurements would only be used within the team to help find improvement opportunities, but that is a cultural aspect independent of the measurement.