Based on a question posed on kanbandev by Jeff Anderson, I’m sharing these four slides again to illustrate one way of representing parallel work streams that have to merge. They were originally presented at Lean & Kanban 2009 in Miami.
First, planning, prioritization, and initial comp work were done by the BA group on the left side. This was handled at the requirement level. Our requirements were roughly equivalent to an MMF and could vary greatly in scope (maybe a 25:1 variation?).
Next, development happened in the main central area, with broken-out tasks moving through the top portion of each swimlane. Since our WIP limit tended to be 2, we gave each requirement its own complete swimlane.
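That pull rule can be sketched as a tiny model: at most two requirements in flight, each with its own swimlane. This is a hypothetical illustration of the board policy, not anything we actually ran; the class and method names are mine.

```python
class DevBoard:
    """Sketch of the central development area: one swimlane per active requirement."""

    def __init__(self, wip_limit=2):
        self.wip_limit = wip_limit
        self.swimlanes = {}  # requirement name -> list of broken-out tasks

    def pull_requirement(self, requirement, tasks):
        """Open a new swimlane, refusing the pull if the WIP limit is reached."""
        if len(self.swimlanes) >= self.wip_limit:
            raise RuntimeError("WIP limit reached: finish a requirement first")
        self.swimlanes[requirement] = list(tasks)

board = DevBoard(wip_limit=2)
board.pull_requirement("Req A", ["task 1", "task 2"])
board.pull_requirement("Req B", ["task 3"])
# A third pull would raise until one of the two swimlanes is cleared.
```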
The very top chunk was for high-priority bugs (a different SLA, as we’d call it now) that the development team would pick up before any other work on the board. Developers also handled bugs in the right-hand zone if any were found during test execution.
The QA team would track the preparation of their test plans and scripts in the middle section below the development work (after planning the work along with the developers). When all test scripts were written and all code was ready, the entire requirement would move over into a designated SQA environment. We had three different environments that could be used for verification and/or experimentation. The horizontal swimlanes in SQA tracked what was happening in each one at a given time so we knew where to look for reproductions.

If bugs were found, they’d go in the tiny swimlane under each environment, allowing them to be worked, submitted for retest, and marked as done. When all bugs were fixed (or deferred) and the product manager liked what he saw, we moved the requirement over to “ready to deploy”. We often deployed immediately, but occasionally batched things together and deployed to support a specific marketing campaign.
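The left-to-right flow above amounts to a simple linear lifecycle per requirement, with one gate: a requirement can’t leave SQA while bugs are open (not yet fixed or deferred). Here is a minimal sketch of that flow; the stage names are my paraphrase of the board’s columns, not the actual labels.

```python
# Stage names are paraphrased from the post, not the real column labels.
STAGES = [
    "planning",         # BA group: planning, prioritization, initial comps
    "development",      # broken-out tasks move through the swimlane
    "sqa",              # one of the three verification environments
    "ready_to_deploy",  # PM sign-off; deployed immediately or batched
    "deployed",
]

class Requirement:
    def __init__(self, name):
        self.name = name
        self.stage = 0
        self.open_bugs = 0  # bugs in the tiny swimlane under the SQA environment

    def advance(self):
        """Move one column right; SQA can't be exited with open bugs."""
        if STAGES[self.stage] == "sqa" and self.open_bugs > 0:
            raise RuntimeError("fix or defer open bugs before PM sign-off")
        self.stage = min(self.stage + 1, len(STAGES) - 1)

req = Requirement("example requirement")
req.advance()  # planning -> development
req.advance()  # development -> sqa
print(STAGES[req.stage])  # -> sqa
```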
Finally, we had another type of creative work that was independent of the development team. This consisted of our awesome graphics guy crafting new themes, backgrounds, borders, and anything else that would give users more options to play with. He did his work here, and when he had a good set to release (usually tied to a marketing campaign), it would go straight into either an SQA environment or directly to “ready to deploy”.
Just as a bonus, here’s a picture of what it looked like at the time (sorry for the poor lighting; it’s the best I had).
Green – standalone – Requirement
Yellow – standalone – Task (broken down items used to implement a requirement)
Orange – adorner – External impediment
Purple – adorner – Team member token – identified who was working on something
Blue – standalone or adorner – Bug. Adorner if in/pre-SQA; standalone if in the high-priority lane or reinjected into the system.
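The legend above can also be read as data: each color maps to a card kind and a meaning, where “adorner” cards attach to a standalone card rather than occupying a lane slot of their own. A hedged sketch of that mapping (the names and structure are mine):

```python
from enum import Enum

class Kind(Enum):
    STANDALONE = "standalone"
    ADORNER = "adorner"
    EITHER = "standalone or adorner"

# Card legend from the board, encoded as color -> (kind, meaning).
CARD_LEGEND = {
    "green":  (Kind.STANDALONE, "Requirement"),
    "yellow": (Kind.STANDALONE, "Task (broken-down item implementing a requirement)"),
    "orange": (Kind.ADORNER,    "External impediment"),
    "purple": (Kind.ADORNER,    "Team member token (who is working on it)"),
    "blue":   (Kind.EITHER,     "Bug (adorner in/pre-SQA; standalone otherwise)"),
}

kind, meaning = CARD_LEGEND["blue"]
print(kind.value, "-", meaning)
```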
Hope this helps!