
A3 Management and Stock Issues

There are zillions of ways to make an effective A3. I needed to find a way I could understand intuitively, so I could get past trying to figure out the technique and move quickly on to using it for actual hard problems.

This week, Steven introduced us to the A3 management system and specifically the A3 format for delivering recommendations to our customer.

This is based on a Lean book, Managing to Learn (one of many books Steven lugged all the way across the Atlantic not on his Kindle). The deal is, the A3 is a paper size (approx. USA 11×17), and there are two columns with particular formats for presenting the nature of the problem and the proposed recommendation to fix it, in a concise and collaborative manner.

As I was struggling to understand the mountain of A3s (all different) that Steven had brought along as examples, I noticed a pattern to them that meant something to me:

  • Harms
  • Significance
  • Inherency
  • Plan
  • Solvency

The A3 fits the outline of a very old-school stock-issues high school policy debate case. Of which I’ve written more than a few. (Cool kids don’t debate this way anymore, I’m told.)

  • Harms: the problem
  • Significance: what is the extent of the problem, and what metrics can be used to assess it before and after?
  • Inherency: what structural or attitudinal factors are reinforcing or worsening the problem?
  • Plan: the recommendation
  • Solvency: how will the recommended action steps resolve the problem, and which metrics will be used to measure success?

And here’s what it might look like in practice:

  • Harms: four teams are developing working software, but their integration and stabilization phases are trainwrecks and they have all come to dread their merges.
  • Significance: Team A has burned a week of their latest two-week iteration just on merging. Many of Team A’s changes from their previous iteration were lost when Team C merged over the top of them, and this will take at least another week to fix. Meanwhile, Team C’s release, which seemed tested and ready, has been delayed by nearly a month fixing bugs discovered after merging.
  • Inherency: the root of the problem is team branches. All four teams are working on the same product, and following a similar release cadence. All four teams easily decompose their work into small increments of working software that they are able to test and release every few days at best, every two weeks at worst. Isolating the teams doesn’t benefit anyone, and has led to the bad habit of isolated testing and last-minute merges. No one team or role has responsibility for post-integration testing. Teams can’t easily understand or resolve merge conflicts found after weeks of isolation, so instead they tend to delete the changes they don’t recognize. Minor repairs to their process (many tried and failed already) won’t solve the fundamental problem that needless isolation causes harms. Only a comprehensive new branching strategy will solve it.
  • Plan: implement a branch by quality strategy whereby the four teams, who are, after all, all working on a single product which is ultimately totally integrated, do their primary development together in one Dev branch.
  • Solvency: combining team branches into one will actually eliminate most code conflicts, and make any remaining conflicts smaller, simpler, and quicker to resolve. Earlier integration will force a number of additional practice improvements they are currently avoiding, most especially teamwork and coordination upon each checkin. Bugs caused by integration can be detected earlier. The teams will experience pain at first, especially because they have limited automated testing and regression will place new demands on their testers, but it will expose better data about their specific testing priorities, leading to better fixes in the long term. Finally, the early and frequent integrations should instill a sense, currently missing and sorely needed, that they are ultimately all one product team and that their success is measured not by the achievements of any one sub-team but by the value of the finished product.
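If it helps to see that outline as something more structured (it helped me), here’s a rough sketch in Python. To be clear, the class, the field names, and the layout are mine alone and not the canonical template from Managing to Learn; it’s just the five stock issues as a fill-in-the-blanks template, populated with an abbreviated version of the example above.

from dataclasses import dataclass, fields

@dataclass
class StockIssuesA3:
    """One A3, organized around the five stock issues from policy debate."""
    harms: str         # the problem
    significance: str  # extent of the problem, and metrics to assess it before and after
    inherency: str     # structural/attitudinal factors reinforcing the problem
    plan: str          # the recommendation
    solvency: str      # how the plan resolves the problem, and how success is measured

    def render(self) -> str:
        """Dump the sections in order, ready to lay out on the A3's two columns."""
        return "\n\n".join(
            f"{f.name.upper()}\n{getattr(self, f.name)}" for f in fields(self)
        )

# The branching-strategy example from above, abbreviated:
merge_pain = StockIssuesA3(
    harms="Four teams dread their integration and stabilization phases.",
    significance="Team A burned a week of a two-week iteration on merging; "
                 "Team C's 'ready' release slipped nearly a month on post-merge bugs.",
    inherency="Team branches isolate naturally integrated work, so conflicts "
              "surface late and nobody owns post-integration testing.",
    plan="Branch by quality: all four teams develop together in one Dev branch.",
    solvency="Fewer, smaller conflicts; earlier bug detection; shared ownership "
             "of the finished product.",
)
print(merge_pain.render())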

So there you are. If you were a traditional-style high school policy debater in the USA in the late 1980s and you now want to know a key Lean management practice… yeah, OK, I’m the only one, aren’t I? Well, I’m good to go now.

And that’s my message here. I’m geeking out a bit about my modest debater past, but the real takeaway here is that sometimes I let learning get in the way of my learning. My new A3 trick is probably sub-optimal in lots of ways, but it’s superior to the A3s I wasn’t going to write at all because I didn’t know how.


Vertical slices and SOA

Even the term “vertical slice”, a common stumbling block in agile adoption, kinda implies a large-scale n-tier application. Modern architectures and agile can play nicer together than that!

“Story sizing”, decomposition, vertical slice of functionality, Minimally Marketable Feature (MMF), Minimally Viable Feature (MVF), and my personal least-favorite, Potentially Shippable Product Increment (POS*). I think it’s the biggest hurdle for orgs moving from not-agile to agile. I think many other problems with initial adoption (estimation, timebox sizing) boil down to this one.

Every dev team I see trying to get started with this initially tries exactly the same wrong thing, usually because it’s how they’ve organized their work in their not-agile process before: they want to split things up by architectural layers, and build, let’s say, all of the database and then all of the business layer and then all of the UI.

Any time I see a sentence with “do all of… and then all of… and then all of…”, what’s that remind me of? Oh yeah: waterfall. There are reasons we devs cling to this in spite of ourselves. Maybe another post another day.

The thing is, “vertical slices” aren’t satisfying either. Every single team I’ve worked with resists and/or struggles with this for basically the same reason: the users asked us for an epic-sized feature because that’s what they want. They don’t want a slice of a feature, they want a feature. One of the cornerstones of agile is that we’re doing these short iterations in order to get feedback from users. That’s hard to do when they’re inherently unsatisfied with these ugly proto-features they don’t want (and they’re deeply alarmed when someone calls them “potentially shippable”)!

I discovered an interesting thing at one of my customers recently, though. We struggled with “vertical slices” vs. Big Database Up Front for like two days, and only then did I find out how much they’d worked to transition their legacy LOB apps into a SOA model: collections of beautifully loosely-coupled services and APIs with clean interfaces talking to each other to achieve some nice user-facing result.

Wow! This was exactly the hook I needed. Because what is a service or an API if not a neat encapsulation of a small logically-contained bit of functionality? I realized that even the term “vertical slice” implies a traditional n-tier architecture in kind of a large-scale sense. Today’s SOA (is that still what we call it?) has already broken down those giant tiers into little slices. The team didn’t even realize they were already doing it. Each service might have its own little n tiers, but on a much smaller scale. Small is exactly what we need!
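To make that concrete, here’s a toy sketch, entirely made up and not from my customer’s actual system: one small service that carries its own miniature tiers (a data bit, a rule bit, and a thin interface on top), small enough to build and test inside a single iteration.

from dataclasses import dataclass

# -- "data" tier: wherever the rates actually live (hard-coded for the sketch) --
LOYALTY_RATES = {"bronze": 0.02, "silver": 0.05, "gold": 0.10}

# -- "business" tier: the rule the users actually care about --------------------
def discounted_total(subtotal: float, tier: str) -> float:
    rate = LOYALTY_RATES.get(tier, 0.0)
    return round(subtotal * (1 - rate), 2)

# -- "interface" tier: the contract other services (or a future UI) call --------
@dataclass
class DiscountRequest:
    subtotal: float
    loyalty_tier: str

def handle(request: DiscountRequest) -> dict:
    """The service's public entry point: request in, plain result out."""
    return {"total": discounted_total(request.subtotal, request.loyalty_tier)}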

My customer got stuck trying to decompose from the epic feature level, still thinking about all the little services they’d need to assemble (plus BDUF) in order to hook up a UI and show a “vertical slice” to the user. They didn’t see their services as value in themselves, but I think the value is right there. APIs don’t have a user interface, but, um, the “I” stands for “interface”. They encapsulate something someone finds useful, and they are independently testable. Better yet, they almost demand automated testing, a practice we already wanted to reinforce. Imagine: at the iteration review, sure, the team should demo UI mockups early and often to get feature-related feedback from users… but can’t they also “demo” individual APIs (that implement underlying business capabilities and algorithms that the users do care about) by reviewing the acceptance criteria for the service and showing off a suite of automated test results to prove that the logic works?
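Picking up the made-up discount service from the sketch above, the “demo” could simply be the acceptance criteria written as an automated test suite. None of this is a real project’s tests; it just shows the shape of what a team could put on the screen at a review.

# test_discount_service.py -- acceptance criteria for the (made-up) discount service,
# written as tests the team could walk through at an iteration review.
import pytest

# Assumes the service sketch above was saved as discount_service.py
from discount_service import DiscountRequest, handle

def test_gold_members_get_ten_percent_off():
    assert handle(DiscountRequest(subtotal=100.0, loyalty_tier="gold")) == {"total": 90.0}

def test_unknown_tier_pays_full_price():
    assert handle(DiscountRequest(subtotal=50.0, loyalty_tier="plastic")) == {"total": 50.0}

@pytest.mark.parametrize("tier,expected", [("bronze", 98.0), ("silver", 95.0)])
def test_each_published_tier_rate(tier, expected):
    assert handle(DiscountRequest(subtotal=100.0, loyalty_tier=tier)) == {"total": expected}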

I guess my point is that, as it always has, agile practice goes hand-in-hand with what we know about how to architect high-quality, maintainable software. I was just pleased to understand this in a new (to me) way.

* j/k. But I do hate that term.