(Once upon a time I had another blog at a similar URL, which I hardly ever updated and which was consequently hardly ever read. I’m posting more frequently now, and since I have a slightly higher readership I thought I’d share some of the old posts I think still stand up. The project I mentioned shipped and was a success.
The title of this blog post comes from a Bad Religion song.)
For those not familiar with the ideas behind Lean software (and even for those who are!), please check out Competing on the Basis of Speed, a talk given to Google by Mary Poppendieck in 2006. One of my work goals for 2010 [ and 2011! - Ed.] is to compete on the basis of speed. Specifically, I want to help my team:
- identify tech debt, defects, or process problems that are increasing lead time
- minimize or eliminate the creation of new defects and tech debt
- convince stakeholders of the value of delivering fast to get customer feedback
I think I’m already off to a good start. The aim of the project I’m currently working on is to add functionality to our product that makes it easier to integrate into customers’ existing infrastructure. The technical requirements are known to us, and we’ve finished implementing the functionality. However, we’re at a point where we’ve hit a bit of a wall - no one on the team has any experience as a consumer of this functionality (that is to say, none of us are IT administrators), and it has been difficult to get focused feedback from others within our organization who have such experience. What it comes down to is that we’ve got a feature that is technically correct (follows RFC specifications, has been load tested, etc.) but has not been tuned to customer environments.
This is not an uncommon situation in software - in fact, it’s part of the reason why “Have an embedded customer representative on the team” is a practice in Extreme Programming. However, it’s not always possible to find internal “customers” (we call them Product Managers) with extensive experience with every particular area of the field. For larger projects it may make sense to train the Product Manager in the specifics of the new functionality by having them consult heavily with paying customers, but for smaller projects this is not always feasible. My current project is less than a month old and we’re code complete, and much of the time was over the Christmas holidays with our Product Manager (and most of our customers) on vacation, so consultation wasn’t much of an option.
So, here we stand - a mostly finished project that can be release-ready within 2 weeks but that has not been fine-tuned to meet all customer requirements (as such requirements are unknown). What to do? The traditional approach within the company has been to do a beta, but these don’t necessarily solve all problems. Our betas are opt-in, and often contain very few members (less than 0.5% of our customer base). Feedback can be hard to gather as beta systems are often put into non-production situations. There is also quite a bit of overhead involved in coordinating and communicating with all of the customers involved.
Instead, I’m pushing towards getting this thing released to all customers as early as possible. The more people playing with it the better. The code is not buggy (we hope!), it just may be lacking some specific features or compatibility. Rather than wait around for 2 or 3 months as we do research and try to completely accurately model customer scenarios (a process that’s inevitably difficult and fraught with errors), we’ll get the code out into the field.
The best case scenario is that everything we’ve done so far is adequate for the market, and there are no future requirements. This means we’ve started recognizing value 2-3 months earlier than if we had waited and done more market research, and we haven’t sat around gold plating the project for a quarter. The most likely scenario is that our code is adequate for some, but others will need some enhancements before it will work for them. We can then prioritize these enhancements based on some sort of financial metric (renewal date of customers who need the feature, likelihood that the enhancement will bring new customers, etc.) and deliver them over the next little while.
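To make the prioritization idea concrete, here’s a minimal sketch of what scoring enhancements by a financial metric could look like. The field names, weights, and dollar figures are all hypothetical illustrations, not our team’s actual model - the point is just that each request gets a number you can sort on.

```python
# Hypothetical sketch: rank enhancement requests by a simple financial
# score. All field names and figures below are made up for illustration.

def score(enhancement):
    # Revenue at risk from customers who need this to renew,
    # weighted more heavily the sooner their renewal date is.
    retention_value = (enhancement["renewal_revenue"]
                       / enhancement["months_to_renewal"])
    # Expected new business the enhancement might unlock.
    acquisition_value = (enhancement["new_deal_revenue"]
                         * enhancement["win_probability"])
    return retention_value + acquisition_value

requests = [
    {"name": "LDAP group sync", "renewal_revenue": 120_000,
     "months_to_renewal": 3, "new_deal_revenue": 50_000,
     "win_probability": 0.2},
    {"name": "Proxy auth support", "renewal_revenue": 40_000,
     "months_to_renewal": 12, "new_deal_revenue": 200_000,
     "win_probability": 0.4},
]

# Highest-scoring request first.
for req in sorted(requests, key=score, reverse=True):
    print(f"{req['name']}: {score(req):,.0f}")
```

Obviously the real inputs (renewal dates, deal sizes, win probabilities) come from Product Management and sales, not from engineering guesses - but even a rough model like this beats prioritizing by whoever asked loudest.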
The worst case scenario is by far the least likely to happen - that would be where customers get the new feature, find that it’s not quite up to snuff, and because of this decide to overhaul their IT infrastructure and rip out all of our company’s products. That’s so unlikely that it’s barely worth mentioning. Something like this is more likely in the case where we’re changing an existing feature instead of adding a new one, but even then it’s a slim, slim chance, and would be the result of a decision on the customer’s part based on emotion rather than reason.
If you presented a customer with the choice between the following two options:
- a rudimentary version of Feature X now, with improvements to come soon afterward
- a “complete” version of Feature X several months from now
… I’m willing to bet that most customers would pick the first option. The choice would be even easier once the customer realized that the “complete” version from the second option would likely have to be followed up by a release or two afterward containing improvements that the developers / Product Management failed to identify in the first go-round.
I’m excited that there seems to be some buy-in to this approach so far - hopefully it pays off for us!