
There are plenty of metrics used to track QA progress these days. Most of them are quite useful, but they all tend to be one-dimensional, in the sense that they are snapshots of a moment, or sprint, in time. This is true even of rate and trend metrics, which are incredibly important, but, again, snapshots in time.

When I worked at Symantec, I noticed this:

There were a lot of metrics employed to determine the *level* of quality achieved (allegedly) by the ship date. But no metrics were gathered, either in-process or retrospectively, to measure the *cost* of achieving that quality.

In other words, we didn't care about how efficiently that quality was achieved over the life of the project.

Your product may have shipped with high quality, but did it need to take as much time and effort as it did to achieve it?

This is an efficiency question, not a quality question per se, but the two are very much related.

The Intersection Of Quality and Efficiency

Quality and efficiency intersect because one of the major reasons software projects often go significantly past their committed schedule (and this often comes as an unexpected “gotcha” in the final phase of the project) is poor code quality. Not just initial code quality, but throughout the project.  

Another reason, which is really just an inevitable consequence of poor code quality, is defects that require multiple attempted fix cycles, spanning multiple test passes or sprints, before the defect is actually fixed. If at all.

These endless iterations of fixing, testing, and failing again add countless person weeks to almost every software project.

Yet, interestingly, this failure pattern is not uniquely captured by any other project or quality metric I have seen. Because, as noted above, all these metrics are snapshots in time, and so they fail to capture this phenomenon.

It occurred to me that there was a simple way to capture this failure pattern. I came up with a metric I called "Time to Quality", or TTQ.


Introducing A New Metric: Time To Quality (TTQ)

The metric itself is quite simple. It works like this:

For each test pass (however that is defined), track not just how many tests passed or failed, but how many tests passed on the first attempt, and how many required two, three, or more attempts before they finally passed.

As a simple example, if you ran 20 tests and 5 of them passed on the first attempt, your TTQ ratio is 25%. The higher the number, the better. It's a simple, calculable, and easily accessible indicator of your overall code quality.
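The calculation above can be sketched in a few lines of Python. The data structure here is a hypothetical one of my own choosing, not something from the article: a list where each entry is the number of attempts a given test needed before it passed.

```python
def ttq_ratio(attempts_to_pass):
    """Return the fraction of tests that passed on the first attempt.

    attempts_to_pass: list where attempts_to_pass[i] is how many runs
    test i needed before it finally passed.
    """
    if not attempts_to_pass:
        raise ValueError("no test results provided")
    first_time_passes = sum(1 for attempts in attempts_to_pass if attempts == 1)
    return first_time_passes / len(attempts_to_pass)

# The example from the text: 20 tests, 5 of which passed on the first try.
# (The spread of 2s and 3s below is illustrative.)
results = [1] * 5 + [2] * 10 + [3] * 5
print(f"TTQ ratio: {ttq_ratio(results):.0%}")  # 25%
```

Since the inputs are counts your test-management tool already records per run, this really is just a meta-analysis of existing data, not a new collection burden.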

This metric is a very good indication of initial code quality.

That's because the higher the quality of the code, the fewer times you will have to run a test against it before it passes. And vice versa. This metric is much more accurate than bug counts and trends.

TTQ is also an incredibly useful project tracking metric.

For example, if on your first test pass 70% of the tests passed and 30% failed, you can reasonably assume your project is still on track schedule-wise. But if, say, only 20% of your tests passed on the first run, then, to put it in technical terms, "Girl, you are in trouble!"

Because this means you didn't in fact run a successful first pass. And that time needs to be re-added to a future pass as quality debt.

The TTQ metric is extremely useful as what I like to call a "tripwire" metric.

If you use it consistently, it provides very accurate data that will allow the PM to tell whether the project is going off the rails long before the train actually leaves the track. 

This means proactive measures can be taken much earlier to bring things back under control, avoiding embarrassing admissions very late in the game that the project schedule is in shreds. In other words, use a TTQ threshold to define whether the project is still green, in yellow, or about to go red. Make it integral to the project status itself.
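The tripwire idea can be made concrete with a small status function. The 70% and 40% thresholds below are illustrative assumptions of mine, not values the article prescribes; calibrate them against your own project history.

```python
def project_status(ttq, green_threshold=0.70, yellow_threshold=0.40):
    """Map a TTQ ratio onto a traffic-light project status.

    Thresholds are assumed example values; tune them per team/project.
    """
    if not 0.0 <= ttq <= 1.0:
        raise ValueError("TTQ ratio must be between 0 and 1")
    if ttq >= green_threshold:
        return "green"
    if ttq >= yellow_threshold:
        return "yellow"
    return "red"

print(project_status(0.70))  # green: first pass largely succeeded
print(project_status(0.50))  # yellow: quality debt is accumulating
print(project_status(0.20))  # red: the first pass effectively failed
```

Wiring this into the weekly status report makes the green/yellow/red call automatic and depersonalized, which is the whole point of the tripwire.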

Because let’s face it, one of the consistent pathologies in software development efforts is persistent denial that things are going wrong, and in a big way, when it first becomes obvious this is the case. Everyone is afraid of being branded as “negative”, or that they are just too lazy or uncommitted to fix things, or not a “team player”. The parallel with dysfunctional families is uncomfortably obvious.

This denial and repression of what everyone knows is really going on creates the failure pattern of refusing to acknowledge the project is far, far off schedule until the very last minute.

When this can no longer be hidden from upper management, the unwelcome surprise this creates for them only serves to undermine, sometimes permanently, their faith in their own development teams. And who can blame them?

But if you incorporate TTQ consistently into your metrics and project management—and act on what that metric is telling you—you can avoid this.

Measuring Time To Quality depersonalizes the decision to acknowledge, in the project's earliest phases, that schedule and quality risks are building.

It becomes very cut and dried, a matter of numbers. Not a matter of personal heroism, which often comes at great cost to the person. 

The TTQ metric is also very useful for a project's retro/post-mortem.

It will allow the team to pinpoint exactly what sections of the code were, and remained, weak throughout the project. And by extension how efficient the team was at producing quality as such. Or how expensive.
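One way to support that pinpointing is to break TTQ down per component. This is a sketch under my own assumption that each test result can be tagged with the component it covers; the tagging scheme is hypothetical, not part of the article.

```python
from collections import defaultdict

def ttq_by_component(results):
    """Compute a first-attempt-pass ratio per component.

    results: iterable of (component, attempts_to_pass) pairs, where
    attempts_to_pass is how many runs that test needed before passing.
    """
    attempts = defaultdict(list)
    for component, n in results:
        attempts[component].append(n)
    return {
        component: sum(1 for n in ns if n == 1) / len(ns)
        for component, ns in attempts.items()
    }

# Illustrative data: "auth" needed repeated fix cycles, "billing" did not.
results = [("auth", 1), ("auth", 3), ("billing", 1), ("billing", 1)]
for component, ratio in sorted(ttq_by_component(results).items()):
    print(f"{component}: {ratio:.0%}")  # auth: 50%, billing: 100%
```

A table like this in the retro turns "engineering's code was weak" into "these two components consumed most of our fix-and-retest cycles," which is far less politically charged.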

This is a question largely avoided in project retros, partly because teams don't have the conceptual vocabulary to formulate it, and partly because it is often politically sensitive to point out consistently poor code quality from engineering.

In Summary

TTQ will provide enormous transparency and predictability to your projects at almost no cost in time or effort since it is simply a meta-analysis of metrics you are already gathering.

I have been using TTQ on my QA projects for two decades, and it has won wide acceptance among project managers I’ve trained in it, for all the reasons noted above.

Try it out, and you'll see for yourselves. It will truly be a game-changer for your team. As always, best of luck.

P.S. If you want to hear more war stories and the reasons behind the TTQ, I spoke to Jonathon Wright on the QAL Podcast.

By Niall Lynch

Niall Lynch was born in Oslo, Norway and raised in Fairbanks, Alaska, 100 miles south of the Arctic Circle. He received a BA in Religion from Reed College, and an MA in Ancient Near Eastern Literature Languages from the University of Chicago. Which of course led directly to a career in software development. Niall began working in software in 1985 in Chicago, as a QA Lead. He knew nothing about the role or the subject at the time, and no one else did either. So he is largely self-taught in the discipline. He has worked over the years in the fields of file conversion, natural language processing, statistics, cybersecurity (Symantec), fraud analysis for the mortgage industry, artificial intelligence/data science and fintech. Learning how to adapt his QA methods and philosophy to these wildly different industries and target markets has been instructive and fruitful for their development. And his. He now lives in Palm Desert, California, where he does SQA consulting and is writing a couple of novels. Send questions or jokes to NiallLynch@outlook.com