Republished with permission from Kristin’s excellent blog, thinkingtester.
In my last post, I introduced the concept of the quality maturity model: a series of behaviors, linked to attributes of quality, that help teams attain those attributes in their software.
It's important to note that adopting a quality maturity model requires the whole team to contribute to quality.
Quality is not something to be thrown “over the wall” to testers; rather it is a goal that both developers and testers share.
But how can you get the whole team to own quality?
One way is through the creation of a quality strategy. This is a document that the whole team agrees on together. Think of it as a contract that describes how quality software will be developed, tested, and released by the team.
Here I’ll discuss some of the questions you may want to answer in your team’s quality strategy.
Creating and Grooming Stories
Question: How does the team decide what stories to work on?
This could be a decision by the whole team, or by the product owner only. Or the prioritization could be done by someone outside the team.
Question: Who grooms the stories to get them ready for development?
This could be the whole team or a subset of the team. Ideally, you’d want to have at least the product owner, one developer, and one tester participating.
Question: How does the team decide who works on which story?
It could be that the developers can pick any story from the board, or it could be that developers each have stories in specific feature areas that they can choose from.
On some teams, software testers work on simple development stories as well, such as changing words or colors on a webpage or adding automation IDs to make test automation easier.
Question: What does “Done” look like for the story? Is it measured by meeting all of the acceptance criteria in the story? Is the developer required to add unit tests before the story can be considered done? How do you know that a feature is ready for testing?
On many teams, it’s expected that the developer will do some initial testing to verify what they coded is ready for further testing.
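For example, a team's definition of done might require a unit test alongside the code. Here is a minimal sketch of what that could look like; the `apply_discount` function and its rules are hypothetical, purely for illustration:

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price (hypothetical story logic)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount():
    # Acceptance criteria from the story, expressed as assertions.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(80.0, 0) == 80.0
    try:
        apply_discount(50.0, 150)
        assert False, "expected ValueError for an out-of-range discount"
    except ValueError:
        pass


test_apply_discount()
```

A test like this gives the tester a clear signal that the developer has verified the story's basic behavior before handing it off.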
Question: How will a story be handed off for testing?
On some teams, this is done by simply moving the story into the “Testing” column on the storyboard. On other teams, a more formal handoff ceremony is required, where the developer demonstrates the working story and provides suggestions for further testing.
Question: Who deploys the code to the test environment?
This seems like a trivial thing, but it can actually be the cause of many misunderstandings and much wasted time. If the developer thinks it's the tester's job to deploy the code to the test environment, while the tester assumes the developer has already done it, the tester could begin testing and not realize the new code is missing until they have spent a significant amount of time working with the application.
Question: Who will be doing the testing?
On some teams, developers can pick up simple testing stories to help improve product velocity, while the more complex stories are left for the testing experts.
Test Plan Creation
Question: Who will create the test plans? How will they be created? Where will the test plans be stored?
Some teams might prefer to do ad hoc exploratory testing with minimal documentation. Other teams might have elaborate test case management systems that document all the tests for the product. And there are many other options in between.
Whatever you choose should be right for your team and right for your product.
Question: Who will write the test automation?
On some teams, the developers write the unit tests, and the testers write the API and UI tests. On other teams, the developers write the unit and API tests, and the testers create the UI tests. Even better is to have both the developers and the testers share the responsibility for creating and maintaining the API and UI tests.
In this way, the developers can contribute their code management expertise, while the testers contribute their expertise in knowing what should be tested.
Question: Who will be doing other types of testing, such as security, performance, accessibility, and user experience testing?
Some larger companies may have dedicated security and performance engineers who take care of this testing. Small startups might have only one development team that needs to be in charge of everything.
Question: What tools will be used for manual and automated testing?
Selecting test tools is very important when you want the whole team to own testing. Developers will most likely want to use tools that use the same language they are using for development because this minimizes how much context switching they’ll need to do.
Question: Who is responsible for maintaining the tests?
It’s amazing how fast test automation can become out of date. One word change on a page can mean a failed test. Ideally, a team should have a “you break it, you fix it” policy where tests are fixed by the person who checked in the code that broke them.
If that’s not possible, at least make sure that everyone on the team understands how the tests work and how to fix them in a situation where a fix is needed quickly.
Bugs and Tech Debt
Question: How are bugs handled when found in testing? Are they discussed by the developer and the tester, triaged by the whole team, or logged on a backlog to be looked at later?
It’s often a good idea to fix bugs as soon as they are found because the developer is already working in that section of code.
Question: How will the team deal with tech debt?
Does the team have an agreement to tackle a certain amount of tech debt per sprint? Some teams have a policy that when a developer has run out of stories to work on, they pick up tech debt items from the backlog.
Releasing and Monitoring
Question: What kind of testing will you do before a release? Will there be a regression test plan that the whole team can execute together? How about exploratory testing?
One high-performing team I know gets together for exploratory testing right before they release. Using this strategy, they’ve uncovered tricky bugs and fixed them before they were released to production.
Question: How will the software be released?
In some companies, there is a release manager who takes care of executing the release. In other companies, the team members take turns releasing the software. One very helpful technique is Continuous Deployment, where the software is automatically deployed and tests automatically run to verify the deployment to each environment, saving everyone time and effort.
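As a sketch of what one of those automated post-deployment checks might look like, here is a minimal health-check validator. The endpoint shape and field names (`status`, `db`) are hypothetical assumptions, not a prescribed format:

```python
def is_healthy(health: dict) -> bool:
    """Decide whether a deployment's health response looks good.

    Expects a parsed JSON payload like:
    {"status": "ok", "version": "1.4.2", "db": "connected"}
    (field names are hypothetical).
    """
    return health.get("status") == "ok" and health.get("db") == "connected"


# In a real pipeline you would fetch this payload over HTTP (for example
# with urllib.request) and fail the deployment step if the check fails.
assert is_healthy({"status": "ok", "version": "1.4.2", "db": "connected"})
assert not is_healthy({"status": "degraded", "db": "connected"})
```

Keeping the pass/fail decision in a small, pure function like this makes the check itself easy to unit test, independent of the deployment tooling.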
Related Read: HOW TO RUN API SMOKE TESTS IN YOUR CONTINUOUS DEPLOYMENT PIPELINE
Question: How will you measure the success of the release?
Once the software has been released, it's easy for development teams to forget about it, but this is precisely when users begin working with it. What kinds of metrics might you use to measure how well your product is working? You could keep track of defects reported by customers, or look at logs for unexpected errors.
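For instance, a simple post-release metric could be the number of unexpected errors in the application logs. Here is a minimal sketch; the log format and the list of "expected" errors are hypothetical:

```python
# Errors that users routinely trigger and that we don't count (hypothetical).
EXPECTED_ERRORS = {"InvalidPasswordError"}


def count_unexpected_errors(log_lines):
    """Count ERROR-level log lines whose error type we don't expect."""
    count = 0
    for line in log_lines:
        if "ERROR" not in line:
            continue
        if not any(expected in line for expected in EXPECTED_ERRORS):
            count += 1
    return count


logs = [
    "2021-06-01 INFO user logged in",
    "2021-06-01 ERROR InvalidPasswordError for user 42",
    "2021-06-01 ERROR NullPointerException in CheckoutService",
]
assert count_unexpected_errors(logs) == 1
```

Tracking this count across releases gives the team a rough but concrete signal of whether quality is trending up or down.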
Question: How will you monitor the health of your application?
It would be a good idea to have alerts set up so that you can find out about problems with your application before your users do. Think about what kinds of behaviors you should be looking for: error spikes, slow response times, or unusual traffic patterns are common candidates.
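A basic alerting rule can be as simple as a threshold on the recent error rate. A minimal sketch follows; the threshold value and the idea of a fixed monitoring window are hypothetical choices for illustration:

```python
def should_alert(error_count: int, request_count: int,
                 threshold: float = 0.05) -> bool:
    """Alert when the error rate over a monitoring window exceeds the threshold."""
    if request_count == 0:
        return False  # no traffic in the window, nothing to judge
    return error_count / request_count > threshold


assert should_alert(10, 100)      # 10% error rate exceeds the 5% threshold
assert not should_alert(2, 100)   # 2% error rate is within tolerance
```

Real monitoring systems offer far richer rules, but even a simple threshold like this catches problems before users start reporting them.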
Quality strategies can be as varied as snowflakes. Imagine the differences between a small startup of ten people making a mobile chat app and a company of twenty thousand people designing software that flies airplanes. These two companies will need very different strategies!
You can design a quality strategy that works well for your team by discussing these questions together and drafting a strategy that you can all agree upon.
For tools and technology to help with your processes, check out this list of the 10 BEST TEST DATA MANAGEMENT TOOLS IN 2021.