Releases and release management are an essential part of our work. Releasing the right product to the customers will make them happy. Performing a smooth release will make us, those who are involved in the release, happy. 

How can we achieve a smooth release? 

Having taken part in a large number of them, I can say that there are a few key points that can help in achieving successful release management. 

Let me tell you what they are.

A successful release starts during development.

Identifying test cases

Good release management starts way before the day of the release, from the time the feature to be released is in the development phase. During this time, the feature needs to be QA signed off. This means we need to test all the possible scenarios we can think of that make sense for the given feature, in order to validate that the feature works according to the requirements. Some of these scenarios will be covered by automated tests, while for others it is either too difficult or not worthwhile to create automation.

When we identify the type of testing we need to perform for a given feature, we should consider the following: the first time this feature goes into production, we need to test it fully. We need to make sure the customers will not find any critical issues. We need to validate the requirements. And we need to ensure that the feature performs well.

But once the feature goes live, unless updates are required for it, we don’t need to test it as thoroughly with each future release of the same code base. The feature’s behavior will only change if the underlying code changes, or if something changes in one of the external dependencies it is using. The rest of the time, sanity testing of the feature will be enough, where the sanity suite would, of course, include the most important test cases.

Having said this, when the feature is in development, make sure you evaluate:

  • how many test cases you need to test;
  • how much time you have available for testing;
  • which of the test cases are critical;
  • which of the test cases you would run with each release.

This way, you can identify which test cases you will automate. Focus on having at least the critical functionality covered by automation. For the rest of the tests, create written test cases, either in your project tracking system (e.g. Jira) or in the test management tool you are using. This way, they won’t get lost or forgotten, and they can be run whenever a release requires it.
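To make the split concrete, here is a minimal sketch of how the critical tests can be tagged so they can be run on their own with each release. It uses TestNG groups; the class name, group names, and test bodies are illustrative, not from any real project:

    import org.testng.annotations.Test;

    import static org.testng.Assert.assertTrue;

    // Illustrative sketch: tagging tests by importance with TestNG groups,
    // so a release run can execute just the "critical" group, while the
    // full regression suite runs when the feature itself changes.
    public class CheckoutTests {

        @Test(groups = {"critical"})
        public void customerCanCompleteAPurchase() {
            // must-pass scenario, run with every release
            assertTrue(true, "replace with the real critical check");
        }

        @Test(groups = {"regression"})
        public void discountCodeIsCaseInsensitive() {
            // lower-priority scenario, run only in full regressions
            assertTrue(true, "replace with the real regression check");
        }
    }

With Maven’s Surefire plugin, the critical subset can then be selected with, for example, mvn test -Dgroups=critical.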

Running the tests in the CI

Once you have the automated tests, you should start running them through the CI, on the dev environments, on a periodic basis. As other features are being developed before the release happens, it’s good to ensure that the feature you want to release still works properly, especially if you already signed off on it during the sprint. Running the tests for the feature you are interested in, let’s say daily, will give you fast feedback on any fixes the feature needs, for bugs caused by other changes in the system.
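Since the same suite will later have to run against the pre-production and production environments too, it helps to make the target environment a parameter of the test run. Here is a minimal sketch of that idea; the test.env property name and the URLs are assumptions for illustration, not from any real project:

    // Reads the target environment from a system property, so a scheduled
    // CI job can run the same suite against dev, pre-production, or
    // production, e.g.: mvn test -Dtest.env=dev
    public final class EnvironmentConfig {

        private static final String ENV = System.getProperty("test.env", "dev");

        public static String baseUrl() {
            switch (ENV) {
                case "preprod":
                    return "https://preprod.example.com";
                case "prod":
                    return "https://www.example.com";
                default:
                    return "https://dev.example.com";
            }
        }

        private EnvironmentConfig() {
        }
    }

The daily CI job would then simply pass -Dtest.env=dev, and the exact same tests can be pointed at the other environments when the release phases begin.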

Early detection of potential bugs in the feature to be released gives plenty of time for fixes to be applied and for the feature to be retested, without the stress of doing this in the limited time you have in the release phase. The more coverage of the requirements you have in your automated tests, the more likely it is that you will find only a small number of bugs in the release phase.

Pre-production is important and should have a built-in buffer.

Timeframes 

The release phase usually consists of a period dedicated to testing in a pre-production integration environment, apart from the production release day or period. As a tester involved in the release, I always make sure to ask for appropriate time slots, so that the pre-production phase allows me to properly test the features to be released.

I also always plan for a buffer, because I do expect unforeseen issues to appear within the release management process, whether these are bugs in the features to be released or external factors. To mention just one of these factors: pre-production environments are test environments, and they are prone to being less reliable than they should be. Many times they either run very slowly or become unavailable for a while, just when release testing occurs.

Having an added buffer is always a good idea, and even if you don’t use that extra time, that’s fine: you can give your sign-off before the allocated testing time is up and spend the remainder working on something else. It’s worse to need a buffer and not have one, as in that case you will have to somehow fit all the needed testing into a smaller amount of time. This might lead to some test cases not being run, which in turn might lead to some bugs only being uncovered directly in production.

Scope

Another important aspect of the pre-production release, from my perspective, is having a dedicated release management coordinator. For me, this means someone who takes care of certain aspects, the first of these being checking the scope of the release. Before starting to test any release, the tester needs to know what to test. This boils down to the release scope.

Product people, aka the POs or BAs, expect certain features to go into the release, and this is agreed upon when development begins. However, the code that goes into the release might not always cover the exact scope expected by the product team. Sometimes, people will forget to push code to the branch from which the release build is made. Or worse, they might push code to this branch that should not be there: features that are not yet complete and QA signed off, or features that are not in the current scope.

To make sure the scope is achieved, the release coordinator should compare the expected scope with the actual one. For this, the expected scope should come from the Product people and should be clearly stated in some sort of document (and perhaps in your quality assurance plan). For example, you might have planning documents that show the project milestones and what is included in each milestone.

For the actual scope, a changelog should be extracted from the VCS (Version Control System, e.g. Git) the project is using. The changelog will reflect all the commits made during the development phase to the branch that will generate the release build. Hopefully, if you are working in an organized fashion, each commit will have a description pointing to the Jira item the code refers to. This way, you can see all the Jira items that have corresponding code going into the release build, and which of these items should not have been included. Of course, if you find that some of these commits are necessary bug fixes, that is perfectly fine. What you don’t want is for these commits to represent unfinished feature work, or work related to features that should not be going into production with the current release.
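As a sketch of how this comparison could be automated, the snippet below extracts Jira keys from commit messages (say, from the output of git log on the release branch) and flags anything outside the expected scope. The key pattern and method names are assumptions for illustration:

    import java.util.Set;
    import java.util.TreeSet;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Illustrative sketch: compare the Jira items referenced in the
    // changelog against the expected release scope.
    public class ScopeCheck {

        // Jira issue keys typically look like PROJECT-123
        private static final Pattern JIRA_KEY = Pattern.compile("[A-Z][A-Z0-9]+-\\d+");

        public static Set<String> unexpectedItems(Set<String> expectedScope,
                                                  Iterable<String> commitMessages) {
            Set<String> unexpected = new TreeSet<>();
            for (String message : commitMessages) {
                Matcher matcher = JIRA_KEY.matcher(message);
                while (matcher.find()) {
                    String key = matcher.group();
                    // Items referenced in commits but missing from the
                    // expected scope need a closer look: are they bug
                    // fixes, or unfinished feature work to be removed?
                    if (!expectedScope.contains(key)) {
                        unexpected.add(key);
                    }
                }
            }
            return unexpected;
        }
    }

Anything this check flags still needs a human decision, which is where the release coordinator comes in.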

Whenever a discrepancy is found between the expected and actual scope, the release management coordinator should liaise with the product team and the development team managers to identify the best approach. If the commits uncovered as being out of scope represent unfinished feature work, it is best to remove them. You are already in the pre-production release phase, and you don’t want to risk allowing the remaining code to be written and tested in a rush just so it ends up in the release. That rush can lead to test cases being forgotten and not run, and to critical bugs slipping into production. It is best, in this case, to let the remainder of the work on these features be properly done for a future release.

The testing

Once the scope is clear, the testing phase should begin. During the pre-production testing phase, you need to re-validate the entire feature you are releasing. Imagine that this is the first time you are looking at it, and test every scenario you did while in the development phase. Run the automated tests you already wrote, but in the pre-production environment, and don’t forget about the non-automated scenarios. Perform a full regression on this feature, for the simple reason that once you are in this release management phase and in the pre-production environment, there are probably other teams that will need to release during the same timeframe as you. This is basically the first time all the dependencies come together, with production-ready code, in the same environment. And this, in theory, is the configuration that will run in production starting with the release.

That is, unless some issues are uncovered during testing and fixes are required. Should that happen, you need to consider what to retest. Whether the fix is in your code or in the code of some external dependency, if the changes affect your feature in any way, you need to consider another round of testing. Make sure you retest at least the critical functionality of the feature.

This might seem boring, especially if many fixes occur during this phase. However, you cannot accurately predict the impact the fixes might have on different parts of your feature.

Small changes might cause huge side-effects, so it’s better to be bored but confident that the critical functionality is still working properly, rather than regret not having tested enough.

And of course, if a bug found during this testing phase is a minor one, don’t bother fixing it now. Again, you don’t want to rush any implementation or testing on an otherwise perfectly working feature, so as not to risk breaking it.

During the release, communication and testing are key.

Once the feature has been signed off in the pre-production environment, it is ready to go to the production one.

Communication

When it comes to the production release, the release management coordinator has a few tasks to accomplish. First of all, the release date and time should be set up well in advance and communicated to all the parties involved. I find that setting up the release date and time as a meeting in the participants’ calendars, something the coordinator can take care of, is very helpful, as it allows the participants to organize their day.

On the day of the release, the coordinator should remind everyone involved of the release timelines. Ideally, communication should happen on a channel that everyone is using, for example, Slack. Having a dedicated Slack channel where all the important aspects are communicated ensures that everybody involved has the same understanding of which step of the release is done when, what the scope is, what issues are uncovered, and who can help with these issues. Such communication ensures that someone who needs to know certain aspects of the release will see the information, as opposed to verbal communication, where that person might be absent without the person sharing the information noticing.

The release management coordinator should keep track of and announce the release milestones on this channel, such as when the release is starting and when it was signed off. Also, every party involved in the release should communicate via this channel that they are performing their assigned steps, so that everybody else is aware of the status and progress of the release. If any party needed at some point in the release is unavailable, the release coordinator should get in touch with the appropriate people, in order to keep everything running smoothly.
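If you want to go one step further and automate these announcements, a small sketch like the one below can post a milestone message to the dedicated Slack channel through an incoming webhook (Java 11+). The webhook URL and message text are placeholders; the point above is only that milestones be communicated, not that this must be automated:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Illustrative sketch: posting a release milestone to a Slack
    // incoming webhook. The URL below is a placeholder.
    public class ReleaseAnnouncer {

        private static final String WEBHOOK_URL =
                "https://hooks.slack.com/services/YOUR/WEBHOOK/URL";

        public static void announce(String milestone) throws Exception {
            String payload = "{\"text\": \"" + milestone + "\"}";
            HttpRequest request = HttpRequest.newBuilder(URI.create(WEBHOOK_URL))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Slack responded with status " + response.statusCode());
        }

        public static void main(String[] args) throws Exception {
            announce("Release testing has started on pre-production.");
        }
    }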

The testing

During the production release, the new feature should again be tested in full. This is because, although the feature was signed off in other test environments, those environments are not 100% identical to the production ones in setup and resources.

In order to prevent unforeseen poor behavior of the released feature, it is important to run all the available automated tests, together with the non-automated bits for which you have written test cases. Never give sign-off in production without having tested a feature.

Just assuming that the feature works in production does not mean it actually does.

Make sure it does, test it.

After the release

So the release is done. But making sure the feature works properly takes more than just testing it during the release phase.

Once you are done with that, make sure your automated tests are running in the CI for production as well. This will pick up any undesired behavior caused by external changes you are not aware of.

Constantly monitor the performance of the feature, to make sure that production load and customer usage are not causing a degradation in performance. And make sure you keep an eye on any problem reports coming from your customers. They can signal areas that are not working properly, so that you can quickly schedule a future update, should it be needed.

And, in case the release did not go as expected, make sure to have a post-release retrospective meeting. There you can address all the problems that occurred within the release management process and assign action items to the relevant people, so that future releases go smoothly and successfully.

By Corina Pip

Corina is a Test & Automation Lead, with a focus on testing by means of Java, Selenium, TestNG, Spring, Maven, and other cool frameworks and tools. Previous endeavours from her 11+ years of testing include working on navigation devices, in the online gaming industry, and in the aviation software and automotive industries. Apart from work, Corina is a testing blogger (https://imalittletester.com/) and a GitHub contributor (https://github.com/iamalittletester). She is the creator of a wait-based library for Selenium testing (https://github.com/iamalittletester/thewaiter) and of “The Little Tester” comic series (https://imalittletester.com/category/comics/). She also tweets at @imalittletester.