
Each year, more software companies switch to a microservices architecture using Application Programming Interfaces (APIs), because APIs make it easy for different teams on a project to access each other’s resources. APIs also enable software companies or individuals to get data from each other: for example, Twitter has an API that many small businesses use to retrieve tweets related to their business and display them on their webpages.

Another great feature of APIs is that each one is smaller than a monolithic application, which means they can be deployed frequently. At the company where I work, each team has several APIs, and we have eight different environments that we deploy to. If we were relying solely on manual testing to verify that the APIs had been deployed correctly, we’d have to run eight sets of tests for each API release. And if we were deploying more than one API at a time, which we often do because our APIs work together, we could be running hundreds of tests. Since we deploy every two weeks, all of that would amount to a lot of tedious, repetitive testing!

We’ve solved this problem by setting up automated API smoke tests that run with every deployment, to every environment. In this article, I’ll describe how we decided what to test and how we set up our smoke tests. We used Postman, Newman, PowerShell, and Octopus to set up our automated smoke tests, so I’ll be describing what we did with those tools. However, these strategies could be adapted to any continuous deployment (CD) pipeline.

Step One: Decide What to Test

The main purpose of smoke tests is to do some simple, high-level tests to ensure that the deployment was successful. This is not the place to test every single thing that your API can do. When we chose what to put in our smoke tests, here is what we considered (a short code sketch of these checks follows the list):

  • We set up one “Happy Path” test for each endpoint, that is, one test that is expected to be successful. For example, a GET request for a resource with a specific ID would be expected to return that resource. Testing each endpoint once verifies that all of them are working as expected.

  • If there were major variations in how a specific endpoint would be used, we added one or more tests to check those variations. For instance, we have a file retrieval API where a file can be retrieved by two different methods. The endpoint looks the same, but the bodies of the requests are different. So we added one test for each method.

  • For endpoints that required a certain level of security, we added a test to validate that a request without the appropriate authentication would NOT return information, and would instead return a 400-level error.

  • We did not add any other negative tests, such as a POST request with data outside the allowed parameters, because we didn’t feel that those were necessary to ensure that the API was working. We instead ran those tests as part of a nightly regression suite.
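
To make these checks concrete, here is a rough sketch of them as plain PowerShell against a hypothetical API at https://api.example.com (in our actual setup, the equivalent assertions live in the Postman collection’s test scripts):

```powershell
$baseUrl = "https://api.example.com"   # hypothetical base URL
$apiKey  = "not-a-real-key"            # hypothetical credential

# Happy path: a GET for a known resource should succeed and return that resource.
$user = Invoke-RestMethod -Uri "$baseUrl/users/340" -Headers @{ "x-api-key" = $apiKey }
if ($user.id -ne 340) { throw "Happy path check failed: wrong user returned" }

# Security: the same request without credentials should get a 400-level error.
$rejected = $false
try {
    Invoke-RestMethod -Uri "$baseUrl/users/340" | Out-Null
} catch {
    $status = [int]$_.Exception.Response.StatusCode
    if ($status -ge 400 -and $status -lt 500) { $rejected = $true }
}
if (-not $rejected) { throw "Security check failed: expected a 400-level error" }
```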

Step Two: Export Your Tests 

We used Postman to create our API tests, so once the tests were written, we exported both the test collection and the test environment as JSON files. For those who are not familiar with Postman, a test environment is a group of variables that can be referenced in a collection of tests.
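
An exported environment is just a JSON file containing those variables. Abbreviated, and with hypothetical values, it looks something like this (the exact fields vary by Postman version):

```json
{
  "name": "Orders API - QA",
  "values": [
    { "key": "baseUrl", "value": "https://qa.example.com", "enabled": true },
    { "key": "firstName", "value": "Prunella", "enabled": true }
  ],
  "_postman_variable_scope": "environment"
}
```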

Step Three: Write a Script to Run Your Tests

Postman has a command-line tool called Newman that can be used to run a test collection, so this is what we used to run our tests. We created a PowerShell script that would run the Newman command.
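
A minimal version of that script might look like the following sketch (the file names are hypothetical); Newman exits with a non-zero code when any request or assertion fails, and the script passes that code along to whatever called it:

```powershell
# run-smoke-tests.ps1 -- a minimal sketch with hypothetical file names.
newman run "orders-api.postman_collection.json" `
    -e "orders-api.postman_environment.json"

# Newman exits non-zero if any test failed; surface that to the caller.
exit $LASTEXITCODE
```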

Step Four: Create a Deployment Step to Call Your Script

Once the PowerShell script was ready, we set up a deployment step in each Octopus API deployment project that would run the test script. If the script failed, the deployment would fail.
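
Assuming a PowerShell-based “Run a Script” step and the hypothetical script above, the step body only needs to run the script and fail on a non-zero exit code, as in this sketch:

```powershell
# Octopus deployment step (PowerShell): run the smoke tests and fail the
# deployment if the test script exits with a non-zero code.
& ".\run-smoke-tests.ps1"

if ($LASTEXITCODE -ne 0) {
    throw "API smoke tests failed; failing the deployment."
}
```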

Step Five: Organize Your Variables

Variables are often a consideration when running smoke tests in different environments. For example, in your QA environment, your test user might have an ID of 340, but in your Staging environment, your test user might have an ID of 620. Another issue with variables is that sometimes, for security purposes, you might not have access to passwords or keys in Production environments. This was the case for our team, but fortunately, Octopus had the values we needed to run our tests in Production, so we solved the problem by having Octopus pass those values to our test script.

There were three different kinds of variables needed for our smoke test:

  • Type 1: Variables that were unchanging for each test environment, such as “firstName”: “Prunella”. These variables could be put directly into the Postman environment, so nothing more needed to be done with them. 
  • Type 2: Variables that changed for each test environment, but did not need to be kept secure, such as “userId”: 340. These variables were added as Octopus variables in this fashion: “smoke.userId”, and the value of the variable was set for each environment; for example, QA was set to 340, Staging was set to 620, and Production was set to 450.
  • Type 3: Variables that changed for each test environment and needed to be kept secure, such as “apiKey”: “b20628a9-3c00-4dad-b38c-0a4d2d85ffab”. Type 3 variables had already been set in the Octopus variable library. 

We then used the variables like this (a sketch follows the list):

  1. When we called the PowerShell script in Octopus, we sent the variables set in Octopus to the script.
  2. In the PowerShell script, we accepted the Octopus variables sent in and assigned them to PowerShell variables.
  3. When we used the Newman command in the PowerShell script, we sent the variables in the script to the Postman environment.
  4. Newman used the variables sent in by the PowerShell script, combined with the variables in the Postman environment, to run the Postman collection.
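
Putting steps 1 through 3 together, here is a sketch of how the hand-off might look (the variable and file names are hypothetical; Newman’s --env-var flag overrides a single value in the Postman environment at run time):

```powershell
# Step 1 (Octopus step body): pass Octopus variables to the script, e.g.
#   & ".\run-smoke-tests.ps1" -UserId $OctopusParameters["smoke.userId"] `
#                             -ApiKey $OctopusParameters["smoke.apiKey"]

# Steps 2 and 3 (run-smoke-tests.ps1): accept the Octopus values as
# PowerShell parameters, then forward them to Newman with --env-var.
param (
    [string]$UserId,
    [string]$ApiKey
)

# Type 1 variables (e.g. firstName) already live in the environment file;
# the Type 2 and Type 3 values arrive from Octopus and override at run time.
newman run "orders-api.postman_collection.json" `
    -e "orders-api.postman_environment.json" `
    --env-var "userId=$UserId" `
    --env-var "apiKey=$ApiKey"

exit $LASTEXITCODE
```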

With our API smoke tests running in Octopus for every API and in every environment, we can feel confident that any big problems with an API deployment will be detected right away. Furthermore, we can test in Production environments even when we don’t have access to sensitive API keys. Having automatic deployment smoke tests frees us up to do other types of testing, such as manual exploratory testing and security testing, as well as write more test automation for nightly regressions. 

For more how-to guides, subscribe to The QA Lead newsletter.

Keep learning and check this podcast out: How Open Source Software Simplifies Integration in Automation Engineering (with James Walker and Sanjay Kumar)

By Kristin Jackvony

Kristin discovered her love of software testing in 2009 following a career in music education. She’s been a QA Lead, QA Manager, SDET, and is currently working as a Principal Engineer at Paylocity. She believes that good testing begins with good thinking: knowing why you are testing, planning what to test, and determining the best way to test it. Find more of her on her website, thinkingtester.com.