In the digital reality, evolution over revolution prevails. The QA approaches and techniques that worked yesterday will fail you tomorrow. So free your mind. The automation cyborg has been sent back in time. TED speaker Jonathon Wright’s mission is to help you save the future from bad software.
- Subscribe To The QA Lead Newsletter to get our latest articles and podcasts
Related articles and podcasts:
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Jonathon Wright This podcast is brought to you by Eggplant. Eggplant helps businesses to test, monitor, and analyze their end-to-end customer experience and continuously improve their business outcomes.
Hello and welcome to the show. Today I've got a very special guest with some amazing tech skills that you really won't want to miss out on. He's a principal software developer in test, so we're going to be finding out how he's been establishing some of the latest capabilities within QA and testing. So, welcome to the show.
Arun Kumar Thanks. Thanks, Jonathon.
Jonathon Wright Do you want to just give us a bit of an intro about, you know, your details — the best way to get in contact with you — and, you know, also talk a little bit about where you started?
Arun Kumar Yes, sure. So I basically started my career in IT back in 2011. I had just graduated from college, and testing was like a second-class citizen for me in those days — I thought that I would be a developer. So I went into development. And then my manager asked, "You know what, we've got some nice projects in QA. Do you want to join QA?" Initially I said no, I don't like the testing space, I want to be a developer. But then somehow she convinced me. She said, "OK, this is not traditional QA. You would be a white-box tester." This was back in 2011. I said, OK, white-box tester — what do you mean by that? She said, "You'll be writing unit tests. You'll basically be dealing more with code than with process stuff." So I started my journey in 2011 as a white-box QA, and I started working for Ticketmaster back in the day. We were basically building REST APIs for them, following Scrum and agile. So for me, to be honest, I'm lucky — I never really worked in a Waterfall model.
So we were building REST APIs, as I said — we were building SOAP and REST services — and we were writing a lot of unit tests to test this stuff. It was built in Java back in those days, so we were using Java to write unit tests. I was doing a lot of pair programming with developers, helping them write unit tests and getting my head around what things to test at the unit level versus the API level. We were also using SoapUI Pro for API testing, and at times we were writing a lot of Groovy scripting for that testing. As I said, it was just a backend project — we basically delivered a lot of management APIs for them — and it was a good project; I learned a lot. I finished that project after a year and a half, and then another one came up, and I was again inclined towards Java. Obviously, I was in my early days of QA, and I thought, you know what, I don't want to change technology, I want to stick to Java. And then again my manager — I will give credit to her for that — said, "You know what? You're very young and technology moves so fast; you have this chance. Why don't you change and go to .NET?" And I said, "You know what? No, no." I wasn't into it at that time. And she said, "OK, if you start working on those things, maybe in future you can grow — .NET is growing so fast." So I said, OK, fine, and I started working on .NET. We were basically building an automation framework for them in .NET and SpecFlow for the front end. And to be honest, after some time I started liking .NET as well, because in Java I had done a lot of backend testing — I never used Selenium in Java. The first time I used Selenium, it was with .NET, and I liked SpecFlow. So I worked with .NET and SpecFlow for a bit.
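As a flavour of what lives at the unit level rather than the API level, here is a minimal sketch in plain Java — the class, method, and business rule are hypothetical, not from the actual Ticketmaster project:

```java
// Hypothetical seat-allocation rule of the kind you would pin down with a
// fast unit test before ever exercising it through a REST endpoint.
class SeatAllocator {

    // Returns how many seats can actually be sold: capacity minus holds,
    // never negative, and never more than the per-order limit.
    static int sellable(int capacity, int held, int perOrderLimit) {
        if (capacity < 0 || held < 0 || perOrderLimit < 0) {
            throw new IllegalArgumentException("counts must be non-negative");
        }
        int free = Math.max(0, capacity - held);
        return Math.min(free, perOrderLimit);
    }
}
```

A check like this runs in milliseconds with no HTTP stack involved, which is exactly why it belongs at the bottom of the pyramid instead of being re-tested through the API.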
And then I moved to the UK, and we were doing the whole re-platforming for ASOS, so I continued with .NET for a bit. To be honest, I never worked as a traditional QA — the way I see QA in the industry today. I always worked very closely with developers. Even when I was young, when I was just standing around for drinks outside the office, people would ask, "Are you a developer?" I always had that image, because I was very close to developers, and I always believed in shifting the whole testing a little bit to the left. Yes, in the early days my experience was not that great, so I wrote too many UI tests. But if you ask me now — and I have this sort of conversation with developers as well as QAs — whenever you add a test at the next level up from a unit, you should ask yourself why you are writing that test there, because finding a defect at that level happens much later, and fixing it will be more expensive. So I always believe in thinking about your tests at the start. Obviously, if you can do TDD, that's best — if you follow an ATDD or TDD approach, that's amazing. But if not, you should always ask that question of yourself as you go up the ladder in the test pyramid.
Jonathon Wright And I know you mentioned ATDD and BDD and TDD.
But, you know, the idea is, if you look at something like Specification by Example from Gojko Adzic and that kind of executable spec, you might have a rule that says this needs to be 250 milliseconds. Well, why does it need to be 250 milliseconds? Because, you know, we might scale up to a million, and you need to understand what the volumetrics look like. So you need an understanding of what you put in there that actually says: if this runs in less than that, it's right, and if not, it isn't. Because there are certain discussions where REST isn't the most efficient way when you scale up. We had a problem where we had a simple REST endpoint which put everything you would post onto Kafka. Part of it is, once you're on Kafka, in a big assembly like an enterprise service bus, you can get guaranteed delivery — you've got consumers that can go down and come back up, and it's not a problem. But the front API doesn't guarantee delivery, right? So if it comes back with a 5xx or a 4xx, or comes back with something which is an error, how does it deal with the error handling? All those negative tests — using REST Assured, you're going to go through and be able to write those scripts that say: I'm going to do the negative paths as well as the happy testing, as far as the contracts go. But also, for mocking out the server with WireMock or whatever it may be, you understand some of the payloads that could come back, because that contract already exists. So you need to deal with that, and you've got upstream and downstream systems like Stripe.
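As a hedged sketch of the decision those negative tests pin down — whether a failure from a REST front end in front of Kafka is worth retrying — here is some plain Java; the names are illustrative, not the actual service:

```java
// Illustrative policy for a producer calling a REST-over-Kafka front end:
// which HTTP statuses mean "retry" and which mean "surface the error".
class DeliveryPolicy {

    enum Action { ACCEPTED, RETRY, REJECT }

    static Action onStatus(int httpStatus) {
        if (httpStatus >= 200 && httpStatus < 300) return Action.ACCEPTED;
        // 5xx: the front end or broker had a problem; since nothing
        // guarantees the message reached Kafka, the producer may retry.
        if (httpStatus >= 500) return Action.RETRY;
        // 4xx and anything else: the request itself is bad, so retrying
        // the same payload is pointless — reject and report it.
        return Action.REJECT;
    }
}
```

Negative tests then assert each branch explicitly, rather than only exercising the happy 2xx path.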
You know, you've got the payment gateways, which have codes that can automatically say there are no funds available. So you've got to write those into your scripts — the card being declined, or some other thing. Part of it is there's a lot of stuff there, and your advice around thinking ahead makes a lot of sense. That mentality also goes well with contract testing — and it sounds like you've done some amazing things in that space. Part of it is you have to understand where your ecosystem is and where it ends, so you understand which nodes are actually going to be in and out of scope. You understand what payloads need to be delivered to each of those, and what the request-response pairs would be for each of those transactions, even if it's just a template. Then at least the contract testing can go on with the upstream systems, which can take those as a reference for what the schema looks like and what they could potentially expect from, say, an approved transaction going into the backend systems. I love the idea of the mocks and the stubs and the shims, and of really understanding what the scale is going to look like. I also love the idea of being able to deliver into the pipeline, and this kind of low-code approach where you're documenting in your GitLab or GitHub repository, giving people the setup for how they would build it locally. So if they're spinning it up on a Linux box versus, maybe, Docker for Windows or a Mac — from a schema registry perspective, are there any configurations that differ?
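The request-response "template" idea above can be sketched as a minimal contract-style check in plain Java — the field names are made up for illustration, and a real setup would use something like Pact or JSON Schema validation rather than this hand-rolled version:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Minimal contract-style check: does a response payload carry every field
// the downstream system's schema expects? Field names are illustrative.
class ContractCheck {

    // Returns the (sorted) set of required fields that are absent or null.
    static Set<String> missingFields(Map<String, Object> payload,
                                     Set<String> requiredFields) {
        Set<String> missing = new TreeSet<>();
        for (String field : requiredFields) {
            if (!payload.containsKey(field) || payload.get(field) == null) {
                missing.add(field);
            }
        }
        return missing;
    }
}
```

Run against every template pair you keep in the repository, a check like this catches a schema drift before the upstream team does.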
Are there any containers they've got to pull down, any dependencies? Understanding all that really helps, and with documentation it's one of those things where just enough is right. It also supports new staff in knowing where to go and enter, being able to pull those down, run them locally, then deliver them through a pipeline — and maybe go to OpenStack or wherever else you deploy. But there's also the idea of things like monitoring as code: being able to actually have some wrappers around it. If I'm deploying Prometheus, what's my data? If I've built a pod and I'm in Kibana, what is it that's key and important from an AIOps point of view? Is it the amount of time, just the request-response, the CPU? What is it that you actually want to understand and monitor from an operational standpoint? So you're then thinking not only ahead, but from an ops perspective: what would those guys be looking at? If they're trying to do root cause analysis or pinpoint a failure, trying to understand where the issue came from, then they've got all the monitoring, they've got all the logging they require, and they've also got a history of what everything should look like — so your APM can do that kind of capture and help people diagnose issues in production. So there are a lot of really valuable lessons you've added in there, and it sounds like the maturity of every role you've taken has increased steadily with each step. For people who are trying to learn this technical stuff, do you have any tips on places they could do their online training? Like Udemy — where do you find you get the best resources when you go online?
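Monitoring as code can be as small as versioning alert rules next to the service. A sketch, assuming a Prometheus setup — the metric name, job label, and 250 ms threshold are placeholders, not from the project discussed here:

```yaml
# prometheus-rules.yml -- illustrative only; metric names and thresholds
# are placeholders chosen to echo the 250ms example above.
groups:
  - name: checkout-api
    rules:
      - alert: HighRequestLatency
        expr: histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket{job="checkout-api"}[5m])) by (le)) > 0.25
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "p95 latency above 250ms for 10 minutes"
```

Because the rule lives in the repository, a change to the SLA is a reviewed commit rather than a dashboard tweak nobody can trace.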
Arun Kumar I think the bonus for me, as I said earlier, is that if your basic programming concepts are good enough, then learning these new things will be easy, because if you take any of these new tools in the market, they're basically built on top of Selenium. So if you know Selenium, you can pick these up, no doubt about it. I'm sure Udemy and LinkedIn Learning and those platforms are all good, but I don't learn stuff on any of these paid platforms. I just go to the tool, take a look at its documentation, and try to get my hands dirty with it. I basically play with it on my local machine, and then if I get stuck somewhere, I just search on the Internet: what's this?
But I normally don't do any courses for a new technology that I learn; I just go to the documentation. If I want to learn Locust, I go to the documentation, do a little bit of research, and first I run it without Docker — I just do a normal installation and play with it — and then I do the Docker one. When I actually want to use something, I want to make sure that there's a Docker image for it and that it's stable; but as I said, when I learn, I first learn without the Docker stuff. And that matters, because there are so many automation projects where you can learn stuff and write code, but people are not executing those tests in their everyday life. That's because they see these tests as either flaky or not easy to run in the pipeline. For example, as a QA you say, "You know what, I want to add these tests to the pipeline," and then the tech lead or architect or one of the senior developers will say, "You know what, we can add it, but can you make sure that your test is not flaky?" So running those tests in Docker — with an official Docker image for the framework that you're using — will be very helpful.
So, for example, we are using Cypress for the front-end testing and it's very easy for us to run that in the pipeline — we just run on Chrome and Firefox in the pipeline for front-end testing, and it's very handy. Same for Locust — there's a Docker image. So, as I said, I don't do any full online learning, but yes, I take a look at the tools. First I do some research into which tools are new. I always make sure that I have some spare time for my learning every week — I look at what's new in the QA space and what's new in development. It's not only QA; I look at development as well: which framework is good, what's happening, how the whole system works, what the best ways of testing these things are. And if I find something new, I take a look at the tool's documentation, get it up and running on my local machine, and then I make sure that it's not only up and running — I have some sort of test project with it in CI/CD as well, so that I can go and sell it to the business. Because otherwise, it's very difficult.
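Running Cypress on Chrome and Firefox in the pipeline, as described above, can look something like this illustrative GitLab CI fragment — it uses Cypress's published browser image, but it is a sketch, not the team's actual configuration:

```yaml
# .gitlab-ci.yml fragment -- illustrative sketch only.
# cypress/browsers ships Chrome and Firefox preinstalled; pin a specific
# versioned tag in real use rather than relying on a floating one.
cypress-chrome:
  image: cypress/browsers:latest
  stage: test
  script:
    - npm ci
    - npx cypress run --browser chrome

cypress-firefox:
  image: cypress/browsers:latest
  stage: test
  script:
    - npm ci
    - npx cypress run --browser firefox
```

Because both jobs share one image, the run is reproducible on any runner with Docker — which is exactly the "not flaky, easy to run in the pipeline" bar mentioned above.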
If you go to a big company and you've only used a tool on your local machine, that was just to learn to write tests. But that's not enough, because the whole end-to-end angle is important. You can write tests, and they may be in a GitHub repo, but your tests will be on your machine and we are not executing those tests in day-to-day life. So when we think, "OK, fine, let's use this tool," we should go and evaluate: will we be able to use this tool as part of the CI/CD? And why do we need it? Because I'm sure we already have something.
Every company has something in place, so we need to prove it: OK, these are the gaps that we have, these are the problems that we're facing, and we can solve these things with this new tool. Then you should go with a small POC, do it with some sort of CI/CD, and then maybe propose it to your manager or tech lead or someone in the business. I prefer this approach, because if we just learn a tool without doing that, then obviously the value of it, I think, will be lost somewhere.
Jonathon Wright So do you have any kind of test projects, you know, that you use to demo your end-to-end capabilities? Like I use React Native, for instance.
You know, I'd want to do it with a to-do app, or if I'm trying to do something like using Locust, I'd like to hit some performance numbers. You could use something like blazedemo.com as that kind of sandbox environment where you can hit the API and the front end, you know? Do you have any favourites for that kind of stuff?
Arun Kumar So, like, for me, as I said — let's say if I want to test with Locust, or Cypress, or any of the UI frameworks — I basically build a small UI application for myself and have that in the pipeline, rather than hitting an existing one, so that I can do the whole end-to-end pipeline. Because, you know, I need to see that. Let's say I want to propose something to the architect or the tech leads; then it's very easy — I can say, download it from one of my GitHub repos, and the next thing you've got a hello-world app, a to-do application or some sort of form application, something like that. I basically have that in my pipeline, deploy it as part of my pipeline, and then run tests on it. So I follow that approach. Sometimes, yes, you're right, I'll just find something on the Internet if I want to do some quick testing, but I normally follow the approach where you build an application — or if you're not building one, that's fine, you can find one on the Internet — and then you try to host it on your local machine first, get your head around it, and then transform that into a pipeline. I use GitLab for my personal projects — you get GitLab for free, it's so easy — and I follow that approach. I try to run unit tests, even if I don't write many, and I add some API tests as part of the pipeline as well. I deploy in my pipeline — not, as I said, on a physical environment; it's deployed somewhere on a cluster that I have — and I run my tests there. And then I have my UI tests as part of the pipeline as well, before it is deployed. So I follow that approach. But sometimes, yes, if I need a very quick win, I just take one of the existing websites — if you're playing with a UI, you can take any e-commerce website and play with it.
Jonathon Wright And you know, I flew back from Bordeaux a couple of days ago, and we were using Keptn. I don't know if you've seen Keptn — it's a Git-based tool for kind of unbreakable delivery pipelines. The idea is it works with GitLab, and it gives you those quality gates that tell you, based on lots of historical runs, what the known deviation of that execution is at different levels — so UI, performance, API. It's got all that information, and you can see how you score across different environments and drill down into it. Do you find that you've got any tools that you like within the CI/CD pipeline that do something similar?
Arun Kumar So the main thing for us is that we use GitLab, and we have some checks as part of the pipeline. For example, if I want to deploy to an environment — let's say my first environment, which can be dev or staging, there are different names that you can follow — if my unit tests fail as part of my pipeline, then obviously it will not go further. So if anything fails, you won't be able to deploy. The same goes, as I said, for the unit tests, the API tests, and any UI tests before we deploy. Also, we validate the contracts as well — we have different teams, let's say different applications — so we validate those before we deploy, to make sure that everything is green. And we check the environment's health check as well, because sometimes what can happen is your code is OK, but there's a problem in the environment itself. So before we deploy, we make sure the environment is healthy. We have all these quality checks in a separate GitLab CI YAML file — and I think that's a good tip, maybe, for the audience. Normally you have one blanket CI YAML file for the whole project, but the way I see it, you need a different GitLab CI YAML file for different activities. So, for example, all the quality checks should be in one GitLab CI YAML file; then if you need to change something, you just update that one file. The same goes for static code analysis and linting — you can have a YAML file for that — and something for reporting needs a separate GitLab CI YAML file too, so that each one can be maintained on its own. So we do have all these different quality checks before we deploy, and then, as I said, we do the environment health check as well. We have a health check URL; we just hit that and ask: is it green, is everything OK?
Am I able to connect to the different systems, and so on? And then we deploy. After deployment, the first thing we do is check it again — the health check, obviously — and then we do some validation. And again, everything is automated. So we deploy, we do the health check, and then we run a subset of our functional tests, or some sort of validation, in the pipeline. If that's good, it goes to the staging environment; if that's good, it goes to the next environment, and then obviously on to production. But we have a manual click there — you just click, and then it goes.
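The separate-YAML-files tip above can be sketched with GitLab CI's `include` keyword — the file names here are illustrative, not the team's actual layout:

```yaml
# Top-level .gitlab-ci.yml -- sketch of splitting pipeline config by
# activity, so each concern is updated in one place.
include:
  - local: ci/quality-checks.gitlab-ci.yml   # unit/API/UI test jobs
  - local: ci/static-analysis.gitlab-ci.yml  # linting, static code analysis
  - local: ci/reporting.gitlab-ci.yml        # test reports, dashboards

stages:
  - build
  - test
  - deploy
```

The included files are merged into the pipeline at evaluation time, so a change to, say, the lint rules never touches the deploy configuration.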
Jonathon Wright And so in your pipeline you're doing continuous delivery rather than continuous deployment there. But also, once you've got into production, do you use any real user monitoring?
Do you add anything to your APM to just check things are okay — synthetic monitoring tests?
Arun Kumar So we already have, as I said, these dashboards built up, and we have an alerting policy for each application: OK, for example, this particular resource will get this many requests per minute and should, you know, respond with 200 — based on the load that comes in, we have all of this configured. And after any deployment, obviously we watch the application's behaviour every time we deploy: we go to our logging systems, whichever we use.
And then if there's anything wrong, you will get an e-mail, obviously. As I said, we have a predefined alerting policy, so if something is wrong you will get an e-mail saying, hey, this is not working. And if you are building something new — there's always a tradeoff. If you are doing some improvements on an existing API or a feature that you have, you don't have to build that monitoring. But if you are introducing a new resource, then obviously, as part of that development, we build the health check, so that when we deploy to production we basically make sure that this thing is getting populated and reacting, and that we are getting the response we expect. So yes, we do consider those things as part of the deployment, but as I said, it's pretty stable now. And yes, if something goes wrong, we get an e-mail. You know, due to the coronavirus I got an e-mail yesterday — oh, what's happening? — because you are not getting the expected traffic, because people are not doing stuff. So it's like that.
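A minimal sketch, in plain Java, of the health-check gate described here — the component names are hypothetical, and a real readiness endpoint would report far more detail:

```java
import java.util.Map;

// Sketch of the gate used before (and after) a deploy: treat the
// environment as healthy only if every dependency it reports is "UP".
class HealthGate {

    // components maps a dependency name (e.g. "database") to its status.
    static boolean healthy(Map<String, String> components) {
        return !components.isEmpty()
                && components.values().stream().allMatch("UP"::equalsIgnoreCase);
    }
}
```

Wiring this against the health-check URL separates "my code is broken" failures from "the environment is broken" ones, which is the distinction the pipeline check above is making.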
But these things — we need to keep revisiting them, because traffic goes up or down, and then, you know, you need to keep revisiting your alerting policy as well. Let's say you roll out into new countries; then you need to think about your alerting policy too, because traffic is going to go up. So yes, we do consider those things as part of the release as well.
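The revisit-your-thresholds point can be sketched as a simple deviation check in plain Java — the policy shape and numbers are illustrative, not the team's actual alerting configuration:

```java
// Illustrative traffic alert: fire when observed load drifts too far from
// the expected baseline in either direction (a drop, as in the coronavirus
// anecdote above, or a spike after launching in new countries).
class TrafficAlert {

    // Returns true if observed requests/minute deviates from the baseline
    // by more than the allowed fraction (e.g. 0.5 = 50%).
    static boolean shouldAlert(double observedRpm, double baselineRpm,
                               double allowedDeviation) {
        if (baselineRpm <= 0) return observedRpm > 0; // no baseline yet
        double deviation = Math.abs(observedRpm - baselineRpm) / baselineRpm;
        return deviation > allowedDeviation;
    }
}
```

The baseline is the value that needs revisiting on every rollout — the check itself stays the same.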
Jonathon Wright Well, thanks so much for that. It's been a great session today. I really enjoyed the time and congratulations on the new role as well. So we'll definitely have to get you back when you've been there a little bit longer. And you've got some more exciting stories to share with us. All right. Have a great day!