Related Links:
- Subscribe To The QA Lead Newsletter to get our latest articles and podcasts
- Join the waitlist for The QA Lead membership forum
Related articles and podcasts:
- Introduction To The QA Lead (With Ben Aston & Jonathon Wright)
- What is Chaos Engineering and Why Is It Important?
- The Dark Side Of Automation Testing (& Why Manual Testing Will Never Die)
- 26 Software Testing Quotes On Code, Bugs & Building Quality Software
- The QA’s Ultimate Guide To Database Testing
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Audio Transcription:
In the digital reality, evolution over revolution prevails. The QA approaches and techniques that worked yesterday will fail you tomorrow. So free your mind. The automation cyborg has been sent back in time. TED speaker Jonathon Wright’s mission is to help you save the future from bad software.
Jonathon Wright This podcast is brought to you by Eggplant. Eggplant helps businesses to test, monitor, and analyze their end-to-end customer experience and continuously improve their business outcomes.
It's absolutely great to have you on the show. So if you want to just introduce yourself, tell the listeners a little bit about yourself.
Nenad Cvetkovic So I am currently a QA lead, working as a consultant in that position for a company in the iGaming industry. But my career started 13 years ago. I started as a software engineer, and then I moved to another company and actually started working as a software tester. I fell in love with testing, and since 2011 I have been in software testing the whole time. I was changing companies, but I was always in QA.
Jonathon Wright You were also, for a brief time, a part-time lecturer as well.
Nenad Cvetkovic Yeah, a bit. Not every day.
Jonathon Wright This was more just coaching and mentoring that you were doing.
Nenad Cvetkovic So currently I am involved in a community here in Serbia, QA Serbia. We are trying to, first of all, gather people who are involved in testing and quality assurance, to help them overcome everyday problems. For that, we have a Slack channel where we are trying to answer all the questions, and as a part of this initiative we have an education center where I am actually a presenter on a certain topic. I cannot say that I am actually teaching people, that I am a teacher or something like that.
So it's more for people who are interested in the topics. My topic is software test automation, so the basics of test automation and also some advanced concepts of test automation. My primary language is Java, but I'm using other languages anyway. Java is, I would say, pretty popular here in Serbia, and that was the reason why these courses are held in Java. QA Serbia is an organization that is pretty young; it was established in 2017.
And so far we have gathered more than 1,000 people, and all our events are really, really well attended; approximately around a hundred people come to our meetups. We also organize an event called the Test Automation Day that presents different topics. And we are connected with the other communities from Bosnia and from Greece, and we also have some contacts with other countries, Romania, Bulgaria. So basically this Balkan region.
Jonathon Wright That's awesome. So when you do these test automation days, what are they like? Are they a bit of a hackathon?
Nenad Cvetkovic Well, the hackathon was one of the ideas, but we couldn't, how to say, develop that idea fully, and in the end we abandoned it. So it's more like presenting a good new concept as a workshop, so that people can actually try the things. At the end of the workshop day they receive materials, pieces of code that they can use in their test automation frameworks, depending on the topic.
For example, last year, in 2019, the Test Automation Day covered API testing, then Cucumber and how it can be used. Okay, I know that many people do not agree with that, but it's obvious that many companies are using Cucumber as a test automation framework. And one of the concepts which I presented was about how to use Docker in a test automation framework, and how to actually improve the way a test automation framework can execute test cases with the help of Docker. So different topics, different aspects of software testing and test automation. In the past, we also had some presentations about the basics of software testing and how to start with it, these kinds of things.
Jonathon Wright It's really interesting. So we were doing a warm-up yesterday for the British Computer Society, and Lisa Crispin's going to be talking about BDD. It's interesting, you know, the idea that came out of ThoughtWorks around JBehave and the concepts behind behavior-driven development.
And Lisa was talking about the importance of BDD, but also in a DevOps kind of landscape. I know you do a lot with things like Protractor, and using Cucumber, Dockerizing it, and doing that course on Docker. Docker really helps with that kind of GitOps approach: you can build things, spin up the Docker containers, run the execution, either headless for your UI tests or your API tests in there, and then get the results back into whatever your continuous integration and deployment toolset is.
So it's quite fascinating that you're kind of helping people build their frameworks and giving them the enablement to actually be able to achieve this kind of continuous testing. And we've got Eran from Perfecto Mobile on the show this week as well; he's written a book on continuous testing. And it seems like this golden goal everybody talks about is actually really difficult. Do you have any tips for listeners on places to start when they're trying to understand containers and that kind of landscape?
Nenad Cvetkovic I would just go back a little bit to this continuous testing. It's definitely a goal which every company should accomplish sooner or later, I mean, every company which is actually doing automated testing. Docker containers definitely can help out a lot. There is no recipe for how to start, but I would say that there are two main things. First of all, for every tester, DevOps is your best friend.
So it's really important to establish real communication between testers and DevOps; without the help of DevOps, I would say it's almost impossible to achieve anything even close to continuous testing. And also, when we are talking about continuous testing and containerization, it's important to work toward putting your application into containers. If you only have your testing tools in containers and you're trying to accomplish something with that, you'll make some achievement, but that's not what you should aim for.
Your goal should be to put everything inside containers, so the application in containers and also the testing framework, and to combine these things. And then it's just your imagination how to work with that. For example, you can start multiple instances of your application, and then you can actually execute test cases in parallel, but each test case will be executed against one instance of your application. Then also wiping out the database before each test case, and these kinds of concepts. This also brings test cases that are completely independent. So basically, in order to accomplish this continuous testing, containers play a major role, I would say the most important of all. But if you just put the test automation framework into a container, that's not enough. You have to also put the application in containers. And then, of course, it's a question of your resources and your infrastructure, whether you can support multiple instances running test cases in parallel, these kinds of things. But this is more a question for the business, how much they want to invest in that. This is not a technical thing.
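(Editor's note: a minimal sketch of the pattern Nenad describes, assuming the Testcontainers library for Java with JUnit 5. The image names and the database-reset helper are hypothetical, and the network wiring between the two containers is elided.)

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

// Both the application AND its database run in containers, and the database
// is wiped before each test case so every test is completely independent.
@Testcontainers
class CheckoutApiTest {

    @Container
    static final PostgreSQLContainer<?> db =
            new PostgreSQLContainer<>("postgres:15"); // hypothetical DB image

    @Container
    static final GenericContainer<?> app =
            new GenericContainer<>("mycompany/webshop:latest") // hypothetical app image
                    .withExposedPorts(8080);

    @BeforeEach
    void cleanState() {
        // Wipe/reseed the database before each test case.
        resetDatabase(db.getJdbcUrl(), db.getUsername(), db.getPassword());
    }

    @Test
    void checkoutReturnsConfirmation() {
        String baseUrl = "http://" + app.getHost() + ":" + app.getMappedPort(8080);
        // ... drive the API or UI against this isolated instance ...
    }

    private void resetDatabase(String url, String user, String password) {
        // Hypothetical helper: truncate tables / reload seed data over JDBC.
    }
}
```

Running several such suites in parallel, each against its own application instance, then becomes a matter of CI resources rather than test code.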
Jonathon Wright Absolutely, absolutely great advice. And I love the fact that you're kind of the first person I've talked to on the show who's got that whole vision of tearing down environments, loading data into databases so that the state is ready and consistent. We had Huw Price on the show talking about test data management and the importance of getting good data. And I've always found that when you're trying to do high-volume automated testing, one of the issues is that there's a lot of data with state locked up in there.
And then, of course, you're changing those states or modifying them, especially if stuff goes wrong; you kind of want to freeze-frame it. One of the things we were working on is a technology Delphix uses called vTDM, which is database virtualization. It's an open source project now, as of this particular podcast. What it allows you to do is virtualize something like an Oracle or MySQL database, and it lets your tests run against an instance which has a much smaller footprint, a bit like Netflix does with its databases. It allows you to make a change to it.
And it only keeps the delta, so the changes made to the database are stored on the local machine, while the backend, which could be a huge database, is virtualized. And I think that's a really clever technology. I know you're using things like MongoDB and SQL and stuff like that; it's about getting the data, and obviously the environment, ready in that kind of state. And then the second one is really about that big challenge: once you've done that and you execute, and something goes wrong, your API comes back with some malformed response, to be able to at least do the root cause analysis by saying, well, actually, I've held those states on those particular Docker images, I can go and grab the pod now and have a look at what the problem was. And the developers can tail it, they can go in, they've got the right amount of logging in there. So do you have any tips? Because I've always found the hardest part has been reporting, right? This idea of how you know whether it's a false positive or a pass, or a proper fail. Do you wrap extra logging information into your test when you're running your script?
Nenad Cvetkovic Unfortunately, reporting and logging are usually the forgotten parts of a test automation framework; usually they are developed at the end. So basically, when you develop a test automation framework, you have to think about them from the beginning. The best way to do that is to incorporate logging inside your test automation framework and basically to provide it automatically, so that it's obvious what's going on there.
On the other hand, good composition, a good architecture of the test automation framework, can also help a lot in finding certain problems. If I go back to reporting specifically, well, it depends on what your goal is in the reporting. Definitely one of the goals is to actually find an easy way to debug your application and to find out what the cause of a fault could be. It's not always possible to accomplish that, especially if you're talking about, for example, API testing.
So then you have to actually take some snapshots, or take the request and the response and these kinds of things, and save them. But it's also important to know what the state of your application was at that moment, when something went wrong. These kinds of things are not easy to accomplish and to put into your report in order to help you debug. So this question is not easy to answer, to say, okay, you do this and this. You just have to think about the different aspects of the problems which can happen during the execution of your tests, and then adapt your reporting to that. Maybe you can also take an evolutionary approach: start with something small, and then when you start experiencing some problems, you make refinements in your test automation framework so that it will be easier for you in the future to find out what went wrong.
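(Editor's note: one way to bake logging in from the beginning rather than bolting it on at the end, sketched here as a JUnit 5 extension; the artifact-saving helper is hypothetical.)

```java
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestWatcher;

// Registered once on a base test class with @ExtendWith(ResultLogger.class),
// this extension logs every outcome automatically, so individual test
// authors never have to remember to add logging by hand.
public class ResultLogger implements TestWatcher {

    @Override
    public void testSuccessful(ExtensionContext context) {
        System.out.printf("PASS %s%n", context.getDisplayName());
    }

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        System.err.printf("FAIL %s: %s%n", context.getDisplayName(), cause);
        // Hypothetical hook: dump application state, save the last
        // request/response pair, grab container logs, take a screenshot.
        saveFailureArtifacts(context.getDisplayName());
    }

    private void saveFailureArtifacts(String testName) {
        // ... write whatever helps root-cause analysis to the report directory ...
    }
}
```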
Jonathon Wright I think that's really good: understanding what the end state is, knowing what you're looking for. And then part of that is setting up assertions on the kind of information that you want to check. So if you do get a response back and it's a 504 or something, and you're kind of handling it internally, then you've got it: okay, well, I need to capture the request and response, I need to log that as a fail. And then you can go off and investigate.
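(Editor's note: a sketch of that capture-on-failure pattern using the JDK's built-in HTTP client; the endpoint is hypothetical.)

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Assert on the status code, and when the check fails, log the full
// request and response so the report carries what's needed to investigate.
public class ApiCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/orders")) // hypothetical endpoint
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() >= 500) { // e.g. a 504 gateway timeout
            // Capture request and response verbatim, then mark the test failed.
            System.err.printf("FAIL %s %s -> %d%n%s%n",
                    request.method(), request.uri(),
                    response.statusCode(), response.body());
            throw new AssertionError("Server error: " + response.statusCode());
        }
    }
}
```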
That particular scenario is really interesting. And I guess part of those frameworks is things like, I know you use Gradle, and I was no master at Gradle. But when I first got started looking at Gradle, it was kind of, well, actually, if I'm using TestNG, I've already got a nice wrapper around it for some kind of information about what kind of report I'm going to get. Part of this headless kind of mode was always a bit more difficult, because getting a screen capture was a different thing.
So if you could still get a screen capture, you could see what the HTML rendering would look like and the error that was on the screen, and TestNG would manage that for you. So there are lots of little tips which I think are really useful. And I think you're absolutely right: the idea with DevOps, and Dev making that possible, is that Dev will have certain things that they want to add to it, or, to call it a recipe for a second, that recipe part of Gradle, to kind of say, well, actually, I want everything in JUnit 5,
you know, the framework which they've got a wrapper around for component testing. But I think that's really good, and if you're working with development, then there's that shared capability which they can go off and look at. And then, from an operations perspective, you've got things like Prometheus and Grafana, where you can start saying, well, actually, I'll tag that and get some information pushed out so I can access that info, depending on how people set it up.
If you're reading that local data to debug it, then of course you can write things out to files which you can then access. So I think there are a lot of really useful things there. I think it's quite difficult, though. I know we both come from the generation where you'd FTP or SFTP onto a physical machine; getting into a container is different. Some of those skills, using Bash, using SSH, those kinds of things, aren't always the first things people are used to.
And I think it's really useful that you've got this guide, that you're helping people get through this journey, because to me this is like the golden goose that we're chasing. Before, it was this idea of automating everything, which I learned the hard way: you've got a lot of projects where you've done thousands of test runs, and you had to learn that actually it's more about, what's the value of those?
How robust are they? Because if there is value in executing those every day, what do they prove? They've got a purpose, and there's a certain amount of confidence that you've got behind those. You develop as a company in what you're doing, because you're catching stuff and giving them a report which they can act on; that actionable insight is really useful.
And we had Paul Grossman on the show a couple of weeks ago, and he was talking about his magic page object model; he sent me the GitHub repository, and I finally put it all together. But do you find, especially in the UI landscape, that the page object model is still a bit of a challenge? Do you work with the developers to get better IDs set up there? How have you dealt with making UI tests less fragile?
Nenad Cvetkovic OK. It definitely depends on the company. I would say that if the company introduced testing early, then after some time the front-end developers got used to the idea that it will be tested by Selenium or some other UI testing framework, and they also changed their approach to how they develop the UI part. If the application was never tested at the UI level, then it can be pretty messy. That's at least my opinion, and that's been my main experience over the last couple of years, joining some companies as the first QA engineer. In general, the concept of the page object model, I would say, is still good, still valid, but you can always work on some improvements. How I am approaching it is that I'm trying to take this component-based model: breaking down pages into components, and these components can also contain some other components, so that you have a model of the presentation of a page. So it's easier to identify where the problem actually is, and in case some part of the page changes, you only have to identify which component it belongs to. Unfortunately, developers sometimes neglect some obvious things, and it's not always easy to accomplish this component-based approach, at least from the point of view of how it should be done. It can work, of course, because you are not completely dependent on the UI. But still, because of the selectors and everything, you need to identify the wrappers which will actually represent your component. So sometimes when you identify this wrapper and there is some new change, you see that something which should represent your component is actually not inside that wrapper, but hangs somewhere else. So you have to really use your imagination and try to persuade the developers to change their approach.
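(Editor's note: a minimal sketch of the component-based page object model Nenad describes, with Selenium; the page structure and locators are hypothetical.)

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// A component wraps one region of the page behind a root element, so if
// that region changes, only this one class has to be updated.
class SearchBar {
    private final WebElement root;

    SearchBar(WebElement root) {
        this.root = root;
    }

    void searchFor(String term) {
        root.findElement(By.cssSelector("input[name='q']")).sendKeys(term);
        root.findElement(By.cssSelector("button[type='submit']")).click();
    }
}

// Components can contain other components, building up a model of the page.
class HeaderComponent {
    private final WebElement root;

    HeaderComponent(WebElement root) {
        this.root = root;
    }

    SearchBar searchBar() {
        return new SearchBar(root.findElement(By.cssSelector(".search")));
    }
}

// The page object itself only composes components.
class HomePage {
    private final WebDriver driver;

    HomePage(WebDriver driver) {
        this.driver = driver;
    }

    HeaderComponent header() {
        return new HeaderComponent(driver.findElement(By.tagName("header")));
    }
}
```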
It's not always achievable, because usually applications have been developed over a couple of years, and now you're coming in asking for some big, tremendous change. But I think that after some time they understand that. So basically, to make them not fragile, you're trying to use everything that Selenium brings when we are talking about the UI: implicit waits, explicit waits, and fluent waits. And then, depending on the implementation, like if it is Angular or React, maybe we can rely on some synchronization points at the application level, just to check how many connections are open, these kinds of things.
So, you know, just to improve things. I had a really bad experience, I would say, at one of the companies I was working for, where these synchronizations were a real nightmare. First of all, the server on which the application under test was running was a pretty low-performance machine, and then the response received when Selenium opens the page was not consistent.
And these timeouts were happening randomly. So I invested a lot there to make it work; I can say that I accomplished that in the end, but it was a really, really tough time. So basically, I also think that the page object model helps a lot there to make your test cases less fragile, in a way that you see what's going on, that maybe you can identify some parts of your application which are loading a bit slower. So maybe you can also speak with the developers about that problem, and maybe that's something that can be improved in the future. But in general, I would say: synchronization points, being smart about your architecture, and trying to get the front-end developers to make improvements which will make your test automation framework less fragile and these page object models more resistant.
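(Editor's note: the three wait styles Nenad names, sketched in Selenium 4 syntax; the locators and timeouts are illustrative.)

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.FluentWait;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SynchronizationExamples {

    // Explicit wait: block until a concrete condition holds.
    static WebElement waitForButton(WebDriver driver) {
        return new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.elementToBeClickable(By.id("checkout")));
    }

    // Fluent wait: same idea, with a custom polling interval and
    // exceptions to ignore while the page settles.
    static WebElement waitForBanner(WebDriver driver) {
        return new FluentWait<>(driver)
                .withTimeout(Duration.ofSeconds(30))
                .pollingEvery(Duration.ofMillis(500))
                .ignoring(NoSuchElementException.class)
                .until(d -> d.findElement(By.cssSelector(".banner")));
    }

    // Implicit wait: a global fallback applied to every findElement call.
    static void configureImplicitWait(WebDriver driver) {
        driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(5));
    }
}
```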
Jonathon Wright That's fantastic advice. You talked about front-end developers, and there is this temptation now with full-stack, right, which I think is quite difficult: not always having your back-end guys and your front-end guys. And I love MVC, so the idea of the model-view-controller really helps me get an idea of what the app would look like just from looking at the code.
Right. But at the same time, like what you just said, there were things like PhantomJS, when it went end-of-life for headless and I had to move over to Chromium. I think part of it is that the only downside of the open-source stack is really how well maintained and how well supported it is. And I know we mentioned Eran coming on the show this week.
He's been working on something called the Quantum Framework, which is a BDD framework they've kind of open-sourced; I'd recommend it, and I'll put the link in there. I know a lot of people are trying to do that, trying to help the community by developing these kinds of capabilities and frameworks. And I know you've kind of gone from JBehave up to Cucumber, you've got into that mentality, and I think it's a great way for people to start as well.
You know, I think it's really interesting what you said there about synchronization, because when Paul was on the show, he was saying, well, actually, he's now trying to squeeze every second of performance out, actually winning time; for him it was about running fast. Whereas, coming from the 90s, my view was always this kind of: is it enabled, is it visible, can I interact with it? There was a lot of synchronization to make sure it was more robust. But it's kind of a compromise between speed versus making them too brittle and making them more robust. And I think it's really interesting now, what you said about performance, because there are a lot of organizations, probably some listening, that have this viewpoint that response time is page load time, so when the DOM's enabled and ready.
Because of the kind of situation we're in at the moment, I reran some tests against some of the big retailers that were having problems. As we're recording today, I got a gateway timeout when I went onto their website; heavy load is giving me an error, and some of the retailers are roughly the same for food and things. And it's interesting, because with that it's very plain: you get the response code, the gateway timeout, whereas you can get a response back from the system in maybe three or four seconds.
But actually, by the time it's rendered the DOM and loaded all the JavaScript, you can't actually interact with the button until 40, 50 seconds in. And if I went to the business and said, do you really expect me, on poor 4G or 5G with packet loss, to wait 50 seconds for your shopping app to load, they'd go, no way. They'd say, oh, well, it loads in four seconds, I can show you, because here's my APM with the stats telling me that the app's running with a four-second response time.
But then there's all the framework bloatware at the front, which is possibly pulling from loads of CDNs, which aren't in your SLA anyway, and pulling down images which are eight meg on a phone, and you sit there and go, wow. There are things like Google's PageSpeed and stuff, which you could run very quickly; you could include those in your actual tests to give you some kind of good-or-bad indicators.
I used to do that with kind of pivot tables and a few other tools, to plot it over time and say, well, actually, you can see there's a 5 percent increase. And it's interesting, because I remember about 10 years ago I was working for a commodity company where we were trying to build performance in from day one. So they would fail the builds if there was a 100 percent, or even a 20 percent, regression in performance from one build execution to another.
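(Editor's note: a sketch of that build-gate idea, assuming Selenium 4 and the browser's Navigation Timing API; the URL, the baseline, and the 20 percent budget are illustrative.)

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Measure how long the page takes until the DOM is interactive, compare it
// to the previous build's baseline, and fail the build on a regression.
public class PerfGate {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/"); // hypothetical page under test

            // Navigation Timing API, reported by the browser itself.
            long domInteractiveMs = ((Number) ((JavascriptExecutor) driver)
                    .executeScript("return performance.timing.domInteractive"
                            + " - performance.timing.navigationStart"))
                    .longValue();

            long baselineMs = 2000; // would normally be loaded from the last build
            if (domInteractiveMs > baselineMs * 1.2) { // 20% regression budget
                throw new AssertionError("DOM interactive took " + domInteractiveMs
                        + " ms, baseline was " + baselineMs + " ms");
            }
            System.out.println("DOM interactive in " + domInteractiveMs + " ms");
        } finally {
            driver.quit();
        }
    }
}
```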
I think there's a lot of value in starting early in that quality lifecycle and catching those things before you get something that's, yes, maybe built on some amazing stack, but actually, from an end-user experience, is so poor. They don't see that, because they're looking at synthetic tests, which maybe just ping something or send a quick payload to a GET on the API endpoint. They don't really have a real device, really simulating the real network environments using network virtualization, and then actually accessing it.
And this was my big quote for 2020: this kind of full experience monitoring. It's real users connected to the network, allowing you to get these kinds of proxies to see what my customers' experience would actually be like, because it's very different compared to running it locally. And I think the same thing goes for Docker as well. I've seen some government organizations recently that will use a Docker schema, a registry, with the same four-gig allocation and one core that they produce, send off to elastic Docker, and then put into production, because they don't think about capacity planning and sizing.
You used to be able to catch that, because you could have a pre-production performance environment, and you'd hit that and see the bottlenecks. But now, because the physical hardware is not there anymore, people are going, well, actually, it's sixteen gigs I need to do the caching. I think the big thing is the configuration as code; they don't really look at it.
They don't look at the fact that MongoDB has 115 configuration settings that can tune how far ahead it reads, how much caching is enabled, what kind of settings are applied, how much CPU and how many cycles it uses, and how it does that. I just think it's so complex that we're going to get burnt down the line when everyone's using the default MongoDB settings, the default Kafka settings. It kind of shows why someone like yourself is going to be incredibly valuable within an organization.
So do you have any tips for people on how they can get to your kind of level of maturity as far as continuous testing goes? A very good book, maybe? Where do you find your sources of inspiration?
Nenad Cvetkovic Look, regarding books, I don't have any real advice here. Mostly articles, and then webinars, and basically trying something on my own, catching ideas somewhere and then trying them in my own environment. Regarding what we were just talking about, this performance testing and these kinds of things, I would say that you have to focus on certain things. So if you are creating a test automation framework for functional testing, then focus on functional testing. If you want to do performance testing, and you want a feeling of how performant your application is under different conditions, then focus on that. I'm always trying not to mix things, because you cannot accomplish every aspect of quality in one test automation framework. Even if you can accomplish that, you have to carefully choose how you're approaching that problem. It's not the same when you're doing functional testing and examining different scenarios; it's a different aspect from when you are actually trying to see what the usability of your application is, or how your customer sees your application, or whether it holds up for a very long time. If you want to examine that, then focus on that aspect. Think about how to actually measure these things, think about how to simulate these things. And even if it is not possible to do that with the kind of test automation framework which you developed, do it separately, because maybe the structure which you created is not for that purpose, and you will just struggle if you try to adapt that first approach to something completely new.
So regarding continuous testing: one of the parts of this whole continuous testing pipeline should actually be continuous performance testing. So you have, I would say, integration tests at the API level, then you have functional tests at the UI level, and then you should also have continuous performance tests as a part of this whole structure. I think it's really important, when DevOps reaches some level of maturity, that the infrastructure actually is code, because this is actually the goal when you want to have all these containers and virtualization and so on.
So if you have infrastructure as code, think about introducing test cases for this as well. These are the things that you mentioned: that maybe under certain circumstances some servers are not giving proper answers, or that there are some problems. Maybe you can simulate that when you create test cases for your infrastructure, because now that the infrastructure is code, you can test it. And there are some libraries that can help there.
So it's really worth investigating that part. If I go back to continuous performance testing, usually what you should do is create some threshold levels which you are actually monitoring, and also maybe include some simulation of certain environments, again in some controlled way, at least as far as you can see how your customers are using it, maybe the network they use. So maybe it's not 4G, maybe it's 2G, or maybe your clients are using some really old phones or something like that. Try to find a way to simulate that kind of environment and then start measuring the results of your continuous performance tests. In general, performance testing is a broad term; there are lots of different aspects, but I would say focus on some simple ones at the beginning to introduce into this pipeline. And if you want to really, thoroughly test the performance of your application, then dedicate time and dedicate effort and resources to do that.
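(Editor's note: a minimal sketch of a continuous performance check with a threshold level, as Nenad describes, in plain Java against a hypothetical endpoint; real pipelines more often use a dedicated tool such as JMeter, which Jonathon mentions later.)

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Fire a batch of requests at one endpoint and fail the pipeline stage if
// the 95th-percentile latency crosses the threshold being monitored.
public class ContinuousPerfCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/health")) // hypothetical
                .GET().build();

        List<Long> latenciesMs = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            long start = System.nanoTime();
            client.send(request, HttpResponse.BodyHandlers.discarding());
            latenciesMs.add((System.nanoTime() - start) / 1_000_000);
        }

        Collections.sort(latenciesMs);
        long p95 = latenciesMs.get((int) Math.ceil(latenciesMs.size() * 0.95) - 1);

        long thresholdMs = 500; // the agreed threshold level for this endpoint
        if (p95 > thresholdMs) {
            throw new AssertionError("p95 latency " + p95 + " ms exceeds "
                    + thresholdMs + " ms threshold");
        }
        System.out.println("p95 latency: " + p95 + " ms");
    }
}
```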
Otherwise, if performance testing is done just ad hoc, without any control and with an approach that is just hit or miss, it will not give any benefits to your organization or go into improving your application. Of course, when you are doing performance testing, it's not just the job of a tester to write down the test cases; it's probably the tester together with a project manager, business analyst, product owner, or whichever of those roles exist in the organization, just to do some initial brainstorming about how clients, how an end user, is actually using your application.
Then you identify these kinds of scenarios and implement them, meaning the performance testing scripts. And then together, with a lot of technical support from DevOps or whoever is doing the monitoring of your application, and also together with the backend developers, you execute these test cases and monitor what's going on there. And after this testing session, you sit down and analyze the results and, of course, try to find the potential improvements in the application, or at least put improvements into the parts where you maybe don't receive enough feedback, where you don't have a proper response, so you cannot actually see what's going on in the application during these performance testing sessions.
Jonathon Wright I think those are incredibly wise words. It actually resonates with me so much, because I remember when I started out, someone had put out some stats saying that 92 percent of automation testing efforts fail because they're not treated like a legitimate project, right. And I remember a really good session that I went to; I think I was doing a customer advisory board in the US with a guy from Boeing, and he ran their performance testing.
And he sent me this white paper he'd documented; I think it had 92 different definitions that Boeing had for different types of performance and load and capacity testing. And I was blown away by it, because to them it was critical; it's Boeing, and the systems are critical. But not every project is like that. The number of times I see it, which kind of pains me inside, where teams go, well, this sprint we're going to do performance, and you kind of go, well, what about security and the other non-functional requirements, alongside the functional requirements?
There's a breach there, and there's a massive gap in the type of skills. I know many security guys who work for the cyber reserves in the UK, and they do things that are like wizardry to me, right. And I kind of look at it and go, there's no way that I could just install a static code analysis security tool, something like Veracode, running as part of my CI/CD pipeline, and feel that I've done security testing, because I haven't, right. So part of it is the same with performance.
Just throwing some JMeter scripts in there with some load, again, doesn't really prove that much. It kind of has to be a legitimate project, and there's got to be a reason for it. And following on from your wise words: I actually had a client recently who ended up re-architecting their entire platform, an AI platform as well, because there was this kind of concept of, well, we're doing everything as REST. And then they'd look at the JSON payloads, and they were just gigantic, you know, 20-meg payloads.
And you sit there and go, this is not what REST was about, right; there are better ways to do that. I think part of it is that the architecture decisions made by solution architects sometimes don't factor all these components in. And I love what you said about speaking to the business and also speaking to operations, because they're the ones that have got to deal with the managed estate. Like you were kind of saying earlier, you've got something like COVID-19, where everyone's working remotely.
We're kind of nearly used to it. And I've seen from some of your jobs that you've actually worked purely remotely, using Slack as a channel. Maybe there's an issue which is getting reported from your Splunk or your APM tool; say there's an issue that needs investigating while you're online. You go in there and have a look at it, you're looking at the logs, you're kind of finding the problem.
You move it across, maybe try to reproduce it in your own environment, and then look at how you can solve this problem, which is a production issue, right. But at the end of the day, that organization at the front is actually suffering downtime, or maybe a degraded-performance platform, for their customers. And a great example literally happened in the last couple of weeks: one of the biggest food delivery services in the UK, called Ocado, completely went down. First of all it was their mobile app, and they kind of said, oh yeah, well, we're round-robining people in queues, which wasn't the case; they were just taking an absolute hammering. And then shortly after, people went back to desktops, fired up their old laptops because they couldn't just use it on their phone, and literally started to hit the web app, and then the web app went down as well. And then Ocado put out to the press:
unfortunately, we're having to bring all our systems down, and we're going to rebuild and redeploy a new application, hold on for us. Right. But that is ops, who were having to firefight remotely, not able to go and press reboot on those servers if anything's running locally in the office anymore, because they're at home. And now they're in a real situation, because they own the brand of the organization; it's not only the loss of confidence in the platform. I read a really good stat around performance which said that if a customer believes your application is slow, it doesn't matter how much you fix it.
You can tweak the backend, but it's not until a rebrand happens that they go, oh, this application is faster, because customer perception is the only reality: what they think the pace is, is how they perceive it. Take Disney Plus launching across Europe. Disney Plus went down on the first day; we've talked about this in the series already. But the idea is, it's not Disney's servers, right? There are no Mickey Mouse servers there; it'll be Amazon AWS or Azure kind of infrastructure that couldn't cope with the amount of demand. And it had already been piloted in the US before it was rolled out to us Europeans.
But that's the same thing: Disney then gets this hit to the brand, and Disney is brand more than anything, right. They do what they do because they want children, and everyone who has the experience, to have, quote-unquote, a magical day, and whenever I've been there for STARWEST or STAREAST, there's this kind of concept that they put into everyone's head. And what I found interesting about the Ocado issue is that Ocado is actually a subset of John Lewis Partners, and John Lewis's website stayed up.
And also the subsidiaries of John Lewis, like Waitrose; they've all got different infrastructure. I remember sitting with a guy called Paul Smith, who was head of performance for John Lewis Partners, and he said, yeah, we're just spending this huge amount of money bringing all the different companies together onto the same platform so we can maintain them. A fantastic project. A couple of years in, with the transformation halfway through, they kind of went: we can't do it, everyone has to manage their own ecosystems, because these guys are on .NET and these guys run Java; we can't bring the family together. We can't get the developers on both sides on board; they're now using ReactJS and not MVC. There were too many arguments, so we've just let them all do their own thing. And now you can see why, by doing their own thing, one of their three entities has pretty much been wiped off. And I think this is really interesting, because the architectural decisions that are made team by team also affect the company's services and the different associated services.
Disney Plus has probably got a different team to the rest of the Disney capabilities. And I know Dean Leffingwell has just updated the new SAFe model with this really big focus on architecture: solution architects, and then at a lower level you've got architects within the team. Because it feels like we used to have these great enterprise architects, who were kind of gods among men, who walked in and said: yes, this is the architecture which we'll have. But I had a good friend, Stuart Moncrieff, who runs a website called MyLoadTest, and he said to me, when he was doing some analysis: I can pretty much tell you the architecture of every company that I go and look at, because I can see when the company was formed, and I can see what at that point in time was the best stack for that particular period. These guys are using three-tier, these guys are doing services.
He could point to it because he could look at it, resolve the IP, and work out quite quickly what was hosting it and what the web server was. And you just sat there going, that's amazing, because architectural decisions that were made three years ago differ from what they would be now. It's really, really fascinating. I know you said at the start that you've got this massive passion for what you do in software testing and quality, and I think it's that kind of passion that everybody needs to have, because it is really hard.
It's a combination of upskilling against these different technologies, like you said, learning Java and then having to switch platforms each time you go into a new role, being very flexible. And it can be quite daunting for those people who are starting out, because the landscape is so hard. Going in and saying, well, I want to do some infrastructure as code, or resilience testing, or this idea of chaos engineering the way a site reliability engineer would: okay, well, we'll purposely have some scripts that will bring down the Kafka whilst I'm sending the frontend some requests, for the producers and the consumers.
And part of it is: what happens? Does it spin back up? Does something get lost? In the old days, with the enterprise service buses, you could kind of guarantee that the message was still going to be there. But that message can be critical: if that's a healthcare system and that's somebody's scan appointment, the fact that it's gone missing and no one can find it is a real problem. And you wouldn't believe these things happen.
But I've seen organizations where they're like, I have no idea where it is. It's not like a terminator at the end of a network cable where it just poured out onto the Internet. It's literally that something's gone: a system has either rejected it or not handled it properly, and it's been written to a log, but no one's ever going to know. And I find that absolutely fascinating and also terrifying at the same time.
But I think we're in a brave new world. And it's actually been amazing to have somebody who's got that much wisdom and knowledge around that whole journey and the importance of QA and testing throughout that process. So do you have any recommendations for people, as far as training material where they can find out more, as well as a good way to contact you?
Nenad Cvetkovic OK, definitely. They can contact me on LinkedIn; that's probably the best way. Regarding learning resources: well, of course books help, and there are some pretty interesting courses, obviously on edX and, I think, Coursera, and add to this Udemy, which is pretty popular and where you can find courses pretty cheap. I cannot recommend any specific one, because after some time you just want to go to some advanced level, and then you get bored after like one or two hours and just give up.
But my approach is usually to read books and read articles. My library has lots and lots of books which I didn't finish; I started them, but then something happened and I forgot about them, at least temporarily. The things that are occupying me now are mostly metrics: how to actually measure your quality, how to measure your quality assurance process, how to improve quality assurance in your organization, how to improve software development processes as well, and basically how to extract these detailed tasks and translate them into some meaningful measurement which can at least give you some hints, some idea whether your organization is improving or actually going in the opposite direction. I'm currently in this leadership position, and for me, I would say I've been struggling to establish some meaningful objectives and key results for the team which I am leading. I thought it would be much, much easier, but when you actually start doing it, it's not that easy. And basically, what I figured out is that these objectives must have some connection to your business goals, and especially in these kinds of times, when everything is changing and the future is completely unpredictable, it's a little difficult to find objectives that relate to your business. Also, when you're talking about objectives, these objectives must be meaningful to the team, so that they see that there is actually some target there: their personal progress, and also progress toward better quality of the product. So these are the topics keeping me busy.
Now, I've been out of this test automation theme for the last, let's say, six months, but this is, I would say, just temporary; I will get back into this mindset pretty soon. I think everybody knows what I mean when I say Google is your friend: just Google whatever you're trying to learn, and you will find useful sources. A big discovery for me was Twitter. I didn't use it until 2018, when I went to some conference and one of the presenters was talking about how useful it could be. And I find it really useful, because there are lots of people you can follow, and they share their knowledge.
And this knowledge is accessible to you; you don't have to actually try to find some new approaches or some new points of view on your own. They actually give it to you, and you just have to consume it. So my advice is to use Twitter and start following some people who you think are important for you or your job. And regarding learning and training: of course, there are these two schools in software testing, one is the context-driven school and the other is ISTQB, and I don't want to take either side. I took some certificates in ISTQB.
For me, it has some meaning. Maybe some people say it's meaningless, but I speak from my point of view: for me personally, it was a good thing, and I think I learned a lot studying for these exams. I also don't neglect the other side; I try to read whatever I can regarding this context-driven school, and it's really useful. The mindset which context-driven testing brings is extremely useful for every person who is a hands-on tester.
That's all. So, OK, there are different aspects; they also have their own views on test automation or test management, these kinds of things. But I would say, for every person who is out there and is testing, it's not crucial to be part of a certain school. You just take what you think is meaningful, read as much as possible, and stay informed. And if you prefer one side more than the other, it's okay. So I think that there is still no one solution for every approach.
Jonathon Wright I think that's great advice. And it's interesting; it feels very much like the LGBT-plus debate, with the different camps all added together. I've kept out of that kind of conversation, and I do remember a close colleague of mine, Julie Gardner, saying, why can't we all just get along, in the sense of bringing the two schools together. But yesterday we had Adam Smith, who's on the ISO standards committee for the new AI standard, and he was kind of talking about the fact that it's a great way of communicating.
And I think it's really interesting, because you're now going after the holy grail of automation, right, which is reporting, where nobody has boldly gone before, because it's just too hard. That's why you see TestDirector and QC and ALM have all got the same schemas; no one really wanted to go back and try to re-understand a new kind of schema registry for what the complexity of the entire CI/CD pipeline would look like.
And I think that's somewhere people are really challenged, because there are so many layers. And part of peeling back some of those layers is going up to, well, I've always talked about the exec scorecard view, which is exactly what you talked about: what is that meaningful value that sits at the top of the business, right. We're skipping past revenue generation, but it could be efficiency, it could be anything. And sometimes there's this kind of goal for an organization where they say, well, we want to improve efficiency by 20 percent in the production lines, or whatever it may be. And then measuring that from start to finish, through multiple projects, apps that get commissioned and decommissioned, teams that get moved; there's so much information all the way through that. I think it's quite hard to get up to that cascading KPI level to say, well, what is that measurement? And I remember a team that I was working with, which was a fairly large team, and the metrics were based on each team.
So there'd be certain teams within, let's just say, derivatives over here, which would be like, well, actually, what's important to us is the quality of the transactions within a certain accuracy; so, therefore, we're running whatever, and this is our metric. And on their kind of wiki landing page, and I know a lot of people are starting to do this in Git, they'd have that information about the KPI that matters to them. And then a level above, they'd be harvesting that information to say, well, actually, we rely on the derivatives platform, but we also rely on the FX platform, and their teams have different metrics.
But we'll pull those together, because a green over here and a green over here means that they've both got something that's ready to go into the end-to-end, upstream and downstream aspect. And what we did, which may be useful, is we started pulling data out that was specific to each team onto a dashboard. And it was interesting, because people love the idea of dashboards, and the number of times I go into organizations and look at a dashboard, I'm so excited to go and look at it.
And I go, what are you showing us? And they're like, production issues. Okay. And then I go to the next one, and it's like, oh, the close time of production issues. Okay, great. And then I go over to another one, and it's the Dynatrace, you know, beautiful graph. And I go, why are you showing that? Fantastic graphic. And they're like, oh, that's the sexiest one we've got. And it's interesting, because when I speak to Andy, he's the chief evangelist for Dynatrace,
he said, we're actually going to disable dashboards, because people have got the tendency to put stuff up there which is meaningless but looks really awesome. Because we put up where traffic's coming in from around the globe, and everybody likes to sit around and say, oh, we can see how many people from China are sending messages over to us. It's like, yeah, we know you've got McAfee security, great, but is that meaningful from a business perspective? Are all your customers in China? Is that why you want to keep an eye on the quality there, on what they're experiencing, any outages, et cetera, et cetera?
And I think you're able to focus the teams, and we did it to a point where we made sure it didn't have blame on it. So the idea was: this is broken in the pipeline, not, it was Andy's code that broke it. It was more about, well, these are quality gates, somebody needs to jump on that and go and pick it up. And then we'd have the same stuff over in marketing, right. And I remember working for a company which decided to go, so to speak, full DevOps, and I don't know if you can ever go full DevOps, which is always a bit of a joke, but bringing marketing into these kinds of teams.
So marketing would make a decision, we're going to launch this new product, and then they'd walk off and, five months later, go, why aren't we launching around the Olympics or whatever? And the answer is, well, we couldn't build it in time. It was literally bringing everybody into the business and having a different viewpoint, because that meant a lot to them. Marketing wanted to know the progress of things, but they didn't speak the same language.
But they wanted to understand: not things like Java and all that stuff, but where the progress was, so if the timelines were slipping they could make adjustments to their marketing strategy or their media strategies. They also had this thing where marketing could come and put onto the board all the nice-to-haves that could potentially be valuable, if there was additional time, little stories that were small enough to fit in and made sense. And I think this is a fascinating area. So we will hundred percent have to get you back on the show once you've got your head around metrics, because you might be shaking your head backward and forwards given how complicated it is, but it would be great to get an idea of how you got through that process and what is valuable. How do you quantify the business value? I've seen a few organizations that go, well, I can tell you exactly how many story points are going to get delivered in September. And I'm like, that's amazing.
But what does that mean for the business? They're sitting there going, well, we've got a capacity of a thousand story points per every three PI planning sessions, and yet that still doesn't mean anything to me. Did that really make a difference to the business? So good luck with your incredible task. We will definitely get you back; it's been an absolute pleasure to have you on, and thanks so much for being on the show.
Nenad Cvetkovic Thank you. Thank you very much for inviting me.