Jonathon chats with Joel Montvelisky of PractiTest about changes in the testing world and how testers and quality assurance managers can keep up.
Related Links:
- Subscribe To The QA Lead Newsletter to get our latest articles and podcasts
- Check out PractiTest
- Connect with Joel on LinkedIn
- Follow Joel on Twitter
Other articles and podcasts:
- About The QA Lead podcast
- 5 Key Differences Between QA and QC
- Automation Testing Pros & Cons (+ Why Manual Still Matters)
- 6 Hacks For Great Quality Engineering In Remote Dev Teams
- QA Tester Jobs Guide 2020 (Salaries, Careers, and Education)
- The QA’s Ultimate Guide To Database Testing
- What is Quality Assurance? The Essential Guide to QA
- Top QA Experts & Influencers To Get Inspired By
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Audio Transcription:
Intro
In the digital reality, evolution over revolution prevails. The QA approaches and techniques that worked yesterday will fail you tomorrow. So free your mind. The automation cyborg has been sent back in time. TED speaker Jonathon Wright's mission is to help you save the future from bad software.
Jonathon Wright This podcast is brought to you by Eggplant. Eggplant helps businesses to test, monitor, and analyze their end-to-end customer experience and continuously improve their business outcomes.
Yeah, really good. How are things?
Joel Montvelisky Busy. But this is good, I think, nowadays.
Jonathon Wright It is, yeah. I think that's the way it's got to be. Israel's two hours ahead, is that correct?
Joel Montvelisky Which means that you actually started the day earlier. Well, I'm an early starter, but I do appreciate you starting late. So it's cool.
Jonathon Wright Oh no, no, it's awesome. So, yeah, it's really good to talk to you, actually. I was chatting to somebody from Tel Aviv yesterday from a company called Lightrun.
I don't know if you've come across the company. I was part of the BlazeMeter acquisition, and that team kind of spun off. Yes. So part of them just started a new company which launched a couple of weeks ago, and the other part of the team did Lightrun, which is a debugging-in-production application, which is really exciting. But, you know, something new. But yeah, you date back all the way through to the Mercury Interactive days. I'm very interested to hear your entire story.
Jonathon Wright So, you know, wherever you'd like to start, anything in particular.
Joel Montvelisky There's before Mercury and after Mercury, OK? Did you want to ask anything in particular? I really don't know. I mean, Mercury is usually what people want to hear about because, you know, it's TestDirector and stuff like that, but we won't stay in that category. I think there are many young people who don't realize there was once a Mercury company, so there's a little bit about dinosaurs as well. There is the OnlineTestConf, which is actually something cool that a lot of people are enjoying; it's gaining some traction. There is the State of Testing survey, which is also cool. Whatever. By the way, I saw that you're involved in Safe Paths. We're also trying to lend a hand in there as well.
Jonathon Wright Yes, I think I might actually be using PractiTest for Safe Paths. So what are you doing there?
Joel Montvelisky Well, I'm working with, what's his name? Wow. Dmitri?
Jonathon Wright Dermid, yeah.
Joel Montvelisky Yes, that's it. Dermid. Yes. I'm still trying to understand how to pronounce that; I had to practice it for a while there. OK.
Jonathon Wright It's a strange, strange spelling, actually. Obviously, he's based out in Italy. And yeah, he is a great guy, though. So what are you doing with the Safe Paths guys then?
Joel Montvelisky Well, we provided them PractiTest, basically, to work with them. In the middle of corresponding with him, he told me that the people who actually wanted a very scripted approach to this project left it, and right now he's looking into a less scripted one. So I wanted to work with him and show him that, because I'm not sure that a very scripted approach is good for a project like Safe Paths, where you don't really have people who are working full time and stuff. On the other hand, based on what I understood, and I also see that some other companies are working with you, I think that automation might be a good point to integrate into PractiTest. On the one hand, working lighter in PractiTest, using the exploratory testing features, this charter idea. And also showing the whole system, showing the results of the automation. If you are running, for example, nightly builds and stuff like that, it's pretty cool when you work on open source projects, basically because you can ensure that no one is breaking the system. And you need to ensure it even more the moment the team is distributed; if you don't have continuous testing there, then I think you're working with, at best, a second-best approach. So that's what I wanted to talk to him about. I sent him an email yesterday. But as I obviously understand, everyone's busy today. It's incredible. We're busier now. There are two modes: you're either unemployed or busier now than you were back in January, which is kind of mind-blowing if you ask me.
Jonathon Wright Yeah, since the pandemic, I've been busier than I've ever been, really. And it's kind of interesting because, on the one hand, there's a lot more virtual content, which is great, and I see the benefit of it. But there's also this kind of challenge around projects like Safe Paths. It's such an interesting project, but, you know, getting critical mass with crowd testing is actually hard.
I've been asked to do a session with Michael Bolton and James Bach and a few other people in a couple of weeks' time around crowd testing and, you know, testing in the wild or testing in production. And I think it's such an interesting challenge that people haven't really wanted to talk about. The Safe Paths application is a perfect example of having to test something in production; we're testing again in Boston at the moment for the rollout. It is really, you know, a bit of a challenge, because obviously this is using GPS, using synthetic data, to understand routes through. And I'll actually be using PractiTest for the actual test management of the project. That was the first thing Dermid got me set up on, as the administrator, which is quite exciting. And part of it is he wanted to do this idea of having charters. And, you know, there's quite a lot of structure there, which I don't know if that's something you've seen. I know you mentioned TestDirector. But, you know, from kind of the synergies of how TestDirector grew up, you know, to QC, ALM, Agile Manager, whatever it is now.
Joel Montvelisky I think that structure has been a myth. And I'm trying to be politically correct here, because I really cherish my time in Mercury and Quality Center. But in hindsight, I see a lot of mistakes that we made there as a team. One of them was basically that we felt we were listening to testers, but we were trying to complete their sentences more than listening critically to what they meant, and trying to understand, to take the analogy from Wayne Gretzky, where the puck is going to be, and basically skate over there. That is something we are now doing at PractiTest. To give an example, at the core of the product team,
we are all testers. The people with the least experience have five years in testing or more, and I have 20. But still, when we go out, we put challenges to customers, or rather the challenges the customers have, and then we listen to how they would solve them, not to the functionality they ask for. Which is something that's kind of weird, because you're basically delegating the ideation process to your end-users. But in the sense of testing, it's better, because in most companies the product team is made up of people who evolved from development or from marketing: developers who became product managers, or marketing people who became product managers. And then it's like saying, hey, I'm a developer, I've seen a lot of testing in my years, so I'm going to tell you how this should be done.
And that's where you say: it's close, but it's not the way it is. So if you're looking at, for example, the structure of Quality Center: it's close, but it's not necessarily so.
OK, so that is where you have people who say either "no, it's too much for me" or "it's not answering my needs that much." That is basically the change that we're trying to make. I think that one of the failings of many tools is that they oversimplify the complexity of testing, and testing is anything but simple. So that's one of the things that we're trying to do with the product. But again, I could go on and on about the philosophy behind PractiTest, and customers can see it uncensored; they just stay with it and take us with them to other organizations. So much so that, if you notice, we're the only company in the area who hasn't raised capital at all. We're self-funded; we've never raised a dollar. Everything that we run, the whole operation, is actually paid for by customers working with the system. And that is because customers stay with us for a long, long, long time. That's just one of the things that I like about the company.
Jonathon Wright I find it fascinating. You'll have to clear up a myth for me around TestDirector. I remember someone saying to me that the original schema for TestDirector was done by a student who came in on a sabbatical or something, a couple of months' work experience, put it together, and then left. And then there were a lot of people reverse engineering what he or she had done, and that kind of formed what was or wasn't TestDirector. Like, I remember TestDirector 1, which just looked like an Excel spreadsheet system.
And then I remember the schema afterward, using it through the TD API and stuff, and it didn't really change that much all the way along. And you're right, it was quite rigid: there was a forced mapping of requirements to test cases to a particular kind of structure, which I guess people just don't work to anymore.
But, you know, I remember being in Australia when I was an ALM Ranger for the Microsoft team, when they started their Microsoft Test Manager (MTM) product. And I remember a quote from one of the product owners at the time: "We're not interested in testing. It just fills out the lifecycle as far as the Visual Studio products." And so obviously there were a lot of challenges with MTM and with what TFS grew up into, Visual Studio Online, or Azure DevOps now. But part of it back then was things like "bug": people really wanted to change "bug" to "defect". It was one of those little things, but if you broke the schema design on TFS, which I did a number of times, sometimes it wouldn't even start. So part of it was there was never that flexibility of being able to customize the workflows to how your organization works: if it's a defect lifecycle, you know, a developer can't just fix a bug without going through review, that kind of stuff. So do you see that still happening, and does that flexibility happen in PractiTest?
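(A minimal sketch of the kind of workflow rule Jonathon is asking about here: a defect that cannot move to "fixed" without passing review. Purely illustrative; the states and the code are hypothetical, not PractiTest's or TFS's actual model.)

```python
# Hypothetical sketch of a configurable defect lifecycle, not any tool's real API.
# Each state lists the transitions the team's workflow allows.
WORKFLOW = {
    "open":      {"in_review"},
    "in_review": {"fixed", "open"},    # a fix must pass through review first
    "fixed":     {"verified", "open"}, # reopened if verification fails
    "verified":  {"closed"},
    "closed":    set(),
}

def transition(state: str, new_state: str) -> str:
    """Move a defect to new_state, enforcing the configured lifecycle."""
    if new_state not in WORKFLOW.get(state, set()):
        raise ValueError(f"Transition {state!r} -> {new_state!r} not allowed by workflow")
    return new_state

state = "open"
state = transition(state, "in_review")  # developer picks it up
state = transition(state, "fixed")      # OK: review happened first
# transition("open", "fixed") would raise: no fixing without review
```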
Joel Montvelisky It is a lot more permissive in PractiTest, but I think we need to understand that development changed. The moment you accept that Agile is not so much a methodology, and you say, hey, I no longer have the single tester persona doing all the testing, then it can be so varied. Even within PractiTest we have testers; we have support people with us, who are closer to testers; but then we have developers with tests, who are very far away from the mental scheme of a tester. And so everyone has different needs, especially if you're trying to mix them together. Because I think the reality of most teams today is that you have a bundle of people who can do testing, and they come in all kinds: very expert testers, middle-expertise testers, support people, and people who are not expert testers at all. Not only that, but their approach is a little bit different from that of the tester. So you're basically trying to prioritize which tests you want to give to each: you give the simple testing to those developers, and the more complex testing to the testers in the middle. But still, you're counting on all of them to find the issues and to verify that it is working correctly. So we understand that mixture, and we try to provide solutions for everyone. And it's totally OK to say, hey, maybe I don't want to report a bug, but I still need to document what was done. That is where the exploratory testing module should help you. On the other hand, you might be working, like you're saying, maybe even on Safe Paths, on a part of the product that will need to be certified. And if you have certification, or testing for certification, as much as you hate it, and I'm sure you do, as does every other tester, you will need to document every step with the expected result and the actual result, because that's how certification works.
And so you need to have it for the audit or for compliance, and you will need to put it in there. So we understand that testing is complex. By the way, the same goes for automation; it's reaching a place where it's stable enough. My history, people don't realize this, is that not only did I work on Quality Center, but I also managed the QA for WinRunner and XRunner for a couple of years. So I know what flakiness means. And I remember when our salespeople used to go to customers and say, oh yes, you have codeless automation, and everyone within the team would just hold their heads, saying, why are you saying that? That's a lie. And they'd say, well, it sells, so we can say it. So now, again, I have very good friends working at Testim.io and colleagues who work at TestCraft and Mabl and so on. Yes, you can have a codeless automation start, but any automation that is going to be robust enough will require code. But we're at a place where it has become more robust, so we even see the use of automation in testing increasing, and not only with customers. I will say, though, if you tell developers that they are testers, they don't see it that way, unless you start saying that every one of their unit tests is going to be counted as a test. In PractiTest, for example, unit tests are an integral part of the system. Why not? Why shouldn't they be? People somehow decided that unit testing is part of the development process, and the development process is not testing.
Says who? Ninety percent of the functional tests in the world today are being run by Jenkins, if you ask me. So I think it's more complex than that, and that is what we're seeing now. And if you don't have flexibility, then people are going to turn back to Excel, because Excel is a lot more flexible than anything else. The problem is that Excel is over-flexible, and then when you want to organize something, there's no way in heaven to actually do it.
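(To make "Jenkins runs the functional tests" concrete, here is a minimal pytest-style functional check of the kind a CI job would execute on every build; the URL and endpoint are placeholders, not a real service.)

```python
# Hypothetical functional test that a CI server (Jenkins, CircleCI, Bamboo...)
# would execute on every build. Endpoint and payload are illustrative only.
import requests

BASE_URL = "https://staging.example.com"  # placeholder environment

def test_health_endpoint_reports_ok():
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"
```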
Jonathon Wright And I think automation, like you said, is really key. I started off in the 90s with XRunner, and then I moved on to WinRunner, all the way to 9.2. And then Astra QuickTest, which then turned into QTP, and then obviously UFT, now UFT One, which ironically I connected, 15.0.2, to my machine the other day, where I've got my Oculus Rift attached to it. And it popped up there in VR saying it's a widget, it's mobile, would you like to load it up? Whether it would actually interact with VR would be a massive question. But, you know, I know Raffi and the team, who you know quite well; they've obviously come out of Israel, which has always had some of the best companies coming out of it. They've always had the talent. And I wrote a book on UFT, it was called a UFT cookbook, which was all about patterns of reuse, and I know Joe Colantonio also did a book on UFT. And, you know, it was really interesting because there was such a large following behind it, and it starts to get into this kind of keyword-driven stuff. And then obviously CI/CD came along, and that was kind of a weak spot of the whole product lifecycle, which otherwise actually worked very well. And I remember working with the R&D guys when they did what was called, I think, LeanFT, which is now UFT Developer, and they were obviously trying to get into the IDE so that they could start getting into the Jenkins pipeline. And again, they were still not really servicing that area. Then obviously Selenium came along, and there were a lot of other ones. As you mentioned, companies like TestCraft, which was acquired by Perforce literally a couple of weeks ago. And I know we've had Eran on the show; he's a good friend. I did a book with him back in the day, the Digital Quality Handbook, which is great, and we've just finished another one, which hasn't got a title as of yet. Obviously, the Perfecto guys have been working on the MIT project too. And part of it is, Eran's always been a big one for things like the Quantum framework, BDD, to kind of get things going in a CI/CD pipeline. That's always the way they've lived. And then, of course, Eggplant has now been acquired. I work quite closely with their R&D and Antony and the team, and with what they're doing with MBT, the model-based testing technology, and I know you integrate with Eggplant, which is really good. They've been acquired by Keysight, a hardware company, kind of the equivalent of what happened on our side: we were acquired by CA, and then CA was acquired by Broadcom. It seems to be the logical way things go now, a hardware company acquiring a software testing company, which is very strange. But, you know, that is the way of the world. And it is interesting because, like you said, there are not many companies in this industry which have not had seed funding or capital investment. And more to the fact that not really anyone has focused on what was the biggest part of the application delivery lifecycle, the ALM kind of thing, the test management tool, which always sat at the heart.
And, you know, people like using Jira now. Can you integrate with those guys? You've got things like Zephyr. It was all kind of an afterthought of what would fit into a product lifecycle.
And then, you know, I know you do GitLab as well, this kind of GitOps approach to, well, actually, where should the tests live? Should they live at the CI/CD level, like you mentioned? And that's where things get built on top of the Jenkins pipeline. But it's not just Jenkins. You could use Octopus. You could be using multiple, obviously different, tools.
Joel Montvelisky Even though I think Jenkins does have one of the biggest shares out there, just for usability and price, we are seeing a little bit more variety. By the way, I myself prefer CircleCI; I really, really enjoy it. But if you want to use Bamboo, go ahead and do it. It really doesn't matter, because the functionality and the objective are going to be 80 percent the same; the differences are just around the peripherals: what works, which reports are better. I think it's very much a commodity right now, in that sense. But it's a good question what's going to happen to quality in this sense, and where DevOps can actually feed into all that. I'm still thinking about it, in a sense. I love the thoughts of Alan Page and Brent Jensen, the Modern Testing approach. I have some reservations about it, and I've talked to Alan about them, and he doesn't agree with my reservations; that's all good and fine. I really like their work. And I think what we're seeing here is basically a migration from testers into a quality role within the company. A very rough analogy might be security: we once used to have people who were only in charge of security testing, security development; those were the security guards. Today, everyone does security to some limited extent, but you still have those security specialists, the security guards, to do that part of the work. And I think testing is heading somewhere like that. Another good analogy with security is that security used to be something you did before you released a product, but now that we're working in the cloud, security is something you do all the time. And testing: it's simply not economical to do it all before you release the product. I mean, even if we look at a company like PractiTest, there are some tests that need to be done before release, and there are some tests that it's just not practical to do before. So you do as much as you can, but then you deploy very carefully. And instead of only testing your product, you start testing your deployment, then you start testing your rollback, and you start testing your monitoring, because those are the tools that are going to help you do testing in production. Testing in production is nothing more than good monitoring and excellent working-in-production skills. Now, my question is: what will happen when we start doing development in production? OK, that's going to be the funny part. People do it today. The question is, when is it going to become institutionalized that not only testing is being done in production, but also development is being done in production? And I think some of those things are already there when you start having extremely customizable applications and frameworks. I have a 12-year-old who uses Roblox, and when I started to look into it, it's just a framework for game development, and people are building open games in production right now. So I feel that that's where we're going to go. And we need to understand that we'll still need dedicated testers, but they're not going to be only dedicated experts; these people will be your test experts, teaching and preaching testing to everyone, everywhere, and doing some testing themselves. But what I'm saying is, we're never going to do all the testing before production.
We will need to start testing our frameworks, and we'll need to start understanding how to do data analytics in order to test in production and find those bugs faster. And that's a migration, by the way, where automation becomes a very important tool. The automation that you start before production basically evolves into the monitoring that you have in production, because you want to have those assertions that are measured: you start by measuring them with your tests, but then you measure them with real-world usage. And if you think about it, we already did that back in the days of Mercury. It's just that everything was a lot slower than it is today: we still got logs from customers, and we analyzed the logs in order to find the bugs that were hard to find. And we took the databases from customers and analyzed them in order to find the interesting parts. So we were doing delayed testing in production, because there's no real way to simulate users unless you actually work with the users. We've been preaching that for 15 years, 20 years, maybe even more than that. Only now have the tools become stable enough, fast enough, good enough to do it live. I know this is just a thought.
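(A rough sketch of Joel's point that pre-release assertions evolve into production monitoring, assuming a hypothetical checkout service: the same invariant runs as a CI test against staging and as a scheduled probe against production.)

```python
# Illustrative only: one invariant, two contexts. Before release it runs as a
# test against staging; after release the same check becomes a monitoring probe.
import time
import requests

def assert_checkout_responds(base_url: str) -> None:
    """The shared invariant: checkout answers quickly and successfully."""
    start = time.monotonic()
    resp = requests.get(f"{base_url}/checkout/ping", timeout=3)
    elapsed = time.monotonic() - start
    assert resp.status_code == 200, f"checkout unhealthy: {resp.status_code}"
    assert elapsed < 1.0, f"checkout too slow: {elapsed:.2f}s"

def test_checkout():                 # pre-production: run by CI
    assert_checkout_responds("https://staging.example.com")

def monitor_checkout():              # production: run on a schedule
    try:
        assert_checkout_responds("https://www.example.com")
    except AssertionError as err:
        print(f"ALERT: {err}")       # stand-in for paging/alerting
```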
Jonathon Wright No, no, it's great. We'll have to get you on the testing in production discussion panel on the 2nd of July. It'll be really interesting to get your take. We might even try Alan as well, because when I last caught up with Alan he was still at Microsoft, and now he's moved to a gaming company, Unity. Partly what we were talking about when I spoke to him last was what he'd been through with things like Xbox, which is the example where I actually spoke to somebody from the Forza team on the podcast, actually.
And, you know, they had this Microsoft Operations Manager, MOM, which used to be running on the Xboxes. So you could see there was a frame-rate drop when you'd go round a corner on a particular hardware version, and that information, the analytics, would come back and they'd be able to fix it. And partly what I was getting at was the difference between him starting at Microsoft and when he did the book, How We Test Software at Microsoft: the fact that that information is in real time now. Instead of waiting a week to get that information, you can see what the user is doing right now. And obviously with the Xbox Series X coming out, and obviously the PlayStation 5, you can start seeing that there's no possible way gaming companies can do the number of hours of testing that could be done by millions of users concurrently in production. So there are kind of two sides to it, I guess. The first one is how we get to the point where we can actually deal with fixing issues as they're happening, and self-healing environments, which I think we're still quite a long way away from. Like I said, I was talking to another company in Tel Aviv that does debugging in production, so they've got this capacity to be looking at things live, talking through and seeing what's happening, putting breakpoints on. That's something that's never really been done. There's been this kind of view that things happening in production are taboo, that they shouldn't happen. Maybe we are not encouraging the right behaviors. And like I said, we've got to look at this right-hand side as actually a state which you could edit: you can make live changes and then wipe them back out, have a shadow copy of that running in parallel to make sure it works, so it has something to fall back on, which is what we've done for many migrations over many years, your replacement system running side by side. There's no reason why the new code and the old code can't run together, from an A/B testing or a dark-launch perspective. We've got the technology now that allows us to do that. But it's interesting, because I actually think what you just said there feels like something we've done before; we always go around in large circles. Think back to the punch-card days. Dorothy Graham is a good friend, and we did the book on Experiences of Test Automation, which Alan also contributed to, and Dorothy was dubbed the grandmother of automation; she was supposed to have written the first line of automation in '67 or '72 for Bell Labs. And when I was in my Mercury days in the 90s, the Software Test Automation book was my Bible, but it was telling you how to read flat files, comma-separated versus fixed-width, and all sorts of stuff like that, which we don't have those kinds of problems anymore. But actually, I think maybe we do, in a sense. Take exactly the example you gave with your son: the way it used to work with punch cards, the compiler ran overnight on these large machines. How far away is that from serverless architecture? Right.
So if you've got, like, a Lambda, you put your code in, it goes off, executes that code in compute time, and then comes back with the result. Part of this NoOps kind of approach is that there's no operations stuff, because it deals with its own infrastructure; it manages itself. So why would operations need to provision anything or monitor anything, or have Prometheus looking at, you know, Grafana dashboards or something? Maybe that's where we're starting to see this serverless architecture potentially be more like the writing, compiling, and executing that we were used to back in the day. So, yeah, it brings a new kind of view to it. I like this kind of flexibility you're talking about. You know, one of the problems with the TFS products, MTM, was this confusing naming convention where everything was a project, and you'd wonder, is that a version? No, OK, that's not a version. Everything was just really hard to manage. How do you manage tests from an application lifecycle management perspective? You've got live tests in there, you've got staging tests, and, like you mentioned, you've got maybe a particular type of chaos engineering tests coming in there. We had Kolton Andrus from Gremlin the other day on the show, and he was talking about the stuff they did at Netflix and Amazon. You've got that, you've got the security testing, you've got your static code analysis tools. But you've also got, on the right-hand side, real-time monitoring of production systems, and potentially issues get in there where they're snapshotted and moved across, saying, OK, this is an environment at a moment in time with this particular security issue, go on and fix it, kind of thing. So how do they manage that entire estate without something as flexible as PractiTest?
Joel Montvelisky Well, it's not that any single thing solves this, but PractiTest can actually help you with a number of things. The only way you can get away with what you were just explaining, and you brought up an obviously very complex scenario, but it's not an imaginary one, is that you need to have something, slash someone, slash a team, orchestrating.
OK. Because it's not only that you have your product in production; you have your chaos engineering, and you have your deployment, and you have your deployment testing. It doesn't matter if you're working serverless or not. You have a lot of overlapping, and one of the main issues with overlapping is that if you assume that everything overlaps, then unless you're paying close attention, there will be gaps. Because if you're so certain that everything overlaps, then you start being, I'm not going to say clumsy, I'm going to say lazy, maybe. And then you start generating gaps. And what happens when something actually falls into that gap? So I still think that the quality architect is going to become a more important role, especially in these types of organizations, because that quality architect, he or she, or maybe even an architecture team, will need to come and say, hey, you know what, let's actually make sure that everything is playing the same tune and we are working in a coordinated approach. Why? Because, yes, as Dorothy said, you want to find some things in production, because it's too expensive to find them in testing, since you cannot simulate the user, the user environment, or whatever they have.
On the other hand, there are still bugs, and we're going to see fewer of them, obviously, but when you find them in production it can look extremely stupid: why didn't you find this before? And if the answer is "because I was not looking for that," that means that your testing was not good enough. We should not get to saying, oh, our testing in production is so good that we do not need to test before production. No, it's the other way around: our testing architecture is so good that we know what we test at each one of the stages, and what we're going to test where. So I think that is becoming more and more important. What actually brought this to mind is that there are some things where you need to use chaos engineering in order to make sure that you're redundant, so that if something happens you will still be stable, because you don't want to be the one that goes down, especially during the pandemic. My wife would kill me, and my kids would kill me too, if I think about it. But in any case, there are some things where you don't need to wait for testing in production to say, oh, wait a second, we didn't think about redundancy.
Meaning, there are some things that are better if you plan them before. By the way, I think that if we go back to Alan and Brent,
one of the things that resonates with me the most about their approach to quality, and they're not the first ones to say it, nor the second, a hundred people have said it, is that quality is not a lack of defects.
Good quality is satisfying the needs of your customer, whether that's the end-user, the person who's actually clicking the buttons, or the one who's paying the paychecks. As Brent puts it, the one paying the paychecks is the stakeholder; the one doing the clicks is the user. How do you know if the product has quality? I think what these guys are saying, and what really resonated with me, is that up to now, let's say back in my Mercury days, we didn't know if users liked the features we released. We had customer advisory boards, and maybe you were in a number of them. We released the product, the world went quiet for a month, and then we got the migration bugs, because large organizations started their migration. So we found the migration bugs, but we didn't know if the feature was successful until about six or nine months later, when our salespeople would come and say, hey, because of the new feature, I was able to generate this and that amount of sales. Today, if we release a feature, we can measure within minutes whether it resonated with your customers. So we're expanding quality: not only "there are no exceptions" or "the system didn't break", but, oh, wait a second, we just turned the button from gray to green and we have 10 percent more conversions. That is quality, and that is what we can measure today. And I think that is the point where testers need to understand that what's happening right now is not a threat so much as an opportunity. Having said that, it's an opportunity for those people who will be able to adapt and to learn things, because the world is changing. And not only for the young, I hope, because we're not getting any younger, but for those people who are flexible. Now, if you need a system, then PractiTest should help. We are actually thinking about making sure that PractiTest will be even more useful when the things we have been talking about become standard reality, not just for the early adopters but for everyone else, because we see PractiTest as a quality system. That is where quality is going, and that's where we think we should be going. And you will need a system, as I said, to help that person orchestrating all of the testing slash quality operations, in order to make sure that we know we're doing the right thing and we know we're doing things right.
OK, so that is the point. But again, it's to understand not where the hockey puck is today, but where it's actually going to be. And obviously it's a wager; you don't know where it's going to be.
But I think that most people who are looking at what's happening right now understand that shift right or shift left is not new. We have been doing it for the last 20 years, but it's becoming more institutionalized, more structured. So that is where we need to be. And again, put PractiTest aside for a second: that's where we need to be as a community. That's where testing has to go.
It just resonated with what you were saying over there.
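(As a back-of-the-envelope illustration of "we turned the button from gray to green and measured 10 percent more conversions": a hypothetical before/after comparison. In reality the event counts would come from an analytics or telemetry pipeline.)

```python
# Hypothetical sketch: quality measured as user outcome, not absence of bugs.
def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors if visitors else 0.0

before = conversion_rate(conversions=480, visitors=10_000)  # gray button
after = conversion_rate(conversions=530, visitors=10_000)   # green button

lift = (after - before) / before * 100
print(f"Conversion lift after release: {lift:.1f}%")  # ~10.4% more conversions
```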
Jonathon Wright No, no, I couldn't agree more. And what I really like about what you've just said there is, you know, what quality is, right? I had Mike Lyles on at the weekend, and we were kind of talking about things like time to quality, right? And what different metrics you could be looking at, or metrics you shouldn't be looking at. And, you know, James and Michael are going to be doing a demonstration of their dashboard, which they both seem to be using, which doesn't have any metrics on it, no numbers. They've got this idea of confidence: when the tester has finished testing, they put something up to say, actually, it's really good, or it's OK. It's all these feelings, perceptions of quality, not "we've done 82 and we need to do 94 to finish the full pack." You know, that was my mistake when I started testing in the 90s. The first thing that happened was I was given a big test execution booklet, which I was going through, and I ticked the boxes to say that I'd checked it. It was a phone system, so it was phone A calls B, B hangs up, C then calls D, and, you know, you know the drill. And I went through the entire pack, which was 700 pages long, went to my boss and said, I've not found a single defect after spending six weeks of my summer internship from university there. And he said, well, yeah, that's what we expected, based on the fact that for the last 10, 15 years it's always passed. And I was thinking, well, what's the value of that? It kind of pushed me into automation.
But actually, I should have thought about it and looked at it for what it was: some basic regression stuff which doesn't really prove anything, and if anything, they were giving it to a student to see how well they'd get on with it. And we know about trustworthiness and transparency, etc., etc. Actually, the focus should be more on focused experiments around what I'm trying to test. You know, we used to love this kind of center of excellence, or enablement, or whatever they used to call it back in the day, where people would go: well, this is the systems integration testing phase, this is the user acceptance phase, and this is business acceptance. We split it all up, and we all had gateways to get in and gateways to get out, and that doesn't happen anymore. And part of that is exactly what you said: why am I focusing my effort on visual testing? Because I've seen some value in it, which is kind of saying, well, actually, I want to see if it's visually regressed, or it's really important from a brand perspective that the logo is in the right place, or something as silly as that. And then the focus changes again to, well, actually, now I'm really interested in cross-mobile-device testing, because I need to check, based on my Google Analytics data, what kinds of devices are being used. And I'm starting to see stuff happening in production as far as conversion rates or bounce rates, or the telemetry coming in from a React Native client saying there have been issues on a particular model of phone, so therefore I'm focusing my energies on mobile. And then I decide, well, actually, that's got some value, but I want to do contract testing: I want to push a payload through the different APIs, end to end, with some data that I want to make sure works. Then you say, well, actually, I want to do some negative testing on that. And each time you're learning something new about the system, and you're also using the tool, a tool like PractiTest, in a different way: you're kind of proving a hypothesis which is structured, and you're providing a greater kind of understanding of the system. It's not just "this is a manual regression pass where I'm running through a whole stack of tests" or "here are some charters, I'm just going to do some exploratory," one or the other. It's actually a blend of lots of different activities that give you lots of different levels of confidence around different types of testing, and that adds some value which is quantifiable through some kind of metric, whether that be numbers of issues or confidence. And I think this is a really interesting area. On the podcast I did yesterday, we were talking about ReportPortal.io, which they use, and the lady was saying, well, actually, we use ReportPortal because we can spot issues with our system based on how long it takes to execute the tests. And I was thinking, OK, what does that tell you? Well, maybe it does show that there's a deterioration in the execution time of that one test, so we should look at that, maybe from a nonfunctional-requirement perspective.
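(The contract-testing idea Jonathon mentions, pushing a payload through an API and checking the response still honours the agreed shape, might look like this minimal sketch; the endpoint and the contract fields are invented for illustration.)

```python
# Hypothetical consumer-side contract check: the response must keep the fields
# and types the consumer depends on, whatever else the provider adds.
import requests

ORDER_CONTRACT = {"id": str, "status": str, "total": (int, float)}  # agreed shape

def test_order_api_honours_contract():
    resp = requests.post(
        "https://staging.example.com/api/orders",  # placeholder endpoint
        json={"sku": "ABC-123", "quantity": 2},
        timeout=5,
    )
    assert resp.status_code == 201
    body = resp.json()
    for field, expected_type in ORDER_CONTRACT.items():
        assert field in body, f"contract broken: missing {field!r}"
        assert isinstance(body[field], expected_type), f"{field!r} changed type"
```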
But at the same time, you know, with the work that you guys are doing as well, there are actually lots of layers to this. There's localization testing, there's accessibility testing, the other nonfunctional stuff, security, et cetera, et cetera, which people seem to kind of forget about and which is actually just as important. Right? You know, if a user is trying to use the COVID contact tracing application, and they're in a certain age group, let's call them the vulnerable age group, over 70, and they use the phone but they've got the screen resolution set to the lowest, they've got the text size set to the highest, and the button is just too big: the one that would say, you know, "check my exposure history" is just not on the screen anymore because of the high-contrast mode, the accessibility options turned on there. Now, yes, some visual testing tool may pick that up, but I don't think those are the scenarios that they run. They're doing visual regression, which maybe shows a few pixels' difference each way, or maybe shows the DOM, you know, the shadow DOM, has changed, so that it looks slightly different, or an icon is in the wrong place. And I don't know, are we not focusing on high-value tests when we should be? Because we should be able to say, OK, we made a bit of a mistake: every cross-browser test we ran, all the regression we ran, was all Chrome. And actually, that meant we've not tried it on the latest Firefox, or the new Edge browser that's now Chromium and not proprietary Internet Explorer. Part of it is, are we missing stuff that we can't see because we're focusing on repetition, regression, and more focused, scripted kinds of activities, and less on imagination and curiosity and all these kinds of creative sides of testing?
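(A sketch of the accessibility scenario described above, checking that a key button survives enlarged text on a small screen, using Playwright for Python as one possible tool. The URL, the button label, and the trick of simulating large fonts with injected CSS are all assumptions.)

```python
# Illustrative Playwright sketch: does the key button stay reachable when the
# user runs large fonts on a small, low-resolution screen? URL/label invented.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport={"width": 320, "height": 568})  # small screen
    page.goto("https://app.example.com/tracing")  # placeholder app
    # Crude stand-in for an OS-level large-font accessibility setting:
    page.add_style_tag(content="html { font-size: 200% !important; }")
    button = page.locator("text=Check my exposure history")
    box = button.bounding_box()
    assert box is not None, "button not rendered at all"
    assert box["y"] + box["height"] <= 568, "button pushed off-screen"
    browser.close()
```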
Joel Montvelisky So that is where you're going. And again, I have very good relationships with both James and Michael, and a lot of people put them in the same category, but I think each of them is interesting for their own specific points; I've had great conversations with both of them. Here's where they say: hey, there's testing and there's checking. Both of them are important, but we need to understand the limitations of both. When you're doing testing, you're counting on the human to use their imagination to find the errors, and it's not going to be very economical to do that at a high cadence, every single day. That's where you actually get a script to do it. But in order to find that issue with the person over 70, it might be my dad, for example, who uses an iPhone, though I'm not really sure how much he knows how to use it or how dexterous he might be with it. For that, you need to start using, for example, personas, and you cannot teach a persona to any automation framework. You may be able to say, hey, you know what, me as a tester, let's actually think about my dad. Oh, wait a second: he uses large fonts, he uses that small resolution. Now I can visualize what he is doing.
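(Joel's personas are guidance for humans, not something you feed to an automation framework; one lightweight way to capture them is as structured exploratory charters, sketched hypothetically below.)

```python
# Hypothetical persona-driven test charters: data a human tester uses to
# imagine a real user; an automation framework cannot "be" the persona.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    traits: list[str] = field(default_factory=list)

    def charter(self, feature: str) -> str:
        return f"Explore {feature} as {self.name}: " + "; ".join(self.traits)

dad = Persona(
    "a 70-year-old occasional iPhone user",
    ["largest font size", "low screen resolution", "limited dexterity",
     "taps slowly, sometimes twice"],
)
print(dad.charter("exposure history screen"))
```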
And that is the part where I'm saying that testing is not becoming simpler. It used to be simpler because we didn't have the number of tools, we were not expected to deliver so fast, and we were not expected to test in production. And by the way, yes, you were mentioning the classic W approach, where for every stage you had a gate: you had the feature testing and the integration testing, and then you did the alpha testing and the beta testing. Go try to explain to someone today what the difference is between an alpha and a beta. But back then, you had a team of five people who were doing all the different testing in the different stages. The focus is a little bit different today. You need to run so fast: we need to deliver in three days, and two of those are going to be like prototyping for development, so you basically give some feedback; you don't do a lot of testing. That's where we need to have that multidisciplinary approach, multidisciplinary in the sense that you need like ten heads: one of them is going to be running your CI scripts, one is going to be running the functional tests, another is going to be doing the monitoring in production, and another is going to be doing the manual testing where you go and play with all the configurations.
So it's a lot more complex, a lot faster. Now, the good thing, and it is a good thing, is that the price of an escaping defect has gone down incredibly. Meaning, I remember a bug we delivered in Quality Center that literally cost us hundreds of thousands of dollars. The same bug today might cost, I don't know, five hundred bucks, when you measure everything up. So the cost of a bug is going down for most applications. Obviously, if you're testing a pacemaker, do all the tests you need to do before you deliver it; but most of us are not testing pacemakers, nor avionics systems. So I think that's what we need to understand, and we need to adapt, and the same goes for automation and automation approaches, because there's not just one. It used to be two: you had manual testers and automation testers. Now, for each one of those, there are any number of specializations. So we need to understand the complexity there; it's important to grasp, because I'm not sure that people realize it. You were talking about Israel before. I was not born Israeli, but all my friends say that I became an Israeli; I was born in Costa Rica. Basically, in Israel you have an attitude that's called "yihye beseder", which literally means "it's gonna be OK." So when people come in and say, hey, wait, what about this and this, the answer is, hey, yihye beseder, cool down, it's gonna be OK, let us do what we're doing, we know what we're doing. And it's that cocky feeling: we don't need testers, we know what we're doing. Yeah, and then it gets to production and it breaks. And that's where you see all the people who have incredibly good technology but extremely bad user experience, and those companies go down very fast. That's what we need to avoid. I've seen it too many times. I've seen all those people who say, we don't need testers, we're working Agile. No, you're working DevOps, so you need very thorough testing, to understand what's going on and to understand the price of your bugs. So I think that, hopefully, the industry is going to change a little bit on that. And that only means that our jobs are going to become a lot more interesting. And what you're saying, that you had six weeks to run a 700-page script: oh, God, I don't envy you. I remember Rob Lambert, we catch up once in a while, and he always talks about that experience where they had like a hundred test cases, and management said, hey, we need to complete all of them, and everyone started grabbing pages, and the ones who finished first were the ones who took the smallest test cases. When you start counting test cases like that, it's like James says: if you count test cases, then you will have a lot of shitty test cases. I hope I can say that. But meaning: quality is going to become more complex than that, and that's a good thing, because I like complex challenges. I think all of us do. You don't want to work in a job that people think is easy, and there are too many people who think testing is simple. Up to now, people said, what value do you provide? I test the system to make sure that we don't release bugs. Then maybe I should have been a cop, just standing next to development and making, I don't know,
angry faces, and they would have produced a better product. But if my job is to make sure that the quality we're delivering is what the customer wanted, and, in the cases when it's not, and in many cases it's not, to get us to the point where we can fix that, then they pay me maybe not to take out the bugs, but to make the product better. That's our value. We're basically guiding the product towards more value. That's the place to be.
Jonathon Wright Yeah, I love it. I love everything you've been saying. You know, I remember toying with the idea of this value-driven delivery kind of thing, value-driven testing. I think this is really important, and with a tool like PractiTest, this idea that these are really valuable tests and they've had a big impact is really important.
Joel Montvelisky And again, hopefully we're able to provide that visibility, and we're able to help people understand what they need to put their efforts into. Let's make order out of the chaos, not chaos engineering, but the chaos of having 5,000 tests: which ones do you need to run today, and which ones are now redundant? That's what we're trying to do.
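(One way to read "which of the 5,000 tests do you need to run today": a hypothetical risk-based filter that selects only the tests tagged with areas that changed recently.)

```python
# Hypothetical sketch of taming 5,000 tests: run only those tagged with areas
# that changed recently, instead of everything every time.
changed_areas = {"checkout", "login"}  # e.g. derived from today's commits

test_catalog = [
    {"name": "test_checkout_happy_path", "areas": {"checkout"}},
    {"name": "test_password_reset", "areas": {"login"}},
    {"name": "test_report_export", "areas": {"reporting"}},
]

todays_run = [t["name"] for t in test_catalog if t["areas"] & changed_areas]
print(todays_run)  # reporting tests are skipped today, not deleted
```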
Jonathon Wright That's brilliant. So, for those listening out there, what's the best way to get in touch with you, or even potentially get a product demo? How would they best reach out to you?
Joel Montvelisky Just go to our website, www.practitest.com. It's very easy to start an evaluation. We do good demos, but the system is simple: just start an evaluation and start to play with it. If you like it, just set up an account. And one thing: we do have a really good customer success and customer support team; we're very practical about that. If you want to get in touch with me, there's my blog; you can go to qablog.practitest.com, and you'll find my ramblings in there. And again, I'm always very reachable. Most people know how to reach me, and I'm always happy to help out in any way I can.
Jonathon Wright Wonderful. Well, it's been amazing to have you on the show, and I'll make sure what we've talked about is in the show notes so people can reach out to you and get your email and LinkedIn. And again, it's been an absolute pleasure to have you on the show.
Joel Montvelisky It's been very nice talking to you as well. Thanks for having me.