Jonathon Wright is joined by Niko Mangahas, the global practice lead for quality engineering at RCG Global Services. Listen to learn more about quality engineering.
- Niko has spent the last 15 years in quality and process strategy and management, a combination of both software testing and process appraisal. He used to work for Hewlett-Packard Enterprise in Asia Pacific, where he was the lead solution architect. [1:03]
- One of the things Niko talks about in this episode is a new type of company, which he calls the digital enterprise: one focused on technology delivery and on speed, able to respond with agility using technology. [2:00]
- Niko also discusses how modern quality engineering needs to be the strategy going forward: to future-proof the ability to deliver with quality and high reliability, while balancing that with speed and value generation. [2:33]
Quality engineering really needs to be addressed from top-down and also bottom-up. – Niko Mangahas
- There is a need for overarching structures for quality, such as a quality strategy and management layer and an automation layer (an automation shared services layer, as it's usually called). Those two layers are essential. [7:45]
- The quality strategy and management layer doesn’t have to be formal. You don’t need many different quality managers; the whole point is that you need a layer that understands the macro trends and patterns of quality risks in the organization. [8:26]
- One of the things that Niko keeps seeing is that there’s actually a lower overall understanding of risk and one of the more glaring things is that there’s also no quality debt tracking. [10:45]
- Niko recommends that the testers in the Scrum develop their own skills to test in an automated way. [12:01]
- One of the things that Niko has seen as a big benefit for many companies is — if you develop a template framework and use that as a guide for the automation needs of many, if not all of your Scrum teams, then that will basically take away the burden of having to build independent test frameworks. [12:50]
- Many teams actually looked at quality engineering as focusing just on the functional. As long as a function is up and you’re able to proceed to the next step of your transaction, then you should be fine. But in Niko’s experience, it’s different. [16:29]
Quality is everywhere. It’s ubiquitous and a big part of that is experience. – Niko Mangahas
- Reliable and responsive: Niko sees those as a function of the performance and responsiveness of systems, and of course keeping them from being down or from experiencing outages. [17:39]
- Reliability is built into load testing and stress testing and things like that. Responsiveness is about performance regression, doing performance benchmarks and seeing that whenever you push out new functionality, it doesn’t really drive the responsiveness of the experience down. [17:58]
- Niko elaborates on how experience is part of what goes on in production. The way Niko looks at quality’s involvement, and how it needs to evolve for a digital enterprise, is that it needs both a shift-left and a shift-right approach. [22:48]
- One of the most classic challenges with performance testing is that you never truly know what happens in production unless you test in production. You only get that experience when you’re rolling out a new product or application for the very first time in a fresh environment. [23:31]
The true measure of a good quality strategy is the predictability of outcomes. – Niko Mangahas
- The more predictable a quality strategy becomes, the more confidence that you have and the more ability you have to navigate around that and to solve it if something does blow up in production. [33:32]
- If you don’t have a good understanding of what happens in production, you don’t have a good understanding of what it takes to stand up the right infrastructure to support systems in production. [34:08]
- What we need to strive for in modernizing our quality engineering approach is a balance between value generation, speed, and quality: striving for high confidence and high reliability, but also understanding what it means for the business so that you can have a good conversation with them. [36:14]
- Modern quality engineering is essentially taking QA and extending it in four dimensions. [41:41]
Modern quality engineering needs to create value for the business. – Niko Mangahas
Meet Our Guest
Niko Mangahas is the global practice lead for quality engineering at RCG Global Services. He has deep experience in the strategy and implementation of Digital Transformation, Process Management, and Quality Assurance disciplines.
Niko has certifications in both traditional (CMMI, ITIL) and modern (SAFe Agile, DevOps) process management, as well as software testing (ISTQB).
He has strong experience driving Transition and Transformation, Project Management, Delivery Management, and Process and Quality Compliance to highly positive outcomes.
Niko is also skilled in creating infographics, data visualization, and marketing collateral. He is keenly interested in Tech Entrepreneurship, Machine Learning, and Artificial Intelligence and their applications.
For me, a quality experience needs to be reliable, responsive, intuitive, engaging, and inclusive. – Niko Mangahas
Resources from this episode:
- Subscribe to The QA Lead Newsletter to get our latest articles and podcasts
- Connect with Niko on LinkedIn
- Check out RCG Global Services
Related articles and podcasts:
- About The QA Lead podcast
- What Is Quality Engineering?
- 10 Best Quality Engineering Tools
- Creating A Quality Strategy
Read the Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Jonathon Wright In the digital reality, evolution over revolution prevails. The QA approaches and techniques that worked yesterday will fail you tomorrow. So free your mind. The automation cyborg has been sent back in time. TED speaker, Jonathon Wright's mission is to help you save the future from bad software.
Hey and welcome to theQAlead.com. Today, I'm joined by Niko. He's the global practice lead for quality engineering at RCG Global Services. Talking of which, he's done some amazing work in hyper-automation, as well as the modern quality engineering e-book. So loads of great material we're going to be discussing today some of the next generation things that you need to be thinking about.
So, don't miss out and without any further ado, I'm going to introduce Niko. Welcome to the show!
Niko Mangahas Well, thank you. Thank you so much. So, a little bit about myself. I've spent the last 15 years in quality and process strategy and management. So, it's a combination of both software testing and process appraisal. I used to work for Hewlett-Packard Enterprise in Asia Pacific.
I was the lead solution architect then, and that was back when we had the Mercury tools, which are the Micro Focus tools now. The QTP, the ALM, and the LoadRunner, and a bunch of others. WebInspect, Fortify, and a lot of the tools that dominated the market back then.
So we did have our pick of clients. And with that, I was given the opportunity to consult for a lot of the top Fortune 500 companies operating globally. And I had that experience of working with different types of industries, different scales of companies, as well as different approaches.
So, one of the things that I wanted to talk about today is the new type of company, which I call the digital enterprise, that focuses on technology delivery and on speed, being able to be Agile in responding to the market using technology. And how modern quality engineering needs to evolve to address those needs, moving from the historic perspective of very formal release management cycles to the highly Agile digital enterprise company.
So, I want to discuss how modern quality engineering needs to be the strategy going forward: to future-proof the ability to deliver with quality and high reliability, as well as balancing that with speed and value generation.
So, a little bit about what makes me very passionate about this: as you've heard, my background is really a combination of process and testing. And what I've seen, especially recently, is that whenever companies run into a problem, the first instinct, especially for digital native or digital enterprise companies, is to try to tech out a solution.
They say, oh, you know what? I can build code that patches this up. But in my experience, that's rarely the right long-term solution, because you need to always tie it back to business goals and business objectives. You want to understand the underlying process, and then that's when you leverage technology to address the solution, and tailor it to fit the requirements in a way that suits the overarching quality goals.
So, right now I am the leader for the quality engineering practice at RCG Global Services, which includes DevOps and experience quality. And that's one of the things we're passionate about, driving this message to our clients.
So, I've personally worked with, I would say, between 40 to 50 clients around the globe across many industries. But because I represent the practice for several hundred quality practitioners, I bring the collective experience of our company across many of these clients around the world as well, beyond the companies that I personally worked with.
Yeah, that's what I wanted to talk about today: what my experience brings to the table in terms of addressing those challenges. Because one of the things that I see is that companies look at quality as, hey, you know, this is a thing of the past, we don't need to test this much. We need to approach things with speed.
The compromise often comes from the quality side of things. And that's why in recent years I've seen many companies decentralize their quality function, their TCoE or what have you, and say, you know, testing needs to live within the sprint or the Scrum teams.
And some companies have even gone beyond that and said, you know what? Let's go all the way. Let's have developers test as well, which I never really recommend. And while that may be a good solution to increase speed and productivity, which is in line with many companies' business goals, one of the things that we keep seeing is that it creates quality blind spots, major quality risks.
And we've seen this because we've been called into companies that did this, and they're now paying the price for it. But we've also seen it in companies that have been doing things Waterfall style for the last 30 years, and now they're shifting to Agile and DevOps, and they're seeing cracks in their strategy.
So that's why we've established a strategy that fully addresses those gaps that we keep seeing. And that's what I want to talk about today.
Jonathon Wright No, that's fantastic. And it's great to have you on the show. And we had a really great call a couple of weeks ago, just to kind of get to the bottom of quality engineering, 'cause it wasn't something that I knew enough about.
And you know, something which I think is incredibly interesting, especially when I was chatting to you, is how you've just framed it there around bottom-up quality versus top-down quality. Just because you can be responsible and accountable for quality at an individual level, does that mean you should?
And I love how you've linked that back to, well, actually, it's the business, right? We're going through this digital transformation, and part of digital transformation was this, well, we've got to be able to pivot faster. Therefore we've got to take on new processes, like you said, moving away from TCoEs to enabling these kinds of Agile capabilities, which allow us to quickly release software to our various suppliers.
And I love that you've put a formalized structure around that and you've defined what that means. And for those of our listeners out there, what's your definition of what quality engineering's all about?
Niko Mangahas Absolutely. To your point, it really needs to be addressed from top-down and also bottom-up. So of course, you have the majority of your testers testing within the Scrum teams, because they need to develop the application expertise, the domain expertise, and they need to react very quickly to changing requirements, or even, you know, delays in the code dropping and things of that nature. They need to live within the sprints.
But in parallel to that, we see that there is a need for overarching structures for quality, such as a quality strategy and management layer and an automation layer as well. Automation shared services is what we usually call it. And those two layers are essential. Actually, I have a third one, which is kind of the end-to-end regression layer, basically.
And that third layer I don't want to talk about in detail, because I see it to be relevant in companies of a certain scale. So if you have a smaller organization, you don't necessarily need end-to-end, or if you're collaborating really well, you know, that might not be necessary. But of the main two, the first I want to reflect on is the quality strategy and management layer.
So, this doesn't have to be formal. You don't have to have, you know, many different quality managers, but the whole point of this is you need a layer that understands the macro trends and patterns of quality risks in the organization. Because one of the things that we keep seeing is that when there are areas that act as challenges in the engineering teams, there tends to be some subset of those that have underlying problems within a process, or within a certain approach or methodology that the company has.
So, one example of that: we had this client, and they have, I would say, a born-and-bred DevOps engineering lifecycle. So they've been around, I think, 15 years. They actually predate the term DevOps, but they've been working in a DevOps methodology all throughout.
And when I looked at their process, I saw that there are pockets of testing everywhere, but no one is really keeping track of what the outcomes are from each of those and tying all of those outcomes back to look at the bigger picture. So they're not seeing that all throughout their testing. They have, I think, about a dozen products.
And they were testing the front end for the UX. They were doing monitoring, they were doing synthetic monitoring testing in production as well, but they weren't doing testing of the API, for instance. And I saw that was a big gap. They could reduce some of the testing that they need to do on the front end just by testing the test data combinations at the deeper layer, in the APIs.
Because that's far more efficient than testing everything in the UI, right? So while you also need to do UI testing, of course, a lot of the combinations of data can be tested in the backend using headless testing, using API test automation, for instance.
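The trade-off Niko describes, covering data combinations at the API layer rather than through the UI, can be sketched roughly like this. The order API, payloads, and validation rules below are hypothetical stand-ins, not any client's actual system:

```python
import itertools

def call_order_api(currency: str, tier: str) -> dict:
    """Hypothetical stand-in for a real headless call, e.g. an HTTP POST to an orders API."""
    valid = currency in {"USD", "EUR", "GBP"} and tier in {"basic", "premium"}
    return {"status": 201 if valid else 422}

# A few representative journeys still run through the UI, but the full
# data matrix runs headlessly in seconds instead of hours.
matrix = itertools.product(["USD", "EUR", "XXX"], ["basic", "premium", "trial"])
failures = [combo for combo in matrix if call_order_api(*combo)["status"] != 201]
print(failures)
```

Nine combinations here, but the same pattern scales to thousands of backend combinations while the UI suite stays small.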
So that's one of the things that I keep seeing: there's actually a lower overall understanding of risk. And one of the more glaring things is that there's also no quality debt tracking. So they keep saying that, hey, we have a massive release. We said, yes, we know that there are some major defects within that release, but we really need to push it out there.
And then, just this time, let's approve it. Let's push it out to production because we really need this for our business. And of course, they paid the price afterwards. They say, oh, okay, it blew up as soon as it went live. And we've been involving our C-levels in talking to our customers, explaining and apologizing for why it failed and all that.
And then next release, guess what happens? They keep doing the same thing. There's no understanding of overarching quality debt. And that's one of the things that gets missed when you don't have a quality strategy and management layer. So that's one of the main takeaways that I have.
A lot of the automation can fit within the sprint and the Scrum teams. I actually recommend that the testers in the Scrum develop their own skills to test in an automated way. But also, there are some things where, when you hit a certain scale, multiple teams will have a problem that technology will be able to solve.
But they try to solve it themselves, and that creates redundancies. Not only in terms of how many test scripts they build for their products, but also the different frameworks that they might come up with. For instance, if you have 10 teams and you give them full autonomy to just develop the automation framework they like, then you end up with essentially 10 different frameworks, and you can't share assets.
There's probably a lot of redundancy. And one of the things that we have seen as a big benefit for many companies is, if you develop a template framework and use that as a guide for the automation needs of many, if not all, of your Scrum teams, then that will basically take away the burden of having to build independent test frameworks.
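One way to picture the template framework idea is a shared base class that every team's suite extends, so setup, step logging, and reporting conventions come from one place. The class names, URL, and hooks below are illustrative assumptions, not a specific framework:

```python
class BaseTest:
    """Shared template every Scrum team extends: one place for setup,
    step logging, and reporting conventions across all teams."""
    base_url = "https://app.example.test"  # hypothetical placeholder

    def setup(self) -> None:
        self.steps: list[str] = []

    def step(self, description: str) -> None:
        # Central hook: every team's steps are recorded the same way,
        # so results and assets can be shared across teams.
        self.steps.append(description)

class CheckoutTeamSuite(BaseTest):
    """A team-specific suite only adds its own domain steps."""
    def test_checkout(self) -> int:
        self.setup()
        self.step("open cart")
        self.step("enter payment details")
        self.step("confirm order")
        return len(self.steps)
```

Because every team inherits the same hooks, a fix or helper added to the base flows to all ten teams at once.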
You're able to reuse tests. You're able to share knowledge as well: oh, I ran into a problem here, how have you solved it? How has Team B solved a problem that Team A has had, because they're using the same framework? And one of the things that I remember quite clearly is that I just recently worked with this digital bank, and they had, I think, 28 teams.
And after talking to most of their teams, we realized that 60-70% of these teams have one big problem, which is they don't have enough test data to test with. So I said, why hasn't this come to light? It looks like this has been your biggest bottleneck, and you're saying that you don't have about half of what you need to test with.
But this is the same problem that I keep seeing across multiple teams. So if someone had been looking at that and solving it at an enterprise level, instead of at a Scrum team level, then they would have found out that if they created a mock service system, they would have eliminated 80% of the problem right there.
But each of the Scrum teams alone won't have the resources or the skills to build that for themselves. They needed to escalate that to a broader team that operates at an enterprise level. So that's why I feel strongly about having a strategic automation shared service that helps enable testing for the Scrum teams as well.
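A minimal sketch of the shared mock service idea, assuming a simple account lookup: one enterprise-level stub serves deterministic canned data so individual Scrum teams stop being blocked on scarce test data. The account IDs and response shapes are invented for illustration:

```python
# Canned test accounts maintained once, at the enterprise level.
MOCK_ACCOUNTS = {
    "acct-001": {"balance": 250.0, "status": "active"},
    "acct-002": {"balance": 0.0, "status": "frozen"},
}

def mock_account_service(account_id: str) -> dict:
    """Stand-in for a shared mock of the upstream account system."""
    if account_id in MOCK_ACCOUNTS:
        return {"code": 200, "body": MOCK_ACCOUNTS[account_id]}
    return {"code": 404, "body": {"error": "unknown account"}}

# Any team's test can now rely on predictable data instead of hunting for it.
frozen = mock_account_service("acct-002")
missing = mock_account_service("acct-999")
```

The point is ownership, not the code: one team maintains the mock, twenty-eight teams consume it.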
Jonathon Wright Yeah, I absolutely love it. And to me, there are two things you said which I guess are completely new terms. One is kind of enterprise quality, or it could be referred to as scaling quality, in the same way that we talk about scaling Agile. And we've seen Agile evolve, especially over the last 20 years, but people have also had difficulty scaling it across their entire organization: enterprise-level Agile.
The same kind of goes here for quality, and you define, you know, some of the challenges that people are facing organization-wide, but there's no ownership at the organizational level. And as you were going through that, it made me think, well, you know, risk-based testing, right? Who is responsible for defining risks? And is that an organizational-level risk? Because brand damage could be classed as a risk.
And what I really liked in your opening intro, and I thought it was also a really exciting area, is you talked about how you were also responsible for the experience team. And I just kind of changed around some of the words: well, actually, is that quality experience? You know, should we be looking at CX?
Actually, that is the quality experience. It's the experience that our end customers are having, and being responsible for that from live systems is as important as scaling quality across the organization. So, tell us a little bit about the experience team that you look after, and what are their goals for quality?
Niko Mangahas Oh, absolutely. I'm really passionate about that area, because many teams actually looked at quality engineering as focusing just on the functional. And as long as a function is up and you're able to proceed to the next step of your transaction, then you should be fine.
But in my experience, you know, quality is everywhere. It's ubiquitous, and a big part of that is experience. Especially because, for instance, right now we have a lot of hospitality, entertainment, and retail consumer industry clients. We have the top entertainment company in the world as our client, or actually first or second, depending on what benchmark you look at, I would say.
And one of the things that's especially important to them is experience quality, because they service a lot of end consumers of different demographics and different geographies around the world. And when I talk about quality experiences, I talk about attributes.
So for me, a quality experience needs to be reliable, responsive, intuitive, engaging, and inclusive. I know there's a lot in that, but reliable and responsive, typically, I see as a function of the performance and the responsiveness of systems, and of course, keeping them from being down or from experiencing outages.
So of course, reliability is built into, you know, load testing and stress testing and things like that. And then the responsiveness is about performance regression, doing performance benchmarks, and seeing that whenever you push out new functionality, it doesn't really drive the responsiveness of the experience down.
And then in terms of engagement, be it an engaging experience, we looked at different types of usability testing services. We have usability practitioners. We also have A/B testers as well. Some companies don't even consider them as part of the traditional testing that we know in QA.
But for me, you know, anything with testing, we can own as part of quality. Because I'm interested to know what clients think about small changes in the UI and how that triggers different cognitive decisions that push them to engage more or engage less. So for me, that's an interesting area, and that's why A/B testing is part of our experience practice.
And actually, it's really fascinating how creative you can get with A/B testing itself. But we also look at inclusive experiences, and that's where accessibility testing comes in. So we have certified accessibility testing practitioners as well, who basically make sure that if you have what we call an accessibility statement, where you say you comply with WCAG standards version 2.1 or something, then you apply and commit to those, and we test against the standards that you have committed to.
So we also advise some clients that have never done accessibility testing on what level they should try to get to at least, right? So we've put together kind of a phased approach: you want to get the fundamentals first, and once you have that going, you can get to far more comprehensive accessibility testing guidelines as well.
Jonathon Wright Yeah, that sounds really interesting. It's good to include usability and accessibility in there. I know, you know, Lighthouse dropped version nine over the last few weeks, and that kind of tries to address accessibility, but I think that's one of those things that has always been skipped over, and actually, it's more important than ever.
And I liked the fact that you've linked in the need for testing in production, as well as testing in the SDLC. And, you know, this is one of the things I'm really passionate about with kind of site reliability testing: what should you be doing from an SRE perspective around testing?
And this could be things like real user testing, but also those end experiences, which I think, again, is really interesting. And I just finished doing a book with Rex Black, and Rex had kind of pushed back around the term, what I call 'dark canary', which was dark launching combined with canary rollout.
And it kind of fed into this chaos engineering viewpoint of, okay, well, like you said, there's an accessibility thing. It might be just adding dark mode or high contrast mode to your product. And you might dark launch that feature out to, you know, your VIPs who have accessibility challenges and have opted in through accessibility.
And then you may do some kind of canary A/B launch to understand, well, which version is better, which ones are having a better and more positive experience. And you're getting that feedback quite quickly. It might be that the functionality isn't available to everybody, only to a limited select few, and then you're validating those hypotheses which your teams are building, and you're using that feedback to feed back faster.
So the fact that you've established this experience team that actually feeds back the information, I think, is incredibly powerful. And, you know, I think people are trying to move from, or maybe evolve, some of their QA practices to take on some of these modern quality engineering practices.
You know, apart from the templates, how do they go through that process? How do they start thinking about moving away from that 'just enough quality' to a bigger discussion at the enterprise level?
Niko Mangahas Yeah, absolutely. I think you hit on a lot of key points. I mean, honestly, I thought that you articulated experience quality incredibly well. I feel like I should have just copied your definition and let that be how we talk about it, but absolutely.
I think one of the things that you touched on that I'd like to elaborate on is how experience is part of what goes on in production, right? So the way I look at quality's involvement, and how it needs to evolve for a digital enterprise, is that it needs to have both a shift-left and a shift-right approach. So, obviously, we all know shift-left: getting more involved in the upstream activities in terms of requirements quality and things like that.
Testing at the code level and things of that nature. But shift-right entails, as you touched on, the principles of site reliability engineering and reliability testing. That's part of performance, right? So one of the more classic challenges with performance testing is that you never truly know what happens in production unless you test in production.
So you only get that experience when you're rolling out a new product or a new application for the very first time in a fresh environment. But any other time, you're doing scaled-back versions of production and you're simulating things, so it's never perfect until you combine that methodology with having performance regression tests in production as well.
So we feel that a combination of those two needs to be the ideal way that you look at and monitor your reliability and responsiveness, as to how your clients would experience the end UI, if you will. And then one of the things I also want to elaborate on is, when you look at what our quality responsibilities are in production, I would say continuous validation is one of the things that I would highly recommend.
So while your DevOps teams or SRE teams have synthetics and monitoring over many of your features, most of the teams that I look at do this fairly superficially. They look at uptime for pages and things like that. But you can actually extend it by striking a balance between exhaustive testing and smoke monitoring.
What we tend to do is create a subset of tests, sanity tests if you will, that do not create data in production. So you're not creating invalid, dirty data, but you're also testing enough of the paths to ensure that transactions can be completed. And there are also new tools in the market, and we're actually working with a couple of partners to expand on this area, but we're looking at taking observability to the next level by embedding that into the quality strategy.
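The non-destructive production sanity idea can be sketched as a journey walk that stops just short of any write. The step names and the write list below are hypothetical; a real suite would map them to actual user journeys:

```python
# Steps a synthetic user exercises in production; the last one would write data.
SANITY_PATH = ["load_home", "login_synthetic_user", "search_product",
               "open_product_page", "preview_checkout", "submit_order"]
WRITE_STEPS = {"submit_order", "create_account"}  # would create dirty data

def run_production_sanity(path: list[str]) -> list[str]:
    """Execute as much of the journey as possible without writing data."""
    executed = []
    for step in path:
        if step in WRITE_STEPS:
            break  # stop just short of creating records in production
        executed.append(step)
    return executed

covered = run_production_sanity(SANITY_PATH)
```

This checks far more of the transaction path than a page-uptime ping, while still leaving production data untouched.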
We're also looking at anomaly detection, for instance: can you predict, by looking at trends and patterns in how your users have engaged your websites? How would you use that data towards flagging things that are seemingly unusual in terms of suspect activity?
So for instance, one of the use cases that we're working on right now is that if we see a spike in users that are not able to get to their login, for some reason, then before someone reports it, maybe you should have a high-priority alert that tells your dev team to look at this right away, because it looks like there's a bottleneck there in terms of the traffic.
So that's just a simple example, but we're looking at more exhaustive examples where you can glean insight from how our users engage, and then look at the services and how the traffic creates patterns of unusual activity when there's something wrong with the website.
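The login-spike use case can be sketched as a simple baseline comparison: alert when the current failure count sits far above a rolling history. The numbers and the three-sigma threshold are illustrative; a production system would use richer models:

```python
from statistics import mean, stdev

def login_failure_alert(history: list[int], current: int, sigmas: float = 3.0) -> bool:
    """Flag a spike when the current count exceeds baseline + N standard deviations."""
    baseline = mean(history)
    spread = max(stdev(history), 1e-9)  # guard against a perfectly flat history
    return current > baseline + sigmas * spread

# Failed logins per minute under normal operation (made-up numbers).
normal = [12, 15, 11, 14, 13, 12, 16, 14]
spike_detected = login_failure_alert(normal, 80)   # sudden jump
quiet_minute = login_failure_alert(normal, 14)     # within baseline
```

The alert fires on the jump to 80 but stays quiet at 14, which is the "before someone reports it" behavior Niko describes.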
Jonathon Wright Yeah, no, I think it's a great example. It's one that I'm actually working on at the moment for Black Friday, actually. So this conversation I just had was around dropping the OAuth timeout for keeping users logged in. We kind of said, okay, well actually, let's make it every 60 minutes to put less strain on the OAuth server during key transaction periods.
Now of course, you might say, well, that's a security trade-off, but it's a risk both ways. What happens if systemic failure happens and you bring down your authentication server and no one can log in? Versus, you know, how do you look at throttling, and also spinning up additional resources because you're expecting huge 10x volumes, right?
All of it is, like you said, some of these synthetics are happy path request-responses that happen in production. You know, I had this conversation with Tarek from Tesla AI, and he said, do you know where should tests live? Should they actually live in production?
Should they be testing at a level where, as you're running them, you've got this test observability to say, well, actually, I'm starting to see this behavior and I need to start making a decision there, whether it be, you know, I need to change something from an infrastructure or a configuration point, which I'm able to control?
And I think the idea of SRE and the collaboration between kind of AIOps back to quality engineering is a really interesting composition. So for instance, for the MIT project at the moment, we're running tests in production, but we're tagging in the headers that these are test transactions. Therefore, from an observability perspective, when we're looking at data from an APM, for instance, we're redacting those transactions, but they're actual real transactions, the real operational kind of actions.
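The header-tagging approach can be sketched like this. The header name is an assumed convention, not a standard, and the filter mirrors what an APM-side exclusion rule would do with the tag:

```python
TEST_HEADER = "X-Synthetic-Test"  # assumed convention for tagging test traffic

def build_headers(is_synthetic: bool) -> dict:
    """Outgoing request headers; synthetic runs carry an extra marker."""
    headers = {"User-Agent": "checkout-client/1.0"}
    if is_synthetic:
        headers[TEST_HEADER] = "true"
    return headers

def real_user_traffic(observed: list[dict]) -> list[dict]:
    """APM-side filter: drop tagged test transactions from user-facing metrics."""
    return [h for h in observed if h.get(TEST_HEADER) != "true"]

observed = [build_headers(True), build_headers(False), build_headers(True)]
kept = real_user_traffic(observed)
```

The tests execute real transactions end to end, yet dashboards built on `real_user_traffic` never count them as customers.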
And you may say, well, that's not great, because then you've got 1% or 5% of your transactional throughput which is test load. We don't want that, but how do we test in the wild? When COVID came, it was, where do we test? Because the wild was the only place left for us to test in. And I think we've got rules in place for a reason.
Like you kind of said, well, no, you can't look at that. But then, like you said, from an observability perspective, the only real place to test those experiences is production. And one of my rants on this week's video blog was around Black Friday and site reliability testing.
In the sense that we're very good at saying, oh, I'll just create a stub or shim that will deal with that PayPal or that Amex endpoint, because obviously I don't want to be hitting their sandbox with huge volumes. I'll just stub it out, but then the stub doesn't have any network function virtualization or network virtualization, so the response times come back instantaneously. So when you're testing it, you're doing your happy-path testing and not your sad-path testing.
Part of it is your responses come back, your end experience is that the payment goes through, the processing is done in time, and no one hits refresh or retries the page. Whereas when you come to Black Friday and everyone's hitting Worldpay, the response time is 30 seconds, you keep retrying the payment, and then you start to see duplicates, which then have to be reconciled at the other end.
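One rough way to make a payment stub less misleading is to have it sleep for a delay drawn from a latency model pinned to production percentiles instead of answering instantly. This is a minimal sketch; the percentile numbers are made-up placeholders, and in practice you would take them from your APM data for the real gateway:

```python
import math
import random
import time


def sampled_latency(p50=0.8, p99=30.0, rng=random):
    """Draw a response time (seconds) from a lognormal model fitted to
    two percentiles of observed gateway latency (placeholder values)."""
    mu = math.log(p50)                    # log of the median
    sigma = (math.log(p99) - mu) / 2.326  # 2.326 = z-score of the 99th percentile
    return rng.lognormvariate(mu, sigma)


def stubbed_payment(amount, simulate_latency=True):
    """Payment-gateway stub that responds after a realistic delay, so slow
    responses, user retries, and duplicate submissions surface in testing
    rather than on Black Friday."""
    if simulate_latency:
        time.sleep(min(sampled_latency(), 35.0))  # cap to keep runs bounded
    return {"status": "approved", "amount": amount}
```

Under this model roughly 1% of stubbed calls take 30 seconds or more, which is exactly the sad path an instantaneous stub hides.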
You kind of say, well, maybe we should have been testing against the Worldpay sandbox API with volume metrics as realistic and production-like as possible, because otherwise, how do we possibly test properly if we just test everything in isolation? And I think this is a really interesting point, because it takes the conversation from individual teams to the entire organization.
And maybe, potentially, conversations and openness and transparency and observability between organizations need to open up. So if you are load testing against Worldpay, they should be giving you transactional data about how their infrastructure is performing, from an open testing perspective, so that you can introduce that into the observability of your full system, your ecosystem of ecosystems.
So part of it is we're very black-box when it comes to testing websites that have hundreds of plugins interacting with lots of different systems that are out of our control. We put them out of scope, and part of it is we think we've de-risked them because we've stubbed or shimmed them out. But we don't take into consideration, well, what if this happens, what then? I haven't got a magic ball, but I'd bet there's going to be a whole host of issues for Black Friday 2021.
And hopefully, if we're listening now, we'll probably say we could have done some of these things, some of the items you've been talking about from a quality experience perspective. Organizations should be doing that right now. So, like you said, the entertainment company, that's their brand, and all those social analytics where people weigh in and say the same thing:
I can't find Disney Plus, I can't get on, it won't authenticate on the day of the launch. You kind of think, that's really not a great place to be if Disney wants a brand that's associated with quality and fun and entertainment, and not thousands of Twitter messages saying, I can't get access to watch my content.
So, what words of advice would you give to future-proof those kinds of quality engineering practices, so they can become digital experiences, digital enterprises, that can deal with these kinds of complexity?
Niko Mangahas First of all, that's very well said in terms of the challenges of today, and of looking at testing in isolation, testing in a safe space, if you will. And that's not the reality of what we want to achieve in terms of a quality strategy.
In fact, one of the questions I get all the time is: what is our end goal here? What is quality's end goal? And many quality practitioners will say right away, well, we want to reduce defects as much as possible, we want to reduce issues as much as possible.
Well, that's well and good, but if I'm truly being honest, I think the true measure of a good quality strategy is the predictability of outcomes. The more predictable it becomes, the more confidence you have, and the more ability you have to navigate around it and to solve it if something does blow up in production.
So, all in all, in order for QA teams to evolve in the way that digital enterprises would want to address their quality challenges, they need to look at, to your point, collaborating across multiple teams and seeing the big picture. Because if you don't have a good understanding of what happens in production, you don't have a good understanding of what it takes to stand up the right infrastructure to support systems in production.
And if you are not familiar with what the edge cases are versus the common transactions that 90% of users will experience, then you are essentially operating blindly. You're not allocating your resources well, and you're not creating value in terms of quality in the organization.
Because we started over a decade ago, I still come across organizations that practice total quality management: the '90s concepts of Six Sigma and total quality management and all those things, putting that as their goal and objective.
And of course, by and large, we've done away with many of those concepts. We said speed is more important, or value generation is more important, but there is a good way to balance this. And I feel that's what we need to strive for: balancing all of these things by using a combination of technology solutions and an understanding of business goals and the underlying process.
That transparency and collaboration across the different groups in the organization will lead to quality that truly everyone owns and everyone has accountability for. And the role of the quality practitioner is to be the facilitator, the advocate for quality. The one who says, yes, we want to drive this with speed, but why don't we also make sure there's high confidence in it by creating more and more layers of predictability, or gates for predictability, if you will.
So I think that's what we need to strive for in terms of modernizing our quality engineering approach: having that balance between value generation, speed, and quality. Striving for high confidence and higher reliability, but also understanding what it means for the business, so that you can have a good conversation with them, right?
Because I remember an experience where, for one of our clients, I essentially predicted how many production issues they would face given their historical trends. I predicted they would get 300% more issues in production.
And that would lead to millions in support costs, brand reputation damage, and potentially compliance and regulatory impacts as well. And then all of your VPs and C-levels would be talking to clients for the next few months. With that feedback I was able to delay that major release, because I really saw that it was glaring.
The data spoke to it being a massive failure if they pushed it out at that particular time. Looking back, they were really grateful for that insight. It's not a crystal ball where you can see the future clearly, but it is a very strong directional indicator of where we're headed if we don't look at the data that's in front of us. So yeah, absolutely.
Jonathon Wright No, that's brilliant. And I love the essence of not just time to value, but time to quality. And I know Niall, who unfortunately did pass away this year, was on The QA Lead. His life's work was around plan to quality, and it was such a fantastic and fascinating recording with him, because he was so passionate about the question: at what point is the perception of something of quality actually realized, right?
Or benefit realization, in the usual viewpoint. And I always try to ask that same question: you perceive something as digital quality; at what point does it become the product you believed it was going to be?
And what's the time between those, with customer perception being the only reality and, like you said, this experience being the only validation of the behavioral change, or the perception of the product change? That kind of realization. I think what you're talking about are new ways, new methodologies, to actually measure that and also predict it.
And, more importantly, provide that confidence of repeatability in the process. Part of the Six Sigma and engineering practices were great for manufacturing, but they don't apply to software development, which is very erratic and very chaotic, especially in this digital way of working. Being able to formalize an approach which allows you to predict those kinds of challenges and prevent them before they happen is actually where we need to get to.
And I know you've mentioned things like hyper-automation in the past, and I put down hyper-baselining, which is kind of multi-dimensional transactional data of a current state. And to me, it feels like the experience we've got as a current state: okay, if we rated it like energy efficiency, it would be Grade B, but we really want to be Grade A.
How do we get from B to A? How do we make that change? And that trifecta of speed, quality, and cost: what is the balance between them? If we spend more time or effort, what is it going to be? What's the balance to get to that desired state? Actually, I think those are conversations that probably never happen, or decisions that never get made.
Because the conversation is more about how fast we can get something through the pipeline, and the decision around business value or quality is probably something of a second-class citizen in that relationship. But maybe when we talk about things like the Three Amigos, we've got to redefine what that actually means from a quality or business perspective.
And that really changes the way we design software and everything we do in the new reality. And I think it's fascinating that a lot of what you've put together here is helping build that future.
And that way of re-challenging maybe everything people have set out before, to say, well, what is that extra delta, how do we measure it, and how do we also predict it going forward? So I think it's a fascinating area. As far as tips for the listeners, what would be your big tips and takeaways for them?
Niko Mangahas Absolutely. So, one of the things I want to leave you with is this visual that has helped me articulate this better in the past: modern quality engineering is essentially taking QA and extending it in four dimensions. Your traditional QA needs to go further, from testing within the SDLC to testing in production.
So that's one dimension. It also needs to be deeper: you don't test only on the UI; testing is not skin-deep, right? You have to get into the weeds, into the data and the APIs and the architecture. You have to test multiple layers of the technology stack to ensure quality at all levels.
So that's deeper. And then I'd also look at wider as another dimension, and that means testing beyond functionality and into true experiences. That's why we were talking about experience quality a lot, because I feel we should have a wider lens on how we see quality and how quality is perceived by end-users. And the fourth dimension, which you briefly mentioned, is how do we actually scale to meet all these challenges, right?
I know many QA teams are struggling enough with the little capacity and bandwidth they have to test what's on their plate currently. How do they even go and extend into three more dimensions? And the true answer to that is hyper-automation. That's when you use AI and RPA technology to accelerate beyond your standard test automation: being able to predict, and being able to take away some of the decision points and some of the testing dependencies, like creation of test data and standing up of test environments automatically, and things of that nature.
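As a toy illustration of the test-data part of that idea, here is a minimal sketch of generating synthetic records on demand rather than hand-maintaining them. The field names and the well-known "4242..." sandbox card number are illustrative assumptions, not any specific client's schema:

```python
import random
import string


def make_test_user(seed=None):
    """Generate a reproducible synthetic user record, removing a common
    manual dependency (hand-crafted test data) from the test pipeline."""
    rng = random.Random(seed)  # seeding makes failing runs reproducible
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "username": f"test_{name}",
        "email": f"{name}@example.test",  # .test is a reserved TLD
        "card": "4242424242424242",       # common sandbox test card number
    }
```

Passing a seed reproduces the same record, so a failing test can be replayed exactly; omitting it gives fresh data on every run.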
And that is the only real way to scale to the level that quality is expected to operate at for the digital enterprise. So, that's a summary of what I wanted to talk about in terms of modern quality engineering, but the main takeaway I want to leave with is this: first of all, modern quality engineering needs to create value for the business.
What that means is that while it's really easy to say, oh, we have a testing gap here, so let's create a thousand test cases to cover it, the truly effective quality strategy looks at the business goals and objectives, and looks at the process.
And then it develops a technical solution that brings the value of the solution back to those business goals and objectives, so it makes a direct contribution to business value, as opposed to just a band-aid or a solution that seems obvious at the time. And that's where our company helps: we actually develop quality strategies and solutions for companies that need to solve big quality problems and are undergoing transformation as well.
And then we also provide consulting services where clients need specialized skills; perhaps they don't have a lot of automation expertise yet. So we have folks who are able to supply that, sometimes in the interim and sometimes permanently, depending on the client's needs. And our hope, and especially why we develop philosophies like this, is that we want to enable companies to get to that level of modern quality engineering for a modern digital enterprise.
So, that's my hope: that they see it as a strategic effort, a strategic initiative, and that they now have different benchmarks and baselines by which they measure quality success.
Jonathon Wright Wonderful. Well, you are the global practice lead for quality engineering, so how best to get in touch with you? Is it Twitter? Is it LinkedIn?
Niko Mangahas I would say LinkedIn is the best. I barely use Twitter, so definitely hit me up on LinkedIn. And of course, you can go to our website, and if you put a question or inquiry there, it will come to me if it's quality experience or DevOps related.
Jonathon Wright Well, I'll make sure we include that in the show notes. It's been an absolute pleasure to finally have you on the show, and we'll have to have you back to talk about some of the lessons to be learned from 2021. So with that, thank you, and I look forward to having you back on the show.
Niko Mangahas Thank you so much, Jonathon. It's been a pleasure and it's been a privilege to be with you on this podcast.