- Join the waitlist for The QA Lead online community membership forum
- Subscribe To The QA Lead Newsletter to get our latest articles and podcasts
- Check out Circle Media
Other articles and podcasts:
- What Is Chaos Engineering & Why Is It Important?
- What Is Quality Assurance? The Essential Guide To QA
- 11 QA Automation Tools You Should Be Using In 2020
- 12 Key Quality Assurance Skills & Competencies
- 6 Hacks For Great Quality Engineering In Remote Dev Teams
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
In digital reality, evolution over revolution prevails. The QA approaches, and techniques that worked yesterday will fail you tomorrow, so free your mind. The automation cyborg has been sent back in time. TED speaker Jonathon Wright’s mission is to help you save the future from bad software.
Jonathon Wright Hey, how's it going?
Niall Lynch I'm doing great. This is Niall here.
Jonathon Wright How's it going? Yeah, good. It's good to finally connect. How are things in California? How are you finding remote working?
Niall Lynch Well, I think it's been, you know, a confirmation of an experience I've had for many years, which is that the technology is in a place now where it's kind of seamless. And I think that one of the big results of this pandemic is that business is learning that this stuff really works. You know, when you were the one working remotely in a company, you were like the vegetarian at the steakhouse, right? And now I think a lot of people are thinking, you know, this really works a lot better than having to go into the office five days a week and sniff everyone else's behind all day. I mean, I did a year's worth of consulting last year without ever setting foot in the offices of the company I was consulting for. Because, by the way, can you see me?
Jonathon Wright I can't see you. You just need to hit the share camera, the start video button, on the left.
Niall Lynch I don't want to scare you away, letting you see me.
Jonathon Wright No doubt. Look at you. I can see you now. Yeah. So a full year of remote consulting, with the head office remote and you based in California?
Niall Lynch Well, this was the company in Portland, Oregon, called Circle Media.
Jonathon Wright OK.
Niall Lynch And their head of engineering is a guy I worked with at Symantec for a while. So I was hired to sort of rebuild the QA effort there, because they were in that horrible phase boundary between being a startup and suddenly being successful. And, you know, none of the startup QA processes scale. Right. Everyone's kind of doing their own thing and it's all very loosey-goosey, and suddenly you need to have total analytical transparency into product quality on a weekly basis. And, you know, guys you hired straight out of college have no knowledge of how to do that.
Jonathon Wright That raises a really interesting point around standard QA practices and the scalability of those. You know, I think you start off with building a small team, and then the methodologies and approaches that you use for that don't exactly work for a more scalable model. So how do you overcome that?
Niall Lynch Well, let me tell you a little bit about myself and my career. I have a master's degree in ancient Near Eastern literature and languages from the Oriental Institute at the University of Chicago, which, of course, led directly to a career in software development. I left grad school because I didn't want to be an academic and bounced around for a bit. And this is in the mid-80s, so this is, you know, the Cretaceous. I taught myself software and I got a gig being the QA lead for a small software company, and I had to figure it all out by myself. I mean, I had no training of any kind. And I guess that's where I kind of had to figure a lot of this out. Then I went to work for Symantec in Los Angeles as the director of QA for their entire enterprise software division. So corporate Norton: antivirus, firewall, email filtering. I was running a group of 100 people doing QA for eight simultaneous enterprise software projects. And that's where you really have to up your game, because QA, I think all software processes in a smaller company, are very artisanal. Right. You find the one guy who knows what they're really doing and you trust them with all the important stuff. Meanwhile, no one else is learning from that person. Right.
So when I went to work for Symantec, I realized I had to create a whole department that functions like one really brilliant software tester. So how do I do that? And what does that mean? And it also made me confront what was, for me, the fundamental issue: that QA is not really a problem of quality, it's a problem of knowledge. Because QA does not actually assure the quality of anything. That's up to product management and engineering. And everyone who's worked in QA knows you can't test quality into anything.
And so I had this insight that the real function of quality assurance was to provide real-time knowledge of the state of product quality at any point in its development. In other words, it's not something that just emerges at the end. So I had to kind of totally rethink how I did my job and what the purpose of my job actually was: that my job was to run the group so that it could create real-time quality metrics to serve rational risk-taking at the upper management level. And when I realized that, all the kind of operatic aspects of QA disappeared, just like that. The go/no-go, ship/no-ship meetings suddenly became totally cut and dried, because we had all the metrics in place from the beginning of the project. We were not using bug metrics as quality metrics. We were using test coverage as our fundamental quality metric. And, you know, people were kind of disappointed that suddenly these ship/no-ship meetings were so cut and dried. It was like, OK, if we ship now, here are the risks we're taking, here are the issues. If we wait two weeks, this is how much better we'll be, and here's how much it will cost. You know, it became just a very dry discussion. That's a long-winded answer to a simple question. But I mean, you have to really have a clear understanding of what QA really is before you can figure out what processes need to be in place to make it successful. And most people, I find, have an incomplete understanding of what the real purpose of QA is. And for that reason, they may put all these wonderful processes in place, but they're based on a false assumption of what QA actually does.
Jonathon Wright Now, I think that's a great concept. And, you know, I think you said a couple of things there, about the fact that it's about knowledge, and about coming out of university with a languages degree. I mean, part of that gives you foundational skills. Did you find that applying some of what you did at university actually helps with understanding maybe the language behind QA?
Niall Lynch Without question. Because my entire education is what would be considered, you know, useless humanities, as my uncles never tired of pointing out to me. I spent my youth hearing, well, how are you gonna make a living? You know, because I wasn't, like, installing TV antennas and stuff like that. But I found that my education was really crucial to my success in tech for a very simple reason: in the humanities you're always dealing with situations of irreducible complexity. Right. There's no one interpretation of a novel. There's no one translation of an ancient manuscript. You're always having to choose and figure out what works, what's best. And I found that made me very comfortable with problems of software development, because those are problems of hyper complexity, of irreducible complexity. And you're having to make choices consciously: this over that, and that, not this. And I've worked all over the world, and I think this is something very typical of American culture in particular: we're very uncomfortable with irreducible complexity. We're always trying to wish it away. When I first started working in software, I saw this over and over again in tech, and a really good example of this is the problem of performance testing. There are so many variables involved that you can't just optimize for one. Right. I mean, it's a multivariable optimization problem. Plus, I've also found that software development, when I was early in my career, was very uncomfortable with what are called emergent properties of software systems. In other words, properties that are invisible until the entire system is hooked up and working.
It's like you can't drill down and see the problem in this component or that component, or this environment or that environment. Because I was working with enterprise software, right, not desktops. And one of the first big problems I had to tackle at Symantec was load and performance testing, which they were not doing, even though they were selling a product that was rolled out to 50,000 seats at Ford. Right. And what I encountered there was that people didn't even know how to think about the problem, because it was irreducibly complex. You couldn't reduce it to this factor or this data point. They'd say, well, you know, it has to support twenty thousand simultaneous users on the network. And I'm like, OK. Over what period of time? At what cost in networking? They couldn't wrap their heads around it. In fact, I wound up writing the performance and load requirements for the product because product management was completely at sea. And I think, particularly now that we're in cloud computing and particularly artificial intelligence, the ambiguities and the complexities are just beyond infinite. And I think that's catching a lot of people by surprise, because they can't just wish it away. They can't just automatically reduce it to one very simplistic thing that everyone can wrap their heads around. But you're absolutely right: my education in the humanities was crucial to my ability to deal with these problems. Because, like I said, you know, interpreting Hamlet, there's not one interpretation of Hamlet, is there. Right.
Jonathon Wright And I think that's a really good point. A good friend of mine used the phrase, you know, English is wonderful for poetry but not for writing requirements, because of how ambiguous the language is. Multiple meanings: "for" versus "four," "should," "could," you know, just how we speak. Whereas other languages are formed more structurally. I'm going to say Klingon is kind of, you know, very much space. Right. And when you look at things like, so I was reading a book from Tom Gilb on Planguage, I don't know if you've heard of Planguage, it was a language which he defined for requirements engineering purposes without ambiguity built into that language. And, you know, part of your academic side of things led to your ability to deal with lots of complexity embedded within language, to then understand how important certain things are compared to others. Like the semantic problem of scale: looking at things like non-functional requirements, looking at the different dimensions of those, and asking multiple questions to clarify that ambiguity in requirements. You know, you mentioned test coverage. How do you go about doing that, to give you that kind of confidence about ship/no-ship?
Niall Lynch Well, let me first talk very briefly about requirements language, because on one level, you know, you certainly can expect that the people who write the requirements are using a level of precision appropriate to their role. Right. I mean, you cannot expect a product manager to define every little thing about the product requirement. That's actually not their task. So what I did was take the product requirements and translate them into release criteria corresponding to each requirement. And what I had to train my team to do is write release criteria that are empirical descriptions of result states. In other words, if this is working correctly, what would you see? OK, so you get a requirement like: must be easy to install. Now, that's a terrible requirement, right? Because it could mean nothing and everything. And I would sit with the product manager and flesh that out a little bit. Like, what does easy mean here? And then I would turn that into something like, you know: it only needs one person to effect the installation; if you have to reboot at all, you only have to reboot once; and it can't take more than 10 minutes. Those are results. And there could be totally different values for each of those parameters. Right. But at some point, you have to get down to empirical states that can be observed. So, getting into test coverage, this is where I had another insight when I was at Symantec. I came up with a notion that I call a test context. And I defined a test context as an aspect or feature or capability of the system that could vary independently of the others. And that sounds abstract, but there are some really obvious examples of this, like different OSs. Right. I mean, if you're writing an app that's going to run on iOS and Chrome, those are clearly different test contexts. Right. Because they can vary independently of one another.
Foreign languages are another super obvious example, because Symantec was available in, like, every language. Right. And then things like product modalities. Right. So say you're testing a web server for performance. Obviously caching on and caching off are two very different test contexts, right? Because the product is going to behave totally differently in each of those. And there are many more subtle examples of this, like: what user mode are you in? Are you logged in as an administrator? Are you logged in as an end user? And people habitually don't think in these terms. They think in terms of this feature or that feature. So what I learned how to do was decompose the capability envelope of the product into its test contexts. And then I would rank, within each of those contexts, the different variables. And then you have to come to a decision of how much of the testbed you're actually going to run for each of these. And that leads you down some interesting paths, because I did a lot of localization testing. And people would think about that as, well, we need to test in this language and that language and this language, which you can't do when you're supporting twenty-some languages. So what I figured out was that the actual test context there was character set type. And there were three: single-byte Western ASCII, single-byte Eastern ASCII, and multi-byte. Western single-byte is basically any Western European language, Eastern single-byte is Arabic, Hebrew, you know, and multi-byte is, like, any East Asian language. Right. And once you had that categorization, you could make a rational decision of, within each of these silos, which are the top ones that I need to test. Right. I mean, obviously German, French, Spanish; obviously Japanese and Korean. Right.
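To make that decomposition concrete, here is a minimal Python sketch of ranking locales within character-set contexts. This is an illustrative reconstruction, not anything from the episode: the locale codes, their grouping, and the rankings are assumptions, and a real plan would pull priorities from market data.

```python
# Character-set test contexts, mirroring the transcript's rough grouping.
# Locales inside each class are assumed to be pre-ranked by market priority.
CHARSET_CLASSES = {
    "single_byte_western": ["de", "fr", "es", "it", "pt"],
    "single_byte_eastern": ["ar", "he", "el"],
    "multi_byte": ["ja", "ko", "zh"],
}

def plan_localization_tests(top_n_per_class=2):
    """Pick the top-ranked locales within each character-set context,
    instead of trying to test every supported language in full."""
    plan = {}
    for charset, locales in CHARSET_CLASSES.items():
        plan[charset] = locales[:top_n_per_class]
    return plan
```

The point of the sketch is the shape of the decision: the context (character-set class) is what varies independently, so full coverage is bought per class, and the ranking inside a class decides which locales get the deepest testing.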
But then you could go down that ranking and say, so this one down here, I'm probably only going to test 20 percent. And then once you do that, you must have traceability of tests back to requirements through contexts. I mean, bug metrics are interesting, but to me, bug metrics are not quality metrics. They're effort metrics, because the number of bugs is going to increase the longer you've tested and the more people you have doing the testing. So what you have to have, in addition to a test coverage definition, is hard traceability from product requirements to a group of tests that validate them. And then your quality metric is, for each requirement: what percentage of tests have been run? What percentage of tests have passed? What percentage have failed? And what percentage are blocked, meaning we can't run them at all, given our equipment limitations and environmental limitations?
So I found that if you don't have test coverage and traceability in place, you do not have a truly analytical QA effort. And using bug metrics as release metrics, I think, is the ultimate delusion. But once again, it's wishing complexity away. Right. You just look in the defect tracking database, pull out a number and say, is it high or low? How many open category-A bugs? How many open category-B bugs? Which to me is, like, bullshit. Because if that bug count only represents 20 percent test coverage, and you don't know you've only achieved 20 percent test coverage, those numbers are utterly meaningless, right? You're flying blind. It's like a blind person who thinks they can see; they don't know that they're blind. So, you know, institutionalizing this was the real task, of course. Not just having these ideas and thoughts, but actually making people do it.
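The per-requirement quality metric Niall describes (percentages run, passed, failed, and blocked, traced from requirements to tests) might be computed like this. The data model is a hypothetical sketch, not from any specific test management tool:

```python
from collections import Counter

def requirement_coverage(test_results):
    """test_results maps a requirement id to the list of statuses of the
    tests traced to it; each status is 'passed', 'failed', 'blocked',
    or 'not_run'. Returns the coverage percentages per requirement."""
    report = {}
    for req, statuses in test_results.items():
        counts = Counter(statuses)
        total = len(statuses)
        report[req] = {
            "run_pct": round(100 * (total - counts["not_run"]) / total, 1),
            "passed_pct": round(100 * counts["passed"] / total, 1),
            "failed_pct": round(100 * counts["failed"] / total, 1),
            "blocked_pct": round(100 * counts["blocked"] / total, 1),
        }
    return report
```

Because every test is traced to a requirement, the output answers the release question directly (how covered is each requirement, and with what result) rather than reporting a raw bug count with no denominator.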
Jonathon Wright I think it's absolutely fascinating. It's like you're combining kind of a modern-day risk-based testing approach with quality metrics that are in the context of what's important to the organization. You know what I liked about that traceability matrix you're talking about? It's directly linked to the proportions of what makes up your customer base, based on language. So I remember a really interesting defect with an installer. I was living in New Zealand at the time, and I was using Gload, which was, back in those days, the.
Niall Lynch Hey, I'm very familiar with it. Yeah.
Jonathon Wright And so I went to install it on my PC, and it was fine. And then I went to deploy onto the server and it just would not install. And I kept thinking, what is it? It's the same MSI, the same, all these different test context kind of variables.
And with the MSI install, the only difference was the language: English (US) versus English (New Zealand). Under English (New Zealand) there was another variation, and those MSIs weren't installing the necessary prerequisites. They weren't launching, they were in silent mode, so I didn't know that it needed .NET or whatever else it was trying to install.
Niall Lynch Right.
Jonathon Wright And that's exactly what you're talking about: directly linking it to your customer base. Maybe, because I was working with IBM at the time, only a small percentage of our customer base was based in New Zealand. You know, that's a much lower priority than the US and China, Japan, all the major locations where your customer base is. But if HP had thought about that and done it with your kind of metrics, they would have had a clear vision of where the risk potentially lay, and of how complete localization was, for ready-to-ship. And I guess that's also version-specific.
Niall Lynch Right. And the thing that I did was, you know, one of the dysfunctions of software development is that, from the perspective of the C-suite, the engineering team is always failing. And that perception is created by the fact that they're constantly surprised at the very end, or worse, after release, by the state of the product. Right. So if you're the CTO or the CEO, what you're hearing from a lot of software development projects is: everything's fine, everything's fine, everything's fine. And then in the last month before ship, everything blows up in their faces. Or, even worse: everything's fine, we're ready to go, we ship, and then the product blows up in contact with actual customers. And what happens is the C-suite stops trusting the engineering group completely, because they feel they've been hung out to dry, humiliated in public, and constantly lied to. So what I learned to do was basically go into these meetings and boil down all these complicated metrics into a very simple thing, because when you talk to CEOs, you can't be complicated. Right. I basically did a feature ranking and gave each one a rating of one, two, or three. One: ready to go, extremely low risk of serious problems in the field. Two: almost ready to go, medium risk of serious problems in the field. Three: hell no, frankly. And everyone told me, don't present that to the CEO, he's gonna wig out. And in fact, it was the opposite. He was like, thank you. He said, how much more time do you need to get everything to at least a two? And I got an extra two months for the whole project because I was honest, but I could present it analytically, where he could calculate risk in a rational way.
The head of engineering was like, Niall, I need to thank you. Because they were desperate for more time. They knew it was not ready to go, but they were terrified to say so. You know, we don't have to lie if we have the right metrics for the work that we're doing. We just say, hey, here's the deal. Here's the risk you're taking if you ship now. If that's acceptable to you, we can ship. Here's my recommendation as the director of QA, and if you follow my recommendation, here's the extra time the project will need. And, you know, one of the paradoxes of being a QA leader is that it's very schizophrenic. There's a point in the project where you have to be a hardass about quality. Right. But then there is a point after which your job is to get the product out the door. So you can't be a stickler, you know, we're going to add two months to the schedule to fix these little problems, because it's just not worth it. And that can be very confusing to the QA team, because for months and months they've seen maybe this total hardass, and then all of a sudden I have to be the one to tell them, look, forget about these issues, because they're known. They're known quantities. Everyone knows about them. Everyone knows we're shipping with these issues. And there's no point in taking the extra time and money to fix them. And they're kind of disoriented by that.
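As a rough sketch, the one/two/three feature ranking rolls up naturally into a single release gate: the release is only as ready as its worst-rated feature. The feature names and rating wording below are illustrative assumptions, not from the episode.

```python
# Hypothetical rollup of a 1/2/3 feature risk ranking into one release gate.
RATINGS = {
    1: "ready to go: very low risk of serious problems in the field",
    2: "almost ready: medium risk of serious problems in the field",
    3: "not ready: do not ship",
}

def ship_readiness(feature_ratings):
    """feature_ratings maps feature name -> rating (1, 2, or 3).
    Returns the worst rating, its label, and the features at that rating,
    i.e. the list of blockers to bring down to at least a two."""
    worst = max(feature_ratings.values())
    blockers = sorted(f for f, r in feature_ratings.items() if r == worst)
    return worst, RATINGS[worst], blockers
```

The "get everything to at least a two" conversation from the transcript is then just a query: which features are currently rated three, and what will it cost to move them.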
Jonathon Wright That's really fascinating, in a lot of different ways. Because one of the things you're talking about seems to be this idea of product engineering. Right. In the sense that we understand product teams, product managers and product owners, are not gonna know all the answers. And we know engineering needs to at least know some of the answers as we go through that process. But what you're doing is, from day one, you're really able to push quality.
It's not that you don't know what the final outcome is until a week before shipping, after getting all the way through the lifecycle of that product. You have a gauge, a measurable gauge of quality, throughout the full three months, six months. You know, it's not something that's done because someone at the end is saying, you know, how much time do you need? You are literally able to gauge quality from day one.
Niall Lynch Right. And one of the metrics that I devised, which I didn't really need, but project management did: I came up with a metric that I called time to quality. And this only works if you have a well-defined testbed and traceability to requirements in place, so I'm just assuming all of that. Right. And even with all this agile stuff, you still, at least conceptually, have to be defining a test pass. Right. I mean, at some point, you must have exercised 100 percent of the tests, and you have results from that, even if that pass is divided up into, you know, five million sprints. And what I would report out on regularly was how quickly quality is being achieved. In other words, for each test pass: what percentage of tests passed on their first run? What percentage of tests required two runs before they passed? What percentage of tests required three or more? Right. And that's a very important trend metric. Because if you've completed your first test pass, and let's say 80 percent of your tests finally passed, but only 20 percent of them passed on the first run, that's actually a code quality metric right there. And I like to call that a tripwire metric, because you can use it to say: if, after you're done testing the first architectural release, the first version of the product under test, only 20 percent of the tests passed on the first try, guess what, you're off schedule. Right. You need to intervene now. And this is before anyone else knows the project's going off the rails, right, because people tend to wait until the train is hanging off the bridge to say, you know, we need to intervene and do something. And I found that project management really loved that metric, because we could take corrective action very early, without a lot of drama and fuss and political exposure.
But that's the beauty of having the kind of system in place that I'm describing: you can then come up with these kinds of higher-level meta metrics. Because people don't think in terms of how quickly quality is being achieved. They think in terms of when will they be done testing. Right.
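A minimal sketch of the time-to-quality metric, under an assumed input format (how many runs each test in the pass needed before it first passed). The bucket names are my reconstruction of what Niall describes reporting, not his actual report format.

```python
def time_to_quality(runs_to_first_pass):
    """runs_to_first_pass has one entry per test in the pass: the number
    of runs it took that test to pass for the first time, or None if it
    never passed. Returns the percentage of tests in each bucket."""
    total = len(runs_to_first_pass)
    passed = [r for r in runs_to_first_pass if r is not None]

    def pct(n):
        return round(100 * n / total, 1)

    return {
        "passed_first_run": pct(sum(1 for r in passed if r == 1)),
        "passed_second_run": pct(sum(1 for r in passed if r == 2)),
        "passed_third_or_later": pct(sum(1 for r in passed if r >= 3)),
        "never_passed": pct(total - len(passed)),
    }
```

The tripwire reading is in the first bucket: a pass where most tests eventually pass but few pass on the first run signals weak code, and it signals it months before the ship date.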
Jonathon Wright Mm hmm.
Niall Lynch And I found that time-to-quality metric particularly useful in very large, long-schedule projects, because I was routinely working with projects that were 18 months long. Right. And upper management expected you to hit a two-week release window that you committed to at the beginning. Right. Which is very dysfunctional, but it's the reality. And so one of the things I realized was, OK, given this reality, we have to know as early as possible when things are going wrong, right? We can't wait till the house is on fire and we're all trapped in the attic. We have to say, you know, I'm smelling some smoke from the kitchen, maybe someone should go look. But I found, once I realized all this stuff, QA became much less dramatic, much less gladiatorial. You know. And by the way, I don't know if you've looked at my LinkedIn profile, but I have a series of essays on these subjects, articles that go into great detail on all this, which may or may not be interesting to you. But what I found is that if you just fundamentally orient yourself that way, the problem of QA is a problem of knowledge, not a problem of quality. Because the product is in a certain state of quality at every point, even if it's horrible quality. But it's QA's job to know that at every point in the development process. And bug metrics, to me, are important, but they're very secondary, because, as I said, those are labor metrics, effort metrics, right? Every bug is a unit of labor cost against engineering. But it's not a fundamental quality metric by any means.
Jonathon Wright So there's a couple of things there that I'm really interested in picking up on. I love everything you've said. It's literally the most refreshing QA talk I've heard this decade. Actually, this decade and last decade, that's how long it's been. And there's a couple of things that I just love about this.
The first one is this ability to understand quality at any particular time. And, you know, your example there, to me, is what I've seen every single time I've ever been involved with a project: you know, 20 percent of testing is done, therefore the view is that there's 80 percent left. Not that the house is on fire because actually 80 percent failed on that first pass. And I know you probably remember the TestDirector days quite well as well. But one of the things with TestDirector, what was always missing from a metrics perspective, was this idea of a counter to tell you how many times you've rerun a test. Because if you're rerunning a test, time to quality, if you've rerun that same test fifteen times, then you think, you know, how long is the test script if it's 100 steps?
It's a complex script, you know, and it gets to step 22 and every time it's breaking. The concept is, you know, it's blocked or it's in progress; there's not this cut-off. You know, when you start planning for that six-month, 12-month release cycle, everyone's got their best intentions, right? If everything goes perfectly, we'll do it. But nobody has the guts, two weeks in, to say, actually, there's a problem, and, you know, we're gonna have to potentially look at what that might mean as far as resources, additional resources that could help remove that problem. And part of that challenge of negotiating with the business is about saying, you know, it's not ready. And I've got a great example. A friend of mine, Chris Ambler, who used to be at EA Games, got fired over a game. He was supposed to release a game called SimCity, which I'm sure you've heard of.
Niall Lynch And I used to work right next to their headquarters in Los Angeles.
Jonathon Wright I am very, very familiar with it. So Chris went into his boss and said, do not ship this product. Right. He didn't have your metrics. He literally just said, we've done thirteen thousand hours of testing, we've raised twenty thousand bugs, I remember him telling me this story, it's not ready to ship. His boss had committed to a hard date, there were a couple of weeks left, and they just said it's going live. And Chris said, look, you do realize that within the first 12 hours after we launch this game, there could be more hours of use in production than in the entire testing effort we've done. If you ship it, it's going to break. Obviously, it shipped. It broke.
He got sacked. And, you know, part of that challenge was, I don't think he had the metrics of how far away they were before it could have been released.
Niall Lynch Right. Because you have to understand how CEOs think. Right. I mean, if they're facing a huge financial loss if they don't ship on time, and all you say is, sorry, it's not ready, they're not gonna listen to you. Whereas you could go in and say, OK, here's where we are, here's the risk involved, and here's my recommendation on how much more time we need. And the other thing is, you have to train them that this isn't a testing problem. This is a code quality problem, obviously. And what I like about the time-to-quality metric is that when you have tests that are failing again and again and again, even though they're patching and patching and fixing and fixing, what you're identifying there is code that's been really badly designed and written. And that's the problem. And it allows you to go, in private, to the head of engineering and say, these are areas of the code that are consistently failing, so they need to be rewritten. And that turns out to be a very no-drama discussion. You don't want to be bringing that out in a status meeting, you know, because engineers are very thin-skinned. Right?
Well, then you can go and say, OK, this clearly shows that this piece of code, this module, for whatever reason, I don't care, isn't working, and it needs to be redesigned and recoded. And why don't we just start doing that now? You know, no drama, no fuss. It's early enough on in the project that you don't have to say, oh, crap, we're going to be late. The whole point, as you said, of project engineering is to front-load the problems and the risk. Right. The whole point is to discover everything that's really wrong as early as possible so that it can be fixed, and people understand that about bugs themselves. Right. The sooner you find a bug, the cheaper it is to fix it. But people don't understand that about the code itself. You know, one of the crazy things about Symantec's Enterprise Division was, here was a product that was rolled out to 50,000 seats at major companies. Right. Which means that the most important component of the product was the installer.
And guess what the biggest part of the whole product was? And there was only one engineer who knew how to work on it. And I remember saying to the head of engineering, well, did it occur to you that there's a cause and effect thing here? I mean, you give this massively complicated installation program as a responsibility to one engineer. What are the chances they're going to make all the correct decisions? But the thing is, whether quality is being achieved over time or not, people don't pay any attention to that. And it's so crucial for understanding: where is the code weak? Where is the code just kind of jittery? And then let's just talk about that, adult to adult, fix it, move on. But if you're finding that out in the last month before a ship, everyone's going to freak out. So QA is a problem of knowledge, it is not a problem of quality. That's my mantra.
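The time-to-quality signal Niall describes, tests that keep failing in the same places build after build, can be tracked with very little machinery. A minimal sketch in Python; the data shapes and names here are illustrative, not from any tool mentioned in the conversation:

```python
from collections import defaultdict

def flag_unstable_modules(build_results, threshold=3):
    """Flag modules whose tests keep failing build after build.

    build_results: one dict per build, in chronological order, mapping
    module name -> set of failing test ids in that build.
    Returns modules that have had at least one failing test in
    `threshold` consecutive builds, a rough signal that the code
    underneath is weak and may need redesigning rather than patching.
    """
    streak = defaultdict(int)
    flagged = set()
    for build in build_results:
        for module in set(streak) | set(build):
            if build.get(module):  # module had failures in this build
                streak[module] += 1
                if streak[module] >= threshold:
                    flagged.add(module)
            else:
                streak[module] = 0  # a clean build resets the streak
    return flagged

builds = [
    {"installer": {"t1"}, "ui": set()},
    {"installer": {"t1", "t2"}, "ui": {"t9"}},
    {"installer": {"t2"}, "ui": set()},
]
print(flag_unstable_modules(builds))  # {'installer'}
```

The point is that the conversation with engineering then starts from data ("this module has failed in three consecutive builds") rather than from blame.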
Jonathon Wright So, again, I love this chat. There's a couple of things there. You know, I remember chatting to Alan Page, who wrote the book on how Microsoft tests software, and he talked about the Windows XP and Vista upgrade systems. They relied on thousands and thousands of hours with different hardware, different configurations, because that was the problem: getting the software installed. Right. And part of what you're potentially raising here is two things. One is resourcing: what is the correct allocation of a resource to a problem? Does it need a team? Is it a single-person job? And part of it is that we do work as engineers in silos. And the idea, which you just mentioned as well, that if there's a fundamental problem with a piece of code, let's call it a code smell for a second. But if you're using code coverage tools, you could probably understand how complex it is with a second metric, cyclomatic complexity, I can never remember how to pronounce it correctly.
But, you know, part of it is you can start understanding if this bit of code is actually well-engineered or not. I remember someone saying to me, if you get some builders in to do your driveway and they completely mess it up, do you invite the same builders to come back and fix the problems, or do you get somebody else to look at it? So there's a question of when it becomes an architectural challenge, where from an enterprise architecture perspective you're saying, is this piece of code, this piece of functionality, too complex? Have we underestimated the complexity behind it? Does it need a team? Does it need a solution architect involved to understand why it's so complex and why there are so many issues with it? And so part of what you're doing is identifying those edge cases of potential failure before they get hidden under masses of releases, where everyone believes that it's blocked, you can't test it, you can't get integration testing, you can't system test it, because you're playing catch-up. So how does that kind of conversation go, understanding if a job is a bit bigger than a single person, or if it's just too complex?
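The metric Jonathon is reaching for here is cyclomatic complexity. A rough sketch of the idea, approximating McCabe's measure by counting decision points in a Python syntax tree (this is a simplification for illustration, not a substitute for a real analyzer such as radon):

```python
import ast

def cyclomatic_complexity(source):
    """Rough cyclomatic complexity: 1 + number of decision points.

    Counts branching constructs (if/for/while/except/ternaries) plus
    extra operands in boolean expressions. An approximation of
    McCabe's metric, good enough to compare modules against each other.
    """
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.IfExp, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # "a and b and c" adds two decision points
            decisions += len(node.values) - 1
    return decisions + 1

code = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            return "found"
    return "none"
"""
print(cyclomatic_complexity(code))  # 5
```

A module whose complexity keeps climbing while its tests keep failing is exactly the "needs a team, or a rewrite" candidate being discussed.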
Niall Lynch Well, I'm a great believer in role definition. And I have succeeded pretty well in my career because I've understood the boundaries between what I do and what engineering does and what product does, even though I do feel a responsibility to contribute to their success. So, I mean, obviously, I have ideas about why a piece of code is constantly failing. But I find that one of the success factors in being a QA lead is social. You know, one of my main goals when I come into a new context is to build trust with the head of engineering.
And I remember when I got to Symantec, the head of engineering for enterprise was, on the surface, kind of a classic dominant, hard-headed, my-way-or-the-highway type. You think you know what this is about? Let me tell you what it's about. That kind of thing. And I realized that being a QA lead is very much a question of fear management, particularly with respect to engineering, because they look at you and think: you're going to humiliate me, you're going to find out my mistakes. And I was getting a little of that pushback from him, even though I knew he was actually a really great guy. I mean, he was playing a role, you know, as we all are at work. And so I just went into his office one day, because we'd been having some static between us over some stuff, and I said, Robert, how would you define my job here? And he said, well, it's to ensure product quality. I said, no, not at all. My job is to make you a star. That's my job. Because none of this is going to work unless you're happy. And I said, look, if I have issues with your team's work, I'm going to come to you in private. We're going to hash it out, we're going to come up with a solution that we agree on, and we're going to publicly message the same thing. We're not going to disagree. I'm not going to ambush you in status meetings or staff meetings, and I'm not going to tell everyone but you that your stuff is awful. I'm going to just come to you and we'll work it out. And I said, I expect you to feel free to do the same with me. If you think QA is not doing what you want, come to me, I'll close the door, and we'll have it out. If that is not in play, and it's something that's not often talked about, you can have all the processes in the world and all the tools in the world and it won't matter. And that conversation changed everything. Because, you know, I'll tell you something, it does not hurt to have the V.P.
of engineering having your back. Right. Because the stuff I was doing was even more alien then than it is now.
Jonathon Wright And that's kind of managing up, isn't it? Which is a skill everyone kind of misses. You know, I love engineering practices. I started as an engineer, and I found it quite hard to adapt these kinds of agile methodologies to work at scale. And...
Niall Lynch Jonathan, I have to interrupt you, because I have to move into another room right now.
Jonathon Wright OK. No problems.
Niall Lynch You get to navigate my house.
Jonathon Wright You can see how good your Wi-Fi is between rooms.
Niall Lynch Oh, it's very good. Come on, this is California.
Jonathon Wright I must admit, I am a little bit jealous; our infrastructure isn't quite the same. But actually, last time I was in Portland, I was working for Hitachi, and I was out there and caught up with my friend Ray Arell, who is based in Hillsboro in Portland.
Niall Lynch I went to college in Portland, so I know the place really well.
Jonathon Wright So, yeah, I'm sure you've probably heard of Ray Arell; he took things in that kind of direction with the Agile Alliance. And, you know, Intel reminds me very much of Symantec. I remember growing up as a child with the stuff Peter Norton put out. Right. Things like Ghost were kind of my favorite thing in the world.
Niall Lynch Ah, Ghost was brilliant. Yeah.
Jonathon Wright And the multicast server was just a game-changing idea. When I started as an engineer at Siemens, that's what we used to provision images on machines when we needed to test different versions of the OS. You know, it was game-changing tech, and it showed how robust that kind of engineered product was. Right. The tools that came out of Symantec were maybe not military-grade, but they were at a good grade, a critical-infrastructure kind of level.
Niall Lynch Yes, very much so. Yes. But I found that, you know, at Symantec I was at the director level, so it's pretty high up. And getting buy-in is so important as a QA leadership skill, because no one wants us to actually do our job the way it needs to be done. Right. They just want us to take the blame, whether they think that consciously or not. Like the example of your friend at EA, right: he told the truth and got fired. One of my favorite stories is from when I worked at the statistical software maker SPSS. I remember right after I got there, one of the engineering managers for one of their products came into my office and said, well, you know, engineering is going to be six weeks late. And I appreciated that candor, because often engineers will hide that. Right. So I thought, OK, great. I said, you realize you're going to have to tell the project manager that the whole project is now six weeks late. And I'll never forget, he looked at me, astonished, and said, so you're saying QA is going to be late? And I said, look, if you're six weeks late, that means we're six weeks later. My schedule is not infinitely compressible. I can't just lop six weeks off the QA effort and deliver what I'm expected to deliver. But I'll just never get over it: he comes in and tells me he's going to be six weeks late, and when I say, well, then the project is six weeks later, he's shocked that QA is going to be late. So we started negotiating from there. And I think that's one of the things that's kind of dysfunctional now in the software industry in terms of how it thinks about QA. Although one of the things I find very gratifying is this: when I started in QA, it was completely unprofessionalized.
And I think, particularly in the last five, six years, there's been this incredible growth of QA people professionalizing themselves, you know, having forums and podcasts and really teaching each other how to do things, which was not the case when I started. But on the other hand, I think there's a tendency in the industry to reduce QA to two things: on the one hand process, and on the other hand tools. And that leaves a gaping black hole of: yeah, that's great, but how do you actually do QA? Right. When I've had to hire QA engineers to do automation, people are just focusing on, do you know this tool or that tool, this scripting language or that scripting language. A lot of these guys wash out when they interview, because I say, OK, that's all well and good, and I believe you when you say you know all these tools and languages. That's great. Walk me through how you design an effective test.
And then they look at you like cattle staring at a new gate. Right. Because their expertise has been defined as knowing how to automate X. But they have no idea how to design X in the first place. And that's why people throw so much money at automation and it's often ineffective.
But that's the bee in my bonnet these days: QA is not just process on the one hand and tools on the other. You have to have the fundamental expertise, which gets me back to role definition. To me, what makes something a role is not a title, or where you are on the org chart. It's a responsibility that you or your team has that cannot be delegated. You can't give it to someone else. And so, to answer your question about how you deal with a situation like the code we talked about: my default mode is to say, head of engineering, this is his role. He cannot delegate this to me or anybody else. Within that, I'm happy to offer any insight I have on what's going on and why, and work with engineering. When I got to Symantec and they couldn't come up with requirements for load testing, the head of the project turned to me and said, well, can you write the requirements? And I said, well, yes, but I can't take responsibility for them, because requirements, product requirements, are defining a market opportunity, not a technical implementation. So you're asking me, as the head of QA, to take on a marketing role, which I cannot do. I said, I will write some requirements, but they can't be the requirements. Do you see what I mean? I find that matters: a clear understanding of the role definition as the thing you can't delegate. I make a strong distinction between participation and responsibility. Sometimes you need to ask engineering to help you do testing. Right. But that's participation. Responsibility for the quality of testing is still mine. I cannot delegate that to engineering. I can't delegate that to product. Because, you know, anyone can find a bug. As I like to tell my QA teams, customers find bugs for free. Why should I pay you for the privilege?
Well, I find that role definition creates that structure of accountability, but also of mutual support, that helps you negotiate those difficult discussions. Whereas in a lot of organizations, there's no real role definition. Everyone's going to jump in and do stuff, and you're like, well, great, but who's responsible for it? I find that the clarity that role definition gives paradoxically allows you to offer support outside of your role, because everyone understands this is not your role.
Jonathon Wright Yes, I completely agree. And I really like what you were saying about processes and tools. You know, I've probably done 20 years of automation, that's kind of where I've been, and the focus has always been on tools. Right. And I've found it hard to ever justify automation as a success, because I don't think replacing manual testers with automation is going to solve everything. Right.
And I think there is this tendency to automate stuff without really understanding it, or understanding why you're doing it in the first place. So I think the focus is off. And I agree with you about the new groups that are coming through. It's more around, you know, learn Python, learn this particular type of framework. And there's a lot of problem-solving spent understanding and interpreting those languages and tools, which are infinitely complex in their own right. So actual test design becomes a tertiary activity, and the primary one is whether they can automate the stack, which isn't the right reason for it. And I think the process side is the same: people very much want to do Spotify agile, or take some new approach and try it because it's new and exciting, like the tools are new and exciting. Whereas what you're talking about, getting stuck in, understanding the context of the domain, and becoming a knowledge worker, I think a lot of that is lost, because there's this focus on playing one of these roles and not focusing on quality. And the reason I came into this conversation is, I love the idea of actually being calm and cool and understanding where we are with quality. That would be my dream, right, to get to where you are now. I've only seen this done once, and it was at a Dutch company. I remember coming into the office and speaking to the head of engineering at the time, and I said, oh, where are the guys that sit at the front? He said, oh, they're not feeling very well. So nobody was in the office; we'd lost six staff. And it's like, OK, so what can we do about the project? And they said, oh, no, it's fine.
You know, it's not a problem. And I was like, yeah, but isn't what's-his-name working on this? Wasn't he building the automation framework? And he was like, no, no, no. They were so good at estimation. They had this idea of two structures: a team that was doing project-based work, and this concept of a baseline team, who were flexible and could flex in and out to fill gaps. But the baseline team's strengths were actually around first and second level support. And I think this is one of the things which just seems to disappear when it comes to modern practices. Teams are happy to build and deploy, but when it comes to fixing and rework needs that come in from first and second line support, the prioritization of new work and getting something out of the door versus the criticality of functionality is off balance. And I saw this when I was in Santa Clara working for a software company. What happens is there's a backlog of critical issues coming from customers. And that's part of what you were talking about with that risk profile: you say, well, actually, there's potentially going to be six months of rework that we're going to have to plan for now. You'll still get your date, you'll still get it out to market, but we expect the first and second line load to increase over that period. Because of the time-to-quality metric, you make a decision to release and not postpone for three months, but it's the same thing: you've bought three months of resourcing to fix the first and second-line support issues. And that's where all my team disappeared to, a team in Plano, and they literally spent their time not bringing anything new to the market, but fixing all the issues in production. Do you see things like that, where there's not enough planning around post-release?
Niall Lynch Right. Well, I think that I'm very fortunate in that I have worked with some really great project managers. And, you know, the dysfunction of project management is very well known. The bad ones don't really do anything except collect schedules from all the other functions and then throw them into Microsoft Project. But I've often worked with really brilliant project managers. And I'm very good at estimation, QA estimation. I know what drives project managers crazy: everyone wants to avoid estimating. And I've often bonded with project managers by being the guy who does the estimates and then takes the time to go over each estimate line by line, makes sure they understand it, and is flexible in changing things. Particularly at one job, I had a project manager, and we had a really good discussion about this very problem. And I said, well, we have to explain to upper management that there are two very different types of development. Forget the distinction between maintenance and new development. Just throw that out. Because what we're really talking about here is the distinction between commitment-based development and non-commitment-based development. And the way those projects are managed is totally different. By commitment-based development, I mean you've announced to the market that you're delivering a new version with these new features and capabilities. Right. So you've committed to that. Or you have a very large customer where you've committed that the next release will have this new feature they desperately want. And then there's maintenance, which is non-commitment-based development. I said, you do as much as you can in the maintenance release cycle. Right, you rank, you prioritize, and if stuff has to drop out, it drops out, and you don't stress about it. You say, we have enough resources to do, you know, the top 20 maintenance issues?
Well, maybe because of the complexity of the fixes, or the cost of the fixes, you can only do 16 of them. OK, great. Don't get upset or stressed over it. Just move on. It's a constantly moving target, and people don't want to accept that. Commitment-based development is very hard and fast, very date-driven, very content-specific. Right. People want to see these things. Then maintenance is non-commitment-based, so you have to have a totally different mindset. It has to go like a rolling thunder thing. The mistake people make is that they treat the maintenance cycle the way they would new development. And once again, this is a question of managing up. Once I explained to the big boss the difference between commitment-based and non-commitment-based, he totally got it. I said, you know, some of these bugs are bugs we have an obligation to figure out, that we have a moral responsibility to our customers to fix, but there's no revenue attached. So it's a totally different process, because it's a totally different way of thinking about the work that we're doing. So I think sometimes the categories and the language people use are really unhelpful, because they don't get at what's different. If you say maintenance versus new development, people think, well, it's got to be the same process. But when you say non-commitment-based development versus commitment-based development, people see very quickly, OK, these are two different animals by business definition. Right. Not by technical definition, because fixing is fixing and coding is coding. So, I mean, it's never going to stop being a challenge. But I find that once project management and I got on the same page, we sort of boxed engineering in by doing that. And she was so grateful that I took the time, you know.
And I remember every month I would sit down with this project manager and review the QA resourcing and schedule, and I was happy to do that. I found that when you take other functions' roles seriously and show that you're willing to take the time to help them succeed, all kinds of things open up, because otherwise everyone's kind of retreating behind their barricades. I'd have to say, when you find a good project manager, make that person your friend. Really. And be very good at estimating.
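The pessimistic/optimistic/realistic estimating that comes up here is usually formalized as PERT three-point estimation: the realistic (most likely) value is weighted four times, and the spread gives a standard deviation. A minimal sketch, assuming estimates in days:

```python
def pert_estimate(optimistic, realistic, pessimistic):
    """Classic PERT three-point estimate.

    Expected value weights the realistic case 4x; the standard
    deviation is (pessimistic - optimistic) / 6. Units are whatever
    you estimate in (days here).
    """
    expected = (optimistic + 4 * realistic + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# e.g. a test cycle estimated at 8 days optimistic,
# 12 days realistic, 22 days pessimistic
expected, sd = pert_estimate(8, 12, 22)
print(f"{expected:.1f} days +/- {sd:.1f}")  # 13.0 days +/- 2.3
```

Quoting an estimate with its deviation ("13 days, give or take 2") is exactly the kind of line-by-line transparency that builds trust with a project manager.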
Jonathon Wright Yeah, I think estimation is a lost art. And, you know, I always use the three-point estimation method, which is this pessimistic, optimistic, realistic estimate, and then a standard deviation to work out whereabouts it should be. Better estimation really helps with less drama down the line. Right. But I would ask you a question. So I'm leading the QA team for this MIT project at the moment, for COVID; it's Safe Paths. We've just rolled out to Haiti, literally yesterday. So we've got a whole stack of QA, and we're doing what would be classed as testing in production, which is a long story in itself. But part of it is time to market, i.e. getting it out because it'll save lives, versus is the product ready to ship. This is actually a geo-based challenge, because obviously we're rolling out the product as we'd normally do for new development, but then new regions are coming on. The U.S. will probably be next, then other countries, and then eventually maybe the U.K. with contact tracing. Now, what we're seeing is obviously the adoption of the product. Right. And you've probably seen the Crossing the Chasm book: there's this kind of early adopters versus early majority, and this massive chasm between them. Partly what I've been doing today is looking at those numbers in Haiti of people adopting this software, and I'm getting all the crash analytics coming through for the people whose phones it's crashing on. Right. Of course, these are volunteers, and there's a finite amount of volunteer dev resource versus the issues that are coming through, which are geo-specific to Haiti as a region.
And that whole feedback loop relates to what you've just talked about, in the sense of committed work, we're committed to rolling out to these countries in these timeframes, versus maintenance, where the issues are what's happening. And I think you're completely right. People throw around things like, oh, it's a P1 in production. P1 should be defined as, well, it's a showstopper.
Right. But there's no categorization or priority, because you're not getting the information; you don't understand what's happening in production, i.e. you've not got those metrics, which is what I'm trying to build at the moment, and I'm struggling. Part of it is things like: do we know people are successfully posting information to their health authority? That could be seen as a journey which is pretty critical in this COVID situation, whereas the ability to change your health authority once you've selected it is less of a priority. Do you think there's a priority or quality gate when it comes to functionality that's operating in production, and how that's categorized? But it comes back to that non-committed work.
Niall Lynch Well, I mean, that is an institutional discussion, and QA is obviously going to have an opinion on it. But the standard triage for maintenance issues is impact versus cost: impact on the customer versus the cost to the organization. And I think that often people just start churning. Right. They're trying to prioritize, but everyone's prioritizing things differently. And this is where, if you have a really brilliant project manager, you partner with them. Because we don't have all the answers either way, but our perspective is very valuable. Right.
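The impact-versus-cost triage Niall describes can be made mechanical enough that everyone stops prioritizing differently. A hypothetical sketch, assuming 1-5 scores on both axes (the scoring scheme and issue keys are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Issue:
    key: str
    impact: int  # customer impact, 1 (low) .. 5 (severe)
    cost: int    # engineering cost to fix, 1 (cheap) .. 5 (expensive)

def triage(issues):
    """Rank maintenance issues by customer impact per unit of cost.

    Highest impact-to-cost ratio first; ties broken by raw impact,
    so a severe but expensive fix still outranks a trivial one.
    """
    return sorted(issues,
                  key=lambda i: (i.impact / i.cost, i.impact),
                  reverse=True)

backlog = [
    Issue("CRASH-12", impact=5, cost=4),
    Issue("TYPO-3",   impact=1, cost=1),
    Issue("DATA-7",   impact=4, cost=2),
]
print([i.key for i in triage(backlog)])  # ['DATA-7', 'CRASH-12', 'TYPO-3']
```

In a non-commitment-based maintenance cycle, you then simply cut the ranked list wherever the available resourcing runs out.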
And at some point, remember my earlier point about irreducible complexity? Perhaps that can be rephrased as an irreducible mess. Right. It's not going to stop being messy. It's never going to become this completely rationalist enterprise, and so managing the chaos of it is what we all have to learn how to do, but in a way that is effective and rational in some sense. And it's interesting that the problem you mentioned just now with your COVID app reminds me of something very important for QA thinking, particularly with respect to automation. Because you're thinking, OK, are people able to send their data where it needs to go? Right. That's a very complex testing problem. And I think that software development in general is a victim of what I call the empirical fallacy: that we can't know until we actually see it happening. Right.
You know, I read about this all the time, like the Boeing plane crash software nightmare, where it's like, well, the engineers couldn't see it actually happening in real time. And it's like, why do you need to see it in real time? I mean, do you not understand the code that you wrote? And when I was at Symantec, one of the big problems they had in enterprise computing was that they had all these kind of single-purpose security products.
You know, mail filtering, firewall, virus detection. And there was no sensor fusion behind it. What we found out when we surveyed our major customers was that they just assumed there was no difference in quality between those functions across all of our competitors. They assumed we all did the job about as well as everybody else did. And their main pain point was actually being able to get a dashboard that synthesized all this information across the organization, so they didn't have to correlate it and do the sensor fusion themselves to get the big picture of their security posture. This is 20 years ago, and that's a problem that's largely been solved now, but at the time it was a huge issue. And so Symantec came up with this project, codenamed SESA, which would be that dashboard that fused data from all our different endpoint security products into one overall view of the security posture of the company. And then, of course, my QA team was tasked with developing a test plan for all of this. But we ran into an interesting political problem. We were developing this dashboard.
We needed new versions of each of these endpoint products that could integrate with the console, but those were still being developed for upcoming releases. So it was a huge chicken-and-egg problem.
And of course, it was politicized, because the people leading the development teams for the endpoint products were like, well, we're not going to commit to supporting your console until it's ready and we can be sure that it works. And it's like, well, we can't finish our console until we have your stuff. And this was just going nowhere, because it was like, I'm not going to give you A until you give me B, and I can't give you B until you give me A. A whole group grope.
So I sat down with my head of engineering, who, by the way, is really brilliant. And I said, you know, this is all internal Symantec technology. We all own it. Right. There's nothing proprietary internally here. So what we need to do is create not an automation tool, but a simulation tool. There's no reason we can't build a tool that will perfectly simulate the output from any of these products.
Without having the products themselves. Because we have their specs. We know how they're going to implement it. We don't have to wait till it's implemented. I said, and the other thing is, we need to build into this tool the ability to vary the parameters of all this simulated output. And that broke the impasse.
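A spec-driven simulator of the kind Niall describes can be tiny: emit records in the agreed format without the real product, and expose parameters so the consuming system can be pushed into edge cases. Everything below (field names, severity levels) is invented for illustration; the real product formats are not public:

```python
import random

def simulate_endpoint_events(product, n, seed=None, severity_weights=None):
    """Generate records matching a (hypothetical) endpoint product's spec.

    Rather than waiting for the real product, emit events in the agreed
    format, with tunable parameters (volume, severity mix) so the
    consuming dashboard can be driven into edge cases on demand.
    """
    rng = random.Random(seed)  # seeded for reproducible test runs
    severities = ["info", "warning", "critical"]
    weights = severity_weights or [0.7, 0.2, 0.1]
    return [
        {
            "product": product,
            "event_id": i,
            "severity": rng.choices(severities, weights=weights)[0],
            "host": f"host-{rng.randint(1, 50):03d}",
        }
        for i in range(n)
    ]

# Stress the dashboard with an all-critical burst from the mail filter
events = simulate_endpoint_events("mail-filter", 1000, seed=42,
                                  severity_weights=[0, 0, 1])
assert all(e["severity"] == "critical" for e in events)
```

Varying the parameters is the key design choice: an all-critical burst, an empty feed, or a malformed mix are all one argument away, which no real product instance would ever give you on demand.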
In fact, we finished before they did. And we were able to raise bugs against our simulated data, based on their specs, which drove them crazy. Because I remember one meeting where the head of one of these point product teams told upper management, yeah, but, you know, we're still working on this format, we're still developing it. And he looked at me and goes, QA's already done that for you. You need to use their format. So I think that the issue of simulation is really under-thought in QA; we're kind of trapped in the empirical fallacy.
People think we must somehow have an environment that's on some level identical to the environment customers are going to be using, and identical data too. And it's like, you're never going to be able to do that. Right. So you've got to think in terms of simulation. And, you know, it's not perfect, but it's going to give you a leg up on at least the major issues. But I don't see any discussion of this at all around automation. And yet, if the structure of the data output is known, and if the structure of the receiving system is known...
You don't need to recreate the receiving system in a test lab. You can virtualize it. And once again, this is where my education continued, because I have an education in philosophy as well. And you're like, whoa, wait a minute. This is just stupid empiricism, right? I like to explain to people who don't work in the industry how engineers think. I say, you know, suppose one day you win the lottery and you have the money to build your dream house. You hire an architect and a contractor, and they're working on it. And you visit the site one day and you say to the foreman, or to the architect, so when it's done, it's going to have four bedrooms, three baths, right? And imagine if the architect answered, well, we won't know until we're done. You'd have said, what do you mean, we won't know until we're done? I mean, you contracted to do this. And what if your architect says to you, how many houses have you built? You don't know anything about this, shut up. Which, if you think about it, is how CEOs hear engineering teams a lot of the time.
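Virtualizing the receiving system, as described here, can be as simple as a stub that enforces the agreed interface contract instead of standing up the real downstream product. This is a hedged sketch under the assumption of a simple record-based interface; the field names and types are invented:

```python
# Contract of the receiving system, taken from its spec rather than
# from a running instance (these fields are illustrative).
REQUIRED_FIELDS = {"severity": str, "status": str, "sequence": int}

def virtual_receiver(record):
    """Accept a record if it matches the published interface; otherwise
    report exactly which parts of the contract were violated."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"wrong type for field: {field}")
    return (len(errors) == 0, errors)
```

Because the stub is driven by the spec, it exists before the real receiver does, which is exactly the point: the test lab never needs a copy of the production system, only its contract.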
Jonathon Wright I think it's a fascinating discussion. I'd love to quote you on the simulation-over-automation concept, because I think that is a really great idea. In the sense of, you know, a lot of the challenges, the Haiti ones, the good examples we use. We use model-based testing tools to generate synthetic data. And, you know, I did a podcast with Hugh Price, who helps us with this. We did something very similar before: we created a simulation for a smart city project in Copenhagen, where we synthetically generated five billion historical transactions going through the city, with the times and dates right. We knew it was GPS data. We knew what the format was going to be. We didn't have to wait to find out it had "four bedrooms". We knew; it was a case of populating the test data. And I think this is interesting because, you know, partly we talk about test data coverage in this kind of way: well, we know testing coverage to a certain point. And like you pointed out with localization testing for installers, in theory it's a variable.
So, therefore, you know, part of that is, well, can you simulate those activities to prove that they work and prove that hypothesis? You're building that model to prove that it's working. And, you know, yes, you may not have a sandbox environment. So there's this concept of being able to harmonize at a point where you can do synthetic testing as a production experiment, which generates the data. And I think you've kind of nailed it, in the sense of: if you came up with a test API language which would say, for this particular transaction, I want you to give me a negative response in the first case. And the other end accepts that it's in testing or debugging mode and plays along with the simulator, i.e., a systemic failure: that message didn't get posted, I'm not going to give you a response back, and the client times out. So part of it is this testing-mode idea built in at the code level, so you could test those scenarios through simulation instead of full end-to-end runs.
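The "testing mode" idea Jonathon sketches, where a request carries an instruction telling a cooperating endpoint how to misbehave, could look like the following. The header name, fault vocabulary, and class names are invented for illustration; real fault-injection tools use the same shape but their own conventions:

```python
class SimulatedEndpoint:
    """A cooperating endpoint that, in testing mode, plays along with
    fault instructions carried in the request itself."""

    def handle(self, request):
        fault = request.get("x-test-fault")  # hypothetical header
        if fault == "drop":
            return None  # simulate a lost message: the client will time out
        if fault == "reject":
            return {"status": 400, "body": "injected negative response"}
        return {"status": 200, "body": "ok"}

def client_call(endpoint, payload, fault=None):
    """Send a payload, optionally asking the endpoint to inject a fault."""
    request = dict(payload)
    if fault:
        request["x-test-fault"] = fault
    response = endpoint.handle(request)
    if response is None:
        return "timeout"  # stand-in for a real client-side timeout path
    return response["status"]
```

This lets a test exercise the client's timeout and error-handling paths deterministically, without waiting for the real system to fail on its own.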
Niall Lynch And the other thing is, it de-scales the size of the effort, because your testing is so precisely defined and targeted. One of the problems with automation is that it accepts the massive scale of the system as a given, and so automation has to be equally massive. Right? Whereas with simulation, you can de-scale. I mean, simulation is never going to be the entire solution, but certainly for initial testing. The other thing (and I really enjoy talking to you, because you're helping me remember things that I forgot I knew) is that when you're doing data testing like this, heavy-data transactional applications, people often focus on the properties of individual transactions. Right? I need a transaction that has a value out of range, or whatever. People don't often think about the fact that there are transaction properties and characteristics, but there are also transaction stream characteristics and properties; in other words, attributes of the whole data stream as such. And I ran into this when I was developing testing for the mail filtering product that Symantec had at the time. We had a good set of individual examples, you know, mail with attachments, mail with encrypted attachments, right? And then, talking to the engineers, I realized it and said, right, but we need to have data stream profiles, not just individual mail profiles. So how does the product behave if 70 percent of the mail stream is infected? If 70 percent of the mail stream is encrypted and infected? And that's where we found all the bugs. Because I was thinking, our products are meant to deal with a virus outbreak, which is just like our current pandemic. It comes in a surge. So you're going to go from zero to a hundred in maybe ten minutes. Right? So we need to test that scenario, because that's the scenario our product is supposedly designed to handle.
Not like one e-mail that's infected with an encrypted attachment. What if 70 percent of your mail stream, which is, you know, 10 gigabytes of data, is suddenly having to be filtered by our mail filter? And that's where it just fell over and died.
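The stream-profile idea can be made concrete with a small generator: instead of hand-crafting individual messages, a profile fixes properties of the whole stream, such as "70 percent of messages are infected". This is a minimal sketch with invented message fields, not the actual Symantec test harness:

```python
import random

def generate_mail_stream(n, infected_ratio, encrypted_ratio, seed=0):
    """Generate a mail stream whose aggregate properties (infection and
    encryption rates) match a stream-level profile, not just per-message
    properties."""
    rng = random.Random(seed)  # seeded so the stream is reproducible
    stream = []
    for i in range(n):
        stream.append({
            "id": i,
            "infected": rng.random() < infected_ratio,
            "encrypted": rng.random() < encrypted_ratio,
        })
    return stream

# An "outbreak surge" profile: most of the stream is infected.
outbreak = generate_mail_stream(10_000, infected_ratio=0.7, encrypted_ratio=0.7)
infected_share = sum(m["infected"] for m in outbreak) / len(outbreak)
```

Ramping `infected_ratio` from near zero to 0.7 over successive batches would simulate the zero-to-a-hundred surge Niall describes, which is the scenario a filtering product actually has to survive.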
Jonathon Wright And that's interesting. It's kind of a bit like a DDoS attack, really. What I always found fascinating when you go down to the network level is, you know, just that SMTP traffic is going to be a lot if you think about the processing time for your mail filter application. The types of data, the size of that data, and the frequency of that data are all really important attributes of what the simulator could potentially do. And, you know, I'm trying to link this back to your original statement, in the sense that you're taking quite a lot of things like systems thinking into your thought process: how does the entire system work organically as it grows and becomes a downstream product that connects to an upstream product? What is that contract going all the way from here to here? What are the streams of multiple different transforms, ETL, whatever's going on through there, and at what frequency? You know, how does that whole thing hang together? Not just targeting a single entity, but actually targeting the ecosystem.
Niall Lynch Right. I mean, I have devoted a lot of my time as a leader to training my staff. And one of the things that I would teach them is that there is irreducible complexity here, but the way we think about it doesn't have to be irreducibly complex. And so, in terms of your example of systems thinking, what I would do is just draw a diagram on the whiteboard that had a rectangle for the system.
Right. And then outside of it, to the left, I'd have a box that said inputs. Inside the system box, transformations. And then a rectangle to the right that said outputs. Any system follows this pattern. There might be 5,000 variations of input, 5,000 different transformations, 5,000 variants of output. But ultimately, at a high level, you have to think about any system as: what are all of its inputs, and what are its methods of input? Right? How does the input get into the system? What happens to it in the system? And how does it get out of the system? And that's a very simplistic way of thinking about it, but it's also very powerful, because I think that you have to come up with these heuristic devices.
To simplify how people approach complex problems without simplifying the complexity of the problem. And I think that a lot of the materials I've seen for training people to think about these things just go directly into the complexity. They're not giving their audience heuristic tools to sort of create a structure of thinking that helps them understand complexity.
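The whiteboard heuristic (inputs, transformations, outputs) can even be expressed as a trivial pipeline model. This is purely illustrative; the point is that 5,000 variations of input or transformation are just more entries in the same three-part structure:

```python
def run_system(inputs, transformations):
    """Model any system the way the whiteboard diagram does: what comes
    in, what happens to it inside the box, and what goes out."""
    outputs = []
    for item in inputs:
        for transform in transformations:
            item = transform(item)  # each transformation is one arrow in the box
        outputs.append(item)
    return outputs

# A tiny concrete instance: two transformations applied to three inputs.
result = run_system([1, 2, 3], [lambda x: x * 10, lambda x: x + 1])
```

However complex the real system, a tester can always ask the same three questions of this structure: what are the inputs, what are the transformations, what are the outputs?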
Jonathon Wright No, I agree with that. I think it's, you know, that kind of: what heuristics do you apply to a particular problem? That's knowledge which you've accumulated since, I think it was 1987, when you started. You know, as you go through, you've got more heuristics which you understand, and you've also got more blueprints from where you've seen something similar work or not work. Does that feel like a true statement?
Niall Lynch Yes, it's very much a true statement, because it's always this interplay between your theoretical understanding and your experience on the ground. Because another thing that's not really talked about: the big challenge for me in my career was not really thinking through this stuff in the way that we've discussed, but how do I make it happen on the ground? How do I institutionalize and socialize this, not just in my department but in the organization as a whole? And that's a real art. I mean, because you can have the best ideas and understanding in the world, but if you can't make it happen on the ground.
Every day. Consistently. It's never going to happen. Which is why, often in my career, when I've inherited a legacy group, often filled with amazing people who have no training and are institutionally powerless, I have devoted probably 50 percent of my time to training and coaching. I think a lot of managers come in with all these brilliant ideas, but they don't want to make that time investment. And I've always been willing to do that, and I've never been disappointed with the results. But I mean, it's really like, you know, you can read all the books on child-rearing in the world, but at some point you're rearing your kids, like, today, in your home, right? Mm-hmm. And so you've got to figure out how all of this applies. And I think that's why, you know, I had this really brilliant QA engineer at Symantec who could have just gone and been an engineer. Right? That's the catch-22 in QA: when they are really good, they don't stick around. But he stuck around because he was learning things. Because I asked him, I said, why haven't you just taken a job in engineering? He said, oh, because I already know how to do that. I don't know how to do this. So, you know, there's the practical stuff, like how do you actually institutionalize this and make it happen? Because if people don't see it happening, it will not have any credibility.
Jonathon Wright And I think, you know, two words with which I can summarize from that statement are trust and transparency, very much like this Haiti thing. Without trust, you know, the project will fail. And I think it's the same thing for how you've been able to succeed: with that 50 percent investment, you're providing trust and giving transparency at the same time, to engineers, managing up, and managing your team, so people trust your decisions, which helps you lead by example. And I think that's the characteristic. Maybe that trust sometimes breaks down because of the stuff you were talking about before, things like agile methodologies, where someone will walk up to somebody and report an issue with them, you know, that "your baby's ugly" kind of approach. That isn't, you know, building trust and transparency. That isn't collaborating and proving that you can add value to that activity. I'm trying to avoid a word here; I don't think people have to be diplomatic. I think it's more about honesty and trust and transparency that actually builds confidence in your leadership. And I think that's a key to success, from what I'm hearing.
Niall Lynch Well, without question. And I think that when you have that, you know, everything becomes much easier. Everything becomes much more transparent. I mean, I was raised in a church, and I remember one of the pastor's favorite phrases was, there's no freedom without law. Now, he meant something by that which I don't necessarily agree with, but I sort of rephrase it as: there is no freedom without structure.
And so I've always seen my role as a QA lead as having two goals. I have to put a structure in place that everyone understands, that's uniform, and that most of the time is invariant. That's in terms of role definition: how we do our work, what we're being held accountable for, how I'm going to judge you. And I find that if that's in place and internalized, I can give my people perfect freedom. It takes such a burden off people. And I remember one day, one of my great leads came to me and said, you know, Niall, I've never had the experience at work that I'm having now. And I said, oh, what does that mean? He said, I've never had the experience where, on the one hand, everything is very well defined. Everything's out there; we all know how we're going to do our work and how everything is going to happen. And yet I've never felt so free to do my work the way I want to do it. I said, well, that's cause and effect. Right? You know, because we all know managers whose only way of managing a team is to walk around every day and quiz people about what they're doing. And I think that's what he was talking about: I didn't do that. I wasn't coming to him asking, so what are you working on? Because I already knew. And he already knew that I knew. And in fact, one of the things that I do, which is kind of shocking at first, is, you know, when I have my first staff meeting with my leads, I say, OK, let's come to a common understanding of the purpose of this meeting. The purpose of a status meeting is not to find out the status. It's to discuss what we're going to do about the status. We already know it. So that's another thing that I put in place, what I call passive status reporting mechanisms, where the information I need just flows to me. You know, I click on a link, I go to a web page, I go to a tool, and everything I need to know about status is there, and everything they need to know is there too.
So we don't spend the status meeting filling each other in on what we're doing. We already know. So let's just talk about, based on what we all already know, what are we going to do? So my staff meetings were never more than 20 minutes long. And they loved that. They just thought that was like, wow. I mean, they hated it at first, because I was putting the burden on them to know exactly what was going on. Right.
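A passive status reporting mechanism, as Niall describes it, is essentially an aggregator over the tools people already update, published where anyone can read it. This sketch uses hard-coded stand-ins for what would really be test-management and bug-tracker queries; the shape of the data is assumed, not taken from any specific tool:

```python
def collect_status(test_results, open_bugs):
    """Aggregate status from existing data sources into the single view
    a status page would publish, so nobody has to be asked in person."""
    executed = sum(1 for r in test_results if r["run"])
    passed = sum(1 for r in test_results if r.get("passed"))
    return {
        "tests_executed": executed,
        "tests_passed": passed,
        "pass_rate": round(passed / executed, 2) if executed else None,
        "open_bugs": len(open_bugs),
    }

# Stand-in data: in practice these would come from the team's real tools.
status = collect_status(
    [{"run": True, "passed": True},
     {"run": True, "passed": False},
     {"run": False}],
    [{"id": 101, "severity": "high"}],
)
```

Rendering `status` to a web page that anyone (including other departments) can open is what turns reporting from an interruption into a self-serve resource.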
Jonathon Wright A lot of status meetings which I've been to, you know, part of it is people talking through what's already on the screen. Right? It's a PowerPoint view of, you know, here's what we did, here's this information, which, as you said, you already know, because you know where to get it and you can self-serve it. There is no surprise, because you already understand where everybody is. It's a case of using that time to actually achieve something which you want to work collectively together on. And that's really interesting, because, you know, I remember Ray, who's in Portland as well, and he always has this thing about meetings where he literally says, you know, I won't attend a meeting without an agenda. And what he means is it's got some structure within it, in the sense of: what are the decisions we are going to make out of the back of it? There must be a purpose for us coming together and working together. And, you know, this idea of giving people the freedom so they don't have to worry about how they should be gathering and communicating this information, because that's all taken care of. They're able to get on with their job and not have to worry about it. Right.
Niall Lynch Right. And I found that developing passive status reporting mechanisms solves a lot of political problems as well, because they become a resource for people outside of my department too. So one of the things I just hate, not to be crude: I'm taking a leak and the head of another group is standing next to me saying, so, what's going on with X? I'm taking a leak here, just stop. And what I did at Symantec and other places I've worked is create passive status reporting mechanisms that are open to everybody. So if a project manager wants to know, where is QA on this project? They click on a link, right? They go to the QA status page for that project. Everything they need to know is there, real-time, updated every day. And when I showed this to upper management, they just fell in love with it. Because one of them said, we don't have to hunt you down in the men's room and ask you what's going on. I said, it's all here. Now, if you have questions about this, by all means, come to me. But you shouldn't need to find me just to know what the heck is going on. And I found that that solved a lot of problems. And it's funny, there were people, functional heads of other groups, who really hated that, because it put the responsibility on them to know what was going on, right? They couldn't say, well, you know, Niall hasn't told me. No, that excuse would never fly again.
You don't need to come to me. I mean, maybe you do, but only over something that only I can answer, right? Not because I'm the only one who knows what's going on in my area. You know, I said, look, everything in my group is transparent. There's no black box here. You want to know who's working on what, where we are on test coverage, where we are on time? Here it is. It's all here. And this was 20 years ago, when this stuff was not common at all. And, you know, people actually came to love it. Because the other thing I think is very important is to be very open with your people about how you want them to communicate with you. That shouldn't be a minefield, that shouldn't be a surprise. I'd always tell them, I said, I have a pet peeve. There's one thing I cannot stand, and it may be irrational, but you need to learn it. Do not stop me in the hallway with a question. Just don't do it. If I'm walking down the hallway, I'm going somewhere else. You want to talk to me? My door is always open. You just knock on it, walk in, and ask me whatever you want. Do not stop me in the hallway, because to me, that's like an ambush.
Jonathon Wright It's kind of unstructured as well, isn't it, in the sense of: do you need to just bump into each other to trigger that activity to happen? With this kind of self-serve QA, the information's already there. Right. And, you know, what you were talking about before, about not having a good quality product manager: sometimes they're just a proxy. They just pass the information on, you know. Oh, did you realize where QA is, and did you realize we've got these kinds of issues? They're just passing information. They're not adding anything to enrich that information.
Niall Lynch Correct, correct. Right. And, of course, I mean, you know, I really try to train my people that what we're doing is structured. And snagging me in the hallway when I'm going somewhere else? It's so random, how could it be important?
Right. And at first some people get upset, but then they learn that actually you can knock on my door and you can come in and talk to me, and we'll have a very productive discussion. But they have to make that decision, right? If you're going to obligate me, that has to be a decision you're making. Right. And I feel the same way about, you know, when I put my processes in place, heads of other functions often feel they're losing freedom in relation to QA, because they can't get away with stuff they used to be able to get away with before, like swaying my people from doing their job. And I explained to one of the ones who was kind of cross with me, I said, look, if there are no rules in place, I cannot know when I'm making an exception. And I said, if I say you can't do this in relation to QA, it doesn't mean it can't happen. It means I have to make a conscious decision to make an exception for you. Depending on the circumstances, I'll be happy to do it, but we'll both know it's an exception. Right.
Jonathon Wright Yeah, I think that's really hard for some organizations. I remember being in a larger organization that had strategic processes and then tactical processes, which were kind of, you know: if something didn't fit in with the process they had in place, they would make an exception where that was the case. But then what happened was there were way too many tactical solutions and not very many strategic solutions. And I think that might be a key reason why, you know, this autonomous kind of team that can work in isolation and make its own decisions, that's great if the definitions are there and there's that level of transparency and guidance and support; then you can make those decisions. But if that's not there, and in a lot of smaller teams that structure isn't there, then a lot of the exceptions happen all the time, from small things to large things, and everything becomes tactical and nothing has a stable structure.
Niall Lynch And I think that, you know, defining a rule or a way of doing things is not necessarily a restriction, because the other part of this goes back to my structure of freedom. I like it when I have rules in place that everyone understands. That gives me the freedom to just grant favors, pretty much all the time. You know, I've always been amazed at people who are in leadership roles, and a colleague needs their help to do something that will take them 10 minutes, and they won't do it. You know, not my table. Whereas for me, it's always like, happy to do it. It's ten minutes. I'll never be so busy that I can't take ten minutes out to help a colleague. But then they know. They know I'm doing them a favor, and I'm happy to do it. I'm happy to do it because it creates no anxiety in me that I'm creating an expectation I can't fulfill. Because one of the situations I've encountered over and over again: I've inherited a legacy team that did not have strong leadership.
Heads of other functions were constantly raiding QA resources to do special little things that they needed, which really didn't have a lot to do with what the team was supposed to be working on. And one of the really difficult things I've gone through many times is explaining to these people: you can't appropriate my resources without my approval. You can't go running to Bob over here to do this thing for you that he's always done. It doesn't mean it can't happen, but it means you have to come to me. Simple as that. And then I have to explain this to my team, which makes them very uncomfortable, because they don't want to piss off some powerful person. I would just say, look, just tell them, talk to me, just come talk to Niall. And that's it. Trust me, 90 percent of the time, if I can make it happen, I will make it happen. And then people learn. They see, OK, Niall's got his rules and he's got his processes, but he's happy to help. And in fact, when I worked for Stamps.com, I worked for them for a couple of years, and my QA team actually had t-shirts made that said, just talk to Niall. You know, you're kind of training people, and it gets back to role definition. We're not all doing everything all at once. There are responsibilities I have that I cannot delegate, and my resources cannot be interrupted by letting you appropriate their labor. That's not going to happen, not that way. It can happen another way. And then they see: actually, Niall's one of the more accommodating functional leads that we have here. But, you know, we all have to understand the structure that we're all working in. And I find that's why a lot of process improvement fails: it doesn't address that higher-level question of role definition and structure.
Jonathon Wright I've really enjoyed chatting. We could literally go on for a couple more hours.
Niall Lynch It's been almost two hours already.
Jonathon Wright Which is good. There's some fantastic material in here; it's going to be an amazing episode.
But I want to make sure that I add some stuff on for, you know, for people to get in touch and reach out to you. And you mentioned some of the articles that you've written, which I will check out; I did spot a couple. But, you know, what would you say to people who are listening? Who have been your biggest inspirations? And also, what's a good way to interact with you and speak to you if they've got questions?
Niall Lynch Well, I mean, you can certainly share my e-mail. And you know, I got into QA when it was kind of feral, very undisciplined, very untrained. And I came in untrained in QA. And this was like 1987, right? So I've been doing this for almost 35 years. And so I have a lot of compassion for QA people, because I know how hard it can be. And so, particularly now that I'm mature in my career, I'm happy to help people. I'm just really happy to help people think things through. And I do not have all the answers right away. But I think that QA people often do not have mentors. I think that's changing, but often they're just kind of thrown into the craft and have to figure it all out. And I know all about that.
But also, I think that there is very little thought leadership in the sense of: what is QA all about? As opposed to this process or that tool. As opposed to, you know, the kind of understanding that we've been discussing.
What is QA, really? What is it uniquely responsible for? And how should it actually think about itself? Which is not a question that I see being discussed. It's all very instrumentalized, like, how do I become better at X, you know, how do I optimize my scripts? Right? How do I make Scrum work for me?
But these higher-level issues aren't discussed. And the thing is, I'm not just talking out of my ass here. I mean, I have actually done something. I've actually lived through all of this. I don't just have, like, this good idea. So I'm happy to be a resource for people. And I have, I think, seven articles on QA on my LinkedIn, which you might enjoy reading, because they go into a lot more detail about all this. But I'm really happy to be a mentor and a coach for QA people. It's not a problem.
Jonathon Wright I'm personally going to read those seven. Maybe we could do a follow-up podcast and talk through some of those points, if you're OK doing that; that would be really good. And, you know, on Wednesday I'm doing a webinar for the British Computer Society, where we're trying to find mentors within the British Computer Society. So I definitely will mention you as well, because we're part of the specialist interest group for software testing. And I think, as you said, there's a lot of "here's how you can use the tools", "here's how you can create your own processes", but no one's really taking that time to address what quality is and, you know, how we actually make it a success. Everything you've talked about today I've really enjoyed, because part of it is bringing that clarity, and trust, and transparency, and the fact that your staff can feel they've got the freedom, without the stress and burden, because you've taken away some of that responsibility, from leading the team to becoming a success and being able to scale this model, not just focusing on maybe the wrong reasons. And maybe, you know, that experience is something that the community needs you to share.
Niall Lynch And I think what's so important about understanding the essence of QA is that it comes before any particular process. And so if you've internalized the fundamental principles we've been discussing, you can decide how to apply them in any way that is necessary. Right? Because people always ask, yeah, well, how would this work in a small organization? How would this work in a large one? And it's like, that's an irrelevant question, because it is what it is. I mean, you know, the purpose of it doesn't change whether you're in a startup or at Symantec. The way you prioritize that understanding may change; the way you institutionalize it may change. But it gives people freedom, because I find a lot of QA theory very doctrinaire, very tied to specific assumptions about the type of organization you're in or the product that you're making. And it's like, right, but no one says that the definition of engineering is different in a startup versus a big company. No one says that, right? But people do think that about QA. Well, I don't see how this applies here, it sounds really heavy-duty. And it's like, no, what we're talking about here are fundamental understandings of what we do and why we do it. How that happens on the ground is going to be different. But once again, if you have the structure, you have the freedom.
Jonathon Wright That is freedom, though, isn't it? That freedom is the understanding, which I don't think is there. I think that is the key to unlocking all this: the discussions that don't happen, and that people are afraid to have, about quality, just because they don't feel they can support that conversation with facts and evidence.
And I think, you know, there's also this split within the QA and testing industry, that you have to be in one camp or the other. You're either context-driven or you're in another school of thought. And, you know, because there's no one really talking about what quality actually means, I think the message gets lost.
Niall Lynch Right. And often, when I explain to people that the problem of QA is not quality, it's knowledge, I think people feel a sense of vertigo when they hear it, because they don't know how to operationalize that understanding. And that's something I've figured out painfully over many years. And so I'm able to say, you know, I'm not asking you to step into the abyss here. I'm not asking you to abandon everything you already know. This is just a different perspective on understanding what you're already doing, which will help you understand other things you might need to start doing, right? But people do feel that intellectual vertigo when I say this, because they're like, well, you know, I don't know how to do that. It's like, well, I do. I had to figure it out, and it was not easy. But, you know, learn from my pain. Right. So, well, I want to thank you for reaching out to me and giving me this opportunity. It's been a lot more fun than I thought.
Jonathon Wright And I loved it. Just loved it. It's been great fun. And I've got to read your articles when I've got some time. We'll have to put something else in the diary to go through some of the thinking.
Niall Lynch Sure. And we are connected on LinkedIn.
Jonathon Wright We are indeed.
Niall Lynch Yes, we are. OK, good. So you'll have no trouble reaching me. And anyway, you can certainly feel free to reach out to me directly anytime you feel you need to. And I'm really happy that you're doing this. I think you're doing a wonderful thing. And, you know, take care of yourself, man, OK?
Jonathon Wright You, too. And until next time. And let's keep in touch on LinkedIn.
Niall Lynch OK. And you have my e-mail.
Jonathon Wright I do. I do. I've got it in my diary at the moment.
Niall Lynch Right. I'm very responsive to e-mail. So feel free. OK. Take care of yourself, man.
Jonathon Wright It's been lovely. Yeah, take it easy, man, and stay safe.
Niall Lynch Yeah. Well, I have no choice.
Jonathon Wright You're doing the right thing. Right. Take it easy, man. Bye.