Quality Is Not An Act, It Is A Habit (with Kate Falanga from Code And Theory)

We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.

Audio Transcription:

Agile testing has developed exponentially over the last two decades, with the role of QA continuously transforming. In this episode, I’m joined by Kate Falanga, Associate Director of QA at Code & Theory, to discuss the evolution of the QA role in Agile.


In the digital reality, evolution over revolution prevails. The QA approaches and techniques that worked yesterday will fail you tomorrow. So, free your mind. The automation cyborg has been sent back in time. TED speaker Jonathon Wright’s mission is to help you save the future from bad software.

Jonathon Wright:

Hey, and welcome to the QA podcast. Today, I’m going to be joined by Kate Falanga, and we’re going to be talking about the evolving role of QA in Agile. It’d be great if you could do a bit of an introduction, so the audience can hear a bit about your background and where you started, really.

Kate Falanga:

Sure. I’m Kate Falanga. I am the Associate Director of QA here at Code and Theory. I run QA here in our New York office, which is our main office. I have been in QA, I don’t know at this point, 15 years, maybe a little bit less, not a hundred percent sure. It all flies by. In that time, I’ve been in a couple of different organizations. I focus a lot on transformation and having people think a little bit differently about QA, both on the individual level of the QA folks themselves, but also within organizations, because there are a lot of things going on in the software development world and we have to make sure that we’re ready for it.

Jonathon Wright:

Absolutely. And I guess starting your journey out, what kind of led you into the realms of QA?

Kate Falanga:

Everyone has this story, I think, just because, unfortunately, not too many people go into QA intentionally. A lot of times people fall into it, although I do think that’s changing a little bit now that software development is so prevalent in our society. So, I came in through television production; that is actually what I went to school for. I had a couple of different careers. One of them was at a company that did kind of YouTube for the enterprise before that was easy to do. So, it was hard at the time.

So, I was a video encoder at the time, back to my television production days; video on the internet was new. I went in there to make video streaming on the internet work really well. And that company was a SaaS, and they wanted to start up a support team, and so I got into that, ran their 24/7 support team for a while, and built that up from scratch. Through that process, we did a lot of UAT and worked with the QA team pretty closely. And I didn’t particularly enjoy 24/7 support. And so I went into QA as, “Hey, something that would be really interesting to do.”

So, I took a step back. I was in management, but I took a step back to just be a QA engineer. It’s kind of taking a step back to take a step forward, to move into a whole other discipline and learn from there. That job got me another job, and I worked from there, but that’s how I got into it.

Jonathon Wright:

That sounds really exciting, because I think we all take quality for granted, and with video streaming, the fact that everyone has Netflix and Amazon is just a way of life. Then you get those errors that say things like “cannot connect” or “can’t log into the server.” I use a little app called Down Detector. You can literally tell whether your favorite services have got issues, whether they’ve gone down. You can get it on your mobile. And I just think we take it for granted, but at the same time, you associate quality with those kinds of products.

When you look at quality, what’s your definition of quality?

Kate Falanga:

That’s always a difficult one. Where I am now is an agency, where we have lots of different types of projects. So the definition of quality varies from project to project, and with the actual stakeholders associated with that project. If you think about, “Hey, if we’re going to make something for children,” then the standard of quality changes versus something that you’re doing for adults. Or if it’s a native mobile app versus something that is an in-store experience.

What quality means varies, and that’s why it’s important to have that conversation before a project starts, or even if it’s an ongoing project, to stop every once in a while and have that conversation. Because people don’t really think about it that way, that what quality means changes depending on lots of different factors: the stakeholders, the environment. So, it’s actually an important conversation to have, because it’s so easy to get bogged down, especially in the software development world: “Hey, the Jira tickets on my plate,” or “Hey, how many different devices are we supporting,” or the client’s mad, or they’re happy.

It’s really easy to think about those things and not take a step back and try to figure out really what does it mean to be successful here? What does good mean? What does quality mean here? And it’s important to have those discussions.

Jonathon Wright:

Absolutely. And I think part of that is quality and time to market versus price point. I think it’s so difficult, and no one really has that conversation around: do we want a product quicker to market that potentially isn’t as high quality? How do we want our customers to perceive the actual quality of the product? I always like this question around: what kind of brand or product do you associate with really good quality? You don’t have to name them, but you can say something like a German car manufacturer, or a particular product that you respect for quality.

Kate Falanga:

That’s hard to say, because as QA people, we’re always looking for those potential issues. It’s a little bit how we see the world. So, I’m always going to be finding some sort of an issue with pretty much everything I see, even in those types of brands. If it’s a project I’m working on, that’s a different perspective; as a user, that’s an entirely different perspective as well. If I’m working on a project, my threshold is going to be different, because I am aware of everything that’s going on in the building of that product.

And so, I know why things are working the way that they are. But if I’m just a user, then I’m just frustrated by whatever’s actually happening. If I find a bug on our websites, or, I was trying to watch Star Trek on CBS All Access yesterday and it was actually not a very good experience. I’m not sure exactly why, but as an end-user it was frustrating. So, that goes back into that mindset that you might have, and what does quality actually mean?

Sometimes it might just be a frustration, but the end-user is probably a big, important aspect of this to think about, and that may again get lost along the way if you don’t have active conversations about it.

Jonathon Wright:

I think that’s a fantastic example. And I’m looking forward to watching the new episodes of Star Trek: Picard. My dad actually looks very similar to Jean-Luc Picard, and he’s got a photo with him, because Patrick Stewart is from Yorkshire, which is where my dad and I were born. So, I’m actually incredibly proud to see them do a reboot really well. But I find the amount of moving parts fascinating when we go for video streaming; I think it’s a fascinating area. Netflix don’t have their own cloud. So, a lot of the infrastructure they use, I think it was AWS in the US, carries a huge percentage of traffic when people are using their streaming services.

And I find that really interesting, because you don’t blame the underlying infrastructure platform for the fact that you’re not able to stream that particular service, but we all know that, underneath, they’re going to be run on some kind of cloud infrastructure. And actually it’s that access to the cloud infrastructure. You know, I did some work when I was in Silicon Valley with Apple, and Apple doesn’t have their own cloud, so they use Microsoft and AWS infrastructure to host things like the iTunes store and the podcast that we’re listening to now. So, it’s fascinating that when you associate the quality of that product with maybe not having access to a particular service, it’s the brand that takes the damage and not the actual underlying technology that we know we have to rely on on a day-to-day basis.

And there are so many moving parts around that. When you’ve finished a project, have you felt at the end of it that it was a really good, really solid product? And sometimes, I guess, you probably feel the other way, that things could have been done better.

Kate Falanga:

In QA, you’re never comfortable. I try to tell folks, don’t ask the QA person if they’re happy with what’s happening, because they’re not. They’re not supposed to be comfortable. That’s not who we are, and that’s okay. That’s our job. Our job is to be uncomfortable. Our job is to not assume everything is working well. Our assumption is always that there is a problem there. We just haven’t found it yet, or we have been working on this project for a year and we know where all the bodies are buried.

So we know where all the flaws are. So, we’re never going to be necessarily comfortable with the quality of the project, because that’s just not who we are. And that’s okay. That constant uncomfortableness makes us good at what we do. But we might be proud of the work that we did, that we worked together really well, that given the circumstances, we’re in a good place. So, there’s still pride in your work, and pride in the people that you’ve worked with, and you can be happy with the product as far as where it is, but there’s always going to be a bit of discomfort, and that’s a good thing.

Jonathon Wright:

Absolutely. And I think that’s what we strive for as humans; we strive for perfection, but we have to make compromises along the way. So, it’d be fascinating to get an idea of what a typical day looks like for you. How do you start? When you finish at the end of the day, what is success?

Kate Falanga:

Well, one thing: I’m going to say “agency” a lot, just because that is my world right now, but I’ve worked in other types of places as well, and I’ve gone back to agency life because of the variety. So there isn’t too much of a typical day. There might be some days where I have a lot of meetings, and maybe I’m having a very big manager day where I’m doing a lot of manager things, or I have to do reviews for someone, or one-on-ones, or whatever. Or I might be having a big project day where I’m being more hands-on with a particular project and I might actually be actively testing, or I might be prepping another project and trying to help define what we do in that project.

If there’s performance testing we have to do, or accessibility testing, what does that mean? What are the requirements? Let’s set expectations to try to figure out what we need to do, what we need to deliver. That’s part of my job too. And then there’s just resourcing: who’s going to do what? So there’s a lot of variety, and there’s a lot of variety in the projects that we work on. Again, we talked about in-store experiences; I just worked on an augmented reality project where I was the only tester on that project.

And then I was working on another project for a financial company that is doing more standard kind of website work, but on a very large scale. So, there isn’t too much of a typical day, and that’s what I like about it. And you also just get to learn a lot of different things, because we’re involved in so many different industries, so many different types of technology. It goes really fast, and sometimes that’s hard, especially from my QA person’s point of view. And again, we have to be a bit more flexible from the QA perspective than you might be in more of a product company, because we are working with clients.

And the biggest difference in an agency is our projects start, they have a middle, and they kind of end, and then we hand them off to people. Not all the time, but that’s the majority of what we do. And that means our approach to QA testing is a little bit different. And so, we do have to be flexible in how we work and what we do, because there are so many different variables in front of us at all times. But I like that.

Jonathon Wright:

It sounds incredibly intense. I guess we’ve both come from a few decades ago, when we started with waterfall and you did things like requirements engineering; you’d have a product that you would deliver over a period of time. There would be dedicated centers of excellence and capabilities that were incredibly mature. And what you’ve just said, it just petrifies me, in the sense of the agileness of what you’re able to do. You’re able to jump in and get involved. It doesn’t matter if it’s augmented reality, it doesn’t matter if it’s an in-store app; you will get in there, you’ll understand what it means. You’ll help set the foundations down. So, understanding things like putting the strategy and the good practice in place, and how you define that, that’s a lot of work. Do you find that your leadership activities from a management point of view are different to, say for instance, how you deal with getting hands-on and maybe getting involved in some exploratory testing, or risk-based testing, or context-driven testing? What kind of toolkits do you have at your disposal?

Kate Falanga:

I always have the idea that tools don’t drive what we do; they should support what we do. So we do have some standard toolsets that we use, just because for money purposes it’s useful to have those services available to you. But for the most part, we’re somewhat agnostic. So, we are a bit more context-driven/agile, with a little bit of waterfall if we have to, occasionally. Again, this speaks to how we work: each project might be different. There might be a project that is a lot more waterfall; there might be a project that’s actually very agile. And you might be working on both of them. We do try to make sure, especially for bigger projects, that people are dedicated to one project at a time, but occasionally someone might be split across one or two.

Myself, just because of who I am, I might be split across more. I’m actually responsible for all the projects that come out of the New York office, even if I may not be actively working on them as a QA person. So, I have to know a little bit about all of them. Which is potentially a lot. So, it really, again, comes back to flexibility, which is hard for a lot of QA folks. We are the people who want to have the process; we want to know what to do. We want to know where our requirements are.

That isn’t always 100% true here. But the good part of that is that you can be part of the conversation, and you can help create the process where quality can occur. And I do think that’s part of what QA’s role can be, in an agency or a product company or wherever you are, probably more so than a lot of organizations realize. QA folks don’t necessarily have to follow this one track. If you give them room, you’re going to have a lot more capability to create different types of processes where it makes sense to do so.

I do get concerned when the default is that you only have two tools in your toolbox. I want to have a million tools in my toolbox, and I want to pick the right one for the right circumstance. And that’s a bit of a struggle sometimes, but it makes it a lot easier to do what you need to do.

Jonathon Wright:

Do you have a particular technique or tool that you enjoy working with? Something in your backpack, something that you like to roll out, say a bug tracking tool or some kind of little plugin for your browser? Any tips for tools that you could recommend to listeners?

Kate Falanga:

Everyone’s a little bit different, and that’s okay. Me personally, I find myself, when I’m testing, using Google spreadsheets a lot, just because it’s easy to share with folks. Everyone can use the same thing, and I’m keeping track of my work. I’m not a big believer in those kinds of step-by-step manual test cases. There is a time and a place for those in some circumstances. But too many organizations use test cases as a default: you can’t possibly test unless you have step-by-step instructions in front of you.

And that just puts blinders on a tester, so they’re only going to do those things and they’re not going to be looking outside of that, and you’re going to miss stuff. And it’s boring. And it kind of defeats the purpose of the skill of testing. So, I prefer to test in the moment, but I’ll keep track of what I’m doing. So, there’s going to be some element of documentation; for me personally, I document as I’m testing. But you don’t necessarily have to. If I have to think about what I’m testing ahead of time, where there are a lot of different permutations, I might use a spreadsheet for that. So, there are lots of different tools available, but sometimes the simple things are the best.
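Kate’s spreadsheet habit maps closely onto what session-based test management calls a session log: a charter, a timestamp, and freeform notes. A minimal sketch in Python; the column names and the example notes are hypothetical, and the CSV layout is just one common convention, not a tool she mentions:

```python
# Minimal exploratory-session log written as CSV, mirroring the kind of
# lightweight spreadsheet tracking described above. All fields and notes
# here are invented examples.
import csv
import io
from datetime import date

# Each row: what you set out to explore (charter), where you were, what you saw.
notes = [
    ("Explore checkout on mobile", "cart", "Total not recalculated after coupon removed"),
    ("Explore checkout on mobile", "payment", "Back button loses entered card details"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["date", "charter", "area", "note"])
for charter, area, note in notes:
    writer.writerow([date.today().isoformat(), charter, area, note])

print(buf.getvalue())
```

The point is not the format: a shared sheet with a few columns like these documents testing as it happens, without pinning the tester to scripted steps.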

Jonathon Wright:

Absolutely. And for things like exploratory testing, I find the tools that I’m using are screen capture and basic functionality where you’re putting in notes; you might make notes in mind maps. You’re just taking down information that you think is going to be useful or actionable. And I think that’s really interesting, because going back to the days when I started testing in the ’90s, I had a really bad first experience. I had this huge book of tests to run against a system, and I was there for six weeks, and I ran every single test; it was for a large communications company. And at the end of it, every single one of the tests passed. I felt like I had imposter syndrome, right?

I literally went to my boss and said, “I can’t find anything.” And he said, “Well, you won’t. Nothing’s failed for the last 30 years. It’s a PBX. It’s a telephone exchange. They’re built to last.” And I just thought to myself, “Ouch, that’s taken some of the creativity away.” And I think that’s what really good testers have got: they’ve got that creativity, they’ve got the curiosity to go out and boldly go where no one’s tested before, I suppose.

Kate Falanga:

Absolutely. I think the most fun question is: how do I test this? And I love that, and I have that in front of my face every day. It’s how do I test this, and given these circumstances, what do I do? And that goes into flexibility a little bit, but you do need to have a certain element of skill in exploratory testing in order to be able to answer those questions. And it’s not easy to do. Everyone can test a little bit, but to be able to do it really well, having that real depth of skill, that’s really where we come into our own as QA.

That is our value. And that is what we’re bringing to our products, to our clients, to our companies. And the problem is a lot of that skill can be very invisible. It makes it very difficult for people to see, because a lot of it’s in our heads. With something like exploratory testing, you can’t really see what the tester is doing, especially since I’m designing it as I’m actually testing. So, it’s important to talk about testing in your company in a way that makes sure the skillset is seen.

Jonathon Wright:

Yeah. And I think it’s so hard, especially for the perception as well. One of the most difficult questions that I always have to answer is: what is the difference between QA and testing? Because sometimes organizations think that they’re essentially the same thing. Do you have a definition of how you split QA and testing, or do they harmonize together? What’s your viewpoint on the difference, really?

Kate Falanga:

I mean, the word QA kind of gives me hives sometimes, because quality assurance is not really a thing. But you hear it quite often. There is a difference between quality assurance and testing. Testing is an activity. So you’re not QAing something; you’re testing it. Titles are a tough one, and I think there’s a lot of conversation about what we should call ourselves, whether QA even makes sense. How can we as individuals actually assure quality, when it’s a group activity? And I believe that. Just because your title is QA does not necessarily mean you’re actually assuring quality; that involves so many different factors, so many different people. And that’s true.

But whenever I’ve had the opportunity to choose what our title should be, I’ve always used QA, mostly for practical purposes, and in agencies our clients expect it, so they have an understanding of who we are and what we do. So, it’s kind of up to the actual QA teams themselves to define what that means. But there’s a bit of a distinction between quality assurance and testing, because quality, as we discussed before, isn’t something you can necessarily assure as an individual. But you can help create an environment in which quality can occur through the different types of skillset that you have, testing being one of them, but also creating process, communication, collaboration. All of that comes into play as well, because it can’t be up to the individual.

Jonathon Wright:

Actually, that’s a great example that you’ve given there. And I think everyone kind of says we’re all responsible for quality, but then doesn’t have that structure and guidance, which you’re talking about there, about how to encourage quality within the organization. And then there’s the description as well of what we do. Are we testers, are we QA, are we developers in test, manual testers, UAT? There are so many kinds of maybe mislabeled roles. And I recently started moving to more persona-based roles within the organization. There’s a good friend of mine, Dr. Emma Langdon; she’s a change magician.

She used to say the title was whatever you wanted to be when you were a child. In her case, she wanted to be a magician, and what she did for a living was change. So, I started calling myself a digital therapist. I sit down and listen to people. I say, how did this happen? Bit of data archeology, trying to work out, dig up where the bones are buried. So, what did you want to be when you were a kid?

Kate Falanga:

I wanted to be a lot of different things. I wanted to be a zookeeper for a while, I think until I really realized what a zookeeper did, which is, “Yeah, you’re hanging out with animals, but there’s actually quite a lot of physical labor involved,” to put it nicely, in taking care of animals. There’s the not-so-nice part of that. I’m not quite sure how I fell out of that world, but then I went into television production, and then eventually software development, and software QA.

So people change over time. I’ve had a few different careers, and that might be the case for lots of different folks. One thing I like about QA specifically, or testing or whatever you want to call it, is that because we have so much overlap with so many different disciplines, you can kind of touch all of them and still be really strong in one particular area. Be really strong at testing, but there are elements of project management, there are elements of client services and development. There’s obviously quite a lot of engineering involved.

There’s user experience, there’s graphic design. All of that comes into play. And you get to be a professional user. So it’s like, “Hey, if you love technology, you like playing with stuff, I get to do that all day. It’s my job.” That’s what I like about it: it isn’t one thing. It’s many things. And that balancing act is really fun.

Jonathon Wright:

So I know we were going to focus a little bit on the evolution of the role of QA in modern software development. So Agile practices and this kind of bimodal, trimodal, fail fast, learn rapidly, do quick experiments. We’ve got all sorts of new practices coming in, from Chaos Engineering to Site Reliability Engineering and different disciplines. How do you see the role of QA evolving in the modern day, and are there any tips you’d give to listeners on the direction they need to be thinking about going forward?

Kate Falanga:

Well, actually, I haven’t seen as much evolution as I’d like in the QA space. There’s a lot in software development in general; you just listed many. My concern is that when those companies are doing those experiments, that’s mostly with development or product, and QA still has the old expectations. Again, I harp on this a lot because it’s important. But the expectation of those manual test cases being the only way to test, or of having to automate everything: those seem to be the only two options in people’s heads. And people don’t realize there’s gray area everywhere, that you should do what makes sense given whatever circumstances you’re in, and make sure you have that flexibility and trust in the QA folks.

It kind of comes down to trust a little bit. “We need to see what you’re doing. We need to see progress.” Which goes back to exploratory testing being in people’s heads: you can’t see it. So how do you know that they’ve done it, and done it well? There are no KPIs attached to that. When you’re focused on that, all that pressure of quality KPIs and metrics is mostly on the QA folks, and not necessarily on the rest of the product team. And so they don’t get to evolve, because they’re just not trusted. They aren’t thought of as skilled, or they’re not thought of as part of this evolving process.

So it concerns me that so many companies are changing their software development practices, but not changing QA in any significant way. When you think about automation, that’s pretty much developers doing the work, as opposed to having that really strong user-focused thought process. And really, it’s not that I’m against automation; it’s very useful. It frees up manual testers to really do that more exploratory testing. So there’s absolutely an important place for automation, but folks don’t balance it or think it through. It’s kind of all or nothing, when there needs to be a lot of gray, and you need to empower the QA folks in your environment to be part of the team wholly.

And that means maybe you don’t need to write test cases here. Maybe you want to just sit with a developer and talk them through it. Or you want to just do exploratory testing really fast. Larger organizations want to have a lot of structure where everyone follows this process, but that process may or may not make sense given the circumstances that they’re in, and folks aren’t empowered to shift when they need to shift. And that’s really what this is about. If you want to fail fast, if you want to get fast feedback, take the chains off the QA folks. Let them go, let them test.

If they need to write test cases, let them write test cases. If they want to do exploratory testing, let them do that. Give them more tools in their toolbox to use. Empower them to make that choice, given the circumstances that they’re in, and they’re going to give you fast, business-focused feedback.

Jonathon Wright:

I think that’s great advice. And I think you’re very right as well in the automation space. A good friend of mine, Dorothy Graham, is seen as kind of the grandmother of test automation, starting back in the ’60s with Bell Labs in the U.S., putting some of the foundation work in. And I remember she had this quote around how, once you’ve created an automation script, the likelihood of finding another bug or issue going through the same happy path is very low. I actually heard a developer call it a sad path the other day, and I was mortified.

But part of it is they go through the same route, and they’re not really adding any value. And I literally sat down with this developer who said “sad path,” and he said, “Yeah, so we write all the tests. We do all the component integration. We strip everything out. We use stubs and shims, and we literally do end-to-end testing. So, therefore, we don’t need any testers.” And I was just horrified about even engaging in that conversation any further.

But part of that was their adoption of, say for instance, DevOps. And I think that’s really interesting, because we talked about Agile; you have to deal with projects that are maybe mini waterfall, and some that are Agile. And I guess some projects will be moving into this kind of DevOps landscape. Do you see a change in our part to play within the DevOps lifecycle as well?

Kate Falanga:

Well, sure. I think it’s the same situation as far as working with development. We know how to test things. We can help them write their scripts better, or write them ourselves; QA engineers can actually write code as well, and be part of that process. I have found that developers who write automation and people who came up through manual testing approach automation very differently. And the people who come up through manual testing are better at writing test scripts, because they’re thinking through tests a lot differently.

There are a lot of folks talking about the difference between testing and checking. I don’t like to correct people if they get it wrong, but I think understanding that difference is important. Checking is algorithmic; that’s what automation is doing. Is this button five pixels? Yes or no? That’s it. But a tester is going to go: what happens when I do this? And it’s an open-ended question. You can’t automate that. And that’s where you miss things. That is where your users are going to have a bad experience. You can pass all your automation checks, and your users are still going to have a bad experience on your application, because no one’s going in and checking and looking and touching things.

Like, on a mobile application, you can say, “Hey, the button click works.” Yes or no? But does that button feel right? Is it too big or too small on the screen? Does it work consistently in different types of browsers? How does it feel? That’s important, and it gets missed when you think you can automate everything.
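The testing-versus-checking split Kate describes can be made concrete in a few lines. A check is a closed question a script can answer; everything else stays with the human. This sketch is purely illustrative: the Button type and its thresholds are invented, and no particular UI framework is implied:

```python
from dataclasses import dataclass

# HYPOTHETICAL stand-in for a real UI element; no specific framework implied.
@dataclass
class Button:
    width_px: int
    height_px: int
    visible: bool

def check_submit_button(button: Button) -> bool:
    """A check: algorithmic, closed-ended, repeatable.
    'Is this button visible and at least 44x44 pixels? Yes or no?'
    That is all it can say."""
    return button.visible and button.width_px >= 44 and button.height_px >= 44

# A test, by contrast, asks open-ended questions no script answers:
# does the button *feel* right, is it awkward to reach one-handed,
# does the flow make sense? Those stay with the human tester.
print(check_submit_button(Button(width_px=120, height_px=48, visible=True)))
```

A suite of such checks can pass completely while the experience is still bad, which is exactly the gap Kate is pointing at.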

Jonathon Wright:

Yeah, that’s a great example there. And I think that’s us extending our role into maybe even a bit of UX design. Part of understanding, I guess they call it the Lean UX view of the world: a rapid prototype. Yes, it’s got a button, but is it in the right place? Should it logically be part of that flow? And there are a lot of tools coming down the line at the moment which are trying to optimize that UX, or even the digital experience, and say, “Well, actually, at this point you should be able to change your address. You shouldn’t have to go through the menu and go back, because people will drop off.”

We’re starting to look at the operational tools to say, “Well, why is everyone leaving when they get to the checkout page when they’re using an Android device, on version six, with this particular screen resolution, on these devices, on this network?” It gets down to that granular level, where we need to be connected to operations to actually think about quality for everyone, not just the people who are lucky enough to own an iPhone or have a standardized device or browser. We need to deal with lots of different form factors, smart devices, Alexa, your Google Home. There’s just so much variety of devices now. It must be virtually impossible to provide quality across the entire board for that.
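The granular operations question Jonathon poses, who is abandoning checkout and on what device, is at heart a simple aggregation over event data. A toy sketch with made-up events; every field name and value here is invented, and in practice this data would come from a real analytics pipeline:

```python
from collections import Counter

# Made-up analytics events; a real pipeline would stream these from
# an analytics or APM service.
events = [
    {"page": "checkout", "action": "abandon", "segment": "android-6/360x640/3g"},
    {"page": "checkout", "action": "abandon", "segment": "android-6/360x640/3g"},
    {"page": "checkout", "action": "purchase", "segment": "iphone-11/414x896/wifi"},
    {"page": "home", "action": "view", "segment": "android-6/360x640/3g"},
]

# Count abandonments at checkout, broken down by device/resolution/network.
abandons = Counter(
    e["segment"]
    for e in events
    if e["page"] == "checkout" and e["action"] == "abandon"
)

print(abandons.most_common(1))  # worst-affected segment first
```

Slicing drop-off by device, resolution, and network segment like this is what connects operations data back to the quality conversation.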

Kate Falanga:

No, you have to make smart choices, because it’s just too much otherwise. But going back to some of what you’re talking about with UX and design, absolutely. I think we partner. I mean, these are people, not necessarily tools, that we partner with, and we will have that discussion before development even starts. This is what I talked about before, about QA touching lots of different types of roles. And it’s not just from a theoretical standpoint; it’s literal. We can be in design reviews and give feedback. I’ve been in a design review where another QA tester went, “Hey, you know what? I know what you’re going for here, but I’ve used a module like that, and it never works well on iPad in portrait mode. You may want to rethink that module.” Development hasn’t started yet; it’s just a design. But we’re able to give feedback that quickly and prevent a bug from ever happening at all.

So it’s about having QA folks involved even earlier and at a more abstract level. The buzzword there is shift-left, but it’s true too. We’re professional users. Let us come and be part of that feedback loop, potentially sooner than you think. It frustrates me sometimes when I’m doing resourcing, or talking with people, and they say, “Hey, we’re not ready for you guys yet. There’s nothing to test.” Are you sure? There’s probably something to have a conversation about. You don’t need code to be written to talk about quality.

Jonathon Wright:

Absolutely. And so that’s that kind of DesignOps approach, right? With Lean UX, you get a prototype or even just a wireframe, and you’ve got some quick feedback. You can even think about starting to derive tests there, from model-based testing, which is my background. You can start thinking about applying context to it. So if you’ve got domain expertise, if you understand the derivatives industry, or healthcare, you can think about, “Well, what are the different codes I might need to be putting in? What would I see as negative testing? What is important to me?” And you can understand various different journeys before the actual application even exists. I think that’s a fascinating area.

But pulling back to the DevOps approach, one of the things I have as a bit of a pet peeve at the moment is around Ops/Dev. It’s around getting operational people into those teams as well. Whenever I speak to Dan North, he always says to me, “The idea with DevOps was blurring the lines. It was getting the operational staff involved from day one, and vice versa.” Right? And using those operational APM tools, using Google Analytics, and all those great tools they’ve got on the right-hand side, to understand what the end state should be.

And I think, when you were talking about your agency work, there is an operational state where you need to be able to hand it over to the customer so they can run with it. But how soon are they actually involved? They may give you some requirements for what they would like, but are you understanding what their environment will look like? And maybe you’re even putting some of those Chaos Engineering questions in place: well, what happens if the service goes down? What is that experience? What do you get with CBS if something goes down?

It’s all those kinds of scenarios where I think collaboration across lots of different work streams, not just dev, and bringing the business in, is maybe something we need to keep evolving going forward. What’s your view on that?

Kate Falanga:

I mean, this is just part of what we do. That’s why they come to us: not just to do the thing, but also to talk through the best way to do the thing, and how they can best incorporate it into their environment. Whether we help choose tools or help choose environments to put in, we work really closely with their tech teams, their marketing teams, their business side. We’re working with all those types of folks. Most of the time the people who are hiring us are the marketing folks, but we still have to work really closely with their internal tech teams, because yes, we have to hand this over.

And as far as client engagement, that depends on the client. Some of them are very involved on a day-to-day basis: they’re on calls with us, and we might even be a blended team, working with their developers and their QA. Or it’s Waterfall, and six months later we give them a thing. We don’t prefer that, because fast feedback is helpful. I do get concerned when we’re too focused on, as you say, all those different buzzwords, like Lean UX and DevOps and things like that. When you’re focused on, “Oh, we’re going to do it this way, in this process, and this process says we have to do these six things and that’s it.”

Rather than doing what really makes sense, and picking and choosing from those different types of environments: “Hey, we’re going to do things a little more UX over here, and a little more Agile-ish over here, but this we’re going to do Waterfall, because it makes sense.” It’s good to understand those different techniques, operational styles, and software development strategies, and the positives and negatives associated with them. But I get concerned when too many companies are just trying to do it by the book, when that book doesn’t necessarily make sense for the circumstances at the time. It goes back to the importance of flexibility, and really making sure we have a lot of tools in that toolbox.

Jonathon Wright:

I absolutely love that. I love the idea of being able to switch between those different techniques and approaches, because it isn’t one-size-fits-all, and that’s exactly what you’re doing. You’re like a boutique QA resource that understands the client and tailors it like a fine suit for exactly what they need. Instead of saying, “Okay, we’re going to follow this manifesto, we’re going to do these particular software development lifecycle activities, and we’re going to release at this cadence of two weeks or something.”

I think it’s great to break down some of those forced viewpoints of what needs to be done, and really think about what should be done. And I think throughout this podcast you’ve really helped define that for QA engineers, leaders, and anybody in the QA profession, so they can understand some of the challenges we have in the industry. Do you have any tips for listeners? Maybe some resources you like listening to, or reading, or books, or courses, or anything that might help them get started and really explore the QA landscape?

Kate Falanga:

For me, I didn’t really know a lot of that existed until I went to my first conference, and that was very helpful. I had this realization: “Oh, there’s a whole world out there, and people have ideas, and some of these people have solved the problems I thought only I had. Or they haven’t solved theirs yet, and maybe I’ve solved their problem.” It was very helpful just to be part of the community. It can get difficult sometimes, because whenever you have a community, there’s always going to be some “Oh, my way is the better way.” But you can ignore some of that and still get a whole lot of ideas.

There are so many smart, awesome people out there. I’m not going to name individuals, because I’ll forget someone and be sad about it. But just go out there. There’s Twitter; look for different kinds of community, look for different QA folks. If you find a QA person on Twitter who is saying a lot of QA things, figure out who else they’re following, and follow those people too. That’s a good way of doing it. Or just go to a conference.

There are lots of different testing conferences. It almost doesn’t matter which one, although some are better than others. If you’re getting started, get started with whatever’s easiest to go to, and just start talking with people, one-on-one, about what you’re doing. I think people would be surprised by how much is out there. That’s what was inspirational for me, and it’s how I was able, at one point in time, to shift a whole company from Waterfall to a more Agile way of working.

And I had no idea how to do that, because I came from that old-school world: “Oh no, we have to write test cases.” I didn’t even know that not writing test cases was a possibility until I went to a conference and talked to so many diverse folks. People coming from different types of organizations, doing different types of things, with different ideas. That was just amazing to me. So if you’re shy, and some people are, that’s okay. You can listen; you don’t necessarily have to talk. Or, if you’re really into talking, you can meet lots of really awesome people who do the same thing you do, all across the world.

It’s an amazing, diverse community, and it’s also very supportive a lot of the time. You can learn so much, and you can help teach, because we’re looking for new voices. I think that’s part of the problem too: if you’re too focused on the company you’re working at in that particular moment, if that’s your world, it’s a very small world. You won’t realize that there are different ideas out there that can help you and your organization, and that maybe you’re doing something that can help the testing world. We need your voice too.

Jonathon Wright:

Fantastic advice. Giving back is really important, and finding a mentor as well. Do you find that people can reach out on Twitter to talk, to engage, and to follow the content that others publish? Is that how you reached out and found those answers, or was it meet-ups? What advice would you give to people who are completely new to this?

Kate Falanga:

Well, to start, I got sent to a conference for my job. Then I got some names, followed the people who were speaking at the conference, and went from there. That’s a place to start. But also blog posts. I mean, just google stuff. Start googling topics, start googling people. If you have a particular problem you’re trying to solve, don’t just read the blog post for the tactical solution. Figure out who wrote it, figure out what they’re thinking, figure out who they’re following, and follow that path. Especially if you find yourself reading the same types of folks over and over. There are different schools of thought in QA and testing, and that’s not necessarily a bad thing.

But realize that there are different schools of thought on QA and testing, and you might be in one particular school, but it’s important to know the others. There’s the factory school; Agile is very different than context-driven. Different types of people think in different ways, and it’s not a bad thing to learn all those schools. Because, again, you want to pick and choose; you want to have a lot of tools in your toolbox.

So also just be careful to make sure you’re not in too much of an echo chamber. If you find some people, figure out who they’re disagreeing with, and read that too. Look through it all and try to figure out what makes the most sense for you, and just explore. Explore the testing world as much as you’d explore a product.

Jonathon Wright:

Yeah. I must admit, this week I was doing a bit of work for the British Computer Society with a good friend, [inaudible 00:44:24] Reed. Now, he was part of the new ISO 29119 standard, and that is a difficult read, right? You might be thinking, “Oh yeah, well, I love classification trees,” or, “I use these types of techniques.” But it’s dry; it’s really hard for people to understand. And they buy it, or go and do an ISTQB course, or they might try to read the ISO standard and not fall asleep. It’s really difficult.

And I think that advice of yours is spot on: if you’re reading a blog post, and all these different techniques have value, really commit to it. Look at what else they’ve written, add them on LinkedIn, or reach out to them and ask, “Who would you recommend?” Because most of those people in the industry will get back to you and really help you out. So never fear asking for help; I think that’s a great idea. And follow the people who align with the thoughts that you’re working through.

Because, I guess, over time, as you mentioned, you started off with test cases, and between then and where you are today, you’ve gone through so many different processes and approaches that you’ve got a wealth of knowledge behind you. Whereas if you were to start your career over today, you’d probably make lots of different decisions along the way. And I think that’s really important, because there is no wrong answer when it comes to QA, I suppose. So do you have any final advice, any key takeaways that you’d recommend for new listeners to go off and challenge themselves with?

Kate Falanga:

I think it’s this: if you’re not happy with how QA is at your current company, take it upon yourself to help change that. And don’t assume that if you’re not in a leadership position you can’t do that. Individuals absolutely can make a difference. I have another talk somewhere called Testing Is Your Brand. Sell It! You can google it; it’s around somewhere. In it, I talk through different ways of advocating for yourself. That’s very important. And if you’re in a leadership position, that makes it a little bit easier, but sometimes harder too.

And if you’re in a leadership position already, look outside your current company at what else is out there, and bring in what makes sense. QA needs to shift from being told what to do to doing the telling. That can be a hard shift, but it’s something I think we need to do.

Jonathon Wright:

Awesome. That was a fantastic session, Kate. And to everyone who’s been listening, don’t forget to subscribe. It was an absolute pleasure to have you on.

Kate Falanga:

Thank you so much.

Join forces with other innovators in the quality engineering world.
Get on the waitlist to be part of the community that’s forging the future of quality engineering and leadership in tech.