Related articles and podcasts:
- Introduction To The QA Lead (With Ben Aston & Jonathon Wright)
- Communicating Quality (With Conor Fitzgerald)
- Your Data Quality Sucks (With Huw Price)
- A QA’s Guide to Bulletproof Quality Planning
- Best Load Testing Tools For Performance QA
- 10 QA Automation Tools You Should Be Using
- Everything You Need To Know About Performance Testing
- What is Quality Assurance? The Essential Guide to QA
- What Is A QA Analyst, And What Do They Really Do?
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
In the digital reality, evolution over revolution prevails. The QA approaches and techniques that worked yesterday will fail you tomorrow. So free your mind. The automation cyborg has been sent back in time. TED speaker Jonathon Wright's mission is to help you save the future from bad software.
Jonathon Wright Hey, welcome to this episode. Today, I'm going to be joined by Theresa Neate all the way from Melbourne, Australia.
But before we start, this podcast is brought to you by Eggplant. Eggplant helps businesses test, monitor, and analyze their end-to-end customer experiences and continuously improve their business outcomes. So I'm going to hand over to Theresa to give us a bit of an introduction about yourself.
Theresa Neate I am a QA practice lead, or a test practice lead, or a QA lead. It's quite fluid, so I'm not bound to the title. But what I do in my day job is look after a group of about 160 or 170 product and tech people who produce a product called realestate.com.au. And as the QA practice lead, I look after the quality and the dedicated quality analysts, or QAs. My day job involves meeting with people and ensuring that quality is built in as opposed to just being tested in, because of course, as we all know, you can't test in quality.
And in my spare time I also have a very strong interest in systems and infrastructure, and I have spent a bit of time doing things in "DevOps land." That includes DevOps Girls, an initiative I co-organize with three other people in Melbourne.
Jonathon Wright Sounds fascinating. It's amazing to have you on the show, and it's amazing to see what you're doing within the community around inclusion. And I guess I'd like to start with what got you into STEM, and this idea of doing DevOps Girls: how does that differ from a STEM kind of initiative?
Theresa Neate DevOps Girls is something that I helped co-create because I felt I was excluded from the conversation of DevOps, from the entry criteria to being allowed at the table of, and I say quote-unquote, DevOps, because I want to be very careful that we don't make it a cult. It's more about the inclusion of developers and operations people in the conversation of continuous delivery, post-deployment conversations, and so forth. I felt I was excluded from those conversations, and I created DevOps Girls with my colleague John, who had also noticed this: he couldn't hire any women into his engineering roles. We discovered that women felt that unless they were perfect, they wouldn't apply, whereas men had more gumption and applied for roles they weren't perfect for. When John and I noticed that, one, I wasn't included, and two, he couldn't hire any women, we created DevOps Girls with our other colleague Havi as a free event and a complete volunteer activity. In doing that, I also brought up my own skills, so I have been very, very busy getting my quote-unquote DevOps skills upskilled in the last three years.
Jonathon Wright Excellent. And what kind of age groups are you looking at? What size of groups are you targeting at the moment, and what kind of attendance do you get?
Theresa Neate Interesting question, because "girls" indicates to some people an age group of under 18, or maybe under twenty-one. The group that we were targeting were people who had either just finished a university degree, people who were later in their careers, or career changers. And we asked them if they wanted to be called DevOps Women or DevOps Girls. And they chose Girls, because they felt that it didn't cut them off; they didn't get classified as aging and maybe out-of-date women. So we chose that.
Jonathon Wright Fantastic title. And I love the fact that you're opening the conversation up, because inclusion is a big thing for me. I've just taken a role at the British Computer Society heading up inclusion. Part of where I found real challenges was getting people into the industry through things like apprenticeships. Obviously you're based in Australia. Do you see that as a trend there as well, in how people get started with DevOps?
Theresa Neate Yeah, very much so. Getting started is probably the hardest bit. It gets easier once you have become accustomed to some of the language, some of the culture, some of the conversations, to then find your own way. And as your network grows, you also become more confident to continue reaching out and meeting other people. But the entry level is definitely where we find people struggle. And so we, for the most part, focus on entry-level and maybe sometimes some mid-level courses.
Jonathon Wright And what does that include as far as content?
Theresa Neate It's extremely fast, actually. So because we at the time when the three of us co-founded this activity, we were very invested in AWS Amazon Web Services at our employer. So we use that as a springboard to create content in the AWS ecosystem. So anything to do with the AWS services. But we were very clear that AWS does not necessarily equal DevOps. It just facilitates DevOps.
We began with how to spin up your own server, through to how to deploy a WordPress instance to it, and how to automate some of that work. And then I took it further, as the testing and QA person, into how you can test in production and how you can monitor what you have deployed. And we've kind of left it at that. We've done a little bit of Docker as well, because it's a buzzword and people need to know about it if they want to be able to have conversations with colleagues and friends.
And who knows, we might find more technologies along the way. Another thing we did was create a cloud networking workshop, which our fourth member Franka created, networking being something I'm very fond of. We spent the whole day teaching people what cloud networking looks like in the AWS ecosystem.
Jonathon Wright Fascinating. I literally just got back yesterday from Bordeaux, where I was with the AWS teams, and it was really interesting because I didn't realize that AWS had 32 percent of the market. And actually, because, as we all know, they started six years ahead of most of the cloud vendors, they've got so much knowledge in this area. And I find it fascinating that you're creating content there that helps with the ecosystem of AWS, but also reaches a wider group as well.
What we'll have to make sure we do is add those links so that people can find out more about the resources you've mentioned, because I think people are wanting to go on that journey, and probably get past some of those buzzwords and really understand: how do you start? Do you have any recommendations for people who are wanting to start out, and where to start looking?
Theresa Neate Yeah, well, we have published all our material on GitHub and it's free for everyone to look at. So if you go to GitHub and the DevOps Girls repo, you will find all our workshops, and we made it purposely very, very easy to read, so that if you are further away and not in the room, you are able to read and execute the instructions. That's a beginning point. But I will say, in addition to that, find yourself a coach, which is what I did. I found myself a coach, a colleague who was willing to spend an hour a week with me, and with those two combined, I got up to speed.
Jonathon Wright Fascinating, and I think that's a really good tip, finding a mentor, somebody that can help you on that journey. And I noticed from one of your roles, you had the title of developer advocate. Did you pair with somebody who was from a development background, or was that more of a focus on how we interact better with development? How did that start?
Theresa Neate Yeah, the developer advocate role was an actual dedicated role that I held for a year at my employer. We are a fairly progressive company. We like to be on the front foot about technology, and we also build our own internal tools, or we tweak some of the vendor tools to our needs. The developer advocate role is one that was created to support our internal tools and what we call our platform. Now, personally, as you've clearly noticed by now, I have a great interest in infrastructure, cloud infrastructure, and networking, and this role was based in that team, so I was able to support the tools that the company built.
We have about five hundred to eight hundred technologists in Melbourne as well as in China and across the world, and I was supporting those tools so that those people could use them. I didn't need to be a developer, but I needed to have empathy for developers, to be able to speak technology, and also to be able to take their feedback and make the products better, and bring the product to the developers' side. It was a two-way street and a two-way conversation.
Jonathon Wright Excellent. I think infrastructure is something of a lost art. I was chatting with some of the AWS guys because other vendors that provide cloud services, like Google Cloud and Azure, will recommend a particular setup, but you don't know if that's optimal for what you want to do. And there was a team of guys from Alchemist who do tuning for cloud infrastructure, and we sat there going through it, saying, well, what are the advantages? If you wanted to optimize what you're doing, there's a certain amount of IOPS that you're trying to do, you've got a certain type of kit, certain locations, you've got some networking components. It's so complex. And unless you've gone off and done the Cisco network engineering courses, of which there are like 15, how can you possibly understand that landscape? At the same time, you're taking some of that away by handing it over and saying, we're going to use your infrastructure as code, or we're going to use your platform. Maybe people don't get down to that level of, well, actually, the schema registry is given this amount of cores and this amount of memory in this particular Docker container, and maybe that's not the right thing for our production versus our pre-prod versus our performance environment. How have you personally found upskilling and learning about all that lower-level networking, and what kind of training have you gone through?
Theresa Neate Well, I was very fortunate that my employer provided me with a flexible schedule to do a two-year networking diploma, which I just finished in December last year. I was so keen on the topic that my manager could not hold me back. I was literally at the door, well, not that we have doors, I was at his desk every other month with ideas of things I could be learning, until I found this particular course, which also involves quite a bit of Cisco networking. And I made a deal with him about my working hours and what I wanted to do. He at first didn't like the idea, so I went to our HR representative and got some advice on how to convince my manager, and paid the HR representative with coffees, which is something we do in Melbourne. It's our currency. And then I ended up getting the approval, and I started the course in January two years ago, and it included two Cisco modules as well.
Jonathon Wright That's some serious dedication. And I noticed from reading your last blog, you talk about site reliability, engineering. Do you see the Asari being a kind of an extension of what you're doing within DevOps skills and also of the work that you're doing within your control?
Theresa Neate Yes. SRE is complementary to DevOps. It doesn't replace DevOps. It helps make, as the name implies, the site reliable, with the engineering associated with that. And the site in most cases means the website or the web frontend. So anything with a web, possibly a mobile, but mostly a web frontend is what we include in site reliability. And I am not the expert on this; I've only watched videos, as everyone else has. But when you think about the infrastructure and the systems behind the web, that's where I see a more DevOps-y conversation occurring, and keeping the site performing and available, and the engineering around that, is where I see SRE coming in. And in both of these topics, definitely, QA, which I will say again is quality analysis, not quality assurance, plays a part. QA definitely plays a part in both SRE and DevOps.
Jonathon Wright And I'd love to kind of. I have a great example in one of your blogs around work slicing and this idea of looking at the thin vertical slices instead of kind of maybe looking at the individual components in isolation and you know, do you find that when you're doing your quality, the deep, do you think that's the way you do an inspection is across those less or how would you want to you know, how would you recommend to somebody to look at those vertical slices?
Theresa Neate Yeah, I'm a big fan of vertical slicing, not just in testing, but also in designing and implementing we. What I encourage people to do is apply at a depth of lens, depth of view. And we definitely want to see the full sac or the full vertical slice sometimes. So you want to spend a little bit more depth at a little bit more detail on one particular layer. That's okay. But I discourage that we make that a complete project. There are a complete phase of a project more. So we zoom in and zoom out as we need to into the detail. But we always consider the full vertical slice, not just in the software aspect, but also in the infrastructure aspect. That way we're always building incrementally onto something. And the big bugbear that many testers will share is system integration. Testing is a complete pain. So we remove a lot of that pain of integrating and causing mayhem, which I will say also is why continuous integration works, is because we are continually integrating not one big merge at the end where everything just goes kaput.
Jonathon Wright So, building on that, I'd love to take the continuous integration idea through to continuous delivery or deployment, within that quote-unquote DevOps landscape of releasing more frequently. And also looking at things end to end, which seems to be something people are focussing on. Can you do a particular end-to-end transaction at maybe just the UI layer? Obviously, we've seen the advantages and disadvantages of focussing on that kind of pyramid. What balance do you see now, as we move into a low-code, as-code kind of landscape, for testers and also for full automation purposes?
Theresa Neate I'm not entirely sure that I understood that last bit of your question. Do you mind clarifying that for me?
Jonathon Wright Sure. So, for instance, in an as-code kind of view, you'd look at infrastructure as code, and you'd look at maybe something like testing as code. So, for example, we were working with a team that works with AWS, called Keptn, which is doing the Unbreakable Pipeline. What they're trying to do is build up the YAML scripts so that they include things like the infrastructure, your definition of tests and what that means. So in that case, they would have performance tests, they would have integration tests, which could be API testing, but they'd also have, you talked about visual inspection, things like UI tests, so visual testing tools that would inspect cross-browser testing. But actually there's lots of different testing at different phases throughout the release pipeline. So you deploy something into maybe a dev environment, you run some component tests, and you look at your static code analysis, and I noticed you talked a little bit about coverage in there. And then, if that's okay, you progress to the next environment, where you get maybe less API-focussed component tests and you move into more UI-based tests and maybe a bit of security testing, and maybe even then moving into another environment where you're doing a bit more scale of performance, or more cross-browser testing with more UI focus. Do you feel that the as-code, no-code/low-code movement is helpful for this level of quality inspection? Or is this not something that's possible with just the checking and automation side of things, something that needs to be combined with the human touch?
Theresa Neate Yeah, right. So the infrastructure-as-code movement, and everything to do with code, has made life easier for us, but I remain a sceptic, which is why I'm a QA. QAs are naturally curious and sceptical. So we will continue to want to see the code, even though the code produces wonderful outcomes. We may want to see the code, but then again, it is a question of zoom. So we zoom in, and we zoom out. And if you have a suspicion, which QAs are exceptionally good at, when you have that old twitching nose and you think something's going on, you start inspecting. You may, in old-school testing terms, look at the white box, not the black box. You may want to inspect the white box and see what's under the hood, and see how the infrastructure is created or designed, or how the configuration files are created. You don't necessarily need to do that if it's a low-risk project. So we always apply context: you determine whether the context is risky, both technically and business-wise, and then you adjust your approach to that. So I would say that, yes, we're interested in code, we may want to look at the code, but we may not necessarily need to or have to.
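The staged, gated pipeline Jonathon describes, where each environment runs its own set of checks before anything is promoted further, can be sketched in a few lines. This is a minimal illustration, not how Keptn or any specific tool actually works; the environment and check names are invented for the example:

```python
# Illustrative "pipeline as code": each environment gates promotion on its
# own quality checks, so a failing check stops the release before it
# reaches the next stage (the "unbreakable pipeline" idea).
PIPELINE = [
    {"env": "dev",     "checks": ["static_analysis", "component_tests"]},
    {"env": "staging", "checks": ["api_tests", "ui_tests", "security_tests"]},
    {"env": "preprod", "checks": ["performance_tests", "cross_browser_tests"]},
]

def run_pipeline(pipeline, run_check):
    """Run each stage's checks in order and return the environments that
    passed; promotion stops at the first failing check."""
    promoted = []
    for stage in pipeline:
        if all(run_check(stage["env"], check) for check in stage["checks"]):
            promoted.append(stage["env"])
        else:
            break  # never promote past a failure
    return promoted

# Example: pretend the security tests fail in staging.
results = run_pipeline(PIPELINE, lambda env, check: check != "security_tests")
print(results)  # ['dev'] -> staging failed, so preprod never ran
```

The point of the shape is that the test definitions live in data (in real tools, YAML) alongside the infrastructure definition, rather than in ad hoc scripts.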
Jonathon Wright Sure. And I think the YAML kind of low-code readability is this idea of where people like Dan North were trying to get to with the BDD movement, with executable specifications, or Gojko Adzic with Specification by Example. It was this idea that actually we could read this; it's human-readable. It's not just code in PowerShell or something that doesn't make any sense unless you understand the logic and the coding language it was written in. It's actually something that we can understand by reading it, and I think that is obviously really helpful to have in a pipeline. But also, I really like what you say about how testing does not assure quality; it just shines a light on the current state of quality. And obviously, within a CI/CD pipeline, you're iterating fast, you're providing ultrafast feedback with that inspection. Do you find that actually that isn't the time to be looking at it, and it's actually when those early conversations start, the kick-off side of things? Do you find that's a more important phase than when the code's already been delivered?
Theresa Neate Hundred percent. Hundred percent. I'm convinced that if you are really, really stretched for testing personnel and you only have one, the best place to employ this person is at the front, at the beginning, where they can look into testability, they can look at the design, they can look at the static analysis, they can find out what is going on. I mention in my article that I really support the idea of a kick-off, and with a kick-off we find a lot more bugs than people tend to think, until they've done a kick-off. People tend to think that, oh well, yes, we can handle those things ourselves, and what we need the tester for is plonking them on at the end where they can just verify that it works. That is so wasteful. That is so late. You definitely want to do some form of testing, but you don't want to do that with a specialist who could be doing so much more at the beginning of the project. So if you had to use them in only one place, I would say at the beginning, at the kick-off, at the analysis, and then they can possibly figure out who is going to be doing the testing later on, and how they might be doing it as well.
Jonathon Wright Absolutely. I remember doing a lot of speaking around DevOps about five years ago, and I've got to go through some of your slideshares as well. Part of it is that there was this big kind of confusion in the industry: was DevOps a toolchain, or was DevOps actually more of a culture? I saw it change from place to place that I worked, and from country to country. I remember doing a lot of work with a manufacturer, and it was just fascinating to see how different it was in Europe than it was in Australia: how they did things, even the underlying stack that they had, whether they were actually practicing some of those behaviours, like the trust culture, or whether it was more focussed on technology, which is the easy way to go. And I think things have matured slightly, five years on. But one of my pet peeves back in the day was the lack of operational staff in those kick-off meetings, because if that was the definition of done, or the declaration of operational readiness, well, shouldn't they be involved in the operational end state, and be asking those questions early, before architectural decisions have been made that they can't maintain in the operational state? Do you see that same trend of a lack of operational staff involved in the definition phase?
Theresa Neate Oh, my goodness, yes. You have hit the nail on the head there. I'm such a big fan of having a systems engineer or operations staff member next to me while we are kicking off a piece of work, because they will ultimately be able to tell us whether it's possible, whether the proposal we are suggesting is scalable and maintainable, whether it can perform, whether the resources that we have available, and I mean physical resources, I never talk about people as resources, so the infrastructure resources, will actually meet our needs. Do we have the right things for what we want to see occurring in production at the end? We learn so much. And this is another point of diversity. Diversity is not just gender. Diversity is multifaceted, and it includes roles. So if you only have a bunch of, for argument's sake, UXers or product managers coming up with a design at the beginning, and then kicking that off and perhaps throwing it over the wall to the development team, there will be problems because of the lack of diversity in the solution design upfront. So yes, long answer: I need to see QA and I need to see operations involved in design and kick-off.
Jonathon Wright And with this idea of being a developer advocate but also a quality advocate, what kind of message do you find yourself repeating over and over when you are trying to help the internal team to improve what they do, and get better and learn from what they're doing? Because with the idea of retrospectives, do they really learn?
Theresa Neate Well, there are a number of things that I repeat. One is: it's not my job to tell you if you can release it. Unfortunately, I still get that, much to my surprise. What do you think, Theresa? Are we good to go? And that is definitely something I keep having to educate people on: that quality is confidence. So if your confidence isn't high enough, then we have a problem. So let's rather have a look at your confidence. Some of the other messages that I certainly repeat over and over again are that the word "tester" is not synonymous with all testing, and therefore they are not the sole person executing testing. You may want to include them in the design of testing, but not the execution. You may want to involve them when you are having trouble, but not necessarily if you need to check how something is rendering on a few browsers. The other conversations I have are about how we can include other thinking upfront. So the "ilities": maintainability, serviceability. How do we consider those criteria, all those qualities, at the beginning, as opposed to just what size the button is and what happens when you click it? There are so many things we could be doing upfront that we're not, and those are many of the conversations that I keep having.
Jonathon Wright And I think part of your blog is also a kind of nod to systems thinking. I love systems thinking, and the most enjoyable conference I've ever done was in Australia, and it was called Fusion. I don't know if you remember this, but it was the idea of bringing developers, project managers, testers, and everyone else within the business, BAs, all into one conference, but not having separate swim lanes of different tracks for BAs and testers. It was actually breaking those barriers down and talking about communication, talking about ways to change the culture. And my good friend, Dr. Emma Langdon, she's read a number of books on systems thinking, and she did a presentation around it, which I will make sure we share on this podcast, and which I found fascinating. And I remember she got stuck in talking to a few people in the industry, saying, well, why are we bickering about this? Shouldn't it be a wider conversation? Shouldn't it be involving other people? And I think maybe systems thinking is another thing that we've kind of lost as well, and it's such a shame. I've actually reached out to Tom Gilb, who to me is one of the grandfathers of Agile. He was talking about Evo back in the 80s, and he wrote something called Planguage, which was a less ambiguous language, this requirements engineering idea of having more clarity in what we do. Do you find that with Agile and BDD, maybe there's not as much detail upfront?
Theresa Neate Absolutely. And, to continue the systems thinking conversation: when we describe only the behaviour, which I understand we need to, so there's BDD, behaviour-driven development, that's fine, and when we specify by example, that's okay, we still need to consider the system. And I want to see us using systems thinking. I am a huge fan of systems thinking, and a really good QA is someone who can think of systems and the consequences, what happens outside of the box that you're looking at. When we start thinking about those things up front, we design a far more intelligent and good-quality system from the outset, because we thought things through across the whole stack, the whole, say, seven-layer stack. We are thinking about not just the application as a system, but the underlying aspects of the communication, the passing of messages, and so forth. So yeah, for me, I have personally possibly moved beyond just specifying behaviour. I also want to see that we specify criteria, that we want to see success behaviours underneath the hood as well, because sometimes load time, performance time, and those things should be measured from the outset, not measured at the end.
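Theresa's point that load time and performance should be specified from the outset can be made concrete by writing the non-functional criteria down as explicit budgets that a test checks on every run, rather than discovering them at the end. This is a minimal sketch; the budget names and numbers are invented for illustration:

```python
# Non-functional success criteria expressed as explicit budgets, so a
# pipeline can fail early when a measurement drifts over its limit.
BUDGETS = {
    "page_load_ms": 2000,   # whole-page load budget
    "api_p95_ms": 300,      # 95th-percentile API response budget
}

def check_budgets(measured, budgets):
    """Return the names of any criteria whose measurement exceeds its budget."""
    return [name for name, limit in budgets.items()
            if measured.get(name, 0) > limit]

# Example run: page load is fine, API latency blows its budget.
violations = check_budgets({"page_load_ms": 1800, "api_p95_ms": 450}, BUDGETS)
print(violations)  # ['api_p95_ms']
```

Because the budgets live next to the behavioural specs, they get the same review and version history as everything else in the pipeline.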
Jonathon Wright Thank you, and you're right on the money there. I love what you say about systems thinking. Everything I've learned from the last 20 years of doing automation was around upstream and downstream systems. There's a picture you quote in your blog with the three flowers, which is saying, well, actually, we're all systems, and what's around that is also part of the ecosystem. So I talk about ecosystems of ecosystems. But the idea is that there's a great amount of data flowing upstream and downstream, and part of it is we've got to think about how that's consumed. And that, to me, is where most things seem to go wrong at the moment. You do a Disney Plus launch and it goes down. Disney doesn't own the underlying cloud infrastructure, so underneath that will be an AWS or Amazon, and then there will be some kind of code that's been written, and some third-party system, whether it's even a payment gateway, and something's gone wrong there. And it's most likely not Disney; it's probably the underlying platform, which isn't built by Disney, or the language, which Disney doesn't make, the Microsofts of the world do. But from a brand perspective, they're the ones that get hit.
So similarly, when you're defining those initial outcomes that you're wanting to see, do you go all the way up to maybe the end operational state, and what that means as far as what good looks like? And, like you said with the performance metrics, is it more important to have a reliable Disney service than to have the latest 4K and fastest refresh? It's actually more about delivering something which is a quality product.
Theresa Neate Hundred percent. This is something that I feel we miss very often when we define quality. I'll use Jerry Weinberg's definition: you build something that matters, that adds value to someone. And I think Michael Bolton extended that to say that it adds value to someone who matters. So in this case, the person who matters, if you're a Disney consumer, you want uptime. I think uptime is going to be more important. Therefore, we probably need to speak to our customers, who in this case matter, and we ask them what matters and what value they want to see, and we build that in. And when we use vendors or third-party integrators, we include them, not necessarily in the conversation, but we hold them to the task, and we ensure that we can provide the systems that our customers and consumers need. And if that means we test infrastructure and performance and behavior throughout the entire stack early on, then that's what we do. But yes, is 4K that fantastic when you can't see it? I don't think so. So uptime is going to matter first. You'd have a hierarchy of needs, I'm sure, and we'd need to be able to spec that out and test for all of it. You always test for the things that matter more, and you do that more often than you do for the things that don't matter as much.
Jonathon Wright No, I completely agree, and I actually got a message from a good friend of mine, Ray Ralphie's, who talks about solution thinking. It's kind of a value-driven delivery approach where ideas come in and they look at the characteristics: the behaviors, the biases, the ethics, the values, the cultures. He's probably much better at explaining it, and he's going to come on the show. But, you know, it's fascinating. I think there are so many different undefined components that we kind of have to talk about something like "somebody who matters." But actually, we can break that down a little bit. You can break down what usage looks like versus what the brand experience looks like. What does that capability actually mean? Is it something like 4K that's more important to some people, while for most people it's actually about an offering or a promise that you've provided, whether it's affordable, whether it's got the level of satisfaction that you're expecting? And I think maybe that's something that we don't quantify. Maybe we should be thinking about it.
Theresa Neate It actually ties in very beautifully with the developer advocate domain or discipline: we need to measure satisfaction. We cannot assume that when we are proud of something, the consumers of the product are enjoying it as much. And that ability to provide feedback and improve on it continuously really, really matters. And satisfaction, in my opinion, matters more than the Net Promoter Score, which is loyalty. I'd rather see that somebody is a satisfied customer and care less about them being a loyal customer. When they are satisfied, you know that you're on-brand, you're on quality, you are doing the right thing. When they're not satisfied, you ask them why, and then they can tell you why. And the best thing about a customer who tells you what's wrong is that they are still engaged and you are learning from them. I will happily accept their feedback, harsh or not, rather than make it impossible for them to provide feedback on things they don't like, as if to say: we don't want to hear this, we just want to know if you are a loyal customer. You've probably seen this meme doing the rounds in the social media sphere about an operating system that shouldn't be named, which popped up a little radio-button pop-up, and the wording was not "Are you satisfied?" but "How likely are you to recommend the operating system," blah, blah. And the funny thing is, no one is going to recommend an operating system. If it's forced on you through your employer or your enterprise, you just want it to work. So instead of "how likely are you to recommend this," I'd rather ask "how satisfied are you? Are you a happy customer?" For me, that measures quality, not necessarily loyalty.
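[Editor's note] The contrast Theresa draws is between two standard metrics: Net Promoter Score (percentage of promoters scoring 9-10 minus percentage of detractors scoring 0-6, on a 0-10 "how likely are you to recommend" scale) and a simple satisfaction measure (CSAT, the share of respondents answering 4 or 5 on a 1-5 scale). A small sketch, with invented sample responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def csat(ratings):
    """CSAT: percentage of respondents rating 4 or 5 on a 1-5 satisfaction scale."""
    return round(100 * sum(1 for r in ratings if r >= 4) / len(ratings))

# Hypothetical survey responses for illustration only.
recommend_scores = [10, 9, 8, 7, 3, 6, 9, 2, 10, 5]   # 0-10 "likely to recommend"
satisfaction = [5, 4, 4, 3, 2, 4, 5, 2, 5, 3]          # 1-5 "how satisfied"

print(nps(recommend_scores))   # 4 promoters minus 4 detractors -> 0
print(csat(satisfaction))      # 6 of 10 rate 4 or higher -> 60
```

The same audience can produce an NPS of zero while a clear majority report being satisfied, which is Theresa's point: the two questions measure different things.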
Jonathon Wright Yeah, I've recently learned this. I've started doing some work in the fashion industry, which is incredibly fickle. And one of the things I learned from being in product engineering before is that a common mistake is the product team not working closely enough with the development teams, so you get this kind of gap: a gap of communication, a gap in what the vision and the mission is, and that eventually reaches your customer. And I was told by a recent customer that, actually, we don't care about personas; what we do care about is loyalty and return business. Obviously, it's the fashion industry; they want to make money. And I said, well, actually, there is a great book called Crossing the Chasm, which I recommend everyone read, especially if you're a product owner or a product manager. And the idea is this gap. I don't know if you've seen the diagram before, but it starts off with the early adopters, and those early adopters, you know, maybe they're the fashion influencers. Maybe they're the ones that love the new app because it does something really cool: you can take a photo of a pair of shoes and it'll tell you where to buy them from, or something like that. But then you get to the early majority, and they want something different. They're less tolerant of, you know, a crash, or the fact that you've been signed out and lost all your history. They're less tolerant, they move away, they're less satisfied. And then you get to the late majority, which is the jump that not many organizations manage to make. And that's because it's short-lived. We're in a disposable kind of society where, you know, actually, yeah, that was good for a couple of weeks.
And how many apps do you have on your phone now that you just don't use anymore? Because you've got past that initial excitement: well, yeah, it does something really cool, but that's not part of my daily life. But I think that's what's really important: how do you get to the point where you can bridge the chasm, go over, and actually reach the late majority and the early majority, the mainstream public, who want a product that they can rely on and that works? You know, they don't have to say, are you happy, do you want to give us a five-star rating on the app store? People just use the app because they love it. It becomes part of their life, their digital life.
Theresa Neate Yeah, the conversation on quality clearly changes depending on the audience, and you would need to adjust yours. This is me putting on my developer advocate hat, which also helps me be a better QA. We need to adjust our questions. We need to understand our audience really, really well if we want to provide something that meets their needs, which means that we need to be very agile with a lowercase a, and we need to be adaptive and flexible. So whereas we might have been able to attract some people in a certain segment of adoption, we don't necessarily know the laggards. We don't understand them and how their minds work. I would say that we'd probably want to speak to them and understand what matters to them. And again, we're talking about value. So we want to understand what value they will see or enjoy, without necessarily bombarding them with radio buttons and survey questions on the application itself. We're going to have to find a way to make contact with them and understand their needs. I will say in the same breath, though, there's the misattributed quote about Henry Ford and the carriage: if he'd asked people what they wanted, they would have asked for faster horses, when he came up with a car. Now, apparently, that's a misquote, but he was also quite innovative. So we have to remain innovative in what we do and possibly experiment with some ideas on people and see how they respond, as opposed to just taking their feedback or their advice word for word, because they don't know what they don't know. And therefore we might need to present them with ideas and see how they respond to that. And then we'll come out with multiple audiences addressed through the same solution.
Jonathon Wright And you know, there's a question around how many years innovation was put back by the rush to market with a product that everyone believes is the right thing to do, versus actually redefining the carriage and making it into a car. I think there are organizations that are really redefining everything, whether it be, to use the cliche, Tesla or something like that. You know, I think it helps to start from scratch again. I saw a keynote the other day where the guy said, you know, everyone wants to do these latest patterns. Everyone wants microservices. They all want to do low code. But is that the right architectural decision, or is that just the trend? Are we just building products that all look and feel very similar to each other, and therefore we accept them as long as they feel cool and look good? Maybe we do need a fresh idea of what the customer really wants. And, you know, I found this with a project we're working on. We used a tool called InVision, which is a free tool that allows you to literally deploy an application that's had no code written for it, and you can have 20 different versions of the app, 20 different UX designs of different flows. And then you can start doing A/B testing. You know, I think crowd testing is something that's highly underutilized. From what you were saying, should you be giving out mundane tasks like cross-browser testing, or should you maybe do, you know, enterprise gamification or enterprise crowd testing, so that everyone across the business does a hackathon and has a go at breaking it? Do we make those activities something that we do internally?
Do we put that out to a crowd where we get a split of different age groups and different backgrounds, instead of having the same view from the same people? You know, I think this is something that I'm hoping to see change within the testing industry: the reliance on "well, actually, I'm measured by my throughput," which could be the DevOps metrics, the number of builds I've done, and changing from that to, well, the satisfaction side of things and how we might measure satisfaction. Is it through the operational tools telling us that the behaviors we're expecting are happening, that they're turning into transactions, or that people are consuming more videos and watching them for longer? Is that success? Changing it from revenue generation to customer happiness? You know, I think with the stuff that you're doing, definitely with DevOps Girls and some of the content that you're putting out, you're obviously really passionate about making an impact. What recommendations would you have for people out in the industry on how not to just take the norms and instead really go in and learn for yourself? What would you recommend to those people kind of starting out, and also people who just want to get better and learn more?
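[Editor's note] The A/B testing Jonathon mentions, comparing different UX variants, usually comes down to deciding whether one variant's conversion or satisfaction rate is genuinely better than another's. A minimal sketch using a two-proportion z-test; the user counts and the variant names are hypothetical illustrations:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two sample proportions (pooled SE)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Variant A: 120 of 1000 users converted; variant B: 90 of 1000 (invented numbers).
z = two_proportion_z(120, 1000, 90, 1000)

# Two-sided p-value from the standard normal CDF, via the error function.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(round(z, 2), round(p_value, 3))
```

With these sample counts the difference is statistically significant at the usual 5% level, which is the kind of evidence that could replace "it feels cool, so ship it."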
Theresa Neate Well, the first recommendation I always make, and I've even done conference talks on this, is to not expect anything to arrive on a silver platter. Do not expect for one minute that because the industry is moving in a certain direction, it's your boss's job or your manager's job to bring it to you. You will need to be the proactive person and go seek it out. And this means that you'll need to be a little bit vulnerable and a little bit uninformed, and that's okay. You can start there. But if you don't venture out of your comfort zone and you don't start exploring unknown territories, you will be left behind and you will feel threatened. So instead of feeling that the world is changing, that automation is going to rule everything, that AI will take over and we will lose all the jobs that involve humans, why don't you rather inspect some of these solutions that are coming in and understand how you could be part of that? And this is again about the comfort zone and the willingness to fail. You mentioned earlier the great successes that we know of, for instance innovation successes such as Tesla. We tend to hear about and celebrate their successes. What we don't know, and Richard Branson is one of the people who says this very often, is how often they fail and how they learn from their failures. So I'm going to suggest to people, and highly recommend: leave your comfort zone, do your exploration. Don't use buzzwords, because people will see through it. Don't put yourself forward as an expert in something if you don't actually know what it is, because the interviewer will find out. But go and learn those things. Be willing to fail. I don't know about you, Jonathon, but how many times have we failed to get to the point where we are now? I have failed, it feels like, almost a million times. Gotten back up again, tried again. Redefined my role, redesigned my career.
I've had to restart my career a few times after failures. So in a nutshell, venture out, be daring, try things out, look for answers. Don't wait for the answers to come to you. Be willing to fail and be willing to learn from your failures and get back up again and keep on going.
Jonathon Wright Oh, that's wonderful advice. And you know, I'm going to add a little picture of something that I usually put at the end of my slides, which is, I think, the Sydney conductor who talks about celebrating failure. Part of how he encourages his team is: yes, we made a mistake, let's try again and get better. And I think those are incredibly wise words. Organizations don't celebrate their failures, and they should, because you learn from those failures. And you're right, the big names like Dyson. Dyson is famously known for iterating so many times that everyone thought the product sucked, no pun intended, given the industry they're in. But it took something like three hundred iterations: they failed so many times and they learned from it. And I think that is great advice. And I think also the stuff that you're talking about with the T-shaped tester, the breadth of understanding that you need across the top, as well as critical thinking and curiosity, the idea of really challenging everything, is, I think, incredibly important. And you're right, QAs and testers are a finite resource and they should be utilized in the best possible way; that's where the most value is that we can add. So I think it's great what you're saying. And on another note, something slightly closer to my heart recently: I read your blog about taking on too much, you know, nearly burning out because you try to learn so much about technology, and you've got to pick your battles. And I felt that same kind of thing.
I've just written an article for The QA Lead around the challenges that we get in the QA industry around burnout, pressure, and stress. Do you have any advice? In your case, you went and spoke to somebody, talked to them about it, and got some advice from them. What would you recommend to people who are feeling that kind of fatigue in QA, who are stressed and feeling that the weight is on their shoulders, that quality is their responsibility, and if something goes wrong, it's their fault? What would you say to those people who are listening?
Theresa Neate I would say to them that even though you may not think it is, the ball is in your court and you will have to be selective about what you engage in. And if you are finding, for instance, a work scenario where you are being burnt out and you have tried to resolve it and it's not working for you, then it's on you to leave and find something different. One of the best jobs I ever had was working for ThoughtWorks Australia, and I only found that job by accident when I had a terrible experience somewhere else and was made redundant unexpectedly one Friday and given a package, and the entire testing team was let go on the same day, which of course made perfect sense to them only. And in that state of distress, I had to pick my battles and I had to understand what I needed to do next. And I found, by accident, ThoughtWorks Australia. And as I said, it was a phenomenal job and a phenomenal company to work for. So when you find yourself in distress, you have to realize that there is something good waiting for you on the other side of the distress; you just can't see it. And if you continue to concentrate only on the distress, you will make the distress worse and you will not necessarily open the doors to other opportunities. If you do find yourself in that position where you really don't know what to do, yes, you should reach out. You should definitely get help, whether that's professional help or assistance from a friend. And you need to understand that this too shall pass, and you are going to find something else on the other side if you yourself are proactive and go find it. If you wait for it, it's not necessarily going to happen. Again, it brings me back to the message that we are the creators of our own path, even though we don't always have an understanding of what is coming. We need to understand that, and we need to be proactive about the things that we want to change.
Jonathon Wright So the message is: be prepared. Really think about your career, what's important to you, and what you love doing, and make sure that you don't waste that passion and that curiosity; go out and take charge of your own path. You know, I think it's been such a wonderful conversation. Thank you so much for being on the show. I'll make sure that all of the links we've talked about today are available on the podcast page. And we'll have to get you back on, or we'll need to get you to write some content for the website.
Theresa Neate Oh, I'd be happy to. Thanks for having me. It's been a great pleasure. Fantastic.