Lewis talks about how QA approaches are changing as NoOps begins to replace DevOps. He discusses the tools, training, and culture of QA automation, projects he's led, and the future of automation in QA.
- Subscribe To The QA Lead Newsletter to get our latest articles and podcasts
- Check out Cancer Research UK
- Check out LewisPrescott.co.uk
- Connect with Lewis on LinkedIn
- Follow Lewis on Twitter
Other articles and podcasts:
- About The QA Lead podcast
- Unit Testing: Advantages & Disadvantages
- Help Test COVID Safe Paths MIT Project (with Todd DeCapua from Splunk)
- How I Prepare And Test For My Releases
- QA Tester Jobs Guide 2020 (Salaries, Careers, and Education)
- What Is ‘QMetry Automation Studio’? Detailed QAS Overview & Explanation Of QAS Features
- Automation Testing Pros & Cons (+ Why Manual Still Matters)
- How To Keep Up With Change In The Testing World (with Joel Montvelisky from PractiTest)
- The Digital Quality Handbook (with Eran Kinsbruner from Perforce Software)
- Developing The Next Generation Of Testers: The DevOps Girls Bootcamp (with Theresa Neate from DevOps Girls)
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Jonathon Wright Hey, and welcome to The QA Lead. Today I have Lewis Prescott, who's a QA automation lead for Cancer Research UK. He's also a fellow Udemy course creator, and he did a course section around contact tracing, so check that out. And today we're going to be talking about everything we did over Covid with the guys at M.I.T., how we challenged ourselves around automation, and what's coming down the line.
In the digital reality, evolution over revolution prevails. The QA approaches and techniques that worked yesterday will fail you tomorrow. So free your mind. The automation cyborg has been sent back in time. TED speaker Jonathon Wright's mission is to help you save the future from bad software. This podcast is brought to you by Eggplant. Eggplant helps businesses to test, monitor, and analyze their end-to-end customer experience and continuously improve their business outcomes.
Jonathon Wright Yeah. Exciting times. You know, you've got such an interesting background, coming into the testing world straight out of university and then working for some really interesting companies like ASOS, doing some really challenging work. I guess it's something that we've not had on the show yet, which is this kind of cloud testing, or testing in the cloud, where you've got, like you said, auto-scaling for somewhere like ASOS on a Black Friday in Azure. And now you're talking about serverless architecture, which is really interesting, because I don't think anyone's really talked about how you test something like that. There's a lot of talk about NoOps and this idea that with serverless, you put the code in, it's going to execute, spin up the compute and the resources it needs, and destroy it all afterward once it's got the results. How does that change your approach to testing?
Lewis Prescott Absolutely, yeah. So I guess the traditional model is you deploy your application and you test it, and you test at all the stages throughout that process. So you've got your unit tests and you've got your integration tests. But ultimately, in a serverless architecture, you don't have that concept of integration, because there is no integration point: it's just a snippet of code that will run. So, yes, you can just test it at a unit level and you can mock the trigger. The trigger for your lambda function is just a mock event, so you can test all the logic within your lambda function without actually having to execute it. And that simulates the integration testing that you need to do. It's a really nice way to do it. And everything's in code: once you've written your test, you've written the code, it's ready to go. You can just deploy up to AWS and be confident that it's going to work.
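What Lewis describes can be sketched in a few lines: a Lambda handler's trigger is just an event object, so all the handler logic can be unit tested by passing in a mock event, with no deployment involved. The handler name, event shape, and file key below are illustrative assumptions, not the genome project's actual code.

```python
import json

def handler(event, context):
    """Toy Lambda handler: pulls the object key out of an S3-style trigger event."""
    key = event["Records"][0]["s3"]["object"]["key"]
    return {"statusCode": 200, "body": json.dumps({"processed": key})}

# The trigger is simulated with a plain dict, so no AWS account is needed
# to exercise the handler's logic.
mock_event = {"Records": [{"s3": {"object": {"key": "scans/image-001.png"}}}]}
result = handler(mock_event, context=None)
print(result["statusCode"])  # 200
```

The same mock-event pattern works for any trigger type (queue messages, HTTP events), since each is ultimately just a dictionary handed to the handler.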
Jonathon Wright It's fascinating. There was actually a really interesting article from the Microsoft Visual Studio team talking about low-code solutions and this idea that you might want to visualize all that, because to me, that's the big challenge. When I go into something like AWS with Terraform, I've got the infrastructure as code and the platform as code, and I can read the YAML scripts and I can read the Terraform scripts, but I can't visualize it. And exactly what you've just said: you may have, say, AWS CloudWatch, which could say, okay, we're going over 80 percent, we're going to spin up more capacity and then scale back down when capacity is low. Now, to test that, you've got to be able to simulate it in some kind of way. And I believe the Visual Studio low-code solution was literally dragging and dropping AWS or Azure functionality into a flow so you could understand, well, actually, these are the triggers. And to me, it's chaos engineering again: it's understanding, well, what happens if we get five people dropping images into S3 storage and you've got to then pick those up, remove them from the folder structure, and process them. It could be computer vision, someone's uploading a scan, and it's going to trigger that serverless architecture. There are so many different scenarios of how that works, and of what good looks like as well, because if it runs, how long should it run for, depending on the complexity of the problem? So how do you visualize it? Do you find that you're drawing diagrams, or using wikis, to explain to people who have questions about how the architecture all fits together?
Lewis Prescott Yeah, absolutely. So on the genome project, we have a scientist assigned to the project, and he's the core of painting that picture of what it actually means, because when you're testing, you use datasets that you understand and where you can expect the output. But the scientific data that we're looking at is very complex and very large. So, yeah, he is crucial in drafting what it's going to look like and how many levels of processing are going to happen before we get to the actual output. One thing we hit at the start of the project was that our lambdas were timing out because of the size of the data we were trying to process. And we couldn't just throw scaling at it, because it's a data set which needs to be processed one piece after the other; it's not a case of just throwing parallel processes at it. So we came up with a caching solution for that, where ultimately all you're doing is retrying your lambda based on the cache. It's definitely a tricky solution, but having the context really helped with that.
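The checkpoint-and-retry idea Lewis mentions can be sketched roughly like this: because the data must be processed in order, each retry of the timed-out function resumes from a position held in a cache rather than starting over. Everything here (function names, the "double each item" stand-in work, the per-invocation budget) is an illustrative assumption, not the project's real pipeline.

```python
def process_with_checkpoint(items, cache, budget):
    """Process `items` in order, at most `budget` per invocation,
    resuming from cache["pos"] on each retry (as a retried Lambda would)."""
    start = cache.get("pos", 0)
    end = min(start + budget, len(items))
    for i in range(start, end):
        cache.setdefault("done", []).append(items[i] * 2)  # stand-in for real work
    cache["pos"] = end
    return end >= len(items)  # True once everything is processed

cache = {}
data = [1, 2, 3, 4, 5]
# Each loop iteration plays the part of one Lambda invocation/retry.
while not process_with_checkpoint(data, cache, budget=2):
    pass
print(cache["done"])  # [2, 4, 6, 8, 10]
```

In a real AWS setup the cache would live somewhere durable between invocations (e.g. a store like DynamoDB or ElastiCache), since Lambda instances don't share memory.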
Jonathon Wright Well, that's interesting around the caching, because I've been a member of the World Community Grid since the early days, and I've literally donated something like 40 years of CPU time, and a lot of it was around cancer projects. There are huge datasets, and what it would do is keep running whatever was left to process. When I'd jump onto a server to do some performance work on something, I'd obviously pause the World Community Grid agent, and I'd look in the folder and it had like seven, eight gigs of just data sat there paused. And I always felt really bad that the result wasn't going to get posted until maybe my 24-hour test was finished, right? It's that kind of timeout thing where one node is doing the processing. And I guess this is the problem with distributed parallel processing of activities: if you don't get a response from one of the components, it's going to have knock-on effects and you're going to have to resume at that point. But I don't even want to go into the infinite level of complexity around that. Like you said, having a data scientist, or what is a real scientist, who's going to create the data and give you the clear 'what does good look like' out of the back of it. The complexity is absolutely huge. But I think what you're doing is kind of the next generation of testing, because this concept of NoOps and the autonomous capability of being able to spin up serverless architectures with a purpose, to do some activity and then spin back down, really changes the model of how some organizations work.
You know, there's a lot of organizations that are used to nine o'clock, loads of people logging in, five o'clock, everyone's logging out, and that hardware provision that's just there. And yes, there is auto-scaling, but it's always on. And you're building on top of things, you're patching; it's kind of an evolving beast. Whereas what you're doing has a specific purpose: it goes off, it delivers that purpose, then comes back. And it changes that mindset of how organizations could potentially work, in the sense of, okay, we need to do some OCR with Tesseract, say, so we'll use this serverless architecture instead of building services and an API that's going to be sat waiting for the one scan a week which might be coming through from a patient, say, and they need to scan it. So it's really interesting. And I know you've got that background where you've used, you mentioned, Cypress and some of the automation stuff, and the work that you did at ASOS on the Azure stack. Do you find that it's a lot of API-level stuff over UI automation now? Or is it even further down now, at the code level?
Lewis Prescott Yeah. I mean, it's different in different contexts. I think people take different approaches. At Cancer Research, the approach seems to be to just go for high-level UI tests to get the coverage, and then it goes into a support model where the lower-level tests aren't seen as that important, so the emphasis doesn't go into building out your API-layer tests. But in other contexts, we've got a payment provider, and obviously that's all API-level stuff. So it depends on the context, and also on the level of importance, which comes from the top, I think. Because I hadn't seen an environment like Cancer Research before, where you've got a lot of your high-level tests and not the lower layers. But it seems to work, and ultimately it's on a budget, so I can understand where they're coming from.
Jonathon Wright So it's a really interesting concept around the value of tests. On one of the previous podcasts, we were talking about this kind of value-driven approach. And I know you do a lot of training and training up people. I'm working on one of the biggest projects at the moment, and one of the first things they said to me was, well, we've got to support this. And when they said support, they meant IT operations management; it didn't mean development, right? Part of it was: we're going to release something, it's going to go into an operational state, and then they're the ones that are actually going to be doing, let's call it synthetic monitoring for a second. In essence, what you're talking about is that your Cypress tests and stuff just become: everything's good, people can upload the file, people can use the UI. It's a kind of check of everything. And I think that's where the industry seems to be going: making synthetic tests more realistic is going to be a big step forward. Because a lot of organizations would get an APM tool and create some very simple happy-path API checks, see if it connects, or you get a token back, or something really basic. And that would be the confidence that that API gateway is up and fully functional. And like you said, from a payment gateway perspective, you might have a third party in there, someone like Stripe or PayPal or Worldpay, and you've also got to make sure that link is up and running and everything's looking good from an operational state.
And then also prioritizing what's important based on the criticality of what it is. If you're talking about healthcare and medical, it suddenly becomes incredibly critical, but at the data level, in what the outcomes are, more than potentially what the interface does.
Lewis Prescott Yeah. So one of the things that was very evident at ASOS: we used our users as our UI tests. We built the API-layer tests to a very high level and we had good unit test coverage. But ultimately, because we were multi-region, we could release our software to a very small percentage of the users and get that feedback from thousands of devices, thousands of different setups. The output of that is so much more valuable than any UI tests we could ever have written. And it's just a small percentage of users that are potentially getting a poor experience. With blue-green deployment, we can just flip back if we see an error spike or anything like that. It's about adding that value, and if you can't add it, there are other methods to get that value. ASOS is a big brand and we had thousands of users using it all the time, so we could get that feedback. But there's no reason why you can't simulate that elsewhere.
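The release pattern Lewis describes, routing a small slice of users to the new build and flipping back on an error spike, can be sketched in a few lines. The percentage, threshold, and "blue"/"green" labels are illustrative assumptions, not ASOS's actual configuration.

```python
import zlib

def choose_build(user_id, canary_percent=5):
    """Deterministically route roughly canary_percent of users to the new build.
    crc32 gives a stable bucket per user, so a user always sees the same build."""
    return "green" if zlib.crc32(user_id.encode()) % 100 < canary_percent else "blue"

def should_roll_back(errors, requests, threshold=0.02):
    """Flip back to the last reliable build if the canary's error rate spikes."""
    return requests > 0 and errors / requests > threshold

greens = sum(choose_build(f"user-{i}") == "green" for i in range(10_000))
print(greens)                                      # roughly 500 of 10,000 users
print(should_roll_back(errors=40, requests=1000))  # True: 4% error rate, roll back
```

Hashing the user ID rather than rolling a random number is the usual choice here: it keeps each user's experience consistent across requests during the canary window.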
Jonathon Wright Yeah, and this is one of the talking points. I talked to the founder of 21 Labs yesterday, and we came to the same kind of conclusion: once the shift-right aspect comes in, where you've got a small sample of realistic scenarios going through your system, you should be able to pull those across and leverage them as your UI regression, because then you've got at least an idea of what's happening in the real world. And then, as a level on top of that, this kind of canary deployment idea makes a lot of sense, in the sense that you can roll it out restricted to a certain amount of users. It could be at a user level, it could be a region, or a DNS or something that you want to be able to control. And then you can instantly see from the different tools: is it affecting conversion rates? Is it affecting engagement? And I think this is fascinating for any organization, whether you're a fashion retailer, or healthcare, or even financial services; it's really important to understand this behavior. And I think that's something we've always kind of missed: we've assumed the behavior based on the functionality and tried to build coverage on that, not really looking at what the actual usage patterns look like and how they differ across devices and across geolocations. This was a really interesting example. My understanding, and I could be wrong here, is that they had different geo regions because it's a global organization. Part of it is obviously that there's going to be different workflow logic: if you're trading in the US, there are different US laws, you've got to sell things in a slightly different way, so the site's slightly different.
You're not going to have one single site with 15 different geo-based configurations; you've got separate sites to manage, and it's those layers of complexity. And then behaviorally there's a change. I dealt with a large pizza company where, in the US, the behavior was completely different to what it was in the UK. So when they had a gluten-free rollout, they saw similar kinds of patterns, but in the UK it dropped off a lot quicker than it did in the US.
The US was actually quite committed to having gluten-free products, whereas it was just a passing phase in the UK. And I think that's very similar with the fashion industry; it can be quite fickle, in the sense that your different age groups, your Gen Alphas and your Gen Zs versus your millennials, are going to be using the app in slightly different ways. And then obviously ASOS taking it to the next level with things like augmented reality, where you could actually see people walking around. And that, to me, seems like an opportunity for this kind of serverless thing: if you're uploading a photo and you want to cross-reference it against the catalog, you want that computer vision to recognize that it looks like their style and then return the item back. So there is a lot of complexity in all the kinds of projects that you've got. Maybe this is the most complex project ever, for an amazing cause. And I think you're right.
I think you've got this kind of, you know, what does testing in the wild look like? Because AWS is a live service. A lot of people recently have loved the idea that you can spin up containers, and you could potentially have, like I've had, Neo4j and Kafka and my APIs all running at the same time on my laptop, and I can test that, like your contract tests and what you've done before. But with AWS, you're having to use the real services. You're either going through the SDK at code level, or maybe through the console, or a combination of both, or directly to the API. There's a lot you have to learn that's above and beyond just the standard testing landscape. Is that what you found? Because obviously you came from Azure at ASOS and then on to AWS; have you felt that you've had to kind of relearn cloud platforms?
Lewis Prescott Yeah, they are kind of going in the same direction, I think, now. And yeah, as you say, the different configurations. There's definitely a learning curve involved, mainly just around terminology and things like that, because they always want to have their own unique names. And there are nuances within the documentation and stuff, so that can send you in the wrong direction at times. But the transition has been quite smooth, actually, just based on the fact that it's infrastructure as code. Once you know how to do it one way, you can kind of transition across.
Jonathon Wright Yeah. One of the things I'm still very unhappy about is the definition of security roles. To me, a role is a role, right? Whether it's me, or IoT, or something, not some kind of policy document that you can attach or retract to allow people to do things. But at the moment, I can see why they're doing it. From a security testing perspective, you've got that question of, how do you do a privacy-by-design kind of approach, the lowest level of access granted, because at the end of the day you want to make it as secure as possible, especially if there's anything there which is sensitive. Do you find that the security side adds that extra level of complexity? Does your team do any kind of security testing as well?
Lewis Prescott We do currently. With the roles on AWS, and multi-factor authentication and stuff like that, you have to account for that in your tests. You need to go through the different levels of security before you can even hit a different server, so we've had to account for that. But there's this concept of assuming roles in AWS, so you can assume a more privileged role for read-only access and things. That kind of configuration takes a little while to get set up, but once you've got it set up, you only set it up once and then you can use it across the tests.
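The "set it up once, use it across the tests" idea can be sketched as assuming the role on first use and caching the temporary credentials for the rest of the run. In real code the client would be `boto3.client("sts")` and its `assume_role` call (which does take `RoleArn` and `RoleSessionName`); a fake client stands in here so the sketch runs without an AWS account, and the role ARN is made up.

```python
_CREDS_CACHE = {}

def assume_role_once(sts_client, role_arn):
    """Assume `role_arn` the first time it's needed; every later call in the
    test run reuses the cached temporary credentials."""
    if role_arn not in _CREDS_CACHE:
        resp = sts_client.assume_role(RoleArn=role_arn,
                                      RoleSessionName="qa-test-run")
        _CREDS_CACHE[role_arn] = resp["Credentials"]
    return _CREDS_CACHE[role_arn]

# Fake STS client so the sketch is self-contained (no AWS calls).
class FakeSts:
    calls = 0
    def assume_role(self, RoleArn, RoleSessionName):
        FakeSts.calls += 1
        return {"Credentials": {"AccessKeyId": "EXAMPLE", "SessionToken": "token"}}

sts = FakeSts()
role = "arn:aws:iam::123456789012:role/qa-read-only"
first = assume_role_once(sts, role)
second = assume_role_once(sts, role)
print(FakeSts.calls)  # 1: the role was only assumed once for the whole run
```

Real STS credentials expire, so a production helper would also check the `Expiration` field before reusing the cache; that's omitted to keep the sketch short.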
Jonathon Wright Yeah, it's interesting. I'm working on an M.I.T. project at the moment, COVID Safe Paths, and it's the same thing we've got with the mobile app: it's got two-factor authentication. So initially, when we put the automation scripts together, it was like, well, this is a pain. But we were lucky: we're using an image-based technology stack, which actually meant it could go and grab a code from another device and then pass it across and enter it. It's like that kind of CAPTCHA problem you used to get with websites: it's great, but it kind of hands off, it means we have to have a human involved, whether it's even just sending you a message to say, it's just been sent to your phone, can you confirm the code? So you've got this extra level of involvement; avoiding a security breach just makes things more complicated. And also, how long before the code actually expires, all those kinds of things. It feels a bit like the RPA world is coming at this, with UiPath or Power Automate from Microsoft, getting to a point where, now that we know this kind of stuff, we can start to piece it together operationally, where this kind of automation could also take advantage of: okay, we get these files sent to me by email every week, I want to pick them up, here are the source files, I've got to put them into S3 buckets somewhere, then go pick something up from there and process it. Part of it is that's just an activity that becomes automated through RPA. So do you see that potentially you could be transitioning at some point to be more domain- and context-focused, more about how can I help across the organization, not just in a testing domain?
Lewis Prescott Yeah, absolutely. So it's already happening at Cancer Research in that, as I described earlier with the Spotify model, we're working with the landing zone team, where that infrastructure team is setting up all the permissions across the organization to access these services. And we've got some tests in there which are basically: this is the YAML file, and the YAML file should contain these values. That's not a deep level of testing or anything; it's just basic human error that we're checking against. So my expertise is valuable, but I wouldn't classify that as testing. It's more like configuration management, really, and automating that configuration management to be more robust. And as you say, the testing skills are valuable in lots of different areas within this context.
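A minimal sketch of the "the YAML file should contain these values" guard Lewis describes, catching human error rather than doing deep testing. Real code would use a YAML library such as PyYAML; to stay self-contained, this toy parser only handles flat `key: value` lines, and the keys and expected values are invented for illustration.

```python
# Hypothetical expected values; a real landing-zone check would have its own.
EXPECTED = {"region": "eu-west-2", "encryption": "enabled"}

def parse_flat_yaml(text):
    """Tiny stand-in for a YAML parser: flat `key: value` lines only."""
    pairs = (line.split(":", 1) for line in text.splitlines()
             if ":" in line and not line.lstrip().startswith("#"))
    return {k.strip(): v.strip() for k, v in pairs}

def check_config(text, expected=EXPECTED):
    """Return the keys whose values don't match expectations (empty = pass)."""
    config = parse_flat_yaml(text)
    return [k for k, v in expected.items() if config.get(k) != v]

good = "region: eu-west-2\nencryption: enabled\n"
bad = "region: us-east-1\nencryption: enabled\n"
print(check_config(good))  # [] -- nothing wrong
print(check_config(bad))   # ['region'] -- human error caught
```

Wired into CI, a non-empty result fails the pipeline before a mistyped region or disabled encryption setting ever reaches the landing zone.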
Jonathon Wright Yeah. And I think you've just brought up a wonderful point. I remember when I started in the 90s, we used to have a deployment team, right? A dev team, deployment team, Ops team. And as things came along, as agile and DevOps came along, that deployment team, which we used to heavily rely on, just disappeared, right? And it blurred the boundaries, which was what DevOps was supposed to do. Maybe not as much operational input early on as there should be: what should the end state look like, thinking about that from the sprint planning stages. But that deployment team used to do black magic, right? They used to do all sorts of configuration management tasks. They could look at any file in those days, maybe just glance at a config file or a schema, and they could literally go, yeah, you've got it wrong, you're still pointing to the wrong IP address in the config database, and it's incorrect. And you'd be like, I could never spot that. Part of it now is that what you're doing is getting to the point where your eyes are on everything, and you have to understand enough to be able to switch between roles: operations, deployment engineer. And I did a podcast with somebody quite recently who talked about this QA release manager, or release QA, and I was like, this is quite a fascinating idea: somebody who is just there to help bridge that gap a little bit. And on the other side, we've got the site reliability engineers, right? Those guys in production who are looking at, well, how do we make this more robust? How do we avoid these issues?
And also, like you said, from an Ops point of view: if something goes wrong, how do they switch the right configuration back? How do they get back to the last reliable build without losing anything they've got? And I think this is where DevOps wanted to be five years ago, but it didn't make it. It's this Ops-dev approach, where you're getting more input from the operational end, where you're thinking, well, what should it be? How do I maintain it? And also, not just a wiki that's in the test repos, but actually: okay, I've got this YAML file in front of me, I know this should be set to whatever, and if I want to roll it back, I need to set these flags here, in the commit or something. I think it's a kind of bold new world. And maybe we are getting to the level where we won't need these silos of dev, QA, DevOps, and test teams. Maybe this is the next generation, the next wave of evolution we should be going to.
Lewis Prescott I think Shift Right will kind of help to improve people's knowledge of the operations side, because putting things behind feature flags, or changing configuration in production so that you can test something, QA will take a role in that, right? And then you can hopefully lose some of the environments that you have. I mean, we have four or five environments before we get to production. Why do we need all those when we can deploy to production behind a feature flag? I think hopefully the Shift Right move will transition QAs into being more release-focused, focused on production.
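"Deploy to production behind a feature flag" can be sketched very simply: the new code path ships dark, and a flag, not a separate environment, decides who sees it. The flag name, groups, and flag store below are invented for illustration; real teams would typically use a flag service rather than an in-process dict.

```python
# Hypothetical flag store: the new flow is live in production but only
# visible to an allow-listed group (e.g. the QA team testing in prod).
FLAGS = {"new-donation-flow": {"enabled": True, "allow": {"qa-team"}}}

def is_enabled(flag, user_group):
    cfg = FLAGS.get(flag, {})
    return bool(cfg.get("enabled")) and user_group in cfg.get("allow", set())

def donation_page(user_group):
    if is_enabled("new-donation-flow", user_group):
        return "new flow"   # exercised in production by the allowed group only
    return "old flow"       # everyone else keeps the stable path

print(donation_page("qa-team"))  # new flow
print(donation_page("public"))   # old flow
```

Once the flagged path has proven itself with real production traffic, widening the allow-list (and eventually deleting the flag) replaces the promotion through four or five pre-production environments that Lewis questions.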
Jonathon Wright And this is a really good example I mentioned with the gluten-free challenge. It was really interesting because I was part of the team that helped with that in the UK, so we saw it in the US first. They were using an APM, Splunk, at the time, and they were looking at the numbers. They had reports coming out every couple of weeks, and of course they were looking at the gluten-free adoption. The original concept came from marketing, right? Marketing said, well, we're seeing other organizations, we've read articles saying it's potentially going to be eight percent growth in new products based on opening this new line. So it got approval; there was a business justification for it. And then on the other side, they saw this growth and kept looking at it, and someone was responsible for making sure that the cost of the gluten-free products didn't outweigh the additional revenue generated. It doesn't have to be revenue-focused, but it was in this example.
So in the U.K., we said we'll do the same thing. In about 12 weeks we introduced it: the back end, front end, e-commerce, all the stores around the UK. And we were all high-fiving each other, going, we've brought this new line to market in twelve weeks and it ran flawlessly. And exactly the same thing: same adoption rates, same numbers coming out of the back end. They're on different stacks, but we could still get the same measurements, and we did. They were so happy that we were going to do this great DevOps presentation. This was six or eight months later, and I pulled the IT director in because we were going to present it, our team having finished on that particular piece of work. And I said, so how is it doing? He said, it's fantastic, it got up to 14 percent, a really great initiative, great timing, a great case study. So I said, let's go and have a look. We went in, their teams all nicely mixed up, and I asked, can someone let me have access to Splunk? And I was like, oh, this is not looking great; these aren't the numbers we were talking about. He pulled it back and looked at the end of the last month, when he'd last looked at it, and that was saying 12 percent, and he'd been happy. And I said, so why has this dropped off? And he was like, well, I have no idea. And I said, well, what's changed? He said, well, we've actually just launched on the front page a new stack of exciting products for the World Cup or whatever it was, and the gluten-free products aren't advertised on the front anymore. There's no messaging, there are no emails going out from marketing. As far as we're concerned, that's done.
I said, so what does that mean in terms of cost? He said, well, we'll have a look. And we went into the back end and realized there were like tens of thousands of pounds of loss from each franchise, gluten-free products which were going off. At the time, obviously, I was really excited that I'd spotted it, but I was also thinking, at what point does something go to end of life? Feature teams will appear and disappear, but how does the business continuously understand whether all that functionality and behavior is still relevant and still appropriate? And I think that's the last mile of what you're talking about here: operations is continuously looking at that and going, okay, there's something wrong there, we're seeing a decrease. If it was ASOS, if it was trainers or something, what is this related to, and what does that mean for the hypothesis that we had? Which was: if we bring this new AR capability out, which lets you take photos of trainers and match them to the newest trainers, well, Jay-Z's wearing whatever, it takes off quickly. But that's not the case anymore; people are looking at the trainers of a competitor who's brought out something different, and they're doing kind of a SWOT analysis. It feels like that's the last mile of the business side of things which we've not thought about. It might not be an issue for the Cancer Research guys, because they're in it to win it. But organizations definitely need to continuously prove the value of what was delivered historically, and that could be days, weeks, years ago. No one ever does. It's like the old story of taking loads of features out of something like Word or mail because nobody was ever using them.
If you don't know why you support them, why are you still testing them?
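The drop Jonathon describes — an adoption rate sliding from 14 percent to 12 percent with nobody noticing — is exactly the kind of check that can be automated against whatever metrics store you use (Splunk, in his story). A minimal sketch, assuming you can already pull an ordered series of readings; the function name, window, and threshold here are illustrative, not any particular tool's API:

```python
from statistics import mean

def metric_dropped(history, window=4, threshold=0.10):
    """Flag when the latest reading falls more than `threshold`
    (as a fraction) below the average of the preceding `window`
    readings. `history` is ordered oldest-first, e.g. weekly
    adoption rates exported from a dashboard."""
    if len(history) < window + 1:
        return False  # not enough data to form a baseline yet
    baseline = mean(history[-(window + 1):-1])
    latest = history[-1]
    return latest < baseline * (1 - threshold)

# A steady series stays quiet; the slide toward 12% trips the alarm.
steady = [13.8, 14.1, 14.0, 13.9, 14.2]
dropped = [13.8, 14.1, 14.0, 13.9, 12.0]
```

Wired into a scheduled job, a check like this would have paged someone months before the DevOps presentation, rather than the drop being discovered live in front of the I.T. director.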
Lewis Prescott Yeah, I think being much more aligned to marketing is crucial for a business. Marketing teams run A/B tests all the time to test different paths and workflows, and then that gets fed back into the dev team. But why isn't the marketing team part of your dev team? Then you close that loop, right? They're going to be constantly looking at what is actually happening to adoption, because they're the ones that care about it.
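The A/B testing Lewis mentions rests on one core mechanic: stable bucketing, so a given user always lands in the same variant. A minimal sketch — the experiment name and split below are invented for illustration, not Cancer Research UK's actual setup:

```python
import hashlib

def ab_bucket(user_id: str, experiment: str,
              variants=("control", "treatment")):
    """Deterministically assign a user to a variant by hashing the
    user id together with the experiment name. The same user always
    gets the same variant, with no assignment table to store, and
    different experiments bucket the same user independently."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Marketing can then compare conversion between the two buckets while dev ships both paths behind the flag — which is the loop-closing Lewis is describing.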
Jonathon Wright And that's what happened. In the U.K., the I.T. director — and this was one of the big things — moved marketing from the floor below into the teams. We had to mix everybody up so that nobody was sat next to someone with the same role. At the time I thought, this is nuts. Then he put up a board which said that if any of the teams had extra capacity within their sprint, marketing could come in and prioritize what was most valuable for them to get in next. And that was brilliant, because they had seen how a campaign performs. If you're running a sneaker campaign, they'd look at something a little differently and say, okay, actually, we'd like people to be able to add their sneaker into their profile, like you would with the Nike app — can you do that within the last three days of the sprint? And the team would go, oh yeah, that's just an extra field — leave it with us. It feels like marketing, the business side, this broader Ops kind of approach — you could have bolted anything onto Ops or Dev just to get some traction on it. You're absolutely right: whether it's marketing, or whatever the activity is, or having your actual scientists in the room — part of it is that you need to flex with what's required from the business. Not I.T. for I.T.'s sake, but I.T. to serve a purpose for the customer.
Lewis Prescott So, we ended last year by running a code club — going into schools and teaching nine to 11-year-olds how to code. And I think that's the future: all these kids are going to learn to code as if it's another language, like we learned languages at school, and then every job role will have the ability to speak to programmers. They'll have the ability to write code themselves, so it will just be one of the core sciences, really. And then you don't have that barrier anymore of someone going, "Oh God, I have to speak to the development team — I don't understand the code." You won't have that anymore.
Jonathon Wright It's fascinating. The whole STEM concept is about preparing the next generation, and it must be really rewarding. You probably get lots of feedback on things you hadn't really thought about — a fresh perspective that makes you ask, well, why is this the way it is? Do you feel that was the case — that some of the kids had great insight into lessons that maybe took you a whole career to learn?
Lewis Prescott Yeah, the kids have a completely different view on it. At code club you gamify everything so that they keep their interest, and the kinds of things they focus on, you just wouldn't think about. You're thinking about the end goal, but half the time they just want to color in the character or make it spin around in a circle infinitely. Their creativity is just huge. And I think, especially working in QA, we can get stuck in "okay, this is how it's meant to work, and this is the output that we're looking for." But actually what we should be doing is thinking about those hundreds of users using this in different ways — what's a random thing they could come up with? So yeah, the kids really gave me a good insight: what you can actually do with something is endless, and it's never just what it says in the rules and instructions.
Jonathon Wright And I think, again, there's a fascinating topic in there, in the sense that part of having a role or a persona or a silo is that you can feel fairly restricted. If you're looking at something and you think, if this was done in a different way it would be better — is it your place to go and speak to somebody like the product owner? I guess this is why product and engineering are going to come together: the product team has a view, and that view can be wrong. When I was based in Silicon Valley I was a product owner, and I believed that I knew what people were asking for across our customers. But that didn't mean I was right — I could have been taking the product in a completely wrong direction, because I was working from a sample of the customer base. So who should be challenging whom? There's no longer a clear structure for that. A few years ago we would have called it innovation labs: anyone within the organization could say, you know, why don't we give away free coffee and see what happens? We've all heard that story from Waitrose, part of John Lewis, where anyone was able to suggest a new innovation. So they rolled free coffee out — and now they're rolling it back, because Waitrose had become the third biggest buyer of coffee in the U.K. Interestingly, Starbucks didn't even make the top five; you've got McDonald's near the top just by sheer volume, and then suddenly Waitrose right up there.
So Waitrose is now buying coffee, which is a commodity, at enormous scale. And I think the point that finished the whole thing off was the Canary Wharf branch — famously, you come up out of Canary Wharf station and walk straight into Waitrose. They had six machines constantly spinning out coffee, with people just going in and saving themselves six pounds on their usual coffee shop before heading into work. And they became the first branch ever to make a loss. So even though it was a great idea, there was no way to prove the hypothesis — that footfall and basket size increased because of the coffee — against how much money they were spending on giving it away.
And there was another idea that sounded great: free Wi-Fi, which is a normal kind of thing to ask for. But at the implementation stage the team put it on the same network as the tills, or so the story goes, and it clearly backfired and cost a huge amount of money, because people were downloading the latest episode of Game of Thrones or whatever while they drank their free coffee. So there are certain things which all sound like a really good idea, but until you can prove them in a controlled environment and continuously monitor them, there's no way to validate that it actually is a good idea and should continue. And I think this is the next challenge for brands. Brand is so important — for something like Cancer Research it's vital to keep that brand integrity, to not be on the news for doing something they shouldn't be doing, but to be seen doing good. And I love what you said about gamification. I run a gamification workshop with my team every Friday. One of the examples is a similar kind of charity trying to get people to do more exercise. Pre-COVID, they were giving people points for going swimming — if you went to the baths you'd get X amount of points, you'd scan in, or you'd organize a group where you'd all go together, whatever it was. And they gamified the app to make people want to keep coming back. Unfortunately, due to COVID, they're now looking at other ways to get people to exercise and stay healthy.
I think gamification — enterprise gamification — is essential, because people respond to game mechanics, and if you're looking at engagement rates, you want people to keep coming back. Take donors giving to cancer research, which is a great cause — part of it is giving them badges. Angie just welcomed the fiftieth member of her Test Automation University community, and she did a live event yesterday where she handed out badges. These were physical badges — literally limited edition, fiftieth-member badges. And in the same way, I think people will keep coming back because of the badges, or leaderboards — being, even anonymously, in the top 50 or something. It's something that keeps people going, and I think that's a really interesting dynamic.
Lewis Prescott Yeah. So one thing we're rolling out at Cancer Research is adding Strava integration to the fundraising pages. So now you're gamified, right? Your fundraising page doesn't just say "I've raised this much money" — it shows "I've done this much activity, and here's what I'm doing next." And Strava facilitates that perfectly: you get the league tables right in front of you, the top ten in your section and things like that. So hopefully we'll bring that kind of competition to our fundraising pages.
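The Strava-style league table Lewis describes boils down to aggregating activity per fundraiser and ranking it. A toy sketch — the field names and data below are invented for illustration, not the actual fundraising-page schema or Strava's API:

```python
def league_table(activities, top_n=10):
    """Aggregate (name, distance_km) activity entries into a
    top-N leaderboard, highest total distance first."""
    totals = {}
    for name, km in activities:
        totals[name] = totals.get(name, 0.0) + km
    ranked = sorted(totals.items(), key=lambda item: item[1], reverse=True)
    return ranked[:top_n]

# Example feed: Amira logs two runs totalling 15 km, topping the table.
runs = [("Amira", 5.0), ("Ben", 3.2), ("Amira", 10.0), ("Cleo", 12.1)]
```

Showing a ranking like this next to the amount raised is the competition mechanic he's hoping carries over to donations.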
Jonathon Wright Yeah, and I think another really interesting aspect of gamification is sharing on social media — which I'd call the Wild West at the moment. If I were in an app like yours and wanted to share that with my friends — say, "you should go and do this" on JustGiving or whatever platform they use — tracking that, and then telling people, actually, you've got the most shares, or you're flying. When you look at the marketing division, they're incredibly good at social analytics: they've got all these dashboards, they understand influencers, they understand the reach of your network. But we don't treat that as a frontier for quality. We're not capturing all the times someone gets a bad error message — it seems a bit strange, so we don't really investigate. Someone just takes a screenshot, and maybe somebody from marketing passes it into a service ticket or something. It's unlikely operations will be monitoring it from a community-champion perspective. But all that information is out there, and it feels like it should be feeding back — noticing events happening, automatically replying to people and saying, why don't you donate, or try this.
It feels like there's this social aspect where I think we can all do more, and it's about getting that engagement — I don't think email is the future for getting people involved. So it'll be really interesting to see, especially once you roll that out, what the dynamics look like when people can see leaderboards, when they donate, and how that keeps them going. Like I said, I've been doing this since the 90s — 30 years — and I'll keep running my community group because I don't want to lose my standing in the community. Does hosting everything make my office incredibly hot in summer? Yes. But it's something I'm happy to do, and hopefully it makes a difference. It's that same thing: people will do it if they can see the benefit, if they can see they're making a difference. And with something like Cancer Research UK, people really are making a massive difference — by donating, through the work you guys are doing, and in how you engage and take the technology to the next generation. So it's been absolutely fascinating to chat with you, and we'll have to get you back on once you've done the rollout, maybe to talk about the gamification and what your next big challenges are.
Lewis Prescott Yeah. I'd love to come back on and talk about that.
Jonathon Wright Fantastic. Well, it's been an absolute pleasure, Lewis. For the listeners out there, what's the easiest way to get in touch with you? Is it LinkedIn, or how best to contact you?
Jonathon Wright Or to ask about your coding workshops — though most of them are a bit outside the age bracket, I guess.
Lewis Prescott Yeah. Yeah.
Happy to do adult sessions as well. Just reach out and I'm happy to offer my expertise.
Jonathon Wright Well, thanks. You're doing some fantastic work, and it's been a pleasure to have you on the show. Thank you.
Lewis Prescott Thank you so much.