Related Links:
- Subscribe To The QA Lead Newsletter to get our latest articles and podcasts
- Join the waitlist for The QA Lead membership forum
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Intro: In digital reality, evolution over revolution prevails. The QA approaches and techniques that worked yesterday will fail you tomorrow. So free your mind. The automation cyborg has been sent back in time. TED speaker Jonathon Wright's mission is to help you save the future from bad software.
Jonathon Wright This podcast is brought to you by Eggplant. Eggplant helps businesses test, monitor, and analyze their end-to-end customer experience and continuously improve their business outcomes.
Hey, welcome to the show. I've got Brendan all the way from sunny California. Oh, not so sunny. I'm hearing it's raining out there.
Brendan Connolly That's right. Yeah. The first time in a long time we got some rain.
Jonathon Wright Lovely. Well, you know, it's been great. I know we're talking on Zoom today and you've got the backdrop of Prague. Were you over in Prague for the EuroSTAR conference?
Brendan Connolly That's right, I was over there for the EuroSTAR conference for the Rising Star Award, so I did get to go to Prague. That was a big benefit, especially for someone from the States, to go to a city like that. It's a little bit different from California, and that's a nice change of pace.
Jonathon Wright Absolutely. And the beer is great as well. So that's always a bit of a benefit.
Brendan Connolly It certainly is.
Jonathon Wright And it's interesting because, obviously, back in February you got the Rising Star Award from EuroSTAR for your writing on inclusive automation. I'm sure the listeners would be really interested to get a good overall view of what that's about.
Brendan Connolly Sure. I don't know about other people, but part of my journey into testing and automation was that I didn't feel good enough to code coming in. I wasn't ready in college to learn computer science. I was scared, didn't think I could handle learning to code, and was intimidated by the idea of having to do all the math. But when I entered the tech space, I did what a lot of testers do and kind of fell into quality, going into quality assurance as part of my job.
More and more, I wanted to learn about what the developers were doing, so I learned to code. And I'm not opposed to technology or the push to automate. I'm actually a big fan. I like to code. I got so far into code that I taught myself enough to be productive on the team, pushing in features and shipping my own designs. But I never lost that feeling that we're leaving people out.
So if you think about this whole inclusive automation thing, the real statement is the opposite: automation as it's usually practiced is exclusive. It excludes the tests that we don't want, that we don't think are good candidates for automation. It also excludes third-party options and whole categories of tests that people need to run.
But a lot of the time it also excludes the people doing the testing. There's a big disconnect between the automation that gets written and the people who execute tests and do the hands-on exploratory testing. So inclusive automation is really a push towards bringing testers into automation and building automation that supports testers in that journey towards quality.
Jonathon Wright You've got this idea of creator, executor, and consumer personas. I really like that, because we've blurred the boundaries. Should all those activities be done by a single person, maybe an automation specialist, or what is the team's involvement there?
Brendan Connolly I'm kind of embarrassed to start talking to people about it, because I've been in testing for about 10 years and I was still kind of confused about the real value. I had really suspect ideas about the value that automation was adding and about people's approach to it.
So part of unpacking those three personas was hoping to get clarity on it, because if you think about who the consumer of the automation is, you can start to get at what answers people are trying to get from it: what questions are being asked, what that automation is doing, and how those questions are being resolved. Because part of what I could feel was, I understand why management would want automation.
I didn't understand why a line-level tester would want automation, because I'm going to end up doing a fair amount of the tests that are automated as part of my exploration anyway.
It might not be the exact flow, but I'm still going to need to do basic CRUD operations as I go through exploring a new feature. So this idea that it's going to take tests away from me and make my life easier didn't really sit 100 percent with me. I understand there's a waterline it establishes, like, well, this can run and be OK and we can go from there. But I didn't see it as "I don't have to run this category of tests anymore."
So breaking it down, you can start to see, with those different relationships, where the black box around automation starts to lie. If your creator is an automation team that's separate from your testers, neither side really understands what's going into it, and everybody's operating on good faith. So there are a lot of places where communication can break down.
But the opposite is true too, right? If a tester is writing their own automation, executing their own automation, and then using it themselves, they're their own black box. That's why you see so many test automation efforts fail: it's written by one person and run by one person. All the ideas, the information and answers it provides, and how it's run are wrapped up inside that one person, so other people are missing part of the information they need.
Jonathon Wright Yeah, looking through your website and blog, you talk a little bit about test-driven testing and this idea of thinking about it a bit differently so that people can understand what tests you're actually writing and their value. So, on the TDD approach, I know you're a big fan of the ATDD and exploratory testing work from Elisabeth Hendrickson. Do you feel there's somewhere in the middle that works best for you?
Brendan Connolly I think so. I think there are kind of two parts. I do value TDD as a test practice for developers; having that safety net and that rapid feedback is important. And I think that same level of feedback is something a lot of automation can aim for, giving testers the kind of feedback cycle developers have, so you can get information faster.
But I think where testing often falls down is that we can't provide insight into our actions. We're, as I keep saying, a black box. We're a black box to external people: they don't know what we do, and we can't always unpack what our motivations are behind the tests that we're doing. So some of that test-driven testing was about getting testers to start iterating through testing rather than having these massive journeys that are hard to unpack, even for themselves.
Or just wandering through an application and hoping to find something. If you have clearer expectations upfront, you can take more of a session-based testing angle, if that makes sense.
Jonathon Wright Yeah, that makes loads of sense. I know you were just reading through Test Automation Patterns by Dorothy Graham. I remember helping to put the wiki together, and part of it is that those design patterns cover all the different stages, from the design angle all the way through to the reporting angle, which is really important when you're using this inclusive automation, because, as you said, the managers want automation.
Proving the value to everybody else is a difficult one. You have to be an ambassador for that kind of automation. And I completely agree with you: part of the challenge for automation is, well, what are you doing? What are you creating? What is it proving? There's that whole minesweeper view of hitting things straight down the middle, and you may or may not get anything. From reading through Test Automation Patterns, did you come to any conclusions, or did you find it helpful in what you do?
Brendan Connolly Yes, it did. But I actually really struggled with the wiki at first because I was overwhelmed, and I was coming at it from the developer side, expecting the Gang of Four design patterns and that kind of approach. But I can't recommend Dorothy and Seretta's book enough to walk you through using that wiki, because it uses a narrative style.
I was a little curious how that was going to play out, but it really shows you how to use those patterns and unpack what you're doing. It's treating automation as a tool: we have this problem, this is what we as a team are encountering, what are some potential solutions? I think that helps, because we tend to think about automation as a cure, and it gets pitched like it's going to solve the testing problem. But it's just a tool.
It's a piece of tooling, and its success or failure is, more than anything, based on how your team supports it and takes it on. So anything that pushes you towards that mindset, being problem-centric and tackling things one step at a time, helps. Like The Martian, right? Solve the problem that's in front of you now, and then address the next problem, rather than chasing some grand automation dream. I've been really, really enjoying reading that book, and the wiki as well.
Jonathon Wright Oh, you'll have to check out Experiences of Test Automation as well, by Dorothy. It's a great book, and we've talked about Dorothy on the show before as being kind of the grandmother of test automation, with the original software test automation book and the patterns behind it. It's interesting how things have changed over time, I feel.
We had Milan from Perfecto on the show last week and he recommended a Slack channel that's just been set up, called Test Automation Experts, which anyone can join. I'd recommend listeners go and check that out. I know you come from a tech background and you've been using a lot of tech: Jupyter Notebooks, AI, Airbel, and all that kind of good, cool stuff. Are there any cool tools that you've found really helpful for articulating this, as far as sharing your ideas and your concepts?
Brendan Connolly That's actually been one of my struggles: people have a very concrete idea of what automation is. So anything that doesn't follow "you click it and it runs everything" takes some mental adjustment before people get comfortable with it. But actually, from that other book, Experiences of Test Automation, I think Mark, one of the people with a story in there, describes using a circuit board where they'd automated part of the testing, and there was some stress testing that required physically manipulating the circuit board, so a human had to be part of it.
A human came in and adjusted the circuit board, and then the automation picked up again from there. What was really interesting was that I had a chat with Dorothy and she brought Mark on with us, and we went through some of that. The way I'm using Jupyter Notebooks isn't exactly the same; we don't have circuit boards.
But what it does let you do is automate certain steps along your journey through your application. There are lots of tools, but I think Jupyter is really exciting. One, because it pulls you into the data science space and the visualizations there, and it keeps you current with where technology trends are going. But it's also got a really fun side effect of putting a tester in control of their automation.
For people that don't know what Jupyter Notebooks are: you can mix executable code and markdown, so you can write up descriptions and then run executable code underneath or alongside them. It's almost like having an executable wiki page. So if you want to set up a certain persona that you want to test with, and you want to guarantee that people use that test persona, you can have a description of what that mindset might be, plus the code that logs in as that user and sets up the state of the system. Then you can start exploring, because the browser is live with you. You can run some steps, and in another cell you can have a grab bag of functions.
Maybe you have some helpers in there, so you can manipulate data while you're driving through, and you can do that in step without switching context. That's a big struggle in testing: I have to test a little bit, change over to look into some things, maybe set some data up using Postman or some other tool, and then come back and try to pick up where I left off. If I can keep everything in one place, I have something like a REPL, a read-eval-print loop, so I can keep changing and modifying things as I go.
So I'm really, in effect, using code to make my journey through exploratory testing faster, and you still get the benefit. I like to think of it as stretching the ROI, because now more people can use that same automation. For Jupyter, we bootstrapped some of our page objects into a single file. We're using Ruby at my current company, so we load that up and have full access to our API and page objects inside the Jupyter Notebook. We can ad-hoc script or use existing templates. It's really, really useful. I would encourage people to check it out. I've got some videos on the topic too; hopefully more coming soon.
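To make the notebook idea concrete, here is a minimal sketch of the kind of cells Brendan describes, written in Python with Selenium and requests rather than the Ruby setup his team actually uses. The persona, URLs, and the invoice API are hypothetical stand-ins, not his real system.

```python
# Sketch of a Jupyter-style exploratory session (hypothetical names and URLs).
# Cell 1 -- in the real notebook this would be a markdown cell describing the persona:
#   "Log in as a project-manager persona with limited permissions."
from selenium import webdriver
from selenium.webdriver.common.by import By
import requests

BASE_URL = "https://staging.example.com"          # assumption: some staging environment
PERSONA = {"email": "pm@example.com", "password": "not-a-real-secret"}

# Cell 2 -- start a live browser and log in as the persona.
driver = webdriver.Chrome()
driver.get(f"{BASE_URL}/login")
driver.find_element(By.NAME, "email").send_keys(PERSONA["email"])
driver.find_element(By.NAME, "password").send_keys(PERSONA["password"])
driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
# The browser stays open: from here you can explore by hand or keep running cells.

# Cell 3 -- a "grab bag" helper: seed data over the API without leaving the notebook.
def create_invoice(amount: float, vendor: str) -> dict:
    """Create a test invoice via a (hypothetical) REST endpoint."""
    resp = requests.post(
        f"{BASE_URL}/api/invoices",
        json={"amount": amount, "vendor": vendor},
        auth=(PERSONA["email"], PERSONA["password"]),
    )
    resp.raise_for_status()
    return resp.json()

invoice = create_invoice(125.50, "Acme Supplies")
driver.get(f"{BASE_URL}/invoices/{invoice['id']}")  # jump the live browser straight to it
```

Because each cell can be edited and re-run on the fly, this behaves like the REPL-style loop he mentions: explore in the live browser, seed more data, and carry on without switching tools.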
Jonathon Wright That sounds really good, actually. I love the idea. To me, it blurs the lines with business process automation, especially with RPA, where as an organization you start automating things not only throughout the lifecycle but actually in the operational space. The idea is more lean engineering: reuse those automated components.
So if you're doing something which is a shim or a stub at the start, and then it gets all the way to the point where you're flowing data into the back end or creating an environment, you've got this idea of neat reuse. I really like the idea that you can mix them together, and then everyone can use them and become more of a power user.
That's a really powerful message around ROI, because when the term test automation comes to mind, it feels like it's shooting for a testing-only activity which can't be reused, not the wider business opportunity of automation. Do you feel you could see any of these capabilities used for, say, synthetic testing in operations, after you've finished executing?
Brendan Connolly I do think it fits really, really well. It's super flexible, and it gives you that breadth of code. You can point it at any environment you need to, as long as you have the access you need, and you can funnel data through it. We've been using it for scenario-type tests, or we can mock up test cases, because people always want to have test cases.
It makes people feel comfortable to have the idea that we're going to document the steps somewhere. What I think it really is, is a behavioral store: we want to feel confident that this is how our application functions, and we want some kind of document for that. I guess we don't trust product or customer training materials to do that, so we want a testing-centric version of it. But if developers had to document the behavior of the application in a separate place and run their tests separately from that, developers would riot, right?
You wouldn't maintain a separate system with just lists of steps describing how the application is built. So I think if there's anything we can do to make our testing executable at the same time as documenting it, we can actually get some real value out of the artifacts we're creating. Our test cases can act as living documentation because they can step somebody through, and that supports the idea that we can hand off some testing to other people, like in times of high load. This kind of gives you that scaffolding to start from.
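One way to picture "test cases as living documentation" outside a notebook is a test whose docstring reads as the hand-off steps. This is only a rough pytest-style sketch; FakeApp is a made-up stand-in for whatever page-object or API layer a real suite would use.

```python
# Sketch of a test whose docstring doubles as hand-off documentation.
# FakeApp is a hypothetical stand-in; a real suite would wrap page objects / API clients.
import pytest

class FakeApp:
    """Hypothetical stand-in for the real page-object and API layer."""
    def __init__(self):
        self.status = "Draft"
    def create_invoice(self, amount, vendor):
        return {"id": 101, "amount": amount, "vendor": vendor}
    def login(self, role):
        self.role = role
    def approve_invoice(self, invoice_id):
        self.status = "Approved"

@pytest.fixture
def app():
    return FakeApp()

def test_approve_invoice_as_project_manager(app):
    """Approve an invoice as a project manager.

    Steps a person could follow by hand:
      1. Seed an unapproved invoice through the API.
      2. Log in with the project-manager persona.
      3. Open the invoice and approve it.
      4. Confirm the status reads 'Approved'.
    """
    invoice = app.create_invoice(amount=125.50, vendor="Acme Supplies")  # step 1
    app.login(role="project_manager")                                    # step 2
    app.approve_invoice(invoice["id"])                                   # step 3
    assert app.status == "Approved"                                      # step 4
```

The point is only that the same artifact is both the executable check and the document a person could follow, rather than maintaining a separate list of steps.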
Jonathon Wright Yeah, I love the idea. One of the pet peeves we had, and I think I mentioned this, when I was based out in Santa Clara: the people on our R&D team were extremely bright, but the thing is, they hated doing documentation. That idea of living documentation became an activity we had to do afterward: OK, someone has to go and fully write the technical documentation against each version.
As part of, let's call it, release notes, but it shouldn't be. I've seen a lot of really cool products that just lack the support and documentation, and it's so difficult, especially when the version is changing so quickly, to actually get an understanding of how to install it, set it up, configure it, or even use a feature. We ended up with a definition of deployable, of shippable, that included this idea that they've got to record a video of the feature, which they did.
All the assets then go into the document repository, and the technical documentation people would put them against the version. That seems like a really sensible idea, because at the end of the day the customer is going to want to use it, and if they can't find out how to use that feature, they may never know about it. And then the telemetry side of things asks: did they actually ever use it?
Did they need support while using it? It seems to me there's still a disconnect between our activity through the software development lifecycle and the operational side of software once it goes live, and the feedback and learning from that. What you're suggesting could potentially fit that really well; it's such a great idea for reuse.
And even within that technical documentation, you could use those scripts to help the end-user by guiding them through building up a sandbox or virtual environment where they can do their education and training. It seems like a really good idea. Within your current role, is this something you're going to keep building? You said you're taking it a step at a time; do you feel you're still building on that idea and concept?
Brendan Connolly Yes. I use it every day, and I'm kind of a shameless promoter of it across our teams. I work on a microservices-enabled squad, and we're a construction management platform, so what we deal with is integrating accounting systems with our financials back end.
We've seen COBOL in the news lately, so you can imagine what kind of legacy systems are out there and how challenging they are to work with. Well, you can imagine accounting systems where we have to actually go in and populate data on the financial back end on that side. It's challenging. So I have two systems: our main application, and the microservice we own sitting in the middle as a translation piece.
And then the third-party system. That's a very bad fit for automation, because I can't control the third-party system, but there's a lot of complexity that we have to train people on. There's a back and forth over REST and webhooks through the application, so you're crossing certain boundaries. People were trying to do some automation there, but it's almost always going to be flaky, because we're dependent on the speed and reliability of,
ultimately, the third-party system. But using the Jupyter Notebooks, I can compress some of the complexity of how to do things in the third-party system into documentation. I can document that in line with some of my workflows, because the workflows stay fairly stable throughout the work process. I can cross the service boundaries, have manual steps where I need them on the ERP system, and come back, and I still have automation there, so I don't have to deal with mindless tasks.
I can just focus on what has changed and still have documentation of the behavior, something I can hand off to other people, to share the love of testing some of the more complicated pieces of the software. So you can see the multiple benefits there: you get some automation, you get faster, I get the benefit, and then I get the ability to hand off work. But I think having that whole application of automation is what really makes the difference for bringing testers into that loop.
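The paused, cross-boundary flow Brendan describes might look roughly like the sketch below: automate the steps you own, hand the ERP step to a human, then resume and verify the sync. The endpoints and field names here are invented for illustration, not his actual integration.

```python
# Rough sketch of a paused, cross-boundary flow: automate the steps we own,
# pause for the manual ERP step, then resume and verify the REST/webhook sync.
# All endpoints and field names are hypothetical.
import requests

BASE_URL = "https://staging.example.com"

def create_invoice_in_app(amount: float) -> str:
    """Automated: create an invoice in our own application via its API."""
    resp = requests.post(f"{BASE_URL}/api/invoices", json={"amount": amount})
    resp.raise_for_status()
    return resp.json()["id"]

def wait_for_manual_erp_step(invoice_id: str) -> None:
    """Manual: the person running the notebook posts the invoice in the ERP by hand."""
    print(f"Manual step: post invoice {invoice_id} in the ERP system, then press Enter.")
    input()  # in a notebook this pauses the cell until the human confirms

def verify_sync(invoice_id: str) -> None:
    """Automated again: confirm the sync brought the ERP status back to our side."""
    resp = requests.get(f"{BASE_URL}/api/invoices/{invoice_id}")
    resp.raise_for_status()
    assert resp.json()["erp_status"] == "Posted"

invoice_id = create_invoice_in_app(125.50)
wait_for_manual_erp_step(invoice_id)
verify_sync(invoice_id)
```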
They need to see and feel it for it to really be meaningful. If it just runs every night, or it gets run on the build and deploy, even if it's running against production, I don't really have a firm sense of what value that's giving me or how it's helping my day-to-day. I know collectively our quality might be better and our customers might feel more stability.
My day-to-day isn't impacted until it starts affecting how I can test. So the more we can do that, the better. I think automation engineers especially have the ability to translate across those domains, to management and to testers, and to give a better, more holistic view of what automation can be, rather than just peddling quantity.
I think quantity is a race to the bottom, because you can say you can scale your tests and run them in parallel, but ultimately, who is that serving? I mean, I get it, and I understand why people want to do it. I think at a certain point we're doing it for bragging rights, because adding automation everywhere makes us feel like we're doing solid engineering work.
Jonathon Wright It is an interesting one, isn't it, the balance between coverage and automation? If you've listened to the show, you'll know about my background in model-based testing. I love model-based testing because it gives you the ability to say, well, if I generate all paths, yes, it's going to be 30,000 tests.
But in actual fact, the core contract testing I want to do for the changed functionality, based on some test-pack impact analysis, covers this module and that one. It's a bit more risk-based, a bit more strategic, and hopefully finds value, especially when you've got that overhead of root cause analysis, pinpointing issues, and weeding out false positives from the things that genuinely need to get fixed.
I think you touched on this a little bit: people needing that reassurance around documentation and the TDD side of things. I've always struggled to show the value of automation through some kind of reporting, because there are so many different tiers.
If you take it to marketing, it's "are we nearly there, are we nearly ready to ship?" If you take it to the financial side, it's "what is the potential loss of revenue if this system goes down in production and you can't connect to this third party, and what's the risk associated with it?" How do you manage those various stakeholders and articulate the value of what's being done?
Brendan Connolly That's a fantastic question, and I don't really know. I've struggled with it; I've faced that same type of problem where I could kick out a report, but it doesn't change anything: I still need to give context to people. I think that's where understanding your consumer, and what service your automation is providing, is really, really important. And I'm not sure there's even going to be high-level generic advice. I think it's going to vary by team, and that's fine.
I came in as a technical person, more into technical testing, but more and more it's all about communication. If you can't communicate the value and have that conversation with somebody, and you don't understand what they're trying to achieve, it's going to be really hard to encapsulate that in a deliverable. I'm happy to hear about the models you talk about: what's different with model-based testing is that there's sophistication and understanding behind the quantity.
I don't think that level of sophistication exists across much of the test automation spectrum. I just don't. So much of what I've seen across teams, going around talking at conferences and talking to folks, is that that sophistication isn't always present. It's "here are lots of tests," and then they feel better, so they can have conversations with other people and feel like they're operating at Google scale. That's where I struggle. I wish there was something; Allure does really pretty graphs and things like that, but showing stakeholder value with automation is, in many cases, still a struggle.
Jonathon Wright Yeah, I think there's definitely something there. You're right, it's essentially about how you show what you're doing and do it together with the rest of the business. As in: we've got a goal, and that goal is we're going to bring in this new piece of compliance or regulatory change, which needs to happen throughout the process. There are KPIs, and cascading KPIs for different stakeholders, to say, oh, you're there.
Are we ready? Is it shippable? What confidence level do we have? And that covers managing communication. I think you've just nailed it, in the sense of articulating: well, I looked at the Swagger spec and I've done all the would, should, could scenarios there, and I feel like I've got the right kind of blend of negative tests in there.
I've got some data that could potentially occur, and I've tried some of that kind of stuff. Now we're focusing on this area, which is the next stage as part of integration, or the part of the system that's ready to come together for a release. There are so many different phases of proving. I guess that's the whole thing: it's the experiment, and the hypothesis behind that experiment. Why are we doing it, and when do we know that we've proven it to a certain amount of confidence? And yeah, you're right.
It isn't about the number of tests. It's a bit like load when you think about non-functional testing: people have this idea that it's all about running a million users simultaneously. We're seeing now that your Walmarts, your big supermarkets, their mobile apps are falling down, because they never looked at scale from that angle.
They didn't expect the amount of throughput coming through the mobile apps and the service, or how the mobile app scales compared to the traditional web side of things. But there's this perception that more is better, and I think that's the view we've got to change. I remember working for a big financial services company and we had twenty-seven thousand tests.
They were really, really proud of those 27,000 tests, apart from the fact that they ran in two-week sprints and it took four weeks to execute the scripts, because they were all UI-based against an SAP system and it was slow. Part of it was: what is the value of that regression? I know some of your slides and presentations are around regression. Do you feel that's a bit of a bugbear as well?
Brendan Connolly Yeah, I think for most cases we need a handful of tests that are accurate and describe the core functionality, so our customers are safe. But as testers, I think we want to stay focused on getting a greater understanding, making sure that our behaviors are in line and that we're serving our customers, because that contract is what keeps our customers coming back, paying us, and keeping us in business.
So we don't need to justify it through engineering, and we don't need to justify it through quantity of test cases. But we do need to protect those contracts and make sure we're not changing things in an unexpected fashion. So often we're pushing forward, pushing forward, making our incremental changes, making things easier for us to ship and to release. But is that really what our customer wants? Is that within the risk tolerance they're accepting? Because your customers vary: like you saw with mobile, it changes the way you do things, and high-end enterprise customers may not want that level of change.
So while you think you're bringing value, you might actually be diminishing your reputation with them, because it's very hard for them to distinguish frequent change from problems they have yet to face, or to know whether the team building the software is heading in a direction where there's any light at the end of the tunnel past the MVP and everybody's happy. I'm not sure I actually answered your question.
Jonathon Wright No, no, I think you did, and I think it opens up another question. And you know me, I'm not going to advocate a "no testing" kind of approach.
But one of the things I've personally found quite recently, being at home and relying on technology for food and basic necessities: I've always been a big advocate when it comes to enterprise crowd testing, and crowd testing in general. I think your best crowd testers are the people using your application. Of course, you don't want to damage your brand by shipping a product that's not up to a certain level, but your early adopters are probably more tolerant of
things not working as expected, so you release to your alpha or beta access or whatever it is. But all that Twitter feed and all that social analytics information: do organizations really take that, put it into ServiceNow, start prioritizing it, and start trying to fix it? Because actually, you've got thousands of testers out there on your website.
Are you using that and looking at the information? Is there a better way of adding things into an app that give you crash analytics or allow people to actually give you feedback about the actions they took? Things like TestFlight, where you can say, oh, well, what happened next, can you just give me an idea or a screenshot? I think we should really leverage the actual user base and do more with what's actually happening, even if that's first-line support for you, a bit like what you're doing with the back-end third parties.
How do those ticketing systems work? What's the priority of what's coming through from the service desk versus the new functionality coming through the door? And how do you balance that, especially when the service desk represents paying customers with both social and revenue weight behind them, while with new features, yes, we all want to work on exciting, shiny new things? But if that's what's important to the business, and I think that's what you've just highlighted, the top motivation is potentially creating a better product by learning from the mistakes that have been made.
And then doing something about them, not keeping the learning isolated in team silos, but looking at how the entire organization is changing and dealing with the challenges, and this kind of celebrating of failure, which is something organizations struggle with when something goes wrong: the compliance issue, the software security issue, where the answer is often just a different tool to make sure it doesn't happen again. Maybe it's about learning from it and then putting in the procedures so that a data breach doesn't happen again, or a quality issue doesn't get deprioritized just because of the financial aspects of it. They're still your users, and you should really still care about what they're finding.
Brendan Connolly Yeah, and you've highlighted a need for strategy. Testing is part of that, automation is part of that, monitoring and rolling out your changes are part of that, and monitoring their success: when do we need to reverse, and where do we need to keep pushing forward? I think having the team pulling in one direction and understanding what success is, what pre-delivery looks like, and what our expectations are around this deploy is more important than just being sure we have enough tests.
I think as testers, that's something we need to be more involved with. Data science, and understanding both how our users are using our application and the analytics it provides as our applications get more and more plugged into observability, being able to slice and dice those numbers, see trends, and understand how our customers are consuming our product, should serve as a feedback tool directly for our testers.
Right now a lot of UX folks are using that data. I'd really like to see testing pulling in more of that analytics side. It ties in with something I've been trying to get to: pulling some of our third-party analytics clients inside our testing, so that we can see some of that data come back while we're testing, to make sure that the data processing and the metrics, how we're measuring our application, come back inside the tester's feedback cycle as well. I think that's where the quality strategy would be.
Jonathon Wright Yeah, I think you've just highlighted a whole new set of disciplines and patterns that need to be addressed, really. Just as you have code hygiene, we're talking about test hygiene as well: when should a test die, when is it no longer fit for purpose, when has it been refactored so many times that it's too far away from the actual user behavior? I think you've highlighted a really interesting trend, and I'll definitely be keeping an eye on what you're doing next. Have you got any conferences lined up for 2020 that haven't been canceled yet?
Brendan Connolly Actually, just this week I was told there was one that was canceled, CP Con, which is a bummer, but it's been a slow roll. I'll be at EuroSTAR if that still happens in November. We've got some potential things in the pipeline, but we'll see. But yeah, it's been a real challenge this year getting out there to talk about things.
Jonathon Wright There was one happening at a zoo. I didn't put my application in, but I did some crowd testing. "Tested in the wild" is what I called mine, with the idea of "wild" fitting in with the animal stuff. So that was a shameless plug. So for those listeners out there, what's the best way to reach out? Obviously, in the show notes I'll put your website and also your Twitter, which you are incredibly active on, and LinkedIn. What's the best way for people to get in touch with you?
Brendan Connolly Twitter or LinkedIn, at B Connolly, you can find me, and I'm happy to respond. I didn't have a community coming into testing, so getting involved with the testing community has been a game-changer for me. So I'd do anything for anybody; my hands are open, please feel free. You can also stay tuned to the blog, where I'm trying to put out updates on how this inclusive automation process is going. It's actually been fairly time-consuming, so I haven't been quite as active on the talking circuit.
But I've been focused on trying to get a handle on what healthy automation looks like and how we can draw testers into it. I should have a repo coming on GitHub with the Docker containers that have Jupyter set up so you can try it. We're using some Selenium gymnastics to get the local browser running, so it runs a local Selenium Grid and you run a node, and you can see the browser as you drive it in real time. We should be able to get those published, and I have it for Ruby, JavaScript, and Java.
So with Jupyter Notebooks it's not language-bound; it's framework-agnostic type of stuff. Those should come along with some real-world examples against a test website, so you can hopefully see what more inclusive automation looks like. So please reach out. And if you're doing inclusive automation, anything that helps bring people into being a tester, whether it's a Chrome extension or some application that you're writing, I would love to hear about it, and maybe we can talk more and get you some face time.
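For listeners who want to try the grid setup Brendan mentions before his repo lands, here is a minimal sketch of the driving side: connecting a notebook (or script) to a Selenium Grid, such as one started from the selenium/standalone-chrome Docker image. The hub URL and demo site below are placeholders, and the published repo may well differ.

```python
# Minimal sketch of driving a browser on a local Selenium Grid from a notebook.
# Assumes a grid is already running, e.g. the selenium/standalone-chrome Docker
# image, which exposes a grid endpoint on port 4444. URL and site are placeholders.
from selenium import webdriver

options = webdriver.ChromeOptions()
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",  # grid/hub address (assumption)
    options=options,
)
driver.get("https://the-internet.herokuapp.com/login")  # any public demo/test site
print(driver.title)  # watch the session live through the grid's VNC view, if enabled
driver.quit()
```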
Jonathon Wright Well, that sounds fantastic. Thanks again so much, it's been a fantastic session, Brendan, and we'll make sure we put all that out.
I think the health of your tests is definitely worth thinking about: healthy body, healthy mind, healthy tests. Part of all that washing your hands is now going to be washing your scripts. So stay safe, and thanks so much again for being on the show.
Brendan Connolly I really appreciate it. Thank you.