This week, host Jonathon Wright is joined by two special guests. First up is Artem Golubev, Co-founder and CEO of testRigor, to talk about what they’ve been working on in the test automation space (it will blow you away!). Second is the Dark Art Wizard himself, Paul Grossman, to share his insights into how testRigor is solving his automation challenges.
Interview Highlights:
- Artem Golubev is the CEO and Co-founder of testRigor. [0:31]
- testRigor originally set out to build an autonomous testing system and what they have found out is that you can build as many tests as you’d like, but if you can’t maintain the test, then it’s all going to be a waste. [0:47]
- They basically reoriented themselves, mostly to make sure they reduce the time spent on maintenance and on figuring out what went wrong because of low test stability. [1:05]
- Instead of relying on things like XPath, CSS selectors, or even IDs, you can just express a test the way you would say it from the end-user’s perspective. [2:24]
- Paul Grossman is the dark art wizard who knows all about IDs and XPaths. [3:03]
- Paul got a tattoo of an XPath, one of the hardest XPaths you could possibly write, with so many regular expressions in it. It runs all the way down his leg. [3:13]
- Paul has a sandbox website called CandyMapper. It’s a specialized sandbox for automation engineers. [4:07]
- Angie Jones has thrown down the gauntlet for Applitools: automating one of their apps for a big event coming up. [6:44]
testRigor is focused almost exclusively on web-based applications.
Paul Grossman
- testRigor has an autonomous testing demo that works well for mobile applications. Currently they only test web and mobile, because mobile has a limited UI. [9:34]
- Several years ago, Paul and his manager would go out and do a three-day proof of concept. [13:03]
- One of the other things that testRigor does is that you can have emails being sent out and then testRigor validates the emails on its side. [13:55]
- testRigor uses a combination of natural language processing and the parser to understand what you’re trying to say and then execute on behalf of an end-user interacting with a web browser or mobile screen. [15:39]
Test maintenance has been the number one plague in the industry.
Artem Golubev
- The next thing Paul wants to do with testRigor is move into RPA (Robotic Process Automation). [35:46]
- Paul is looking forward to seeing Angie Jones in the hackathon that’s coming up. This is the third year that Applitools has been doing it. Paul has known about it for three years. [36:47]
Guest Bio:
Paul Grossman has been delivering hybrid Test Automation framework solutions for nearly two decades in ALM, BPT and TAO. He has beta-tested many releases of HP QTP / UFT and in 2009 was a runner-up in HP’s Test Automation White Paper competition. He is a five-time HP Discover/Mercury World Conference Speaker and has spoken at Maryland’s QAAM and Chicago’s QAI QUEST and CQAA, always with a live demo. He freely shares his real-world technical experience. His framework designs focus on speed, accuracy, and scalability.
Paul is currently investigating LeanFT/UFT Pro along with Gallop’s QuickLean for UFT to LeanFT script conversion.
Not everybody writes their test cases the same way.
Paul Grossman
Artem Golubev is the co-founder and CEO of testRigor. Prior to that, Artem served as a senior engineering manager at Salesforce.
The goal is basically allowing you, as a human, to express how your application should be functioning from the end-user’s perspective.
Artem Golubev
Resources from this episode:
- Subscribe To The QA Lead Newsletter to get our latest articles and podcasts
- Check out TestRigor
- Check out CandyMapper
- Connect with Paul Grossman on LinkedIn
- Connect with Artem Golubev on LinkedIn
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Jonathon Wright In the digital reality, evolution over revolution prevails. The QA approaches and techniques that worked yesterday will fail you tomorrow. So free your mind. The automation cyborg has been sent back in time. TED speaker Jonathon Wright's mission is to help you save the future from bad software.
Hey, and welcome to theQAlead.com. Today I have a very special guest from testRigor, the CEO and co-founder. I'm going to get a little bit of an intro into what he's been working on over the last five years, which is going to blow you away in the test automation space.
Artem Golubev Yeah. We originally set out to build an autonomous testing system, and what we found out is that, well, you can build as many tests as you'd like, but if you can't maintain the tests, then it's all going to be a waste.

So we basically reoriented ourselves, mostly to make sure we reduce the time spent on maintenance and on figuring out what went wrong because of low test stability. And this is where we are today. We ended up with this plain English language to express the tests. Because no matter how you build the tests (and we have a fancy system where you can deploy a library in production and map what your users are doing, to build the tests automatically), you still want to expand those. It's not enough. You want to generate data; in some cases you want to improve your validations, and things like that. You have to be able to change the tests. You can't just record something and then be done.

This is not how it works. So we ended up with this plain English system, which basically allows you to express the steps from the end-user's perspective. And all of that is mostly to deal with stability and maintainability. Think about it: you don't have to rely on things like XPath, CSS selectors, or even IDs; you can just express it how you would say it from the end-user's perspective.

Say: click on this button which is below this section. And it will work as long as however you described it is still true. That's the number one value proposition we provide to our customers right now.
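As a rough illustration of the idea Artem describes, turning a sentence like "click on this button below this section" into an executable action, here is a toy parser in Python. The grammar, the two verbs, and the "below" qualifier are invented for this sketch and are far simpler than anything testRigor actually ships:

```python
import re

# Hypothetical grammar: click "X" [below "Y"]  |  enter "V" into "X" [below "Y"]
STEP = re.compile(
    r'(?P<action>click|enter)\s+'            # the verb
    r'(?:"(?P<value>[^"]*)"\s+into\s+)?'     # optional value for "enter ... into"
    r'"(?P<target>[^"]+)"'                   # the element, named as a user sees it
    r'(?:\s+below\s+"(?P<anchor>[^"]+)")?',  # optional relative position
    re.IGNORECASE,
)

def parse_step(step: str) -> dict:
    """Turn one plain-English step into a structured action dict."""
    m = STEP.match(step.strip())
    if m is None:
        raise ValueError(f"could not understand step: {step!r}")
    # Drop unused optional parts so the action stays minimal.
    return {k: v for k, v in m.groupdict().items() if v is not None}

print(parse_step('click "Save" below "Settings"'))
```

A real engine would then resolve the target against the rendered page by visible text and relative position, rather than by selector.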
Jonathon Wright That sounds awesome. And we've got a very special guest with us as well: Paul Grossman, the dark art wizard who knows more about IDs and XPaths than most.

I hear he's got a tattoo of an XPath, one of the hardest XPaths you could possibly write, with so many regular expressions in it, all the way down his leg. So it's great to have you on the show, Paul. Welcome.
Paul Grossman Thank you, Jonathon. Oh my gosh, you've let the secret out. I've only ever told one person. How could you? Thank you very much, it's great to be back here on the show again. The last time we were talking, I think I was mentioning the magic object model. I was saying that I think there's a way that you can programmatically identify elements as you go along and write test cases in just plain English.

So, exactly what Artem was talking about. A couple of weeks ago he reached out to me and said, you know, I think I've got something that's very similar to what you're describing. As it turned out, I sat down with him, and I think I mentioned that I've got a sandbox website called CandyMapper.

We don't sell anything there; it's just a specialized sandbox for automation engineers to go and hit and see if they can automate it with their tool. I have this really short test that goes through and says: okay, launch the browser, click on a button, populate a field, verify some text. And that's basically it.

Just recently I added one other item, a pop-up that comes up in front. You clear it out and then you can move on, which is a new challenge. When I was talking to Artem I showed it to him and I said, you know, this is what I usually show to people. How does your tool work? How can you handle it?

And within about seven minutes, he had completely done it. There was no code; he just typed it out. Actually, he had me writing out the whole script as I went along, and half of it was just: okay, click on the button. It hit it, it found it. It was a relief.

I'm starting to think that tattoo wasn't really the greatest idea, because in this case I don't have to deal with the XPaths, and I don't even have to deal with iframes in order to identify these elements on screen. And there are a couple of other features in there that just blew my mind, which we can get to in a little while.

But that's where I'm at on testRigor.
Jonathon Wright That sounds awesome. One of the things which I guess is coming through quite a lot now, if you think of things like Kaggle for data science: having a platform on which you can prove the worth of tools really is a great idea.

And I do love CandyMapper. It's a great one, and I think you've got V2 and V3, whatever the latest and greatest is. But I guess this is it: it's the Pepsi Challenge, really, throwing down the gauntlet to the tool vendors to say, is it really as easy as you say? And also, as you point out, it's the maintenance, right?

As you change between version three and four, how well does it cope with that? So I'm really excited about this, and part of it is the idea of having natural language as well, so you don't have to have that long XPath. It really adds some value. And I know we're both former WinRunner gurus, with that search radius where you could kind of tweak it.

I was looking at how the testRigor app works, and you can really customize how it searches, how much it's looking at. It's really unique in that way, being able to give users a bit more control over how their app actually works with it. And ironically, I know Angie Jones has just set down a gauntlet for Applitools for automating one of their apps for a big event coming up.

I'm not sure if it's a motor event or something else, but we should get testRigor to automate that, because it'll demonstrate just how powerful it is. And I know part of your vision, you were saying, is getting to this kind of autonomous level.
But actually, thinking about it, the hardest problem with test automation is the maintenance, right? So dealing with that is a game-changing approach. One of the things I noticed watching a couple of the demos, and I think Paul just mentioned this, is that even with the calculator app, where you put your numbers in and the result comes up, it realizes when it reruns that the number is different. Now that, to me, was always the golden goose. It can't understand math. It can't go, okay, well, I know that's a plus symbol, so I'm going to start trying to calculate the logic behind it. It doesn't do math.

It literally says: actually, we put the same two digits in, data-driven, and we've seen an error come up. I think it was on 3 and 11 or something; the calculator result was different. It's actually validating the data as well as the actual UI.

That's a really powerful function, because I think most people miss that out. And I know you can, again, customize that to have however many parameters you potentially want, based on how quickly you want to run the test, because it gets very, very complex very quickly, right? So how have you gone about dealing with the data aspect as well?
Paul Grossman Well, I will say that the demo you're mentioning is actually one of my oldest demos of the magic object model, and it works on a local app; it's not web-based in that particular demo. I will say testRigor is focused almost exclusively on web-based applications.

As far as data, testRigor can actually pick up text off the web screen and save it off in a variable, just by saying something like "grab the value and store it", and then you can reuse it in other steps.

So basically you can grab from the front end and paste it somewhere else, either paste it or validate that it appears in some other area. And I believe it's got other capabilities where we can do more data referencing, so I'm going to turn it over to Artem to give a little bit more information on how that works.
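The grab-and-reuse pattern Paul describes can be sketched with a toy step runner. The page model, the step phrasing, and the field names here are all hypothetical; the point is only the variable store shared across steps:

```python
def run_steps(steps, page):
    """Execute grab/enter/check steps against a dict-modeled page."""
    stored = {}  # the variable store shared by all steps
    for verb, field, var in steps:
        if verb == "grab":      # read a value from the page, save under a name
            stored[var] = page[field]
        elif verb == "enter":   # write a stored value into another field
            page[field] = stored[var]
        elif verb == "check":   # validate the stored value appears there
            assert page[field] == stored[var], f"{field} != {var}"
    return stored, page

# Hypothetical page and steps: grab the order number, reuse it, validate it.
page = {"Order number": "A-1042", "Confirmation": ""}
steps = [
    ("grab", "Order number", "order"),
    ("enter", "Confirmation", "order"),
    ("check", "Confirmation", "order"),
]
stored, page = run_steps(steps, page)
print(page["Confirmation"])  # A-1042
```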
Artem Golubev Yeah. So basically, Jonathon, I think what you're referring to is our autonomous testing demo. It works well for mobile applications, and we currently only test web and mobile, because mobile has a limited UI. So you can do the comparison dynamically: you can compare whatever data is visible to the end user on the screen with the data which had been there last time.

And if you can't find certain data anymore, or it changed, even though you executed exactly the same steps, then you can make the system highlight it and say: hey, that might be an error. We don't know for sure, but it sure looks like it. And then you can actually mark it as: no, this is not an error, you should ignore that.

It always changes, or something like that. Or you can say: yes, this is an issue, let's create the JIRA ticket; you can click a button to create a JIRA ticket. Or you can say it's a feature, not a bug, so this is how it should work from now on.
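The run-to-run comparison Artem outlines can be sketched as a diff over the data visible on screen, with human-marked noisy fields excluded. Field names and values are invented for illustration:

```python
def compare_runs(last, current, ignored=frozenset()):
    """Return suspected issues as (field, last_value, current_value) tuples."""
    issues = []
    for field, old_value in last.items():
        if field in ignored:          # a human marked this as always-changing
            continue
        new_value = current.get(field)
        if new_value != old_value:    # changed, or missing entirely (None)
            issues.append((field, old_value, new_value))
    return issues

last_run = {"Title": "Checkout", "Total": "$42.00", "Time": "09:15"}
this_run = {"Title": "Checkout", "Total": "$0.00", "Time": "10:30"}

# "Time" was marked as noise, so only the suspicious "Total" change surfaces.
print(compare_runs(last_run, this_run, ignored={"Time"}))
```

In a real tool each surfaced issue would then be triaged: ignore, file a ticket, or accept as the new expected behavior.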
Jonathon Wright It's really cool. And going back to those days, I was on a call last week with Dorothy Graham, because she's actually come to do some work for one of the MIT contact tracing apps. And she wrote the original book on Software Test Automation, right? She's referred to as the grandmother of automation, and the fundamentals she put in that book are the same issues we're still facing today, right? How do we deal with data? How do we deal with object recognition? How do we deal with the logic behind that?

And I think there are a lot of testers and automators out there who have got used to, and are quite happy with, code. But then there's another side of testers who want to be able to write in natural language, to be able to understand it in this kind of domain-specific language so they can refer back to it.

I know, Paul, you've worked with things like health care apps, mission-critical stuff that needs to be quite auditable, so you can understand what you've read.
Paul Grossman Well, I'll say that one of the things you just mentioned was the login, and one thing that is always a concern with test automation is record and playback; everyone's going to say, oh, don't use that.

I always say it's a tool, and you need to know how the tool works. The more important thing is: do you have modular design? In fact, when you were talking about that login: we recorded it, but then we made it a modular design so that you can reuse it. We call it a rule. It's basically a business rule, so that you can reuse it in multiple different scripts. I'm seeing testRigor as a much easier way to really get into things. Several years ago I had a manager who would have me go out, and we would do a three-day proof of concept.

We wouldn't charge the client. Most projects today, you have somebody who'll say, we'll do a proof of concept for three months; if you like what you see, pay us for more. We came in for three days and said, we'll sit down, do some stuff, and demo it. And we closed on like four or five of those projects. With testRigor, with how fast it can set up a test, it could be a three-minute proof of concept rather than a three-day proof of concept.

You could almost sit down with the client and say, okay, show me your application, what might you want to do, and within a 30-minute period have some of that already done, and really condense that amount of time. I also wanted to mention one of the other things that testRigor does, right out of the box on day one: you can have emails being sent out, and then testRigor validates the emails on its side. It'll come back and show you exactly whether you validated the information correctly, whether it found it or it didn't. I thought that was actually a pretty cool feature of the product that makes things go really, really fast.
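The email-validation feature Paul mentions boils down to checking a received message for expected content. Here is a minimal sketch using Python's standard email module; the message, addresses, and expected text are made up, and a real tool would intercept the mail on its own side rather than construct it:

```python
from email.message import EmailMessage

def email_contains(msg: EmailMessage, expected: str) -> bool:
    """Check both the subject and the plain-text body for the expected text."""
    body = msg.get_content()
    return expected in msg["Subject"] or expected in body

# A stand-in for a message the application under test might send.
msg = EmailMessage()
msg["Subject"] = "Your order A-1042"
msg["From"] = "shop@example.com"
msg["To"] = "user@example.com"
msg.set_content("Thanks! Order A-1042 has shipped.")

print(email_contains(msg, "A-1042"))  # True
```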
Artem Golubev Yeah. One thing I would like to point out is that testRigor at this stage is purely functional and regression testing. We don't, by default, compare screenshots. We extract information such as text, input controls, and whatever is clickable, things like buttons and links, whatever you as a user would think of as a button, and allow you to work with sets of those elements.

You mentioned Applitools; we are eager to collaborate with them. You can use Applitools to do visual testing, because they are definitely the best tool on the market for that, while we on our side are only doing functional testing at this point. The goal is basically the following: to allow you, as a human, to express how your application should be functioning from the end-user's perspective. Basically an executable specification, if you will.

And then the system parses it. We use a combination of natural language processing and a parser to understand what you're trying to say, and then execute it on behalf of an end-user interacting with a web browser or mobile screen. You can also express validations, again on behalf of an end-user, how you would say it from a user's perspective. This is a big part of the low-maintenance stuff, right?

Because as long as however you described your specification is still true, the test will succeed and be green. As soon as it's no longer true, say the name of a button changed, or there's no button like that anymore, then it will fail and you have to adapt it, which is very easy, because in our case it is plain English, plain text.

Since it's regular plain text, you have the full advantage of using Git, or find-and-replace, or anything else you can think of. And it stays an ultra-simple specification of how you would say it from an end-user's perspective. You can think of it as if all of your Gherkin had been implemented out of the box for you, so you don't have to write any code anymore; you can just use it. And from a QA engineer's perspective, what's happening is that instead of being bogged down in the low-level details, like XPaths as selectors (I know Paul is a big fan of XPath; we're used to it, that's all), we'll come back to
Jonathon Wright programmatic descriptions too. Yeah. Yeah.
Artem Golubev The point being is that engineers no longer have to deal with that stuff anymore. So we can basically focus on actual functionality from the business user's perspective, and that makes it a bit faster to produce tests.

Especially if you're using the browser plugin to record yourself and then just modify it to make it an actual test, not just the recorded steps. And most importantly, this is what makes it more stable and far more maintainable, because remember, we take screenshots on every step. If something's not going according to plan, not only do you see that the step fails, but you'll see the screenshots from the last successful run on the main branch and from the current run.

And you'll see: okay, I can see with my own eyes what's going on. This is very clear. This is the goal; this is how we are achieving this with test maintenance. And I think test maintenance has been the number one plague in the industry. I can tell you horror stories all over the place.

I have seen companies, and we have some customers which are multi-billion dollar corporations, that tried Selenium. They built lots of tests, and they ran into the issue that over half of their tests would break every couple of weeks. So they were bogged down more than 50% of their time constantly maintaining the tests, up to a point where they calculated:

hey, if we just put the same people on testing manually, we'll get more value out of it. Literally, we'll test more and get better results. So they ended up with 100% manual testing, which is insane if you think about it. How is it that in the age of automation you have to resort to manual testing? But their application was changing so quickly that they just couldn't keep up.

And there are a lot of companies like that. With our tool, they were able to get maintenance down to very, very low to none. And this made it very valuable to use our tool, even though Selenium didn't work for them.
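Artem's earlier point that plain-text tests get "the full advantage of using Git" can be illustrated with an ordinary line diff: a renamed button shows up as a one-line change, not an opaque blob in a recorder format. File names, the URL, and the steps are hypothetical:

```python
import difflib

# Two versions of the same hypothetical plain-English test.
old = [
    'open "https://example.com/login"',
    'enter "ada" into "Username"',
    'click "Sign in"',
]
new = [
    'open "https://example.com/login"',
    'enter "ada" into "Username"',
    'click "Log in"',    # the button was renamed in the new release
]

# The same unified diff Git would show for these files.
diff = list(difflib.unified_diff(old, new, "old/login.test", "new/login.test",
                                 lineterm=""))
print("\n".join(diff))
```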
Jonathon Wright I think it's really interesting, because Selenium has kind of become this W3C standard, right? And as great as that's been, it's also been a bit of a curse, because, yes, it was the underlying, let's call it engine, to execute against, but I always saw it as a test automation engine or adapter, right? It wasn't the actual ability to test. And part of it is that people then spend a lot of time with lots of different frameworks.

I know both Paul and I are incredibly guilty of that, but part of it is that they've added complexity into the equation, right? And potential human error. And it was interesting what you said about the diff side of things, because I remember actually buying a copy of WinDiff and getting it through the post.

It actually had my name on it, and it came on a floppy disk, and I had to go out and buy a USB floppy drive to be able to install it, at a bank I think it was. But it was as simple as that: how do you understand the difference between two runs?

And what I saw with testRigor is you've obviously got your CI/CD ability to plug it straight into your pipeline. And again, it's funny; I've been doing automation now for 25 years, and it doesn't feel like we've made huge steps. It feels easy because you go, hey, I can do it in the CI/CD pipeline, but that's not accessible to everybody, because what people want is for anybody to be able to use it.
And partly going back to that: I saw something on my Facebook or LinkedIn today from Elisabeth Hendrickson, and it took me back to those ATDD days. Part of acceptance test-driven development was this natural language.

I always remember when I did a workshop at Fusion in Sydney with Elisabeth, and part of it was: open up Notepad. What are the acceptance criteria we actually want to say? Natural language: we are expecting to be able to log in, but it shouldn't let us log in if I put in an invalid password, right?

It should be really simple, and kept in natural language. And this was what Gojko Adzic put forward with executable specification by example. And then there was obviously another movement with Dan North, a good friend, with BDD and Gherkin. They tried to create natural language with the syntax of given-when-then, and tried to build in that logic.

And it wasn't the right place for it. You ended up having these executable specifications which required as much maintenance as the actual underlying code. So again, the best intentions were there early on: make it readable to business users as well as automators and testers and everybody else.
But we've kind of overcomplicated things. And one of the things that I really like about testRigor is this natural language text side of things, because I think it bridges the chasm between test management tools, your ALMs and your TestDirectors of the day, or an Excel spreadsheet, whatever a tester uses to test with, and the actual automation itself. Instead of being separate things, they actually become the same thing.

I remember, if you think about TestDirector, it would be: step one, do this; step two, do that; what was the postamble step; we're expecting it to pass. It never really had much more than that. But your platform does that preamble and postamble, the test of what it expects the state to be before and after, while also splitting those up into steps.

Now, this is really interesting, because I was doing a panel recently with Jason Arbon, and we'd just finished a book called Accelerating Software Quality, which hit the bookshelves last month. I was chatting to Jason, and he was saying to me, what's the future, obviously, for test.ai?

They've obviously been working quite hard on this kind of concept, and partly where he was coming from was back to the language, right? He was saying: if you could harvest your test management repository and understand the nouns and verbs, and then put them together to execute, then you've suddenly got this great power, because people have already written all the tests, and they just want to execute them.

But actually, testRigor gives you that ability too: if your test scripts are written in natural language in your Excel spreadsheet, or wherever they're stored, you could in theory pull them across and just work on how you process that language. Okay, I need to make this simpler, or I've got to structure it a little bit differently; but at the end of the day, they then become in sync.

And I think that is a really big game changer, because you're talking about natural language which can be reused. It can be run manually with exploratory testing, it can run for your fast automation, it can run in your CI/CD pipeline to give you that DevOps quick feedback, and you're also maintaining test assets in a lot smarter way.

I'm really interested to see where you're hoping to take the product.
Paul Grossman I was just going to add to that: one of the features I find in testRigor that makes it fairly flexible is that not everybody writes their test cases the same way. If we were to sit down and say, well, I need to verify some text, I might use the word verify, Artem might say check, and Jonathon, you might say assert.

And the cool thing is that you can use all three of those action words in a script, and testRigor will identify them and say, okay, yeah, that's the same thing. You don't have to get your whole team on board and say, okay, you guys have to write to this specific standard, you have to use this particular word.

It's generic enough that it can say, okay, I understand what you really mean, even though you're using slightly different words to identify and do particular actions in the script. So that was just one other feature I really like in testRigor.
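The synonym handling Paul describes can be sketched as a small normalization table that maps several action words onto one canonical action. The table itself is invented for illustration:

```python
# Hypothetical synonym table: many verbs, one canonical action each.
SYNONYMS = {
    "verify": "assert",
    "check": "assert",
    "assert": "assert",
    "click": "click",
    "press": "click",
    "tap": "click",
}

def normalize(step: str) -> str:
    """Rewrite the leading verb of a step to its canonical form."""
    verb, _, rest = step.strip().partition(" ")
    canonical = SYNONYMS.get(verb.lower())
    if canonical is None:
        raise ValueError(f"unknown action word: {verb!r}")
    return f"{canonical} {rest}"

# Three teammates, three verbs, one canonical step.
print(normalize('verify "Welcome back" is visible'))
print(normalize('check "Welcome back" is visible'))
```

A real NLP pipeline would handle far more variation than a lookup table, but the effect on the team is the same: no enforced house vocabulary.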
Artem Golubev Yeah. Look, regarding making specifications executable: I agree, it's this holy grail, right? If you already have test cases written a certain way, why wouldn't you just execute them? Well, I don't believe you can, based on just the test specs themselves, because you're basically missing a part, which is the domain knowledge, right?

When people are creating those specifications, in a lot of cases you can't just execute them, because you need to train the system on the domain knowledge. In our system, the way we help you do that is you create what we call rules.

You can call them functions, written in completely plain text, anything: spaces, dots, whatever. You can use the terminology you use yourself in your specifications. And then, once you have all of the domain outlined through the rules, you can use the rules as the building blocks of the test itself.

This way it is far closer to whatever the specification is, maybe even in some cases a one-to-one match. You will be able to make your specification executable like that, but you'll have to flesh out yourself what each rule or function stands for, at a slightly lower level, from the end-user's perspective, because this is where the domain knowledge comes from.

In a lot of cases it's just technically impossible otherwise, unless we have this overarching general AI, and we are far from it at this point. I don't believe it is technically possible to automate this part. Moreover, I would say the whole value of QA engineers and QA automation engineers is not just that they can code.

That's great, and that is awesome, but it's also that they are humans and can learn, understand, and apply the domain knowledge. This is where the value actually comes from.
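The rules Artem describes, plain-text names that expand into lower-level steps, can be sketched as a simple macro expansion. The rule name, its steps, and the credentials are all hypothetical, chosen only to show how a spec-level line carries the domain knowledge a bare specification is missing:

```python
# A hypothetical rule: one domain-level phrase, defined once as concrete steps.
RULES = {
    "log in as admin": [
        'enter "admin" into "Username"',
        'enter "secret" into "Password"',  # made-up credentials
        'click "Sign in"',
    ],
}

def expand(test: list[str], rules: dict[str, list[str]]) -> list[str]:
    """Replace each rule reference with its steps (rules can nest)."""
    steps = []
    for line in test:
        if line in rules:
            steps.extend(expand(rules[line], rules))
        else:
            steps.append(line)
    return steps

# The spec stays domain-level; expansion supplies the end-user detail.
spec = ["log in as admin", 'click "Reports"']
print(expand(spec, RULES))
```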
Jonathon Wright Yeah, I think, I think, yeah, you're absolutely right. There's obviously we are aiming towards, um, the holy grail at some point in time, right? That's the goal. Uh, but I do think actually you after what you're saying is going to be a collaboration, right? I think, you know, what you've achieved here is, is great, great steps forward. Now, you know, you could combine that with what Jason's doing. You could also combine it with the, you know, the Applitools for visual testing.
You know, my friend Alon is doing UP9 at the moment, using AI to generate API tests, which covers things from an under-the-hood perspective. And with somebody like the Dark Arts Wizard, who could pull all of those layers together — layers we don't often join up in automation.
We kind of go, "Oh yeah, we've got some Postman scripts; they'll do all our API stuff. We've got these other tools we're using for these other activities." And I think what you're highlighting is actually really important, which is the human aspect, right?
It's something I've been trying to defend quite recently as an AI advocate: to say, well, actually we need the human to learn and to train the system. And we're very much in a throwaway kind of society, right? Part of it is the debate in the context-driven camp: you've spent all that time writing those tests in a test management tool,
only to then have a separate team over here who've been given the goal of automating them. And there's a disconnect, right? There's no "we'll keep the integrity of the manually written tests as well as the automated ones" — they could never bridge that gap. And I remember doing a session recently with James Bach and Michael Bolton, right?
And they're obviously in a particular camp, this "automation doesn't work" camp. As you probably know, James is working with Tricentis at the moment on a session-based testing tool — how easy is it for a tester to get from no automation to something, right? And the idea for a sprint today is, let's capture it and then convert it into UFT down the line. But actually, with your tool, you've already got that capability, where going from nought to two, to ten, to twenty is very easy, because you're literally giving it a URL.
You're giving it some training data and letting it go off. And like Paul said, proof of value can be demonstrated in minutes, not hours. It doesn't need somebody to write it in a specific language, or to know Python, or even TSL. It literally lets you use natural language, which all of us humans already have. And as AI and ML come along and we're training the system, of course we'll be able to add those accelerators down the line.
But I think the foundational core of what you've got is the results and the reliability, and reliability, I think, is another deadly curse for automation. Paul, I know on your profile you talk about saving the organization by finding these defects, right? And that is really important, because you want to go from nought to somewhere as quickly as possible.
And if you've got something as powerful as testRigor, where you can point it at a URL and let it run, then suddenly you're already getting some level of coverage, and I think that's amazing. I also think it's great that it's going to remove that maintenance burden, so the tests aren't disposable, right? That's what I think most of them are: people do a bit of a run at automation, get the numbers up, and then they just can't maintain it. And I think that is the problem with automation. It's also why we've got the unfortunate bad rep of it being seen as the silver-bullet solution.
We've got to automate everything because we're doing DevOps. We're releasing faster. We need everything to be automated. So why can't testing keep up? And the answer is: it can now, with testRigor. So part of it is that big step. Most of us will go, "Technically, well, it's not a big jump," but it is. Someone sent me a message yesterday because he's got the iPhone 12 Pro, which is what I've got as well, and straight away he wants this in his lab so he can check that the dimensions work. Even though the dimensions differ only slightly from the previous generation, it still means that when the DOM renders, it's going to look different.
You know, some buttons might not work. I'm working with MIT at the moment, and we just released this weekend to Puerto Rico, and we're finding the Spanish text overlaps some of the buttons. Part of it is simple things like that which actually get us stuck.
And we take it for granted, because we think, "Well, it's automation; it's going to handle all this." But actually it's those small little changes that cause massive problems. With something like testRigor, you can do the web, you can do mobile, and you can literally keep it running in the background, keeping an eye on your system health and making sure everything's working as it should across different screen sizes and so on, using other adapters.
So I think this has opened up so many great opportunities, and I'm really excited to see what Paul's going to do with it, because once the Dark Arts Wizard replaces his magic object model, we won't need the magic object model any more — you'll have to come up with something even greater. So it'll be his magic wand we talk about, which I never thought I'd reference — your magic wand — but I have now, and it's out there.
So what are you going to do with your magic wand — and I'm referring to testRigor as your magic wand — okay, Paul?
Paul Grossman I would say the next thing to do with this is probably moving to RPA (Robotic Process Automation). And I want to tell you, the ideas I've been talking about with the magic object model — I think that's over, because Artem did it.
He clearly had some really good ideas. I had some demos too, but we were working independently for five years, not knowing of each other, and he did a whole lot better. So the next thing I would do, like I said, is look at RPA. In fact, going back to what you just mentioned, another is checking localization.
How quickly can you take the test and say, okay, this works great in English — now let's try it in Spanish, let's try it in French, and see where we can go from there? That would be the first thing my magic wand would be looking at. So that's my future idea on that. And what I was saying too is that the website we test on is candymapper.com.
And I was going to say that we're looking forward to Angie Jones's hackathon that's coming up. This is the third year Applitools has been doing it, and I've known about it for all three years. I think it's something where, you know, let's take a look and see what can be done over there, just to show off the power of testRigor.
Artem, was there anything else you wanted to add on that?
Artem Golubev Sounds interesting. Yeah, I would love to try it out and see.
Jonathon Wright Oh, there we go. The goal has been set. So I'm looking forward to seeing what we can get between now and the first through the fourth, and let's demonstrate the power of testRigor — we'll check it out.
We'll make sure the links we've talked about are available in today's show notes. I just want to say a massive thank you to Paul, the Dark Arts Lord, and to our special guest and gold sponsor for the virtual community days in December. So thanks so much, testRigor. Thanks so much, Paul. It's been great to have you on the show, and thanks a lot.
Paul Grossman Thank you, Jonathon. It's been a pleasure speaking with you again.
Artem Golubev Thank you, Jonathon.