
Everything Is Just A Test (with Mariia Hutsuk from Zattoo)


We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.

Audio Transcription:


In the digital reality, evolution over revolution prevails. The QA approaches and techniques that worked yesterday will fail you tomorrow, so free your mind. The automation cyborg has been sent back in time. TED speaker Jonathon Wright’s mission is to help you save the future from bad software. 

Jonathon Wright:

Hey, it’s Jonathon Wright, and today I’m interviewing Mariia. It’s going to be a really exciting conversation around where you start in QA and how important leadership is. I’m going to pass it over to Mariia for a quick intro.


Mariia Hutsuk:

Hello Jonathon, and hello everyone. I’m so glad to be here. Let me introduce myself: I’m Mariia Hutsuk, QA Lead of the B2B team at Zattoo in Berlin, Germany.

Jonathon Wright:

Fantastic. It’s lovely to have you on, and I think you’re probably the quickest guest I’ve ever had. We put a post out just yesterday and it was really exciting to see that you’d replied. Quickly looking at your profile, it’s a perfect fit. One of the things that we, and our listeners, are really looking to learn more about is the leadership aspect of QA. I think you’ve got a fantastic story to share, but I want to start with how you first got into QA after doing your master’s degree in computer software engineering.


Mariia Hutsuk:

I’ve already spent eight years in software testing. I’ve worked as a QA, and I’ve also been team leader of a functional QA team and a people manager. In 2012 I received my master’s degree in computer software engineering and started my career. I got my first QA role on a project for the [inaudible 00:02:23] service of Ukraine, at a software company called EPAM. I got the role through the internship program which EPAM offers: after three months of courses, I passed additional exams and got the position. It was a really fantastic event for me.

After some time working as a QA at EPAM, I realized I wanted to relocate to Europe. In 2015 I moved to Germany and started to work as the first QA in the Chantelle group. My role changed significantly during the first year: I became a manager of four employees. Those QAs were working remotely for another company. It was a really magical transformation for me, because I needed to switch from a testing background to a management perspective. After three years in the Chantelle group, I switched to my current role at Zattoo. Now I am team leader of the B2B QA team, responsible for defining the strategy and allocating QA resources across our different projects.

Jonathon Wright:

That sounds absolutely fascinating. What I love about that story is that you came through an apprenticeship scheme, and I think this is really important. I’ve just recently become a member of a British Computer Society committee, and one of my tasks is inclusion: how do we get younger people into apprenticeships in this industry? Thinking back to your technical qualification, even at the level of your master’s degree: how many modules did you have at university that covered quality, or that prepared you for these kinds of leadership and management skills?


Mariia Hutsuk:

Regarding QA modules, I had only one: software testing. It was just half a year out of my five and a half years of education. From my point of view it was not enough, and we did not have enough practical tasks. After this module I was even thinking that testing was not my area. I’m really thankful to my first company, because they offered me the internship program where I had the possibility to try and practice how testing is done and how test cases can be written. At that point, I found out that it’s really interesting to me.

Jonathon Wright:

Absolutely. I was reading your blog, which covers your journey and some of the life goals you want to achieve. One of them was around doing the ISTQB. How did you find that kind of formalized approach to software testing? What challenges did you have, or what surprises did you find in doing the course?


Mariia Hutsuk:

To tell the truth, even after eight years in testing I have not yet passed any ISTQB certification, unfortunately. I believe it’s really important to have this certificate, or at least to know the glossary. In the QA area we need a common understanding of what a test case means and what a strategy means. I believe that’s our common background, and whether or not you hold the certificate on paper is something additional. The main thing you get by passing it is the knowledge. I really hope to pass it in the next couple of months, and I’d advise it to everyone.

Jonathon Wright:

I think it is really important. I remember starting my software testing career and wanting to do it. Back in those days it was ISEB, a slightly different variation, but the same thing. I was actually just messaging Lisa Crispin, who’s on the show in a couple of weeks, and she was part of those original standards along with people like Rex Black, the people we know well in the industry who formalized the certification process. I think it’s interesting because, like you said, there are common languages you can use, common terminology.

There are design patterns in there too, whether that’s using things like classification trees or whatever else you may use in your day-to-day testing job. It’s a very useful, formalized approach to testing, I suppose. That’s really exciting. Going back to the apprenticeship you started off with, I noticed the technical writing from your title. What was that like, writing technical documentation?


Mariia Hutsuk:

Yeah, my first job was more technical writing than testing. I was, for example, editing technical documentation according to ISO standards. We had rules for how we manage tables, how we manage images, how we structure titles and the different levels of titles, and rules for how we start describing an image. It was a really useful experience for me, because I benefit from it even now: when I test a system and read the supporting documentation, I immediately notice the mistakes in it. Of course, I can then point my team to them and improve the documentation to be more professional. It was a really great experience for me.

Jonathon Wright:

I love that. It’s one of the biggest things I miss. In the prologue, I get interviewed by Ben, one of the founders, and we talk about some of the work I did for a German communication company. What I loved when I was there was the technical documentation. I thought it was an art, creating those requirements documents. I know things have changed, and we’re going to talk a little bit about documentation because you do a lot of test strategy as part of your leadership. There’s always a question about how much is too much.

I think part of it may be the misinterpretation of the Agile manifesto’s “working software over comprehensive documentation”: actually, we still value the things on the right-hand side, i.e. documentation. We know that 70% of all issues are found in requirements. Why don’t we shift left, really focus a little more on requirements, and get that right? I think that’s something we’ve kind of lost. Then there are things like BDD, behavior-driven development, and Dan North is a good friend.

I understand executable specifications and specification by example, and Gojko. I’ve met the guys, I understand where they’re coming from, and I think it’s a great approach, but there’s no substitute for good documentation. I think it’s great to know that you’ve come from a background where you’ve understood the value of technical documentation and how important it is. One of the things we did, back in the ’90s, was use a little tool which NASA used. It would go through and work out the ambiguity of requirements.

If it had words like should, could, would, it would highlight those, give them a score and say, “Well actually, you need to clarify those in a little bit more detail.” Your first role being for the state tax service, it’s pretty important that those things are right and documented correctly.
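The kind of ambiguity scan Jonathon describes can be sketched in a few lines. This is a toy illustration, not the actual NASA tool: the word list and the example requirement are invented, and a real analyzer would weight and score terms rather than just list them.

```python
import re

# Invented list of "weak words" that make a requirement ambiguous.
# The real NASA-era tooling used a much richer dictionary and scoring.
WEAK_WORDS = {"should", "could", "would", "may", "might", "appropriate"}

def ambiguity_scan(requirement: str) -> list:
    """Return the weak words found in a requirement sentence, in order."""
    words = re.findall(r"[a-z']+", requirement.lower())
    return [w for w in words if w in WEAK_WORDS]

req = "The module should respond quickly and could retry if appropriate."
print(ambiguity_scan(req))  # ['should', 'could', 'appropriate']
```

Each flagged word is a prompt to rewrite the requirement with a testable statement of what the system does, which is exactly the documentation rule Mariia describes next.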


Mariia Hutsuk:

Yeah, actually you pointed out one of the rules which I was also trying to follow. We avoided formulations like “This module could do or would do”, because they create additional misunderstanding about what exactly will be done. We also avoided, for example, formulations like “This system should not do blah, blah, blah.” Software documentation should just state what exactly a system should do or is doing, instead of listing all the things it will not do. It sounds a bit funny, but those are the rules in documentation.

That said, based on this experience I would say that some projects are over-documented, and it becomes really complex to maintain that and keep it up to date. I prefer the Agile approach to documentation. In my eyes, we should have a really limited amount of documentation: only what is necessary for the team. If we speak about the strategy, the strategy doesn’t have to be a 40-page document, because that would be complex to maintain. It can even be a mind map, or one page and a [inaudible 00:13:11] regarding what has been tested, and that’s it.

Jonathon Wright:

I love the idea of just having it in a mind map. Unfortunately, I think the listeners are going to realize that I’m a bit of a model-based testing fanboy, in the sense that when I was out in Silicon Valley, the work I was doing was all around model-based testing. A model, to me, is the heart of everything, and now that we’re looking at things like robotic process automation it’s even more important, because that is also documenting the business process model of the application.

I think that’s the question: what is too much and, on the opposite side, what is too little? What are the good habits and bad habits? Do you find that any of your team, especially in your current role, put down one line, “It must do this” or “It should do that”? Do you spot things like that, or do you find they spend too much time explaining the different personas and detailing the associated data sets? How much do you see on a day-to-day basis from a leadership position, and when do you usually have to step in and get involved?


Mariia Hutsuk:

It’s a really good question. In my eyes, when you switch to a leadership position you need to empower your team. You need to give them the possibility to make decisions independently of you. You need to be a bit more humble, put your ego aside, and give your subordinates the possibility to challenge your ideas and suggest their own approaches to testing, while you still support and guide them. I really like how [inaudible 00:15:21] work, and we are currently using them in the team.

In my eyes it works really well, especially in my current role. We define the [inaudible 00:15:34] of the unit, of our team, including developers. This helps us focus as a group on what is most important to all of us, and of course it gives us prioritization. Let’s say our first goal is this one; then everyone, developers and QAs, is in sync. They all do their best to achieve those goals, and we know what is less important.

Jonathon Wright:

So in a way, you’re moving into this new realm of tribes and squads, which they talk about for things like site reliability engineering, and some of the new methodologies around how teams really own what they’re doing: the objectives, the mission, and the vision of what you’ve got to do. From what you’re describing, they also define the methodologies, the approaches, and the direction.

I know, reading from your blog, you mentioned a great example of misalignment, where potentially technologies change, maybe approaches change. You had a personal goal around learning Python, but then the stack changed and you had to learn Java instead. Do you find that you’re putting it back into the hands of your team, to really own what they’re doing and how they deliver, while you’re just shaping and helping facilitate that?


Mariia Hutsuk:

Yes. Basically, our team, and Zattoo in general, are switching to Agile methodology. We are not 100% following the Spotify methodology with tribes and squads, but we try to be more agile. We believe this will bring big value and will help each employee to be involved and take responsibility for their own tasks. Speaking about exactly the team where I work, before the start of the quarter we gather ideas from everyone who wants to work on something.

And of course, I personally try to prioritize what would bring the biggest value. We have improvement ideas from the team, but we also have backlog items or projects which we need to work on. There should be a balance between the initiatives of the team and the initiatives which are necessary for the project. I personally try to select the initiatives which would help us be faster and perform better on the upcoming projects.

Jonathon Wright:

Sure, so you’re looking at it from a business value perspective. You’re looking at capabilities that your team potentially needs to build. I know you mentioned that learning some test automation might be a personal goal, but it might also be something your team is working toward. So it gives you an opportunity to understand the kinds of struggles and challenges they’re going through to build those capabilities. I think we’ve seen a big shift, especially from the W-model, Waterfall days, where we had the center of excellence.

You had teams that were literally there to write test strategy documents, to over-engineer things. Now it’s moved to a center of enablement, which is more around teams like yours, where you understand you’ve got technical challenges around capabilities that you want to improve, and you’re road-mapping those from a SAFe point of view. So in Scaled Agile, you’d have this portfolio level where you’ve got themes and goals of things you want to achieve, which unlock things further down the line when you’ve got to go faster.

So you’ve got to have more regression, which means a certain level of automation capability. You might be missing certain capabilities like test data or something else, and you know you need to build these up, but they’re going to take time. It must be so challenging to understand the deliverables of a project, because you mentioned managing four or five different projects when you first moved to Berlin. Then you’ve obviously got the language gap as well, with learning German, which must be incredibly challenging.

Then you’ve got to context-switch between these different projects, different teams, different allocations. How do you personally keep on top of all these challenges? Do you have your own Kanban board, or your mind map? What techniques do you use to manage your workload?


Mariia Hutsuk:

It’s a good question, actually. For me personally, I try to define and know the top two or three goals that I am involved in. Right now I want to evolve more in the direction of becoming a speaker, becoming a blogger, sharing my knowledge, maybe becoming a better facilitator of meetings. Basically that’s [inaudible 00:21:10] my goal and the direction where I [inaudible 00:21:12] personality. Secondly, I want to get a bit better with technologies. As you mentioned, I want to write my first automated tests, and now it would be in Selenium, in Java.

This is really, let’s say, the techy approach to writing automated tests. For example, in my previous job we selected a simpler stack of technologies for test automation. At that moment we did not have an automation engineer in the team, and we selected the framework called Ghost Inspector. Basically it’s a ready-made framework where you can manually define all the steps. You just need to write selectors for the UI elements so they can be identified by the tests. From that perspective it’s a really easy entry into test automation; it’s more like a manual approach to test automation.

If you select such a stack of technologies, afterward you have a lot of constraints on how you can use the automation. For example, we had some challenges with integration, and making the tests [inaudible 00:22:35]; we could not do continuous delivery, and we could not integrate them into Jenkins or other tools. Now my personal goal is to get a bit better with true test automation in Java and Selenium, because it offers many more possibilities.

Of course, at first you need to invest a lot of time in your framework, because you need to set everything up. To write your first test you may need to spend several weeks or months, depending on your background. But once you have this in your project, it becomes easier to use. You can easily define the continuous delivery pipeline, and you can better coordinate with the developers on when it runs and on which environment. That [inaudible 00:23:45] to my directions. At the moment I don’t have a special tool for tracking my goals, no Kanban or Trello boards. I just keep those two ideas in mind, and when I notice a possibility to achieve a goal, I use it.
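The "true test automation" framework investment Mariia describes usually starts with the Page Object pattern: page structure and selectors live in one class, so tests stay readable and CI-friendly. The sketch below is illustrative only; the locators are invented, and a tiny fake driver stands in for a real browser session (in a real project you would pass in something like `selenium.webdriver.Chrome()`).

```python
# Page Object sketch: one class per screen, selectors kept in one place.
class LoginPage:
    USERNAME = ("id", "username")                     # invented locators
    PASSWORD = ("id", "password")
    SUBMIT = ("css selector", "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


class FakeElement:
    """Stand-in for a WebDriver element; records actions in a log."""
    def __init__(self, log, locator):
        self.log, self.locator = log, locator

    def send_keys(self, text):
        self.log.append(("type", self.locator, text))

    def click(self):
        self.log.append(("click", self.locator))


class FakeDriver:
    """Records actions instead of driving a real browser."""
    def __init__(self):
        self.log = []

    def find_element(self, by, value):
        return FakeElement(self.log, (by, value))


driver = FakeDriver()
LoginPage(driver).log_in("mariia", "secret")
print(len(driver.log))  # 3 recorded actions: two send_keys and one click
```

Because the page class only depends on the driver interface, the same tests can run headlessly in a Jenkins pipeline, which is exactly the integration constraint the record-style tooling could not satisfy.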

Jonathon Wright:

It’s that great idea of having downtime to provide this enablement. I love it, because it’s a journey that everyone goes through. What I find wonderful about the podcast is this sharing of experience: the challenges you describe are the ones everyone listening has right now, and the community needs to be there to support them with those patterns. It’s so difficult because there’s so much information out there. You can go onto different websites and forums, read about different tools, and see which one’s supposedly better than the others.

There are different conflicting viewpoints; it’s a minefield of scope. But at the end of the day, we’re still trying to solve the same problems. I love how you said that, instead of Selenium, you’re just looking at identifiers with Ghost Inspector. To me, that’s incredibly complex; you’re actually doing something quite technical. I actually just got a Tweet from Paul Grossman, known on Twitter as @darkartswizard. This guy loves regular expressions; he loves finding identifiers and XPath locators for more robust automation.

I recommend checking out TestProject.io, which is one of the things he’s working on at the moment. Part of it is that, whatever pattern you’re using, we’re all still facing the same kinds of challenges. We still want to do things continuously, we want continuous testing, we want to build into Jenkins and use the same tools and technology as our developers. We want to be leaner, we want to reduce waste, we want to help people and enable them to do what they want.

It doesn’t matter if you’re six months in or 20 years in, you still get the same challenges. I spent yesterday looking at something called the TICK Stack, which gives you the ability, when you’re running a JMeter script, to do the monitoring behind it: to look at disk space, CPU usage, and all the other performance metrics. It doesn’t have to be JMeter, but it reports into a time-series database, InfluxDB. Then I integrate that into Grafana so that my operations guys can actually see what’s running and how many users are running against the system, all in the same stack the operations and dev teams are using.
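The glue in that pipeline is InfluxDB’s line protocol, the plain-text format the TICK stack ingests. A minimal sketch of formatting one load-test sample is below; the measurement, tag, and field names are invented examples of what a JMeter backend listener might report, not JMeter’s actual schema.

```python
# Format one metric sample as InfluxDB line protocol:
#   measurement,tag1=v1,tag2=v2 field1=v1,field2=v2 timestamp_ns
def to_line_protocol(measurement, tags, fields, ts_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "jmeter",                                    # invented measurement name
    {"label": "login", "host": "agent-1"},       # invented tags
    {"avg_ms": 182, "active_threads": 50},       # invented fields
    1600000000000000000,                         # nanosecond timestamp
)
print(line)
# jmeter,host=agent-1,label=login active_threads=50,avg_ms=182 1600000000000000000
```

Once samples land in InfluxDB in this shape, a Grafana dashboard can chart them alongside the CPU and disk metrics the operations team already collects, which is the "same stack as ops" point Jonathon is making.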

It would be so much easier if we just had a blueprint that someone could share. We’ve got Eran, one of the chief technology evangelists for Perfecto Mobile, coming on in a couple of weeks’ time. He’s set up a website with blueprints for reusable continuous testing. So literally, “Here are seven steps to run your Selenium scripts in Jenkins.” I think things like that are going to be really helpful for the community, to share some of those patterns so that we’re not all investing a huge amount of time investigating them ourselves.

Also, to help educate people, because there’s just so much information out there. Where do you start? Where have you personally gone to find information about the challenges you’ve faced in QA and testing?


Mariia Hutsuk:

Yeah, it’s a really insightful idea, because we QAs actually need these kinds of blogs, and more communities around testing and test automation. During our talk you mentioned Lisa Crispin. I know she’s also contributing to the community called First DevOps, and I’m really grateful for it. From time to time I read new posts on that blog or website. I also really like her book Agile Testing Condensed, written in cooperation with Janet Gregory. It’s a kind of QA manifesto regarding what activities have to happen on a project and how testing should be performed continuously on an Agile project.

Basically, we have already moved away from the old Waterfall projects and the old concepts we all know; they don’t work anymore in Agile. I think we’re at the point where we also need to define the standards and activities that have to be done on an Agile project. Maybe we QAs can contribute to this as volunteers: suggest open-source code, for example, or share our knowledge on blogs and at conferences, to really advance [inaudible 00:29:56] in parallel and help each other.

Jonathon Wright:

I think that’s a brilliant idea. I remember working with Dorothy Graham on a website called Test Automation Patterns. The idea behind it was reusable patterns for each stage of the life cycle. There were design and planning stages, there was execution, there was reporting, there were results. Each one would be a reusable pattern that we could all contribute to in a wiki landscape. I know that’s something Eran from Perfecto has been trying to do as well. Actually, one of the things we’re going to do with The QA Lead is create a forum where you can ask people questions and share some of these resources.

One of the things I found really useful when I started out: the main tool I was using was XRunner, which then became WinRunner. Mercury Interactive had this amazing forum. The best thing about it was that they even had gamification, and this was the ’90s. You’d log in, you’d get a score for contributing, a score for helping somebody with a support ticket, a score for writing an article. I think we need something like that, where people want to come back. It used to be SQA Forums; that was brilliant, it had so much resource in it, but part of the problem is that it got out of date quickly.

You need people to moderate the content, and you need to make sure the material stays up to date. It’s about sharing. You mentioned Lisa; I look back to the testing pyramid, with its UI component and service component. Then people used to flip it upside down; my friend Paul Gerrard always used to do that. But part of it is that patterns evolve over time. We need to share those experiences, and we need a platform to do it.

I think that’s a really good idea, and everyone can contribute to it. What I found fascinating from your blog was the OKR framework, which I’d never heard of before. It’s this objectives and key results framework which you’ve started to use. What’s the pattern behind that, and how have you applied it to what you’re doing at the moment?


Mariia Hutsuk:

This tool was suggested at Chantelle some time ago, and the approach is described in the book Measure What Matters. In general, it means you need to define goals, and the number of those goals should be limited. For example, if you define the [inaudible 00:32:47] of your company, you should have up to five goals. If you have more goals, you will get fewer results in each of them; if you have just one or two goals, your company can achieve incredible results in those two.

At the team level, at this moment we also define five goals, but we put them in order of importance: the first one is the most important and the fifth is the least important. Inside each goal you define the key results which you want to achieve. So [inaudible 00:33:39] in general are different from KPIs. With KPIs you have a goal which you have to achieve. In OKRs you have a really ambitious goal; you’re not necessarily supposed to reach it. If you do achieve it, that’s great, but if you fully achieve it by the end of the quarter, then it was not ambitious enough.

In the typical scenario, if you achieve it at 70-80%, then it was a well-defined goal. Each key result should contain a measurable metric. If your team is debating the percentage of confidence in achieving some key result, it means the key result was not defined properly: it has a double meaning, or it lacks a numerical evaluation. Sometimes it’s impossible to define a measurable key result, and you should not wait; you write it as you feel it.

So in our case, before the start of the quarter we define five objectives. We put three to five key results in each objective, and we have a default confidence level, in most cases 70%. During the quarter, on a weekly basis, we review this confidence level. If we did something toward the goal, of course our confidence that we’ll achieve it grows, and we can see that we’re becoming more confident we’ll finish it by the end of the quarter. Basically this tool is more for alignment and focusing people, rather than [inaudible 00:35:52] like in KPIs.
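The scheme Mariia describes, a handful of objectives, three to five key results each, and a 70% default confidence reviewed weekly, can be sketched in a few lines. All the objective and key-result names and numbers below are invented examples, not Zattoo’s actual OKRs.

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    confidence: float = 0.70  # default confidence level, reviewed weekly

@dataclass
class Objective:
    name: str
    key_results: list = field(default_factory=list)

    def confidence(self) -> float:
        """Average confidence across this objective's key results."""
        krs = self.key_results
        return sum(kr.confidence for kr in krs) / len(krs)

# Invented example objective with three key results.
obj = Objective("Speed up the regression cycle", [
    KeyResult("Automate the 50 most-run smoke tests"),
    KeyResult("Run the suite nightly in CI", confidence=0.80),
    KeyResult("Cut manual regression to two days", confidence=0.60),
])
print(round(obj.confidence(), 2))  # 0.7
```

The weekly review then just updates each key result’s confidence; landing around 70-80% achievement at quarter end signals, per the description above, a well-calibrated ambitious goal.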

Jonathon Wright:

I think that’s really interesting. One of the things I miss massively is big room planning. We used to have quarterly steering meetings, and we’d get, I’d say, everyone from the company, but really the leadership, which could be up to 100 people: the people from sales, from marketing, even the people from the business, of course. Part of it was going through kind of what you’re talking about, this idea of initiatives we could try for the quarter.

We had boards on the side where we’d ask, “What are the risks, the opportunities? Where were the surprises?” We found that by bringing sales and marketing and other people we’d normally not work with on a day-to-day basis into a room together, where anything goes, just come up with whatever idea is in your head, we could qualify ideas as quickly as possible. I love the idea of confidence. It’s one of the things I’ve always wanted to implement as a KPI, or a cascading KPI: how confident are you, and how do you move that lever?

As part of your Scrum of Scrums or your program increment, you build a capability and now feel more confident about your ability to do continuous delivery, or even continuous deployment. That helps move the confidence level of your overall capability to deliver value to your customer. I think that is really hard, and it’d be great to get your viewpoint from your current role.

In the first episode, when I was talking to Kate, we talked about something like Netflix, a streaming service. Think about the perceived quality when it buffers and people have to wait. From a performance engineering perspective, people just aren’t tolerant of things like that. In actual fact, it’s typically not the service provider that’s having a problem streaming; it might be the viewer’s local network, their wifi, or something else, like someone transferring files or downloading another stream.

People just don’t think about that kind of thing from a customer perspective. Your job must be incredibly hard, doing live streaming TV. How do you set the overall charter, the overall quality manifesto, of what is good? Is it uptime? Is it the quality of the stream? How do you measure quality in your current role?


Mariia Hutsuk:

It’s a really good question. Actually, we don’t track specific metrics for how good or bad the software is right now. Basically, our job is to deliver a stable and reliable TV service to end users, and we follow that rule. Some people may say television is not such a critical domain. If you compare television with rockets or with the medical sector, of course, with the innovation of a spaceship the price of a mistake is human lives, while on TV, okay, the player breaks. But in fact, if you are a football fan watching your team play against a competitor and the stream stops at the penalty moment, it’s a really awful moment.

For example, you can hear your neighbors screaming something and you don’t know what’s going on. That’s why I believe TV is also a really important sector, and we need a really reliable service. If we speak about TV, the heart of it is of course the live stream, traditional linear TV. But at Zattoo, for example, we have additional features of nonlinear TV. Our end users are also able to pause the live stream, or add recordings in the cloud or locally. They can record a show, which gives additional possibilities to watch what you want, not just what is offered at that moment.

If I were to measure the quality of TV, I would measure that those main key features are working, that they are stable and reliable. As you know, I am also part of the B2B team, so Zattoo, as a company and a TV platform, works in two directions. On one side we work B2C, for end users. On the other, we work B2B: we can configure our platform to the needs of other businesses. We can brand it, set up exactly the features that another business needs, and set up the channel lineup that’s necessary for that business.

If I speak about quality not of the platform but of the B2B offering, then I would measure quality from that perspective: whether the features which are contractually agreed are working properly, whether a specific B2B customer has all the features they requested, or has some feature which belongs to a competitor, and whether their logos and color schemes are used. This is a different direction, but we need to make sure of both, I think.

Jonathon Wright:

That’s highlighted quite a few interesting things. You mentioned colors, saturation, and how that’s linked to quality. Obviously people like Angie talk about [inaudible 00:43:08] tools and about visual testing, which I know is a very big trend at the moment. I find the live stream one of those really challenging areas. I remember when I started off back in the ’90s, RealAudio had this great capability where the sample quality would drop based on the bandwidth you had. Of course, I’m sure things are far more complex now from a quality-of-service delivery perspective. You mentioned pausing in the middle of a match, and I think it is important.

I think it’s important to the brand, and it’s also important to the brands of those B2B customers. One of the things I always found really interesting when I was out working with people at Apple is that Apple doesn’t have their own cloud. We know Google has a cloud, we know Microsoft has a cloud. We know that Apple doesn’t, so what happens when the App Store goes down? Who do they blame? Who do we blame? We blame Apple, right? When actually it’s Azure. When a celebrity gets hacked and their pictures get leaked from iCloud, who do we complain about? Apple, but actually it’s Azure.

It’s really interesting: where does the buck stop, and what kind of brand damage can actually happen from the streaming? I was working out in Australia for ABC, which is a broadcaster over there. It was really interesting because they use drones to do live streaming. Because Australia is a big place and they haven’t got a huge number of camera crews, they encourage public live streaming of events. Anybody with a non-commercial drone can fly up in the air and live stream a football match, a tennis match, whatever, straight to ABC, and ABC can put it onto one of those channels, or one of those live streams.

To me that’s absolutely fascinating compared to the States, where, when I was working with Hitachi, we literally built things like drone interception and detection platforms, to take out drones that were trying to film illegal footage of an NBA game or the Super Bowl. They want to restrict the streams because of course there’s a value to that from advertising, and there’s a value for the brand. It’s interesting, culturally, how things change from area to area.

I guess in Germany, that live streaming you provide, and how you then brand it for other third parties, is a really important aspect of what you provide. Like you said, a reliable solution. I’ve mentioned site reliability engineering a couple of times. Is that one of the things that is on your radar, as far as how do we provide a more reliable solution? Is that a QA challenge, or is it more about working as an organization to provide a better service to your end customers and your B2B customers?


Mariia Hutsuk:

Here I would say that quality is not just the duty of QA engineers. We as QAs need to engage the whole team to feel responsible for the quality of our platform. If I as a QA am testing a TV solution, there are a lot of components behind it, and I would not be capable of testing the integration between all those components exhaustively. That’s why we as an organization take care of the stability of our platform and the stability of our streams. We have a separate support team that checks that each of the 150 or more channels we have is stable, that there are no micro-freezes, that audio and video are available, and that we can support all the quality levels we offer, like SD, HD, and UHD. The QA department alone cannot take all this responsibility.

Jonathon Wright:

Sorry, go ahead.


Mariia Hutsuk:

Actually, you also mentioned the streaming in Australia, and I really like that other people can contribute and suggest their streams. Different countries have different laws, actually. For example, we in the B2B team have branded solutions for businesses from Germany, Switzerland, Monaco, Ireland, and the USA. I would say that the feature sets for those different brands are different. For example, in Germany there are limitations from content providers; we have certain sets of contractual agreements. In Switzerland, for example, the law is more relaxed, so we can give many more possibilities to end users. Maybe because of that, the focus of testing can be different.

Jonathon Wright:

I remember being fascinated by one of my good colleagues, a guy called Dave Fox. He used to work for the BBC, and he was one of the chief architects for the BBC iPlayer, which was a streaming service. It was really interesting because he was part of the team that was building it on Silverlight, which was a Microsoft technology back in the day. There was another team that was building it on a different technology, I don’t know if it was Flash or some other kind of stack, which is the one they ended up using, because it was more reliable and more scalable. It was important to be able to stream the content on lots of different devices.

When I saw your profile for the first time, I was really excited about the hardware aspects of these things as well. There are different form factors in the different boxes and the different TiVo systems, or whatever kind of receiver you’re using, or even a handset. The introduction and rollout of 5G, all those kinds of challenges, suddenly add another layer of complexity. We’ve got a couple of guests coming up, including one of my good close friends, a guy called Todd Dicaprio. He was one of the founders of Shunra, which was a network virtualization company. At the time, back in the ’90s again, or the ’00s, it was fascinating when people started implementing things like network virtualization and network function virtualization.

You could really think about, “What is it like for an end user walking down the street in Copenhagen with a 5G connection on this particular device, watching one of your streams?” Now that’s a really interesting challenge, something that I’ve done in the past with smart city projects, like in Copenhagen. But getting that to the end device and then understanding, “What does that look like from a resource utilization perspective, what does it look like from a data and battery [inaudible 00:50:54] drain perspective?” I know Perfecto and Eran probably have some really interesting insight into that.

We’ve also got on a guy from Microsoft who’s going to be talking about Force. I remember speaking to Alan Page, who wrote the book on how Microsoft tests software. One of the great examples he used to talk to me about was how they used the Microsoft operating [inaudible 00:51:19] to understand what the individual Xboxes were doing. From all the metrics they gathered from the hardware, they could understand that there was a frame-rate drop when a particular type of car was going around a particular type of corner, and it was spiking the CPU or the GPU. They would then look to optimize the hardware on the Xbox, as well as the software patches, to reduce that.

I also remember him talking to me about the fact that in certain countries, having an overheated CPU is obviously worse where the ambient temperature is higher than somewhere else. You’ve got all these different locations. You’ve got Germany, you just mentioned Switzerland, and all these different end locations. With all these different end devices, in all these different languages, your job must be the most difficult job in the world. I don’t know how you do it. It’s just such a great challenge, but at the same time, I think that’s what’s so fantastic about quality. You’re part of boldly going into something that has so many variables in it.

Yes, you said that it might not be mission-critical, but in actual fact it’s as complicated, if not more complicated, than launching a space shuttle. I know when you replied to me originally it was around Elon Musk’s spaceship that had crumpled overnight. His view was, “Okay, we’ll just get some duct tape and fix it, and we’ll be good to go.” He just iterated to Mark 2 and he’s going to launch it again. The reality behind that is hundreds of millions in R&D. It’s huge amounts, it impacts your share price, it’s billions of pounds.

It’s so challenging when you think about it as a global platform, such as television. Actually, with your experience and what you’ve learned, it’s really good that you want to get out and do public speaking. It’s great that you want to do blogs and really help share some of those challenges that you’ve had. I think as a follow-up we’ll definitely have to introduce you to some of the other speakers, or find you a mentor, to get you doing some public speaking and conference events. I think you’ve got a really interesting story to tell.


Mariia Hutsuk:

Sorry, could you repeat that? I could not hear the last minute.

Jonathon Wright:

No problem. I was just saying that as a follow-up to the podcast, I’ll introduce you to a few people like Lisa, because Lisa does mentoring as well, to give you an idea of some of the opportunities to go and speak, because there’s a lot in Europe as well. I know Swizz Testing Days are coming up soon, which is unfortunately named. There are lots of other opportunities in Europe. I think public speaking and being able to share your journey would be really interesting. Also, if you’ve got a bit of time, maybe do a blog for The QA Lead as well. That would be really good.


Mariia Hutsuk:

I would love to, really.

Jonathon Wright:

Fantastic. We’ve gone massively over time, but before we leave, are there any big tips you want to give, or anything you’d like to close off with? You had the best quote ever, around everything is just a test, around the Terry Pratchett stuff. I love the idea that you don’t have to test everything to destruction, just to make sure it’s right. Do you have any parting words of wisdom?


Mariia Hutsuk:

I would like to finish our talk with the words of Stacy Kirk. She’s also a motivational speaker, and I heard her speech at Agile Testing Days. I would say that you are not supposed to be a quality champion; you should be a quality hero. The difference between a quality champion and a quality hero is that the champion is fighting for someone else’s ideas, and the hero is fighting for their own ideas. If you truly believe that software should work stably and reliably, do your best. Everything else, whatever helps you, whatever empowers you, it’s like in Marvel movies. Heroes always find people to help them. Be a hero of quality.

Jonathon Wright:

That’s fantastic. Thank you so much, Mariia. It’s been a wonderful podcast, and I’m definitely in for a franchise. I’m not sure about wearing a costume, but I know you mentioned in your blog that your husband’s been doing 3D printing. Maybe we can print off some superheroes and you can paint them once we go live. Thanks so much again, I look forward to reading your next blogs.


Mariia Hutsuk:

Thank you so much for this talk and for inviting me. It was really a big pleasure to talk to you.
