I started by investigating the existing workflows and processes to see what had already been done before I joined. I noted all the available automated and manual test suites and observed how they were run in each environment. Then I identified the gaps and got to work.
Since it was important for me to introduce quality into the process as early as possible, we started with requirements. Once our requirements engineering process was stable enough, we introduced regular requirement reviews with development and QA team members. This helped us gain a common understanding and identify mistakes and missing pieces early.
Next came the code review process. Nowadays there are plenty of automated tools to help with that. We chose Sonar to run static code analysis before a human looks at the code, so that the main code smells, security vulnerabilities, and other bad practices are identified up front. This also created a stricter quality gate that ensures unit tests are written and have good coverage, so that any regressions introduced into the code are caught quickly.
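As a rough illustration of what such a coverage gate encourages, here is a minimal pytest-style unit test; the function under test and its rules are hypothetical and not taken from our actual codebase.

```python
# Illustrative only: a small unit test of the kind a coverage quality gate
# encourages. The function and its validation rules are hypothetical examples.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_happy_path():
    assert apply_discount(100.0, 15) == 85.0


def test_apply_discount_rejects_invalid_percentage():
    # A regression that loosens the input validation would fail this test.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```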
There were some manual testing processes in place already, so it was important to standardize them. We therefore introduced a test case management tool that allows us to create new test cases, group them into test suites, and track executions. We chose X-ray and are very happy with the features it provides.
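For teams that also want automated results tracked in the same place, the sketch below shows one way to push a JUnit XML report into X-ray as a test execution. It assumes the Xray Cloud REST API and credentials supplied via environment variables; the endpoint paths, report location, and project key are assumptions to verify against the X-ray documentation.

```python
# Rough sketch: importing an automated test run into X-ray so executions are
# tracked next to manual ones. Endpoints, credentials, and the project key
# are assumptions based on the Xray Cloud REST API.
import os

import requests

XRAY_BASE = "https://xray.cloud.getxray.app/api/v2"  # assumed Xray Cloud base URL


def upload_junit_results(report_path: str, project_key: str) -> None:
    # Exchange API-key credentials for a short-lived bearer token.
    token = requests.post(
        f"{XRAY_BASE}/authenticate",
        json={
            "client_id": os.environ["XRAY_CLIENT_ID"],
            "client_secret": os.environ["XRAY_CLIENT_SECRET"],
        },
        timeout=30,
    ).json()
    # Import a JUnit XML report as a new test execution in the given project.
    with open(report_path, "rb") as report:
        response = requests.post(
            f"{XRAY_BASE}/import/execution/junit",
            params={"projectKey": project_key},
            headers={"Authorization": f"Bearer {token}", "Content-Type": "text/xml"},
            data=report,
            timeout=60,
        )
    response.raise_for_status()


if __name__ == "__main__":
    upload_junit_results("reports/junit.xml", "QA")  # hypothetical path and project key
```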
I also introduced a layer of automated integration tests that run after the code is deployed to an environment for the first time. Our QA team works constantly to maintain them and to keep coverage good as new features are added to the codebase.
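A simplified sketch of such a post-deployment integration check could look like the following; the base URL, endpoints, and expected payloads are placeholder assumptions, not our actual services.

```python
# Minimal post-deployment integration checks, run against the freshly
# deployed test environment. Endpoints and payloads are placeholders.
import os

import requests

BASE_URL = os.environ.get("TEST_ENV_URL", "https://test.example.com")  # assumed variable


def test_health_endpoint_reports_ok():
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"


def test_catalog_returns_items():
    # Confirms the deployed service can reach its backing systems.
    response = requests.get(f"{BASE_URL}/api/items", params={"limit": 5}, timeout=10)
    assert response.status_code == 200
    assert len(response.json()) > 0
```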
Alongside that improvement, we started developing end-to-end tests for existing and new products. Their coverage is still limited, but they make sure that our main business scenarios keep working as the code changes.
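To give an idea of what such an end-to-end check can look like, here is a short sketch using Playwright's Python API; the URL, selectors, and credentials are placeholders rather than the real product's.

```python
# Simplified end-to-end scenario: log in and reach the main page.
# URL, selectors, and test data are placeholder assumptions.
from playwright.sync_api import sync_playwright


def test_user_can_log_in_and_reach_dashboard():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://test.example.com/login")   # placeholder URL
        page.fill("#email", "qa.user@example.com")    # placeholder selector and data
        page.fill("#password", "not-a-real-password")
        page.click("button[type=submit]")
        # The scenario only passes if the landing page is actually reachable.
        page.wait_for_selector("text=Dashboard")
        browser.close()
```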
Even though there is still a lot of manual work involved in testing that could be automated, I am very proud that our processes run quite smoothly with the level of automation we have now. It means the QA team can focus on more creative scenarios and edge-case issues during manual testing. This was not possible before, because regression testing took a lot of time and labor. It also gives us much more confidence in the code we bring to production, because we know the likelihood of a customer seeing any issues is very low.
Furthermore, we introduced performance and load testing on a regular basis, so that we can provide good input to our operations team and have confidence that we can flexibly support periods of exceptionally high traffic or workloads across the platform.
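As a minimal sketch of what a regular load test can look like, the Locust script below simulates a read-heavy traffic mix; the endpoints and task weights are illustrative assumptions, not our real traffic profile.

```python
# Minimal Locust load-test sketch; endpoints and traffic mix are assumptions.
from locust import HttpUser, task, between


class PlatformUser(HttpUser):
    wait_time = between(1, 3)  # simulated think time between requests

    @task(3)
    def browse_listings(self):
        self.client.get("/api/items")  # hypothetical read-heavy endpoint

    @task(1)
    def view_item_details(self):
        self.client.get("/api/items/42")  # hypothetical detail endpoint
```

A run such as `locust -f loadtest.py --host https://test.example.com --users 200 --spawn-rate 20` then gives the operations team throughput and response-time figures for a defined level of concurrency.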
I see the QA team’s role and processes as the “eyes” of the whole software lifecycle. We constantly learn about the product and its state and provide valuable input to stakeholders such as product owners and the development team, so that well-informed decisions can be made about whether the product is good enough to go to production and be used by our customers.
Together with that, we are constantly reviewing existing processes to ensure that quality can be built in as early as possible, which means the QA team needs to be involved in every stage of the software development lifecycle. We have a good understanding of both the user’s point of view and the technical aspects, so we can offer a unique perspective and opinion on certain parts of product development.
Manual testing is part of our software development process. After code development is done, the code is deployed to a realistic test environment, where the developer can run a quick manual test to discover and fix issues that were not visible in the local development environment. Then we move to functional, regression, and exploratory manual testing, which are performed by the QA team. We have introduced processes and tools so that we can easily organize our test cases and track our executions and bugs. It was very important to accommodate exploratory testing, since this part cannot be automated. It allows the QA team to gain unique knowledge of the product, exercise creativity, and discover edge cases that can be used in future testing.
We also work on automating many scenarios as system tests and end-to-end tests. We do not have dedicated test automation engineers, so all QA team members participate in test automation. We keep a backlog of scenarios that need to be automated, and each QA team member can pull a task from it when they have time. Since the QA engineer has already tried the feature manually and knows very well how it works, they can design a good automated test case.
We are constantly expanding our automated test suites, covering new product features, and improving regression coverage.
We are still improving all our processes. While we strive for continuous improvement in every area, we also recognize that stakeholders such as product owners and business analysts need to be aware that QA starts with requirements gathering.
That’s why we started holding regular requirement and process review sessions, where the requirements are presented to the whole team and then discussed. We have found these sessions very useful: there are usually a lot of questions, and sometimes definitions turn out to be ambiguous or to contain gaps and errors. These discussions ensure that when a requirement gets to the development team, they have a comprehensive and clear understanding of what is needed, and the QA engineers know how to approach testing and what to focus on.
As the AURENA team grows, it will be vital to stay on top of process improvement, strategy effectiveness, and knowledge sharing, so that our teams stay well coordinated. We have recently introduced pair-testing sessions with our developers and early cross-functional design reviews with all team members. We believe this creates a joint focus and shared understanding between developers and QA engineers.
Together with that, we are working on improving our performance testing to make it more regular and provide valuable results early in the process.
Another ongoing challenge is to keep expanding our automated test suites while improving their speed and stability.
When I started working in the field, I thought the main challenge of the QA engineering role was a technical one: understanding requirements and writing good manual and automated test cases. Now I see that the main challenge for any QA team or engineer is really about communication. The QA engineer needs to be part of the software development lifecycle from the start.
We need to constantly question everything and not be afraid to voice our questions at any stage. Every question can reveal a potential knowledge gap, a misunderstanding of the requirements between the developers and the QA team, or a potential issue in the implemented code. For that, we need to maintain great communication and build trust and positive team dynamics. I am happy to say that this is the case at AURENA Tech, which makes it easy for the QA team to do our job well and constantly improve the product.