Artificial intelligence (AI) is no longer the “next big thing”—it’s here, and it’s reshaping industries in real time. For businesses seeking to stay competitive, leveraging AI is no longer optional. Yet, while the potential of AI is clear, implementing it effectively is a different story. At NIX, we’ve received a surge of inquiries from clients asking how AI can enhance their software development and testing processes. These conversations sparked an important journey: figuring out how to use AI in quality assurance (QA) to bring real, tangible benefits.
To explore this fascinating topic, we sat down with Serhii Mohylevskyi, NIX’s QA Practice Leader. With over a decade of experience spanning manual, automated, and performance testing, Serhii has dedicated his career to ensuring the highest standards of quality. As an industry trailblazer, he’s adept at spotting emerging trends and integrating innovative technologies—like AI—into NIX’s QA practices. Serhii has helped countless businesses elevate their software quality, streamline their software testing processes, and future-proof their technology strategies.
In this interview, Serhii shares insights into how NIX has embraced AI and generative AI in quality assurance, the challenges and breakthroughs along the way, the immense benefits AI brings to testing, and what the future holds for QA professionals in an AI-driven world. Stay tuned as we dive into this exciting journey and uncover the potential of AI in quality assurance.
Serhii: Certainly! NIX is a global software engineering and IT services provider with a 30-year history and over 3,500 completed projects. We have a team of over 3,000 in-house experts, including more than 400 QA engineers. We take pride in our ability to deliver high-quality solutions and achieve a 95% customer satisfaction rate. Any changes in our development and QA standards impact hundreds, if not thousands, of projects. This is why introducing AI into our QA processes was a particularly significant undertaking.
It was a bit of both, actually. About two years ago, when ChatGPT took the world by storm, we started seeing a surge of interest in AI from our clients. They were curious about how we were incorporating AI testing tools into our development processes, particularly in QA. Internally, we were also exploring the potential of AI solutions to enhance our QA practices and improve efficiency.
Serhii: You’re absolutely right, it was a challenge! We had to cut through the noise and focus on practical applications of AI, ML, and Deep Learning that could truly benefit our QA processes. We started by analyzing our existing QA workflows and pinpointing areas where AI could potentially make the biggest impact. This included tasks like automated test case generation, defect prediction, and test analysis. Then, we began researching various AI tools and platforms, evaluating their capabilities and suitability for our specific needs. It was important for us to find solutions that could integrate seamlessly with our existing workflows and deliver tangible value to our clients.
Serhii: We had a few key goals in mind when it comes to the implementation of AI in quality assurance. First and foremost, we believed that AI could help us increase the productivity of our engineers and teams, thus providing top-tier QA solutions. This could be achieved by automating repetitive tasks, such as test data generation or bug report analysis, freeing up our QA experts to focus on more complex and strategic activities. Secondly, we wanted to reduce testing time and costs for our clients. By optimizing our QA processes with AI, we aimed to deliver software faster and more efficiently, ultimately providing cost savings to our clients. And finally, we saw AI as a way to enhance the quality of our software. By leveraging AI’s ability to analyze vast amounts of data and identify patterns, we hoped to detect defects earlier in the development cycle and prevent them from reaching production.
Serhii: We spent a couple of months investigating everything we could find related to AI in quality assurance. We collected a lot of data and categorized the tools based on their readiness for commercial use and whether we could confidently recommend them to our clients. We found that most AI tools for QA fell into three categories: those that existed previously but added AI features on the hype wave, those that were entirely new AI-based products, and those that were essentially false advertising.
The first category, existing tools with added AI features, often had good quality and UI, but the AI was more of a marketing gimmick than a truly valuable feature. They offered things like generating user avatars or checking spelling in test cases, but nothing that truly transformed our QA processes. These products were also quite expensive for the limited AI benefits they provided.
The second category, new AI-based products, showed more promise in terms of actual “intelligence.” However, they often lacked polish, had buggy UIs, and some of their innovative ideas didn’t quite work as expected. While these tools weren’t ready for prime time, they gave us a glimpse into the potential future of AI in QA.
Finally, there were products that made grand promises but lacked substance, sometimes even asking for credit card information before revealing their actual capabilities. These were obvious cash grabs or scams, and we tried to avoid them. However, the sheer number of such products was surprising.
Serhii: Absolutely! Amongst the noise, we did find some genuinely good ideas and promising AI test automation tools. For example, we came across services that could generate automated tests against real applications and even document the test cases along the way. There were also tools that could generate test cases based on a feature description, which could be a huge time-saver for QA engineers. And some tools offered the ability to “heal” automated tests that failed due to unexpected changes in the application, reducing maintenance efforts and improving the robustness of automated testing. We even found tools that could provide automatic recommendations for a test plan from a pool of test cases.
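To make the “self-healing” idea Serhii mentions concrete, here is a minimal, hypothetical sketch of the pattern such tools use: when a test’s primary locator no longer matches the application, the tool falls back to alternative attributes captured earlier, instead of failing the test outright. The class and attribute names below are illustrative assumptions, not any specific product’s API.

```python
# Hypothetical sketch of a "self-healing" locator: if the primary
# selector stops matching (e.g. an id was renamed), try fallback
# attributes recorded when the test was first created.
from dataclasses import dataclass


@dataclass
class Element:
    """Minimal stand-in for a DOM node."""
    attrs: dict


@dataclass
class HealingLocator:
    # Ordered strategies: (attribute name, expected value).
    # The first entry is the primary locator; the rest are fallbacks.
    strategies: list
    healed: bool = False

    def resolve(self, dom: list) -> Element:
        for i, (attr, value) in enumerate(self.strategies):
            for el in dom:
                if el.attrs.get(attr) == value:
                    self.healed = i > 0  # True if a fallback was needed
                    return el
        raise LookupError("no strategy matched; test needs human review")


# The submit button's id changed from "btn-submit" to "btn-send",
# but its accessible label stayed the same, so the locator "heals".
dom = [Element({"id": "btn-send", "aria-label": "Submit order"})]
locator = HealingLocator([("id", "btn-submit"), ("aria-label", "Submit order")])
el = locator.resolve(dom)
print(locator.healed)  # prints: True
```

Real self-healing tools apply the same idea against a live browser session and typically log the healed locator so an engineer can update the test later.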
Serhii: Unfortunately, yes. Some of the products lacked a complete feature set for what they claimed to do. Others required excessive access to the application, like access to the entire source code, which raised significant security concerns. Some didn’t seem to scale well, which would be a problem for our larger projects. There were various other reasons that prevented us from recommending them for an average project.
Serhii: That’s where things get interesting. We had a column in our summary table to indicate whether a product was ready for commercial use on a large scale. And I’m sure you’re curious how many products actually received a “Yes” in that column—the answer is zero. None of the QA-specific tools we evaluated met our criteria for large-scale projects due to limitations in their feature set, potential security concerns, or scalability issues.
Serhii: Not at all! It’s important to note that we intentionally excluded major players like ChatGPT, Google’s Gemini, and GitHub Copilot from our initial analysis. We were already familiar with these tools and using them in some capacity, but without a specifically investigated and documented approach. Our focus was on evaluating QA-specific AI tools, and while none of those met our criteria at the time, we still see tremendous potential for AI in QA testing to enhance our processes.
Serhii: Exactly! Our next step was to understand how our engineers were already using AI and then develop a structured approach to maximize its benefits. We created a questionnaire for our 400 QA engineers, asking about their AI usage in their daily work. We received around 350 responses, and approximately half of our QA team was already utilizing AI in various ways. The top use cases included assistance with test automation, brainstorming ideas (including generating test cases), generating test data, proofreading texts like emails, and automating routine activities. This confirmed our belief that AI in software quality assurance significantly enhances our processes. To translate these individual use cases into a standardized approach, we developed an in-house course focused specifically on enhancing the quality assurance process using generative AI. This course covered best practices, ethical considerations, and practical techniques for leveraging AI in various QA tasks.
The overall content of our course looks like this:
With all that in place, our approach boiled down to identifying what general-purpose AI models are best suited for and empowering our engineers to use AI in those flows.
And now, as engineers across the company complete the training, they’re also starting to come up with new use cases and optimizations to the existing ones.
Serhii: We’ve seen significant improvements in efficiency and productivity. Our measurements show that manual testers at the mid-to-senior level spend about 20% less time on test case generation and documentation compared to pre-AI times. For engineers who code, the benefits are even more pronounced. Tasks that were previously time-consuming or impossible are now achievable with AI assistance. For example, generating multiple test frameworks in a single day for A/B testing or conducting code reviews with a single QA engineer are now possible.
However, there are downsides. While AI in quality assurance can be incredibly helpful, it’s not perfect and requires human oversight. For instance, when asked to write test cases, AI might miss critical checks or generate nonsensical ones. This means that AI-generated output needs to be reviewed and validated by experienced QA professionals. In fact, we’ve found that AI assistance can be detrimental to junior engineers who may not have the experience to discern correct output from incorrect output. It’s similar to how a junior employee wouldn’t typically have a personal assistant—they’d need to develop their core skills first.
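The human-oversight requirement Serhii describes can be partly front-loaded with cheap structural checks, so reviewers spend their time on content rather than obvious gaps. Here is a minimal sketch of such a gate; the field names are illustrative, not a specific tool’s schema.

```python
# A minimal sanity gate for AI-generated test cases: flag cases that
# are structurally incomplete before they reach a human reviewer.
# Field names below are assumptions for this sketch.
REQUIRED_FIELDS = ("title", "steps", "expected_result")


def needs_review(case: dict) -> list:
    """Return a list of problems; an empty list means the structural
    checks pass (a human still reviews the content itself)."""
    return [f"missing {f}" for f in REQUIRED_FIELDS if not case.get(f)]


good = {
    "title": "Login with valid credentials",
    "steps": ["Open login page", "Enter valid credentials", "Submit"],
    "expected_result": "User lands on the dashboard",
}
bad = {"title": "Check stuff", "steps": []}

print(needs_review(good))  # prints: []
print(needs_review(bad))   # prints: ['missing steps', 'missing expected_result']
```

A gate like this catches only the mechanical failures; the nonsensical-but-well-formed cases Serhii mentions still require an experienced engineer’s judgment.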
Serhii: Many companies form their initial impressions of AI from advertisements and exaggerated claims, leading to unrealistic expectations. They then try to incorporate these “unreal” tools into their workflows, and inevitably, it doesn’t work out well. The reality is that any new technology needs time to mature before it can be truly effective in a commercial setting. Generative AI, in its current form, is a great tool for QA engineers, but the software testing industry still needs to refine AI-powered QA-specific products to reach their full potential.
Serhii: The future of AI in QA is bright, with AI transforming how software testing is conducted by enhancing processes, boosting efficiency, and ensuring greater accuracy. AI is set to be a pivotal force, enabling QA engineers to streamline workflows and achieve superior results in less time. Below are key advancements that highlight AI’s potential in shaping the future of QA:
The future of AI in quality assurance is undoubtedly promising. AI’s ability to analyze vast amounts of data, identify patterns, and automate repetitive tasks holds immense potential for improving the efficiency and effectiveness of human testers and QA processes.
However, while AI can automate certain aspects of QA, it cannot entirely replace the expertise and judgment of human QA professionals. The human touch remains crucial for tasks that require critical thinking, creativity, and a nuanced understanding of user expectations. As AI continues to evolve, the future of QA lies in a collaborative approach, where AI augments human capabilities, enabling QA teams to focus on more strategic and complex tasks while AI handles the heavy lifting of data analysis and test automation. This synergy will lead to more robust, efficient, and user-centric software development processes, which is beneficial for tech teams and businesses.
Serhii: Our experience has taught us several important lessons:
At NIX, we’ve embraced AI as a valuable tool to augment our QA processes and empower our engineers. Our team of experienced QA professionals, combined with our strategic approach to AI implementation, ensures that we deliver high-quality software solutions leveraging AI and generative AI in software quality assurance.
At NIX, we empower clients’ projects with gen AI in quality assurance to optimize testing, accelerate release cycles, and enhance software reliability. Our QA experts leverage machine learning, natural language processing, and generative AI in QA to automate test case generation, detect patterns in defects, and expand test coverage with precision. By implementing AI in QA testing, we reduce manual testing efforts, speed up issue resolution, and improve overall software quality. While AI enhances efficiency, our team ensures expert oversight to validate results and handle complex logic testing. With gen AI in QA, we help businesses streamline their QA processes, reduce costs, and bring high-quality products to market faster.
If you’re looking for a technology partner with a proven track record in QA and a forward-thinking approach to AI to outsource software testing, we invite you, our readers, to connect with us and explore how we can help you achieve your business goals.