
Artificial intelligence (AI) is no longer the “next big thing”—it’s here, and it’s reshaping industries in real-time. For businesses seeking to stay competitive, leveraging AI is no longer optional. Yet, while the potential of AI is clear, implementing it effectively is a different story. At NIX, we’ve received a surge of inquiries from clients asking how AI can enhance their software development and testing processes. These conversations sparked an important journey: figuring out how to use AI in quality assurance (QA) to bring real, tangible benefits.

To explore this fascinating topic, we sat down with Serhii Mohylevskyi, NIX’s QA Practice Leader. With over a decade of experience spanning manual, automated, and performance testing, Serhii has dedicated his career to ensuring the highest standards of quality. As an industry trailblazer, he’s adept at spotting emerging trends and integrating innovative technologies—like AI—into NIX’s QA practices. Serhii has helped countless businesses elevate their software quality, streamline their software testing processes, and future-proof their technology strategies.

In this interview, Serhii shares insights into how NIX has embraced AI and generative AI in quality assurance, the challenges and breakthroughs along the way, the immense benefits AI brings to testing, and what the future holds for QA professionals in an AI-driven world. Stay tuned as we dive into this exciting journey and uncover the potential of AI in quality assurance.

Interviewer: Serhii, could you tell us a bit about NIX and what sparked your interest in exploring AI for QA? Was it client-driven, an internal initiative, or a combination of both?

Serhii: Certainly! NIX is a global software engineering and IT services provider with a 30-year history and over 3,500 completed projects. We have a team of over 3,000 in-house experts, including more than 400 QA engineers. We take pride in our ability to deliver high-quality solutions and achieve a 95% customer satisfaction rate. Any changes in our development and QA standards impact hundreds, if not thousands, of projects. This is why introducing AI into our QA processes was a particularly significant undertaking.

It was a bit of both, actually. About two years ago, when ChatGPT took the world by storm, we started seeing a surge of interest in AI from our clients. They were curious about how we were incorporating AI testing tools into our development processes, particularly in QA. Internally, we were also exploring the potential of AI solutions to enhance our QA practices and improve efficiency.

Interviewer: It sounds like you were facing a significant challenge. How did you approach the task of finding the right AI tools and strategies for QA, especially given the hype and high expectations surrounding AI at the time?

Serhii: You’re absolutely right, it was a challenge! We had to cut through the noise and focus on practical applications of AI, ML, and Deep Learning that could truly benefit our QA processes. We started by analyzing our existing QA workflows and pinpointing areas where AI could potentially make the biggest impact. This included tasks like automated test case generation, defect prediction, and test analysis. Then, we began researching various AI tools and platforms, evaluating their capabilities and suitability for our specific needs. It was important for us to find solutions that could integrate seamlessly with our existing workflows and deliver tangible value to our clients.

Interviewer: What were some of the key benefits you were hoping to achieve by incorporating AI into your QA processes? Were there any specific goals or metrics you were aiming for?

Serhii: We had a few key goals in mind when it comes to the implementation of AI in quality assurance. First and foremost, we believed that AI could help us increase the productivity of our engineers and teams, thus providing top-tier QA solutions. This could be achieved by automating repetitive tasks, such as test data generation or bug report analysis, freeing up our QA experts to focus on more complex and strategic activities. Secondly, we wanted to reduce testing time and costs for our clients. By optimizing our QA processes with AI, we aimed to deliver software faster and more efficiently, ultimately providing cost savings to our clients. And finally, we saw AI as a way to enhance the quality of our software. By leveraging AI’s ability to analyze vast amounts of data and identify patterns, we hoped to detect defects earlier in the development cycle and prevent them from reaching production.

Generative AI tools have the potential to improve the developer experience

Interviewer: So, you embarked on this journey to explore AI in QA. What were some of your key findings? Did you encounter any tools that met your expectations or were ready for commercial use?

Serhii: We spent a couple of months investigating everything we could find related to AI in quality assurance. We collected a lot of data and categorized the tools based on their readiness for commercial use and whether we could confidently recommend them to our clients. We found that most AI tools for QA fell into three categories: those that existed previously but added AI features on the hype wave, those that were entirely new AI-based products, and those that were essentially false advertising.

The first category, existing tools with added AI features, often had good quality and UI, but the AI was more of a marketing gimmick than a truly valuable feature. They offered things like generating user avatars or checking spelling in test cases, but nothing that truly transformed our QA processes. These products were also quite expensive for the limited AI benefits they provided.

The second category, new AI-based products, showed more promise in terms of actual “intelligence.” However, they often lacked polish, had buggy UIs, and some of their innovative ideas didn’t quite work as expected. While these tools weren’t ready for prime time, they gave us a glimpse into the potential future of AI in QA.

Finally, there were products that made grand promises but lacked substance, sometimes even asking for credit card information before revealing their actual capabilities. These were obvious cash grabs or scams, and we tried to avoid them. However, the sheer number of such products was surprising.

Interviewer: It sounds like you encountered a mixed bag of AI tools for QA. Were there any that stood out as genuinely useful or promising?

Serhii: Absolutely! Amongst the noise, we did find some genuinely good ideas and promising AI test automation tools. For example, we came across services that could generate automated tests against real applications and even document the test cases along the way. There were also tools that could generate test cases based on a feature description, which could be a huge time-saver for QA engineers. And some tools offered the ability to “heal” automated tests that failed due to unexpected changes in the application, reducing maintenance efforts and improving the robustness of automated testing. We even found tools that could provide automatic recommendations for a test plan from a pool of test cases.
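The “healing” idea Serhii describes can be pictured as a locator-fallback strategy: when the primary selector no longer matches after a UI change, the test tries alternative attributes recorded earlier instead of failing outright. Below is a minimal, framework-agnostic sketch with a stubbed DOM; all names and data are illustrative, not taken from any specific tool.

```python
# Minimal sketch of a "self-healing" element lookup: if the primary
# locator no longer matches, fall back to previously recorded
# alternatives instead of failing the test immediately.
# The DOM is stubbed as a list of dicts; all names are hypothetical.

def find_element(dom, locators):
    """Try each locator (attribute, value) in priority order;
    return the matched element and the locator that worked."""
    for attr, value in locators:
        for element in dom:
            if element.get(attr) == value:
                return element, (attr, value)
    raise LookupError(f"No locator matched: {locators}")

# Page snapshot after a UI change: the element's id was renamed,
# but its visible text label survived.
dom = [{"id": "submit-btn-v2", "text": "Submit", "tag": "button"}]

# Locators recorded when the test was first generated, in priority order.
locators = [
    ("id", "submit-btn"),   # stale after the UI change
    ("text", "Submit"),     # fallback that still matches
]

element, used = find_element(dom, locators)
print(used)  # the fallback locator that "healed" the lookup
```

Real self-healing tools add a step this sketch omits: when a fallback fires, they update the stored primary locator so future runs match on the first try.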

Interviewer: That sounds impressive! Were there any reasons why you couldn’t adopt these promising tools for your projects?

Serhii: Unfortunately, yes. Some of the products lacked a complete feature set for what they claimed to do. Others required excessive access to the application, like access to the entire source code, which raised significant security concerns. Some didn’t seem to scale well, which would be a problem for our larger projects. There were various other reasons that prevented us from recommending them for an average project.

Interviewer: So, after all this research and analysis, did you find any AI tools suitable for your QA processes?

Serhii: That’s where things get interesting. We had a column in our summary table to indicate whether a product was ready for commercial use on a large scale. And I’m sure you’re curious how many products actually received a “Yes” in that column—the answer is zero. None of the QA-specific tools we evaluated met our criteria for large-scale projects due to limitations in their feature set, potential security concerns, or scalability issues.

Interviewer: Does this mean you’ve given up on AI in testing altogether?

Serhii: Not at all! It’s important to note that we intentionally excluded major players like ChatGPT, Google’s Gemini, and GitHub Copilot from our initial analysis. We were already familiar with these tools and using them in some capacity, but without a specifically investigated and documented approach. Our focus was on evaluating QA-specific AI tools, and while none of those met our criteria at the time, we still see tremendous potential for AI in QA testing to enhance our processes.

Interviewer: It sounds like you already had some adoption of general-purpose AI tools within your QA team. How did you go about formalizing and optimizing their use, and what were the most common ways your QA engineers were using AI?

Serhii: Exactly! Our next step was to understand how our engineers were already using AI and then develop a structured approach to maximize its benefits. We created a questionnaire for our 400 QA engineers, asking about their AI usage in their daily work. We received around 350 responses, and approximately half of our QA team was already utilizing AI in various ways. The top use cases included assistance with test automation, brainstorming ideas (including generating test cases), generating test data, proofreading texts like emails, and automating routine activities. This confirmed our belief that AI in software quality assurance significantly enhances our processes. To translate these individual use cases into a standardized approach, we developed an in-house course focused specifically on enhancing the quality assurance process using generative AI. This course covered best practices, ethical considerations, and practical techniques for leveraging AI in various QA tasks.
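One routine task from the survey, test data generation, is easy to picture concretely: an engineer asks a model for boundary-value inputs for a field and then validates them locally before use. Here is a hedged sketch of the validation half; the “generated” list is hand-written stand-in data, and the username rule is an invented example, not an actual NIX workflow.

```python
# Sketch: sanity-checking (hypothetical) AI-generated test data before
# use. The "generated" list stands in for a model's output to a prompt
# like "give boundary values for a username field of 3-20 alphanumeric
# characters". All values and rules here are illustrative.
import string

ALLOWED = set(string.ascii_letters + string.digits)

def is_valid_username(value):
    return 3 <= len(value) <= 20 and all(c in ALLOWED for c in value)

generated = ["ab", "abc", "a" * 20, "a" * 21, "user_1", "User99"]

# Partition into cases the field should accept vs. reject, so the
# engineer can confirm both sides of each boundary are covered.
accept = [v for v in generated if is_valid_username(v)]
reject = [v for v in generated if not is_valid_username(v)]
print(accept, reject)
```

The point of the split is the review step Serhii emphasizes later: a human still confirms the data covers both sides of each boundary before it enters a test suite.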

The overall content of our course looks like this:

Training Levels

With all that in place, our approach just boiled down to figuring out what general-use AI models are best used for and empowering our engineers to use AI for those flows.

And now, as engineers all across the company are learning, they’re also starting to come up with new use cases and optimizations of the existing ones.

Interviewer: These advancements sound promising! Can you quantify the benefits you’ve seen from integrating AI into your QA processes? And are there any downsides or challenges you’ve encountered?

Serhii: We’ve seen significant improvements in efficiency and productivity. Our measurements show that manual testers at the mid-to-senior level spend about 20% less time on test case generation and documentation compared to pre-AI times. For engineers who code, the benefits are even more pronounced. Tasks that were previously time-consuming or impossible are now achievable with AI assistance. For example, generating multiple test frameworks in a single day for A/B testing or conducting code reviews with a single QA engineer are now possible.

However, there are downsides. While AI in quality assurance can be incredibly helpful, it’s not perfect and requires human oversight. For instance, when asked to write test cases, AI might miss critical checks or generate nonsensical ones. This means that AI-generated output needs to be reviewed and validated by experienced QA professionals. In fact, we’ve found that AI assistance can be detrimental to junior engineers who may not have the experience to discern correct output from incorrect output. It’s similar to how a junior employee wouldn’t typically have a personal assistant—they’d need to develop their core skills first.

Interviewer: Based on your experience, why do you think some companies struggle to effectively implement AI in their QA processes?

Serhii: Many companies form their initial impressions of AI from advertisements and exaggerated claims, leading to unrealistic expectations. They then try to incorporate these “unreal” tools into their workflows, and inevitably, it doesn’t work out well. The reality is that any new technology needs time to mature before it can be truly effective in a commercial setting. Generative AI, in its current form, is a great tool for QA engineers, but the software testing industry still needs to refine AI-powered QA-specific products to reach their full potential.

Interviewer: It’s been fascinating to hear about NIX’s journey with AI in QA. What are your thoughts on the future of AI in this field?

Serhii: The future of AI in QA is bright, with AI transforming how software testing is conducted by enhancing processes, boosting efficiency, and ensuring greater accuracy. AI is set to be a pivotal force, enabling QA engineers to streamline workflows and achieve superior results in less time. Below are key advancements that highlight AI’s potential in shaping the future of QA:

Future Outcomes of AI-driven Quality Assurance
  • AI-driven Test Automation: AI is elevating test automation by introducing self-healing capabilities to test scripts. AI in QA automation provides smart algorithms that adapt to application changes, automatically updating scripts and predicting failure points, significantly reducing maintenance efforts and strengthening automated testing frameworks.
  • Intelligent Test Case Generation: With the ability to analyze requirements, user stories, and historical data, AI generates comprehensive test cases that anticipate a range of potential issues. This predictive capability ensures more thorough testing coverage.
  • Predictive Defect Prevention: AI-powered analytics proactively predict defects by examining historical patterns, code repositories, and previous testing outcomes. This allows QA teams to address critical areas early, minimizing risks and enhancing software reliability.
  • Enhanced Test Execution and Analysis: AI in quality assurance accelerates test execution by identifying anomalies, patterns, and correlations in test results. This not only speeds up problem detection but also provides actionable insights into root causes, enabling faster resolutions.
  • Optimized Test Environments and Data Management: AI intelligently optimizes test environments by analyzing usage patterns and provisioning resources dynamically. It also facilitates realistic test data generation and management, ensuring that testing scenarios mirror real-world conditions effectively.
  • AI-powered Defect Reporting: AI tools automate detailed bug report creation by capturing essential data from recorded test sessions. These reports include problem descriptions, steps, expected outcomes, and relevant context, improving communication and streamlining the debugging process.
  • Continuous Testing Powered by AI: Integrating AI into continuous testing enables real-time analysis of software performance, security, and user experience. This rapid feedback loop supports faster, high-quality software releases.
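Of the capabilities listed above, predictive defect prevention often starts from something very simple: ranking modules by historical bug fixes and recent churn so testers know where to look first. A toy sketch with invented numbers; real models would draw far richer features from repository history.

```python
# Toy defect-risk ranking: score each file by past bug fixes and
# recent change count (churn). All file names, counts, and weights
# are invented for illustration only.
history = {
    "checkout.py": {"bug_fixes": 9, "recent_changes": 14},
    "profile.py":  {"bug_fixes": 2, "recent_changes": 3},
    "search.py":   {"bug_fixes": 5, "recent_changes": 8},
}

def risk(stats, w_bugs=0.7, w_churn=0.3):
    # Weight historical defects more heavily than raw churn.
    return w_bugs * stats["bug_fixes"] + w_churn * stats["recent_changes"]

ranked = sorted(history, key=lambda f: risk(history[f]), reverse=True)
print(ranked)  # files in descending risk order
```

Even this naive scoring captures the core idea: test effort flows to where defects historically cluster, rather than being spread evenly.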

The future of AI in quality assurance is undoubtedly promising. AI’s ability to analyze vast amounts of data, identify patterns, and automate repetitive tasks holds immense potential for improving the efficiency and effectiveness of human testers and QA processes.

However, while AI can automate certain aspects of QA, it cannot entirely replace the expertise and judgment of human QA professionals. The human touch remains crucial for tasks that require critical thinking, creativity, and a nuanced understanding of user expectations. As AI continues to evolve, the future of QA lies in a collaborative approach, where AI augments human capabilities, enabling QA teams to focus on more strategic and complex tasks while AI handles the heavy lifting of data analysis and test automation. This synergy will lead to more robust, efficient, and user-centric software development processes, which is beneficial for tech teams and businesses.

Interviewer: Thank you for sharing your insights, Serhii. What are some key takeaways from NIX’s journey with AI in quality assurance?

Serhii: Our experience has taught us several important lessons:

Key Takeaways From Communication With Serhii Mohylevskyi
  • AI is not a silver bullet. AI in quality assurance does help if you know how to use it, but it will never double or triple your productivity in the long run. AI in QA automation shows good results. The gains are modest, but they definitely exist, and they increase over time as people become more familiar with the toolset.
  • AI is not easy to introduce. If introduced at the company level, it definitely needs to be tuned to your specific needs. Just following popular opinions will not get you far.
  • AI helps engineers, but it can’t empower a junior to work as a senior. Generative AI in quality assurance needs to be fact-checked and controlled. If it’s not, you risk bringing the results of AI hallucinations into your work, believing them to be true. And dealing with those misconceptions is always a painful process.
  • Special-purpose AI tools for testing are not quite here yet. Well, they exist, but they don’t seem to bring much real value compared with non-AI tools of similar purpose.
  • Fortunately, I’m not losing my QA job any time soon. AI is exciting. It’s truly changing how we perceive our life in some aspects. But for now it’s only an assistive technology, and it’s not ready to tackle activities as complex as quality assurance.

At NIX, we’ve embraced AI as a valuable tool to augment our QA processes and empower our engineers. Our team of experienced QA professionals, combined with our strategic approach to AI implementation, ensures that we deliver high-quality software solutions leveraging AI and generative AI in software quality assurance.

At NIX, we empower clients’ projects with gen AI in quality assurance to optimize testing, accelerate release cycles, and enhance software reliability. Our QA experts leverage machine learning, natural language processing, and generative AI in QA to automate test case generation, detect patterns in defects, and expand test coverage with precision. By implementing AI in QA testing, we reduce manual testing efforts, speed up issue resolution, and improve overall software quality. While AI enhances efficiency, our team ensures expert oversight to validate results and handle complex logic testing. With gen AI in QA, we help businesses streamline their QA processes, reduce costs, and bring high-quality products to market faster.

If you’re looking for a technology partner with a proven track record in QA and a forward-thinking approach to AI to outsource software testing, we invite you, our readers, to connect with us and explore how we can help you achieve your business goals.
