Artificial Intelligence has taken shape gradually since the 1960s and today permeates fields such as retail, medicine, banking, social networking and many more.
It has come to assist humans, as it should, wherever the problem space has:
- Non-deterministic rules, such as a spam filter. Imagine you had to write down the rules for a spam filter: as soon as you have figured out one set of words to mark as spam, a few more appear. AI aids in learning from examples and handling this non-determinism.
- Evolving interactions, such as reading handwritten text (cheque readers), listening to voice (voice assistants), classifying images (face recognition) and more.
- Deriving insights from huge datasets to predict the next set of actions as an 'informed' business. Without AI, decisions would be taken based on a human understanding of limited data, which can be skewed.
- Automation of repetitive, predictable tasks such as scaling elastic infrastructure.
Software development and testing face many similar challenges, and technologists have started seeking AI assistance to excel in those areas. To give a broad idea: using AI assistance for product visioning and shaping user interactions, analysing and predicting failures from the gigabytes of logs produced every day, scaling infrastructure elastically with usage, and a lot more.
In this article, I would like to discuss specifically how AI can assist in the testing space, both from the perspective of an individual tester and of an organisation that needs elaborate testing skills. We shall also see what the market offers today along these lines.
If you are all geared up, then let's begin by exploring the day of our Lead QA, Pamela! She works in an agile software development team and is responsible for testing an application that consists of web and mobile UIs with backend services. Sounds like a typical QA job?
Good Morning, Pamela!
Pamela starts the day warming up to good mornings, mails and pings. Before picking up any new tasks, she opens the regression suite results from the previous night's run to quickly check the pass percentage and report defects, if any. She realises the test suite has grown quite large, as the project has now been running for a year. She reminds herself to follow up with the recruitment team about the additional QA hire she has been waiting on for three months.
Pamela is shocked to see so many red lines on the results dashboard: there are a lot of test failures. She lets out a heavy sigh, thinking about how her day is going to pan out. She knows it may take until lunch to analyse each of these failures and identify them as defects, test changes or environment issues, after which she needs to start fixing them so that the dashboard is greener the next time she runs the suite. Her morning is going to be occupied.
If AI could assist Pamela here:
- Could test automation tools be intelligent enough to recognise that a test failure is actually an issue in the test itself, and suggest an auto-correction to Pamela?
- Could AI detect system issues during a test run and stop further tests from running? Could the dev team be alerted to the system issue while it is failing? Better still, could it try fixing common system failures, say by rebooting a service?
- Could AI help identify application defects and log them on approval, given that mechanisms to capture videos and screenshots, plus a BDD layer, are already in place?
- Could there be assistance in deriving insights from test suite failures, such as features that fail frequently in recent runs, related issues in user journeys, or pass percentages of different features over a chosen period? These would help with root-cause analysis and give the dev team a heads-up to incorporate the right practices.
ReportPortal.io addresses some, though not all, of these questions. The tool helps tag test results from the logs into categories of defects, test failures and system issues. Tools like Test.ai and Functionize also assist with test maintenance when there are changes at the UI layer, as in the first question above.
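To make the triage idea concrete, here is a minimal sketch of keyword-based failure classification. The categories, patterns and log strings are all illustrative assumptions; tools like ReportPortal learn such categories from historical triage decisions rather than from fixed rules like these.

```python
import re

# Hypothetical heuristics mapping log keywords to failure categories.
CATEGORIES = {
    "environment_issue": [r"connection refused", r"timed? ?out", r"503"],
    "test_issue": [r"nosuchelement", r"staleelement", r"unable to locate"],
}

def triage(failure_log: str) -> str:
    """Classify a failure log into a coarse category for review."""
    text = failure_log.lower()
    for category, patterns in CATEGORIES.items():
        if any(re.search(p, text) for p in patterns):
            return category
    # Nothing matched: treat it as a possible product defect and escalate.
    return "possible_defect"

print(triage("HTTPError: 503 Service Unavailable from payments-svc"))
print(triage("NoSuchElementException: #checkout-btn"))
print(triage("AssertionError: expected total 100, got 90"))
```

Even this crude split would let Pamela look at the "possible defect" bucket first instead of wading through environment noise.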
Pamela checks the pass percentage one last time before running into a team retrospective meeting before lunch. The team had raised a few concerns: was the test coverage thorough, were the right tests covered at the right layers, and why was the regression suite almost always red? While Pamela listened to all these concerns, which seemed valid, she once again reminded herself to hasten the new-hire process, as she felt the lack of capacity was one of the major contributors to them.
A lunchtime vent turns into a voila moment!
Pamela hit lunch with her colleague Gina, another QA, venting about the events of the morning and how she felt crunched for capacity and short of solutions to improve the situation until the new QA was hired. Gina remembered the fascinating AI-assisted test authoring tools she had heard of at a recent conference, such as Test.ai, Functionize, Appvance, Testim and TestCraft (some are in beta stages and some are paid tools), and walked Pamela through them. Pamela couldn't contain her curiosity and decided to raise a testing spike card to explore these tools in the next iteration!
Pamela is fascinated by AI in test automation!
With the capacity problem of increasing test coverage addressed to an extent, Pamela wondered if she could find solutions to the other concerns from the retrospective. After lunch, she set off to find AI tools that assist in 'understanding' test coverage better, especially the areas that do not have enough coverage, plus tools that provide insights into coverage across different layers. A tool called SeaLights tries to solve this with AI (once again, a paid tool), and she added that to the spike list too.
She wondered whether, in the future, there will be tools:
- To find duplicate test cases and make the suite crisper.
- To automatically suggest the minimal set of tests to run for each change in a commit.
- To automate API tests with meaningful, domain-relevant test data.
Pamela wants more AI in manual testing!
After her little tête-à-tête with AI, she picked up a new story card for manual testing. As she went through the various activities of manual testing, she couldn't help but ponder whether AI could assist there too.
- Can AI assist in generating meaningful test data, given the domain model of a database?
- Can AI help in exploratory testing of edge cases, using methodologies like boundary value analysis or all-pairs testing on a form or an API?
- Can AI help in testing internationalisation of text across web and mobile UIs?
- Can AI help in visual testing of the UI's look, feel and responsiveness across browsers?
- Is there help to cover basic security checks on a website?
- Can her exploratory testing be translated into test cases automatically, helping document them?
As the day came to a close, she explored the above questions. She learnt that visual testing is becoming prevalent with Applitools Eyes' AI technology and added it to her spike list. She also got to know that OWASP ZAP's spidering tools can help her with basic security checks. So there is some relief on the manual testing front too.
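One of the methodologies mentioned above, boundary value analysis, is mechanical enough to sketch directly. Assuming an inclusive valid range for a numeric form field (the 'age' field here is an invented example), the classic technique tests the edges of the range plus the first invalid value on each side:

```python
def boundary_values(low, high):
    """Boundary-value analysis for an inclusive numeric range:
    the edges of the valid range plus one step outside each edge."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# e.g. a hypothetical 'age' field that must accept 18..65 inclusive
print(boundary_values(18, 65))
# [17, 18, 19, 64, 65, 66]
```

The interesting AI question is the step before this: inferring the valid ranges and field constraints from the domain model, so that generators like this can be applied automatically.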
What a day, she exclaimed with joy as she switched her laptop off!
So if we look back at Pamela's day, we saw how current AI tools provide limited but critical assistance in the testing space. It's no surprise that most of these tools come at a cost. It's a cost we need to weigh against the scarce QA supply pool in certain markets, the wiser use of senior QAs' time on productive work rather than tasks that can be left to AI tools, and the better insight into overall quality that helps alter the course of action, be it the process or the right skill-set mix in the team.
Moreover, the software industry itself is becoming bigger and wider, and the skills a QA needs to do their job effectively keep increasing, be it big data, infrastructure testing, performance, accessibility and so on. To scale up without linearly expanding the team, adopting AI in the testing space can be one crucial first step!