MOBILE TESTING & THE IMPORTANCE OF TEST METRICS

If you’re afraid, don’t do it; if you’re doing it, don’t be afraid!
—Genghis Khan

Nothing can be loved or hated unless it is first understood.
—Leonardo da Vinci

If you are in the mobile application testing business, you need to accept that the less time you spend worrying about test metrics, the more problems you will have when your app goes live. The less time you spend on testing, the more unsatisfied users you will have. In addition, you may end up using more test budget than expected, losing management’s trust, improving the wrong areas of your software delivery life cycle, and even explaining to people why a live defect was not found during system testing.

Without the knowledge you would obtain through proper test metrics, you would not know (1) how good or bad your testing was and (2) which part of the development life cycle was underperforming. Never forget that if you want to reach correct conclusions, announce them, and convince people that you are doing good testing, you need to take care of your test metrics. If you are in the mobile business and do not pay enough attention to test metrics, your users will announce the quality instead of you. Obviously, you, the professional tester, would not want to be in that situation.

If you are prepared to collect your test metrics and give them the utmost attention, then you are ready to convince the people around you. Unless you have the full team’s commitment, you cannot establish a healthy working process. Your team (including project managers, business analysts, and software developers) should commit to the metrics that you will collect and publish. While collecting your metrics, it is also important to know that you will be creating competition between several parties (business analysts, software developers, and testers), and this competition will create resistance. Your findings will probably restrict people’s comfort zones. Some people will get frustrated by the transparent environment you will be creating through monitoring the metrics. They will object to the metrics you are publishing and, even worse, may threaten to withdraw their support or display other unprofessional attitudes.

If you are afraid already, I suggest that you not collect and publish any metrics. As a tester, you need to be brave, transparent, and informative; otherwise, anyone can run a test case from a test suite. No big deal.

The list below includes some tips that you may want to consider while you are setting your metrics-based testing framework:

  • Tell people why metrics are necessary.
  • Select metrics that fit your context (your test process, development methodology, organization structure, and so on).
  • Explain each metric that you will collect and publish, not only to testers but to everyone involved in the project.
  • Try to compare a metrics-based test project with a nonmetrics-based one and show the consequences of having no metrics.
  • Make people believe in metrics and get their commitment (you can spread the commitment by having ambassadors in different groups, such as some business analysts and some developers who believe in you and the metrics).
  • Try to be informative, gentle, and simple in your first test reports (the first metrics are very important, so let people digest them one by one).
  • Try to evaluate/monitor/measure processes and products rather than individuals (when people recognize that they are evaluated, they can act aggressively and unnaturally, and they can be more self-enclosed; worst of all, they may falsify the data you collect and make you publish inaccurate metrics).
  • Publish two different kinds of metrics: (1) point-in-time metrics (a snapshot showing the situation at a particular moment) and (2) trend metrics (showing how an activity changes over a period of time).
  • Do not send each test metric or report to everyone; upper managers will not be interested in very low-level metrics (you need to include information at a suitable level of detail).
  • Use tools and proper formats in your reports; imprecise metrics will damage your reputation as a tester (include metric definitions, labels, thresholds, and team, project, and date information in every metric).
  • Metric reports should include comments and interpretation. They must tell people what to do. Publishing numbers doesn’t mean anything unless you interpret them and draw conclusions from them.
  • Try to be 100 percent objective with your comments; people don’t like or trust subjective arguments.
  • Do not show only bad things; there must be good things in any project (test metrics are published not only for showing missing, lacking, and problematic areas; they are also for showing your confidence).
  • Try to correlate different metrics with each other (e.g., if invalid defects are numerous, it is logical to see high defect turnaround or fixing times because developers are struggling to understand defect reports rather than fixing them); a small correlation sketch follows this list.
  • Metrics should be accessible and visible any time. You need to make them available 24-7.
  • Do not wait to be 100 percent complete and perfect before publishing your results (whether you have waited for a long time or not, your metrics will not be perfect, so do not waste your time).
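
To make the correlation tip concrete, here is a minimal Python sketch that checks whether the invalid-defect ratio and the average defect turnaround time move together across test cycles. The per-cycle figures are invented for illustration, and the calculation is a plain Pearson correlation:

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical per-cycle metrics: invalid-defect ratio and average turnaround time (days).
invalid_defect_ratio = [0.05, 0.08, 0.15, 0.22, 0.30]
avg_turnaround_days = [1.8, 2.1, 2.9, 3.6, 4.2]

# A strong positive correlation supports the argument that unclear defect reports
# slow the fixing process down.
r = correlation(invalid_defect_ratio, avg_turnaround_days)
print(f"Pearson correlation: {r:.2f}")
```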

Now we turn to which metrics to collect. Of course, you may find many more than the ones listed here, but this is a starting point. I will organize metrics into three categories: (1) test resources, (2) test processes, and (3) defects.

  1. Test Resources Metrics

– Testing time (schedule)
– Testing budget (man-day effort)
– Testing resources (people involved)
– Percent effort per test phase (e.g., 10 percent unit, 20 percent integration, 50 percent system, 10 percent UAT, 10 percent other)
– Test efficiency (actual test effort/planned test effort)
– Test case generation efficiency (test cases generated/prepared within a period of time)
– Test case execution efficiency (test cases executed within a period of time)
– Total cost of quality ([detection effort + prevention effort + defect fixing] / total project effort)
– Test cost variance (earned value - actual cost)
– Test schedule variance (earned value - planned value); a short calculation sketch follows this list
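
As a rough illustration of how the effort and earned-value metrics above could be computed, here is a minimal Python sketch. Every figure in it is hypothetical, and the formulas simply follow the definitions in the list:

```python
# Hypothetical resource figures for a test phase, tracked in man-days and currency units.
planned_test_effort = 120.0   # man-days planned for testing
actual_test_effort = 150.0    # man-days actually spent
earned_value = 90_000.0       # value of test work completed so far
planned_value = 100_000.0     # value of test work scheduled by this date
actual_cost = 110_000.0       # cost of test work performed so far

# Test efficiency: actual test effort / planned test effort.
test_efficiency = actual_test_effort / planned_test_effort

# Earned-value variances as defined in the list above.
test_cost_variance = earned_value - actual_cost        # negative => over budget
test_schedule_variance = earned_value - planned_value  # negative => behind schedule

# Total cost of quality: (detection + prevention + defect fixing) / total project effort.
detection_effort, prevention_effort, fixing_effort = 80.0, 20.0, 40.0  # man-days
total_project_effort = 600.0
cost_of_quality = (detection_effort + prevention_effort + fixing_effort) / total_project_effort

print(f"Test efficiency:        {test_efficiency:.2f}")
print(f"Test cost variance:     {test_cost_variance:,.0f}")
print(f"Test schedule variance: {test_schedule_variance:,.0f}")
print(f"Total cost of quality:  {cost_of_quality:.0%}")
```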

  2. Test Process Metrics

– Total number of test cases
– Number of passed, failed, blocked, not run, inconclusive test cases
– Execution ratio ([number of passed + failed] / total number of test cases)
– Quality ratio (number of passed / number of executed)
– Requirements coverage
– Requirements volatility (updates per requirement within a given period of time)
– Test effectiveness / quality of testing (defects found during testing / total number of defects, including those found after release); a short calculation sketch follows this list
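
A minimal sketch of deriving the execution ratio, quality ratio, and test effectiveness from raw run counts, assuming you can export such counts from your test management tool; all numbers below are made up:

```python
# Hypothetical test-run counts for one test cycle.
counts = {"passed": 420, "failed": 35, "blocked": 15, "not_run": 25, "inconclusive": 5}
total_test_cases = sum(counts.values())

executed = counts["passed"] + counts["failed"]

# Execution ratio: (passed + failed) / total number of test cases.
execution_ratio = executed / total_test_cases

# Quality ratio: passed / executed.
quality_ratio = counts["passed"] / executed if executed else 0.0

# Test effectiveness: defects found during testing / total defects (testing + production).
defects_found_in_testing = 180
defects_found_in_production = 20
test_effectiveness = defects_found_in_testing / (defects_found_in_testing + defects_found_in_production)

print(f"Execution ratio:    {execution_ratio:.0%}")
print(f"Quality ratio:      {quality_ratio:.0%}")
print(f"Test effectiveness: {test_effectiveness:.0%}")
```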

  3. Defect Metrics

– Number of defects found
– Number of open/closed defects
– Defects by platform (iOS, Android, BlackBerry, etc.)
– Defects by display density/size
– Defects by priority (business impact of a defect)
– Defects by severity (system impact/fix cost of a defect)
– Defects by root cause, defect taxonomy (missing requirement, invalid data, coding error, configuration issue, etc.)
– Defects by test phase (unit, integration, system, UAT, etc.)
– Defects by state (open, closed, in progress, etc.)
– Defect clustering (where do defects concentrate most?)
– Defect turnaround time (time spent for fixing a defect)
– Defect rejection ratio (ratio of invalid defects)
– Defect fix rejection ratio (ratio of fixes rejected during retest, i.e., reopened defects)
– Defect per requirement
– Defect per developer day/line of code
– Defect finding rate (how many defects are found within a given period); a sketch computing a few of these defect metrics follows this list
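
Here is a small sketch of how a few of these defect metrics could be pulled out of a simple defect list, assuming each exported record carries a platform, a state, a validity flag, and open/close dates; the records themselves are invented:

```python
from collections import Counter
from datetime import datetime

# Hypothetical defect records exported from a defect tracker.
defects = [
    {"id": 1, "platform": "iOS",     "state": "closed", "valid": True,
     "opened": datetime(2024, 3, 1), "closed": datetime(2024, 3, 4)},
    {"id": 2, "platform": "Android", "state": "closed", "valid": True,
     "opened": datetime(2024, 3, 2), "closed": datetime(2024, 3, 3)},
    {"id": 3, "platform": "Android", "state": "open",   "valid": True,
     "opened": datetime(2024, 3, 5), "closed": None},
    {"id": 4, "platform": "iOS",     "state": "closed", "valid": False,
     "opened": datetime(2024, 3, 6), "closed": datetime(2024, 3, 6)},
]

# Defects by platform and by state (a simple clustering view).
by_platform = Counter(d["platform"] for d in defects)
by_state = Counter(d["state"] for d in defects)

# Defect rejection ratio: invalid defects / all reported defects.
rejection_ratio = sum(1 for d in defects if not d["valid"]) / len(defects)

# Defect turnaround time: average days from open to close for valid, closed defects.
closed_valid = [d for d in defects if d["valid"] and d["closed"] is not None]
avg_turnaround_days = sum((d["closed"] - d["opened"]).days for d in closed_valid) / len(closed_valid)

print("Defects by platform:", dict(by_platform))
print("Defects by state:   ", dict(by_state))
print(f"Defect rejection ratio: {rejection_ratio:.0%}")
print(f"Average turnaround:     {avg_turnaround_days:.1f} days")
```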

As testers are evaluators, they need to believe that there is as much value in measurement as in the metrics themselves. Metrics bring numerous benefits to all project stakeholders. They make the test process transparent, visible, countable, and controllable, and in doing so allow you to repair your weak areas and manage your team more effectively.

If you say that a tester never assumes, you need to back this idea with your belief in metrics. If you do not have metrics, you are obliged to act with assumptions rather than proven realities. So you need to make a choice: Do you want to assume, or do you want to be sure?