Thomas Carney
Thomas thinks of ways to make Planio grow, writes content and ponders metrics.
January 09, 2017 · 6 min read

QA in Software Engineering

High score in Tetris (“tetris” by Nicolas Esposito, CC BY 2.0)

When software crashes, you are increasingly likely to lose more than your high score in Tetris.

Our cars, our airplanes, and our health care systems all rely more and more on software. As Marc Andreessen says, software is eating the world: almost every industry is being reimagined through software.

All of this is to say that the quality of that software is really important. Let’s dive into a landscape overview of quality in software.

  1. The Two Aspects of Software Quality
  2. Testing Software
  3. The Distinction Between Checking and Testing
  4. The QA Function in Software Teams
  5. A Process for Tracking Bugs
  6. The End

The Two Aspects of Software Quality

functional versus structural quality

When we’re talking about the quality of software, we’re talking about how good it is. Broadly speaking, there are two ways in which software is good.

The first is functional quality: “how well does this software do the job it’s supposed to do?” Will it crash if I try to import a CSV file? Can the user do the tasks that she wants to do with it without swearing and threatening the computer?

The second is structural quality: “how well is this software engineered?” Is it a pile of spaghetti code that will drive future developers to desperation? Or is it written in modular code that can be easily extended and modified later? Your users may not directly see the structural quality, but it still impacts how quickly you can develop new features and fix bugs that arise. You can sacrifice structural quality for speed, but doing so excessively will lead to technical debt.

Testing Software

Functional testing involves checking whether the software works as expected. For example, you might log into a microblogging app and try to create a post with no content, expecting to receive an error message saying that you can’t do that.

Every time you add new code to your app, you run the risk of breaking the existing functionality (such breakages are called regressions).

Ideally, you’d check whether the existing functionality still works after each code change. Doing that by hand, however, quickly becomes tedious.

That’s where the power of automation comes to your rescue. You can write a script that automatically checks the functionality you want to verify. Here’s an example of a test for a Ruby on Rails microblogging app using the RSpec testing framework:

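What follows is a minimal sketch of such a test; the User model, form labels, and routes are assumptions, so adapt the names to your own app.

    # spec/features/micropost_spec.rb
    require "rails_helper"

    RSpec.feature "Creating a micropost" do
      scenario "shows an error for a post with empty content" do
        # 1. Create a user (assumes a standard User model)
        user = User.create!(name: "Jane", email: "jane@example.com",
                            password: "secret123")

        # 2. Sign the user into the app using the Capybara web driver
        visit "/login"
        fill_in "Email", with: user.email
        fill_in "Password", with: "secret123"
        click_button "Log in"

        # 3. Attempt to create a micro post with empty content
        visit "/microposts/new"   # assumed route for the post form
        fill_in "Content", with: ""
        click_button "Post"

        # 4. Check whether this results in an error
        expect(page).to have_content("Content can't be blank")
      end
    end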

In the sketch above, the test:

  1. creates a user;
  2. signs the user into the app using the Capybara web driver;
  3. attempts to create a micro post with empty content; and
  4. checks whether this results in an error.

The fundamental idea is that you can run this test repeatedly during your development process, and the test results will show you whether you’ve broken anything.

If this test passes, you can be sure that creating an empty post will result in an error message.

This saves you from opening up the app in your browser, logging in, and clicking through all the screens to see whether you get an error message when you create an empty post.

At the same time, you should still go through the app on a regular basis, thinking about whether the app meets the quality requirements you had in mind. However, you’ll be spared the boring task of checking whether every little piece of functionality still works for the 15th time.

The reason is that there is a subtle difference between checking whether everything works as you think it works and testing whether the product meets your required standard.

There is a difference between checking whether the product works as expected & testing whether it meets standards

The Distinction Between Checking and Testing

Checking versus Testing in QA

Michael Bolton draws this distinction between checking and testing in software development.

Checking is the process of verifying that everything works the way you think it works. For example, you introduce a new feature in your software, and you go back and confirm that the rest of the software still runs as intended.

“Checking”, Michael writes, “is focused on making sure the program doesn’t fail”.

The result is that checking can be automated: you can write code that makes assertions about your code, and you can run those checks to see whether the assertions still hold up or not.
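A minimal sketch of such a check, assuming a hypothetical Micropost model that validates the presence of content:

    require "rails_helper"

    # A scripted check: it asserts behaviour we already expect,
    # and fails loudly if a later change breaks that expectation.
    RSpec.describe Micropost, type: :model do
      it "is invalid without content" do
        expect(Micropost.new(content: "")).not_to be_valid
      end
    end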

In Michael’s definition, testing is an exploratory process in which you probe the limits of the product at hand. Michael comments that when you are testing, you are “largely driven by questions that haven’t been answered or even asked before”.

When you are testing you are largely driven by questions that haven’t been answered or even asked before.

You’re using your knowledge and experience to make judgment calls on whether something might be a problem. You can’t automate the process of exploring your software because by definition you’re uncovering new information.

You’re asking “what if?” questions about your software that you haven’t answered before.

The QA Function in Software Teams

Not every organization takes the same approach to quality in software. Some see it as a functional role owned by a team in the company. Others see it as a role in a cross-functional team, and yet others see quality as something every person working on the product should work towards.

Approach #1: the QA Team

Some organizations have a separate QA team that tests and analyzes the quality of the software. In this case, a developer develops a feature, and then it’s handed over to the QA team for assessment of its quality.

The flaw in this approach is that you can end up with an inflexible release schedule, an adversarial relationship between the product team and the QA team, and perhaps even an attitude of developing features to the minimum required standard and letting the QA team figure out the problems.

Approach #2: the QA Champion

Some teams have moved on to having a QA person in the cross-functional product team. That person champions QA in the team and might also take the lead on writing automated tests. The aim is to move away from an adversarial situation, where testers point out the problems in the developers’ work, and toward working together on producing higher-quality software.

Approach #3: “No QA”

Finally, some teams have taken this approach a step further. They’ve adopted a “No QA” approach without any dedicated QA role in the team. Every developer owns the QA of their work: they write their own tests, think up edge cases and ship features that meet quality standards. Steve Wells comments that having dedicated QA roles allows the rest of the team to abdicate responsibility for quality.

Some teams also advocate a “test-driven” approach to software development. In this model, you write the assertions the software should pass before you write the software itself. You then write the software to pass the tests, and finally you refactor the software (improve the code quality), all while keeping the tests passing.
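A rough sketch of that red-green-refactor cycle, assuming a hypothetical Micropost model with a 140-character limit:

    # Step 1 (red): write the assertion before the behaviour exists.
    RSpec.describe Micropost, type: :model do
      it "rejects content longer than 140 characters" do
        expect(Micropost.new(content: "a" * 141)).not_to be_valid
      end
    end

    # Step 2 (green): write just enough code to make the test pass.
    class Micropost < ApplicationRecord
      validates :content, presence: true, length: { maximum: 140 }
    end

    # Step 3 (refactor): improve the code's structure, re-running
    # the test after each change to make sure it still passes.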

A Process for Tracking Bugs

Regardless of how you approach quality in your software development process, you’ll need a process for tracking bugs or issues. Let’s work through such a process using Planio, which is perfect for this use case.

Capture Information about the Bug

First, you create an issue in Planio reporting the bug. Joel Spolsky argues that a good bug report needs three things:

  1. Steps to reproduce the bug;
  2. What you expected to see; and
  3. What you saw instead.

Three elements of a bug report

The reason for the last two is that it’s not always obvious that an outcome is a bug. The classic response to an irate middle manager filing a bug is, “That’s a feature, not a bug”. By understanding what the bug reporter expected to see, you can determine whether the software is actually defective.

You might also want to gather information such as the browser, the operating system, the screen size or the device.
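You can also file such a report programmatically through Planio’s Redmine-compatible REST API. Here is a minimal sketch in Ruby; the account URL, project identifier, tracker ID, and API key are placeholders for your own values:

    require "net/http"
    require "json"
    require "uri"

    uri = URI("https://yourcompany.plan.io/issues.json")  # your Planio URL

    issue = {
      issue: {
        project_id:  "microblog",  # project identifier (placeholder)
        tracker_id:  1,            # e.g. the "Bug" tracker (placeholder)
        subject:     "No error shown when posting an empty micropost",
        description: "Steps to reproduce: ...\n" \
                     "What I expected to see: ...\n" \
                     "What I saw instead: ...\n" \
                     "Environment: Firefox 50, macOS 10.12, 13\" screen"
      }
    }

    request = Net::HTTP::Post.new(uri)
    request["Content-Type"]      = "application/json"
    request["X-Redmine-API-Key"] = ENV.fetch("PLANIO_API_KEY")
    request.body = issue.to_json

    response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
      http.request(request)
    end
    puts response.code  # "201" when the issue was created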

Prioritize the Bug


Is the padding off on one of your app screens? Or perhaps the security of your users’ data is at risk? One of these bugs is obviously more important than the other. You might even have a service level agreement that means you have to address certain categories of bugs within 24 hours, for example.

Communicate & Assign

Assign the issue

Based on the priority, you’ll have to get one or more people to look into fixing it. This means assigning the issue to someone and keeping the reporter updated on the status of the bug. People hate submitting a bug and never hearing another word, even if you fix it.

Reproduce The Bug

It’s hard to fix a problem you can’t see. Therefore, the first step toward fixing a bug is to reproduce it. If the bug can’t be reproduced, you’ll have to assign it back to the reporter with the status Cannot Reproduce.

Review and Deploy

It is good practice to have at least one other pair of eyes look over a bug fix; there’s no point in solving one bug only to create two others. So, before deploying the fix, assign it to someone else for review.

You can set up custom statuses in Planio for bug tracking. For instance, they could be:

  1. New
  2. Verified
  3. Cannot Reproduce
  4. In Progress
  5. Testing
  6. Fixed
  7. Unresolved

Custom Statuses for Bugs

A bug can then cycle through these statuses, ending up at Fixed or Unresolved depending on the outcome.

The End

That wraps up this article on quality in software. You should also check out these articles:

  1. Cross-Browser Testing: A Practical How-To Guide
  2. Principles and Processes for Smooth Issue Tracking
  3. Issue Tracking in Planio