In 1962, NASA launched Mariner 1, its first attempt to send a spacecraft to Venus. However, shortly after liftoff, the rocket veered off course and was forced to self-destruct.
The cost? $135 million (in today’s dollars).
The issue? A missing hyphen in the code.
You might think a software failure won’t have such serious consequences for your company. However, a 2017 study found that software failures cost the U.S. economy $1.7 trillion in financial losses (and more than 268 years of downtime) that could’ve been avoided with proper testing.
Before any piece of software or new feature goes out to your users, you need to thoroughly put it through its paces. Test it. Try to break it. And make sure that whatever your users do, it responds as designed.
In short, you need a test plan.
A test plan is one of the most important parts of any software development process. It outlines how you’ll make sure your product or feature will do what it’s supposed to and not break when your users need it most.
But what should your test plan include? How deep do you really need to go to ensure your product holds up and your users get what they expect?
This guide will cover everything you need to know about defining and documenting your test plan and choosing the right test strategies that will ensure your users, development team, and stakeholders are all happy.
Ready to write your own test plan? Download our free test plan template and follow along with the post.
What is a test plan (and why do you need one)?
A test plan is a detailed document that outlines the test strategy, objectives, resources needed, schedule, and success criteria for testing a specific new feature or piece of software.
The main goal, of course, is to discover defects, errors, and any other gap that might cause the software to not act as intended or provide a bad experience for your users. More specifically, a test plan ensures your software:
- Meets the requirements that guided its design and development (In other words, does it do what it’s supposed to do when it’s supposed to do it?)
- Responds correctly to all kinds of inputs
- Lives up to the performance standards you’ve outlined and can be used as intended
- Can be installed and run in all intended environments
- Achieves the results you and your stakeholders are after
While these sound like pretty straightforward criteria, in practice they rarely are. The problem is that a “test” always means testing against something. In most cases, that something is the specifications and success criteria you put down in your SOW or planning doc, but it could also include comparable products, past versions, user expectations, standards, or laws.
It’s nearly impossible to test every scenario, environment, or use case your software will encounter during its lifecycle. Instead, software errors, bugs, and defects could pop up from:
- Coding errors - i.e. bugs
- Requirement gaps - i.e. unrecognized or overlooked requirements such as edge cases, scalability, or even security
- Environment changes - i.e. new software, new hardware, or alterations in source data
This makes writing a clear, yet comprehensive test plan a difficult balance. You want to include as much detail as possible to make sure you’re not missing any glaring errors. But you also don’t want to drown your team in testing tasks, delay your release, or add in new errors from your “fixes.”
What should be included in your test plan template?
So what should go in your test plan then? Well, that depends.
Each product and feature will have its own specific testing criteria, strategies, and needs. Also, the goal of your test will change how you approach it. For example, User Acceptance Testing (UAT) is completely different from stress and load testing and your plan will need to be tailored to your ultimate goal.
However, this doesn’t mean you want to start from scratch each time you’re testing a new piece of software. Creating different test plan templates for different products is a great way to quickly guide your approach to testing new product releases, updates, and features.
So what should (or could) you include? Broadly speaking, there are a few main areas that will act as the foundation of your test plan document:
1. Coverage: What exactly are you testing?
As we said before, creating a test plan is all about balance. You want to be comprehensive, but not overwhelming, which means getting specific about what will (and won’t) be included in the test plan.
After a brief introduction highlighting your test plan objectives, high-level scope, and schedule, you need to define what you will and won’t be testing.
This is your test scope, and it can quickly get out of hand if you don’t take the time to be specific about both what you will test and why you’re testing it.
- What tests are you going to undertake?
- Why have you chosen these ones (and not others)?
Everyone needs to be on the same page with the test criteria and scope. As a best practice, make sure you use industry-standard or at least agreed-upon standards and terminology to describe your tests and why they were (or weren’t) completed. This way, there’s no grey area or confusion about what you tested.
2. Methods: How are you going to carry out these tests?
Next, you need to clearly explain what your test strategy is. Go into as much detail as possible.
- What rules will your tests follow?
- What metrics are you going to collect and at what level?
- How many different configurations or environments are you going to test?
- Are there any special requirements or procedures you need to test?
You also need to know when your test has been successful. In other words, what are the pass/fail criteria for each test?
Pass/fail isn’t the only criterion you need to define, though. There are a few other common situations you need to outline in your test plan, including:
- Exit criteria. When is it OK to stop testing a feature and assume that the feature is “successful” in doing what it set out to do?
- Suspension criteria. When should you pause a test? Is there a threshold of errors where you should stop testing and start looking for solutions? What are the steps for closing it off and documenting what’s been done so far?
- Resumption requirements. How do you know when to resume a paused test? What are the steps for reviewing what’s been done and picking up?
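These criteria can be sketched as simple checks in code. The snippet below is a hypothetical illustration, not part of any standard testing tool: the `TestResult` structure, function names, and threshold values are all assumptions you would replace with your plan’s own numbers.

```python
# A minimal sketch of pass-rate (exit) and suspension criteria as code.
# All names and thresholds here are hypothetical; adapt them to your plan.
from dataclasses import dataclass


@dataclass
class TestResult:
    name: str
    passed: bool


def should_suspend(results, max_failures=5):
    """Suspension criteria: pause the run once too many tests have failed."""
    failures = sum(1 for r in results if not r.passed)
    return failures >= max_failures


def exit_criteria_met(results, required_pass_rate=0.95):
    """Exit criteria: stop testing once the pass rate is 'good enough'."""
    if not results:
        return False
    pass_rate = sum(1 for r in results if r.passed) / len(results)
    return pass_rate >= required_pass_rate
```

Writing the thresholds down as explicit numbers, rather than leaving them as judgment calls, is what keeps “when do we stop?” from becoming a debate in the middle of a test run.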
It’s also a good idea at this point to list your assumptions and risks. In other words, what are you assuming is going to happen and what are some of the risks you’re going to face during the test?
Lastly, you need to outline your test project’s resource needs and schedule. Who is in charge of testing and what resources do they need (both technical and human)? When is the testing going to take place and for how long?
3. Responsibilities: What are your desired outcomes?
What are your required test deliverables? This means the data you want to collect, how you’re going to compile them in reports, and the issues and tasks that will be passed back to the development team.
To make sure nothing gets missed, each test deliverable should be assigned to a specific person on your team in a section on roles and responsibilities.
It’s important to remember that this is just a basic framework of what to include in a test plan. Over time, you’ll create your own library of test plan templates that will serve as guides for new product releases, updates, and features.
Download our free Test Plan Template to get started.
5 steps to create (and execute) a test plan for your new product or feature
Now that we have a high-level idea of what to include in our test plan template, let’s dig into the specifics. To make sure your testing scope doesn’t spiral out of control, it’s important to have a step-by-step process for creating your test plan and executing it properly.
Here’s where you should start:
1. Analyze the product or feature you’re testing
You need to have a deep understanding of the product or feature before you can start creating a test plan for it. For example, let’s say you’ve just gone through a website redesign and want to test it before launch. What information do you need?
- Talk with the designer and developer to understand the scope, objectives, and functionality of the site.
- Review the project documentation (such as your SOW, project proposal, or even the tasks in your project management tool).
- Perform a product walkthrough to understand the functionality, user flow, and limitations.
This step is what gives you the context to write your test plan introduction and objectives and start to plan out the resources you’ll need to complete it.
Truly understanding your product is the first step to creating a productive and successful test plan.
2. Design the test strategies (and approach) you’re going to use
Next, it’s time to decide the scope of your test plan. What’s included in the scope of your testing will depend on a number of factors beyond just the product or feature. You need to dig in and think about:
- Customer requirement: What are your users going to use most?
- Budget and timeline: How much time and resources do you have to complete testing?
- Product specs: What are the most important parts of this feature that need to be tested?
- Team abilities: Do you have the technical expertise you need to complete each test?
For our website redesign example, we might say that functionality, UX, and checkout flow are in scope, while stress, performance, and database testing are out of scope.
You might also want to think of this in terms of commonly used testing approaches, such as:
- Unit testing: Test the smallest piece of software or a specific feature.
- API testing: Test the API created for the application in multiple scenarios.
- Integration testing: Test multiple software modules or features as a group.
- System testing: Test the entire integrated system against its requirements.
- Install testing: Test the install/uninstall process your customers will go through.
- Compatibility testing: Test your software on different hardware, operating systems, and environments.
- Load and stress testing: Test your software performance as the workload increases (or goes beyond normal conditions).
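To make the first of these approaches concrete, here’s what a unit test might look like for the checkout example. The `calculate_total` function is a hypothetical stand-in for your real checkout code; the tests are written in pytest style (plain functions with `assert` statements):

```python
# A minimal unit-test sketch in pytest style.
# calculate_total is a hypothetical stand-in for your real checkout code.

def calculate_total(prices, discount=0.0):
    """Sum item prices and apply a fractional discount (0.25 = 25% off)."""
    if not 0.0 <= discount <= 1.0:
        raise ValueError("discount must be between 0 and 1")
    return round(sum(prices) * (1 - discount), 2)


def test_simple_total():
    assert calculate_total([10.0, 5.0]) == 15.0


def test_discount_applied():
    assert calculate_total([100.0], discount=0.25) == 75.0


def test_empty_cart_is_zero():
    assert calculate_total([]) == 0.0
```

Each test checks one small, specific behavior, which is exactly what makes unit tests the “smallest piece” of your testing pyramid.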
Deciding what to test and documenting your test strategy are the most critical parts of your test plan. Don’t rush through it. Take the time to really understand your goals and needs and balance them against the resources you have for testing.
3. Define the test objectives and pass/fail criteria
As you define each different test you’re going to run, you need to know when your test is “done.” This means defining the pass and fail criteria for each specific test, as well as some of the things we mentioned above, such as exit and suspension criteria.
To do this, you’ll want to identify individual system metrics that you’re checking and decide what success means for each one. For example, if you were doing a performance test you might look at metrics such as:
- Response time: Total time to send a request and get a response.
- Wait time: How long it takes to receive the first byte after a request is sent.
- Average load time: The average time it takes to deliver a request.
- Peak response time: The longest amount of time it takes to fulfill a request.
- Requests per second: How many requests can be handled.
- Transactions passed/failed: The total number of successful or unsuccessful requests.
- Memory utilization: How much memory is needed to process the request.
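Several of these metrics are just simple arithmetic over raw timings. As a sketch, here’s how you might compute them from sample data (the timings below are made up; in practice they would come from your load-testing tool):

```python
# Computing basic performance metrics from raw response timings.
# The sample data is invented for illustration.
response_times_ms = [120, 95, 210, 150, 98, 300, 110]

average_load_time = sum(response_times_ms) / len(response_times_ms)
peak_response_time = max(response_times_ms)
# Rough single-connection throughput estimate based on the average.
requests_per_second = 1000 / average_load_time

print(f"average: {average_load_time:.1f} ms")
print(f"peak: {peak_response_time} ms")
print(f"throughput: {requests_per_second:.1f} req/s")
```

Your pass/fail criteria would then be thresholds on these numbers, e.g. “peak response time under 500 ms.”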
Remember, you can continue testing and iterating forever. So you need to decide what’s “good enough” to get your software out and in the hands of users.
4. Plan the test environment
The results of your test plan will depend as much on the feature you’re testing as the environment you’re testing it in. As part of the scope, you need to determine what hardware, software, operating system, and device combinations you’re going to test.
This is a situation where it pays to be specific. For example, if you’re going to specify an operating system to be used during the test plan, mention the OS edition/version as well, not just the name.
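One practical way to be that specific is to write your environment matrix down as data. In this sketch, the OS and browser names/versions are purely illustrative; list the exact editions you actually support:

```python
# Enumerating a test-environment matrix. The OS and browser versions
# here are illustrative examples, not recommendations.
from itertools import product

operating_systems = ["Windows 11 23H2", "macOS 14.5", "Ubuntu 22.04 LTS"]
browsers = ["Chrome 126", "Firefox 127", "Safari 17"]

# Build all OS/browser combinations, skipping impossible ones
# (Safari only runs on macOS).
matrix = [
    (os_name, browser)
    for os_name, browser in product(operating_systems, browsers)
    if not (browser.startswith("Safari") and not os_name.startswith("macOS"))
]

for os_name, browser in matrix:
    print(f"test on {os_name} / {browser}")
```

A matrix like this makes coverage gaps visible at a glance and gives you a checklist to track execution against.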
5. Execute your test plan and track progress in your project management tool
Once your test plan is in place, there’s a specific process you need to follow. Think of this as the Software Testing Life Cycle (STLC). Similar to the Software Development Life Cycle, the STLC follows each phase of testing and usually looks something like this:
- Requirements/Design review
- Test planning
- Test designing
- Test environment setup
- Test execution
- Test reporting
This is roughly the path that we’ve described so far. But what about actually executing on your test plan and tracking/reporting the results?
Using a tool like Planio, it’s easy to set up and track any number of testing scenarios.
Planio’s customizable trackers and workflows can be used to track and create issues or repeatable tasks related to each test. Each issue associated with a tracker has a fixed selection of statuses for the test phases to work through.
Using a workflow like this, you can identify when a test has failed, decide what to do next, and prevent the issue from being closed. Even better, once you’ve created these workflows, they can be repeated in Planio every time you need to test a new feature or piece of software.
For example, let’s say you’re going to test the checkout functionality of your new e-commerce software. Using Planio, you could:
- Set up a tracker for “Software Test”
- Create custom statuses to ensure each test step is completed
- Create an issue for “Checkout Functionality” to be tested
- Assign the issue to a specific person on your testing team
- Track the progress
Want to learn more about Planio’s powerful workflows? Check out our post on How to create a project workflow in Planio that will save your team hours a week.
Why you should start testing as early as possible during your development process
If you’ve read our Guide on Software Development Processes, you know that when you test features, products, and specific code depends on the development style you’re using. The biggest difference is probably between traditional (Waterfall) and Agile development.
In traditional software development (aka Waterfall), testing starts once development is complete. In Agile, by contrast, requirements, development, and testing all happen simultaneously as you build new pieces of usable software.
So which is right? And when should you start testing?
While there are debates on both sides, it’s probably safe to say that the earlier you start testing the better.
In fact, IBM commissioned a report almost a decade ago that discovered that the further into the Software Development Life Cycle that a bug or issue is discovered, the more expensive it is to fix.
In other words, if you leave your testing until just before release, you’re adding a huge strain on your resources to deal with it. Even worse, the cost of a bug found after release is nearly 30 times more expensive than if it was found during the design phase.
Testing is an iterative process. When one bug is found and fixed, it can illuminate other, deeper bugs or even create new ones. The earlier you can start dealing with those, the less of an impact testing will have on your launch and go-to-market strategy.
Don’t treat your test plan as an afterthought
Testing isn’t just another thing to check off your list. It’s an important and potentially project-altering phase that needs to be carefully thought through and planned.
Your test plan will guide you through that process from start to finish, helping you understand the objectives, define the scope of testing, create pass/fail criteria, document your process, and deliver the documents and artifacts you need to make your product or feature the best it can be.