Cross-Browser Testing: A Practical How-To Guide
There’s an old internet rumor that the total cost of supporting Internet Explorer 6 would have handily covered the cost of a manned expedition to Mars.
I had a quick look in our Google Analytics account, and I’m happy to say that over the last year only 12 people visited Planio using Internet Explorer 6.
I imagine the internet must be a strange and terrifying place for those 12 people.
Today, self-updating “evergreen” browsers have made cross-browser compatibility easier in some respects. As a result, many developers follow a rule of supporting “the last 2 versions” of each evergreen browser.
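A “last 2 versions” policy can even be written down where your build tooling can read it. For example, Browserslist (used by tools such as Autoprefixer) accepts queries like these in a `.browserslistrc` file — a sketch, assuming you use such tooling:

```
# .browserslistrc — encodes a "last 2 versions" support policy
last 2 versions
not dead
```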
Mobile is the New Cross-Browser
In parallel, however, there’s been an explosion of different mobile devices. Mobile means you have a maelstrom of different screen resolutions, pixel densities and browsers on countless variations of Android, iOS and more.
Therefore, making sure your product works for all of your customers has, if anything, gotten harder, and it remains a huge challenge.
Most developers build for whatever browser they currently use (probably Chrome, Firefox or Safari) and then handle cross-browser and cross-device issues during a pre-shipping test phase.
That means you’ll have a few frantic days before each new release testing for cross-browser compatibility issues.
We’ll dig into how you can create a plan for testing cross-browser and cross-device issues in a systematic, rigorous way.
Decide the Bounds Within Which You’ll Test
As I mentioned above, Google Analytics will give you data on the browsers, devices and screen sizes your visitors are using. You can also drill down into specific browser versions.
At this point, there are two ways to approach the issue of cross-browser compatibility: graceful degradation or progressive enhancement.
With graceful degradation, you start with a certain level of user experience on the latest browsers, and then you degrade gracefully to a lower level for legacy browsers. Put another way, you start with a complex system and work backwards to accommodate legacy browsers. With progressive enhancement, you go the other way around: you start with a baseline experience that works everywhere and layer richer features on top for the browsers that can handle them.
From a “good practices” point of view, progressive enhancement is the way to go. You’ll have less technical debt. As a by-product, you’ll have a website that works on more devices.
The price is that you’ll have to put in the work upfront.
In any case, you can use the Google Analytics data to make an informed decision about which devices will get the full experience, which browsers will get an enhanced or degraded experience, and which browsers will not be supported at all.
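One way to capture such a decision in code is a small tier function. This is only a sketch — the browser names and version cut-offs below are made-up examples, not recommendations; yours should come from your own analytics:

```javascript
// Map a browser name and version to a support tier.
// The cut-off versions here are hypothetical placeholders.
function supportTier(browser, version) {
  var fullExperience = { chrome: 49, firefox: 45, safari: 9, edge: 13 };
  var minimum = fullExperience[browser.toLowerCase()];
  if (minimum === undefined) return 'unsupported';
  return version >= minimum ? 'full' : 'degraded';
}
```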
Ideally, you’ll make this decision at the outset, so you’ll know what technology choices you can use.
For example, supporting Internet Explorer 8 means you can’t use SVGs without a fallback. Every image asset shipped as SVG then needs a corresponding PNG or JPEG file, resulting in additional complexity.
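A common pattern is to name each PNG fallback after its SVG and swap sources when SVG isn’t supported. Here’s a minimal sketch; the feature-detection call only runs in a browser, so it’s shown in comments:

```javascript
// Derive the PNG fallback path for an SVG asset,
// e.g. "logo.svg" -> "logo.png".
function fallbackSrc(svgPath) {
  return svgPath.replace(/\.svg$/i, '.png');
}

// In the browser, you might swap sources for legacy browsers like so
// (detection approach varies; this mirrors the common Modernizr-style check):
// if (!document.implementation.hasFeature('http://www.w3.org/TR/SVG11/feature#Image', '1.1')) {
//   [].forEach.call(document.querySelectorAll('img[src$=".svg"]'), function (img) {
//     img.src = fallbackSrc(img.src);
//   });
// }
```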
You can use CanIUse.com to find out whether a particular browser will support a particular technology, and you can use Global Stats to find out what browsers people tend to use in different countries.
In situations where you’ll be building an app for internal purposes or just a small group of people, you might even be able to decide to support a particular browser on a particular operating system.
You probably can’t make that choice if you’re building a website for a broad public audience.
So, at this point you’ve set out the plan for which browsers, version and devices you’ll test for. You’ve chosen either progressive enhancement or graceful degradation.
Now it’s time to find some bugs.
Manual Testing vs. Automated Testing
Manual testing means you look at your app or website and check whether it behaves as it should. When you see a bug, you make a note to fix it.
You can also run automated tests using something like Selenium or Casper.js. For example, Felix Gliesche, our front-end maestro from Hamburg, used Casper.js to automate taking screenshots of common pages within the Planio app. That let him quickly spot the pages that still had problems.
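A screenshot run along those lines might look like this sketch. The page paths, base URL and viewport are placeholder assumptions, not Planio’s actual setup, and the Casper.js calls (which need the `casperjs` runtime) are shown in comments:

```javascript
// Pages to screenshot (hypothetical paths).
var pages = ['/login', '/projects', '/issues'];

// Build an output filename for each page,
// e.g. "/projects/1" -> "shots/projects_1.png".
function shotName(path) {
  return 'shots/' + path.replace(/^\//, '').replace(/\//g, '_') + '.png';
}

// With the casperjs runtime, the capture loop would look like:
// var casper = require('casper').create({
//   viewportSize: { width: 1280, height: 800 }
// });
// casper.start();
// pages.forEach(function (path) {
//   casper.thenOpen('https://example.com' + path, function () {
//     this.capture(shotName(path));
//   });
// });
// casper.run();
```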
Automated testing doesn’t work so well for subjective issues and it won’t replace manual testing for cross-browser compatibility.
How to Find All These Bugs
You can start by looking at your website on browsers other than the one you used to develop the site.
The next step is to use services such as BrowserStack or Browsershots. The advantage with a service such as BrowserStack is that you have remote access to a virtual machine running any given operating system, browser and browser version.
For example, you can look at your site on Internet Explorer 8 installed on Windows XP. Because you have access to a virtual machine, you can access the developer tools to inspect what’s going on.
Another similar service is CrossBrowserTesting.com.
Have You Heard About Open Device Labs?
BrowserStack and co are great for testing websites on different browsers. However, particularly with mobile, you’ll get a better idea of how your product “feels” by using it on an actual device.
One such initiative is Open Device Labs. You can go to any of the labs and try out your product on any of the devices they have on hand, for free.
Felix started the first open device lab in Germany. As you can see from his website, he has over 30 different devices.
How to Keep Track of All the Issues You Find
Finding, fixing and testing issues even on a simple website gets overwhelming, fast. Before you know it, your desk will be covered with sticky notes, urgent lists and super urgent lists of bugs.
That’s where an issue tracker comes in. An issue tracker is essential in modern, efficient software development, and it will make your life much easier for cross-browser testing.
An issue tracker is different from a to-do list, because it focuses on accountability, progress and reviewing what was done.
A quick note: Planio is really, really great at issue tracking.
We’ll look into how you can set up Planio for optimal issue tracking. Here are the steps:
- Create a tracker for bugs in Planio.
- Create the statuses for the lifecycle of a bug: New, In Progress, Fixed, Tested, Closed.
- Optional: create custom fields for device, OS, screen resolution, pixel density and browser so you can filter bugs based on those values.
- Start looking for bugs.
- Create an issue for each bug you find.
- Fix the bugs.
- Have someone else test whether the fixes worked.
- Close the issues.
- Celebrate tiny wins on the road towards world domination.
Get Everyone to Pitch in On Bug Hunting
Imagine you’re building a website for a client. You’ve built it to their specifications, and it’s ready to use.
Before you start using it, you deploy the new website to a staging server.
Now, your client can start playing around with their new site. Every time they see something that they don’t think is right, they can file a bug report in your issue tracker. You can determine whether it’s actually a bug, attempt to reproduce it and then fix it, as required.
You need to get high quality bug reports, so you can actually fix the problem.
The three core elements of a good bug report are:
- the steps the user took preceding the bug;
- the result that the user got;
- the result that the user expected.
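A simple template in your tracker’s issue description can prompt reporters for all three. The field names below are just a suggestion:

```
Steps to reproduce:
1.
2.
Actual result:
Expected result:
Browser / version:
OS / device:
```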
Screenshots of bugs are a very useful addition to bug reports. You can use something like Monosnap to take quick annotated screenshots and then upload them as attachments in Planio.
So, you’ve got a framework for finding, identifying and triaging cross-browser issues. Once they’re in an issue tracker, you can work with a team of people to fix them, rather than going it alone.
And you’ll have a much better product to show for it, without all the last-minute stress of endless to-do lists sprawled over your desk.