Software Testing: The Mozilla Approach
Ehsan Akhgari
Software Testing Course
Sharif University of Technology
Mozilla Project – Introduction
- Started in 1998 using the code base from Netscape
- Produces software such as Firefox, Thunderbird, Sunbird, SeaMonkey, Bugzilla, etc.
- Our talk focuses on Firefox
- Large and complex code base (~6.6 MLOC)
- Open source, developed by a global community
- My Involvement in the Mozilla Project:
- Since August 2006
- User interface development
- Some back-end development
- Persian translation
- ...
Mozilla Platform Overview
- XPCOM: Cross-platform Component Object Model
- XPIDL for defining interfaces
- Supports implementing components in many languages (primarily C++ and JavaScript)
- Language bindings (XPConnect for JavaScript)
- Utility components (strings, hash-tables, arrays, etc.)
- xpcshell (a shell running JavaScript programs accessing XPCOM components)
- JavaScript: used to provide scripting capabilities both to web content and to the browser itself
- XUL: XML User Interface Language
- Build system: based on GNU autoconf and GNU make
- Revision control system: Originally using CVS, now using Mercurial
What do we test?
- The general rule is: test every aspect of the software being produced
- Is it practical?
- With the right tools, methodologies, and mindset, yes!
- There are a couple of key concepts here:
- Use as much automation as possible
- Test as early as possible
- What do we test in the Mozilla project?
- Functionality: whether the software functions as users expect it to
- Conformance: whether the software acts as dictated by standards
- Performance: whether the software performs its job fast enough
- Internals: whether the internal parts of the software each act as advertised
How do we test?
- Three aspects should be considered together:
- Testing tools: We need powerful testing tools for both automated and manual testing
- Testing process: We need a well-defined process for running those tools
- Without such a process, a huge set of test cases is no good
- Involving Humans: We need to know where to involve humans in the process, and how to do that
- Involving humans too much slows down the testing process
- Involving humans too little makes recovering from test failures difficult
- Motivating humans, as well as keeping the process fun and productive, is of high importance
Mozilla Testing Tools – Overview
- Automated Testing
- Unit testing
- xpcshell test harness
- Compiled-code tests
- Graphical/interactive tests
- Mochitest
- Chrome tests
- Browser chrome tests
- Reftest
- Crash tests
- Performance tests
- Manual testing
- Litmus
- Manual tests performed by the community
- Other tools
Mozilla Testing Tools – xpcshell test harness
- Provides component level unit testing capabilities
- Run from the top source directory using a single command:
make check
- Possibility of running a single test under a C++ debugger
- Provides a number of test framework functions
Mozilla Testing Tools – xpcshell test harness
- Test sample:
function test_functionality() {
  var result;
  // test something
  return true;
}

function do_test() {
  do_check_true(test_functionality());
  /**
   Other test framework functions available:
     do_throw("Error message");
     do_check_eq(value1, value2);
     do_check_neq(value1, value2);
     do_check_false(booleanValue);
     do_timeout(delay, "javascript code");
     do_test_pending();
     do_test_finished();
     do_import_script("path/to/script.js");
     do_get_file("path/to/file.txt");
   **/
}
Mozilla Testing Tools – xpcshell test harness
- Strengths:
- Lightweight, therefore higher execution throughput
- Can test any XPCOM component
- Tests written in high-level language
- A JavaScript-based HTTP server available, for network tests
- Weaknesses:
- Can't test components not exposed through XPCOM
- Can't open windows, test user interface and/or interactions
Mozilla Testing Tools – Compiled-code tests
- Used for testing components not exposed through XPCOM (such as XPCOM itself)
- Written in C++
- Don't have a unified error reporting strategy
- Usually hook into the xpcshell test harness to run automatically using:
make check
- Very small test framework available
Mozilla Testing Tools – Compiled-code tests
- Test sample:
#include "TestHarness.h"
nsresult MyTest() {
if (1 == 1) {
fail("why isn't 1 == 1?");
return NS_ERROR_FAILURE;
} else {
passed("MyTest");
}
return NS_OK;
}
int main() {
ScopedXPCOM xpcom("MyTests");
if (xpcom.failed())
return 1;
// XPCOM is now started up, and you can access components/services/etc.
int rv = 0;
if (NS_FAILED(MyTest()))
rv = 1;
// ...
return rv;
}
Mozilla Testing Tools – Compiled-code tests
- Strengths:
- Lightweight, therefore higher execution throughput
- Can test arbitrary C++ code
- Weaknesses:
- Tests written in low-level language
- Can't open windows, test user interface and/or interactions
- Generally, using these types of tests is not recommended, except where no other type of test can be used
Mozilla Testing Tools – Mochitest
- Test sample:
<!DOCTYPE HTML>
<html>
<head>
  <title>MyTest</title>
  <script type="text/javascript" src="/MochiKit/MochiKit.js"></script>
  <script type="text/javascript" src="/tests/SimpleTest/SimpleTest.js"></script>
  <link rel="stylesheet" type="text/css" href="/tests/SimpleTest/test.css" />
</head>
<body onload="myTest();">
  <p id="display"></p>
  <div id="content" style="display: none"></div>
  <pre id="test">
  <script class="testbody" type="text/javascript">
    function myTest() {
      is(something, true, "Test if something is true");
      SimpleTest.finish();
    }
    SimpleTest.waitForExplicitFinish();
  </script>
  </pre>
</body>
</html>
Mozilla Testing Tools – Mochitest
- Strengths:
- Can access XPCOM components
- Can open windows, test user interface and/or interactions
- Tests written in high-level languages
- Can express test requirements which are not yet satisfied (see the sketch after this list)
- Weaknesses:
- Heavy-duty, therefore lower execution throughput
- Only recommended if the test can't be written as an xpcshell test
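A minimal sketch of marking such a not-yet-satisfied requirement, assuming SimpleTest's todo() helper inside a test like the sample above (the tested property name is hypothetical):
  // todo() works like ok(), but records an expected failure; the harness
  // flags the assertion once it starts passing, so it can be upgraded to ok()
  todo("someFutureFeature" in document, "someFutureFeature is not implemented yet");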
Mozilla Testing Tools – Chrome tests
- Test sample:
<?xml version="1.0"?>
<?xml-stylesheet href="chrome://global/skin" type="text/css"?>
<?xml-stylesheet href="chrome://mochikit/content/tests/SimpleTest/test.css"
type="text/css"?>
<window xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"
title="MyTest" onload="myTest();">
<title>MyTest</title>
<script type="application/javascript"
src="chrome://mochikit/content/MochiKit/packed.js"/>
<script type="application/javascript"
src="chrome://mochikit/content/tests/SimpleTest/SimpleTest.js"/>
<script type="application/javascript">
<![CDATA[
function myTest() {
is(something, true, "Test if something is true");
SimpleTest.finish();
}
SimpleTest.waitForExplicitFinish();
]]>
</script>
<body xmlns="http://www.w3.org/1999/xhtml">
<p id="display"></p>
<div id="content" style="display: none"></div>
<pre id="test"></pre>
</body>
</window>
Mozilla Testing Tools – Browser chrome tests
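- Test sample: a minimal sketch, assuming the usual browser chrome harness globals (gBrowser, is(), finish()); the file name and test page are hypothetical:
  // browser_myTest.js runs with chrome privileges inside the browser
  // window, so browser globals such as gBrowser are directly available
  function test() {
    waitForExplicitFinish();
    var tab = gBrowser.addTab("data:text/html,<p>hello</p>");
    gBrowser.selectedTab = tab;
    tab.linkedBrowser.addEventListener("load", function onLoad() {
      tab.linkedBrowser.removeEventListener("load", onLoad, true);
      is(tab.linkedBrowser.contentDocument.body.textContent, "hello",
         "the test page loaded with the expected content");
      gBrowser.removeTab(tab);
      finish();
    }, true);
  }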
Mozilla Testing Tools – Browser chrome tests
- Strengths:
- Can access XPCOM components
- Can open windows, test user interface and/or interactions
- Tests written in a high-level language
- Can express test requirements which are not yet satisfied
- Weaknesses:
- Heavy-duty, therefore lower execution throughput
- Only recommended if the test can't be written as an xpcshell test
Mozilla Testing Tools – Reftest
- Tests usually consist of two HTML files, with a list file specifying what should be tested
- Test sample:
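  (a minimal sketch with hypothetical file names: the manifest line asserts that the two files render to identical output)
  # reftest.list
  == green-block.html green-block-ref.html

  <!-- green-block.html: the construct under test -->
  <!DOCTYPE html>
  <html><body>
    <div style="width: 100px; height: 100px; background: green"></div>
  </body></html>

  <!-- green-block-ref.html: the same rendering achieved with different markup -->
  <!DOCTYPE html>
  <html><body>
    <span style="display: block; width: 100px; height: 100px; background: green"></span>
  </body></html>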
Mozilla Testing Tools – Reftest
- Strengths:
- Easy to convert HTML test cases to reftests
- Tests written in high-level languages
- Can express test requirements which are not yet satisfied
- Can express platform-specific behavior
- Weaknesses:
- Can't access XPCOM components, or perform any privileged action
- Theoretically, every reftest can be written as a Mochitest, so tests requiring special privileges can be expressed as Mochitests
Mozilla Testing Tools – Crash tests
- Tests usually consist of one HTML file, with a list file specifying what should be tested
- Test sample:
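  (a minimal sketch with a hypothetical file name: the manifest loads each listed file, and the test passes as long as loading it does not crash the browser)
  # crashtests.list
  load 123456-1.html

  <!-- 123456-1.html: a reduced copy of content that once triggered a crash -->
  <!DOCTYPE html>
  <html>
  <body onload="document.body.removeChild(document.getElementById('d'));">
    <div id="d"></div>
  </body>
  </html>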
Mozilla Testing Tools – Crash tests
- Strengths:
- Tests an important property: that the browser should never crash, no matter what input is provided to it by the web
- Tests written in high-level languages
- Can express test requirements which are not yet satisfied
- Can test platform-specific crashes
- Weaknesses:
- Can't access XPCOM components, or perform any privileged action
- Crash tests requiring privileges can be written as other types of tests, where a crash is detected by getting a timeout error
Mozilla Testing Tools – Performance tests
- Ts: Startup time
- Difference between the time of starting the application, and the time when the home page is loaded
- Run 10 times, lowest value reported
- Txul: XUL window open time
- Time to open a XUL window
- 10 windows are opened, median and average times reported
- Tp, Tp2, pageloader extension: Page load time
- Loads a large number of pages and measures the load time
- Pages usually served by a local web server to minimize network latency variations
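- As a rough illustration only (not the actual pageloader extension), a page's load time can be measured from script like this:
  // measure the time from script start until the page's load event fires
  var start = Date.now();
  window.addEventListener("load", function () {
    dump("page load took " + (Date.now() - start) + " ms\n");
  }, false);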
Mozilla Testing Tools – Performance tests
- Tdhtml: DHTML performance
- Runs a set of DHTML test cases and measures the time for each test
- Reports median and average times for each testcase, as well as the raw data, and the geometric mean of the median times
- Bl/Lk: Bloat and Leak numbers
- Bloat numbers show the total number of instances and references for each class
- Leak numbers specify the amount of memory allocated but not freed when exiting the application
- Tr/Tgfx/Tsvg: Rendering Performance numbers
- Tr: time required to render real-world HTML content
- Tgfx: synthetic graphics test results (such as transparency rendering performance)
- Tsvg: synthetic SVG rendering test results
Mozilla Manual Tools – Litmus
- Litmus: a web-based testcase management tool
- Stores a test case repository as well as a test results repository
- Provides querying, reporting and comparison tools
- Supports web-services for automatic batch submissions of test results
- Supports multiple products and test groups
Mozilla Manual Tools – jsfunfuzz
- Introduced as a security tool at the Black Hat 2007 Conference
- Based on the ideas of Mutation Testing
- Works on test cases instead of the program
- Generates a large number of test cases testing both correctness and handling of incorrect syntax
- Revealed about 280 bugs in Mozilla's JavaScript engine
- Has also been used by Apple, Opera, and Microsoft to test their own JavaScript engines
Mozilla Manual Tools – jsfunfuzz
- Workflow:
- Generate test cases
- Focus on the test cases which generate incorrect output
- Correct the program in order for those tests to pass
- Run the tests on the program again to verify that the number of failures has decreased
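- As a toy illustration of this workflow (not jsfunfuzz itself), a fuzzer can generate random expression strings and feed them to the JavaScript engine, watching for crashes, assertion failures, and wrong results:
  function randomExpression(depth) {
    var atoms = ["0", "1", "x", "''", "[]", "({})"];
    var ops = ["+", "-", "*", "&&", "||"];
    if (depth == 0)
      return atoms[Math.floor(Math.random() * atoms.length)];
    return "(" + randomExpression(depth - 1) + " " +
           ops[Math.floor(Math.random() * ops.length)] + " " +
           randomExpression(depth - 1) + ")";
  }
  for (var i = 0; i < 1000; ++i) {
    var code = "var x = 1; " + randomExpression(3);
    try {
      eval(code);  // a crash or engine assertion here indicates a bug
    } catch (e) {
      // exceptions are expected for some generated inputs
    }
  }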
Mozilla Testing Process – Overview
- Continuous integration is the golden rule
- Principles
- Maintaining a central code repository (using Mercurial)
- Build automation (using BuildBot)
- Test automation (using BuildBot and the frameworks discussed)
- Reporting of the status of builds and tests (using Tinderbox)
- In addition, developers are required to test their changes before committing to Mercurial
Mozilla Testing Process – Details
- A build/test run starts with someone checking something in to Mercurial
- BuildBot waits a short while so that closely spaced check-ins are batched into a single build/test cycle
- BuildBot queues a build on three platforms (Windows, Mac OS X, Linux)
- After the build finishes on each platform, it is distributed to the test machines and queued for testing
- The results of each step can be viewed in Tinderbox
- A tree status is maintained:
- Check-ins on closed trees are not allowed
- Check-ins on open "green" trees are allowed
- Check-ins on open "orange" and "red" trees should be coordinated with the Sheriff
Mozilla Testing Process – Tinderbox
Mozilla Testing – Human Involvement
- Human involvement in:
- Automated testing
- Manual testing
- Keeping the process healthy:
- Keeping humans motivated
- Making the test process fun
- Those who break the tests
Human Involvement – Automated Testing
- "Sheriff" as the coordinator
- For build failures:
- The committer is notified and asked to fix the code as soon as possible
- If that doesn't work, the change is backed out by the sheriff
- During this time the tree may be closed to prevent further check-ins
Human Involvement – Automated Testing
- For test failures:
- The test failure is evaluated to determine whether something is actually wrong with the code
- The committer is notified and asked to fix the code as soon as possible
- If that doesn't work:
- For serious failures: the change is backed out by the sheriff
- For benign failures: a bug is filed to track the fix
- During this time the tree may be closed to prevent further check-ins
Human Involvement – Automated Testing
- For performance regressions:
- Performance numbers are reported by Tinderbox (graphs available as well)
- When performance regresses, the tree is closed
- All the people who committed something during the regression range are responsible
- They have one hour to determine whether their change caused the problem, or to explain why it could not have
- At the end of that time:
- If the cause of the regression is known:
- The suspect change is backed out
- Another test cycle is allowed to make sure the regression has been fixed
- The tree is opened
- If the cause of the regression is not known:
- All changes are backed out
- Each of them is allowed to be checked in again through a "metered process"
- The tree remains closed
Human Involvement – Manual Testing
- Community test events called "Test Days" held periodically
- The goals of each one are defined carefully
- Test cases and the results obtained by testers are recorded in Litmus
- The event is coordinated via the IRC network
- Motivating measures such as awarding top testers are in place
Human Involvement – Process Health
- What to do when a test fails?
- Blame each other?
- Attack the people responsible?
- Fire the people responsible?
- Shoot the people responsible?
- None will help!
- Instead:
- Politely ask the responsible person to fix the problem
- Help them if they can't
- Be assertive, and at the same time polite and helpful
What can we learn from Mozilla here?
- Testing is there to test the software, not the people developing it
- Try to help people not familiar with testing in a specific area
- Be serious: require tests where possible
- Be lazy: automate as much stuff as possible
- Be creative: testing opportunities may be wider than they appear at first sight
- Be responsive: if your code fails a test, try to fix the problem soon
- Be thorough: test as much as possible (correctness, efficiency, etc.)
- Have fun: testing can be tedious, try your best to make it fun
What else?
- Nearly all tools mentioned here are open source
- Some can be used in any project
- BuildBot
- Tinderbox
- Mercurial
- Litmus
- Bugzilla
- Some can be used on projects based on the Mozilla platform
- xpcshell tests
- Mochitests
- Reftest
- Some can be used as examples to apply to specific domains
- jsfunfuzz
- Performance tests