The Bad Old Days
The tests date to pre-mozilla days and do not follow the style of any of the other test suites.
Not all of the tests pass, and keeping track of the known failures has been a time-consuming and error-prone task.
A single person was responsible for adding new tests, causing new tests to increasingly lag behind new patches.
All of that has come to an end, because…
The Good Times have arrived!
(at least for the browser on tracemonkey)
David Baron’s reftest framework provides the solution to many of the problems of the Bad Old Days. It provides:
Familiar test invocation.
The reftest-style manifests provide an excellent means of tracking obsolete tests, test failures, tests which take too long to run, or tests which crash.
Easy (hopefully) integration into the automated tests running on Tinderbox, by leveraging the same approaches used to run the layout reftests.
Currently, only the browser jsreftests are implemented. The same approach is planned for the shell tests.
Running the jsreftests in the browser is familiar to anyone who has run the layout reftests or crash tests. Simply invoke make in the object directory with the jstestbrowser target:
$ make -C $(OBJDIR) jstestbrowser
This executes the following command.
$ python $(OBJDIR)/_tests/reftest/runreftest.py \
    --extra-profile-file=js/src/tests/user.js js/src/tests/jstests.list
No longer will you have to perform base-line runs to determine the known failures prior to testing your patches. If the jsreftests run to completion without crashing and without generating any UNEXPECTED results, your patch is good to go.
jstests.list is the reftest manifest file used to tell the test runner which tests to execute.
The manifest files are organized into a 3-level hierarchy.
The top level manifest files live in js/src/tests/. They incorporate the individual suites through the include manifest keyword. Whenever a new suite is added, the top level manifest files must be updated to include it. New suites must also be listed in js/src/tests/Makefile.in in order to be packaged with the other tests.
The second level manifest files are contained in the suite directories. They also include the sub-suites through the use of the include manifest keyword. Whenever a new sub-suite is added to a suite, the suite’s manifest files must be updated to include it.
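As a sketch of what the include keyword looks like in practice (the suite names below are illustrative, not a verbatim copy of the tree), a top level manifest pulling in two suites might read:

```
# js/src/tests/jstests.list (sketch; suite names are illustrative)
include ecma_5/jstests.list
include js1_8/jstests.list
```

Each included path is itself a manifest, which is what produces the 3-level hierarchy described above.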
The third and final level manifest files are contained in the sub-suite directories. These manifest files list each test using the script manifest keyword. This keyword tells the test runner to obtain the actual test results from the array returned by the getTestCases() function in the testcase. In order for the test results to be properly recorded in the test case results array, you must use one of the standard reporting functions, such as reportCompare(), reportMatch(), etc., to record the result of the test. If you do not use reportCompare() or one of its cousins, your test will fail with the message “No test results reported.”
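For illustration, a minimal test file in this style might look like the following sketch. reportCompare() is normally supplied by the harness, so the stub below exists only to let the sketch run standalone, and the variable names just mirror a common convention in the suite:

```javascript
// Stub of the harness-provided reporting function, included only so this
// sketch runs standalone; the real jsreftest harness defines reportCompare().
function reportCompare(expected, actual, description) {
  var status = (expected === actual) ? "PASSED" : "FAILED";
  console.log(status + "! " + description +
              " expected: " + expected + " actual: " + actual);
}

// The test itself: compute a value and report it through reportCompare()
// so the harness records a result. Skipping this call is what produces
// the "No test results reported" failure described above.
var summary = "Array.prototype.indexOf finds an existing element";
var expect = 2;
var actual = [1, 2, 3].indexOf(3);

reportCompare(expect, actual, summary);
```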
The first line of the manifest file contains a “url-prefix” which tells the reftest extension to prefix each test file with the url-prefix value before attempting to load it. The shell test driver will ignore the url-prefix, thus allowing a single manifest file to be used by both the shell and browser. The remainder of the manifest lists each test file preceded by the script manifest keyword:

url-prefix ../../jsreftest.html?test=ecma_5/Date/
script 15.9.4.2.js
The truly great thing about the reftest manifest files is their ability to give tests a failure-type.
Tests marked with fails will be executed and will be marked as TEST-KNOWN-FAIL if they fail. However, if they pass, they will be marked as TEST-UNEXPECTED-PASS.
Tests marked with random will be executed and will be marked as TEST-KNOWN-FAIL(EXPECTED RANDOM) if they fail or TEST-PASS(EXPECTED RANDOM) if they pass. This is useful for tests which may not reliably report a test failure but which you wish to continue to execute in the event they may find a regression through a crash or assertion.
random results are also used in the event that a test that returns multiple test results is marked with the failure-type fails. In that case, failures are marked with TEST-KNOWN-FAIL while successes are marked with TEST-PASS(EXPECTED RANDOM). This is necessary since it is not possible to mark the individual test case results in a multi-result test as either passing or failing.
Tests marked with skip will not be executed at all. skip can be used for obsolete tests, for tests which are known to crash the browser, tests which do not terminate, or tests which take too much time.
The even greater thing about the reftest manifest files is their ability to make the failure-types conditional. This means that you can conditionally mark tests as failing or to be skipped depending on the build type (whether you are testing a debug build), the operating system, or even the CPU type. The variables which can be used in the failure-type conditional are described in the reftest sandbox implementation. You can also find examples in the tree.
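A sketch of what such conditional annotations can look like (the file names here are made up; the fails-if/skip-if/random-if syntax and the xulRuntime/isDebugBuild sandbox variables follow the layout reftest manifest conventions):

```
url-prefix ../../jsreftest.html?test=example/
skip script obsolete-test.js
fails-if(xulRuntime.OS == "WINNT") script windows-known-failure.js
skip-if(isDebugBuild) script too-slow-in-debug.js
random-if(xulRuntime.XPCOMABI.match(/x86_64/)) script flaky-on-64bit.js
```

An unannotated script line is simply expected to pass.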
You can read more about the reftest manifests in the reftest README.txt.
Checklist when adding New Tests
New suites should be added to the top level jstests.list files.
New sub-suites should be added to the suite level jstests.list files.
New tests should be listed in the appropriate sub-suite’s jstests.list file. Each sub-suite’s jstests.list manifest must begin with a url-prefix.
Tests must call the reportCompare() function (or its cousins) to record test results.
Updated September 26, 2009
The original patch had to be backed out due to failures in the SpiderMonkey shell. As a result, the js/tests were moved to js/src/tests to facilitate their integration into the build system. There was also some pushback about the two different manifest files required by the original design. With a modification to reftest.js to allow the use of a url-prefix, the browser-specific manifest was dropped.
See bug 469718 for more background.