This page documents how jsenv can be used to write and execute tests. The tests will be executed in a web browser.
If you want to execute tests in Node.js, go to I) Test in Node.js.
Best parts of jsenv tests:
- debugging a test file === debugging a source file
- Test execution is standard; switching from source files to test files is easy
- Isolated environment; each test file has a dedicated runtime
- Test files can be executed in Chrome, Firefox and Safari
- Smart parallelism
- Logs are nice; dynamic, colorful and human friendly
This section shows how to write a test for a source file and execute it using jsenv. Consider a project with the following file structure:

project/
  src/
    sum.js
    index.html
  package.json
Let's write a test for the following source file, src/sum.js:
export const sum = (a, b) => a + b;
In order to test sum.js, a few files are needed. The impact on the file structure is summarized below:
project/
+ scripts/
+   dev.mjs
+   test.mjs
  src/
    sum.js
+   sum.test.html
    index.html
  package.json
src/sum.test.html
<!doctype html>
<html>
  <head>
    <title>Title</title>
    <meta charset="utf-8" />
    <link rel="icon" href="data:," />
  </head>
  <body>
    <script type="module">
      import { sum } from "./sum.js";

      const actual = sum(1, 2);
      const expect = 3;
      if (actual !== expect) {
        throw new Error(`sum(1,2) should return 3, got ${actual}`);
      }
    </script>
  </body>
</html>
scripts/dev.mjs: start a web server, needed to execute sum.test.html in a browser.
import { startDevServer } from "@jsenv/core";

await startDevServer({
  sourceDirectoryUrl: new URL("../src/", import.meta.url),
  port: 3456,
});
scripts/test.mjs: execute test file(s).
import { executeTestPlan, chromium } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});
Before executing tests, install dependencies with the following commands:
npm i --save-dev @jsenv/core
npm i --save-dev @jsenv/test
npm i --save-dev @playwright/browser-chromium
☝️ playwright is used by @jsenv/test to start a web browser (chromium).
Everything is ready; tests can be executed with the following command:
node ./scripts/test.mjs
It will display the test execution logs in the terminal.
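Optionally, the command can be wrapped in an npm script. This is a suggested addition to package.json, not something jsenv requires:

{
  "scripts": {
    "test": "node ./scripts/test.mjs"
  }
}

Tests can then be run with npm test.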
In a real project there would be many test files:
project/
  src/
    sum.test.html
    foo.test.html
    bar.test.html
    ... and so on ...
Each test file can be executed in isolation, directly in the browser: for example, start the dev server (node ./scripts/dev.mjs) and open http://localhost:3456/sum.test.html.
The page is blank because the execution of sum.test.html completed without error and without rendering anything on the page. Some tests could render UI, but that's not the case here.
Test execution can be debugged using the browser dev tools.
To keep the example basic, the code comparing actual and expect was written without an assertion library. In practice a test would likely use one. The diff below shows how the assertion can be written using @jsenv/assert; any other assertion library would work as well.
+ import { assert } from "@jsenv/assert";
import { sum } from "./sum.js";
const actual = sum(1, 2);
const expect = 3;
- if (actual !== expect) {
- throw new Error(`sum(1,2) should return 3, got ${actual}`);
- }
+ assert({ actual, expect });
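After applying the diff, the test script becomes the following (@jsenv/assert would need to be installed, for instance with npm i --save-dev @jsenv/assert):

import { assert } from "@jsenv/assert";
import { sum } from "./sum.js";

const actual = sum(1, 2);
const expect = 3;
assert({ actual, expect });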
Your web server is automatically started if needed, thanks to the webServer parameter.

If there is a server listening at webServer.origin:
- Tests are executed using the server already running.

If there is no server listening at webServer.origin:
1. webServer.moduleUrl or webServer.command is executed in a separate process.
2. The code waits for the server to start; if it is not started within 5 seconds, an error is thrown.
3. Tests are executed using the server started in step 1.
4. Once tests are done, the server is stopped by killing the process used to start it.
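As a sketch, webServer.command could be used instead of webServer.moduleUrl. The value below is an assumption (a shell command that starts the dev server), not something taken from the jsenv documentation:

import { executeTestPlan, chromium } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    // assumption: command is the shell command used to start the server in a separate process
    command: "node ./scripts/dev.mjs",
  },
});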
Tests can also be executed in Firefox and WebKit (Safari), in addition to Chromium:

import { executeTestPlan, chromium, firefox, webkit } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
      },
      firefox: {
        runtime: firefox(),
      },
      webkit: {
        runtime: webkit(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});
Before executing tests, install the firefox and webkit dependencies with the following commands:
npm i --save-dev @playwright/browser-firefox
npm i --save-dev @playwright/browser-webkit
The terminal output then shows each test file executed in all three browsers.
Each test is executed in a browser tab using one instance of the browser.
If you need to push isolation even further, you can dedicate a browser instance per test: use chromiumIsolatedTab instead of chromium. The same can be done for firefox and webkit (a sketch follows the example below).
import { executeTestPlan, chromiumIsolatedTab } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromiumIsolatedTab(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});
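For firefox and webkit, the naming pattern suggests firefoxIsolatedTab and webkitIsolatedTab. The exact export names below are an assumption to verify against @jsenv/test, not taken from its documentation:

import {
  executeTestPlan,
  chromiumIsolatedTab,
  firefoxIsolatedTab, // assumed export name, following the chromiumIsolatedTab pattern
  webkitIsolatedTab, // assumed export name, following the chromiumIsolatedTab pattern
} from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: { runtime: chromiumIsolatedTab() },
      firefox: { runtime: firefoxIsolatedTab() },
      webkit: { runtime: webkitIsolatedTab() },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});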
Executions are started one after another without waiting for the previous one to finish. Parallelism can be configured using the parallel parameter.
import { executeTestPlan, chromium } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  parallel: {
    max: "50%",
    maxCpu: "50%",
    maxMemory: "50%",
  },
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});
parallel.max controls the maximum number of executions started in parallel.

max | Max executions in parallel
--- | ---
1 | Only one (disables parallelism)
5 | 5
80% | 80% of the cores available on the machine

The default value is 80%: on a machine with 10 processors, as long as there are fewer than 8 executions ongoing, remaining executions try to start in parallel.
Parallelism can also be disabled with parallel: false, which is equivalent to parallel: { max: 1 }.
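A minimal sketch disabling parallelism, with the other parameters kept from the previous examples:

import { executeTestPlan, chromium } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  parallel: false, // equivalent to parallel: { max: 1 }
  testPlan: {
    "./src/**/*.test.html": {
      chromium: { runtime: chromium() },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});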
parallel.maxCpu prevents an execution from being started in parallel when the process CPU usage is too high. The default value is 80%: as long as the process CPU usage is below 80% of the total CPU available on the machine, remaining executions try to start in parallel.
parallel.maxMemory prevents an execution from being started in parallel when memory usage is too high. The default value is 50%: as long as the process memory usage is below 50% of the total memory available on the machine, remaining executions try to start in parallel.
Each test file is given 30 seconds to execute. If this duration is exceeded, the browser tab is closed and the execution is considered failed. This duration can be configured with the allocatedMs parameter, as shown below:
import { executeTestPlan, chromium } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
        allocatedMs: 60_000,
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});
It's possible to generate HTML files showing how much code was covered by the execution of test files. Such a coverage report can be generated by the following code:
import { executeTestPlan, chromium, reportCoverageAsHtml } from "@jsenv/test";

const testResult = await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
  coverage: true,
});
reportCoverageAsHtml(testResult, new URL("./coverage/", import.meta.url));
Coverage can also be written to a JSON file:
import { executeTestPlan, chromium, reportCoverageAsJson } from "@jsenv/test";

const testResult = await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
  coverage: true,
});
reportCoverageAsJson(testResult, new URL("./coverage.json", import.meta.url));
This JSON file can be given to other tools, for example https://github.com/codecov/codecov-action.
Now let's say we want to get code coverage for the following file, client/many.js:
if (window.navigator.userAgent.includes("Firefox")) {
  console.log("firefox");
} else if (window.navigator.userAgent.includes("Chrome")) {
  console.log("chrome");
} else if (window.navigator.userAgent.includes("AppleWebKit")) {
  console.log("webkit");
} else {
  console.log("other");
}
This file will be executed by the following HTML file, client/many.test.html:
<!doctype html>
<html>
  <head>
    <title>Title</title>
    <meta charset="utf-8" />
    <link rel="icon" href="data:," />
  </head>
  <body>
    <script type="module">
      import "./many.js";
    </script>
  </body>
</html>
Now let's use jsenv to execute the HTML file in Firefox, Chrome and WebKit, and generate the coverage:
import {
  executeTestPlan,
  chromium,
  firefox,
  webkit,
  reportCoverageAsHtml,
} from "@jsenv/test";

const testPlanResult = await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./client/**/many.test.html": {
      chromium: {
        runtime: chromium(),
      },
      firefox: {
        runtime: firefox(),
      },
      webkit: {
        runtime: webkit(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../client/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
  coverage: true,
});
reportCoverageAsHtml(testPlanResult, new URL("./coverage/", import.meta.url));
The coverage is generated, but the following warning is displayed in the console:
Coverage conflict on "./client/many.js", found two coverage that cannot be merged together: v8 and istanbul. The istanbul coverage will be ignored.
--- details ---
This happens when a file is executed on a runtime using v8 coverage (node or chromium) and on runtime using istanbul coverage (firefox or webkit)
--- suggestion ---
disable this warning with coverage.v8ConflictWarning: false
--- suggestion 2 ---
force coverage using istanbul with coverage.methodForBrowsers: "istanbul"
At this point, either disable the warning with coverage: { v8ConflictWarning: false }, or force chromium to use "istanbul" so that its coverage can be merged with the coverage from firefox and webkit, using coverage: { methodForBrowsers: "istanbul" }.
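A sketch of the second option, forcing istanbul coverage in the browsers (following the suggestion from the warning above):

import {
  executeTestPlan,
  chromium,
  firefox,
  webkit,
  reportCoverageAsHtml,
} from "@jsenv/test";

const testPlanResult = await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./client/**/many.test.html": {
      chromium: { runtime: chromium() },
      firefox: { runtime: firefox() },
      webkit: { runtime: webkit() },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../client/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
  coverage: {
    // force istanbul instrumentation so coverage from chromium, firefox and webkit can be merged
    methodForBrowsers: "istanbul",
  },
});
reportCoverageAsHtml(testPlanResult, new URL("./coverage/", import.meta.url));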
During test execution, browsers are opened in headless mode, and once all tests are executed all browsers are closed. It's possible to display the browsers and keep them opened using keepRunning: true:
import { executeTestPlan, chromium } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
+ keepRunning: true,
});
In that case execution timeouts are disabled.
The following code forwards custom launch options to playwright:
import { executeTestPlan, chromium } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium({
          playwrightLaunchOptions: {
            ignoreDefaultArgs: ["--mute-audio"],
          },
        }),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});
See https://playwright.dev/docs/api/class-browsertype#browser-type-launch for the available launch options.
The value returned by executeTestPlan is an object called testPlanResult.
import { executeTestPlan } from "@jsenv/test";

const testPlanResult = await executeTestPlan({
  // ...parameters as in the previous examples...
});
It contains all execution results and a few more details.