Tests: Added an option to accept the actual token stream #2515

Merged
merged 2 commits
Aug 17, 2020
24 changes: 21 additions & 3 deletions test-suite.html
@@ -93,8 +93,11 @@ <h2 id="writing-tests-writing-your-test">Writing your test</h2>

<p>Your file is built up of two or three sections, separated by ten or more dashes <code>-</code>, starting at the beginning of the line:</p>
<ol>
<li>Your language snippet. The code you want to compile using Prism. (<strong>required</strong>)</li>
<li>The simplified token stream you expect. Needs to be valid JSON. (<strong>required</strong>)</li>
<li>Your language snippet. The code you want to tokenize using Prism. (<strong>required</strong>)</li>
<li>
The simplified token stream you expect. Needs to be valid JSON. (<em>optional</em>) <br>
If there is no token stream defined, the test case will fail unless the <code>--accept</code> flag is present when running the test command (e.g. <code>npm run test:languages -- --accept</code>). If the flag is present and there is no expected token stream, the test runner will insert the actual token stream into the test case file, modifying it in place.
</li>
<li>A comment explaining the test case. (<em>optional</em>)</li>
</ol>
<p>The easiest way would be to look at an existing test file:</p>
@@ -114,10 +117,25 @@ <h2 id="writing-tests-writing-your-test">Writing your test</h2>

This is a comment explaining this test case.</code></pre>
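
<p>Putting it all together, a minimal test case file has this shape (the snippet and the token names below are purely illustrative, not the exact output of any particular grammar):</p>

<pre><code>let x = 42;

----------------------------------------------------

[
	["keyword", "let"],
	" x ",
	["operator", "="],
	["number", "42"],
	["punctuation", ";"]
]

----------------------------------------------------

This is a comment explaining this test case.</code></pre>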

<h2 id="writing-tests-the-easy-way">The easy way</h2>
<p>The easy way to create one or more new test cases is this:</p>

<ol>
<li>Create a new file for a new test case in <code>tests/languages/${language}</code>.</li>
<li>Insert the code you want to test (and nothing more).</li>
<li>Repeat the first two steps for as many test cases as you want.</li>
<li>Run <code>npm run test:languages -- --accept</code>.</li>
<li>Done.</li>
</ol>

<p>This works by making the test runner insert the actual token stream of your test code as the expected token stream. <strong>Carefully check that the inserted token stream is actually what you expect or else the test is meaningless!</strong></p>
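
<p>For example, a brand-new test file might contain nothing but the snippet (hypothetical content):</p>

<pre><code>let x = 42;</code></pre>

<p>Running <code>npm run test:languages -- --accept</code> then appends the separator line and the pretty-printed token stream to the file, giving it the shape shown in the example above.</p>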

<p>Optionally, you can then also add comments to test cases.</p>


<h2 id="writing-tests-explaining-the-simplified-token-stream">Explaining the simplified token stream</h2>

<p>While compiling, Prism transforms your source code into a token stream. This is basically a tree of nested tokens (or arrays, or strings).</p>
<p>While highlighting, Prism transforms your source code into a token stream. This is basically a tree of nested tokens (or arrays, or strings).</p>
<p>As these trees are hard to write by hand, the test runner uses a simplified version of it.</p>
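<p>For example, the markup snippet <code>&lt;p&gt;</code> produces nested tokens; its simplified form looks roughly like this (illustrative):</p>

<pre><code>[
	["tag", [
		["tag", [
			["punctuation", "&lt;"],
			"p"
		]],
		["punctuation", "&gt;"]
	]]
]</code></pre>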
<p>It uses the following rules:</p>
<ul>
124 changes: 75 additions & 49 deletions tests/helper/test-case.js
@@ -49,43 +49,66 @@ module.exports = {
*
* @param {string} languageIdentifier
* @param {string} filePath
* @param {boolean} acceptEmpty
*/
runTestCase(languageIdentifier, filePath) {
runTestCase(languageIdentifier, filePath, acceptEmpty) {
const testCase = this.parseTestCaseFile(filePath);
const usedLanguages = this.parseLanguageNames(languageIdentifier);

if (null === testCase) {
throw new Error("Test case file has invalid format (or the provided token stream is invalid JSON), please read the docs.");
}

const Prism = PrismLoader.createInstance(usedLanguages.languages);

// the first language is the main language to highlight
const simplifiedTokenStream = this.simpleTokenize(Prism, testCase.testSource, usedLanguages.mainLanguage);
const simplifiedTokenStream = this.simpleTokenize(Prism, testCase.code, usedLanguages.mainLanguage);

if (testCase.expectedTokenStream === null) {
// the test case doesn't have an expected value
if (!acceptEmpty) {
throw new Error('This test case doesn\'t have an expected token stream.'
+ ' Either add the JSON of a token stream or run \`npm run test:languages -- --accept\`'
+ ' to automatically add the current token stream.');
}

// change the file
const lineEnd = (/\r\n/.test(testCase.code) || !/\n/.test(testCase.code)) ? '\r\n' : '\n';
const separator = "\n\n----------------------------------------------------\n\n";
const pretty = TokenStreamTransformer.prettyprint(simplifiedTokenStream)
.replace(/^( +)/gm, m => {
return "\t".repeat(m.length / 4);
});

let content = testCase.code + separator + pretty;
if (testCase.comment) {
content += separator + testCase.comment;
}
content = content.replace(/\r?\n/g, lineEnd);

fs.writeFileSync(filePath, content, "utf-8");
} else {
// there is an expected value
const actual = JSON.stringify(simplifiedTokenStream);
const expected = JSON.stringify(testCase.expectedTokenStream);

if (actual === expected) {
// no difference
return;
}

// The index of the first difference between the expected token stream and the actual token stream.
// The index is in the raw expected token stream JSON of the test case.
const diffIndex = translateIndexIgnoreSpaces(testCase.expectedJson, expected, firstDiff(expected, actual));
const expectedJsonLines = testCase.expectedJson.substr(0, diffIndex).split(/\r\n?|\n/g);
const columnNumber = expectedJsonLines.pop().length + 1;
const lineNumber = testCase.expectedLineOffset + expectedJsonLines.length;

const tokenStreamStr = TokenStreamTransformer.prettyprint(simplifiedTokenStream);
const message = "\n\nActual Token Stream:" +
"\n-----------------------------------------\n" +
tokenStreamStr +
"\n-----------------------------------------\n" +
"File: " + filePath + ":" + lineNumber + ":" + columnNumber + "\n\n";

assert.deepEqual(simplifiedTokenStream, testCase.expectedTokenStream, testCase.comment + message);
}
},

/**
@@ -160,33 +183,36 @@ module.exports = {
*
* @private
* @param {string} filePath
* @returns {{testSource: string, expectedTokenStream: Array<string[]>, comment:string?}|null}
* @returns {ParsedTestCase}
*
* @typedef ParsedTestCase
* @property {string} code
* @property {string} expectedJson
* @property {number} expectedLineOffset
* @property {Array | null} expectedTokenStream
* @property {string} comment
*/
parseTestCaseFile(filePath) {
const testCaseSource = fs.readFileSync(filePath, "utf8");
const testCaseParts = testCaseSource.split(/^-{10,}\w*$/m);

try {
const testCase = {
testSource: testCaseParts[0].trim(),
expectedJson: testCaseParts[1],
expectedLineOffset: testCaseParts[0].split(/\r\n?|\n/g).length,
expectedTokenStream: JSON.parse(testCaseParts[1]),
comment: null
};

// if there are three parts, the third one is the comment
// explaining the test case
if (testCaseParts[2]) {
testCase.comment = testCaseParts[2].trim();
}

return testCase;
}
catch (e) {
// the JSON can't be parsed (e.g. it could be empty)
return null;
}

const testCaseParts = testCaseSource.split(/^-{10,}[ \t]*$/m);

if (testCaseParts.length > 3) {
throw new Error("Invalid test case format: Too many sections.");
}

const code = testCaseParts[0].trim();
const expected = (testCaseParts[1] || '').trim();
const comment = (testCaseParts[2] || '').trim();

const testCase = {
code,
expectedJson: expected,
expectedLineOffset: code.split(/\r\n?|\n/g).length,
expectedTokenStream: expected ? JSON.parse(expected) : null,
comment
};

return testCase;
},

/**
4 changes: 3 additions & 1 deletion tests/run.js
@@ -12,6 +12,8 @@ const testSuite =
// load complete test suite
: TestDiscovery.loadAllTests(__dirname + "/languages");

const accept = !!argv.accept;

// define tests for all tests in all languages in the test suite
for (const language in testSuite) {
if (!testSuite.hasOwnProperty(language)) {
@@ -27,7 +29,7 @@

it("– should pass test case '" + fileName + "'", function () {
if (path.extname(filePath) === '.test') {
TestCase.runTestCase(language, filePath);
TestCase.runTestCase(language, filePath, accept);
} else {
TestCase.runTestsWithHooks(language, require(filePath));
}