Tests must explicitly fail when expected conditions are not met, rather than silently exiting early or using weak assertions that can hide real issues. Silent test failures create false confidence in test suites and can mask regressions.
Key practices:
- Assert preconditions explicitly (e.g., expect(value).toBeDefined()) instead of returning early when data is missing.
- Validate actual structure and content, not just existence.
- Use exact expectations (e.g., an exact length) rather than loose checks such as "greater than zero".
Example of problematic pattern:
it('should create sarif result with ignored issues omitted', async () => {
  const sarifWithoutIgnores = resultWithoutIgnores?.analysisResults.sarif.runs[0].results;
  if (!sarifWithoutIgnores) return; // Silent failure - test passes but validates nothing
  // ... rest of test
});
Better approach:
it('should create sarif result with ignored issues omitted', async () => {
  const sarifWithoutIgnores = resultWithoutIgnores?.analysisResults.sarif.runs[0].results;
  expect(sarifWithoutIgnores).toBeDefined(); // Explicit failure if condition not met
  // ... rest of test with confidence that data exists
});
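If the suite runs on Jest, expect.hasAssertions() offers an additional safety net: it fails the test if no assertion executes at all, which catches early returns that skip every check. A minimal sketch (the test body mirrors the example above; resultWithoutIgnores is illustrative):

it('should create sarif result with ignored issues omitted', async () => {
  // Fails the test if no assertion runs, e.g. due to an early return
  expect.hasAssertions();

  const sarifWithoutIgnores = resultWithoutIgnores?.analysisResults.sarif.runs[0].results;
  expect(sarifWithoutIgnores).toBeDefined();
  // ... rest of test
});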
For output validation:
// Weak - only checks existence
expect(stdout).toBeDefined();

// Better - validates actual structure and content
expect(JSON.parse(stdout)).toMatchSchema(expectedSchema);
expect(backendRequests).toHaveLength(2); // Exact expectation, not just > 0
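Note that toMatchSchema is not built into Jest; one way to get it is the jest-json-schema package, which registers the matcher via expect.extend. A sketch, assuming that package; the schema and the runCli helper are hypothetical:

import { matchers } from 'jest-json-schema';

expect.extend(matchers); // makes toMatchSchema available

// Hypothetical schema for the expected JSON output
const expectedSchema = {
  type: 'object',
  required: ['runs'],
  properties: {
    runs: { type: 'array' },
  },
};

it('emits JSON matching the expected schema', async () => {
  const stdout = await runCli(); // hypothetical helper that captures stdout as a string
  expect(JSON.parse(stdout)).toMatchSchema(expectedSchema);
});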
This prevents tests from appearing to pass when they're not actually testing anything meaningful.