This project contains automated tests for WebNN (Web Neural Network API) using Playwright.
- Chrome: Install Chrome browser (stable, canary, dev, or beta)
- Node.js: Install Node.js (version 16 or higher)
- Playwright: Will be installed via npm
- Install dependencies:
npm install

# Run with configuration file
node src/main.js --config config/example.json
# Run all WPT tests (--suite wpt is default)
node src/main.js
node src/main.js --suite wpt
# Run specific WPT test case
node src/main.js --suite wpt --wpt-case abs
# Run multiple WPT test cases (comma-separated)
node src/main.js --suite wpt --wpt-case abs,add,mul
# Run with different Chrome channel (stable is default)
node src/main.js --suite wpt --chrome-channel canary
node src/main.js --suite wpt --chrome-channel dev
node src/main.js --suite wpt --chrome-channel beta
# Run with parallel execution (faster)
node src/main.js --suite wpt --wpt-case "add,sub,mul,div" --jobs 4
# Run tests multiple times (repeat mode)
node src/main.js --suite wpt --wpt-case "add,sub" --repeat 3
# Combine options
node src/main.js --suite wpt --wpt-case "add,sub" --jobs 2 --repeat 3 --chrome-channel canary

Use --config to run tests defined in a JSON file. This allows defining complex test suites with specific devices and browser arguments.
node src/main.js --config config/example.json

Example configuration format:
[
  {
    "name": "ORT WPT",
    "browser-arg": "--enable-features=WebNNOnnxRuntime",
    "suite": "wpt",
    "wpt-case": "add",
    "device": "cpu,gpu"
  }
]
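For reference, here is a minimal sketch of how such a config could be consumed. The entry fields mirror the example above, but the helper names (`loadConfig`, `expandEntries`) are illustrative and not the project's actual API:

```js
// Illustrative only: load a config file and expand a comma-separated
// "device" list into one run per device. Helper names are hypothetical.
const fs = require('fs');

function loadConfig(path) {
  return JSON.parse(fs.readFileSync(path, 'utf8'));
}

function expandEntries(entries) {
  const runs = [];
  for (const entry of entries) {
    const devices = (entry.device || '').split(',').filter(Boolean);
    for (const device of devices.length ? devices : [undefined]) {
      runs.push({ ...entry, device });
    }
  }
  return runs;
}

// Example: an entry with "device": "cpu,gpu" expands to two runs.
console.log(expandEntries(loadConfig('config/example.json')));
```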
Use --wpt-case to run only tests whose names start with the given string(s). Multiple cases can be specified, separated by commas (no spaces):

# Run only tests with names starting with "abs"
node src/main.js --suite wpt --wpt-case abs
# Run tests with names starting with "abs" OR "add"
node src/main.js --suite wpt --wpt-case abs,add
# Run specific test indices (0-based)
node src/main.js --suite wpt --wpt-range 0,1,3-7
# This will run tests at indices 0, 1, 3, 4, 5, 6, 7 from the discovered list

The case selection is case-insensitive and matches the prefix of the test filename.
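To make the selection rules concrete, the sketch below models case-insensitive prefix matching for --wpt-case and index/range parsing for --wpt-range under the semantics described above. The function names and the sample filename are illustrative, not the runner's internals:

```js
// Illustrative only: filter discovered tests by case-insensitive filename
// prefix (--wpt-case) and by 0-based indices/ranges (--wpt-range).
function matchesCase(testFile, cases) {
  const name = testFile.toLowerCase();
  return cases.some((c) => name.startsWith(c.toLowerCase()));
}

function parseRange(expr) {
  const indices = new Set();
  for (const part of expr.split(',')) {
    const [start, end] = part.split('-').map(Number);
    if (end === undefined) {
      indices.add(start);
    } else {
      for (let i = start; i <= end; i++) indices.add(i);
    }
  }
  return indices;
}

console.log(parseRange('0,1,3-7'));          // Set {0, 1, 3, 4, 5, 6, 7}
console.log(matchesCase('abs.https.any.html', ['abs', 'add'])); // true
```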
Run tests against WebNN samples and developer previews using the model suite:
# Run all model tests
node src/main.js --suite model
# Run specific model cases
node src/main.js --suite model --model-case lenet
node src/main.js --suite model --model-case sdxl,whisper

Available model cases include:
- lenet: LeNet Digit Recognition
- segmentation: Semantic Segmentation
- style: Fast Style Transfer
- od: Object Detection
- ic: Image Classification
- sdxl: SDXL Turbo
- phi: Phi-3 WebGPU
- sam: Segment Anything
- whisper: Whisper-base WebGPU
Run multiple tests in parallel to speed up execution:
# Run with 2 parallel jobs
node src/main.js --suite wpt --wpt-case "add,sub,mul,div" --jobs 2
# Run with 4 parallel jobs
node src/main.js --suite wpt --wpt-case "add,sub,mul,div" --jobs 4
# Run with 8 parallel jobs
node src/main.js --suite wpt --wpt-case "add,sub,mul,div" --jobs 8
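As a rough model of what --jobs does, the sketch below runs a list of cases with a fixed number of concurrent workers. It is a simplified illustration, not the runner's actual implementation:

```js
// Illustrative only: run test cases with at most `jobs` cases in flight.
async function runWithJobs(cases, jobs, runCase) {
  const queue = [...cases];
  const workers = Array.from({ length: jobs }, async () => {
    while (queue.length > 0) {
      const next = queue.shift();
      await runCase(next);
    }
  });
  await Promise.all(workers);
}

// Example usage with a dummy per-case runner.
runWithJobs(['add', 'sub', 'mul', 'div'], 2, async (name) => {
  console.log(`running ${name}`);
});
```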
Run the entire test suite multiple times for stability testing:

# Run tests 3 times
node src/main.js --suite wpt --wpt-case "add,sub" --repeat 3
# Run with parallel execution, repeated 5 times
node src/main.js --suite wpt --wpt-case "add,sub,mul,div" --jobs 2 --repeat 5

Switch between different Chrome channels using --chrome-channel:
# Use stable Chrome (default)
node src/main.js --suite wpt
# Use Chrome Canary
node src/main.js --suite wpt --chrome-channel canary
# Use Chrome Dev
node src/main.js --suite wpt --chrome-channel dev
# Use Chrome Beta
node src/main.js --suite wpt --chrome-channel beta

Supported channels: stable (default), canary, dev, beta.
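Channel selection typically maps onto Playwright's `channel` launch option. The mapping table below is an assumption about how the flag values could translate, not code taken from the project:

```js
// Illustrative only: map --chrome-channel values to Playwright channels.
const { chromium } = require('playwright');

const CHANNELS = {
  stable: 'chrome',
  beta: 'chrome-beta',
  dev: 'chrome-dev',
  canary: 'chrome-canary',
};

async function launchChrome(channel = 'stable') {
  // An unknown value maps to undefined, which launches Playwright's
  // bundled Chromium instead of an installed Chrome channel.
  return chromium.launch({ channel: CHANNELS[channel] });
}
```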
Pass extra command-line arguments to the browser at launch using --browser-arg:
# Pass GPU selection flags
node src/main.js --suite wpt --browser-arg "--webnn-ort-ep-device=WebGpuExecutionProvider,0x8086,0x7d55"
# Pass multiple arguments
node src/main.js --suite wpt --browser-arg "--use-gl=angle --use-angle=gl"
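In Playwright terms, extra browser flags are usually forwarded through the `args` launch option. The sketch below shows that pattern; how the runner actually wires --browser-arg through is an assumption here:

```js
// Illustrative only: forward extra command-line flags to the browser.
const { chromium } = require('playwright');

async function launchWithArgs(extraArgs) {
  return chromium.launch({
    channel: 'chrome',
    // e.g. ['--use-gl=angle', '--use-angle=gl'] or WebNN feature flags.
    args: extraArgs,
  });
}

launchWithArgs(['--enable-features=WebNNOnnxRuntime']);
```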
After test execution completes, an HTML report is automatically generated with:

- WebNN Test Report Section: Displayed in the attachments section at the bottom of the page, with comprehensive test results
  - Test execution summary (passed, failed, error, unknown counts)
  - Detailed results for each test case with status and timing
  - Color-coded status indicators for easy identification
  - Test configuration information (suite, cases, jobs, etc.)
- Playwright Report Details: Contains detailed Playwright execution information
  - Individual test execution traces
  - Screenshots and videos (if configured)
  - Step-by-step test execution logs
  - Error details and stack traces for failed tests
Reports are saved in the report/ directory:
- Timestamped Reports: Each test run generates a timestamped file (format: YYYYMMDDHHMMSS.html)
- Iteration Reports: When using --repeat, each iteration gets a suffix _iter1, _iter2, etc.
- Direct Links: The console output provides direct file:// links to view reports
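The naming scheme can be summarized with a small helper. The function below only illustrates the documented `YYYYMMDDHHMMSS` / `_iterN` convention and is not the runner's actual code:

```js
// Illustrative only: build report filenames in the documented format,
// e.g. report/20251022143025.html or report/20251022143025_iter1.html.
function reportFileName(date = new Date(), iteration = null) {
  const pad = (n) => String(n).padStart(2, '0');
  const stamp =
    date.getFullYear() +
    pad(date.getMonth() + 1) +
    pad(date.getDate()) +
    pad(date.getHours()) +
    pad(date.getMinutes()) +
    pad(date.getSeconds());
  const suffix = iteration ? `_iter${iteration}` : '';
  return `report/${stamp}${suffix}.html`;
}

console.log(reportFileName()); // e.g. report/20251022143025.html
```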
Reports are automatically opened in your default browser after test completion. To view reports manually:
# Open the report directory
start report/
# Open a specific report
start report/20251022143025.html
# Open an iteration report
start report/20251022143025_iter1.html