⚡️ Key Command: Import / Audit Site
```bash
npm run cli audit <url>
# Example: npm run cli audit https://example.com

# Then use the viewer to view the report, or check the report in the reports/ folder
npm run viewer
```
A minimal MCP-style server and CLI that can run audits, read and normalize web performance reports (Lighthouse JSON, HAR, Trace stub), summarize key metrics, and compare runs.
- Node.js 18+ (ES modules)
- macOS/Linux/WSL shell
- Chrome/Chromium installed (for running audits)
- Clone or open the project directory.
- Optionally run `npm install` (no external deps needed).
- Verify Node: `node -v` (ensure 18+). A condensed shell version of these steps follows below.
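The setup steps above, condensed into shell commands (the directory name is a placeholder for wherever you cloned or opened the project):

```bash
cd <project-directory>   # placeholder: your local copy of the project
node -v                  # should report v18 or newer
npm install              # optional; no external deps needed
```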
- `src/server.js`: JSON-RPC stdio server exposing MCP-like tools
- `src/tools.js`: tool implementations (`list_reports`, `read_report`, `summarize_report`, `compare_reports`)
- `src/parsers.js`: format detection and parsers (Lighthouse, HAR, Trace stub)
- `src/utils/fs.js`: JSON file IO helpers
- `src/types.js`: format enum and normalization helpers
- `src/cli.js`: simple CLI that summarizes the first found report
- `test/run-tests.js`: basic parser tests
- `reports/`: place your `.json` reports here (create this folder)
- Create a `reports/` directory at the project root.
- Add one or more files (commands to create and seed the folder follow this list):
  - Lighthouse JSON: `reports/lighthouse-run.json`
  - HAR: `reports/site.har.json` (HAR content saved as JSON)
  - Trace JSON: `reports/trace.json` (optional; basic long task extraction)
- Ensure the files are valid JSON.
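If you prefer the command line, the folder can be created and seeded like this. The `npm run cli audit` command is the project's own way to produce a report; the `npx lighthouse` line is only an assumption about one alternative way to generate a compatible Lighthouse JSON and is not part of this project.

```bash
mkdir -p reports

# Option 1: use the built-in audit command (saves the JSON into reports/)
npm run cli audit https://example.com

# Option 2 (assumption): generate a Lighthouse JSON yourself with the Lighthouse CLI
npx lighthouse https://example.com --output=json --output-path=reports/lighthouse-run.json
```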
- Run `npm run cli`
- If a report exists, you will see meta info and a compact summary, for example:
  `Performance score: 90; LCP: 1600ms; FCP: 800ms; INP: 120ms; CLS: 0.01; Transfer size: 123456 bytes; Requests: 2`
- If no reports are found, create `reports/`, add a JSON report, then rerun.
`npm run cli audit https://example.com`
- Runs a headless Chrome Lighthouse audit.
- Saves the JSON report to `reports/`.
- Prints a summary immediately.
Visualize your reports with the built-in local viewer.
- Run `npm run viewer`
- Your browser will automatically open `http://localhost:3000`
- Click any report in the sidebar to see:
  - Performance scores (color-coded)
  - Core Web Vitals (LCP, FCP, INP, CLS)
  - Chart of top optimization opportunities
- Run `npm start`
- The server reads line-delimited JSON from stdin and writes a line-delimited JSON response to stdout (a scripted client sketch follows the sample requests below), e.g.:
  `{"id":"10","method":"tools/call","params":{"name":"run_audit","arguments":{"url":"https://example.com"}}}`
- Send a JSON line to stdin:
{"id":"1","method":"tools/list"}
- Response contains tool definitions with names and input schemas.
- General request shape:
{"id":"<any>","method":"tools/call","params":{"name":"<tool_name>","arguments":{...}}}
- List reports under `reports/`:
  `{"id":"2","method":"tools/call","params":{"name":"list_reports","arguments":{"root":"reports"}}}`
- Run an audit against a URL:
  `{"id":"3","method":"tools/call","params":{"name":"run_audit","arguments":{"url":"https://example.com"}}}`
- Read and normalize a Lighthouse/HAR report (auto-detect):
  `{"id":"4","method":"tools/call","params":{"name":"read_report","arguments":{"path":"reports/lighthouse-run.json","format":"auto"}}}`
- Summarize a normalized report (use the `result` from `read_report` as `normalized`):
  `{"id":"5","method":"tools/call","params":{"name":"summarize_report","arguments":{"normalized":{...}}}}`
- Compare two runs (use `result` objects from two `read_report` calls as `base` and `head`):
  `{"id":"6","method":"tools/call","params":{"name":"compare_reports","arguments":{"base":{...},"head":{...},"metrics":["lcp","fcp","inp","cls","tbt"]}}}`
- You can use `printf` to send requests:
  `printf '\n{"id":"1","method":"tools/list"}\n' | npm start`
- For multiple calls, run the server and type/paste JSON lines; each line yields a response line.
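For scripted use, here is a minimal client sketch in Node. It assumes `npm start` simply runs `node src/server.js`, that each response is a single JSON line, and that the tool output lives in a `result` field (as the `read_report`/`compare_reports` examples above suggest). The file name and report paths are placeholders.

```js
// client.mjs (sketch only): drive the stdio JSON-RPC server from Node.
// Assumes `npm start` maps to `node src/server.js`; report paths are examples.
import { spawn } from 'node:child_process';
import { createInterface } from 'node:readline';

const server = spawn('node', ['src/server.js'], { stdio: ['pipe', 'pipe', 'inherit'] });
const lines = createInterface({ input: server.stdout });

const pending = new Map();
lines.on('line', (line) => {
  const msg = JSON.parse(line);            // one JSON response per line
  const resolve = pending.get(msg.id);
  if (resolve) {
    pending.delete(msg.id);
    resolve(msg);
  }
});

let nextId = 0;
function call(name, args) {
  const id = String(++nextId);
  const req = { id, method: 'tools/call', params: { name, arguments: args } };
  return new Promise((resolve) => {
    pending.set(id, resolve);
    server.stdin.write(JSON.stringify(req) + '\n');   // one JSON request per line
  });
}

// Example flow: read two reports, then compare them (paths are illustrative).
const base = await call('read_report', { path: 'reports/lighthouse-base.json', format: 'auto' });
const head = await call('read_report', { path: 'reports/lighthouse-head.json', format: 'auto' });
const diff = await call('compare_reports', { base: base.result, head: head.result, metrics: ['lcp', 'fcp', 'cls'] });
console.log(JSON.stringify(diff.result, null, 2));

server.stdin.end();
```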
- `list_reports` (arguments: `{ root?: string }`)
  - Returns `{ id, path }[]` of JSON files under `root` (default `reports`).
- `read_report` (arguments: `{ path: string, format?: "auto"|"lighthouse"|"har"|"trace" }`)
  - Returns a normalized object with `report`, `scores`, `metrics`, `network`, `opportunities`, `diagnostics`, `raw`.
- `summarize_report` (arguments: `{ normalized: object }`)
  - Returns `{ summary: string }` with key metrics.
- `compare_reports` (arguments: `{ base: object, head: object, metrics?: string[] }`)
  - Returns `{ diff: { metric, base, head, delta, percent }[], severity }` (a sketch of this diff shape follows the list).
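To make the `compare_reports` output concrete, here is an illustrative sketch of how a `{ metric, base, head, delta, percent }` entry could be derived from two normalized runs. This is not the project's implementation; the assumption that normalized metrics live under a `metrics` object keyed by lowercase metric name, and the severity thresholds, are made up for the example.

```js
// Illustrative only: not the project's implementation.
// Assumes normalized runs expose numbers under `metrics` keyed by lowercase metric name.
function diffMetrics(base, head, metrics = ['lcp', 'fcp', 'inp', 'cls', 'tbt']) {
  return metrics.map((metric) => {
    const b = base.metrics?.[metric] ?? null;
    const h = head.metrics?.[metric] ?? null;
    const delta = (h ?? 0) - (b ?? 0);
    const percent = b ? (delta / b) * 100 : null;
    return { metric, base: b, head: h, delta, percent };
  });
}

// Assumed severity rule: flag a regression when any metric worsens by more than 10%.
function severity(diff) {
  const worst = Math.max(0, ...diff.map((d) => d.percent ?? 0));
  return worst > 10 ? 'regression' : worst > 0 ? 'minor' : 'none';
}
```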
- Run `npm test` or `node test/run-tests.js`
- Verifies Lighthouse and HAR parsing, metrics, and network counts.
- No reports found: create `reports/` and add valid JSON files.
- Invalid JSON: ensure the file is valid; the server returns an error message.
- Format auto-detection fails: pass `format` explicitly (`"lighthouse"`, `"har"`, `"trace"`); an example request is shown after this list.
- INP variability: compare medians across consistent environments for stability.
- WebPageTest detection stub is present; parsing can be added similarly to Lighthouse/HAR.
- Security: the server reads only paths you provide; prefer `list_reports` to discover files under a known folder.
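For the explicit-format case above, a request might look like this (the HAR path matches the earlier example):

```json
{"id":"7","method":"tools/call","params":{"name":"read_report","arguments":{"path":"reports/site.har.json","format":"har"}}}
```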