DamengRandom/web-performance-checker
Web Performance MCP Tools


⚡️ Key Command: Import / Audit Site

npm run cli audit <url>
# Example: npm run cli audit https://example.com
# Then open the viewer, or read the raw report from the `reports` folder
npm run viewer

A minimal MCP-style server and CLI that can run audits, read and normalize web performance reports (Lighthouse JSON, HAR, Trace stub), summarize key metrics, and compare runs.

Requirements

  • Node.js 18+ (ES modules)
  • macOS/Linux/WSL shell
  • Chrome/Chromium installed (for running audits)

Installation

  1. Clone or open the project directory.
  2. Optionally run npm install (no external dependencies are required).
  3. Verify Node: node -v (ensure 18+).

Project Structure

  • src/server.js JSON-RPC stdio server exposing MCP-like tools
  • src/tools.js tool implementations (list_reports, read_report, summarize_report, compare_reports)
  • src/parsers.js format detection and parsers (Lighthouse, HAR, Trace stub)
  • src/utils/fs.js JSON file IO helpers
  • src/types.js format enum and normalization helpers
  • src/cli.js simple CLI that summarizes the first found report
  • test/run-tests.js basic parser tests
  • reports/ place your .json reports here (create this folder)

Prepare Reports

  1. Create a reports/ directory at the project root.
  2. Add one or more files:
    • Lighthouse JSON: reports/lighthouse-run.json
    • HAR: reports/site.har.json (HAR files are already JSON; save with a .json extension)
    • Trace JSON: reports/trace.json (optional; basic long task extraction)
  3. Ensure the files are valid JSON.
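
The parsers auto-detect each report's format from distinctive top-level keys. A minimal sketch of such a check (illustrative only; the real logic lives in src/parsers.js and `detectFormat` is a hypothetical name):

```javascript
// Sketch of report format auto-detection. Lighthouse JSON carries
// top-level `lighthouseVersion` and `audits`; a HAR file wraps its
// requests in a top-level `log.entries` array; Chrome traces use
// `traceEvents`. Anything else is reported as unknown.
function detectFormat(json) {
  if (json && json.lighthouseVersion && json.audits) return "lighthouse";
  if (json && json.log && Array.isArray(json.log.entries)) return "har";
  if (json && Array.isArray(json.traceEvents)) return "trace";
  return "unknown";
}
```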

Quick Start (CLI Summary)

  1. Run npm run cli
  2. If a report exists, you will see meta info and a compact summary:
    • Example: Performance score: 90; LCP: 1600ms; FCP: 800ms; INP: 120ms; CLS: 0.01; Transfer size: 123456 bytes; Requests: 2
  3. If no reports are found, create reports/ and add a JSON report, then rerun.
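
The compact summary line can be produced from the normalized metrics roughly like this (a sketch based on the example output above; the field names such as `performanceScore` and `transferBytes` are assumptions, and the real formatter lives in src/cli.js):

```javascript
// Sketch of the compact CLI summary line. Field names are assumed
// from the example output; the actual normalized shape may differ.
function formatSummary(s) {
  return [
    `Performance score: ${s.performanceScore}`,
    `LCP: ${s.lcp}ms`,
    `FCP: ${s.fcp}ms`,
    `INP: ${s.inp}ms`,
    `CLS: ${s.cls}`,
    `Transfer size: ${s.transferBytes} bytes`,
    `Requests: ${s.requests}`,
  ].join("; ");
}
```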

Run a New Audit

  • npm run cli audit https://example.com
  • Runs a headless Chrome Lighthouse audit.
  • Saves the JSON report to reports/.
  • Prints a summary immediately.

Web Dashboard

Visualize your reports with the built-in local viewer.

  1. Run npm run viewer
  2. Your browser will automatically open http://localhost:3000
  3. Click any report in the sidebar to see:
    • Performance scores (color-coded)
    • Core Web Vitals (LCP, FCP, INP, CLS)
    • Chart of top optimization opportunities

Start the MCP-Style Server

  • Run npm start
  • The server reads line-delimited JSON requests from stdin and writes one line-delimited JSON response per request to stdout, e.g. {"id":"10","method":"tools/call","params":{"name":"run_audit","arguments":{"url":"https://example.com"}}}
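
Line-delimited framing means every request and response is one JSON object terminated by a newline. A minimal sketch of client-side encoding/decoding helpers (not part of the repo; names are illustrative):

```javascript
// Encode a request as a single newline-terminated JSON line.
function encodeRequest(id, method, params) {
  const msg = { id: String(id), method };
  if (params !== undefined) msg.params = params;
  return JSON.stringify(msg) + "\n";
}

// Decode one response line back into an object.
function decodeResponse(line) {
  return JSON.parse(line.trim());
}
```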

List Available Tools

  • Send a JSON line to stdin:
    • {"id":"1","method":"tools/list"}
  • Response contains tool definitions with names and input schemas.

Call Tools

  • General request shape:
    • {"id":"<any>","method":"tools/call","params":{"name":"<tool_name>","arguments":{...}}}
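
Since every tools/call request shares this shape, a small helper can generate the payloads (an illustrative sketch; the server only cares about the JSON shape, not how it is produced):

```javascript
// Build a tools/call request string for any tool name and arguments.
function buildCall(id, name, args) {
  return JSON.stringify({
    id: String(id),
    method: "tools/call",
    params: { name, arguments: args },
  });
}
```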

Examples

  • List reports under reports/:
    • {"id":"2","method":"tools/call","params":{"name":"list_reports","arguments":{"root":"reports"}}}
  • Run an audit against a URL:
    • {"id":"3","method":"tools/call","params":{"name":"run_audit","arguments":{"url":"https://example.com"}}}
  • Read and normalize a Lighthouse/HAR report (auto-detect):
    • {"id":"4","method":"tools/call","params":{"name":"read_report","arguments":{"path":"reports/lighthouse-run.json","format":"auto"}}}
  • Summarize a normalized report (use the result from read_report as normalized):
    • {"id":"5","method":"tools/call","params":{"name":"summarize_report","arguments":{"normalized":{...}}}}
  • Compare two runs (use result objects from two read_report calls as base and head):
    • {"id":"6","method":"tools/call","params":{"name":"compare_reports","arguments":{"base":{...},"head":{...},"metrics":["lcp","fcp","inp","cls","tbt"]}}}

Shell-Friendly Interaction

  • You can use printf to send requests:
    • printf '\n{"id":"1","method":"tools/list"}\n' | npm start
  • For multiple calls, run the server and type/paste JSON lines; each line yields a response line.

Tool Reference

  • list_reports(arguments: { root?: string })
    • Returns { id, path }[] of JSON files under root (default reports).
  • read_report(arguments: { path: string, format?: "auto"|"lighthouse"|"har"|"trace" })
    • Returns normalized object with report, scores, metrics, network, opportunities, diagnostics, raw.
  • summarize_report(arguments: { normalized: object })
    • Returns { summary: string } with key metrics.
  • compare_reports(arguments: { base: object, head: object, metrics?: string[] })
    • Returns { diff: { metric, base, head, delta, percent }[], severity }.
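
The diff entries from compare_reports are per-metric deltas between the two runs. A sketch of that computation (illustrative only; the real implementation is in src/tools.js, which also derives the severity):

```javascript
// Compute per-metric deltas between a base and head run. `base` and
// `head` are maps of metric name -> value; percent is relative to base
// (null when base is 0 to avoid division by zero).
function diffMetrics(base, head, metrics) {
  return metrics.map((metric) => {
    const b = base[metric];
    const h = head[metric];
    const delta = h - b;
    const percent = b === 0 ? null : (delta / b) * 100;
    return { metric, base: b, head: h, delta, percent };
  });
}
```

For example, an LCP regression from 1600ms to 2000ms yields a delta of 400 and a percent of 25.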

Testing

  • Run npm test or node test/run-tests.js
  • Verifies Lighthouse and HAR parsing, metrics, and network counts.

Troubleshooting

  • No reports found: create reports/ and add valid JSON files.
  • Invalid JSON: ensure the file is valid; the server returns an error message.
  • Format auto-detection fails: pass format explicitly ("lighthouse", "har", "trace").
  • INP variability: compare medians across consistent environments for stability.

Notes

  • A WebPageTest detection stub is present; full parsing can be added in the same way as the Lighthouse/HAR parsers.
  • Security: the server reads only paths you provide; prefer list_reports to discover files under a known folder.
