A React + TensorFlow.js demo that detects hand gestures from live webcam video and maps them to discrete intent signals (rock / paper / scissors). The project uses real-time landmark tracking and gesture classification to drive an interactive feedback loop (prediction + game outcome).
Webcam → Hand Landmarks → Gesture Classification → Intent Output → UI Feedback
- Hand tracking: TensorFlow.js Handpose detects 21 hand landmarks per frame.
- Gesture recognition: Fingerpose compares landmark geometry against gesture templates.
- Interaction loop: The UI displays the recognized gesture and computes the game outcome (a minimal outcome helper is sketched below).
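The outcome step is plain JavaScript. Here is a minimal sketch; the `BEATS` table and `gameOutcome` helper are hypothetical names for illustration, not taken from the repo:

```js
// Hypothetical helper: which gesture each gesture beats.
const BEATS = { rock: 'scissors', paper: 'rock', scissors: 'paper' };

// Returns 'win' | 'lose' | 'draw' from the player's perspective.
function gameOutcome(player, computer) {
  if (player === computer) return 'draw';
  return BEATS[player] === computer ? 'win' : 'lose';
}

console.log(gameOutcome('rock', 'scissors')); // 'win'
```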
- React (UI + webcam loop)
- TensorFlow.js Handpose (hand landmark detection)
- Fingerpose (gesture classification)
TensorFlow.js is a JavaScript ML library that provides pretrained models for the browser. This project uses the Handpose model to estimate hand landmarks from live video:
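A minimal sketch of that flow, following the published `@tensorflow-models/handpose` API and assuming `videoElement` is a playing `<video>` fed by the webcam:

```js
import '@tensorflow/tfjs';
import * as handpose from '@tensorflow-models/handpose';

async function detectHand(videoElement) {
  // Load the pretrained model (in real code, load once and reuse).
  const model = await handpose.load();

  // Estimate hands in the current frame; one prediction per detected hand.
  const predictions = await model.estimateHands(videoElement);

  // Each prediction exposes 21 [x, y, z] keypoints in `landmarks`.
  return predictions.length > 0 ? predictions[0].landmarks : null;
}
```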
Fingerpose uses Handpose landmarks to classify gestures. This repo defines and detects three gestures (a classification sketch follows the documentation link below):
- rock
- paper
- scissors

Documentation:
- https://openbase.com/js/fingerpose/documentation
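A minimal sketch of the Fingerpose flow using its documented `GestureDescription` / `GestureEstimator` API; the exact curl weights for the repo's rock/paper/scissors templates are assumptions here:

```js
import * as fp from 'fingerpose';

// Sketch of a "rock" template: four fingers fully curled, thumb half-curled.
const rock = new fp.GestureDescription('rock');
[fp.Finger.Index, fp.Finger.Middle, fp.Finger.Ring, fp.Finger.Pinky]
  .forEach((finger) => rock.addCurl(finger, fp.FingerCurl.FullCurl, 1.0));
rock.addCurl(fp.Finger.Thumb, fp.FingerCurl.HalfCurl, 0.5);

// paper and scissors would be described the same way, then all three
// registered with a single estimator.
const estimator = new fp.GestureEstimator([rock /*, paper, scissors */]);

// `landmarks` are the 21 keypoints returned by Handpose.
function classify(landmarks) {
  const { gestures } = estimator.estimate(landmarks, 8.5); // min score, 0..10
  if (gestures.length === 0) return null;
  // Keep the highest-scoring match.
  return gestures.reduce((a, b) => (a.score > b.score ? a : b)).name;
}
```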
Demo video: `rpsdemo.mp4`
To run the demo locally:

- `npm install` to install dependencies
- `npm start` to launch the development server