A collaborative workspace that analyzes contribution patterns and rhythm rather than content semantics. The focus is on participation balance, activity bursts, and real-time engagement metrics.
- Shared Workspace: Minimalist interface supporting both voice (transcribed via OpenAI Whisper) and typing contributions
- Rhythm Analysis: Focuses on pace, volume, and patterns of contributions, not content meaning
- Real-time Feedback: Live visualization of activity levels, burst patterns, and participation balance
Voice Channel
- Live speech capture and transcription via OpenAI Whisper API
- High-quality real-time transcription with confidence scoring
- Speaker attribution with timestamps
- Prosodic cues (talk time, pauses, overlap) for rhythm analysis
- Graceful fallback to mock transcriptions when the API is unavailable (sketched below)
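A minimal sketch of that fallback path, assuming the official openai SDK in a Vite client context; the function name, mock text, and wiring are illustrative, not this repo's actual code:

```typescript
import OpenAI from "openai";

// dangerouslyAllowBrowser is needed for client-side calls; a production
// deployment would route this through a backend proxy instead.
const client = new OpenAI({
  apiKey: import.meta.env.VITE_OPENAI_API_KEY,
  dangerouslyAllowBrowser: true,
});

export async function transcribeChunk(
  chunk: Blob
): Promise<{ text: string; source: "whisper" | "mock" }> {
  try {
    const file = new File([chunk], "chunk.webm", { type: chunk.type || "audio/webm" });
    const res = await client.audio.transcriptions.create({ file, model: "whisper-1" });
    return { text: res.text, source: "whisper" };
  } catch {
    // Graceful degradation: a mock transcript keeps the rhythm pipeline fed.
    return { text: "[mock transcription]", source: "mock" };
  }
}
```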
Typing Channel
- Keystroke activity tracking (insertions, deletions, bursts, pauses)
- Timestamped activity segments
- No content analysis: only timing and volume metrics are captured (see the tracker sketch below)
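A sketch of what content-free keystroke tracking could look like: only timestamps and change sizes leave the editor, never the text itself. All names and the 2-second pause threshold are assumptions:

```typescript
// One timestamped activity segment; note there is no `text` field anywhere.
interface TypingSegment {
  participantId: string;
  startMs: number;
  endMs: number;
  insertions: number;
  deletions: number;
}

export function createTypingTracker(participantId: string, pauseMs = 2000) {
  let current: TypingSegment | null = null;
  const segments: TypingSegment[] = [];

  return {
    // Called on every edit with only the size of the change, not its content.
    record(charsInserted: number, charsDeleted: number, now = Date.now()) {
      if (current && now - current.endMs > pauseMs) {
        segments.push(current); // pause was long enough: close the segment
        current = null;
      }
      current ??= { participantId, startMs: now, endMs: now, insertions: 0, deletions: 0 };
      current.endMs = now;
      current.insertions += charsInserted;
      current.deletions += charsDeleted;
    },
    // Drains all completed segments, e.g. for windowed aggregation.
    flush(): TypingSegment[] {
      if (current) segments.push(current);
      current = null;
      return segments.splice(0);
    },
  };
}
```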
- Activity Bars: Per-participant contribution rate (voice seconds/min + typed chars/min)
- Burst Timeline: Visual blocks showing sustained speaking/typing activity
- Balance Indicator: Team balance score reflecting equitable floor sharing (one possible scoring approach is sketched after this list)
- Soft Nudges: Optional gentle cues when participation becomes imbalanced
- Content Agnostic: Uses timing and volume only, no semantic analysis
- Aggregation Options: Can send windowed aggregates instead of raw event streams
- Session Controls: Host manages metric visibility and nudge settings
- Transparency: Clear UI indicators show exactly what's measured and shared
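One plausible way to compute the balance score is normalized Shannon entropy over per-participant contribution shares; the formula and threshold below are assumptions for illustration, not the app's documented method:

```typescript
// Balance = normalized Shannon entropy of contribution shares, in [0, 1]:
// 1.0 means perfectly even participation; values near 0 mean one voice dominates.
export function balanceScore(contributions: number[]): number {
  const total = contributions.reduce((a, b) => a + b, 0);
  if (total === 0 || contributions.length < 2) return 1;
  const entropy = contributions
    .map((c) => c / total)
    .filter((p) => p > 0)
    .reduce((h, p) => h - p * Math.log2(p), 0);
  return entropy / Math.log2(contributions.length);
}

// Soft nudge: fires only when the host has enabled nudges and balance is low.
export function shouldNudge(score: number, nudgesEnabled: boolean, threshold = 0.6): boolean {
  return nudgesEnabled && score < threshold;
}
```

For example, balanceScore([120, 115, 130]) is close to 1.0, while balanceScore([300, 10, 5]) falls to roughly 0.2 and would trigger a nudge at the default threshold.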
```bash
# Install dependencies
npm install

# Configure OpenAI API (optional for demo)
cp .env.example .env.local
# Add your OpenAI API key to .env.local

# Start development server
npm run dev

# Build for production
npm run build

# Run linting
npm run lint

# Run type checking
npm run typecheck
```

This app integrates with the OpenAI Whisper API for real-time speech transcription:
- Real-time transcription: Convert speech to text with high accuracy using Whisper-1 model
- Word-level timestamps: Precise timing information for rhythm analysis (requested as sketched after this list)
- Confidence scoring: Quality indicators for transcription reliability
- Fallback mode: Intelligent mock transcriptions when API is unavailable
- Connection monitoring: Live status indicator in the voice controls
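Word-level timing is available from the transcription endpoint via response_format: "verbose_json" with timestamp_granularities; deriving a confidence score from each segment's avg_logprob, as below, is one reasonable heuristic rather than a documented feature of this app:

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: import.meta.env.VITE_OPENAI_API_KEY,
  dangerouslyAllowBrowser: true, // demo only; proxy through a server in production
});

export async function transcribeWithTimings(file: File) {
  const res = await client.audio.transcriptions.create({
    file,
    model: "whisper-1",
    response_format: "verbose_json",
    timestamp_granularities: ["word", "segment"],
  });
  // res.words: [{ word, start, end }] gives per-word timing for rhythm analysis.
  // exp(avg_logprob) maps each segment's average log-probability into (0, 1],
  // giving a rough reliability score for the transcription.
  const segments = res.segments ?? [];
  const confidence = segments.length
    ? segments.reduce((s, seg) => s + Math.exp(seg.avg_logprob), 0) / segments.length
    : 0;
  return { text: res.text, words: res.words ?? [], confidence };
}
```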
- Get an API key from the OpenAI Platform
- Add it to .env.local: VITE_OPENAI_API_KEY=your_api_key_here
- The app will automatically detect and use the API when configured (as sketched below)
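Detection can be as simple as checking the Vite environment variable at startup; a sketch of the assumed wiring:

```typescript
// Vite exposes VITE_-prefixed variables on import.meta.env at build time.
const apiKey = import.meta.env.VITE_OPENAI_API_KEY;

// Callers pick the live Whisper pipeline or the mock pipeline off this flag.
export const whisperAvailable = typeof apiKey === "string" && apiKey.length > 0;
```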
- Create or Join Session: Enter a display name, then create a new session or join the demo
- Contribute: Type in the shared document or use voice recording
- Monitor Rhythm: Watch the live activity bars and burst timeline in the sidebar
- Session Summary: Export participation metrics and patterns when complete
The demo showcases:
- Two users typing with different patterns
- One user adding voice contributions
- Live visualization of activity bars and burst indicators
- Balance indicator responding to participation changes
- Clean participation report at session end
- Frontend: React + TypeScript + Vite
- Styling: Tailwind CSS
- State Management: Zustand (store sketch after this list)
- Audio: Web Audio API + MediaRecorder
- Icons: Lucide React
- Data Fetching: TanStack Query
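As a rough illustration of the Zustand layer, here is a minimal session store; the state shape is assumed, not taken from src/stores/:

```typescript
import { create } from "zustand";

interface SessionState {
  sessionId: string | null;
  nudgesEnabled: boolean;
  // participantId -> cumulative contribution (chars typed + seconds spoken)
  contributions: Record<string, number>;
  joinSession: (id: string) => void;
  addContribution: (participantId: string, amount: number) => void;
  setNudgesEnabled: (enabled: boolean) => void;
}

export const useSessionStore = create<SessionState>((set) => ({
  sessionId: null,
  nudgesEnabled: true,
  contributions: {},
  joinSession: (id) => set({ sessionId: id, contributions: {} }),
  addContribution: (participantId, amount) =>
    set((state) => ({
      contributions: {
        ...state.contributions,
        [participantId]: (state.contributions[participantId] ?? 0) + amount,
      },
    })),
  setNudgesEnabled: (enabled) => set({ nudgesEnabled: enabled }),
}));
```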
- src/components/: UI components for workspace, feedback, and controls
- src/stores/: Zustand stores for session state management
- src/types/: TypeScript interfaces and types
- src/utils/: Utility functions and helpers
- src/services/: External service integrations (OpenAI Whisper API)
- Rate: Characters/minute (typing), seconds/minute (speaking)
- Volume: Cumulative characters/seconds per participant
- Bursts: Contiguous contributions separated by pauses (segmentation sketched after this list)
- Balance: Evenness of contribution distribution over time
- Overlap: Speaking overlap detection for interruption patterns
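A sketch of burst segmentation as defined above: events are grouped into one burst while the gap between consecutive timestamps stays under a pause threshold. The types and the 3-second default are assumptions:

```typescript
interface ActivityEvent {
  participantId: string;
  timestampMs: number;
  amount: number; // characters typed or seconds spoken
}

interface Burst {
  participantId: string;
  startMs: number;
  endMs: number;
  totalAmount: number;
}

// Call per participant, with that participant's events sorted by timestamp:
// a gap longer than pauseMs closes the current burst and opens a new one.
export function detectBursts(events: ActivityEvent[], pauseMs = 3000): Burst[] {
  const bursts: Burst[] = [];
  for (const e of events) {
    const last = bursts[bursts.length - 1];
    if (last && e.timestampMs - last.endMs <= pauseMs) {
      last.endMs = e.timestampMs;
      last.totalAmount += e.amount;
    } else {
      bursts.push({
        participantId: e.participantId,
        startMs: e.timestampMs,
        endMs: e.timestampMs,
        totalAmount: e.amount,
      });
    }
  }
  return bursts;
}
```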