The haus²⁵ backend comprises four services:
- Chat
- Curation
- Storage
- Livestreaming
haus²⁵ integrates the XMTP protocol to provide secure, decentralized chat for event communities. Each event gets a dedicated group chat accessible only to its ticket holders and creator.
The integration uses XMTP's browser SDK with optimistic group creation patterns to minimize blockchain interactions while maintaining security.
```mermaid
graph TB
A[Event Creation] --> B[Optimistic Group Creation]
B --> C[Creator Group Storage]
D[Ticket Purchase] --> E[User XMTP Client Init]
E --> F[Creator Adds User to Group]
G[Event Room] --> H[Group Discovery]
H --> I[Message Streaming]
J[Group Metadata] --> K[On-chain Storage]
K --> L[IPFS Backup]
style B fill:#ff6b6b
style F fill:#4ecdc4
style I fill:#96ceb4
style K fill:#feca57
```
Group Lifecycle:
- Creation: Creator creates an optimistic group at event minting
- Population: Ticket purchases trigger member additions to the optimistic group (both steps are sketched after this list)
- Activation: Event start enables real-time messaging
- Persistence: Group ID stored in event metadata for discovery
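A minimal sketch of the creation and population steps; the method names (`newGroupOptimistic`, `addMembers`) are assumptions that may differ between @xmtp/browser-sdk versions:

```typescript
import { Client } from "@xmtp/browser-sdk"

// Sketch only: newGroupOptimistic and addMembers are assumed method names
// and may differ between @xmtp/browser-sdk versions.
async function createEventGroup(creatorClient: Client, eventId: string) {
  // Creation: the group exists locally before any network round-trip
  const group = await creatorClient.conversations.newGroupOptimistic()

  return {
    group,
    // Population: called on each ticket purchase; publishing the member
    // addition syncs the optimistic group to the XMTP network
    addTicketHolder: (buyerInboxId: string) => group.addMembers([buyerInboxId]),
  }
}
```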
Persistent Storage:
- Each user gets a dedicated database (`xmtp-${userAddress}`), as in the initialization sketch below
- No database conflicts between different wallet connections
- Stable client state across browser sessions
- Reduced worker spawning with consistent DB paths
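A minimal client initialization sketch; `Client.create`'s exact signature and option names vary between browser SDK releases, so treat `env` and `dbPath` as assumptions to verify:

```typescript
import { Client, type Signer } from "@xmtp/browser-sdk"

// Sketch: one XMTP client per connected wallet, backed by a wallet-scoped
// database so different wallets never share local state. Option names are
// assumptions; check the installed @xmtp/browser-sdk version.
async function initXmtpClient(signer: Signer, userAddress: string) {
  return Client.create(signer, {
    env: "production",
    // Dedicated per-user database, e.g. xmtp-0xabc..., as described above
    dbPath: `xmtp-${userAddress.toLowerCase()}`,
  })
}
```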
Ticket-Based Permissions:
- Only ticket holders can join group chats
- Creator verification before group creation
- Real-time access revocation for refunded tickets
Message Privacy:
- End-to-end encryption via XMTP protocol
- Decentralized storage prevents platform censorship
- User-controlled data with client-side key management
Explicit Consent Model:
```typescript
// All groups require explicit consent
await group.updateConsentState(ConsentState.Allowed)

// Sync only allowed conversations
await client.conversations.syncAll([ConsentState.Allowed])

// Stream messages from consented groups only
const controller = await client.conversations.streamAllMessages({
  consentStates: [ConsentState.Allowed],
  onValue: handleMessage
})
```

Connection Pooling:
- Single client per wallet across all events
- Persistent connections avoid repeated initialization
- Efficient memory usage with shared client instances
Database Optimization:
- Stable DB paths prevent worker respawning
- Incremental sync reduces bandwidth usage
- Local caching for message history
Client Recovery:
- Automatic reconnection for dropped connections
- State recovery from persistent storage
- Fallback modes for degraded functionality
The haus²⁵ autonomous curation system represents the first fully operational multi-agent representation agency for live performers. Built using LangChainJS modules and LangGraphJS patterns, it provides AI-powered event optimization from planning to promotion to post-production.
The curation system implements a hierarchical multi-agent architecture where specialized agents are coordinated by supervisor agents across three distinct scopes.
```mermaid
graph TB
subgraph "Curation Service"
API[Express API Layer]
PS[Planner Supervisor]
PrS[Promoter Supervisor]
PdS[Producer Supervisor]
end
subgraph "Shared Agents"
RAG[RAG Agent<br/>LangChain VectorStore]
RES[Research Agent<br/>Google + YouTube APIs]
MEM[Memory Agent<br/>On-Chain Storage]
BC[Blockchain Agent<br/>SEI Integration]
TR[Trends Agent<br/>Apify + Summarization]
SK[Social Knowledge Agent<br/>Platform Specifications]
end
subgraph "Specialized Agents"
TA[Title Agent<br/>Gemini Flash]
DA[Description Agent<br/>Gemini Flash]
PA[Pricing Agent<br/>Gemini Lite]
SA[Schedule Agent<br/>Gemini Lite]
BA[Banner Agent<br/>Imagen + DALL-E]
CM[Content Manager<br/>Claude Sonnet]
XA[X Agent<br/>Claude Sonnet]
EB[EventBrite Agent<br/>Claude Sonnet]
FB[Facebook Agent<br/>Claude Sonnet]
IG[Instagram Agent<br/>Claude + GPT-4o-mini]
end
subgraph "Storage & Blockchain"
IPFS[Pinata IPFS<br/>Metadata Storage]
EF[EventFactory<br/>Smart Contract]
EM[EventManager<br/>Smart Contract]
PM[Proxy Management<br/>Delegation Pattern]
end
API --> PS
API --> PrS
API --> PdS
PS --> RAG & RES & MEM & BC
PS --> TA & DA & PA & SA & BA
PrS --> RAG & RES & MEM & BC & TR & SK
PrS --> CM & XA & EB & FB & IG
MEM --> IPFS
MEM --> EM
BC --> EF
BC --> PM
style PS fill:#ff6b6b
style PrS fill:#4ecdc4
style MEM fill:#ffe66d
style BC fill:#a8e6cf
```
The system leverages multiple LangChainJS modules for cost-efficient operations:
Embedding and Vector Storage: GoogleGenerativeAIEmbeddings with MemoryVectorStore for user history indexing and contextual retrieval.
Summarization Chains: TokenTextSplitter combined with loadSummarizationChain using the "refine" chain type to reduce token costs on large social media datasets.
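A condensed sketch of those two modules working together; import paths, model names, and chunk sizes are illustrative and vary across LangChainJS versions:

```typescript
import { GoogleGenerativeAIEmbeddings, ChatGoogleGenerativeAI } from "@langchain/google-genai"
import { MemoryVectorStore } from "langchain/vectorstores/memory"
import { TokenTextSplitter } from "langchain/text_splitter"
import { loadSummarizationChain } from "langchain/chains"

// --- Contextual retrieval over user history ---
const userHistoryDocs = ["Past event: acoustic set, 120 attendees", "Bio: jazz pianist"]
const vectorStore = await MemoryVectorStore.fromTexts(
  userHistoryDocs,
  userHistoryDocs.map((_, i) => ({ id: i })),
  new GoogleGenerativeAIEmbeddings()
)
const context = await vectorStore.similaritySearch("past jazz performances", 4)

// --- Cheap summarization of large social datasets via a "refine" chain ---
const rawSocialData = "...scraped posts and engagement stats from Apify..."
const splitter = new TokenTextSplitter({ chunkSize: 2000, chunkOverlap: 100 })
const docs = await splitter.createDocuments([rawSocialData])
const chain = loadSummarizationChain(
  new ChatGoogleGenerativeAI({ model: "gemini-2.0-flash" }),
  { type: "refine" }
)
const summary = await chain.invoke({ input_documents: docs })
```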
Multi-Model Orchestration:
- Gemini 2.5 Flash Lite: Creative tasks (titles, descriptions)
- Gemini 2.0 Flash Exp: Analytical tasks (pricing, scheduling)
- Claude 3.5 Sonnet: High-quality content strategy
- GPT-4o-mini: Specific platform optimizations
Unlike traditional agentic frameworks that use expensive vector databases or inconsistent local storage, haus²⁵ implements the first on-chain iteration system for AI memory and RAG operations.
Technical Advantages:
- 100x cost reduction compared to traditional vector storage
- Immutable single source of truth prevents cache inconsistencies
- Persistence across deployments eliminates data loss risk
- Transparent audit trail for all AI decisions
Implementation Pattern: Each iteration contains aspect type, iteration number, original/proposed values, AI rationale, confidence score, timestamp, and source. Storage flows through EventManager contract updates with Pinata IPFS metadata uploads.
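A sketch of the record each iteration could carry, mirroring the fields listed above (field names are illustrative; the on-chain encoding may differ):

```typescript
// Shape of a single curation iteration as stored through EventManager updates
// and Pinata IPFS uploads. Field names are illustrative.
interface CurationIteration {
  aspect: "title" | "description" | "pricing" | "schedule" | "banner"
  iterationNumber: number   // monotonically increasing per aspect
  originalValue: string
  proposedValue: string
  rationale: string         // AI explanation for the proposed change
  confidence: number        // confidence score in [0, 1]
  timestamp: number         // unix epoch milliseconds
  source: string            // agent that produced the proposal
}
```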
Based on LangGraphJS StateGraph patterns but simplified for production efficiency. PlannerSupervisor coordinates agents through two-phase execution: shared context preparation (user history indexing, category research) followed by parallel specialized agent execution (title, description, pricing, schedule, banner generation).
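A simplified sketch of that two-phase flow; the types and agent signatures are illustrative stand-ins rather than the production LangGraphJS graph:

```typescript
// Illustrative types; the production supervisor is built on LangGraphJS state patterns.
interface EventDraft { creator: string; category: string }
interface SharedContext { userHistory: string[]; categoryResearch: string }
type Agent<T> = (draft: EventDraft, context: SharedContext) => Promise<T>

async function runPlannerSupervisor(
  draft: EventDraft,
  prepareContext: (draft: EventDraft) => Promise<SharedContext>,
  agents: {
    title: Agent<string>
    description: Agent<string>
    pricing: Agent<number>
    schedule: Agent<string>
    banner: Agent<string>
  }
) {
  // Phase 1: shared context preparation (user history indexing, category research)
  const context = await prepareContext(draft)

  // Phase 2: parallel execution of the specialized agents against the same context
  const [title, description, pricing, schedule, banner] = await Promise.all([
    agents.title(draft, context),
    agents.description(draft, context),
    agents.pricing(draft, context),
    agents.schedule(draft, context),
    agents.banner(draft, context),
  ])

  return { title, description, pricing, schedule, banner }
}
```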
Specialized Agents:
- Title Agent: Gemini Flash for creative title generation
- Description Agent: Gemini Flash for compelling descriptions
- Pricing Agent: Gemini Lite for numerical optimization
- Schedule Agent: Gemini Lite for data-driven timing
- Banner Agent: Google Imagen + DALL-E fallback for visual content
Services:
- Enhanced event descriptions with engagement optimization
- Optimal scheduling recommendations based on audience data
- Reserve price optimization for maximum community participation
- Custom banner generation using AI art tools
Inherits: All Planner capabilities plus specialized promotional agents
Specialized Agents:
- Content Manager: Claude Sonnet for comprehensive strategy planning
- X Agent: Claude Sonnet for Twitter/X content creation
- EventBrite Agent: Claude Sonnet for event listing optimization
- Facebook Agent: Claude Sonnet for community-focused content
- Instagram Agent: Claude Sonnet + GPT-4o-mini for visual strategy
Services:
- Comprehensive promotional campaign development
- Social media content creation and scheduling
- Cross-platform promotion (Twitter, Instagram, Facebook, EventBrite)
- Community building and audience development strategies
Status: Architecture designed, implementation pending
Planned Services:
- No-compression video storage for maximum quality preservation
- AI-powered video enhancement and post-processing
- Comprehensive event highlight reels and documentation
- Professional-grade metadata compilation and presentation
The Lighthouse storage service monitors live streams and automatically uploads video content to permanent decentralized storage, ensuring performances are preserved without reliance on centralized platforms.
Built on Lighthouse, it processes video streams into 60-second chunks and provides verifiable Proof of Data Possession (PDP) receipts.
```mermaid
graph TB
A[SRS Live Stream] --> B[HLS Segments]
B --> C[Storage Service Monitor]
C --> D[60s Chunk Processing]
D --> E[FFmpeg Optimization]
E --> F[Lighthouse Upload]
F --> G[Filecoin Network]
G --> H[PDP Generation]
H --> I[Metadata Compilation]
style C fill:#ff6b6b
style F fill:#4ecdc4
style G fill:#96ceb4
style H fill:#feca57
```
Real-time Processing:
- HLS Monitor: Watches the SRS output directory using `chokidar`
- Video Processor: Combines segments into optimized chunks
- Upload Service: Handles Lighthouse communication and retry logic (see the upload sketch after this list)
- Metadata Service: Compiles manifests and creates IPFS backups
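A minimal upload sketch using the @lighthouse-web3/sdk package; the response fields read here follow the SDK's documented upload response but should be verified against the installed version:

```typescript
import lighthouse from "@lighthouse-web3/sdk"

// Sketch: push one optimized chunk to Lighthouse/Filecoin and return its CID.
// The data.Hash / data.Size fields follow the SDK's documented upload response
// but should be verified against the installed version.
async function uploadChunkToLighthouse(chunkPath: string, apiKey: string) {
  const response = await lighthouse.upload(chunkPath, apiKey)
  return {
    cid: response.data.Hash,
    sizeBytes: Number(response.data.Size),
  }
}
```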
StorageService: Initializes the HLSMonitor, VideoProcessor, Lighthouse, and MetadataService components. Its startLivestreamStorage method prepares storage space and configures the monitoring and processing pipeline for the given eventId and creator.
File System Watching: The HLSMonitor class attaches a chokidar watcher to the event-specific hlsPath, processes incoming .ts segments, and buffers six segments (60 seconds) before triggering chunk creation via the videoProcessor.
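A condensed sketch of that watcher logic; buffer size and paths follow the description above, while the class internals are simplified and the onChunkReady callback is an assumed hand-off to the video processor:

```typescript
import chokidar from "chokidar"

// Simplified HLSMonitor: watch an event's HLS directory and hand off
// six-segment (~60 s) batches to the supplied callback, e.g. the video processor.
class HLSMonitor {
  private buffer: string[] = []

  constructor(
    private hlsPath: string,
    private onChunkReady: (segments: string[]) => Promise<void>
  ) {}

  start() {
    chokidar
      .watch(this.hlsPath, { ignoreInitial: true })
      .on("add", async (filePath) => {
        if (!filePath.endsWith(".ts")) return
        this.buffer.push(filePath)

        // 6 x 10-second segments = one 60-second chunk
        if (this.buffer.length >= 6) {
          const segments = this.buffer.splice(0, 6)
          await this.onChunkReady(segments)
        }
      })
  }
}
```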
FFmpeg Integration:
```typescript
import { promises as fs } from 'fs'

class VideoProcessor {
  private chunkCount = 0

  async createChunk(eventId: string, segments: string[]) {
    const chunkId = `${eventId}_chunk_${this.chunkCount++}`
    const outputPath = `./chunks/${chunkId}.mp4`

    // Create segment list file in FFmpeg concat-demuxer format ("file '<path>'" per line)
    const segmentList = segments.map((segment) => `file '${segment}'`).join('\n')
    await fs.writeFile(`./tmp/${chunkId}_segments.txt`, segmentList)

    // FFmpeg command for optimization: concatenate the segments, re-encode with
    // the "medium" quality preset, and enable fast-start playback
    const ffmpegArgs = [
      '-f', 'concat',
      '-safe', '0',
      '-i', `./tmp/${chunkId}_segments.txt`,
      '-c:v', 'libx264',
      '-preset', 'fast',
      '-crf', '26',
      '-c:a', 'aac',
      '-b:a', '128k',
      '-movflags', 'faststart',
      outputPath
    ]

    await this.runFFmpeg(ffmpegArgs)            // spawns ffmpeg and awaits completion
    await this.uploadChunk(chunkId, outputPath) // hands off to the upload service
  }
}
```

Encoding Presets:
```typescript
const qualityPresets = {
  high: {
    bitrate: '2000k',
    crf: 23,
    preset: 'medium'
  },
  medium: {
    bitrate: '1000k',
    crf: 26,
    preset: 'fast'
  },
  low: {
    bitrate: '500k',
    crf: 30,
    preset: 'faster'
  }
}
```

Adaptive Processing:
- CPU usage monitoring to adjust encoding presets (a selection sketch follows this list)
- File size optimization for efficient storage costs
- Format standardization for consistent playback
- Metadata preservation during transcoding
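One way the CPU-driven preset switch could look, using Node's os.loadavg() as the load signal; the thresholds are illustrative, and the returned key indexes into the qualityPresets table above:

```typescript
import os from "os"

// Pick an entry from the qualityPresets table based on current CPU load.
// The thresholds are illustrative, not production values.
function selectPreset(): "high" | "medium" | "low" {
  const normalizedLoad = os.loadavg()[0] / os.cpus().length // 1-minute load per core

  if (normalizedLoad < 0.5) return "high"   // plenty of headroom: slower preset, better quality
  if (normalizedLoad < 0.8) return "medium"
  return "low"                              // near saturation: fastest preset, smallest output
}
```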
```typescript
interface ChunkMetadata {
  chunkId: string
  cid: string
  size: number
  filcdnUrl: string
  backupUrls: string[]
  duration: number
  chunkIndex: number
  timestamp: number
  pdpVerified: boolean
  dealCount: number
  uploadReceipt: any
}
```

Start Monitoring:
```http
POST /api/livestream/start
Content-Type: application/json

{
  "eventId": "123",
  "creator": "0x...",
  "startTime": 1640995200000,
  "resolution": "1920x1080",
  "bitrate": "2000k"
}
```

Status Monitoring:
```http
GET /api/livestream/123/status
```

Response:

```json
{
  "eventId": "123",
  "status": "active",
  "totalChunks": 3,
  "uploadedChunks": 3,
  "uploadProgress": 100,
  "totalSizeMB": 150,
  "duration": 180,
  "chunks": [...],
  "playbackUrls": {...}
}
```

Stop Processing:
```http
POST /api/livestream/stop
Content-Type: application/json

{
  "eventId": "123"
}
```

```typescript
// Polling service for active events
class PollingService {
  private interval?: NodeJS.Timeout

  startPolling() {
    this.interval = setInterval(async () => {
      const { activeEvents } = lighthouseService.getState()
      for (const [eventId, eventStatus] of activeEvents.entries()) {
        if (eventStatus.status === 'active') {
          await this.updateEventStatus(eventId)
        }
      }
    }, 10000) // Poll every 10 seconds
  }
}
```

Processing Capacity:
- Single instance: Handles 10-15 concurrent streams
- CPU utilization: ~70% during peak encoding
- Memory usage: ~2GB for video buffer management
- Disk I/O: Sequential writes optimized for SSD
Network Optimization:
- Upload batching: Multiple chunks uploaded in parallel
- Retry logic: Exponential backoff for failed uploads (see the sketch after this list)
- Bandwidth management: Adaptive upload speeds based on available bandwidth
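A generic retry wrapper of the kind described, with exponential backoff between attempts (delay base and attempt count are illustrative):

```typescript
// Retry an upload with exponential backoff: 1 s, 2 s, 4 s, ... between attempts.
// Delay base and attempt count are illustrative.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err
      const delayMs = 1000 * 2 ** attempt
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
  throw new Error("unreachable")
}

// Usage: await withRetry(() => lighthouse.upload(chunkPath, apiKey))
```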
Storage Efficiency:
- Compression ratios: 60-70% size reduction through optimization
- Deduplication: Identical chunks stored once across events
- Lifecycle management: Automatic cleanup of temporary files
Filecoin Economics:
- Storage costs: ~$0.10 per GB per year
- Retrieval costs: Minimal for CDN-cached content
- Deal optimization: Batch uploads for better pricing
Stream Interruption:
- Graceful handling of mid-stream disconnections
- Partial chunk processing for incomplete segments
- Recovery mechanisms for resumed streams
- Manual intervention tools for edge cases
Key Metrics:
- Active streams being processed
- Upload queue length and processing time
- Storage usage and available capacity
- Error rates and retry statistics
- Network bandwidth utilization
haus²⁵ uses Simple Realtime Server (SRS) as the core streaming infrastructure, providing WebRTC-based live streaming with sub-second latency for real-time audience interaction.
The architecture eliminates traditional backend dependencies by generating streaming URLs deterministically on the frontend.
```mermaid
graph TB
A[Creator Browser] -->|WHIP| B[nginx Proxy]
B --> C[SRS Server]
C -->|WHEP| D[Viewer Browsers]
E[Frontend] -->|Generate URLs| E
F[Smart Contracts] -->|Event Data| E
C --> G[HLS Output]
G --> H[Storage Service]
H --> I[Filecoin]
style C fill:#ff6b6b
style E fill:#4ecdc4
style H fill:#96ceb4
style I fill:#feca57
```
Publishing (Creator):
- Frontend generates WHIP URL from `eventId`
- WebRTC connection established to SRS
- Browser captures media stream
- SRS distributes to subscribers
Viewing (Audience):
- Frontend generates WHEP URL from `eventId`
- WebRTC connection established to SRS
- SRS delivers stream to browser
- Real-time interaction enabled
SRS Configuration: Listen on port 1935, 1000 max connections, file logging with trace level. HTTP API on port 1985 with crossdomain enabled, HTTP server on 8080, RTC server on 8000 with candidate configuration.
WHIP/WHEP Support: Default vhost with RTC enabled, RTMP-to-RTC bidirectional conversion, HTTP remux for FLV streaming, HLS output with 10-second fragments and 60-second window.
Stream Monitoring:
- `GET /api/v1/streams/` - List active streams (see the polling example after these lists)
- `GET /api/v1/clients/` - Connected client information
- `POST /api/v1/clients/{id}` - Client management operations
WebRTC Endpoints:
- `POST /rtc/v1/whip/` - Publishing endpoint
- `POST /rtc/v1/whep/` - Subscription endpoint
- `OPTIONS /rtc/v1/*` - CORS preflight handling
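A small example of polling the stream list through the HTTP API; the response is typed loosely because SRS's payload shape varies by version, and the base URL is an assumption:

```typescript
// Poll the SRS HTTP API (port 1985) for currently active streams.
// The response shape is version-dependent, so it is typed loosely here.
async function listActiveStreams(apiBase = "http://localhost:1985") {
  const res = await fetch(`${apiBase}/api/v1/streams/`)
  if (!res.ok) throw new Error(`SRS API error: ${res.status}`)

  const body = (await res.json()) as { streams?: Array<{ name: string; clients: number }> }
  return body.streams ?? []
}
```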
The frontend generates all streaming URLs deterministically from the on-chain eventId, with no backend coordination:
```typescript
class StreamingService {
  generateStreamUrls(eventId: string): StreamSession {
    const sessionId = `event_${eventId}_${Date.now()}`

    return {
      // WebRTC publishing (creator)
      whipUrl: `https://room.haus25.live/rtc/v1/whip/?app=live&stream=${eventId}`,
      // WebRTC viewing (audience)
      whepUrl: `https://room.haus25.live/rtc/v1/whep/?app=live&stream=${eventId}`,
      // Fallback RTMP (OBS/streaming software)
      streamUrl: `rtmp://room.haus25.live:1935/live/${eventId}`,
      // HTTP-FLV fallback viewing
      playUrl: `https://room.haus25.live:8080/live/${eventId}.flv`,
      sessionId
    }
  }
}
```

- No backend coordination required for URL management
- Immediate availability after event creation
- Persistent URLs that work across restarts
- Simplified debugging with predictable endpoints
- Reduced infrastructure complexity and costs
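For example, once an event with id 42 exists on-chain, the same URLs can be derived anywhere, with no lookup or registration step:

```typescript
const streaming = new StreamingService()
const session = streaming.generateStreamUrls("42")

// Creator publishes to session.whipUrl (or session.streamUrl from OBS);
// ticket holders play session.whepUrl, with session.playUrl as the HTTP-FLV fallback.
console.log(session.whipUrl)
// -> https://room.haus25.live/rtc/v1/whip/?app=live&stream=42
```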
Latency Tracking:
- WebRTC typically achieves sub-500ms end-to-end latency
- SRS optimizations reduce buffering to sub-100ms
- Real-time interaction feels instantaneous for audiences
Scalability:
- Single SRS instance handles 100+ concurrent streams
- nginx proxy distributes load across multiple SRS instances
- Filecoin storage removes local storage constraints
SRS outputs HLS segments that trigger the storage service:
```mermaid
sequenceDiagram
participant SRS
participant FileSystem
participant StorageService
participant Filecoin
SRS->>FileSystem: Write .ts segments
StorageService->>FileSystem: Monitor for new files
StorageService->>StorageService: Process 60s chunks
StorageService->>Filecoin: Upload via Lighthouse
StorageService->>StorageService: Update metadata
```
Segment Aggregation:
- SRS outputs 10-second HLS segments
- Storage service combines 6 segments into 60-second chunks
- FFmpeg processes and optimizes video quality
- Filecoin storage provides permanent preservation
Stream Publishing:
- Only event creators can publish to their `eventId` stream
- Frontend validates creator status before allowing publish attempts
- SRS configuration can add IP-based restrictions if needed
Stream Viewing:
- Ticket ownership verified before generating WHEP URLs (see the sketch after this list)
- Room-level access control prevents unauthorized viewing
- WebRTC encryption provides secure transmission
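A sketch of that gating check, assuming a ticket contract that exposes a balance lookup; the viem client, RPC URL, contract ABI, and function name are placeholders, not the deployed haus²⁵ interfaces:

```typescript
import { createPublicClient, http, parseAbi } from "viem"

// Gate WHEP URL generation on ticket ownership. The RPC URL, contract ABI,
// and function name are placeholders, not the deployed haus²⁵ interfaces.
const publicClient = createPublicClient({ transport: http("https://evm-rpc.sei-apis.com") })

const ticketAbi = parseAbi([
  "function balanceOf(address owner, uint256 eventId) view returns (uint256)",
])

async function canViewStream(
  viewer: `0x${string}`,
  eventId: bigint,
  ticketContract: `0x${string}`
) {
  const balance = await publicClient.readContract({
    address: ticketContract,
    abi: ticketAbi,
    functionName: "balanceOf",
    args: [viewer, eventId],
  })
  return balance > 0n // only ticket holders receive a WHEP URL
}
```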
Connection Management:
- Reuse WebRTC connections when possible
- Implement connection pooling for multiple events
- Graceful degradation for connection failures
Bandwidth Adaptation:
- Automatic quality adjustment based on connection speed
- Manual quality selection for user preference
- Fallback to lower bitrates during congestion