Professional image and video processing toolkit for luxury real estate rendering, architectural visualization, and editorial post-production.
📊 Performance Dashboard | 📈 Latest Metrics
Transformation Portal v2.0.0 is the first stable release with production-ready contracts and preset governance.
Key improvements in v2.0.0:
- Versioned API contracts (schema-aligned payloads)
- Preset stability taxonomy (stable / canary / experimental) discoverable via CLI
- Service hardening, including `/ready` readiness checks
Quick discovery:

```bash
lux-depth-v2 --list-stable
lux-depth-v2 --describe-preset interior_luxury

# If console scripts aren't on PATH, run as a module:
python -m lux_depth_v2 --list-stable
python -m lux_depth_v2 --describe-preset interior_luxury
```

Install the release:

```bash
pip install "git+https://github.com/RC219805/Transformation_Portal.git@v2.0.0"
```

Context-Aware Rendering extracts architectural intelligence from construction documents (floor plans, elevations, specifications) and uses that context to inform processing decisions.
- Architectural context extraction from PDFs (room types, dimensions, materials, design style)
- Room-specific strategy derivation (kitchen, bedroom, bath, living, outdoor)
- Dimension-aware depth decisions (proportion-respecting depth logic)
- Style-consistent color decisions aligned to design language
- Document provenance: explicit linkage from construction docs → final render decisions
Docs:
- docs/CONTEXT_AWARE_RENDERING.md
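To make the idea concrete, here is a minimal, purely illustrative sketch of room-specific strategy derivation. The function name, keys, and values below are hypothetical and are NOT the package API; see docs/CONTEXT_AWARE_RENDERING.md for the real interfaces.

```python
# Hypothetical illustration only -- names and values are NOT the package API.
# Shows the shape of the idea: extracted context (room type, design style)
# maps to downstream processing decisions.

def derive_room_strategy(room_type: str, style: str) -> dict:
    """Map extracted architectural context to illustrative processing hints."""
    base = {
        "kitchen": {"depth_focus": "work surfaces", "sharpen": "high"},
        "bedroom": {"depth_focus": "soft furnishings", "sharpen": "low"},
        "bath": {"depth_focus": "fixtures", "sharpen": "medium"},
        "living": {"depth_focus": "seating volumes", "sharpen": "medium"},
        "outdoor": {"depth_focus": "landscape planes", "sharpen": "high"},
    }
    strategy = dict(base.get(room_type, {"depth_focus": "scene", "sharpen": "medium"}))
    strategy["color_language"] = style  # keep color decisions style-consistent
    return strategy
```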
Core capabilities:
- Context-aware rendering workflows (document-informed processing)
- Depth-aware enhancement (monocular depth + depth-guided processing)
- PBR Map Generation (Physically Based Rendering maps: normal, roughness, AO)
- AI-powered refinement (optional ML stack)
- Material Response technology (surface-aware finishing)
- Professional grading looks (LUT library for film/location/material aesthetics)
- TIFF workflows (high bit-depth + metadata preservation, where supported)
- Video grading workflows (FFmpeg-based pipelines)
Transformation Portal supports depth models across two tiers with different licensing and use cases.
- Depth Anything V3 (V2 commercial variant): Fully supported, production-ready
- Use for: Commercial applications, products, revenue-generating services
- Licensing: Commercial-friendly licensing
- Default: All standard presets use this tier
- Depth Anything V3.1 (DA3 1.1, CC BY-NC 4.0): Available for research/academic use only
- Use for: Academic research, benchmarking, non-profit projects
- Licensing: CC BY-NC 4.0 (non-commercial research only)
  - Enabled by: setting `non_commercial_ok=True` in `EnhanceConfig`
  - Example preset: `depth-anything-v3.1-research-m4` (Apple Silicon optimized)
Important: DA3 1.1 is prohibited for commercial use. If you plan to use these models in a commercial product or service, use the commercial DA3 V2 variants instead. See ADR-0015: DA3 1.1 Non-Commercial Research Tier for detailed governance.
```python
from transformation_portal.lux_depth_v3 import EnhanceConfig, Preset

# Non-commercial research (requires explicit opt-in)
config = EnhanceConfig(
    preset=Preset.RESEARCH_DA31_M4,
    non_commercial_ok=True,  # Acknowledge CC BY-NC 4.0 restrictions
    depth_device="mps",      # Apple Silicon
)
```

Lux Depth V3 supports multiple depth estimation backends with automatic fallback for robustness.
| Backend | Model | License | Focal Length | Metric Depth | Checkpoint Required |
|---|---|---|---|---|---|
| `da3` (default) | Depth Anything V3 | MIT | ❌ | ❌ | No (auto-download) |
| `depth_pro` | Apple Depth Pro | Apple ML Research | ✅ | ✅ | Yes (1.9 GB) |
Default (DA3):

```bash
lux-depth-v3 --input-dir ./input --output-dir ./output
```

Depth Pro (requires license acceptance):

```bash
lux-depth-v3 \
  --input-dir ./input \
  --output-dir ./output \
  --depth-backend depth_pro \
  --accept-apple-depth-pro-research-license true \
  --non-commercial-ok true
```

Python API:
```python
from pathlib import Path

from transformation_portal.lux_depth_v3 import EnhanceConfig
from transformation_portal.lux_depth_v3.orchestrator import EnhanceOrchestrator

# Using Depth Pro
config = EnhanceConfig(
    depth_backend="depth_pro",
    depth_pro_checkpoint_path="checkpoints/depth_pro.pt",
    accept_apple_depth_pro_research_license=True,
    non_commercial_ok=True,
    depth_device="cpu",
    enable_v2=False,
)
orchestrator = EnhanceOrchestrator(config, Path("./output"))
```

If the requested backend is unavailable (missing checkpoint or dependencies), the system automatically falls back to DA3 and logs a warning. This ensures robustness in production environments.
All processing manifests include backend selection metadata:
- `requested_backend`: the backend the user requested
- `resolved_backend`: the backend actually used
- `resolution_status`: `"success"` or `"fallback"`
- `resolution_reason`: explanation if fallback occurred
See ADR-019: Backend Registry Integration for architectural details.
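As a sketch of how this metadata can be consumed, the helper below turns the four backend-resolution fields into a one-line log message. It assumes only that the manifest is a flat dict/JSON object carrying those keys; it is not part of the package.

```python
# Sketch (assumption: the manifest is a flat dict with the four
# backend-resolution keys described above).

def summarize_backend_resolution(manifest: dict) -> str:
    """Produce a one-line log message describing backend resolution."""
    if manifest.get("resolution_status") == "fallback":
        return (
            f"requested {manifest['requested_backend']!r} but fell back to "
            f"{manifest['resolved_backend']!r}: "
            f"{manifest.get('resolution_reason', 'no reason recorded')}"
        )
    return f"backend {manifest.get('resolved_backend')!r} resolved successfully"
```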
Enable processing of RAW camera files (CR2, NEF, ARW, DNG, etc.) from professional cameras.
Installation:

```bash
pip install rawpy

# Or install with the RAW extras group:
pip install -e ".[raw]"
```

Supported RAW Formats:
- Canon (CR2, CRW), Nikon (NEF, NRW), Sony (ARW, SRF, SR2)
- Adobe DNG, Olympus ORF, Fujifilm RAF, Pentax PEF
- Panasonic RW2, Phase One IIQ, Hasselblad 3FR
Usage:

```bash
# Process RAW files just like standard images
lux-depth-v3 --input-dir ./raw_images --output-dir ./output

# RAW files are automatically detected and converted to RGB
# High-quality settings: camera white balance, full resolution, sRGB color space
```

Technical Details:
- RAW → RGB conversion uses LibRaw via rawpy
- Default settings: camera white balance, full resolution, AHD demosaic
- Output: 8-bit sRGB (standard pipeline input)
- Graceful fallback: clear error message if rawpy not installed
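The detection-and-fallback behavior can be sketched as below. This is an illustrative helper, not the package's internal code; it assumes RAW inputs are recognized by file extension (the set shown covers the formats listed above).

```python
from pathlib import Path

# Assumption: RAW inputs are recognized by extension (subset of LibRaw formats).
RAW_EXTENSIONS = {
    ".cr2", ".crw", ".nef", ".nrw", ".arw", ".srf", ".sr2",
    ".dng", ".orf", ".raf", ".pef", ".rw2", ".iiq", ".3fr",
}

def is_raw_file(path: str) -> bool:
    """Case-insensitive check for RAW camera formats."""
    return Path(path).suffix.lower() in RAW_EXTENSIONS

def require_rawpy_for(path: str) -> None:
    """Fail with a clear message when a RAW file needs the optional rawpy dep."""
    if is_raw_file(path):
        try:
            import rawpy  # noqa: F401  (optional dependency)
        except ImportError as exc:
            raise RuntimeError(
                f"{path} is a RAW file; install rawpy (pip install rawpy) to process it"
            ) from exc
```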
Apple's Depth Pro model for metric depth estimation. Experimental tier - for research and evaluation only.
Installation:

```bash
pip install depth-pro
```

Checkpoint download (1.9 GB):

```bash
mkdir -p checkpoints
curl -L https://ml-site.cdn-apple.com/models/depth-pro/depth_pro.pt -o checkpoints/depth_pro.pt
```

License Requirements (research-only):
Depth Pro uses the Apple Machine Learning Research License (AMLR), which restricts usage to non-commercial research only. To use Depth Pro, you must explicitly acknowledge both:
```python
from transformation_portal.lux_depth_v3 import EnhanceConfig

config = EnhanceConfig(
    depth_backend="depth_pro",
    non_commercial_ok=True,                        # Required: acknowledge non-commercial use
    accept_apple_depth_pro_research_license=True,  # Required: accept the Apple AMLR license
    depth_device="mps",                            # Apple Silicon (or "cpu" for fallback)
)
```

The following uses are not permitted:

- Commercial products or services
- Revenue-generating applications
- Paid client work

See the Apple AMLR license for full terms.
Presets:
- `depth_pro_metric_mps.yaml` - Apple Silicon optimized
- `depth_pro_metric_cpu.yaml` - CPU fallback
Hardware Requirements:
- Optimized for Apple Silicon (MPS device)
- Fallback to CPU supported
- Memory: ~2 GB for model + checkpoint
Tier Status: Experimental - use at your own risk. Default backend remains Depth Anything V3.
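Given the hardware notes above, a small helper can choose the device at runtime: prefer Apple Silicon's MPS backend when PyTorch reports it as available, otherwise fall back to CPU. This is a sketch, not part of the package.

```python
# Sketch of a device-selection helper: prefer Apple Silicon's MPS backend
# when PyTorch reports it, otherwise fall back to CPU.

def pick_depth_device() -> str:
    """Return "mps" on Apple Silicon with an MPS-enabled PyTorch, else "cpu"."""
    try:
        import torch
        if torch.backends.mps.is_available():
            return "mps"
    except (ImportError, AttributeError):
        pass  # no torch, or a build without MPS support
    return "cpu"
```

The result can be passed directly as `depth_device=pick_depth_device()` when building an `EnhanceConfig`.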
New in v2.0: Standalone PBR processor for generating Physically Based Rendering maps from depth data.
Generate PBR maps from existing depth:

```python
from transformation_portal.lux_depth_v3 import PBRProcessor, get_preset

# Use the premium quality preset
config = get_preset("premium").to_pbr_config()

# Generate from cached depth (2.3x faster than the full pipeline)
paths = PBRProcessor.from_cached_depth(
    depth_path="output/scene1_depth.npy",
    config=config,
    output_dir="output/pbr/",
    base_name="scene1",
)
# Output: scene1_normal.png, scene1_roughness.png, scene1_ao.png
```

Use PBRProcessor (standalone) when:
- You already have depth maps and only need PBR
- Iterating on PBR parameters (2.3x faster than re-running depth)
- Integrating PBR into custom workflows
- Processing depth from external sources
Use Orchestrator (full pipeline) when:
- Starting from RGB images (need depth estimation)
- Running complete enhancement workflow
- Need depth + PBR + V2 enhancement in one pass
Quality Tiers:
- `standard` - Balanced quality/speed (typical batch processing)
- `premium` - Maximum quality (hero shots, marketing)
- `draft` - Fast preview (internal review)
Material-Optimized:
- `wood` - Emphasizes grain texture
- `metal` - Lower roughness for polished surfaces
- `glass` - Heavy smoothing for flat surfaces
- `stone` - High detail for texture
- `fabric` - Moderate parameters for textiles
- PBR-only workflow: ~3,000 images/hour (vs ~1,277 for full pipeline)
- Memory-only mode: No file I/O overhead
- Iterative tuning: 2x faster when testing multiple presets
See PBR Processor Quick Start for detailed guide.
- Clone (recommended for development / local ops)

```bash
git clone https://github.com/RC219805/Transformation_Portal.git
cd Transformation_Portal
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
```

- Install (choose your environment)

Option A - minimal runtime:

```bash
pip install -r requirements.txt
pip install -e .
```

Option B - runtime + tests (CI-like):

```bash
pip install -r requirements-ci.txt
pip install -e .
```

Option C - full dev environment:

```bash
pip install -r requirements-dev.txt
pip install -e .
```

- Verify installation

```bash
python verify_core.py
```

This repo uses two layers:
- Convenience pinned files at repo root: `requirements.txt`, `requirements-ci.txt`, `requirements-dev.txt`, `requirements-lint.txt`
- Source-of-truth layered inputs in `requirements/` for maintainers:
```text
requirements/
├── base.in   # Core runtime deps (human-editable)
├── base.txt  # Compiled/pinned
├── ml.in     # ML/AI deps (human-editable)
├── ml.txt    # Compiled/pinned
├── dev.in    # Dev deps (human-editable)
├── dev.txt   # Compiled/pinned
├── ci.in     # CI/test deps (human-editable)
└── ci.txt    # Compiled/pinned
```
If you update `.in` files, recompile and commit both the `.in` and `.txt` outputs:

```bash
cd requirements/
make compile
```

Repository layout:

```text
assets/        # LUTs, branding, look assets
config/        # YAML presets and configuration
docs/          # Architecture, guides, reports
examples/      # Usage examples
requirements/  # Layered dependency sources (pip-tools style)
scripts/       # Operational scripts / pipeline runners
src/           # Installable package source
tests/         # pytest suite
tools/         # Dev/ops tools (manifests, audits, utilities)
workflows/     # Workflow artifacts / operational workflow utilities
```
Standard Image Formats:

- PNG, JPEG (`.jpg`, `.jpeg`)
- TIFF (`.tif`, `.tiff`)
- WebP, BMP (case-insensitive)

RAW Camera Formats (requires rawpy - optional):

- Canon: `.cr2`, `.crw`
- Nikon: `.nef`, `.nrw`
- Sony: `.arw`, `.srf`, `.sr2`
- Adobe: `.dng` (Digital Negative)
- Olympus: `.orf`
- Fujifilm: `.raf`
- Pentax: `.pef`
- Panasonic: `.rw2`
- Phase One, Hasselblad, and more
To enable RAW support:

```bash
pip install rawpy

# Or install with optional extras:
pip install -e ".[raw]"
```

Video:
- MP4, MOV, AVI, MKV (codec/container dependent)
- HDR pipelines supported where FFmpeg metadata and filters allow (PQ/HLG workflows)
- Python: 3.11+
- FFmpeg: 6+ (for video workflows)
- Hardware: CPU-only supported; GPU/Apple Silicon acceleration optional depending on pipeline
CI note:
- Core tests run on Python 3.11 and 3.12
- ML tests run on Python 3.11
- Lint runs on Python 3.12
Fast local run (mirrors the CI core suite):

```bash
pytest -v tests/ -ra -m "not ml and not slow" --maxfail=1
```

ML tests (requires ML extras):

```bash
pytest -v tests/ -ra -m "ml and not slow" --maxfail=1
```

All tests except slow:

```bash
pytest -v tests/ -ra -m "not slow" --maxfail=1
```

Make targets may also be available (see the Makefile):

```bash
make test-fast
make test-full
make ci
```

Transformation Portal includes automated performance regression detection via the APEX Performance Observability Platform (integrated in CI) and the legacy Performance Ledger tool (for historical analysis).
Current Status (Phase 1): Shadow mode with synthetic data (informational only, non-blocking)
The APEX system runs automatically on every PR with:
- V1 vs V2 performance comparison (workflow baseline)
- Per-zone performance heatmaps (deployment topology awareness)
- Worst offenders detection (pinpoint regressions)
- Gate verdict reporting (pass/warn/fail with explanations)
- See `.github/workflows/apex_performance.yml`
Phase 1 Configuration (Current):
- Mode: Shadow (reports but does not block)
- Data: Synthetic (dry-run mode validates contracts/schema)
- Purpose: Validate APEX infrastructure before real integration
Future (Phase 2 - Real Pipeline Integration): Once ML dependencies (torch/transformers, ~5GB) and model caching are deployed:
- Mode: Enforce (blocks merges on violations)
- Data: Real pipeline execution (actual performance measurements)
- Thresholds (to be calibrated from Phase 2 baseline):
- p95 > 10% worse: Tail latency regression (blocks)
- mean > 15% worse: Average performance regression (blocks)
- failure_rate > 0%: Any new failures (blocks)
Why Phased Rollout:
- Phase 1 validates data contracts and reporting without ML overhead
- Phase 2 adds real measurements and enforcement once infrastructure is ready
- Prevents false failures during scaffold validation phase
See APEX Real Pipeline Integration Plan and ADR-024 for details.
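The Phase 2 thresholds above can be sketched as a small gate function. The fail thresholds mirror the listed values; the warn margins are assumptions (actual values will be calibrated from the Phase 2 baseline), and this is not the APEX implementation.

```python
# Sketch of the Phase 2 gate logic using the thresholds above. The warn
# margins are assumptions; calibrated values will come from the Phase 2 baseline.

def gate_verdict(baseline: dict, current: dict) -> str:
    """Return "pass", "warn", or "fail" for current metrics vs. a baseline."""
    p95_delta = (current["p95"] - baseline["p95"]) / baseline["p95"]
    mean_delta = (current["mean"] - baseline["mean"]) / baseline["mean"]
    if current["failure_rate"] > 0:
        return "fail"  # any new failures block
    if p95_delta > 0.10 or mean_delta > 0.15:
        return "fail"  # tail/average latency regression blocks
    if p95_delta > 0.05 or mean_delta > 0.075:
        return "warn"  # assumed halfway margins for a warning band
    return "pass"
```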
For local analysis and historical baselines:

```bash
python tools/performance_ledger.py \
  --manifests-dir output/prod_run/manifests \
  --output docs/performance/baselines/v2.1.0-baseline.json \
  --version "v2.1.0" \
  --backend "da3" \
  --quality-tier "standard"
```

Compare a run against a baseline:

```bash
python tools/performance_ledger.py \
  --baseline docs/performance/baselines/v2.0.0-post-pr841.json \
  --compare output/test_run/manifests \
  --output perf_report.md
```

Exit codes:

- `0`: No regressions detected
- `1`: Regressions detected (blocks merge)
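The exit-code contract makes the ledger easy to wrap in a CI script. The sketch below uses a `python -c` stand-in command so it runs anywhere; substitute the real `performance_ledger.py` invocation in practice.

```python
import subprocess
import sys

# Sketch: wrap the ledger's exit-code contract. A `python -c` stand-in
# replaces the real performance_ledger.py invocation so this is self-contained.

def run_regression_check(cmd: list[str]) -> bool:
    """Return True when no regressions are detected (exit code 0)."""
    result = subprocess.run(cmd)
    return result.returncode == 0

if __name__ == "__main__":
    clean = run_regression_check([sys.executable, "-c", "raise SystemExit(0)"])
    print("no regressions" if clean else "regressions detected - blocking merge")
```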
See Performance Monitoring Guide and ADR-024 for details.
📖 Start with: DOCUMENTATION_MAP.md
The Documentation Map is your single source of truth for finding guides, references, and technical documentation.
- DOCUMENTATION_MAP.md - Complete documentation index
- API Documentation - Full API reference (Sphinx)
- SETUP_GUIDE.md - Detailed installation
- ARCHITECTURE.md - System architecture
- CONTRIBUTING.md - How to contribute
- API Reference: docs/api/
- Pipelines: docs/pipeline/
- CI/CD: docs/ci/
- Troubleshooting: docs/TROUBLESHOOTING.md
- Lux Depth V3 CLI Guide: docs/LUX_DEPTH_V3_CLI_GUIDE.md
- Lux Depth V3 Troubleshooting: docs/LUX_DEPTH_V3_TROUBLESHOOTING.md
Professional use permitted with attribution.
Component licenses:
- Pipeline code: proprietary with attribution requirements
- Depth Anything V3 (commercial variant): Commercial-friendly licensing
- Depth Anything V3.1 (DA3 1.1): CC BY-NC-4.0 (non-commercial research only)
- ⚠️ LUT collection: attribution required
Author: Richard Cheetham
Brand: Carolwood Estates · RACLuxe Division
Email: info@racluxe.com
Resources:
- GitHub Issues: bug reports and feature requests
- Documentation: docs/
- Examples: examples/
Last Updated: 2026-01-31