diff --git a/_pages/resume_deep_gen.html b/_pages/resume_deep_gen.html index 9ab0a48..bef38e6 100644 --- a/_pages/resume_deep_gen.html +++ b/_pages/resume_deep_gen.html @@ -5,19 +5,234 @@ - - - Ron Jailall - Resume + + + + - + + + + + + +
+ + +
+
+
+

Ron Jailall

+

Applied Research Engineer

+
+
+

Raleigh, NC | (608) 332-8605

+

rojailal@gmail.com

+

https://ironj.github.io/

+
+
+
+ + +
+

Profile

+

+ Engineer with 15+ years of experience bridging unsolved research challenges and high-impact industry applications. Expert in Volumetric World Models (Gaussian Splatting), Generative Media, and Efficient Inference. Proven ability to thrive in ambiguity, rapidly prototyping novel solutions for Video Understanding and Multimodal AI while optimizing for deployment on resource-constrained hardware. Deeply experienced in the full ML lifecycle, from dataset curation and model training to performance optimization and production deployment. +

+
+ + +
+

Core Competencies

+
+
+ Generative & Volumetric AI: + Diffusion Models, Gaussian Splatting (3D World Representations), NeRF concepts, Video Generation pipelines. +
+
+ Multimodal Understanding: + Vision Language Models (VLMs), Video Neural Search, Audio-Visual alignment, RAG pipelines. +
+
+ Model Optimization: + Efficient Inference (TensorRT, ONNX, CoreML), Quantization, Knowledge Distillation, MobileNet/Edge architectures. +
+
+ Frameworks & Engineering: + TensorFlow 2, PyTorch, JAX concepts, Python, C++, CUDA, Metal Shading Language. +
+
+
+ + +
+

Research & Technical Highlights

+ +
+
+

Volumetric World Representation & Physics Simulation (VisionOS Project)

+
+
    +
  • Physics-Aware World Modeling: Designed and implemented a volumetric renderer on VisionOS that assigns physical properties ("jiggle physics") to learned 3D Gaussian representations.
  • Research Application: Demonstrated core World Model principles by enabling static 3D reconstructions to react dynamically to environmental stimuli, simulating cause-and-effect within a learned volumetric space.
  • Efficient Inference: Optimized the rendering pipeline using custom Metal compute shaders to achieve real-time performance on mobile hardware, validating the feasibility of interactive volumetric video.
+
+ +
+
+

End-to-End Generative Media Pipeline (Matte Model)

+
+
    +
  • Dataset Curation to Deployment: Managed the full lifecycle of a human matting research project. Curated and augmented the P3M-10k dataset to improve robustness to diverse lighting conditions.
  • Model Training & Architecture: Trained a custom MobileNetV2-based architecture using TensorFlow 2, optimizing the backbone for CPU-efficient video processing.
  • Performance Optimization: Engineered the inference pipeline to run locally on consumer hardware via ONNX Runtime, replacing heavy cloud-dependent SDKs with a low-latency edge solution.
+
+ +
+
+

Video Understanding & Diffusion

+
+
    +
  • Controlled Media Generation: Accelerated Stable Diffusion models for real-time thumbnail generation and webcam re-rendering pipelines, reducing latency for live interactive video applications.
  • Video Neural Search: Prototyped neural search algorithms for massive video archives at Sonic Foundry, using segmentation and classification models to enable semantic understanding of unstructured video data.
+
+
+ + +
+

Professional Experience

+ + +
+
+

ML Engineering Consultant / Applied Research Engineer

+ 2024 – Present +
+
Remote
+

Delivering applied research and prototyping for diverse clients in GenAI and Computer Vision.

+
    +
  • Prototyping in Ambiguity: Rapidly validated and iterated on novel AI architectures, including high-speed agentic workflows (>1000 tokens/s) that require dynamic replanning and context management.
  • Multimodal Evaluations: Authored technical research on On-Device VLMs, evaluating the trade-offs between model size, quantization accuracy, and memory bandwidth for multimodal understanding on edge devices.
  • Hardware Optimization: Optimized Computer Vision models for NVIDIA Jetson platforms using TensorRT, enabling real-time multi-view tracking and sensor fusion in resource-constrained environments.
+
+ + +
+
+

Lead Engineer, AI R&D

+ 2023 – 2024 +
+
Vidable.ai | Remote
+

Led the R&D function, evaluating and implementing cutting-edge Generative Media models.

+
    +
  • Applied Research: Collaborated with PhD researchers to evaluate emerging Diffusion Models and LLMs, translating theoretical advancements into functional product prototypes.
  • Model Optimization: Modified C/C++ inference engines (llama.cpp, Stable Diffusion Turbo) to run efficiently on varied hardware targets, enabling cost-effective scaling of generative features.
  • Cross-Functional Collaboration: Worked closely with product and engineering teams to define "success" for ambiguous AI features, establishing evaluation metrics for prompt engineering and model performance.
+
+ + +
+
+

Lead Engineer

+ 2014 – 2023 +
+
Sonic Foundry | Remote
+

Engineering leadership focused on large-scale video processing and data pipelines.

+
    +
  • Video at Scale: Architected data pipelines serving the company's largest enterprise customers, handling massive volumes of audio-visual data.
  • Innovation: Founded the company's internal AI reading group to explore early applications of Deep Learning in video, fostering a culture of research and experimentation.
+
+
+ + +
+

Education & Certifications

+
+
+ NC State University | Electrical & Computer Engineering (75 Credit Hours) +
+
+ Coursera Verified Certificates: +
    +
  • Neural Networks for Machine Learning (Geoffrey Hinton) | ID: 3MJACUGZ4LMA
  • Image and Video Processing | ID: E9JX646TTS
+
+
+
+ +
+