Our Dual-Stream Capture System
Alia 360° omnidirectional camera + GoPro egocentric camera — synchronized dual-stream capture, backed by 32+ patents and 8 years of production deployment.
Purpose-Built for Physical AI Data
Developed from breakthrough research at IIIT Hyderabad (published at CVPR 2016) and refined over 8 years of production deployment. The synchronized dual-stream system — Alia 360° omnidirectional plus GoPro egocentric — captures the complete spatial context that humanoid robots need. This proprietary technology creates a defensible moat: we capture data that no other company can replicate.
Complete Scene Coverage
Traditional cameras: 60-90° FOV. DreamVu: 360° — captures everything simultaneously, no blind spots, no repositioning.
Skill Transfer at Scale
When multiple humans and robots demonstrate tasks throughout an environment, one Alia captures all demonstrations happening anywhere in the space — no repositioning required.
3D Gaussian Splatting
The 360° coverage provides ideal input for photorealistic 3D reconstruction. All multimodal annotations propagate automatically from 2D frames to the 3D scene.
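One way annotations can move from 2D frames into a reconstructed 3D scene is by unprojecting each labeled pixel through the camera intrinsics and its depth value. This is a minimal sketch of that idea, not DreamVu's actual propagation pipeline; the function names and record shape are illustrative assumptions.

```python
# Unproject a labeled 2D pixel into 3D using metric depth and pinhole intrinsics.
# Illustrative sketch only -- not the production propagation pipeline.

def unproject(u, v, depth, fx, fy, cx, cy):
    """Map pixel (u, v) with metric depth to a 3D point in camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def propagate_label(mask_pixels, depth_map, intrinsics, label):
    """Attach a 2D segmentation label to the 3D points its pixels cover."""
    fx, fy, cx, cy = intrinsics
    points = [unproject(u, v, depth_map[(u, v)], fx, fy, cx, cy)
              for (u, v) in mask_pixels]
    return {"label": label, "points_3d": points}

# Example: a pixel at the principal point at 2 m depth lands on the optical axis.
record = propagate_label([(320, 240)], {(320, 240): 2.0},
                         (500.0, 500.0, 320.0, 240.0), "mug")
```

In a full pipeline the same mapping runs per frame, so every segmentation mask, pose, and affordance label acquires 3D coordinates in the reconstructed scene.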
32+ Patents
Protected omnidirectional 3D vision technology with 8 years of production deployment: a defensible competitive advantage that keeps our data capture capabilities unique.
From Real World to Training Data
Our end-to-end platform transforms real-world captures into VLA-ready training datasets — delivered in Isaac Sim, LeRobot, and Open X-Embodiment formats.
Dual-Stream Capture
Synchronized Alia 360° exocentric and GoPro egocentric cameras capture full RGB and depth in real environments
Multimodal Annotation
AI-assisted labeling (SAM2, Grounding DINO) plus human QA delivers vision, language, and action labels — 10× faster than traditional 3D annotation
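The AI-assisted step above amounts to merging model proposals with human corrections. A minimal sketch of that merge logic follows; the record schema, confidence threshold, and field names are assumptions for illustration, not the production annotation format, and no real model APIs are called.

```python
# Sketch: merge model label proposals (e.g. detections/segments in the spirit of
# Grounding DINO + SAM2 output) with human QA corrections.
# Schema and threshold are illustrative assumptions.

def merge_annotations(proposals, qa_overrides, min_conf=0.5):
    """Keep confident model proposals; human QA corrections always win."""
    merged = {}
    for p in proposals:
        if p["confidence"] >= min_conf:
            merged[p["instance_id"]] = {"label": p["label"], "source": "model"}
    for instance_id, label in qa_overrides.items():
        merged[instance_id] = {"label": label, "source": "human_qa"}
    return merged

proposals = [
    {"instance_id": 1, "label": "cup", "confidence": 0.92},
    {"instance_id": 2, "label": "bowl", "confidence": 0.31},  # dropped: low confidence
]
merged = merge_annotations(proposals, {1: "mug"})  # QA relabels instance 1
```

The speedup claimed above comes from humans only reviewing and overriding, rather than drawing every label from scratch.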
3D Reconstruction
3D Gaussian Splatting creates photorealistic scenes with all annotations preserved — ready for simulation conversion
Simulation Conversion
Automated USD export with physics properties for NVIDIA Isaac Sim
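To make "USD export with physics properties" concrete, here is a sketch that emits a USD ASCII (.usda) prim carrying rigid-body physics attributes of the kind Isaac Sim ingests. The applied schemas and attribute names follow the OpenUSD UsdPhysics conventions as we understand them; the function and its output shape are assumptions, not the real exporter.

```python
# Sketch: emit a minimal .usda prim with UsdPhysics rigid-body attributes.
# Illustrative only -- the actual automated exporter is proprietary.

def usda_rigid_body(name, mass_kg):
    """Return USD ASCII text for one rigid-body prim with a mass property."""
    return (
        '#usda 1.0\n'
        f'def Xform "{name}" (\n'
        '    prepend apiSchemas = ["PhysicsRigidBodyAPI", "PhysicsMassAPI"]\n'
        ')\n'
        '{\n'
        '    bool physics:rigidBodyEnabled = true\n'
        f'    float physics:mass = {mass_kg}\n'
        '}\n'
    )

snippet = usda_rigid_body("mug", 0.35)
```

In practice geometry, materials, and joint definitions ride along in the same stage; the point is that physics metadata is written at export time rather than added by hand in the simulator.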
Synthetic Generation
1,000+ frames/hour with domain randomization — all modalities and skill transfer demos preserved
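Domain randomization, as referenced above, means jittering scene parameters per synthetic frame so policies do not overfit to one appearance. A minimal sketch, assuming illustrative parameter names and ranges (not the production randomizer):

```python
import random

# Sketch: per-frame domain randomization over lighting, texture, and pose.
# Parameter names and ranges are illustrative assumptions.

def randomize_frame(rng):
    """Sample one frame's randomized scene parameters."""
    return {
        "light_intensity": rng.uniform(0.5, 1.5),    # relative brightness
        "texture_hue_shift": rng.uniform(-0.1, 0.1),
        "object_yaw_deg": rng.uniform(0.0, 360.0),
    }

rng = random.Random(0)  # fixed seed for reproducible batches
frames = [randomize_frame(rng) for _ in range(1000)]
```

Because each frame is an independent draw, a batch of 1,000 frames spans the whole randomization range rather than clustering around one configuration.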
Real-World Validation
Continuous verification: sim-to-real transfer rates, manipulation success, and skill transfer effectiveness
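The validation metrics above reduce to success rates over logged trials. A sketch of that bookkeeping, with a hypothetical trial-log format (the real logs and metric definitions are assumptions here):

```python
# Sketch: compute sim and real success rates from trial logs, plus the
# sim-to-real gap. Log format is a hypothetical illustration.

def success_rate(trials, key):
    """Fraction of trials where the given outcome flag is true."""
    hits = sum(1 for t in trials if t[key])
    return hits / len(trials) if trials else 0.0

trials = [
    {"sim_success": True,  "real_success": True},
    {"sim_success": True,  "real_success": False},
    {"sim_success": False, "real_success": False},
    {"sim_success": True,  "real_success": True},
]
sim_rate = success_rate(trials, "sim_success")                   # 0.75
transfer_gap = sim_rate - success_rate(trials, "real_success")   # 0.25
```

Tracking the gap continuously is what lets a data pipeline flag when synthetic frames stop transferring to the real robot.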
Vision Data
- Synchronized ego + 360° exo video with depth
- Object segmentation with instance IDs
- 6DOF object poses
- Manipulation affordances (grip types, approach vectors)
- Human and robot demonstrations in 360° view
Language Data
- QA pairs describing objects, actions, and scene elements
- Action summaries for every sequence
- Spatial relations between objects and actors
- Context descriptions for scene understanding
- Ready for VLA instruction following
Action Data
- Full trajectories for every actor in 360° scene
- Movement paths with timestamps
- Interaction sequences showing manipulation
- Demonstration labels for skill transfer
- Kinematic data where available
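The three modality groups above come together in each training sample. The following sketch shows one plausible shape for such a record; every field name is an illustrative assumption, not the delivered LeRobot or Open X-Embodiment schema.

```python
# Hypothetical shape of one vision-language-action training sample combining
# the three modality groups. Field names are illustrative assumptions.

sample = {
    "vision": {
        "ego_rgb": "ego/frame_000123.png",
        "exo_360_rgb": "exo/frame_000123.png",
        "depth": "depth/frame_000123.npy",
        "instances": [
            {"id": 7, "label": "mug", "pose_6dof": [0.4, 0.1, 0.9, 0.0, 0.0, 1.57]},
        ],
    },
    "language": {
        "instruction": "pick up the mug on the counter",
        "action_summary": "right arm reaches, grasps mug, lifts",
    },
    "action": {
        "timestamps": [0.0, 0.033, 0.066],
        "ee_trajectory": [[0.30, 0.10, 0.90], [0.33, 0.10, 0.90], [0.36, 0.11, 0.91]],
        "gripper": [1.0, 1.0, 0.0],  # 1.0 = open, 0.0 = closed
    },
}
```

Keeping the modalities in one record is what lets a VLA model learn instruction following directly: language conditions the policy, vision grounds it, and action supervises it.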