How do we carry the feel of a space (texture, light, and a sense of presence) into interactive formats without a heavy modeling phase? For cultural spaces, fidelity matters: scale, lighting continuity, and material cues need to survive the pipeline so the room reads as itself, not a stylized stand-in. This R&D explores 3D Gaussian Splats (3DGS) as a practical answer, using a capture of one gallery at the Montreal Museum of Fine Arts (MMFA) and delivering it to web, VR, and a gallery-floor installation.
What we explored
From a single on-site capture, we set out to deliver multiple outputs with minimal re-authoring. The constraints: limited time on the museum floor, discreet gear, and a pipeline efficient and robust enough to render scenes on devices ranging from mobile browsers and VR headsets to large-format displays.
Prototype approach
Capture. A synchronized three-mirrorless-camera rig; steady walk-throughs with overlapping coverage and consistent exposure.
Dataset preparation. Alignment, sparse point-cloud generation, and scaling in RealityCapture.
Training. Nerfstudio’s 3DGS pipeline with iterative monitoring as the splat emerges, followed by pruning and export for the target runtimes.
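The pruning step at the end of training can be sketched in a few lines. This is a minimal illustration, assuming splats are held as arrays of centers plus pre-sigmoid opacity logits (the layout used by the common 3DGS PLY export); `prune_splats` and the 0.05 threshold are illustrative, not our exact production routine:

```python
import numpy as np

def prune_splats(means, opacities, opacity_min=0.05):
    """Keep only Gaussians whose sigmoid opacity clears a threshold.

    means:     (N, 3) float array of splat centers
    opacities: (N,) raw (pre-sigmoid) opacity logits, as stored by
               most 3DGS exporters
    """
    alpha = 1.0 / (1.0 + np.exp(-opacities))  # sigmoid -> [0, 1]
    keep = alpha >= opacity_min
    return means[keep], opacities[keep], keep

# Toy example: three splats, one nearly transparent
means = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=np.float32)
logits = np.array([2.0, -5.0, 0.5], dtype=np.float32)  # sigmoid(-5) ~ 0.007
kept_means, kept_logits, mask = prune_splats(means, logits)
print(mask)  # the near-transparent second splat is dropped
```

The same mask can be applied to the remaining per-splat attributes (scale, rotation, SH coefficients) before export, which is where most of the size savings for the web target come from.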


Three display modes
Web. Built on PlayCanvas using the SOGS v2 format, the web version preserves the photographic quality of 3DGS at file sizes practical for desktop and mobile browsers.
Try it out here
VR. Built with Unity and Aras Pranckevičius’ 3DGS package. In-headset parallax conveys material richness and scale, creating a strong sense of presence.
Installation. A triptych of 75″ portrait displays delivers a full-scale, shareable, real-time experience. Example scenes include Redpath Museum, Notre-Dame de la Mer, Portland Japanese Garden, Royal Ontario Museum, and several Montreal alleyways.
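Across all three targets, splat count translates directly into download weight, which is what drives the compressed-format work. A hedged back-of-envelope, assuming the uncompressed 3DGS PLY layout of 59 float32 attributes per splat; the savings from SOGS-style quantized formats are scene-dependent, so the closing comment is indicative only:

```python
# Rough file-size arithmetic behind the splat budget. A splat in the
# common uncompressed 3DGS PLY layout carries 59 float32 attributes:
# 3 position + 3 scale + 4 rotation + 1 opacity + 48 SH coefficients.
FLOATS_PER_SPLAT = 3 + 3 + 4 + 1 + 48   # = 59
BYTES_PER_SPLAT = FLOATS_PER_SPLAT * 4  # float32 -> 236 bytes

def raw_size_mb(n_splats: int) -> float:
    """Uncompressed PLY size, in megabytes, for a given splat count."""
    return n_splats * BYTES_PER_SPLAT / 1e6

# A 2M-splat scene is ~472 MB raw: far too heavy for mobile delivery.
# Quantized/compressed formats cut this by roughly an order of
# magnitude (scene-dependent), which is what makes browser delivery
# practical.
print(f"{raw_size_mb(2_000_000):.0f} MB")  # -> 472 MB
```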
Ongoing R&D
Over the past few years, we’ve run parallel efforts to harden the capture → dataset creation → training → runtime path. Highlights:
- Multi-intrinsics dataset handling and calibration routines.
- Alignment strategies and sparse-cloud clean-up/optimization.
- 360° workflows, including equirectangular → perspective conversion for photogrammetry/SfM.
- Runtime constraints (web + VR): splat budgets vs. file size, culling/streaming, and engine integration (Unity, PlayCanvas).
- Extensive field captures across varied lighting, scale, and access conditions.
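The equirectangular → perspective conversion in the 360° workflow above can be sketched with plain NumPy: cast a pinhole ray per output pixel, rotate it by the virtual camera's yaw and pitch, and look up the panorama at the ray's longitude/latitude. This is an illustrative nearest-neighbour resampler under those assumptions, not our exact tooling:

```python
import numpy as np

def equirect_to_perspective(pano, fov_deg, yaw_deg, pitch_deg, out_w, out_h):
    """Sample a pinhole (perspective) view out of an equirectangular panorama.

    pano: (H, W, 3) image; yaw/pitch orient the virtual camera.
    Nearest-neighbour sampling keeps the sketch short.
    """
    H, W = pano.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)  # focal length in px
    xs = np.arange(out_w) - out_w / 2 + 0.5
    ys = np.arange(out_h) - out_h / 2 + 0.5
    x, y = np.meshgrid(xs, ys)
    # Ray directions in camera space (z forward), then rotate by pitch & yaw.
    d = np.stack([x, -y, np.full_like(x, f)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    p, yw = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(yw), 0, np.sin(yw)],
                   [0, 1, 0],
                   [-np.sin(yw), 0, np.cos(yw)]])
    d = d @ (Ry @ Rx).T
    lon = np.arctan2(d[..., 0], d[..., 2])      # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))  # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = ((0.5 - lat / np.pi) * H).astype(int).clip(0, H - 1)
    return pano[v, u]

# Example: a 90-degree forward view suitable as an SfM input frame
# view = equirect_to_perspective(pano, 90, yaw_deg=0, pitch_deg=0,
#                                out_w=960, out_h=720)
```

Sweeping yaw (and optionally pitch) over the sphere yields a set of overlapping virtual perspective frames that standard photogrammetry/SfM tools can ingest.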
For cultural spaces, 3DGS offers a pragmatic middle ground: photographic fidelity at interactive framerates, without the overhead of heavy retopology and UV workflows.