SplineGS

Robust Motion-Adaptive Spline for Real-Time Dynamic 3D Gaussians from Monocular Video (CVPR 2025)


Jongmin Park, Minh-Quan Viet Bui, Juan Luis Gonzalez Bello, Jaeho Moon, Jihyong Oh, Munchurl Kim

Paper: https://arxiv.org/abs/2412.09982
Project website: https://kaist-viclab.github.io/splinegs-site/

Key points:

  1. COLMAP-free:
    Uses a two-stage training strategy: camera parameters are first roughly estimated, and then the camera parameters and 3DGS parameters are jointly optimized.
  2. Dynamic scenes from in-the-wild monocular videos:
    The scene is represented as the union of a static 3DGS set and a dynamic 3DGS set.
  3. Dynamic 3DGS mean:
    A spline-based model (MAS) is applied to each dynamic 3DGS mean to represent its trajectory.
    The 3D mean trajectories are initialized by unprojecting 2D tracks using the depth map and camera parameters (see the sketch after this list).
  4. Thousands of times faster than SOTA:
    More efficient than MLP-based or grid-based deformation models.
  5. Losses:
    RGB image reconstruction loss
    Depth reconstruction loss
    2D projection alignment loss
    3D alignment loss
    Motion mask loss
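
A minimal NumPy sketch of the initialization in item 3: 2D track points are back-projected with per-frame depth and lifted to world space with the estimated camera parameters. The function name, the pinhole-camera convention, and the camera-to-world matrix layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def unproject_tracks(tracks_2d, depths, K, c2w):
    """Lift per-frame 2D track points to 3D world coordinates (illustrative sketch).

    tracks_2d: (T, N, 2) pixel coordinates of N tracked points over T frames
    depths:    (T, N)    depth sampled from the monocular depth map at each track point
    K:         (3, 3)    camera intrinsics
    c2w:       (T, 4, 4) estimated camera-to-world matrices per frame
    returns:   (T, N, 3) 3D points that initialize the dynamic Gaussian mean trajectories
    """
    T, N, _ = tracks_2d.shape
    ones = np.ones((T, N, 1))
    pix_h = np.concatenate([tracks_2d, ones], axis=-1)      # (T, N, 3) homogeneous pixels
    rays = pix_h @ np.linalg.inv(K).T                        # back-project pixels to camera rays
    pts_cam = rays * depths[..., None]                       # scale rays by depth (camera frame)
    pts_cam_h = np.concatenate([pts_cam, ones], axis=-1)     # (T, N, 4) homogeneous points
    pts_world = np.einsum('tij,tnj->tni', c2w, pts_cam_h)    # camera -> world per frame
    return pts_world[..., :3]
```

With per-frame depth and estimated poses, each tracked point lifts to one 3D point per frame, which gives the rough per-Gaussian trajectory used for initialization.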

Contribution

Method

Architecture

Motion-Adaptive Spline for 3DGS

To model the mean \(\mu(t)\) of each dynamic 3D Gaussian at time \(t\) as a continuous trajectory, a cubic Hermite spline function with a set of learnable control points is used (MAS).
That is, each dynamic Gaussian has its own set of control points, and the spline curve defined by them determines the Gaussian mean \(\mu(t)\).
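
For reference, between consecutive control points \(p_k, p_{k+1}\) with tangents \(m_k, m_{k+1}\) and local parameter \(u \in [0, 1]\), the standard cubic Hermite interpolant is \(\mu(u) = (2u^3 - 3u^2 + 1)\,p_k + (u^3 - 2u^2 + u)\,m_k + (-2u^3 + 3u^2)\,p_{k+1} + (u^3 - u^2)\,m_{k+1}\). The PyTorch sketch below evaluates such a spline from learnable per-Gaussian control points; the uniform spacing of control points over normalized time and the finite-difference (Catmull-Rom style) tangents are assumptions for illustration and may differ from the paper's exact MAS formulation.

```python
import torch
import torch.nn as nn

def hermite_basis(u: torch.Tensor):
    """Cubic Hermite basis functions at local parameter u in [0, 1]."""
    u2, u3 = u * u, u * u * u
    return (2 * u3 - 3 * u2 + 1,   # h00: weight of the left control point
            u3 - 2 * u2 + u,       # h10: weight of the left tangent
            -2 * u3 + 3 * u2,      # h01: weight of the right control point
            u3 - u2)               # h11: weight of the right tangent

class SplineTrajectory(nn.Module):
    """Per-Gaussian cubic Hermite spline over learnable control points (sketch)."""

    def __init__(self, num_gaussians: int, num_ctrl: int):
        super().__init__()
        # (G, N_c, 3): one set of 3D control points per dynamic Gaussian
        self.ctrl = nn.Parameter(0.01 * torch.randn(num_gaussians, num_ctrl, 3))

    def forward(self, t: float) -> torch.Tensor:
        """Evaluate all Gaussian means mu(t) at a normalized time t in [0, 1]."""
        n_c = self.ctrl.shape[1]
        s = t * (n_c - 1)                       # position along the control polygon
        k = min(int(s), n_c - 2)                # index of the left control point
        u = torch.tensor(s - k)                 # local parameter within the segment
        p0, p1 = self.ctrl[:, k], self.ctrl[:, k + 1]
        # finite-difference (Catmull-Rom style) tangents, one-sided at the ends
        m0 = 0.5 * (self.ctrl[:, min(k + 1, n_c - 1)] - self.ctrl[:, max(k - 1, 0)])
        m1 = 0.5 * (self.ctrl[:, min(k + 2, n_c - 1)] - self.ctrl[:, k])
        h00, h10, h01, h11 = hermite_basis(u)
        return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1   # (G, 3) means at time t
```

Usage: `SplineTrajectory(num_gaussians=10000, num_ctrl=8)(0.3)` returns a `(10000, 3)` tensor of means; increasing the number of control points lets the curve follow more complex motion.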

Camera Pose Estimation

Loss

Experiment

Result

Ablation Study

For scenes with simple motion, such as the Skating scene, MACP allows most dynamic 3D Gaussians to be represented with only a minimal number of control points \(N_c\).
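
As a toy illustration of this observation (not the MACP procedure itself), the sketch below finds the smallest number of control points whose resampled trajectory stays within a tolerance, using plain linear interpolation instead of the Hermite spline; smooth, simple trajectories terminate at a very small \(N_c\).

```python
import numpy as np

def min_control_points(traj: np.ndarray, err_thresh: float = 1e-3, n_min: int = 2) -> int:
    """Smallest number of control points whose linear resampling of a trajectory
    stays within err_thresh (a toy stand-in for motion-adaptive control point counts).

    traj: (T, 3) sampled 3D positions of one Gaussian mean over time.
    """
    T = traj.shape[0]
    t = np.linspace(0.0, 1.0, T)
    for n_c in range(n_min, T + 1):
        t_ctrl = np.linspace(0.0, 1.0, n_c)
        # subsample the trajectory at the control times, then reconstruct it
        ctrl = np.stack([np.interp(t_ctrl, t, traj[:, d]) for d in range(3)], axis=-1)
        recon = np.stack([np.interp(t, t_ctrl, ctrl[:, d]) for d in range(3)], axis=-1)
        if np.abs(recon - traj).max() < err_thresh:
            return n_c
    return T

# A straight-line (simple) motion needs only the two endpoint control points:
line = np.linspace([0.0, 0.0, 0.0], [1.0, 2.0, 3.0], num=30)
print(min_control_points(line))  # -> 2
```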

Conclusion

Question