Date of Award

8-28-2024

Publication Type

Thesis

Degree Name

M.A.Sc.

Department

Electrical and Computer Engineering

Keywords

3D Gaussian Splatting;Multi-View Stereo;Point Cloud Upsampling

Supervisor

Jonathan Wu

Supervisor

Ning Zhang

Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Abstract

Point cloud data has become an increasingly important representation of the real world, with applications ranging from autonomous driving to virtual reality. However, raw acquired point clouds often suffer from sparsity due to challenging environments or complex scenes. Densifying the sparse point clouds can improve object recognition accuracy and enhance 3D reconstruction quality. This thesis proposes a novel point cloud upsampling method based on 3D Gaussian Splatting (3DGS), leveraging differentiable rendering to optimize interpolated new points. Our approach begins by creating a mesh from the raw point cloud and interpolating new points within its triangles. The position of each new point is determined by weights on two edges of its containing triangle, allowing for parameterization of the point’s location. We then optimize these edge weights through the 3DGS differentiable rendering scheme. Unlike the original 3DGS, which uses sparse point clouds reconstructed by Structure from Motion (SfM), our method initializes the point cloud from a Multi-View Stereo (MVS) neural network. After optimizing the positions of the newly inserted points by training the 3DGS representation, we combine these new points with the raw input point cloud to achieve upsampling. We also investigate the impact of a depth regularization loss on this upsampling method. Experiments conducted on the DTU dataset demonstrate the effectiveness of our 3DGS-based point cloud upsampling method in terms of accuracy and completeness metrics. Compared with existing MLS-based methods, our approach shows promising improvements, reducing the completeness error by 0.018 while maintaining geometric stability. It thus offers a new direction for addressing the challenge of point cloud sparsity in 3D data processing and computer vision applications.
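The edge-weight parameterization described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the thesis's implementation: the function name, the constraint `w1 + w2 <= 1`, and the choice of v0 as the shared vertex are assumptions made for clarity.

```python
import numpy as np

def interpolate_point(v0, v1, v2, w1, w2):
    """Place a new point inside triangle (v0, v1, v2).

    The point is parameterized by weights on the two edges leaving v0:
        p = v0 + w1 * (v1 - v0) + w2 * (v2 - v0)
    With w1, w2 >= 0 and w1 + w2 <= 1, p stays inside the triangle.
    These two scalars are the kind of quantities a differentiable
    renderer such as 3DGS could optimize by gradient descent.
    """
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
    return v0 + w1 * (v1 - v0) + w2 * (v2 - v0)

# Example: equal weights of 1/3 on both edges yield the triangle centroid.
p = interpolate_point([0, 0, 0], [3, 0, 0], [0, 3, 0], 1/3, 1/3)
print(p)  # → [1. 1. 0.]
```

Because the position is a differentiable function of (w1, w2), gradients from a rendering loss can flow back to these weights while the mesh vertices stay fixed, which matches the optimization scheme the abstract outlines.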

Available for download on Wednesday, August 27, 2025
